Data Protection and Privacy: Enforcing Rights in a Changing World

Table of contents:
Preface
Contents
List of Contributors
1. The Norm Development of Digital Sovereignty between China, Russia, the EU and the US: From the Late 1990s to the COVID Crisis 2020/21 as Catalytic Event
I. Introduction
II. Why Bother to Contextualise Digital Sovereignty?
III. Norm Emergence from 1998 to 2012: Pleas for Digital Sovereignty as a Reaction to US Digital Hegemony
IV. Norm Cascade 2013–2020: The Institutionalisation of Digital Sovereignty
V. Norm universalisation in the US and the EU: 2016–2020
VI. Norm Internalisation from 2020 on: Digital sovereignty as
VII. Conclusion and Discussion. Contestation, variation and multilateralisation of digital sovereignty
References
2. Artountability: Art and Algorithmic Accountability
I. Introduction
II. Algorithmic Accountability
III. Transparency
IV. The Role of Art in Promoting Algorithmic Accountability and Transparency
V. Discussion and Conclusion
References
3. Expectations of Privacy: The Three Tests Deployed by the European Court of Human Rights
I. Introduction
II. Test 1: Reasonable Expectation (as per the question of whether the applicant can invoke the right to privacy Article 8 § 1)
III. Test 2: Reasonable Foreseeability (as per the 'in accordance with the law' requirement Article 8 § 2)
IV. Test 3: Legitimate Expectation (as per the 'necessary in a democratic society' requirement Article 8 § 2)
V. Conclusion
References
4. Multistakeholderism in the Brazilian General Data Protection Law: History and Learnings
I. Introduction
II. Multistakeholderism in the Form of LGPD: Multiple Hands and Different Policy Spaces
III. Multistakeholderism in the Content of LGPD
IV. Conclusion
References
5. The Dual Function of Explanations: Why Computing Explanations is of Value
I. Introduction
II. The Dual Function of Explanations
III. PLEAD's Approach to Explanation Automation
IV. Conclusion
References
6. COVID-19 Pandemic and GDPR: When Scientific Research becomes a Component of Public Deliberation
I. Introduction
II. Scientific Knowledge: A Basis for Public Deliberation
III. A European Strategy for Data: The Open Science Institutionalisation
IV. The Convergence of Data Protection and the Free Flow of Knowledge
V. Some Issues Still Open for Data Protection in Scientific Research and in the EOSC
VI. Conclusion
References
7. The Pandemic Crisis as Test Case to Verify the European Union's Personal Data Protection System Ability to Support Scientific Research
I. Introduction
II. The Meaning of Personal Health Data
III. Legal Basis for Processing Health Personal Data
IV. Management of Personal Data for Scientific Research Purposes Related to the Covid-19 Pandemic: Transparency, Information, Retention Period
V. Criticisms of the Application of the GDPR
VI. Conclusion
References
8. Data Protection Law and the EU Digital COVID Certificate Framework
I. Introduction
II. From pre-COVID Health Certificates and Immunity Passports to the EU Digital COVID Certificate Framework
III. The Interplay between the Regulations and EU Data Protection Law
IV. Broader Legal, Societal and Ethical Considerations
V. Conclusion
References
9. The DPIA: Clashing Stakeholder Interests in the Smart City?
I. Introduction
II. Fundamentals
III. Methodology
IV. Analysis
V. Discussion
VI. Conclusion
Appendix
References
10. Solidarity – ‘The Power of the Powerless’: Closing Remarks of the European Data Protection Supervisor
References
Index
DATA PROTECTION AND PRIVACY

This book brings together papers that offer conceptual analyses, highlight issues, propose solutions, and discuss practices regarding privacy, data protection and enforcing rights in a changing world. It is one of the results of the 14th annual International Conference on Computers, Privacy and Data Protection (CPDP), which took place online in January 2021. The pandemic has produced deep and ongoing changes in how, when, why, and the media through which, we interact. Many of these changes correspond to new approaches in the collection and use of our data – new in terms of scale, form, and purpose. This raises difficult questions as to which rights we have, and should have, in relation to such novel forms of data processing, the degree to which these rights should be balanced against other pressing social interests, and how these rights should be enforced in light of the fluidity and uncertainty of circumstances. This interdisciplinary book has been written at a time when the scale and impact of data processing on society – on individuals as well as on social systems – is becoming ever starker. It discusses open issues as well as daring and prospective approaches and is an insightful resource for readers with an interest in computers, privacy and data protection.

Computers, Privacy and Data Protection
Previous volumes in this series (published by Springer)

2009 Reinventing Data Protection? Editors: Serge Gutwirth, Yves Poullet, Paul De Hert, Cécile de Terwangne, Sjaak Nouwt. ISBN: 978-1-4020-9497-2 (Print), 978-1-4020-9498-9 (Online)

2010 Data Protection in a Profiled World. Editors: Serge Gutwirth, Yves Poullet, Paul De Hert. ISBN: 978-90-481-8864-2 (Print), 978-90-481-8865-9 (Online)

2011 Computers, Privacy and Data Protection: An Element of Choice. Editors: Serge Gutwirth, Yves Poullet, Paul De Hert, Ronald Leenes. ISBN: 978-94-007-0640-8 (Print), 978-94-007-0641-5 (Online)

2012 European Data Protection: In Good Health? Editors: Serge Gutwirth, Ronald Leenes, Paul De Hert, Yves Poullet. ISBN: 978-94-007-2902-5 (Print), 978-94-007-2903-2 (Online)

2013 European Data Protection: Coming of Age. Editors: Serge Gutwirth, Ronald Leenes, Paul De Hert, Yves Poullet. ISBN: 978-94-007-5184-2 (Print), 978-94-007-5170-5 (Online)

2014 Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges. Editors: Serge Gutwirth, Ronald Leenes, Paul De Hert. ISBN: 978-94-007-7539-8 (Print), 978-94-007-7540-4 (Online)

2015 Reforming European Data Protection Law. Editors: Serge Gutwirth, Ronald Leenes, Paul De Hert. ISBN: 978-94-017-9384-1 (Print), 978-94-017-9385-8 (Online)

2016 Data Protection on the Move: Current Developments in ICT and Privacy/Data Protection. Editors: Serge Gutwirth, Ronald Leenes, Paul De Hert. ISBN: 978-94-017-7375-1 (Print), 978-94-017-7376-8 (Online)

2017 Data Protection and Privacy: (In)visibilities and Infrastructures. Editors: Ronald Leenes, Rosamunde van Brakel, Serge Gutwirth, Paul De Hert. ISBN: 978-3-319-56177-6 (Print), 978-3-319-50796-5 (Online)

Previous titles in this series (published by Hart Publishing)

2018 Data Protection and Privacy: The Age of Intelligent Machines. Editors: Ronald Leenes, Rosamunde van Brakel, Serge Gutwirth, Paul De Hert. ISBN: 978-1-509-91934-5 (Print), 978-1-509-91935-2 (EPDF), 978-1-509-91936-9 (EPUB)

2019 Data Protection and Privacy: The Internet of Bodies. Editors: Ronald Leenes, Rosamunde van Brakel, Serge Gutwirth, Paul De Hert. ISBN: 978-1-509-92620-6 (Print), 978-1-509-92621-3 (EPDF), 978-1-509-92622-0 (EPUB)

2020 Data Protection and Privacy: Data Protection and Democracy. Editors: Ronald Leenes, Dara Hallinan, Serge Gutwirth, Paul De Hert. ISBN: 978-1-509-93274-0 (Print), 978-1-509-93275-7 (EPDF), 978-1-509-93276-4 (EPUB)

2021 Data Protection and Privacy: Data Protection and Artificial Intelligence. Editors: Dara Hallinan, Ronald Leenes, Paul De Hert. ISBN: 978-1-509-94175-9 (Print), 978-1-509-94176-6 (EPDF), 978-1-509-94177-3 (EPUB)

2022 Data Protection and Privacy: Enforcing Rights in a Changing World. Editors: Dara Hallinan, Ronald Leenes, Paul De Hert. ISBN: 978-1-509-95451-3 (Print), 978-1-509-95453-7 (EPDF), 978-1-509-95452-0 (EPUB)


Data Protection and Privacy
Enforcing Rights in a Changing World

Edited by

Dara Hallinan, Ronald Leenes and Paul De Hert

HART PUBLISHING
Bloomsbury Publishing Plc
Kemp House, Chawley Park, Cumnor Hill, Oxford, OX2 9PH, UK
1385 Broadway, New York, NY 10018, USA
29 Earlsfort Terrace, Dublin 2, Ireland

HART PUBLISHING, the Hart/Stag logo, BLOOMSBURY and the Diana logo are trademarks of Bloomsbury Publishing Plc.

First published in Great Britain 2022
Copyright © The editors and contributors severally 2022

The editors and contributors have asserted their right under the Copyright, Designs and Patents Act 1988 to be identified as Authors of this work.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

While every care has been taken to ensure the accuracy of this work, no responsibility for loss or damage occasioned to any person acting or refraining from action as a result of any statement in it can be accepted by the authors, editors or publishers.

All UK Government legislation and other public sector information used in the work is Crown Copyright ©. All House of Lords and House of Commons information used in the work is Parliamentary Copyright ©. This information is reused under the terms of the Open Government Licence v3.0 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3) except where otherwise stated. All Eur-lex material used in the work is © European Union, http://eur-lex.europa.eu/, 1998–2022.

A catalogue record for this book is available from the British Library.
A catalogue record for this book is available from the Library of Congress.
Library of Congress Control Number: 2021949615

ISBN: HB: 978-1-50995-451-3
ePDF: 978-1-50995-453-7
ePub: 978-1-50995-452-0

Typeset by Compuscript Ltd, Shannon

To find out more about our authors and books visit www.hartpublishing.co.uk. Here you will find extracts, author information, details of forthcoming events and the option to sign up for our newsletters.

PREFACE

It is the middle of 2021 as we write this preface. Year on year, data protection seems to grow in importance. As reliance on information processing increases, there is a parallel growth in social systems built around information processing. As a result, the significance of laws which regulate the degree to which, and conditions under which, individuals might be enmeshed within these systems also grows. One need look no further than the COVID-19 pandemic for an example of this dynamic. The pandemic has spurred the development of novel approaches in data processing, which have in turn furnished us with novel social possibilities. In relation to each of these possibilities, however, concerns are also raised as to the impact on individual and group rights and societal interests. Data protection often provides the legal frame for these discussions.

Whilst data protection discussions apparently expand in scope and depth, however, the future of data protection remains undetermined. It is not always clear precisely where the lines of legitimacy and illegitimacy should lie in relation to novel personal data processing activities. Often, there are multiple perspectives on questions of legitimacy, each with a unique view on how data processing might cause harm or provide opportunities. Each of these perspectives points toward different interpretations of existing concepts and principles in data protection law, and toward different solutions in terms of the introduction, and conditions, of new legislative approaches.

It is against this background that the international privacy and data protection crowd gathered for the fourteenth time to participate in the international Computers, Privacy and Data Protection Conference (CPDP) – from 27 to 29 January 2021. An audience of over 1,000 people had the chance to discuss a wide range of contemporary topics and issues with over 400 speakers in more than 80 panels. Owing to the pandemic, CPDP took place, for the first time, online. This meant that the usual in-person socialising during the breaks, side events and at ad-hoc dinners and pub crawls could not happen. Instead, however, CPDP offered a special online networking platform where participants could engage and recreate some of the feel of the physical event. Striving for diversity and balance, CPDP gathers academics, lawyers, practitioners, policymakers, computer scientists and civil society from all over the world to exchange ideas and discuss the latest emerging issues and trends. This unique multidisciplinary formula has served to make CPDP one of the leading data protection and privacy conferences in Europe and around the world.

The conference was again a hive of activity and discussion. Conversations naturally dealt with data protection law. However, conversations also addressed much broader themes. Amongst these themes, the impact of the pandemic on data processing practices, AI, and international transfers featured prominently, as did discussions on the enforcement of rights in a changing world – the theme of the conference.

The conference is definitely the place to be, but we are also happy to produce a tangible spin-off every year: the CPDP book. Papers in the book are cited frequently and the series has a broad readership. The book cycle starts with a call for papers in the summer preceding the conference. Submissions are then peer reviewed and authors whose papers are accepted present their work in the various academic panels at the conference. After the conference, speakers are also invited to submit papers based on panel discussions. All papers submitted on the basis of both calls are then (again) double-blind peer reviewed. This year, we received 13 papers in the second round of submissions, of which nine were accepted for publication. It is these nine papers that are found in this volume, complemented by the conference closing speech traditionally given by the EDPS (Wojciech Wiewiórowski).

The conference addressed numerous topics concerning privacy and data protection in its 80-plus panels. These naturally included panels dealing with the enforcement of rights in a changing world, but also included panels dealing with numerous other topics, such as racial justice, democratic oversight of police uses of surveillance technologies, consent, and children’s privacy. The conference covered far too many topics to list them all here. For more information, we refer the interested reader to the conference website: www.cpdpconferences.org. Whilst extensive, the current volume offers but a fraction of what is available across the whole conference. Nevertheless, the editors consider the volume to consist of an important set of chapters addressing both contemporary and prospective privacy and data protection issues.

All the chapters in this book have been peer reviewed and commented on by at least two referees with expertise and interest in the relevant subject matter. Since their work is crucial for maintaining the scientific quality of the book, we would like to take this opportunity to thank all the CPDP reviewers for their commitment and efforts: Ala’A Al-Momani; Alessandra Calvi; Alessandro Acquisti; Alessandro Mantelero; Andres Chomczyk Penedo; Andrew Adams; Arnold Roosendaal; Bart Preneel; Bianca-Ioana Marcu; Bo Zhao; Bram Visser; Chris Hoofnagle; Dennis Hirsch; Diana Dimitrova; Edoardo Celeste; Georgios Bouchagiar; Gergely Biczók; Gianclaudio Malgieri; Giovanni Livraga; Hideyuki Matsumi; Ian Brown; Inge Graef; Iraklis Symeonidis; Irene Kamara; István Böröcz; Ivan Szekely; Joseph Savirimuthu; Katerina Demetzou; Kristina Irion; Lina Jasmontaite; Linnet Taylor; Magda Brewczynska; Mara Paun; Maria Grazia Porcedda; Marit Hansen; Massimo Durante; Michael Veale; Nicholas Martin; Oliver Vettermann; Robin Pierce; Rosamunde Van Brakel; Sascha van Schendel; Silvia De Conca; Simone Casiraghi; Stefano Fantin; Stephanie von Maltzan; Tal Zarsky; Tineke Broer. We would also like to thank Malika Meursing for her tireless work in bringing the book together.

As is now customary, the conference concluded with some closing remarks from the European Data Protection Supervisor, Wojciech Wiewiórowski – appointed as Supervisor on 5 December 2019. We are very happy and honoured that he was willing to continue the tradition and we look forward to more of his closing speeches in future.

Dara Hallinan, Ronald Leenes & Paul De Hert
7 July 2021


CONTENTS

Preface .......... vii
List of Contributors .......... xiii
1. The Norm Development of Digital Sovereignty between China, Russia, the EU and the US: From the Late 1990s to the COVID Crisis 2020/21 as Catalytic Event .......... 1
   Johannes Thumfart
2. Artountability: Art and Algorithmic Accountability .......... 45
   Peter Booth, Lucas Evers, Eduard Fosch Villaronga, Christoph Lutz, Fiona McDermott, Piera Riccio, Vincent Rioux, Alan M Sears, Aurelia Tamò-Larrieux and Maranke Wieringa
3. Expectations of Privacy: The Three Tests Deployed by the European Court of Human Rights .......... 67
   Bart Van der Sloot
4. Multistakeholderism in the Brazilian General Data Protection Law: History and Learnings .......... 97
   Bruno Bioni and Mariana Rielli
5. The Dual Function of Explanations: Why Computing Explanations is of Value .......... 127
   Niko Tsakalakis, Sophie Stalla-Bourdillon, Laura Carmichael, Dong Huynh, Luc Moreau and Ayah Helal
6. COVID-19 Pandemic and GDPR: When Scientific Research becomes a Component of Public Deliberation .......... 157
   Ludovica Paseri
7. The Pandemic Crisis as Test Case to Verify the European Union’s Personal Data Protection System Ability to Support Scientific Research .......... 187
   Valentina Colcelli
8. Data Protection Law and the EU Digital COVID Certificate Framework .......... 211
   Daniela Dzurakova (née Galatova) and Olga Gkotsopoulou

9. The DPIA: Clashing Stakeholder Interests in the Smart City? .......... 245
   Laurens Vandercruysse, Michaël Dooms and Caroline Buts
10. Solidarity – ‘The Power of the Powerless’: Closing Remarks of the European Data Protection Supervisor .......... 285
   Wojciech Wiewiórowski
Index .......... 289

LIST OF CONTRIBUTORS

Bruno Bioni is a PhD candidate in Commercial Law at the Faculty of Law of the University of São Paulo, Brazil.
Peter Booth is Associate Professor at BI Norwegian Business School, Norway.
Caroline Buts is Professor of Economics at the Department of Applied Economics of the Vrije Universiteit Brussel, Belgium.
Laura Carmichael is Research Fellow at Southampton Law School, University of Southampton, UK.
Valentina Colcelli is Researcher at the Italian National Research Council.
Michaël Dooms is Associate Professor of Strategic Management at the Department of Business of the Vrije Universiteit Brussel, Belgium.
Daniela Dzurakova (née Galatova) is a PhD researcher at the Faculty of Law of the Pan-European University, Slovakia. In addition to her PhD research, she works as a legal assistant for the European Commission. The article includes information exclusively from her PhD activities and does not represent any official position stemming from her work as a civil servant.
Lucas Evers leads the Wetlab at the Waag Technology & Society Foundation, the Netherlands.
Eduard Fosch Villaronga is Assistant Professor at the eLaw Centre for Law and Digital Technologies, Leiden University, the Netherlands.
Olga Gkotsopoulou is a PhD researcher at the Law, Science, Technology and Society (LSTS) Research Group at Vrije Universiteit Brussel, Belgium.
Ayah Helal is Research Associate at the Department of Informatics, King’s College London, UK.
Dong Huynh is Research Fellow at the Department of Informatics, King’s College London, UK.
Christoph Lutz is Associate Professor at BI Norwegian Business School, Norway.
Fiona McDermott is Associate Professor at Trinity College Dublin, Ireland.
Luc Moreau is Professor of Computer Science and Head of the Department of Informatics at King’s College London, UK.

Ludovica Paseri is a PhD candidate in Law, Science and Technology at the University of Bologna, Italy.
Piera Riccio is Research Assistant at the Politecnico di Torino, Italy.
Mariana Rielli is Head of Projects at the Data Privacy Brazil Research Association.
Vincent Rioux is Head of the Beaux-Arts de Paris Digital Pole at the National School of Fine Arts, France.
Alan M Sears is Researcher and Lecturer at the eLaw Centre for Law and Digital Technologies, Leiden University, the Netherlands.
Sophie Stalla-Bourdillon is Professor of Information Technology Law and Data Governance at Southampton Law School, University of Southampton, UK.
Aurelia Tamò-Larrieux is Postdoctoral Researcher at the University of St. Gallen, Switzerland.
Johannes Thumfart is Senior Researcher at the Law, Science, Technology and Society (LSTS) Research Group at Vrije Universiteit Brussel, Belgium.
Niko Tsakalakis is Senior Research Fellow at Southampton Law School, University of Southampton, UK.
Laurens Vandercruysse is Researcher in Data Protection Economics at the Department of Business of the Vrije Universiteit Brussel, Belgium.
Bart Van der Sloot is Associate Professor at the Tilburg Institute for Law, Technology and Society, Tilburg University, the Netherlands.
Maranke Wieringa is a PhD candidate at the Datafied Society of Utrecht University, the Netherlands.
Wojciech Wiewiórowski is the European Data Protection Supervisor.

1
The Norm Development of Digital Sovereignty between China, Russia, the EU and the US: From the Late 1990s to the COVID Crisis 2020/21 as Catalytic Event

JOHANNES THUMFART1

Abstract

This chapter examines the norm development process of digital sovereignty in China, the EU, the US and Russia, investigating concepts such as digital sovereignty, technological sovereignty, internet sovereignty, data sovereignty, souveraineté numérique, digitale Souveränität, 网络主权 (‘network sovereignty’), 信息主权 (‘information sovereignty’) and Суверенный интернет (‘sovereign internet’). It develops an intellectual history of the norm development of digital sovereignty, roughly following Finnemore and Sikkink’s three-stage model, with each stage being initiated by a catalytic event. The first phase, norm emergence, lasts from the late 1990s and the Patriot Act in 2001 to Russia’s laws on internet control in 2012. During this phase, under the US’s largely uncontested digital hegemony, China is the prime norm entrepreneur of digital sovereignty, promoting 网络主权 (‘network sovereignty’) and 信息主权 (‘information sovereignty’). The second phase, norm cascade, begins with the catalytic event of the Snowden revelations in 2013. This phase is characterised by an increasingly multipolar order. During this phase, the EU adopts a notion of digital sovereignty with a focus on economic aspects, and Russia’s notion of Суверенный интернет (‘sovereign internet’) becomes increasingly radicalised. In Russia and France, illiberal accounts of digital sovereignty are supported by Carl Schmitt’s geopolitical theories. From 2016 to 2020, the US and the EU underwent an additional phase, norm universalisation. Triggered by the catalytic events of Russia’s interference with the US general election and Brexit in 2016, these countries and regions became aware that their political systems were vulnerable to manipulation. The COVID crisis constitutes the most recent catalytic event and initiates the fourth stage of the norm development cycle, the stage of norm internalisation. Processes of digital sovereignty are increasingly implemented, and they emerge in a bottom-up manner, with civil society playing an important role. However, this, in turn, makes clear that digital sovereignty in liberal societies is strongly characterised and limited by the power of the private sector and by restrictions on governmental power, such as federalism and multilateralism.

1 Vrije Universiteit Brussel.

Keywords

Norm development; digital sovereignty; network sovereignty; sovereign internet; COVID crisis; internet freedom; souveraineté numérique; Snowden revelations; international law; digitale Souveränität; internet sovereignty; international relations; European Union; digital hegemony; US-China rivalry; RUNet; Russia; Carl Schmitt

I. Introduction

During and after crises, countries undergo a complex process of problem assessment and selective transformation. Since the impact of crises usually is not restricted to only one issue, this transformation can be far-reaching. For example, the 1918 influenza pandemic had a number of political and economic effects,2 and the more recent Oil Crisis in 1973/4 brought about changes in the fields of finance, geopolitics and the organisation of labour.3 However, even in the wake of crises, selective transformation does not simply arise ex nihilo; rather, ‘the actions that are taken depend on the ideas that are lying around,’ as Milton Friedman put it.4 In this sense, crises can be understood as accelerators, as catalysts for the spread and implementation of political ideas.

Although the COVID crisis is primarily a health crisis, it also gave rise to a perceived or real necessity for selective change in other sectors, primarily regarding practices of bordering, securitisation and digitalisation. In regard to practices of bordering, nations and regions have been subjected to various lockdown regimes, and export embargos on health products were enacted5 or discussed. Regarding securitisation,6 civil liberties have been restricted in order to promote public health security. By bringing about measures of social distancing and with them remote work, remote education, online shopping, online culture and contact tracing, the COVID crisis promoted digitalisation.7 The resurgence of practices of bordering and securitisation and the concomitant furthering of digitalisation is the threefold, partly contradictory impact of the COVID crisis.8

These three aspects converge in the norm of digital sovereignty, which became a household buzzword in digital policy debates in recent years and particularly surrounding this crisis. The term digital sovereignty is part of a whole group of related, yet not identical expressions: technological sovereignty, information sovereignty, cyber sovereignty, internet sovereignty, data sovereignty, souveraineté numérique, soberania digital, digitale Souveränität, 网络主权 (‘network sovereignty’), 信息主权 (‘information sovereignty’), Суверенный интернет (‘sovereign internet’). Whilst these terms can by no means all be simply equated, they can all be summarised as signifying the idea of – primarily, yet not exclusively – national control over digital phenomena.

Figure 1  Digital sovereignty at the intersection of securitisation, practices of bordering, and digitalisation

2 Laura Spinney, Pale Rider: The Spanish Flu of 1918 and How It Changed the World, 1st US edn (New York: Public Affairs, 2017).
3 Timothy Mitchell, Carbon Democracy (London and New York: Verso, 2013), 173ff.
4 Milton Friedman, Capitalism and Freedom (Chicago: University of Chicago Press, 2002), xiv.
5 So far, the most drastic example of this was an EU-requested raid of an AstraZeneca plant in Italy, in order to prevent the export of the vaccine. Angela Giuffrida and Daniel Boffey, ‘AstraZeneca plant inspected by Italian police at EU’s request,’ The Guardian, March 24, 2021, www.theguardian.com/world/2021/mar/24/astrazeneca-dismisses-claim-29m-vaccine-doses-in-italy-were-bound-for-uk.
6 Ugo Gaudino, ‘The Ideological Securitization of COVID-19: Perspectives from the Right and the Left,’ E-International Relations, July 28, 2020, www.e-ir.info/2020/07/28/the-ideological-securitization-of-covid-19-perspectives-from-the-right-and-the-left/.
7 Aamer Baig et al., ‘The COVID-19 recovery will be digital: A plan for the first 90 days,’ McKinsey Digital, May 14, 2020, www.mckinsey.com/business-functions/mckinsey-digital/our-insights/the-covid-19-recovery-will-be-digital-a-plan-for-the-first-90-days.
8 As discussed in section VI, particularly regarding the EU, the COVID crisis also deepened fiscal integration, which is somewhat contradictory.

What distinguishes digital sovereignty from, for example, data sovereignty or information sovereignty is its vast scope: it includes control over data, soft- and hardware, digital processes, services and infrastructures.9 Therefore, digital sovereignty can roughly be understood as the norm of national control over all kinds of aspects of digital technologies and their impact, which, in nations that are legitimised by popular sovereignty, indirectly includes individual control over digital technologies. Since this chapter is primarily focused on inter-state relations, the widespread concept of immediate individual digital sovereignty is not very relevant here.10 A similar assessment holds true in regard to the widespread idea of corporate digital sovereignty: some scholars have compared the power of transnational digital platforms to the state-like properties of the British East India Company and the Dutch VOC.11 However, corporations cannot claim sovereignty sensu stricto, since they do not have the authority to regulate and to enforce laws. Nevertheless, the relation between states and private actors is, of course, one of the defining features of digital sovereignty.

Whilst this chapter focuses on norm development processes, it regards digital sovereignty from a normatively neutral perspective. Depending on the national political culture and specific context, digital sovereignty can be emancipatory or repressive. In all nations alike, digital sovereignty includes attempts to align cyberspace with the territory of a state, eg by data localisation or by applying local laws in general. Digital sovereignty is analysed as a norm in the broadest possible sense, as part of prescriptive statements and agreements constituting targets considered desirable by policy makers and/or civil society. These statements and agreements include legal rules, shared moral and diplomatic principles and similar standards. In most cases, digital sovereignty is not explicitly put into law. However, in all of the examined examples, it is explicitly part of the political agenda. This broad definition of norm is of particular relevance regarding International Relations, where, due to the absence of a global sovereign who can enforce the international rule of law, informal and even merely rhetorical norms are of great importance.

Whilst being aware of a number of similar, yet independent debates in the Global South,12 this chapter focusses on China, Russia, the EU and the US. The second section assesses the norm of digital sovereignty from a philosophical point of view and highlights the need for a contextualised analysis. The third to sixth sections are dedicated to an intellectual history of the norm development of digital sovereignty, roughly following Finnemore and Sikkink’s three-stage model,13 adding the stage of ‘norm universalisation.’ Every stage is initiated by a catalytic event. The third section sketches the phase of norm emergence, starting with US digital hegemony and surveillance in the late 1990s, peaking in the context of the Patriot Act in 2001 and PRISM, and ending with Russia’s laws on internet control in 2012. During this phase, marked by the US’s largely uncontested digital hegemony, China is the prime norm entrepreneur of digital sovereignty, promoting technical 网络主权 (‘network sovereignty’) and cultural 信息主权 (‘information sovereignty’). The fourth section depicts the phase of norm cascade, starting with the Snowden revelations in 2013 as a catalytic event. This phase is characterised by an increasingly multipolar order. The EU14 adopts a notion of digital sovereignty with a focus on economic aspects, and Russia’s notion of Суверенный интернет (‘sovereign internet’) becomes increasingly radicalised, leading to the reported disconnection of the country’s national digital sphere from the global internet in 2019.15 In Russia and France, illiberal accounts of digital sovereignty are supported by Carl Schmitt’s geopolitical theories, which is interesting from the point of view of intellectual history.

Figure 2  The norm development process of digital sovereignty

9 This definition is based on the definition by Floridi, but re-formulated with an emphasis on national sovereignty, since, as far as legal norms and political reality are concerned, sovereignty is, by definition, a property of nation states. Of course, this needs to be applied to the special conditions of the EU’s pooled sovereignty, which will be discussed only briefly in the last section. See Luciano Floridi, ‘The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU,’ Philosophy & Technology 33 (2020): 369–378, https://link.springer.com/article/10.1007/s13347-020-00423-6.
10 It will be discussed briefly in section III.
11 Johannes Thumfart, ‘Public and private just wars: Distributed cyber deterrence based on Vitoria and Grotius,’ Internet Policy Review 9, no. 3 (2020), Special Issue on ‘Geopolitics, Jurisdiction and Surveillance,’ https://policyreview.info/articles/analysis/public-and-private-just-wars-distributed-cyber-deterrence-based-vitoria-and; Benjamin Bratton, The Stack: On Software and Sovereignty (Cambridge, Massachusetts: MIT Press, 2015); Cedric Ryngaert and Mark Zoetekouw, ‘The End of Territory? The Re-Emergence of Community as a Principle of Jurisdictional Order in the Internet Era,’ in The Net and the Nation State, ed. Uta Kohl (Cambridge: Cambridge University Press, 2017), 185–201.
12 Johannes Thumfart, ‘Cultures of digital sovereignty in China, Russia and India: A comparative perspective from NWICO and WSIS to SCO,’ manuscript, to be published in Digital Sovereignty in the BRICS Countries, ed. Luca Belli and Min Jiang (2022).
13 Martha Finnemore and Kathryn Sikkink, ‘International Norm Dynamics and Political Change,’ International Organization 52, no. 4 (Autumn 1998): 887–917.
14 The analysis of the EU in this contribution is characterised by a focus on France and Germany, which is, next to the disproportionate significance of these actors, mostly owed to restrictions of space.
15 Jane Wakefield, ‘Russia “successfully tests” its unplugged internet,’ BBC, 24 December 2019, www.bbc.com/news/technology-50902496.

The fifth section deals with the years from 2016 to 2020 in the US and the EU: in this period, the US and the EU, being classically ‘western’ powers, experience a phase of what can be considered norm universalisation. Triggered by the catalytic events of Russia’s interference with the US general election and Brexit in 2016, these countries and regions become aware that their political systems are vulnerable to manipulation. As a result, civil society and policy makers demand and implement controls that prevent digital platforms from spreading disinformation originating from hostile powers. Furthermore, the economic rise of China in particular has led to measures and rhetoric of digital economic nationalism. The sixth section deals with the COVID crisis as a catalytic event that furthers the norm internalisation of digital sovereignty. This section is also focused on the US and the EU. Processes of digital sovereignty, for example countering disinformation, are widely implemented, and they emerge with civil society playing an important role, constituting, in some respects, a form of bottom-up digital sovereignty. Additionally, projects aimed at increasing the security and autonomy of digital infrastructure are pursued in the EU and the US.

II. Why Bother to Contextualise Digital Sovereignty?

In order to assess the core problem of digital sovereignty, it is best to take a step back and reflect on the concept of sovereignty itself. The term has so many different and contradictory aspects, eg internal, external and popular sovereignty, that it has been accurately described as having ‘an emotive quality’, but ‘lacking meaningful specific content’.16 Namely in international law, the norms of sovereignty as self-determination of a people and as territorial jurisdiction are often conflicting,17 for example regarding Catalan or Kurdish separatists who claim sovereignty within another sovereign territory. Moreover, the notion of sovereignty as actual national independence must be understood in a less and less literal sense in the present age of multilateralism and international treaties. Furthermore, just like any other historical phenomenon, the European norm of sovereignty, which was universalised in international law, did not develop in a vacuum, but in specific technological and societal contexts of Europe’s sixteenth and seventeenth centuries.18 Up to the present day, concepts and practices of sovereignty adapt to the milieu to which they apply. The functioning of sovereignty on ships on the high seas, in outer space and sovereignty over airspace are not identical to the functioning of territorial sovereignty on land. Internal sovereignty means something else in liberal than in illiberal regimes. Therefore, one can hardly expect digital sovereignty to be identical to pre-digital sovereignty. Rather, the notion of sovereignty can be expected to adapt to this new techno-social milieu and to be not exclusively ‘grounded in the monopolistic spatiality of territorial sovereignty’.19

However, it is impossible to conceive of adaptation without an idea of the essence of the object which adapts. In order to tackle this hard question of the essence of sovereignty, it is best to concentrate on its formal and minimal definition detached from empirical realities as provided by the legal positivist Kelsen:

The naturalistic direction of modern political sciences propagates an illusionary understanding of sovereignty as an empirical-scientific fact. It can be proven that such a thing as sovereignty cannot exist as a social reality. In the sphere of natural realities, sovereignty must imply roughly freedom and independence of one power from the other. In this factual sense, no state can be sovereign because even the greatest superpower is being determined by others on all sides, in terms of economic, juridical and cultural life.20

Therefore, to Kelsen, sovereignty ‘is only the order, which is not contained in any other order, because it cannot be deduced from another order’ and merely ‘a property of the state determined as a legal order’.21 Relating this minimalist definition to the digital sovereignty discourse reduces the concept of digital sovereignty to the idea that national laws should apply to cyberspace, which is rather self-evident. However, this most straightforward interpretation of digital sovereignty is connected to considerable problems regarding its implementation. In practice, sovereignty is primarily attached to territory. Cyberspace, in turn, is, of course, based on physical infrastructure that is connected to jurisdictions; however, it also transcends the territorial limits of jurisdictions. This is the case, for example, regarding the ubiquitous accessibility of information and cloud computing,22 in which context data is stored in a distributed and dynamic way and not attached to one territory.23 Cyberspace is not non-territorial, but nevertheless ‘post-territorial’.24 Therefore, extending the rule of law into cyberspace is a difficult task. In his standard work on sovereignty and cyberspace, Mueller criticises that ‘the attempt by governments to align the internet with their jurisdictional boundaries (…) leads them inexorably towards assertions of extraterritorial jurisdiction’.25 Mueller’s cyberspace jurisdiction paradox is true in many cases. There is, for example, no international consensus regarding the question of whether the laws of the country of the origin of an act of digital communication or of its destination should apply to that act, such as a website involved in activities that are illegal in one of these countries.26 Similar problems arise in the sector of e-evidence, where, for example, according to the US CLOUD Act, the location of the data controller is the decisive category that determines its accessibility for law enforcement, whilst the GDPR regards the location of the data as the decisive category.27 All of these approaches bring about considerable problems regarding their extraterritorial implications. Yet, if one takes Kelsen’s separation of sovereignty from empirical territory seriously, then there is no reason why extraterritorial jurisdiction should be a problem per se.28 Rather, like pre-digital sovereignty, digital sovereignty is connected to a number of conflicts, especially regarding its extraterritorial implications.29 Rather than the abandonment of sovereignty, this requires a context-aware, elastic ‘repurposing of sovereignty’ in the digital age30 and an understanding of ‘territory and borders (…) as fluid, adaptable practices rather than as static and immutable structures’.31

Another, perhaps even more complex category of dilemmas arises when one defines sovereignty in a way that goes further than the core functions of liberal statehood and includes notions of political control related to national self-determination. Especially in liberal societies, it is not clear how the concept of sovereignty as far-reaching political control could possibly apply to a great number of phenomena of the digital sphere related to infrastructure, culture and economy, which are, in great part, perceived to be independent from state power. Likewise, a state’s sovereignty over the digital realm means something very different from the perspective of the Russian or Chinese illiberal understanding of political power. To tackle these problems, it is crucial to keep in mind that the developing concept of digital sovereignty and related terms do not have a specific fixed meaning; rather, their meaning changes with the traditions they come from, the contexts in which they are used and the power relations inherent to these contexts. Therefore, it is indispensable to subject the digital sovereignty debate to ideo-historical contextualisation and discourse analysis.

16 Elihu Lauterpacht, ‘Sovereignty – Myth or Reality?’ International Affairs 73 (1997): 137–150, 141.
17 Jeremy Waldron, ‘Two Conceptions of Self-Determination,’ in The Philosophy of International Law, ed. Samantha Besson and John Tasioulas (Oxford: Oxford UP, 2010): 397–413.
18 First of all, the discourse of sovereignty, and the religious undertone of claiming an almost transcendent ‘highest power’, is characterised by the secularisation of international relations in the Westphalian order. See: Carl Schmitt, Political Theology (Cambridge, Mass.: MIT Press, 1985). Furthermore, sovereign states with clear borders could also only be imagined due to the invention of the technology of cartography. See: Michael Biggs, ‘Putting the State on the Map: Cartography, Territory, and European State Formation,’ Comparative Studies in Society and History 41, no. 2 (1999): 374–411. Another decisive role in the development of modern state sovereignty in the early modern age was played by the successive rationalisation of the management of citizen data, inasmuch as such information was crucial to taxation. Benedict Anderson, Imagined Communities (London: Verso, 2016), 168–169; James C. Scott, Seeing Like a State (New Haven and London: Yale UP, 1998), 22–24, 67–68.
19 Mireille Hildebrandt, ‘Extraterritorial Jurisdiction to enforce in Cyberspace? Bodin, Schmitt, Grotius, in Cyberspace,’ University of Toronto Law Journal 63, no. 2 (Spring 2013): 196–224, 224.
20 Hans Kelsen, Das Problem der Souveränität (Tübingen: Mohr Siebeck, 1928), 6f, translated by the author.
21 ibid 13f, translated by the author.
22 Paul de Hert and Michal Czerniawski, ‘Expanding the European data protection scope beyond territory: Article 3 of the General Data Protection Regulation in its wider context,’ International Data Privacy Law 6, no. 3 (August 2016): 230–243.
23 Paul De Hert, Cihan Parlar and Johannes Thumfart, ‘Legal Arguments Used in Courts Regarding Territoriality and Cross-Border Production Orders: From Yahoo Belgium to Microsoft Ireland,’ New Journal of European Criminal Law 9, no. 339 (2018).
24 Paul de Hert and Johannes Thumfart, ‘The Microsoft Ireland case and the Cyberspace Sovereignty Trilemma. Post-Territorial technologies and companies question territorial state sovereignty and regulatory state monopolies,’ Brussels Privacy Hub Working Paper 4/11, July 2018, https://brusselsprivacyhub.eu/publications/BPH-Working-Paper-VOL4-N11.pdf.
25 Milton Mueller, Will the Internet Fragment? Sovereignty, Globalization, and Cyberspace (Cambridge, UK: Polity, 2017), 3, 108, Kindle.
26 Uta Kohl, Jurisdiction and the Internet (Cambridge, UK: Cambridge UP, 2007), 24ff.
27 Roxana Vatanparast, ‘Data Governance and the Elasticity of Sovereignty,’ Brooklyn Journal of International Law 46, no. 1 (2020): 26f.
28 Kelsen sees no problem in extraterritorial jurisdiction per se. Hans Kelsen, General Theory of Law and State (Harvard UP: Cambridge, 1949), 209. This passage has also been discussed by Svantesson as an unequivocal endorsement of the jurisprudential legitimacy of extraterritoriality. Dan Svantesson, ‘A Jurisprudential Justification for Extraterritoriality in (Private) International Law,’ Santa Clara Journal of International Law 13 (2015): 517–571, 542.
29 These problems with extraterritoriality prominently regard questions of data protection, which led to Schrems I and II. Kristina Irion, ‘Schrems II and Surveillance: Third Countries’ National Security Powers in the Purview of EU Law,’ European Law Blog, July 24, 2020, https://europeanlawblog.eu/2020/07/24/schrems-ii-and-surveillance-third-countries-national-security-powers-in-the-purview-of-eu-law/. See also the problem of the taxation of digital services, which causes comparable transatlantic conflicts. Scott A. Hodge and Daniel Bunn, ‘Digital Tax Deadlock: Where Do We Go from Here?’ Tax Foundation, 1 July 2020, https://taxfoundation.org/oecd-digital-tax-project-developments/.
30 Roxana Vatanparast, ‘Data Governance and the Elasticity of Sovereignty,’ Brooklyn Journal of International Law 46, no. 1 (2020), 5.
31 Daniel Lambach, ‘The Territorialization of Cyberspace,’ International Studies Review 22, no. 3 (September 1, 2020): 482–506, https://doi.org/10.1093/isr/viz022.

III. Norm Emergence from 1998 to 2012: Pleas for Digital Sovereignty as a Reaction to US Digital Hegemony

Within a purely academic discourse foremost occupied with legal theory, the question of how to reconcile the, then still marginal, transnational digital flows with the territorial limits of sovereignty has been discussed since as early as 1996 by household names such as Goldsmith, who famously argued ‘against cyberanarchy’.32 This largely US-American academic discourse is not without irony. By theorising sovereign state power as being challenged by the newly developed digital infrastructure, it obscures the fact that this infrastructure was, in its beginning, a public project, which was only privatised due to the declining relevance of national security issues after the end of the Cold War.33

32 Tim Wu, ‘Cyberspace Sovereignty – The Internet and the International System,’ Harvard Journal of Law and Technology 10, no. 647 (1996): 647–666; Saskia Sassen, ‘On the Internet and Sovereignty,’ Indiana Journal of Global Legal Studies 5, no. 2 (1998): 9; Henry Perritt, ‘The Internet as a Threat to Sovereignty?’ Indiana Journal of Global Legal Studies 5, no. 2 (1998): 4; Jack Goldsmith, ‘Against Cyberanarchy,’ University of Chicago Law Occasional Paper 40 (1999).
33 Nathan Newman, Net Loss: Internet Prophets, Private Profits, and the Costs to Community (Philadelphia: Penn State UP, 2002), 73.

In geostrategic terms, the discourse about the decay of sovereignty due to digitalisation was even interpreted as an advantage for the US. It was argued that the perceived contradiction between national sovereignty and digital technologies would automatically weaken the grip of dictatorships.34 This stratagem, also known as the ‘internet dictator’s dilemma’, remained a guiding principle of US foreign digital policy up to the Internet Freedom speech of 2010. The broader interest in the idea of digital sovereignty, then not necessarily named as such, but rather called ‘data sovereignty,’ arose in 2001, with the US Patriot Act, which allows the US Government to swiftly access personal data stored by American companies without notification of the data subject.35 Due to the relatively small number of internet users back then and the lack of public knowledge, this discourse was foremost restricted to expert circles. Regarding the digital civil society, the internet was primarily understood in terms of tech-libertarianism during this period, which is directly opposed to national, traditional politics in general and, eg, propagates the idea of ‘the sovereign individual’, enabled through crypto-technologies.36 In popular culture, this explicit anti-statism is expressed by Barlow, who writes in 1996: ‘Governments of the Industrial World (…). I come from Cyberspace (…). You have no sovereignty where we gather.’37

In China, on the other hand, the discourse about the digital was from the beginning understood as an issue of national concern. This can be explained in four ways. First, the illiberal and socialist country knows no clear separation between the private and the public sector. Second, the Chinese understanding of sovereignty developed from a Confucian and imperial tradition that is characterised by a universal understanding of sovereignty according to the all-encompassing ‘Tianxia’ (天下) system and not by the European system of balance of competing secular powers.38 Third, to the Marxist tradition of thought, it is inevitable that technology, the material base, has far-reaching impacts on the political and ideological superstructure. Fourth, since China conceives of itself as a post- and anticolonial power, it is, in principle, critical towards any developments coming from the US as the dominant global power. Already before digitalisation, back then on the occasion of the installation of a global satellite network, several countries with a colonial history, including China and members of the non-aligned movement, challenged the US’s global informational hegemony during the New World Information and Communication Order (NWICO) debates at UNESCO by demanding sovereignty over information.39

Whilst being only one among a handful of essays written in the late nineties about ‘network sovereignty’ (网络主权), ‘network colonialism’ (网络殖民主义) and ‘information sovereignty’ (信息主权),40 all of these aspects of the Chinese position on digital sovereignty become extraordinarily clear in a remarkable essay from 1998 written by Wenxiang Gong of the School of International Relations at Peking University.41 In a classical Marxist manner, and possibly following Rifkin,42 the essay argues that, after the steam engine and electrification, the information age brings about a third industrial revolution and that governments need to react to this shift. Furthermore, the author states that national sovereignty includes the right to control the dissemination of information. He bases this claim, which is not evident from a liberal perspective, on the Confucian philosopher Mencius. Along these lines, the essay drafts a conflict between the notion of ‘information sovereignty’ (信息主权, xinxi zhuquan) and the global communication of the dawning information age. Making a particularly intelligent point, the essay argues that western democracies, too, are challenged by the informational freedom of the internet, citing hypothetical, yet realistic43 problems that the German Government could have with prohibiting Nazi propaganda disseminated via websites based in the US. In a third line of argument, the essay criticises US cultural hegemony and cites the Opium War and the Boxer Rebellion as embodying the Chinese experience with western hegemony and the necessity to tackle it in solidarity with other leaders of the developing world, explicitly referring to the aforementioned NWICO debates at UNESCO:

Whilst the information superpowers sing the hymns of ‘international freedom of communication’ and ‘information without borders,’ many developing countries feel that their rights are being taken away and even their national security is being threatened.44

34 Joseph S. Nye Jr. and William A. Owens, ‘America’s Information Edge,’ Foreign Affairs 75, no. 2 (Mar–Apr 1996): 20–36.
35 Milton Mueller, Will the Internet Fragment? Sovereignty, Globalization, and Cyberspace (Cambridge, UK: Polity, 2017), 79, Kindle.
36 James Dale Davidson and William Rees-Mogg, The Sovereign Individual: Mastering the Transition to the Information Age (New York: Touchstone, 1997).
37 John Perry Barlow, ‘A Declaration of the Independence of Cyberspace,’ EFF, 8 February 1996, www.eff.org/de/cyberspace-independence.
38 Zhao Tingyang, Redefining A Philosophy for World Governance (Singapore: Palgrave Macmillan, 2019), 7; Andrew Coleman and Jackson Nyamuya Maogoto, ‘‘Westphalian’ Meets ‘Eastphalian’ Sovereignty: China in a Globalised World,’ Asian Journal of International Law 3 (2013): 237–269.
39 Thomas R. Wolfe, ‘A New International Information Order: The Developing World and the Free Flow of Information Controversy,’ Syracuse Journal of International Law and Commerce 8, no. 1 (1980): 7, https://surface.syr.edu/jilc/vol8/iss1/7.
40 Wanshu Cong and Johannes Thumfart, The Chinese origins of digital modes of sovereignty (1996 to 2001) – Between authoritarian censorship and resistance against digital colonialism (to be published).
41 Wenxiang Gong (龚文庠), ‘International Communication in the Information Age: New Issues Facing International Relations (信息时代的国际传播:国际关系面临的新问题),’ Studies of International Politics (国际政治研究) 2 (1998): 41–45.
42 Jeremy Rifkin, The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era (New York: G. P. Putnam’s Sons, 1995), 59f.
43 Take the case LICRA v Yahoo! in 2000.
44 Wenxiang Gong (龚文庠), ‘International Communication in the Information Age: New Issues Facing International Relations (信息时代的国际传播:国际关系面临的新问题),’ Studies of International Politics (国际政治研究) 2 (1998): 45. Translation by the author.

12  Johannes Thumfart Regardless of their ideological pedigree, these arguments against US digital hegemony and the correlated ideology of the ‘free flow of information’ accurately describe the power dynamics in this discourse. Whilst the US is the digital hegemon in those days and has therefore a significant interest to promote a laissezfaire approach to other powers, China can be understood as the leading norm entrepreneur of internet sovereignty contesting the US’s power.45 Besides these more general characteristics, the Chinese position is explicitly informed by resistance against US surveillance within the context of Echelon and the Patriot Act46 and even includes privacy concerns regarding Microsoft products.47 A more technical Chinese discourse on 网络主权 (wangluo zhuquan, ‘network sovereignty’) found a particularly noteworthy expression in 2000, when the influential flight engineer Ji Zhaojun published the essay ‘Network Security, Sovereignty, and Innovation’, comparing digital sovereignty to sovereignty over airspace and maritime space and promoting the idea that open digital networks further US dominance.48 In 2004, Chen Xueshi of the National University of Defense Technology affiliated to the People’s Liberation Army defined national ‘information borders,’ which has since then been a characteristic feature of the Chinese discourse.49 During these first years of the twenty-first century, several Chinese researchers have argued that network sovereignty should be equated with ‘information sovereignty’ (信息主权) that signifies a state’s capacity to ‘protect,’ ‘manage,’ and ‘regulate’ information.50 From the non-digital and digital persecution of Falun Gong from 1999 on,51 Beijing successively enforced digital censorship on other issues, eg the cause of Tibetans or Uyghurs or the events of June 1989.52

45 Sarah McKune and Shazeda Ahmed, 'The Contestation and Shaping of Cyber Norms Through China's Internet Sovereignty Agenda,' International Journal of Communication 12 (2018): 3835–3855, 3840.
46 Wang Zhengping and Xu Tieguang (王正平 徐铁光), 'Western cyber hegemony and cyber rights in developing countries (西方网络霸权主义与发展中国家的网络权利),' Thought Front (思想战线) 2, no. 37 (2011).
47 See Wanshu Cong and Johannes Thumfart, The Chinese Origins of Digital Modes of Sovereignty (1996 to 2001): Between Authoritarian Censorship and Resistance against Digital Colonialism (to be published).
48 Sarah McKune and Shazeda Ahmed, 'The Contestation and Shaping of Cyber Norms Through China's Internet Sovereignty Agenda,' International Journal of Communication 12 (2018): 3835–3855, 3838.
49 ibid 3838.
50 Jinhan Zeng et al., 'China's Solution to Global Cyber Governance: Unpacking the Domestic Discourse of "Internet Sovereignty",' Politics & Policy 45, no. 3 (2017): 432–464, 443.
51 Jae Ho Chung, 'Challenging the State: Falungong and Regulatory Dilemmas in China,' in Sovereignty under Challenge, ed. John Dickey Montgomery and Nathan Glazer (New Brunswick and London: Transaction, 2002), 83–106.
52 Rogier Creemers, 'China's Approach to Cyber Sovereignty,' Konrad-Adenauer-Stiftung, 2020, www.kas.de/documents/252038/7995358/China%E2%80%99s+Approach+to+Cyber+Sovereignty.pdf/2c6916a6-164c-fb0c-4e29-f933f472ac3f?version=1.0&t=1606143361537.

From 2003 to 2006, the country completed the infamous projects of the 'Great Chinese Firewall' and the 'Golden Shield Project'.53 These applications of the Chinese understanding of network/information sovereignty became almost proverbial. However, the Chinese contribution to the norm development of digital sovereignty is also owed to the country's foreign policy ambitions to influence global internet governance. Although China had maintained a (mostly passive) presence at ICANN since 1999, its ambitions regarding internet governance revealed themselves during the first phase of the World Summit on the Information Society (WSIS),54 after three countries of the global south, Brazil, Cuba, and Iran, proposed to create an intergovernmental framework to replace the existing ICANN-led internet governance model.55 Already during the second preparatory committee of WSIS in Geneva in 2003, the head of the Chinese delegation criticised the status of internet governance as 'monopolised by one state and one corporation that neither facilitate further growth of the internet, nor fully embody the principle of equity and full representation'.56 During the WSIS, China's spokesperson sought understanding for the country's restrictive approach to the freedom of speech, exhorting the international community to 'fully respect the differences in social systems and cultural diversity'.57 Under these premises, China, together with Iran and Cuba, advocated within the UN for strict political control of the internet. The Tunis Agenda of 2005, which concluded WSIS, ended by recognising the demand for digital sovereignty: 'Policy authority for Internet-related public policy issues is the sovereign right of States.'58

53 Xiao Qiang, 'How China's Internet Police Control Speech on the Internet,' Radio Free Asia, 21 November 2008, www.rfa.org/english/commentaries/china_internet-11242008134108.html.
54 Gianluigi Negro, 'A history of Chinese global Internet governance and its relations with ITU and ICANN,' Chinese Journal of Communication 13, no. 1 (2020): 104–121, 8.
55 Abu Bhuiyan, Internet Governance and the Global South: Demand for a New Framework (New York, NY: Palgrave Macmillan, 2014), 51f.
56 Gianluigi Negro, 'A history of Chinese global Internet governance and its relations with ITU and ICANN,' Chinese Journal of Communication 13, no. 1 (2020): 104–121, 10.
57 Wang Xudong, 'China. Strengthening Cooperation, Promoting Development and Moving Towards the Information Society Together,' statement at the World Summit on the Information Society, 10 December 2003, www.itu.int/net/wsis/geneva/coverage/statements/china/cn.html.
58 'WSIS: Tunis Agenda for the Information Society,' accessed 29 June 2021, www.itu.int/net/wsis/docs2/tunis/off/6rev1.html.
59 Lokke Moerel and Paul Timmers mention the .eu domain name system starting in 2005. See Lokke Moerel and Paul Timmers, 'Reflections on Digital Sovereignty,' EU Cyber Direct, Research in Focus series, 2021, available at SSRN: https://ssrn.com/abstract=3772777.

In explicit opposition to these Chinese endeavours, yet at the same time inspired by them, French tech entrepreneur Bernard Benhamou, also part of the French delegation at the 2005 WSIS in Tunis, and tech strategist Laurent Sorbier conceived of a liberal countermodel of digital sovereignty. Although there were several earlier practical projects in the EU59 that might retrospectively be interpreted as manifestations of digital sovereignty, their essay on souveraineté numérique from 2006 is the first European theoretical discussion of this subject that explicitly uses the term.60 And Benhamou has remained influential in this discourse: he has been working with the French Government and founded the Institut de la souveraineté numérique in 2014 (see section IV). One of the leading motives of the 2006 essay is its opposition to US digital policy. Unsurprisingly, the US is foremost accused of handing over too much power to private actors. According to the essay, this laissez-faire approach paradoxically grants authoritarian regimes such as China and Iran too many possibilities to gain political control over the internet. Furthermore, the US is accused of pursuing a unilateral approach to internet control via ICANN and of defending the status quo of internet governance out of economic motives, but also because that status quo touches US sovereignty. Particularly in comparison with the later, rather confrontational European accounts of digital sovereignty (sections IV–VI), several features of this essay are striking. Neither does it have an economic agenda, nor does it normatively define digital sovereignty as the property of one nation or of Europe. Rather, it perceives digital sovereignty as a joint democratic endeavour of all western nations and their citizens. This universalist optimism is rather typical of the expanding and uncontested EU of that period.61 Its committed, altruistic, idealistic tonality is also characteristic of the stage of norm emergence.62 It must also be noted that, since it was written during the Bush years, the essay understands itself not so much as anti-American, but rather as opposing a neo-liberal, foremost economic understanding of the internet that it attributes to GOP policies. In fact, the early European attempt to challenge the US-led model of internet governance during WSIS was stopped by GOP government officials exerting diplomatic pressure and arguing for the maintenance of the status quo due to economic necessity.63 Although the internet had originally been conceived of as a cold war project by the RAND Corporation64 and the concept of democratisation through digitalisation was formulated as early as 1996 by none other than Joseph Nye,65 the US at first, indeed, promoted not primarily a political, but an economic understanding of the digital sphere.

60 Bernard Benhamou and Laurent Sorbier, 'Souveraineté et Réseaux Numériques,' Institut français des relations internationales 3 (Autumn 2006): 519–530.
61 Perry Anderson, The Old New World (London: Verso, 2009), 41ff.
62 Martha Finnemore and Kathryn Sikkink, 'International Norm Dynamics and Political Change,' International Organization 52, no. 4 (Autumn 1998): 887–917, 898.
63 Letter by Gutierrez and Rice to the UK Foreign Secretary, 7 November 2005.
64 RAND Corporation, 'Paul Baran and the Origins of the Internet,' RAND Corporation website, www.rand.org/about/history/baran.html.
65 Joseph S. Nye Jr. and William A. Owens, 'America's Information Edge,' Foreign Affairs 75, no. 2 (Mar–Apr 1996): 20–36. Nye is an influential US intellectual and an architect of Bill Clinton's foreign policy. Hillary Clinton's Internet Freedom speech of 2010 is probably a direct reflection of Nye's role in Bill Clinton's government.

This changed in 2010, when the Obama administration proclaimed the propagation of the norm of 'Internet Freedom' as an aim central to its foreign policy.66 In her trailblazing 'Internet Freedom' speech held during that year, Clinton formulated the new imperative:

(…) Online organizing has been a critical tool for advancing democracy (…). We've seen the power of these tools to change history. We want to put these tools in the hands of people who will use them to advance democracy and human rights.67

However, the developments connected to the internet-enabled Arab Spring in 2011 proved these expectations wrong. Internet enthusiasm was followed by a lasting wave of Islamism, authoritarianism and civil war all over the Arab world.68 Russia, too, experienced the greatest wave of protests of its post-Soviet history in 2011, supported by the development of various digital projects.69 As a reaction, Moscow started tightening political control of the internet.70 In 2012, the state introduced a unified register of banned websites, which at first covered sites containing child pornography and drug-related content and which would, two years later, be extended to political content.71 Already in 2011, the Shanghai Cooperation Organization (SCO),72 which at that time primarily included Russia and China, had proposed an International Code of Conduct for Information Security to the UN General Assembly, which emphasised the 'respect for the sovereignty, territorial integrity and political independence of all States (…) and respect for the diversity of history, culture and social systems of all countries'.73 Nevertheless, the notion of 'Суверенный интернет', a sovereign internet, was not theorised before the Snowden revelations. In addition, and ironically enough, Wikileaks' release of the diplomatic cables in November 2010 led western policy makers to the conclusion that Internet Freedom can be a political problem for democracies, too.74

66 Richard Fontaine and Will Rogers, 'Internet Freedom: A Foreign Policy Imperative in the Digital Age,' Center for a New American Security, June 2011; Fergus Hanson, 'Baked in and wired: ediplomacy@State,' Brookings Institution, 25 October 2012, www.brookings.edu/research/internetfreedom-the-role-of-the-u-s-state-department/; Hillary Clinton, 'Remarks on Internet Freedom,' speech at the Newseum, 21 January 2010, https://2009-2017.state.gov/secretary/20092013clinton/rm/2010/01/135519.htm.
67 Hillary Clinton, 'Remarks on Internet Freedom,' speech at the Newseum, 21 January 2010, https://2009-2017.state.gov/secretary/20092013clinton/rm/2010/01/135519.htm.
68 Habibul Haque Khondker, 'The impact of the Arab Spring on democracy and development in the MENA region,' Sociology Compass 13, no. 9 (September 2019), https://onlinelibrary.wiley.com/doi/abs/10.1111/soc4.12726.
69 Gregory Asmolov, 'Runet in Crisis Situations,' in Internet in Russia: Societies and Political Orders in Transition, ed. Sergey Davydov (Cham: Springer, 2020), 231–250, 243.
70 Jo Harper, 'Russians protest state censorship of the internet,' Deutsche Welle, 23 June 2017, www.dw.com/en/russians-protest-state-censorship-of-the-internet/a-39807536.
71 Alena Epifanova, 'Deciphering Russia's "Sovereign Internet Law",' Deutsche Gesellschaft für Auswärtige Politik, 16 January 2020, https://dgap.org/de/node/33332.
72 Johannes Thumfart, 'Cultures of digital sovereignty in China, Russia and India: A comparative perspective from NWICO and WSIS to SCO,' manuscript, to be published in Digital Sovereignty in the BRICS Countries, eds. Luca Belli and Min Jiang (2022).
73 Sarah McKune, 'An Analysis of the International Code of Conduct for Information Security,' Open Effect / Citizen Lab, https://openeffect.ca/code-conduct/.
74 David Leigh, 'US embassy cables leak sparks global diplomatic crisis,' The Guardian, 28 November 2010, www.theguardian.com/world/2010/nov/28/us-embassy-cable-leak-diplomacy-crisis.


IV. Norm Cascade 2013–2020: The Institutionalisation of Digital Sovereignty in a Multipolar World

In June 2013, Edward Snowden's revelations undermined the credibility of the norm of Internet Freedom even further. Evidently, its promotion by the US had not only failed to bring about sustainable democratic change, but had been accompanied by global surveillance to a historically unknown extent. As a reaction to this 'normative instability',75 a global process of digital disentanglement set in, also labelled a push towards a 'fragmentation of the internet',76 characterised by 'concerted efforts by nation-states to localise data within their borders'.77 To understand the drivers of this process, it is important to note that the NSA and the Five Eyes did not only spy on political targets, but also on economic ones.78 Measures of digital sovereignty were therefore regarded as vital for non-US economies. As will be shown later, in Germany and France, the private sector and economic policy makers were the most outspoken in promoting the norm of digital sovereignty. Regarding China, the challenge to the US's digital hegemony has a considerable quantitative dimension. In 2008, the country surpassed the US in the number of internet users and has led the world in this respect ever since.79 Accordingly, Beijing intensified its ambitions regarding global internet governance. In 2014, the Chinese engineer and policy maker Houlin Zhao was elected Secretary-General of the ITU. In the same year, Chinese officials organised the World Internet Conference, a forum for the Chinese Government to discuss digital issues with representatives of Baidu, Alibaba, Tencent, Apple, Amazon, Microsoft, Samsung, LinkedIn, ICANN, and political leaders of the Shanghai Cooperation Organization (SCO) and associated countries.80

75 Matthias C. Kettemann, The Normative Order of the Internet: A Theory of Rule and Regulation Online (Oxford: OUP, 2020), 212.
76 Milton Mueller, Will the Internet Fragment? Sovereignty, Globalization, and Cyberspace (Cambridge, UK: Polity, 2017), 12.
77 Tatevik Sargsyan, 'The Turn to Infrastructure in Privacy Governance,' in The Turn to Infrastructure in Internet Governance, eds. Francesca Musiani et al. (NY: Palgrave Macmillan, 2016), 194.
78 Erik Kirschbaum, 'Snowden says NSA engages in industrial espionage: TV,' Reuters, 26 January 2014, www.reuters.com/article/us-security-snowden-germany-idUSBREA0P0DE20140126; see also Laura K. Donohue, 'High Technology, Consumer Privacy, and U.S. National Security,' Georgetown University Law Center, 2015, 6, https://scholarship.law.georgetown.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=2469&context=facpub.
79 David Robson, 'Why China's internet use has overtaken the West,' BBC, 9 March 2017, www.bbc.com/future/article/20170309-why-chinas-internet-reveals-where-were-headed-ourselves.
80 Catherine Shu, 'China Tried To Get World Internet Conference Attendees To Ratify This Ridiculous Draft Declaration,' TechCrunch, 21 November 2014, https://techcrunch.com/2014/11/20/worldinternetconference-declaration/.

At the event, a draft of a common declaration regarding digital sovereignty was reportedly slipped under the attendees' hotel room doors at night, which included the passage:

We should respect each country's rights to the development, use and governance of the internet, refrain from abusing resources and technological strengths to violate other countries' internet sovereignty.81

Since western observers were not yet familiar with the notion of digital sovereignty, this was merely seen as a particularly brazen, even 'hilariously farcical'82 attempt to justify China's domestic censorship practices. Ultimately, it was not mentioned during the summit's closing speeches.83 At the second World Internet Conference in 2015, the country was more outspoken about its understanding of digital sovereignty and unveiled its vision of a multipolar digital order, including a clear attack on US hegemony:

We should respect each country's internet sovereignty, respect each country's right to choose their own development path and management model of the internet, and we should respect countries' rights to participate equally in public policy making of the international cyberspace.84

The year 2015 also marks the introduction of the 'digital silk road' project.85 With this project, Beijing seeks to promote the global construction of financial, information and telecommunications networks by Chinese firms. Beyond these economic aims, the project is intended to increase China's capacity to take part in setting international technology standards and to further its ambitions related to internet governance.86 The following five years saw the rise of Huawei as a significant actor in defining global technology standards for 5G and in building out mobile infrastructure via institutions like the ITU.87 In 2015, China also deepened its political alliance with Russia regarding digital technologies: the SCO member states agreed on 'cooperation in ensuring international information security',88 including the combating of terrorists and extremists, which, from a western perspective, might be evaluated as the oppression of dissidents.

81 ibid.
82 ibid.
83 Franz-Stefan Gady, 'The Wuzhen Summit and Chinese Internet Sovereignty,' Huffpost, 12 September 2014, www.huffpost.com/entry/the-wuzhen-summit-and-chi_b_6287040.
84 Vivienne Zeng, 'Alibaba's Jack Ma sings praises of Xi's global vision of "internet management",' Hong Kong Free Press, 18 December 2015, https://hongkongfp.com/2015/12/18/albabas-jack-ma-sings-praises-of-xis-global-vision-of-internet-management/.
85 Robert Greene and Paul Triolo, 'Will China Control the Global Internet Via its Digital Silk Road?' Carnegie Endowment, 8 May 2020, https://carnegieendowment.org/2020/05/08/will-china-control-global-internet-via-its-digital-silk-road-pub-81857.
86 ibid.
87 ibid.
88 SCO, 'Agreement on Cooperation in Ensuring International Information Security between the Member States of the Shanghai Cooperation Organization,' 2015, http://eng.sectsco.org/load/207508/.

The SCO has been characterised as 'transnational authoritarianism'89 and as 'one of the most successful examples of multilateral embrace of digital authoritarian norms and practices'.90 Meanwhile in Europe, the term souveraineté numérique was debated in the French Parliament in 2012, a year before the Snowden revelations.91 This is not surprising, since Benhamou, co-author of the first essay on digital sovereignty (see the previous section), had been working with the French Government and later, in 2014, founded his Institut de la Souveraineté Numérique. Furthermore, notions of sovereignty generally resonate well with the country's statist culture. In 2012, the French media entrepreneur Pierre Bellanger also became a norm entrepreneur92 in matters of digital sovereignty and published an essay entitled De la souveraineté numérique.93 Although not an academic publication, this essay demonstrates an ostentatiously classical approach to digital sovereignty, defining sovereignty in its first sentence as 'the control of a territory by a population by means of a common law issued from collective will'.94 Bellanger did not bother to quote his French predecessors in this debate, Benhamou and Sorbier, although he was probably aware of their work. From the point of view of intellectual history, it is striking that he instead bases his far less liberal concept of digital sovereignty on the theories of Carl Schmitt, the arch-enemy of the Anglo-Saxon global liberal order.95 According to Bellanger, Anglo-Saxon maritime power, which Schmitt labelled 'thalassocracy', has transformed into 'internetocracy'.96 Correspondingly, he interprets the stand-off between US Internet Freedom and Chinese digital sovereignty as yet another manifestation of the, according to Schmitt, eternal fight between maritime and terrestrial powers. His own account of digital sovereignty is clearly in line with Schmitt's and China's illiberal accounts. This traditionalist tonality based on legitimacy is characteristic of the second phase of norm development.97

89 Gerasimos Tsourapas, 'Global Autocracies: Strategies of Transnational Repression, Legitimation, and Co-Optation in World Politics,' International Studies Review, viaa061 (2020), 20, https://doi.org/10.1093/isr/viaa061. See also Johannes Thumfart, 'Cultures of digital sovereignty in China, Russia and India: A comparative perspective from NWICO and WSIS to SCO,' manuscript, to be published in Digital Sovereignty in the BRICS Countries, eds. Luca Belli and Min Jiang (2022).
90 Sarah McKune and Shazeda Ahmed, 'The Contestation and Shaping of Cyber Norms Through China's Internet Sovereignty Agenda,' International Journal of Communication 12 (2018): 3835–3855, 3841.
91 Commission des affaires économiques, 'Audition, ouverte à la presse, de Mme Fleur Pellerin,' Assemblée Nationale, 17 October 2012, www.assemblee-nationale.fr/14/cr-eco/12-13/c1213010.asp.
92 Bellanger has stayed active in this capacity. In 2019, he spoke in the French Senate's commission of enquiry on digital sovereignty. Website of the Commission d'enquête sur la souveraineté numérique, www.senat.fr/commission/enquete/souverainete_numerique.html.
93 Pierre Bellanger, 'De la souveraineté numérique,' Le Débat 170 (2012/3): 149–159.
94 ibid 150. Translated by the author.
95 Carl Schmitt, Land and Sea (Washington D.C.: Plutarch Press, 1997).
96 Pierre Bellanger, 'De la souveraineté numérique,' Le Débat 170 (2012/3): 149–159, 154.
97 Martha Finnemore and Kathryn Sikkink, 'International Norm Dynamics and Political Change,' International Organization 52, no. 4 (Autumn 1998): 887–917, 898.

Bellanger would not be much of an entrepreneur if he had missed the opportunity to reissue his plea for digital sovereignty after Snowden. In January 2014, he published his monograph La Souveraineté Numérique. The book largely builds on his earlier ideas, including his adaptation of Schmitt's theory and tonality. He wrote:

The Snowden affair has shown us that we are not the sovereigns over our digital networks. The state is incapable of guaranteeing privacy, secrecy of communication and the fairness of economic competition on this territory.98

In his monograph, Bellanger also introduces a new aspect of his concept of digital sovereignty, which can easily be derived from Schmitt's anti-Anglo-Saxon theory and its 'post-colonial imagination'.99 It includes a remarkable, and highly problematic, degree of historical relativism regarding the crimes of French colonialism: US digital hegemony, Bellanger wrote, is, in fact, a form of colonialism and, in that respect, does not differ from historical French colonialism.100 With this historical comparison, Bellanger also echoed the French politician Catherine Morin-Desailly, who had warned of Europe becoming a 'digital colony' as early as March 2013, that is, several months before the Snowden revelations.101 Whether intended or not, this is clearly reminiscent of the Chinese notion of digital sovereignty, which is strongly influenced by post-colonial thought.102 However, this ideo-historical trope drastically changes its political undertone when it is employed by representatives of former colonial powers. Some months after the publication of Bellanger's monograph, on 14 May 2014, digital entrepreneur and then French Minister of the Economy Arnaud Montebourg, who was advised by Bellanger, warned that Europe and France could become 'digital colonies of the US' and promoted 'the digital sovereignty of France, from an economic and democratic point of view'.103 According to Bellanger and Montebourg, the French telecommunication company Orange, a largely privatised former state-owned enterprise, should be responsible for the national digital infrastructure, a demand that is more a display of digital Gaullist rhetoric than a realistic proposal.

98 Pierre Bellanger, La Souveraineté Numérique (Paris: Éditions Stock, 2014), e-book, 32. Translation by the author.
99 Andreas Kalyvas, 'Carl Schmitt's Postcolonial Imagination,' Constellations 25, no. 1 (March 2018): 35–53, https://doi.org/10.1111/1467-8675.12353.
100 'L'Amérique n'est pas différente de la France coloniale.' Pierre Bellanger, La Souveraineté Numérique (Paris: Éditions Stock, 2014), e-book, 613.
101 Catherine Morin-Desailly, 'Rapport d'information, fait au nom de la commission des affaires européennes,' no. 443 (2012–2013), 20 March 2013, www.senat.fr/notice-rapport/2012/r12-443-notice.html.
102 See Wanshu Cong and Johannes Thumfart, The Chinese Origins of Digital Modes of Sovereignty (1996 to 2001): Between Authoritarian Censorship and Resistance against Digital Colonialism (to be published).
103 The editors, 'Google: Arnaud Montebourg ne veut pas que la France devienne une colonie numérique des USA,' ZDNet, 14 May 2014, www.zdnet.fr/actualites/google-arnaud-montebourg-ne-veut-pas-que-la-france-devienne-une-colonie-numerique-des-usa-39801099.htm. Translated by the author.

At that time, the German debate about digital sovereignty was far less developed than the French one. However, given the progressive nature of digitalisation issues in general, it is remarkable that this debate was started in conservative and even extreme right-wing circles. It seems that the first mention of 'digitale Souveränität' in terms of policies stems from an article about the extreme right-wing party AfD,104 which presented a corresponding program in September 2013. However, the exact term is not mentioned in the program itself. Instead, its author, the AfD's then digital expert Michaela Merz, wrote about the 'establishment of German sovereignty in the digital environment'.105 Of course, this was typical right-wing revanchism and sovereigntism triggered by the Snowden revelations. However, considering the party's close relations with Moscow,106 which, as we will see later on, had already publicly established the notion of its 'sovereign internet' by then, this could also be owed to Russian influence. In November 2013, the German Minister of the Interior, Hans-Peter Friedrich of the conservative CSU party, in turn demanded not a national, but explicitly a European digital infrastructure as a reaction to the Snowden revelations and stressed the need for digitale Souveränität:

We can only achieve the digital sovereignty of Europe if we gain and strengthen sovereignty over technological network infrastructure.107

The right-wing AfD accused him of having been inspired by its program,108 which might be the reason for the explicitly European, not national, interpretation of digital sovereignty within the CSU, a party often preoccupied with taking up issues of the far right whilst at the same time distancing itself from it. Some months later, in December 2013, Alexander Dobrindt, then German Minister of Transport and Digital Infrastructure and likewise a member of the CSU, placed a populist demand in the country's leading tabloid as a reaction to the NSA scandal, again emphasising the European dimension of digital sovereignty: 'As Germans and Europeans, we have to regain our digital sovereignty.'109

104 Detlef Borchers, 'AfD stellt Plan für ein digitales Deutschland vor,' Heise Online, 13 September 2013, www.heise.de/newsticker/meldung/AfD-stellt-Plan-fuer-ein-digitales-Deutschland-vor-1956294.html?from-classic=1.
105 Michaela Merz (then AfD's digital expert), 'Fünf Punkte für ein digitales Deutschland,' 13 September 2013: 'Herstellung der Souveränität Deutschlands im digitalen Umfeld.'
106 BR (staff), 'Die AfD und ihre Russland-Nähe,' Bayerischer Rundfunk, 10 December 2020, www.br.de/nachrichten/deutschland-welt/die-afd-und-ihre-russland-naehe,SIhXrBe.
107 Stefan Krempl, 'NSA-Affäre: Innenminister plädiert für europäische Infrastruktur,' Heise Online, 18 November 2013: 'Wir können die digitale Souveränität Europas nur erhalten, wenn wir auch die Souveränität über die technische Netzinfrastruktur erlangen und verstärken.' www.heise.de/newsticker/meldung/NSA-Affaere-Innenminister-plaediert-fuer-europaeische-Infrastruktur-2049400.html.
108 Presseportal (staff), 'Datensicherheit muss mehr als ein Lippenbekenntnis sein,' presseportal.de, 4 November 2013, www.presseportal.de/pm/110332/2590028.
109 Der Spiegel (staff), 'Dobrindt verspricht Deutschen schnellstes Netz der Welt,' Der Spiegel, 22 December 2013, www.spiegel.de/netzwelt/netzpolitik/internetminister-dobrindt-verspricht-schnellstes-netz-der-welt-a-940506.html.

On 16 May 2014, only two days after the battle cry of his French counterpart Montebourg, Sigmar Gabriel, then German Minister of the Economy and a member of the Social Democratic Party, published an ardent call to arms in a leading newspaper. He was clearly invoking the idea of digital sovereignty, though not yet using the expression, and re-interpreting it in a more democratic, bottom-up manner by referring to self-determination:

It is the future of democracy in the digital age, and nothing less, that is at stake here, and with it, the freedom, emancipation, participation and self-determination of 500 million people in Europe.110

In March 2015, Gabriel met with Günther Oettinger, then the EU's digital commissioner, at the German digital industry's fair CeBIT, in order to discuss the strategic relevance of issues of digital sovereignty.111 The meeting was publicly announced, but the press was excluded. CeBIT's partner country that year was China, and the fair was inaugurated by Alibaba's founder Jack Ma. This encounter might have provided an additional pathway for the concept of digital sovereignty to travel to Germany, particularly because Chinese norm entrepreneurship was so outspoken in 2015. In April 2015, the concept of 'digitale Souveränität' found its way into the agenda112 of the Ministry of the Economy led by Gabriel and, in May of that year, into a position paper by Bitkom,113 the German digital industry's association. During the same month, a particularly hawkish notion of digital sovereignty was promoted by digital commissioner Oettinger. According to several news outlets, he advertised the project of the European Single Digital Market with the claim that it would 'reinforce our digital authority and will give us our digital sovereignty' and even called for Europe's 'digital independence'.114

110 Sigmar Gabriel, 'Political consequences of the Google debate,' Frankfurter Allgemeine Zeitung, 16 May 2014, www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/sigmar-gabriel-consequences-of-the-google-debate-12948701.html.
111 Federal Ministry for Economic Affairs and Energy (staff), 'Bundesminister Gabriel besucht die CeBIT 2015,' press release, 13 March 2015, www.bmwi.de/Redaktion/DE/Pressemitteilungen/2015/20150313-bundesminister-gabriel-besucht-cebit-2015.html.
112 Federal Ministry for Economic Affairs and Energy, 'Industrie 4.0 und Digitale Wirtschaft. Impulse für Wachstum, Beschäftigung und Innovation,' April 2015, 3, www.bmwi.de/Redaktion/DE/Publikationen/Industrie/industrie-4-0-und-digitale-wirtschaft.pdf.
113 Bitkom, 'Digitale Souveränität: Anforderungen an Technologien- und Kompetenzfelder mit Schlüsselfunktion,' Positionspapier, 2019, 7, www.bitkom.org/Bitkom/Publikationen/Digitale-Souveraenitaet-Anforderungen-an-Technologien-und-Kompetenzfelder-mit-Schluesselfunktion. The original paper from 2015 was removed during this research. It was located at: www.bitkom.org/sites/default/files/pdf/Presse/Anhaenge-an-PIs/2015/05-Mai/BITKOM-Position-Digitale-Souveraenitaet1.pdf.
114 Ryan Heath, 'Brussels Playbook Briefing,' Politico, 5 July 2015, www.politico.eu/newsletter/brussels-playbook/uk-election-greek-mystery-messages-digital-divides/; Ian Traynor, 'EU unveils plans to set up digital single market for online firms,' The Guardian, 6 May 2015, www.theguardian.com/technology/2015/may/06/eu-unveils-plans-digital-single-market-online-firms.

However, not only the rather conservative circles close to industry propagated digital sovereignty, but also progressive digital rights activists. This bipartisan nature makes the call for digital sovereignty particularly attractive. Jan Philipp Albrecht, then MEP of the German Green Party, digital rights activist and one of the driving forces behind the GDPR, joined the battle cry for digital sovereignty as early as 2014, when he connected globalisation in general to 'attempts to circumvent the decisions made by sovereign nations'. Regarding the US's digital hegemony, Albrecht found even more drastic words: 'The acceptance of the advancing loss of sovereignty appears almost totalitarian, if we consider the consequences of the sacrifice of self-determination on society as a whole (…).'115 In another essay, from 2016, he promised 'regaining control and sovereignty in the digital age' and underlined the importance of the GDPR as a major step 'towards effective rules for the digital age'.116 The populist undertones of these demands are particularly uncanny considering that 'take back control' served as the sovereigntist slogan of the Brexiteers that year. In October 2015, the ambitions of digital rights activists led to a partial transatlantic digital disentanglement. Due to a complaint by the Austrian lawyer and privacy activist Max Schrems, the European Court of Justice declared the Commission's US Safe Harbour Decision invalid, citing the Snowden revelations.117 The successive regulatory battle of the EU against US tech led to the adoption of the GDPR in 2016 and, much later, to Schrems II. It established the EU as a 'regulatory superpower' regarding the internet.118 With the EU's and China's (and, as will be seen later, also Russia's) institutionalisation of digital sovereignty, the time around the year 2015 shows the characteristics of a 'tipping point' at which a critical mass of state actors adopted the norm.119 In this respect, it is important to note that Europe's striking turn towards digital sovereignty around the year 2015 has a number of reasons that go well beyond the Snowden revelations. First of all, as mentioned earlier, it is no coincidence that this turn was initiated by politicians working in the field of economic policy. Europe has a considerable trade deficit with the US in the digital sector120 and is, by far, the US digital industry's most important market.121 At the same time, only one European firm, Deutsche Telekom, ranks among the top 20 of the biggest digital companies.122 In terms of pure norm propagation, it is clear from today's perspective that the EU's ambition to become a 'regulatory superpower' in the digital sphere can be considered a success.123 However, it was not successful in turning the tide regarding the digital economy. On the contrary, it is mostly European small- and medium-sized enterprises who suffer from the continent's complex privacy rules,124 whilst US tech giants can afford compliance. Additionally, the GDPR's enforcement confronts the EU with considerable problems regarding its lack of executive power.125 Therefore, EU representatives are increasingly realising that being a regulatory power might be too little and that becoming a tech power could be necessary,126 regardless of the realistic chances for such an enterprise. This leaves the question of what happened in Russia in the post-Snowden era. First of all, it must be noted that the Russian approach to digital sovereignty differs greatly from the Chinese. Whilst China promotes the extension of its influence in multilateral institutions such as the ITU and ICANN, and although Russia is part of various attempts to develop collaboration in the field of information security within the SCO, the Russian notion of 'Суверенный интернет' (sovereign internet) is decidedly unilateral. The notion finds its origin in a column published some weeks after the Snowden revelations of 2013 by Sergei Zheleznyak, a leading politician in implementing Putin's authoritarian turn.127 Under the umbrella of digital sovereignty, Zheleznyak condemned the US and demanded a 'national server network' and 'own information products'. This resonated well with the generally tighter political control over the internet installed since the protests of 2011 (see the previous section).

115 Jan Philipp Albrecht, 'Hands off our Data!', published by Jan Philipp Albrecht, 2015, 13, 91, www.janalbrecht.eu/wp-content/uploads/2018/02/JP_Albrecht_hands-off_final_WEB.pdf. The German version was published in 2014.
116 Jan Philipp Albrecht, 'Regaining Control and Sovereignty in the Digital Age,' in Enforcing Privacy, eds. David Wright and Paul De Hert (Cham: Springer, 2016): 473–488, 474.
117 Court of Justice of the European Union, 'The Court of Justice declares that the Commission's US Safe Harbour Decision is invalid,' Press Release No 117/15, Luxembourg, 6 October 2015, https://curia.europa.eu/jcms/upload/docs/application/pdf/2015-10/cp150117en.pdf.
118 Mark Leonard, 'Connectivity Wars: Weaponizing Interdependence,' European Council on Foreign Relations, January 2017, 23, www.ecfr.eu/page/-/Connectivity_Wars.pdf.
119 Martha Finnemore and Kathryn Sikkink, 'International Norm Dynamics and Political Change,' International Organization 52, no. 4 (Autumn 1998): 887–917, 895.
120 In 2017, for example, the US exported $189.9 billion in information and communications technology (ICT) and potentially ICT-enabled services to the EU and imported $118.1 billion, which amounts to a surplus of roughly $72 billion. Frances G. Burwell and Kenneth Propp, 'The European Union and the Search for Digital Sovereignty: Building "Fortress Europe" or Preparing for a New World?', Atlantic Council, June 2020, 2, www.atlanticcouncil.org/wp-content/uploads/2020/06/The-European-Union-and-the-Search-for-Digital-Sovereignty-Building-Fortress-Europe-or-Preparing-for-a-New-World.pdf.
121 In 2017, US corporations, through their local affiliates in Europe, supplied $175 billion in ICT services, while supplying only $3 billion in China and $21 billion in Latin America. ibid.
122 'Top 100 Digital Companies,' Forbes, 2019 ranking, www.forbes.com/top-digital-companies/list/#tab:rank.
123 A number of countries have followed the EU's example. See Dan Simmons, '10 Countries with GDPR-like Data Privacy Laws,' Comforte Blog, 17 January 2019, https://insights.comforte.com/countries-with-gdpr-like-data-privacy-laws.
124 David Barnard-Wills et al., 'Report on the SME experience of the GDPR,' STAR, 31, www.trilateralresearch.com/wp-content/uploads/2020/01/STAR-II-D2.2-SMEs-experience-with-the-GDPR-v1.0-.pdf.
125 Natasha Lomas, 'GDPR's two-year review flags lack of "vigorous" enforcement,' TechCrunch, 24 June 2020, https://techcrunch.com/2020/06/24/gdprs-two-year-review-flags-lack-of-vigorous-enforcement/.
126 Carla Hobbs (ed.), 'Europe's Digital Sovereignty: From Rulemaker to Superpower in the Age of US-China Rivalry,' European Council on Foreign Relations, July 2020, 1, www.ecfr.eu/page/-/europe_digital_sovereignty_rulemaker_superpower_age_us_china_rivalry.pdf.
127 Miriam Elder, 'Russia needs to reclaim its "digital sovereignty" from US, says MP,' The Guardian, 19 June 2013, www.theguardian.com/world/2013/jun/19/russia-digital-soveriegnty-nsa-surveillance.

In 2014, the government extended its list of prohibited websites to political content.128 Furthermore, since 2015, the Russian law on data localisation has been enforced, which requires personal data from individuals located in Russia to be stored within Russian territory.129 In 2019, Russia adopted the law on the 'sovereign internet', which grants the country's government far-reaching control over the internet, including deep-packet inspection of transnational data traffic. In addition, the law provides for the creation of a national system of domain names. In particular, it is meant to enable the Russian Government to disconnect RUnet from the global internet, restoring total territorial control. According to the Russian Government, the disconnect has been successfully tested, without ordinary users noticing a difference.130 Similar to the concept of souveraineté numérique, the Russian concept of Суверенный интернет is closely indebted to the ideas of Carl Schmitt. Schmitt is highly influential among Russia's Eurasia theorists surrounding Alexander Dugin, in particular through his aforementioned notion of a geopolitical stand-off between illiberal terrestrial and liberal maritime powers, whereby Russia identifies with the former.131 Immediately after the reportedly successful attempt to disconnect RUnet at the end of 2019, Leonid Savin, a Eurasianist political activist, pointed out that sovereignty in the territorial, Schmittian sense requires a re-alignment of cyberspace with territory, which must ultimately include the possibility to disconnect from global communication.132 Obviously, this Russian territorial interpretation of digital sovereignty has a considerably hypocritical aspect. In the broad range of extraterritorial cyber campaigns under the label of 'hybrid warfare'133 which Russia has conducted since the Russo-Georgian war of 2008, the country has repeatedly violated the digital sovereignty of other nations understood in this way. However, from a standpoint of geopolitical realism, Russia's campaign can only be described as extremely successful.

128 Alena Epifanova, 'Deciphering Russia's "Sovereign Internet Law",' Deutsche Gesellschaft für Auswärtige Politik, 16 January 2020, https://dgap.org/de/node/33332.
129 Natalia Gulyaeva et al., 'Russia Update: Regulator Publishes Data Localization Clarifications,' Hogan Lovells Chronicle of Data Protection, 11 August 2015, www.hldataprotection.com/2015/08/articles/international-eu-privacy/russia-update-regulator-publishes-data-localization-clarifications/; Alexander Savelyev, 'Russia's new personal data localization regulations: A step forward or a self-imposed sanction?', Computer Law & Security Review 32 (2016): 128–145.
130 Jane Wakefield, 'Russia "successfully tests" its unplugged internet,' BBC, 24 December 2019, www.bbc.com/news/technology-50902496.
131 Timothy Snyder, The Road to Unfreedom (New York: Penguin Random House, 2018), 89.
132 Leonid Savin, 'Номос Киберпространства и суверенный Интернет' ('The Nomos of Cyberspace and the Sovereign Internet'), Pluriversum, 30 December 2019, https://pluriversum.org/opinion/cyberspace/nomos-kiberprostranstva-isuverennyj-internet/; Leonid Savin, 'Суверенный Интернет – дело принципиальной важности для национальной безопасности' ('The Sovereign Internet is a Matter of Fundamental Importance for National Security'), News Front, 30 December 2019, https://news-front.info/2019/12/30/suverennyj-internet-delo-princzipialnoj-vazhnosti-dlya-naczionalnoj-bezopasnosti/.
133 The label 'war' might be dangerously misleading inasmuch as it unnecessarily exaggerates already existing tensions. Bettina Renz and Hanna Smith, 'Russia and Hybrid Warfare: Going beyond the label,' Aleksanteri Papers 1/2016, www.stratcomcoe.org/bettina-renz-and-hanna-smith-russia-and-hybrid-warfare-going-beyond-label; Timothy Snyder, The Road to Unfreedom (New York: Penguin Random House, 2018), 193/194.

As Krastev and Holmes write: 'The Kremlin aims to hold up a mirror in which America can contemplate its own proclivity to violate the international rules it pretends to respect.'134 As the next section shows, such contemplation also took place regarding digital sovereignty.

V. Norm Universalisation in the US and the EU: 2016–2020

The stage of norm universalisation, which is not included in Finnemore and Sikkink's stage model of norm development, can be defined as the moment when a norm reverses its implications politically and thereby becomes universal. In the case of digital sovereignty, those who were or felt disadvantaged (first China, later the EU and Russia) developed a norm that aimed at counteracting the US's hegemonic digital power. In this stage, digital sovereignty had a partisan nature that is not fully compatible with the universal character of norms. Norm universalisation means that the roles reverse: it is suddenly the powerful (mainly the US) who start to invoke the norm to mitigate the actions of others (at this point in time primarily Russia, but also China). The US's shift from Internet Freedom to digital sovereignty can be illustrated very well by a conversation between Obama and Putin in the aftermath of the Russian interference in the 2016 US general election. During this conversation, which took place in December of that year, of all people, Obama, under whose presidency US surveillance was extended to an unprecedented degree, reportedly said that 'International law, including the law for armed conflict, applies to actions in cyberspace.'135 Only some months after that, the Second Tallinn Manual was published by NATO. It underlined that 'the right to employ force in self-defence extends beyond kinetic armed attacks to those that are perpetrated solely through cyber operations'.136 Whilst this option is primarily reserved for a cyber operation 'that seriously injures or kills a number of persons or that causes significant damage to, or destruction of, property', the Manual is explicitly undecided regarding attacks that do not cause such physical violence (as, for example, interference with elections): 'The case of cyber operations that do not result in injury, death, damage, or destruction, but that otherwise have extensive negative effects, remains unsettled.'137

134 Ivan Krastev and Stephen Holmes, The Light That Failed: A Reckoning (London: Allen Lane, an imprint of Penguin Books, 2019), e-book.
135 William M. Arkin et al., 'What Obama Said to Putin on the Red Phone About the Election Hack,' NBC News, 19 December 2016, https://perma.cc/5CKG-G5XC.
136 M. N. Schmitt and NATO Cooperative Cyber Defence Centre of Excellence (eds.), Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, 2nd edn (Cambridge: CUP, 2017), 340f.
137 ibid. See also Johannes Thumfart, 'Public and private just wars: Distributed cyber deterrence based on Vitoria and Grotius,' Internet Policy Review 9, no. 3 (2020), special issue on 'Geopolitics, Jurisdiction and Surveillance', https://policyreview.info/articles/analysis/public-and-private-just-wars-distributed-cyber-deterrence-based-vitoria-and.

This is a far stretch from the UN Charter's doctrine, according to which only physical violence justifies a physically violent reprisal, and it clearly has the intention of promoting deterrence against softer forms of cyberattack aimed, for example, at democratic deliberative processes. In US civil society, too, the Russian interference in the 2016 general election created a shift in sentiment from Internet Freedom towards stronger political control of social networks. This became clear when Mark Zuckerberg was invited to a hearing before the Senate in 2018, where he was critically interrogated by Republican and Democratic senators alike. Most notably, Democratic senator Al Franken interrogated Facebook's representatives about political ads paid for in foreign currency. This political, albeit not regulatory, pressure led Facebook to adjust its policy and to prohibit ads by foreigners regarding a list of social issues which covers almost every controversial political subject in the US: 'civil and social rights, crime, economy, education, environmental politics, guns, health, immigration, political values, governance security, foreign policy'.138 Although this step has been widely welcomed, it also bears traces of the nationalism that all kinds of digital sovereignty share. The lawyer Jennifer Daskal criticised the company's policy as 'segregating free speech', pointing out that foreign organisations, for example those concerned with environmental policy or refugees' rights, might have a legitimate interest in placing ads in the US.139 In October 2019, Twitter's chief executive announced in a tweet that his platform would, in turn, ban all political advertising globally. In the thread that followed, he wrote that 'political message reach should be earned, not bought'.140 An important characteristic of this tendency towards regulation during that period is that it is, in line with the US liberal tradition regarding free speech, not enforced by the state, but enacted by private platforms.141 However, with the historical perspective in mind, it is obvious that the US developed a mode of 'information sovereignty' during these years that can be compared to the Chinese conception elaborated in section III. Next to this private sector-oriented way of developing digital sovereignty in the US in terms of content moderation by digital platforms, another development took hold that can likewise be understood in terms of universalisation. Profiting from fears connected to a perceived or real loss of US economic influence and corresponding to China's rapid rise in the digital sector, Trump won the election with a promise of explicitly anti-Chinese economic nationalism.

138 Jennifer Daskal, 'Facebook's ban on foreign political ads means the site is segregating speech,' The Washington Post, 16 December 2019, www.washingtonpost.com/outlook/2019/12/16/facebooks-ban-foreign-political-ads-means-site-is-segregating-speech/.
139 ibid.
140 Quoted according to Joan Donovan et al., 'What Does Twitter's Ban on Political Ads Mean for Platform Governance?' Centre for International Governance Innovation, 5 November 2019, www.cigionline.org/articles/what-does-twitters-ban-political-ads-mean-platform-governance.
141 Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (New Haven: Yale UP, 2018).

On 15 May 2019, he signed an executive order that banned communication devices which could pose a potential risk to US national security. In this context, Trump placed Huawei on the 'entity list', a trade blacklist that bars anyone on it from buying parts and components from US companies without the government's approval. Whilst Trump's actions can be understood as a strategy to gain leverage within the US-Chinese trade war,142 there can be no doubt that they signify a remarkable turn towards digital sovereignty in a statist sense. In January 2020, then US Secretary of State Mike Pompeo explicitly warned the UK that its sovereignty was in jeopardy if it allowed Huawei to develop its 5G infrastructure.143 Similar, yet less successful, threats were made against Germany.144 Evidently, the US had turned from a promoter of Internet Freedom into a promoter of digital sovereignty. However, the US was not the only place hit hard by Russian political campaigns. Europe experienced Russian involvement in the Brexit referendum in 2016 and Russian disinformation campaigns during the French presidential election of 2017. Furthermore, the Russian annexation of Crimea forced the EU to undertake a 'return to geopolitics'.145 Whilst digital sovereignty had been an issue in the EU from 2013 onwards, it was primarily understood in an economic sense, since, unlike Russia, the US was seen as an ally who misbehaved and as an economic competitor, but not as an enemy threatening national security. This changed with the open confrontation with Russia, which led to an understanding of digital sovereignty that now also included security aspects. On the EU level, this brought about the European Commission's voluntary, self-regulatory Code of Practice on Disinformation of 2018, which was signed by Facebook, Google, Twitter and Mozilla, as well as by advertisers. Microsoft and TikTok later joined the agreement, which represents a compromise between a US-like private-sector solution and the statist solution demanded by many political actors in the EU. The Member States implemented stricter regulations regarding social media. Germany, for example, issued a 'Network Enforcement Act' in 2017 to combat agitation, hate speech and disinformation online. France issued the law 'Regarding the Fight Against Information Manipulation' in 2018. However, even post-2016, the EU's agenda of digital sovereignty was still largely characterised by economic concerns.

142 Lyu Mengting and Chia-yi Lee, 'US Blacklist on Huawei: Leverage for the US-China Trade Talks?' RSIS-WTO Parliamentary Workshop, No. 111, 7 June 2019, www.rsis.edu.sg/wp-content/uploads/2019/06/CO19111.pdf.
143 Clea Skopeliti, 'UK sovereignty in jeopardy if Huawei used for 5G, US warns,' The Guardian, 27 January 2020, www.theguardian.com/technology/2020/jan/27/uk-sovereignty-in-jeopardy-if-huawei-used-for-5g-us-warns.
144 Madison Cartwright, 'Internationalising state power through the internet: Google, Huawei and geopolitical struggle,' Internet Policy Review 9, no. 3 (2020), https://policyreview.info/articles/analysis/internationalising-state-power-through-internet-google-huawei-and-geopolitical.
145 Frances G. Burwell and Kenneth Propp, 'The European Union and the Search for Digital Sovereignty: Building "Fortress Europe" or Preparing for a New World?', Atlantic Council, June 2020, 3.

For example, even when Zuckerberg testified in front of the EU Parliament in 2018, economic issues, such as the question of antitrust, appeared more pressing to EU parliamentarians than Facebook's political influence. Accordingly, the EU continued its quest for infrastructural digital sovereignty during these years. On 6 June 2017, France and Germany celebrated a bizarre 'Cloud Independence Day'.146 In 2019, the European cloud project Gaia-X was unveiled, promoted by industry representatives as a means to 'strengthen Europe's digital sovereignty and data sovereignty in Europe'.147 In the same year, a PWC report commissioned by conservative parts of the German Government concluded that the administration's dependency on Microsoft products and Microsoft's influence on pricing 'is threatening digital sovereignty'.148 Among the heads of state in the EU, it was Macron who was most outspoken in promoting digital sovereignty, since he had faced Russian digital propaganda during his 2017 campaign. He held two landmark speeches on 'European sovereignty' in 2017 and 2018, in which he used the term 'sovereignty' ad nauseam (2017: more than 30 times; 2018: more than 20 times).149 Even there, the economic sense of digital sovereignty remained prevalent. Macron conceived of digital sovereignty as part of a larger system of internal and external sovereignty, economic and commercial sovereignty, climate and energy sovereignty, and health and food sovereignty. Particularly revealing is the fact that Macron placed the terms digital sovereignty and food sovereignty side by side. The notion of food sovereignty is not a legal term, but closely related to post-colonial and anti-globalist activism in the agricultural sector of Latin America and Mediterranean Europe. Its leading promoter, La Via Campesina, defines the term in the following way: 'Food sovereignty is the peoples', Countries' or State Unions' right to define their agricultural and food policy, without any dumping vis-à-vis third countries.'150 In his domestic speech from 2017, Macron was explicit regarding his support for this concept.

146 Federal Office for Information Security, 'Cloud Independence Day: BSI und ANSSI stellen European Secure Cloud Label vor,' press release, 6 June 2017, www.bsi.bund.de/DE/Presse/Kurzmeldungen/Meldungen/news_Cloud_Independence_Day_06062017.html.
147 Bitkom, 'A sovereign cloud and data infrastructure for Germany and Europe,' position paper, 15 November 2019, www.bitkom.org/EN/List-and-detailpages/Publications/A-sovereign-cloud-and-data-infrastructure-for-Germany-and-Europe.
148 PWC (Strategy&), 'Strategische Marktanalyse zur Reduzierung von Abhängigkeiten von einzelnen Software-Anbietern,' final report, August 2019, 19.
149 Emmanuel Macron, 'Discours du Président de la République au Parlement européen à Strasbourg,' 17 April 2018, www.elysee.fr/emmanuel-macron/2018/04/17/discours-du-president-de-la-republique-au-parlement-europeen-a-strasbourg; Emmanuel Macron, 'Initiative pour l'Europe – Discours d'Emmanuel Macron pour une Europe souveraine, unie, démocratique,' 26 September 2017, www.elysee.fr/emmanuel-macron/2017/09/26/initiative-pour-l-europe-discours-d-emmanuel-macron-pour-une-europe-souveraine-unie-democratique.
150 La Via Campesina, 'What is food sovereignty?', 15 January 2003, https://viacampesina.org/en/food-sovereignty/.

He criticised the EU Common Agricultural Policy (CAP) as ‘taboo’ and supported France’s agricultural sector’s criticism of the CAP. He also explicitly promoted more flexibility for national governments and regions in organising agricultural policy in a sustainable way.151 Placing these demands in parallel with digital sovereignty clearly delineates an infrastructural programme close to economic protectionism, supporting local industry against global competition and its rules.

VI.  Norm Internalisation from 2020 on: Digital Sovereignty as ‘the New Normal’ Post-COVID

Although the COVID crisis started in China and revealed the catastrophic consequences of the lack of transparency152 that characterises the Chinese notion of ‘information sovereignty’, it struck an even more devastating blow against the US’s liberal laissez-faire tradition. Not only did it expose the US health system’s relatively low level of resilience, it also globally promoted policy responses first undertaken by the Chinese Government and closely related to its understanding of domestic digital sovereignty: eg digital contact tracing,153 mass surveillance154 and measures against the spread of, neutrally speaking, unwanted information.155 This norm internalisation can be expected to be of a lasting character, since, in spite of the authoritarian constitution of its most influential norm entrepreneur, it is not exclusively based on a top-down process. In liberal and illiberal societies alike, the crisis brought about a rapid migration to digital technologies, which is not likely to be reversed.156

151 Emmanuel Macron, ‘Initiative pour l’Europe – Discours d’Emmanuel Macron pour une Europe souveraine, unie, démocratique,’ 26 September 2017, www.elysee.fr/emmanuel-macron/2017/09/26/initiative-pour-l-europe-discours-d-emmanuel-macron-pour-une-europe-souveraine-unie-democratique.
152 PBS, ‘China’s Covid Secrets,’ Frontline 2021, 9, 2 February 2021, www.pbs.org/wgbh/frontline/film/chinas-covid-secrets/.
153 Wanshu Cong, ‘From Pandemic Control to Data-Driven Governance: The Case of China’s Health Code,’ Frontiers in Political Science, 14 April 2021. www.frontiersin.org/articles/10.3389/fpos.2021.627959/full?utm_source=S-TWT&utm_medium=SNET&utm_campaign=ECO_FPOS_XXXXXXXX_auto-dlvrit.
154 Arjun Kharpal, ‘Use of surveillance to fight coronavirus raises concerns about government power after pandemic ends,’ CNBC, 26 March 2020, www.cnbc.com/2020/03/27/coronavirus-surveillance-used-by-governments-to-fight-pandemic-privacy-concerns.html; Raluca Csernatoni, ‘Coronavirus Tracking Apps: Normalizing Surveillance During States of Emergency,’ Peace Research Institute Oslo, 5 October 2020, https://blogs.prio.org/2020/10/coronavirus-tracking-apps-normalizing-surveillance-during-states-of-emergency/.
155 ‘COVID-19 Media Freedom Monitoring,’ International Press Institute, https://ipi.media/covid19-media-freedom-monitoring/.
156 Aamer Baig et al., ‘The COVID-19 recovery will be digital: A plan for the first 90 days,’ McKinsey Digital, 14 May 2020, www.mckinsey.com/business-functions/mckinsey-digital/our-insights/the-covid-19-recovery-will-be-digital-a-plan-for-the-first-90-days.

With an increasing amount of their activities taking place online, citizens and companies expect states to guarantee their privacy, security and convenience. Under the regime of digital sovereignty, states derive a considerable degree of their legitimacy from meeting these higher output expectations with regard to digital policies. Just as in the era of pre-digital sovereignty,157 these new modes of bottom-up legitimacy are not entirely rational. The spread of dashboard platforms such as the COVID-tracker of Johns Hopkins University158 led to a new kind of political discourse, where governments are evaluated by ordinary citizens on the basis of data provided by digital global comparative governance analyses in real time. That this data is, in fact, not comparable at all, due to national differences in health data collection, health systems and demography,159 has very little impact on its partly imaginary function of legitimisation via digital accountability. Analogous to the ‘digital nationalism’ that developed earlier in China’s civil society,160 the COVID crisis sparked a kind of dashboard patriotism based on countries’ ‘good’ performance in the crisis.161

Since Russia162 and China have already adopted their local variations of digital sovereignty and unsurprisingly expanded them during the COVID crisis (and since chapter space is limited), they are of only peripheral interest for this section. Instead, it focuses on the liberal world, ie the US and the EU, where the COVID crisis completed the norm development cycle of digital sovereignty. During the COVID crisis, the anti-Chinese direction of the US’s digital sovereignty became increasingly prominent. In the summer of 2020, US President Trump, eager to distract from his failure in handling the crisis, issued executive orders seeking to ban WeChat and TikTok on grounds of national security.163

157 Benedict Anderson, Imagined Communities (London: Verso, 2016).
158 Johns Hopkins University, ‘Covid-19 Dashboard,’ https://coronavirus.jhu.edu/map.html.
159 Chris Morris and Anthony Reuben, ‘Coronavirus: Why are international comparisons difficult?’ BBC, 17 June 2020, www.bbc.com/news/52311014.
160 Florian Schneider, China’s digital nationalism (New York, NY: Oxford University Press, 2018).
161 Sijeong Lim and Aseem Prakash, ‘Pandemics and Citizen Perceptions about Their Country: Did COVID‐19 Increase National Pride in South Korea?,’ Nations and Nationalism, 6 June 2021, nana.12749, https://doi.org/10.1111/nana.12749; Matthew Karnitschnig, ‘Germans being German about coronavirus,’ Politico, 29 April 2020, www.politico.eu/article/germans-being-german-about-coronavirus/.
162 Russia increased its digital sovereignty during the crisis. It keeps spreading misinformation to the West. At the same time, harsh domestic laws against disinformation are used to silence dissidents and control the narrative of the crisis.
Associated Press, ‘Russia used English-language sites to spread Covid-19 disinformation, US officials say,’ The Guardian, 29 July 2020, www.theguardian.com/us-news/2020/jul/28/russia-covid-19-disinformation-websites-us-intelligence; The editors, ‘New ‘fake news’ law stifles independent reporting in Russia on COVID-19,’ International Press Institute, 8 May 2020, https://ipi.media/new-fake-news-law-stifles-independent-reporting-in-russia-on-covid-19/.
163 Kari Paul, ‘Trump’s bid to ban TikTok and WeChat: where are we now?,’ The Guardian, 29 September 2020, www.theguardian.com/technology/2020/sep/29/trump-tiktok-wechat-china-us-explainer.

This came only a day after the US Department of State announced an expansion of its ‘Clean Network’ initiative, which aims at banning Chinese communication technology in the field of service carriers, software applications, cloud solutions and undersea cables and is promoted as serving the protection of intellectual property, explicitly in relation to COVID research.164 In order to achieve this, the ‘Clean Network’ initiative seeks to construct a broad international anti-Chinese alliance, which has been criticised for furthering a ‘great decoupling and a new cold war’.165 However, it was always doubtful whether these actions of the Trump administration could be effective, because the US and Chinese digital economies are already considerably entangled. Chinese firms such as Alibaba or Tencent, which co-owns several big US gaming platforms, have directly or indirectly already gathered data of US citizens and will continue to do so.166 Additionally, US-based companies such as Zoom maintain large operations in China, and it is unclear how far their data can be accessed by the Chinese Government.167 In June 2021, the new administration revoked Trump’s ban of WeChat and TikTok, but ordered continuing evaluations regarding ‘undue risk of catastrophic effects on the security or resiliency of the critical infrastructure or digital economy of the United States’.168

During the pandemic, content moderation became much more visible, which can also be regarded as a symptom of an internalisation of digital sovereignty. However, of course, it was not the US Government that censored or corrected content labelled as disinformation online; rather, private digital platforms such as Facebook and Twitter deepened their efforts to censor, comment on or correct disinformation. Most prominently, they did so even when that content originated from the President himself.169 In June 2021, Facebook announced that it would continue to ban ex-President Trump for two years, then reassess the decision, and that it would apply the same criteria to all politicians.170 This will, of course, fuel speculation regarding the information sovereignty exerted by private digital platforms. In other respects, too, the US’s liberal Constitution puts constraints on the exercise of internal digital sovereignty by Washington.

164 Michael Pompeo, ‘Announcing the Expansion of the Clean Network to Safeguard America’s Assets,’ Press Release, 5 August 2020, www.state.gov/announcing-the-expansion-of-the-clean-network-to-safeguard-americas-assets/.
165 Sven Agten, ‘The Great Decoupling and a New Cold War,’ in Adventures in the Chinese Economy: 16 Years from the Inside, ed. Sven Agten (Singapore: Springer Singapore, 2021), 219–37, https://doi.org/10.1007/978-981-16-1167-4_12.
166 Aynne Kokas, ‘China already has your data. Trump’s TikTok and WeChat bans can’t stop that,’ Washington Post, 11 August 2020, www.washingtonpost.com/outlook/2020/08/11/tiktok-wechat-bans-ineffective/.
167 ibid.
168 ‘Executive Order on Protecting Americans’ Sensitive Data from Foreign Adversaries,’ The White House, 9 June 2021, www.whitehouse.gov/briefing-room/presidential-actions/2021/06/09/executive-order-on-protecting-americans-sensitive-data-from-foreign-adversaries/.
169 Reuters Staff, ‘Facebook, Twitter take action over Trump’s misleading COVID-19 posts,’ 2 October 2020, Reuters, www.reuters.com/article/us-twitter-trump/facebook-twitter-take-action-over-trumps-misleading-covid-19-posts-idUSKBN26R2Z3.
170 Adi Robertson, ‘Facebook Gives Trump a 2-Year Suspension, Changes Rules for Politicians,’ The Verge, 4 June 2021, www.theverge.com/2021/6/4/22519073/facebook-trump-ban-2-year-oversight-board-decision-political-figures-newsworthiness.

For example, due to US federalism, the adoption of contact tracing was limited and differed from State to State.171 (Even in China, the harmonisation of contact tracing apps across provinces was not seamless.172) It remains to be seen how the Biden administration will tackle the wider issue of digital sovereignty. As of June 2021, whilst having revoked the WeChat and TikTok ban, it is promoting a tough stance towards China and Russia and continuing the ‘Clean Network’ initiative,173 which has been criticised by the Internet Society (ISOC) for promoting a ‘Splinternet’.174 At the same time, and somewhat paradoxically, Washington is attempting to influence the EU to abandon its ‘digital sovereignty’ agenda in opposition to the US.175 In conclusion: the COVID crisis and Trump’s economic nationalist agenda led the US to further abandon its Internet Freedom approach and deepen its commitment to the digital sovereignty norm. However, particularly with the ending of the Trump era, the US does not display an exclusively national understanding of digital sovereignty. Rather, it interprets digital sovereignty according to the characteristics of its fading, but still highly influential, liberal model of global rule, which is based on multilateral alliances and the power of the private sector.

This leads to the question of how COVID served as a catalyst of digital sovereignty in the EU. Transnational outbreaks of diseases are particularly relevant for promoting policies within the EU as a supranational political body. For example, the SARS outbreak in 2003 promoted the creation of the European Centre for Disease Prevention and Control (ECDC), and the swine flu epidemic in 2009 resulted in the creation of a mechanism for joint procurement of vaccines.176 Most probably, the COVID crisis will bring about similar tendencies towards the further European integration of health policies. Beyond health, the crisis foremost concerns the economy. Since the EU’s post-COVID recovery plan required the EU Commission to borrow €750 billion, mostly from capital markets, it can be expected to promote fiscal and economic integration.177

171 Jefferson Graham, ‘Tracking coronavirus: Are Apple and Google contact tracing apps available in your state?’, USA Today, 2 October 2020, www.usatoday.com/story/tech/2020/10/02/apple-google-coronavirus-contact-tracing-apps/3592355001/.
172 Wanshu Cong, ‘From Pandemic Control to Data-Driven Governance: The Case of China’s Health Code,’ Frontiers in Political Science, 14 April 2021. www.frontiersin.org/articles/10.3389/fpos.2021.627959/full?utm_source=S-TWT&utm_medium=SNET&utm_campaign=ECO_FPOS_XXXXXXXX_auto-dlvrit.
173 Washington Post (staff), ‘Transcript: The Path Forward: Safeguarding Global Innovation with Keith J. Krach & Gen. Stanley A. McChrystal,’ Washington Post, 22 April 2021, www.washingtonpost.com/washington-post-live/2021/04/22/transcript-path-forward-safeguarding-global-innovation-with-keith-j-krach-gen-stanley-mcchrystal/.
174 ‘Internet Society Statement on U.S. Clean Network Program,’ Internet Society (blog), 7 August 2020, www.internetsociety.org/news/statements/2020/internet-society-statement-on-u-s-clean-network-program/.
175 Tyson Barker, ‘Biden’s Plan to Cooperate With Europe on Tech,’ Foreign Policy (blog), 16 June 2021, https://foreignpolicy.com/2021/06/16/bidens-mission-to-defeat-digital-sovereignty/.
176 Eleanor Brooks and Robert Geyer, ‘The Development of EU Health Policy and the Covid-19 Pandemic: Trends and Implications,’ Journal of European Integration 42, no. 8 (2020): 1057–76, https://doi.org/10.1080/07036337.2020.1853718.
177 Chih-Mei Luo, ‘The COVID-19 Crisis: The EU Recovery Fund and Its Implications for European Integration – a Paradigm Shift,’ European Review, 19 March 2021, 1–19, https://doi.org/10.1017/S106279872100003X.

Post-COVID digital sovereignty within the EU must primarily be understood as part of the infrastructural investments in the context of this recovery plan, but it also has some legal and political dimensions.

In the EU and particularly in Germany, digital sovereignty had already been at the top of the agenda for a while (see sections IV and V). It therefore came as no surprise that ‘expanding the EU’s digital sovereignty’ was named a priority when Germany took over the EU Council Presidency during the COVID crisis.178 Already when German EU Commission President von der Leyen took office in 2019, she identified ‘technological sovereignty’ in general as one of her core goals, but named only digital technologies as examples.179 It was almost redundant when, on 1 March 2021, German Chancellor Merkel, together with the heads of government of Denmark, Estonia and Finland, wrote an open letter to the EU Commission to demand the implementation of policies of digital sovereignty.180

The European Commission conceived of a number of policy responses to the COVID crisis related to digital technologies, ranging from data protection and skill sharing to measures to improve connectivity.181 In tackling the spread of disinformation, the Commission remained committed to liberalism, inasmuch as it promoted a ‘whole-of-society approach’, involving public authorities, journalists, researchers, fact-checkers, online platforms and civil society, and referred to the self-regulatory Code of Practice on Disinformation from 2018 (see section V).182 The draft for the Digital Services Act published during the pandemic further specifies transparency and risk-mitigation duties for digital platforms, particularly ‘very large online platforms’. However, rather than imposing pooled state sovereignty on digital platforms, this framework is aimed at establishing a ‘flexible co-regulatory environment’.183

178 ‘Together for Europe’s Recovery, Programme for Germany’s Presidency of the Council of the European Union, 1 July to 31 December 2020,’ 8, www.eu2020.de/blob/2360248/e0312c50f910931819ab67f630d15b2f/06-30-pdf-programm-en-data.pdf.
179 Ursula von der Leyen, ‘A Union that strives for more. My agenda for Europe. Political Guidelines for the next European Commission 2019–2024,’ 13, https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf.
180 Angela Merkel, Mette Frederiksen, Sanna Marin and Kaja Kallas, Letter to the President of the European Commission, 1 March 2021, https://valtioneuvosto.fi/documents/10616/56906592/DE+DK+FI+EE+Letter+to+the+COM+President+on+Digital+Sovereignty_final.pdf/36db4c7f-3de9-103a-d01a-9c8ccd703561/DE+DK+FI+EE+Letter+to+the+COM+President+on+Digital+Sovereignty_final.pdf?t=1614670944134.
181 European Commission, ‘Digital technologies – actions in response to coronavirus pandemic,’ 14 September 2020, https://ec.europa.eu/digital-single-market/en/content/digital-technologies-actions-response-coronavirus-pandemic.
182 European Commission, ‘Tackling Covid-19 disinformation – Getting the facts right,’ Joint Communication to the European Parliament, the European Council, the Council, The European Economic and Social Committee and the Committee of the Regions, JOIN(2020) 8 final, 10 June 2020, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020JC0008.
183 Commission Staff Working Document/Impact Assessment/Accompanying the document Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC, 16 December 2020, https://data.consilium.europa.eu/doc/document/ST-14124-2020-ADD-1/en/pdf.

Also, the ‘Council conclusions on strengthening resilience and countering hybrid threats, including disinformation in the context of the COVID-19 pandemic’ from December 2020 underline media and digital literacy, media pluralism and media independence.184 However, it is striking that, although the origin of online disinformation can hardly be verified, the latter document explicitly limits disinformation to ‘hybrid threats originating from hostile state and non-state actors’. Such narrowing of the discourse frames the complex, multi-factor civil society issue of disinformation as a one-dimensional, military issue, which creates new international fault lines with respect to digital sovereignty.

As mentioned before, the correlation between infrastructural notions of digital sovereignty and the crisis is particularly strong. An ideas paper by the European Parliamentary Research Service issued in July 2020 reads:

The coronavirus pandemic (…) showed the essential role played by the high-tech sector (…) and has accelerated the reflection on the need for sovereign digital technologies (…).185

Indeed, the EU recovery plan includes sizeable investments in digital infrastructure projects such as supercomputers, Gaia-X and a European e-identity. When announcing these investments, von der Leyen assured once more: ‘None of this is an end in itself – it is about Europe’s digital sovereignty, on a small and large scale.’186 So far, as of June 2021, this rhetoric of digital sovereignty, problematic because of its far-reaching proximity to economic protectionism, does not include a categorical exclusion of Chinese products comparable to the US’s approach. The Commission endorsed a toolbox to address security risks related to the rollout of 5G, merely mentioning ‘suppliers considered to be high risk’ without explicit reference to China.187 Since the EU’s economy largely depends on exports to China and the US, it is not likely to engage in any kind of economic protectionism, for this could be expected to bring about retaliatory measures.

184 General Secretariat of the Council, ‘Council conclusions on strengthening resilience and countering hybrid threats, including disinformation in the context of the COVID-19 pandemic,’ 15 December 2020, https://data.consilium.europa.eu/doc/document/ST-14064-2020-INIT/en/pdf.
185 Tambiama Madiega, ‘Digital sovereignty for Europe,’ European Parliamentary Research Service Ideas Paper, July 2020, 1.
186 Ursula von der Leyen, ‘State of the Union Address,’ 16 September 2020, https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_20_1655.
187 European Commission, ‘Secure 5G networks: Commission endorses EU toolbox and sets out next steps,’ Press Release, 29 January 2020, https://ec.europa.eu/commission/presscorner/detail/en/ip_20_123.
188 Philipp Grüll and Samuel Stolton, ‘US and Chinese tech giants welcomed into ‘EU sovereign’ cloud project,’ Euractiv, 15 October 2020, www.euractiv.com/section/digital/news/us-and-chinese-tech-giants-welcomed-into-eu-sovereign-cloud-project/.

Due to the possible participation of Chinese and US tech firms188 in the EU’s digital sovereignty flagship project Gaia-X, the EU has already been accused of mere ‘sovereignty-washing’.189 In terms of norm development, however, such accusations of the disingenuous use of a norm, and, if these accusations hold true, the invocation of a norm purely for PR reasons, are the best proof of the complete internalisation of a norm. In contrast to an exclusively state-centred understanding of digital sovereignty widespread within the EU, the COVID crisis has shown that the private sector’s independence is of crucial importance in times of crisis, since it can be an obstacle to state overreach. For example, Apple’s and Google’s backing of a decentralised model for contact tracing apps led some EU Member States and the UK to adjust their apps accordingly.190

VII.  Conclusion and Discussion. Contestation, Variation and Multilateralisation of Digital Sovereignty

The COVID crisis plays an accelerating role in the norm development process of digital sovereignty, which has been ongoing for about 20 years. The phase from 1998 to 2012, including 9/11, the Patriot Act and PRISM, up to the beginning of far-reaching attempts at internet control in Russia, marks the phase of norm emergence. This phase is characterised by uncontested US digital hegemony and an idealistic tonality among its opponents. During this phase, China is the single most important norm entrepreneur for digital sovereignty. 2013 to 2020, the period starting with the Snowden revelations, represents a phase of norm cascade, in which digital sovereignty measures and rhetoric are implemented by a number of actors including Russia and the EU. During this phase, arguments in favour of digital sovereignty are brought forward in an increasingly conservative tonality, appealing to legitimacy. From the point of view of intellectual history, it is noteworthy that, in France and Russia, this includes the application of arguments from the illiberal legal theorist Carl Schmitt. From 2016 to 2020, the US and the EU undergo an additional phase, which can be labelled a phase of norm universalisation. With the Russian campaigns around the US general election and Brexit in 2016 and the French presidential election in 2017, once-hegemonic Western countries experience their vulnerability to foreign political propaganda. This leads to self-regulation in the US tech sector and laws against hate speech and disinformation in France and Germany.

189 Evgeny Morozov, Tweet, 17 October 2020, 12:06 PM, https://twitter.com/evgenymorozov/status/1317406673749815297?s=20.
190 Natasha Lomas, ‘UK gives up on centralised coronavirus contacts-tracing app – will ‘likely’ switch to model backed by Apple and Google,’ Tech Crunch, 18 June 2020, https://techcrunch.com/2020/06/18/uk-gives-up-on-centralised-coronavirus-contacts-tracing-app-will-switch-to-model-backed-by-apple-and-google/; Natasha Lomas, ‘Germany ditches centralised approach to app for COVID-19 contacts tracing,’ Tech Crunch, 27 April 2020, https://techcrunch.com/2020/04/27/germany-ditches-centralised-approach-to-app-for-covid-19-contacts-tracing/.

During this phase, too, a focus on the economic aspect of digital sovereignty can be identified. The US enacts the Huawei ban and the EU develops its plans for a ‘sovereign cloud infrastructure’. Both developments are connected to the rise of China’s digital economy in that period. In turn, the ‘new normal’ starting with the COVID crisis represents a phase of the internalisation of digital sovereignty. Through contact tracing, the digitalisation of everyday life and governmental accountability through COVID dashboards that increasingly involve civil society, bottom-up digital sovereignty becomes widespread. However, this internalisation also makes clear that digital sovereignty in liberal societies is strongly characterised and limited by the power of the private sector and by restrictions on the power of national governments, such as federalism and multilateralism. Yet, the contestedness191 and local variation of a norm can be considered symptoms of norm internalisation. The concept of digital sovereignty is here to stay, and governments will have to relate to it in one way or another.

Further research should include questions that are already outlined in this contribution: is the norm of digital sovereignty merely a kind of digital nationalism or mercantilism, or can it serve as a foundation for international cooperation? Despite popular misconceptions, China and Russia are also among the leaders in this development, with the Shanghai Cooperation Organization (SCO), which has been promoting the norm of digital sovereignty and transnational authoritarianism since 2007.192 It remains to be seen how and whether similar Western bloc formation, for example regarding the US’s ‘Clean Network’ initiative and cyberwar capacities within NATO, develops new ways to multilateralise digital sovereignty. Eastern and Western bloc formation will constitute new flashpoints on the digital sovereignty battlefield. Regarding the EU, it remains to be seen how digital sovereignty will be distributed between the Member States and the supranational level. Due to the high degree of regional economic and technological integration, it is next to impossible that national approaches to digital sovereignty in terms of digital infrastructure and industry will be pursued within the EU. However, single Member States might push for national digital sovereignty regarding information sovereignty and impose further limits on the content regulation of digital platforms.193

191 Antje Wiener, A Theory of Contestation (Springer, 2014), 3.
192 Johannes Thumfart, ‘Cultures of digital sovereignty in China, Russia and India. A comparative perspective from NWICO and WSIS to SCO,’ manuscript, to be published in Digital Sovereignty in the BRICS Countries, eds. Luca Belli and Min Jiang, 2022.
193 David Brennan, ‘Big Tech Must Be Reined in with Anti-Censorship Rules, Polish PM Says,’ Newsweek, 9 June 2021, www.newsweek.com/big-tech-must-reined-anti-censorship-rules-polish-prime-minister-mateusz-morawiecki-1598891.


References

Agten, Sven. ‘The Great Decoupling and a New Cold War.’ In Adventures in the Chinese Economy: 16 Years from the Inside, edited by Sven Agten. Singapore: Springer Singapore, 2021, 219–37.
Albrecht, Jan Philipp. Hands off our Data! Jan Philipp Albrecht, 2015. www.janalbrecht.eu/wp-content/uploads/2018/02/JP_Albrecht_hands-off_final_WEB.pdf.
—— ‘Regaining Control and Sovereignty in the Digital Age.’ In Enforcing Privacy, edited by David Wright and Paul De Hert. Cham: Springer, 2016: 473–488.
Anderson, Benedict. Imagined Communities. London: Verso, 2016.
Anderson, Perry. The Old New World. London and New York: Verso, 2009.
Arkin, William M. et al. ‘What Obama Said to Putin on the Red Phone About the Election Hack.’ NBC News, 19 December 2016. https://perma.cc/5CKG-G5XC.
Asmolov, Gregory. ‘Runet in Crisis Situations.’ In Internet in Russia. Societies and Political Orders in Transition, edited by Sergey Davydov. Cham: Springer, 2020: 231–250.
Associated Press. ‘Russia used English-language sites to spread Covid-19 disinformation, US officials say.’ The Guardian, 29 July 2020. www.theguardian.com/us-news/2020/jul/28/russia-covid-19-disinformation-websites-us-intelligence.
Baig, Aamer et al. ‘The COVID-19 recovery will be digital: A plan for the first 90 days.’ McKinsey Digital, 14 May 2020. www.mckinsey.com/business-functions/mckinsey-digital/our-insights/the-covid-19-recovery-will-be-digital-a-plan-for-the-first-90-days.
Bambauer, Derek E. ‘Trump’s Section 230 reform is repudiation in disguise.’ Brookings Institution Tech Stream, 8 October 2020. www.brookings.edu/techstream/trumps-section-230-reform-is-repudiation-in-disguise/.
Barnard-Wills, David et al. ‘Report on the SME experience of the GDPR.’ STAR, 31. www.trilateralresearch.com/wp-content/uploads/2020/01/STAR-II-D2.2-SMEs-experience-with-the-GDPR-v1.0-.pdf.
Barker, Tyson. ‘Biden’s Plan to Cooperate With Europe on Tech.’ Foreign Policy (blog), 16 June 2021. https://foreignpolicy.com/2021/06/16/bidens-mission-to-defeat-digital-sovereignty/.
Bellanger, Pierre. La Souveraineté Numérique. Paris: Éditions Stock, 2014, E-Book.
—— ‘De la souveraineté numérique.’ Le Débat 170, 2012/3: 149–159.
Benhamou, Bernard and Laurent Sorbier. ‘Souveraineté et Réseaux Numériques.’ Institut français des relations internationales 3 (Autumn 2006): 519–530.
Bhuiyan, Abu. Internet Governance and the Global South: Demand for a new framework. New York, NY: Palgrave Macmillan, 2014.
Bitkom. ‘A sovereign cloud and data infrastructure for Germany and Europe.’ Position Paper, 15 November 2019. www.bitkom.org/EN/List-and-detailpages/Publications/A-sovereign-cloud-and-data-infrastructure-for-Germany-and-Europe.
—— ‘Digitale Souveränität: Anforderungen an Technologien- und Kompetenzfelder mit Schlüsselfunktion.’ Positionspapier, 2019. www.bitkom.org/Bitkom/Publikationen/Digitale-Souveraenitaet-Anforderungen-an-Technologien-und-Kompetenzfelder-mit-Schluesselfunktion.
Brennan, David. ‘Big Tech Must Be Reined in with Anti-Censorship Rules, Polish PM Says.’ Newsweek, 9 June 2021. www.newsweek.com/big-tech-must-reined-anti-censorship-rules-polish-prime-minister-mateusz-morawiecki-1598891.

Brooks, Eleanor and Robert Geyer. ‘The Development of EU Health Policy and the Covid-19 Pandemic: Trends and Implications.’ Journal of European Integration 42, no. 8 (16 November 2020): 1057–76. https://doi.org/10.1080/07036337.2020.1853718.
Burwell, Frances G. and Kenneth Propp. ‘The European Union and the Search for Digital Sovereignty. The European Union Building “Fortress Europe” or Preparing for a New World?’ Atlantic Council, June 2020, 2. www.atlanticcouncil.org/wp-content/uploads/2020/06/The-European-Union-and-the-Search-for-Digital-Sovereignty-Building-Fortress-Europe-or-Preparing-for-a-New-World.pdf.
Cartwright, Madison. ‘Internationalising state power through the internet: Google, Huawei and geopolitical struggle.’ Internet Policy Review 9, no. 3 (2020). https://policyreview.info/articles/analysis/internationalising-state-power-through-internet-google-huawei-and-geopolitical.
Clinton, Hillary. ‘Remarks on Internet Freedom.’ Speech at the Newseum, 21 January 2010. https://2009-2017.state.gov/secretary/20092013clinton/rm/2010/01/135519.htm.
Coleman, Andrew and Jackson Nyamuya Maogoto. ‘“Westphalian” Meets “Eastphalian” Sovereignty: China in a Globalised World.’ Asian Journal of International Law 3 (2013): 237–269.
Cong, Wanshu. ‘From Pandemic Control to Data-Driven Governance: The Case of China’s Health Code.’ Frontiers in Political Science, 14 April 2021. www.frontiersin.org/articles/10.3389/fpos.2021.627959/full?utm_source=S-TWT&utm_medium=SNET&utm_campaign=ECO_FPOS_XXXXXXXX_auto-dlvrit.
Cong, Wanshu and Johannes Thumfart. ‘The Chinese origins of digital modes of sovereignty (1996 to 2001) – Between authoritarian censorship and resistance against digital colonialism’ (forthcoming).
Court of Justice of the European Union. ‘The Court of Justice declares that the Commission’s US Safe Harbour Decision is invalid.’ Press Release No 117/15, Luxembourg, 6 October 2015. https://curia.europa.eu/jcms/upload/docs/application/pdf/2015-10/cp150117en.pdf.
Csernatoni, Raluca. ‘Coronavirus Tracking Apps: Normalizing Surveillance During States of Emergency.’ Peace Research Institute Oslo, 5 October 2020. https://blogs.prio.org/2020/10/coronavirus-tracking-apps-normalizing-surveillance-during-states-of-emergency/.
Davidson, James Dale and William Rees-Mogg. The Sovereign Individual: Mastering the Transition to the Information Age. New York: Touchstone, 1997.
Elder, Miriam. ‘Russia needs to reclaim its “digital sovereignty” from US, says MP.’ The Guardian, 19 June 2013.
Ellehuus, Rachel and Donatienne Ruy. ‘Did Russia Influence Brexit?’ Center for Strategic and International Studies Blog, 21 July 2020. www.csis.org/blogs/brexit-bits-bobs-and-blogs/did-russia-influence-brexit.
Emmanuel Macron. ‘Initiative pour l’Europe – Discours d’Emmanuel Macron pour une Europe souveraine, unie, démocratique.’ 26 September 2017. www.elysee.fr/emmanuel-macron/2017/09/26/initiative-pour-l-europe-discours-d-emmanuel-macron-pour-une-europe-souveraine-unie-democratique.
Epifanova, Alena. ‘Deciphering Russia’s Sovereign Internet Law.’ Deutsche Gesellschaft für Auswärtige Politik, 16 January 2020. https://dgap.org/de/node/33332.
European Commission. ‘Digital technologies – actions in response to coronavirus pandemic.’ 14 September 2020. https://ec.europa.eu/digital-single-market/en/content/digital-technologies-actions-response-coronavirus-pandemic.
—— ‘Secure 5G networks: Commission endorses EU toolbox and sets out next steps.’ Press Release, 29 January 2020. https://ec.europa.eu/commission/presscorner/detail/en/ip_20_123.

—— ‘Tackling Covid-19 disinformation – Getting the facts right.’ Joint Communication to the European Parliament, the European Council, the Council, The European Economic and Social Committee and the Committee of the Regions, JOIN(2020) 8 final, 10 June 2020. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020JC0008.
Federal Ministry for Economic Affairs and Energy. ‘Industrie 4.0 und Digitale Wirtschaft. Impulse für Wachstum, Beschäftigung und Innovation.’ April 2015. www.bmwi.de/Redaktion/DE/Publikationen/Industrie/industrie-4-0-und-digitale-wirtschaft.pdf%3F__blob%3DpublicationFile%26v%3D3.
Federal Office for Information Security. ‘Cloud Independence Day: BSI und ANSSI stellen European Secure Cloud Label vor.’ Press Release, 6 June 2017. www.bsi.bund.de/DE/Presse/Kurzmeldungen/Meldungen/news_Cloud_Independence_Day_06062017.html.
Finnemore, Martha and Kathryn Sikkink. ‘International Norm Dynamics and Political Change.’ International Organization 52, no. 4 (Autumn 1998): 887–917.
Floridi, Luciano. ‘The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU.’ Philosophy & Technology 33 (2020): 369–378. https://link.springer.com/article/10.1007/s13347-020-00423-6.
Fontaine, Richard and Will Rogers. ‘Internet Freedom. A Foreign Policy Imperative in the Digital Age.’ Center for a New American Security, June 2011.
Friedman, Milton. Capitalism and Freedom. Chicago: University of Chicago Press, 2002.
Gabriel, Sigmar. ‘Political consequences of the Google debate.’ Frankfurter Allgemeine Zeitung, 16 May 2014. www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/sigmar-gabriel-consequences-of-the-google-debate-12948701.html?printPagedArticle=true#pageIndex_7.
Gady, Franz-Stefan. ‘The Wuzhen Summit and Chinese Internet Sovereignty.’ Huffpost, 12 September 2014. www.huffpost.com/entry/the-wuzhen-summit-and-chi_b_6287040.
Gaudino, Ugo. ‘The Ideological Securitization of COVID-19: Perspectives from the Right and the Left.’ E-international Relations, 28 July 2020. www.e-ir.info/2020/07/28/the-ideological-securitization-of-covid-19-perspectives-from-the-right-and-the-left/.
Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale UP, 2018.
Goldsmith, Jack. ‘Against Cyberanarchy.’ University of Chicago Law Occasional Paper 40 (1999).
Gong, Wenxiang (龚文庠). ‘International Communication in the Information Age: New Issues Facing International Relations (信息时代的国际传播:国际关系面临的新问题).’ Studies of International Politics (国际政治研究) 2 (1998): 41–45.
Greene, Robert and Paul Triolo. ‘Will China Control the Global Internet Via its Digital Silk Road?’ Carnegie Endowment, 8 May 2020. https://carnegieendowment.org/2020/05/08/will-china-control-global-internet-via-its-digital-silk-road-pub-81857.
Grüll, Philipp and Samuel Stolton. ‘US and Chinese tech giants welcomed into ‘EU sovereign’ cloud project.’ Euractiv, 15 October 2020. www.euractiv.com/section/digital/news/us-and-chinese-tech-giants-welcomed-into-eu-sovereign-cloud-project/.
Gulyaeva, Natalia et al. ‘Russia Update: Regulator Publishes Data Localization Clarifications.’ Hogan Lovells Chronicle of Data Protection, 11 August 2015. www.hldataprotection.com/2015/08/articles/international-eu-privacy/russia-update-regulator-publishes-data-localization-clarifications/.
Hanson, Fergus. ‘Baked in and wired: ediplomacy@State.’ Brookings Institution, 25 October 2012. www.brookings.edu/research/internet-freedom-the-role-of-the-u-s-state-department/.

Harper, Jo. ‘Russians protest state censorship of the internet.’ Deutsche Welle, 23 June 2017. www.dw.com/en/russians-protest-state-censorship-of-the-internet/a-39807536.
Heath, Ryan. ‘Brussel’s Playbook Briefing.’ Politico, 5 July 2015. www.politico.eu/newsletter/brussels-playbook/uk-election-greek-mystery-messages-digital-divides/.
de Hert, Paul and Michal Czerniawski. ‘Expanding the European data protection scope beyond territory: Article 3 of the General Data Protection Regulation in its wider context.’ International Data Privacy Law 6, no. 3 (August 2016): 230–243.
de Hert, Paul and Johannes Thumfart. ‘The Microsoft Ireland case and the Cyberspace Sovereignty Trilemma. Post-Territorial technologies and companies question territorial state sovereignty and regulatory state monopolies.’ Brussels Privacy Hub Working Paper 4/11, July 2018. https://brusselsprivacyhub.eu/publications/BPH-Working-Paper-VOL4-N11.pdf.
de Hert, Paul, Cihan Parlar and Johannes Thumfart. ‘Legal Arguments Used in Courts Regarding Territoriality and Cross-Border Production Orders: From Yahoo Belgium to Microsoft Ireland.’ New Journal of European Criminal Law 9 (2018).
Hildebrandt, Mireille. ‘Extraterritorial Jurisdiction to enforce in Cyberspace? Bodin, Schmitt, Grotius, in Cyberspace.’ University of Toronto Law Journal 63, no. 2 (Spring 2013): 196–224.
Hobbs, Carla (ed.). ‘Europe’s Digital Sovereignty: From Rulemaker to Superpower in the Age of US-China Rivalry.’ European Council on Foreign Relations, July 2020, 1. www.ecfr.eu/page/-/europe_digital_sovereignty_rulemaker_superpower_age_us_china_rivalry.pdf.
Hodge, Scott A. and Daniel Bunn. ‘Digital Tax Deadlock: Where Do We Go from Here?’ Tax Foundation, 1 July 2020. https://taxfoundation.org/oecd-digital-tax-project-developments/.
‘Internet Society Statement on U.S. Clean Network Program.’ Internet Society (blog), 7 August 2020. www.internetsociety.org/news/statements/2020/internet-society-statement-on-u-s-clean-network-program/.
Irion, Kristina. ‘Schrems II and Surveillance: Third Countries’ National Security Powers in the Purview of EU Law.’ European Law Blog, 24 July 2020. https://europeanlawblog.eu/2020/07/24/schrems-ii-and-surveillance-third-countries-national-security-powers-in-the-purview-of-eu-law/.
Jefferson, Graham. ‘Tracking coronavirus: Are Apple and Google contact tracing apps available in your state?’ USA Today, 2 October 2020.
Johns Hopkins University. ‘Covid-19 Dashboard.’ https://coronavirus.jhu.edu/map.html.
Kalyvas, Andreas. ‘Carl Schmitt’s Postcolonial Imagination.’ Constellations 25, no. 1 (March 2018): 35–53.
Karnitschnig, Matthew. ‘Germans being German about coronavirus.’ Politico, 29 April 2020. www.politico.eu/article/germans-being-german-about-coronavirus/.
Kettemann, Matthias C. The Normative Order of the Internet. A Theory of Rule and Regulation Online. Oxford: OUP, 2020.
Kelsen, Hans. Das Problem der Souveränität. Tübingen: Mohr Siebeck, 1928.
—— General Theory of Law and State. Cambridge, MA: Harvard UP, 1949.
Kharpal, Arjun. ‘Use of surveillance to fight coronavirus raises concerns about government power after pandemic ends.’ CNBC, 26 March 2020.

Khondker, Habibul Haque. ‘The impact of the Arab Spring on democracy and development in the MENA region.’ Sociology Compass 13, no. 9 (September 2019). https://onlinelibrary.wiley.com/doi/abs/10.1111/soc4.12726.
Kirschbaum, Erik. ‘Snowden says NSA engages in industrial espionage: TV.’ Reuters, 26 January 2014. www.reuters.com/article/us-security-snowden-germany-idUSBREA0P0DE20140126.
Kohl, Uta. Jurisdiction and the Internet. Cambridge, UK: Cambridge UP, 2007.
Kokas, Aynne. ‘China already has your data. Trump’s TikTok and WeChat bans can’t stop that.’ Washington Post, 11 August 2020. www.washingtonpost.com/outlook/2020/08/11/tiktok-wechat-bans-ineffective/.
Krastev, Ivan and Stephen Holmes. The Light That Failed: A Reckoning. London: Allen Lane, an imprint of Penguin Books, 2019.
La Via Campesina. ‘What is food sovereignty?’ 15 January 2003. https://viacampesina.org/en/food-sovereignty/.
Lambach, Daniel. ‘The Territorialization of Cyberspace.’ International Studies Review 22, no. 3 (1 September 2020): 482–506. https://doi.org/10.1093/isr/viz022.
Lauterpacht, Elihu. ‘Sovereignty – Myth or Reality?’ International Affairs 73 (1997): 137–150.
Leigh, David. ‘US embassy cables leak sparks global diplomatic crisis.’ The Guardian, 28 November 2010. www.theguardian.com/world/2010/nov/28/us-embassy-cable-leak-diplomacy-crisis.
Leonard, Mark. ‘Connectivity Wars. Weaponizing Interdependence.’ European Council on Foreign Relations, January 2017, 23. www.ecfr.eu/page/-/Connectivity_Wars.pdf.
Lim, Sijeong and Aseem Prakash. ‘Pandemics and Citizen Perceptions about Their Country: Did COVID‐19 Increase National Pride in South Korea?’ Nations and Nationalism, 6 June 2021, nana.12749. https://doi.org/10.1111/nana.12749.
Lomas, Natasha. ‘GDPR’s two-year review flags lack of ‘vigorous’ enforcement.’ Tech Crunch, 24 June 2020. https://techcrunch.com/2020/06/24/gdprs-two-year-review-flags-lack-of-vigorous-enforcement/.
—— ‘UK gives up on centralised coronavirus contacts-tracing app – will ‘likely’ switch to model backed by Apple and Google.’ Tech Crunch, 18 June 2020.
Macron, Emmanuel. ‘Discours du Président de la République au Parlement européen à Strasbourg.’ 17 April 2018. www.elysee.fr/emmanuel-macron/2018/04/17/discours-du-president-de-la-republique-au-parlement-europeen-a-strasbourg.
Luo, Chih-Mei. ‘The COVID-19 Crisis: The EU Recovery Fund and Its Implications for European Integration – a Paradigm Shift.’ European Review, 19 March 2021, 1–19. https://doi.org/10.1017/S106279872100003X.
Madiega, Tambiama. ‘Digital sovereignty for Europe.’ European Parliamentary Research Service Ideas Paper, July 2020, 1.
Merkel, Angela, Mette Frederiksen, Sanna Marin and Kaja Kallas. Letter to the President of the European Commission. 1 March 2021. https://valtioneuvosto.fi/documents/10616/56906592/DE+DK+FI+EE+Letter+to+the+COM+President+on+Digital+Sovereignty_final.pdf/36db4c7f-3de9-103a-d01a-9c8ccd703561/DE+DK+FI+EE+Letter+to+the+COM+President+on+Digital+Sovereignty_final.pdf?t=1614670944134.
McKune, Sarah and Shazeda Ahmed. ‘The Contestation and Shaping of Cyber Norms Through China’s Internet Sovereignty Agenda.’ International Journal of Communication 12 (2018): 3835–3855.
Mitchell, Timothy. Carbon Democracy. London and New York: Verso, 2013.

Morin-Desailly, Catherine. ‘Rapport d’information, fait au nom de la commission des affaires européennes.’ n° 443 (2012–2013) – 20 mars 2013. www.senat.fr/notice-rapport/2012/r12-443-notice.html.
Morozov, Evgeny. Tweet, 17 October 2020, 12:06 PM. https://twitter.com/evgenymorozov/status/1317406673749815297?s=20.
Morris, Chris and Anthony Reuben. ‘Coronavirus: Why are international comparisons difficult?’ BBC, 17 June 2020. www.bbc.com/news/52311014.
Moerel, Lokke and Paul Timmers. ‘Reflections on Digital Sovereignty.’ EU Cyber Direct, Research in Focus series, 2021. Available at SSRN: https://ssrn.com/abstract=3772777.
Mueller, Milton. Will the Internet Fragment? Sovereignty, Globalization, and Cyberspace. Cambridge, UK: Polity, 2017.
Negro, Gianluigi. ‘A history of Chinese global Internet governance and its relations with ITU and ICANN.’ Chinese Journal of Communication 13, no. 1 (2020): 104–121.
Newman, Nathan. Net Loss. Internet Prophets, Private Profits, and the Costs to Community. Philadelphia: Penn State UP, 2002.
Perritt, Henry. ‘The Internet as a Threat to Sovereignty?’ Indiana Journal of Global Legal Studies 5, no. 2, Article 4 (1998).
Pompeo, Michael. ‘Announcing the Expansion of the Clean Network to Safeguard America’s Assets.’ Press Release, 5 August 2020. www.state.gov/announcing-the-expansion-of-the-clean-network-to-safeguard-americas-assets/.
PWC (Strategy&). ‘Strategische Marktanalyse zur Reduzierung von Abhängigkeiten von einzelnen Software-Anbietern.’ Final Report, August 2019.
Qiang, Xiao. ‘How China’s Internet Police Control Speech on the Internet.’ Radio Free Asia, 21 November 2008. www.rfa.org/english/commentaries/china_internet-11242008134108.html.
Rand Corporation. ‘Paul Baran and the Origins of the Internet.’ Rand Corporation Website. www.rand.org/about/history/baran.html.
Renz, Bettina and Hanna Smith. ‘Russia and Hybrid Warfare. Going beyond the label.’ Aleksanteri Papers 1/2016. www.stratcomcoe.org/bettina-renz-and-hanna-smith-russia-and-hybrid-warfare-going-beyond-label.
Rifkin, Jeremy. The End of Work. The Decline of the Global Labor Force and the Dawn of the Post-Market Era. New York: G. P. Putnam’s Sons, 1995.
Robson, David. ‘Why China’s internet use has overtaken the West.’ BBC, 9 March 2017. www.bbc.com/future/article/20170309-why-chinas-internet-reveals-where-were-headed-ourselves.
Ryngaert, Cedric and Mark Zoetekouw. ‘The End of Territory? The Re-Emergence of Community as a Principle of Jurisdictional Order in the Internet Era.’ In The Net and the Nation State, edited by Uta Kohl, 185–201. Cambridge: Cambridge University Press, 2017.
Sargsyan, Tatevik. ‘The Turn to Infrastructure in Privacy Governance.’ In The Turn to Infrastructure in Internet Governance, edited by Francesca Musiani et al. NY: Palgrave Macmillan, 2016.
Sassen, Saskia. ‘On the Internet and Sovereignty.’ Indiana Journal of Global Legal Studies 5, no. 2, Article 9 (1998).
Savelyev, Alexander. ‘Russia’s new personal data localization regulations: A step forward or a self-imposed sanction?’ Computer, Law & Security Review 32 (2016): 128–145.

Savin, Leonid. Номос Киберпространства и суверенный Интернет [The Nomos of Cyberspace and the Sovereign Internet]. Pluriversum, 30 December 2019. https://pluriversum.org/opinion/cyberspace/nomos-kiberprostranstva-i-suverennyj-internet/.
—— Суверенный Интернет – дело принципиальной важности для национальной безопасности [The Sovereign Internet is a Matter of Fundamental Importance for National Security]. News Front, 30 December 2019. https://news-front.info/2019/12/30/suverennyj-internet-delo-princzipialnoj-vazhnosti-dlya-naczionalnoj-bezopasnosti/.
Schmitt, Carl. Land and Sea. Washington D.C.: Plutarch Press, 1997.
—— Political Theology. Cambridge, MA: MIT Press, 1985.
Scott, James C. Seeing like a state. New Haven and London: Yale UP, 1998.
Shu, Catherine. ‘China Tried To Get World Internet Conference Attendees To Ratify This Ridiculous Draft Declaration.’ Tech Crunch, 21 November 2014. https://techcrunch.com/2014/11/20/worldinternetconference-declaration/.
Skopeliti, Clea. ‘UK sovereignty in jeopardy if Huawei used for 5G, US warns.’ The Guardian, 27 January 2020.
Snyder, Timothy. The Road to Unfreedom. New York: Penguin Random House, 2018.
Solís, Mireya. ‘The post Covid-19 world: Economic nationalism triumphant?’ Brookings Institution Blog, 10 July 2020. www.brookings.edu/blog/order-from-chaos/2020/07/10/the-post-covid-19-world-economic-nationalism-triumphant/.
Spinney, Laura. Pale Rider: The Spanish Flu of 1918 and How It Changed the World. First US edition. New York: Public Affairs, 2017.
Staff. ‘Facebook, Twitter take action over Trump’s misleading COVID-19 posts.’ Reuters, 2 October 2020. www.reuters.com/article/us-twitter-trump/facebook-twitter-take-action-over-trumps-misleading-covid-19-posts-idUSKBN26R2Z3.
Staff. ‘New ‘fake news’ law stifles independent reporting in Russia on COVID-19.’ International Press Institute, 8 May 2020.
Svantesson, Dan. ‘A Jurisprudential Justification for Extraterritoriality in (Private) International Law.’ Santa Clara Journal of International Law 13 (2015): 517–571.
The editors. ‘Google: Arnaud Montebourg ne veut pas que la France devienne une colonie numérique des USA.’ ZDNet, 14 May 2014. www.zdnet.fr/actualites/google-arnaud-montebourg-ne-veut-pas-que-la-france-devienne-une-colonie-numerique-des-usa-39801099.htm.
Thumfart, Johannes. ‘Public and private just wars: Distributed cyber deterrence based on Vitoria and Grotius.’ Internet Policy Review 9 (2020): 3. Special Issue on ‘Geopolitics, Jurisdiction and Surveillance’. https://policyreview.info/articles/analysis/public-and-private-just-wars-distributed-cyber-deterrence-based-vitoria-and.
—— ‘Cultures of digital sovereignty in China, Russia and India. A comparative perspective from NWICO and WSIS to SCO.’ In Digital Sovereignty in the BRICS Countries, edited by Luca Belli and Min Jiang. To be published in 2022.
Tsourapas, Gerasimos. ‘Global Autocracies: Strategies of Transnational Repression, Legitimation, and Co-Optation in World Politics.’ International Studies Review, viaa061, 2020. https://doi.org/10.1093/isr/viaa061.
Traynor, Ian. ‘EU unveils plans to set up digital single market for online firms.’ The Guardian, 6 May 2015. www.theguardian.com/technology/2015/may/06/eu-unveils-plans-digital-single-market-online-firms.
Vatanparast, Roxana. ‘Data Governance and the Elasticity of Sovereignty.’ Brooklyn Journal of International Law 46, no. 1 (2020).

von der Leyen, Ursula. ‘A Union that strives for more. My agenda for Europe. Political Guidelines for the next European Commission 2019–2024.’ 13. https://ec.europa.eu/commission/sites/beta-political/files/political-guidelines-next-commission_en.pdf.
—— ‘State of the Union Address.’ 16 September 2020. https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_20_1655.
Waldron, Jeremy. ‘Two Conceptions of Self-Determination.’ In The Philosophy of International Law, edited by Samantha Besson and John Tasioulas. Oxford: Oxford UP, 2010. 397–413.
Wakefield, Jane. ‘Russia ‘successfully tests’ its unplugged internet.’ BBC, 24 December 2019.
Washington Post (staff). ‘Transcript: The Path Forward: Safeguarding Global Innovation with Keith J. Krach & Gen. Stanley A. McChrystal.’ Washington Post, 22 April 2021. www.washingtonpost.com/washington-post-live/2021/04/22/transcript-path-forward-safeguarding-global-innovation-with-keith-j-krach-gen-stanley-mcchrystal/.
Website of the Commission d’enquête sur la souveraineté numérique. www.senat.fr/commission/enquete/souverainete_numerique.html.
Wiener, Antje. A Theory of Contestation. Springer, 2014.
Wolfe, Thomas R. ‘A New International Information Order: The Developing World and the Free Flow of Information Controversy.’ Syracuse Journal of International Law and Commerce 8, no. 1, Article 7 (1980). Available at: https://surface.syr.edu/jilc/vol8/iss1/7.
Wu, Tim. ‘Cyberspace Sovereignty – The Internet and the International System.’ Harvard Journal of Law & Technology 10 (1996): 647–666.
Xudong, Wang. ‘China. Strengthening Cooperation, Promoting Development and Moving Towards the Information Society Together.’ Statement at the World Summit on the Information Society, 10 December 2003. www.itu.int/net/wsis/geneva/coverage/statements/china/cn.html.
Zeng, Jinhan et al. ‘China’s Solution to Global Cyber Governance: Unpacking the Domestic Discourse of ‘Internet Sovereignty’.’ Politics & Policy 45, no. 3 (2017): 432–464.
Zeng, Vivienne. ‘Alibaba’s Jack Ma sings praises of Xi’s global vision of ‘internet management’.’ Hong Kong Free Press, 18 December 2015. https://hongkongfp.com/2015/12/18/albabas-jack-ma-sings-praises-of-xis-global-vision-of-internet-management/.
Zhao, Tingyang. Redefining A Philosophy for World Governance. Singapore: Palgrave Macmillan, 2019.
Zhengping, Wang and Xu Tieguang (王正平 徐铁光). ‘Western cyber hegemony and cyber rights in developing countries (西方网络霸权主义与发展中国家的网络权利).’ Thought Front (思想战线) 37, no. 2 (2011).

2. Artountability: Art and Algorithmic Accountability

PETER BOOTH,1 LUCAS EVERS,2 EDUARD FOSCH VILLARONGA,3 CHRISTOPH LUTZ,4 FIONA McDERMOTT,5 PIERA RICCIO,6 VINCENT RIOUX,7 ALAN M SEARS,8 AURELIA TAMO-LARRIEUX9 AND MARANKE WIERINGA10

Abstract

Given the complexity of the inner workings of algorithms and the ulterior effects these systems may have on society, the European Union has begun an ‘Algorithmic Awareness-Building’ exercise to inform policy-making on the challenges and opportunities of algorithmic decisions. We contribute to this effort by identifying how art can have a strong voice in promoting algorithmic accountability and transparency in the public debate. After introducing the concepts of algorithmic accountability and transparency, we focus on the cognitive, affective, societal, educational, and ethical functions art can have in realising Europe’s goals.

Keywords

Art, privacy, transparency, algorithms, accountability, responsibility.

1 Associate Professor, BI Norwegian Business School (NO), [email protected].
2 Lead Waag’s Wetlab, WAAG, Technology & Society Foundation (NL), [email protected].
3 Assistant Professor, eLaw Center for Law and Digital Technologies, Leiden University, The Netherlands, [email protected].
4 Associate Professor, BI Norwegian Business School, Norway, [email protected].
5 Associate Professor, Trinity College Dublin (IE), [email protected].
6 Research Assistant, Politecnico di Torino (IT), [email protected].
7 Head of the Beaux-Arts de Paris Digital Pole, National Superior School of Fine Arts (FR), [email protected].
8 Researcher & Lecturer, eLaw Center for Law and Digital Technologies, Leiden University (NL), [email protected].
9 Postdoctoral Researcher, University of St. Gallen, Switzerland, [email protected].
10 PhD candidate, Datafied Society, Utrecht University (NL), [email protected].


I. Introduction

Governments and corporations worldwide employ automated decision-making to harness individuals’ characteristics and preferences.11 Such profiling and automated decision-making enable organisations to make their services and products more precise and effective.12 Similarly, governments can use detailed data to improve their relationship with their citizens, allow for more accessible services, predict crime, or anticipate tax and welfare fraud.13 Algorithmic profiling and automated decision-making techniques help analyse vast volumes of data continuously and in real time, which would be impossible to do otherwise.14,15 Although resource efficiency and increased productivity are parameters organisations often consider, automated decision-making usually supports ulterior motives that can impact individuals and society, for example, by violating privacy rights or raising inequality.16,17

Given the potential adverse effects that automated decision-making may have on individuals and society, European data protection law states that individuals have the right not to be subject to a decision based solely on automated processing (Article 22 of the General Data Protection Regulation (GDPR)). To avoid steep fines, companies and governments in Europe include some human elements in their decision-making processes, raising questions regarding accountability and distributed responsibility. Some authors even argue that the increasingly complex decision-making processes involving multiple agents, including algorithms, will challenge the ascription of responsibility to humans for their behaviour, creating a sort of responsibility gap.18

11 Moritz Büchi, Eduard Fosch-Villaronga, Christoph Lutz, Aurelia Tamò-Larrieux, Shruthi Velidi, and Salome Viljoen. 'The Chilling Effects of Algorithmic Profiling: Mapping the Issues,' Computer Law & Security Review 36 (2020): 105367, https://doi.org/10.1016/j.clsr.2019.105367. 12 Sandra Wachter, and Brent Mittelstadt, 'A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI' Columbia Business Law Review 2 (2019): 494–620. https://doi.org/10.7916/cblr.v2019i2.3424. 13 Jon Henley, and Robert Booth, 'Welfare surveillance system violates human rights, Dutch court rules', The Guardian (5 February 2020). www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules. 14 Panagiota Galetsi, Korina Katsaliaki, and Sameer Kumar, 'Big data analytics in health sector: Theoretical framework, techniques and prospects.' International Journal of Information Management 50, (2020): 206–216. https://doi.org/10.1016/j.ijinfomgt.2019.05.003. 15 R. H. Hamilton, and William A. Sodeman, 'The questions we ask: Opportunities and challenges for using big data analytics to strategically manage human capital resources.' Business Horizons 63(1), (2020): 85–95. https://doi.org/10.1016/j.bushor.2019.10.001. 16 Maximilian Kasy, and Rediet Abebe, 'Fairness, equality, and power in algorithmic decision-making.' In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021): 576–586. https://doi.org/10.1145/3442188.3445919. 17 Lelia Marie Hampton, 'Black feminist musings on algorithmic oppression.' arXiv preprint (2021) arXiv:2101.09869. https://doi.org/10.1145/3442188.3445929. 18 Deborah G. Johnson, and Helen Fay Nissenbaum, Computers, ethics & social values (Prentice-Hall, 1995).

Different disciplinary projects on accountability have emerged, trying to find solutions to accountability challenges and the responsibility gap. Some commentators believe that transparency could combat information asymmetries, foster trust, facilitate accountability, support autonomy, and allow for a greater level of control.19,20,21 Other scholars think about ways to embed privacy, including transparency, into the very design of their technological systems.22,23 A recurrent topic these scholars encounter is the low level of 'situation awareness' the general population has about such opaque practices.24 Situation awareness means 'being aware of what is happening around you and understanding what information means to you now and in the future'.25 Given the complexity of algorithms and the wider effects these systems may have on society, the European Union (EU) has created an 'Algorithmic Awareness-Building' strategy. The objective of the project is threefold. First, the EU wants to foster a broad understanding of the role algorithmic decision-making plays for individuals. This endeavour includes promoting a public debate on issues raised by algorithmic systems. Second, the EU wants to build an evidence-based resource documenting the problems that arise. Evidence-based approaches are a critical step in taking stock of the Union's challenges and learning from such implementations. Third, the EU wants to be able to propose actionable solutions to address the issues identified. These solutions can take the form of additional policies addressing (sector-)specific challenges or of new technologies that help address privacy, transparency, and discrimination issues. Furthermore, solutions stemming from the private sector and civil-society groups should be promoted within the EU. These objectives do not mention the role of art in contributing to, in particular, the identification of issues raised by algorithmic systems. Due to its cognitive, affective, societal, educational, and ethical functions, art plays a central role in promoting awareness about the challenges that arise from automated decision-making. Moreover, art can be a strong voice to be reckoned with when tackling specific difficulties in this context by offering novel imaginaries and alternatives.26

19 Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz and Aurelia Tamò-Larrieux. 'Towards transparency by design for artificial intelligence.' Science and Engineering Ethics, 26, (2020): 3333–3361. https://doi.org/10.1007/s11948-020-00276-4. 20 Jens Forssbaeck, and Lars Oxelheim, The Oxford handbook of economic and institutional transparency (Oxford Handbooks, 2014). 21 Bo Carlsson, 'Transparency of innovation policy.' The Oxford handbook of economic and institutional transparency (2014): 219–238. 22 Aurelia Tamò-Larrieux, Designing for privacy and its legal framework (Springer, 2018). 23 Felzmann, et al., 'Towards transparency'. 24 Peter Fröhlich, Matthias Baldauf, Thomas Meneweger, Ingrid Erickson, Manfred Tscheligi, Thomas Gable, Boris de Ruyter and Fabio Paternò, 'Everyday automation experience: non-expert users encountering ubiquitous automated systems.' Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, ACM (May 2019): 1–8. https://doi.org/10.1145/3290607.3299013. 25 Mica R. Endsley, and D.G. Jones, Designing for situation awareness: An approach to user-centered design (CRC Press, 2016), 13.

To raise awareness of algorithmic practices and reflect on the role of art in this process, we organised the Artountability panel at the Computers, Privacy, and Data Protection (CPDP) conference in 2021.27 At the interplay of art, society, and technology, the coined term 'artountability' combines accountability and art in the context of algorithms, shedding light on the accountability of algorithms from a multi-disciplinary angle. In this chapter, we summarise the panel's findings on interdisciplinary artistic perspectives on algorithmic accountability. We divided this chapter into six sections, including an introduction (I) and conclusion (VI). We start by elaborating upon the concept of algorithmic accountability (II). This literature review reveals the various aspects that are encompassed by the term algorithmic accountability. We then analyse the concept of transparency (III) and its multiple dimensions and meanings. This section aims to link transparency with accountability and provide concrete examples of how art can contribute to this relationship. After these more theoretical foundations (II and III), we reflect on the role of art in promoting accountability and transparency (IV). We describe five pathways, or functions, through which art may encourage accountability and transparency. This chapter also introduces specific art projects and installations that successfully engage with algorithmic accountability and transparency, as a sort of deep dive into the topic of accountability (or artountability, for that matter).

II.  Algorithmic Accountability

Accountability for algorithmic or computer systems is not new, per se. Numerous works have touched upon the theme of algorithmic accountability, albeit using different terms and stemming from other disciplines.28,29,30,31,32 Thus, while the term may be new, the theme or phenomenon certainly stands in a much older tradition of, for instance, computational accountability33,34 and literate programming.35

28 Johnson, et al., Computers. 29 D. E. Knuth, 'Literate programming'. The Computer Journal, 27(2), (1984): 97–111. https://doi.org/10.1093/comjnl/27.2.97. 30 Lawrence Lessig. Code: and other laws of cyberspace (Basic Books, 1999). 31 Frank Pasquale. The black box society: the secret algorithms that control money and information (Harvard University Press, 2015). 32 Alex Rosenblat, Tamara Kneese and Danah Boyd. 'Algorithmic accountability'. The Social, Cultural & Ethical Dimensions of Big Data, New York: Data & Society Research Institute, 2014. https://datasociety.net/pubs/2014-0317/AlgorithmicAccountabilityPrimer.pdf. 33 Batya Friedman and Helen Nissenbaum, 'Bias in computer systems.' ACM Transactions on Information Systems, 14(3), (1996): 330–347. https://doi.org/10.1145/249170.249184. 34 Helen Nissenbaum. 'Computing and accountability'. Communications of the ACM, 37(1), (1994): 72–80. https://doi.org/10.1145/175222.175228. 35 Knuth, 'Literate programming'.

'Algorithmic accountability' as a term, though frequently used,36,37,38 has only recently been theoretically excavated.39,40,41 Through an interdisciplinary systematic literature review on the phenomenon of algorithmic accountability, Wieringa42 defined it as follows: Algorithmic accountability concerns a networked account for a socio-technical algorithmic system, following the various stages of the system's life cycle. In this accountability relationship, multiple actors (e.g. decision makers, developers, users) have the obligation to explain and justify their use, design, and/or decisions of/ concerning the system and the subsequent effects of that conduct. As different kinds of actors are in play during the life of the system, they may be held to account by various types of fora (e.g. internal/external to the organization, formal/informal), either for particular aspects of the system (i.e. a modular account) or for the entirety of the system (i.e. an integral account). Such fora must be able to pose questions and pass judgement, after which one or several actors may face consequences. The relationship(s) between forum/fora and actor(s) departs from a particular perspective on accountability.

This definition builds upon existing accountability theory43 and augments it with the medium-specific characteristics of algorithmic systems. In other words, it bridges the gap between what is familiar (ie, much of accountability theory can fruitfully apply) and what is new (ie, the specifically algorithmic concerns). Importantly, algorithmic accountability needs to consider and follow the system's life cycle. Wieringa identified different kinds of considerations in play at various times during this life cycle and distinguishes between ex-ante, in medias res, and ex-post considerations. Ex-ante considerations busy themselves with questions of who, what, when, where, why, and whom it affects. In medias res considerations deal with development and implementation and comprise questions such as how something works, how values are safeguarded, and so forth. Finally, ex-post considerations deal with whence and whither questions; these are questions concerning maintenance, evaluation, and adaptation (or even termination).

36 Alex Rosenblat, Tamara Kneese and Danah Boyd. 'Algorithmic accountability'. The Social, Cultural & Ethical Dimensions of Big Data, New York: Data & Society Research Institute, 2014. https://datasociety.net/pubs/2014-0317/AlgorithmicAccountabilityPrimer.pdf. 37 Nicholas Diakopoulos. 'Accountability in algorithmic decision making.' Communications of the ACM, 59(2), (2016): 56–62. https://doi.org/10.1145/2844110. 38 Anton Vedder and Laurens Naudts. 'Accountability for the use of algorithms in a big data environment.' International Review of Law, Computers and Technology, 31(2), (2017): 206–224. https://doi.org/10.1080/13600869.2017.1298547. 39 Maranke A. Wieringa. 'What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability'. ACM Conference on Fairness, Accountability, and Transparency (FAT* '20), Barcelona: ACM, (2020): 1–18. https://doi.org/10.1145/3351095.3372833. 40 Joshua A. Kroll. 'Accountability in computer systems,' in The Oxford Handbook of the Ethics of AI: 181–196, eds M. D. Dubber, F. Pasquale and S. Das (Oxford University Press, 2020). Available at SSRN: https://ssrn.com/abstract=3608468. 41 Madalina Busuioc. 'Accountable artificial intelligence: Holding algorithms to account.' Public Administration Review, 2020. https://doi.org/10.1111/puar.13293. 42 Wieringa, 'What to account for,' 10. 43 Mark Bovens. 'Analysing and assessing accountability: A conceptual framework.' European Law Journal, 13(4), (2007): 447–468. https://doi.org/10.1111/j.1468-0386.2007.00378.x.

Accountability, wherever it is located in a system's life cycle, is facilitated and operationalised through other principles such as traceability,44 reviewability,45 fairness,46 and the more abstract principle of transparency. Principles such as traceability and reviewability make concrete what needs to be documented, logged, and explicitly formulated to make accountability possible in concrete situations. Fairness and other public values such as non-discrimination are ultimately at stake in the quest for accountability. For our present contribution assessing the role of art in algorithmic accountability, however, we focus on the more abstract level of transparency.
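To fix ideas, the networked, life-cycle view of accountability in Wieringa's definition can be rendered as a small data model. The sketch below is our own illustration (the field names are ours, not Wieringa's): it records which actor owes an account to which forum, at which stage of the system's life cycle, and whether the account is modular or integral.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    EX_ANTE = 'ex-ante'              # who/what/why questions before building
    IN_MEDIAS_RES = 'in medias res'  # development and implementation
    EX_POST = 'ex-post'              # maintenance, evaluation, termination

@dataclass
class Account:
    actor: str          # e.g. 'developer', 'decision maker', 'user'
    forum: str          # e.g. 'internal audit', 'regulator', 'public'
    stage: Stage
    scope: str          # 'modular' (one aspect) or 'integral' (whole system)
    justification: str  # the explanation the actor owes the forum

@dataclass
class AlgorithmicSystem:
    name: str
    accounts: list[Account] = field(default_factory=list)

    def accounts_for(self, stage: Stage) -> list[Account]:
        return [a for a in self.accounts if a.stage == stage]

system = AlgorithmicSystem('welfare-fraud scorer')
system.accounts.append(Account(
    actor='developer', forum='internal audit',
    stage=Stage.IN_MEDIAS_RES, scope='modular',
    justification='feature selection documented and reviewed'))
print([a.actor for a in system.accounts_for(Stage.IN_MEDIAS_RES)])
```

The point of the model is not the code itself but what it forces one to make explicit: every account names an actor, a forum, a life-cycle stage, and a scope.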

III. Transparency

Accountability often connects to the topic of transparency because transparency is usually 'seen as a means to provide accountability and show that something is done with due diligence'.47 However, transparency is not a one-dimensional concept, but one that raises different expectations as well as challenges.48 One dominant understanding of transparency, with a clear connection to accountability, is the ideal that 'sunlight is the best disinfectant' and that transparent communication makes underlying operations perceivable and detectable.49,50,51 This understanding is what is called the transparency-as-information paradigm: transparency is understood in a static way (information must be shared); it entails the idea of verifiability (whether someone can verify the truthfulness and accuracy of the information), as well as the notions of explainability and inspectability (only through the information can one explain certain [digital] processes and inspect them).

44 Joshua A. Kroll. 'Outlining traceability: A principle for operationalizing accountability in computing systems.' In FAccT '21: ACM Conference on Fairness, Accountability, and Transparency, March 2021, Toronto, CA (Virtual), (2021): 758–771. https://doi.org/10.1145/3442188.3445937. 45 Jennifer Cobbe, Michelle Seng Ah Lee and Jatinder Singh. 'Reviewable automated decision-making: A framework for accountable algorithmic systems'. FAccT '21: ACM Conference on Fairness, Accountability, and Transparency, March 2021, Toronto, CA: 598–609. https://doi.org/10.1145/3442188.3445921. 46 Michael Veale, Max Van Kleek and Reuben Binns. 'Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making.' CHI '18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Paper No. 440, pp 1–14). ACM. https://doi.org/10.1145/3173574.3174014. 47 Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz and Aurelia Tamò-Larrieux. 'Robots and transparency: The multiple dimensions of transparency in the context of robot technologies.' IEEE Robotics & Automation Magazine, 26(2), (2019): 71. https://doi.org/10.1109/MRA.2019.2904644. 48 Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz and Aurelia Tamò-Larrieux. 'Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns.' Big Data & Society, 6(1), (2019): 1–14. https://doi.org/10.1177/2053951719860542. 49 Felzmann, et al., 'Robots and transparency'. 50 Felzmann et al., 'Transparency you can trust'. 51 Felzmann, et al., 'Towards transparency'.

While this understanding of transparency enables accountability and auditability (information must be seen as a precondition for doing so), it should be embedded within the other dimensions of transparency. One such key dimension is performativity, or performative transparency. To achieve the objective of accountability, organisations need to perform transparency. In other words, transparency is a social and organisational phenomenon: an organisation performs transparency by outlining how it processes data, and it can do so more transparently or less transparently. An example of opaqueness (ie, the opposite of transparency) would be information overflow, which diminishes transparency because users cannot analyse the overwhelming amount of information. Here, transparency as information and transparency as performativity can come into conflict.52 The performative dimension of transparency also includes systems' ability to interoperate with each other and the ability of individuals and society to engage with the system. Another critical dimension of transparency, related to the performativity element, is its relational dimension. Organisations need to consider the audience to whom information is given, be it the user, the public, other developers, or government authorities.53 Only with the audience in mind can transparency be effectively performed and lead to accountability (eg, the relevant information is sent to a government authority in charge of auditing/controlling a procedure, and the information is presented to show compliance with the current framework). This requires considering who the appropriate target group is and what the corresponding information should be; decision criteria, and their justifiability, are required. Such targeting of information provision also signals trustworthiness and a willingness to be accountable. Lastly, transparency is embedded within a legal, regulatory, and organisational context. Concerning the legal context, much has been written about how the EU's General Data Protection Regulation (GDPR) mandates transparency (as a fundamental principle in Article 5(1) GDPR) and accountability (likewise a fundamental principle, found in Article 5(2) GDPR). However, those provisions have mainly focused on the information-provision dimension without giving developers much concrete guidance on implementing transparency requirements.54 In this respect, some authors have proposed taking a 'transparency by design' approach to help develop guiding principles that integrate the 'relevant contextual, technical, informational, and stakeholder-sensitive considerations' into the design of AI systems.55

52 Felzmann et al., 'Transparency you can trust'. 53 Tal Z. Zarsky, 'Transparent predictions.' University of Illinois Law Review 4 (2013): 1503–1570. www.illinoislawreview.org/wp-content/ilr-content/articles/2013/4/Zarsky.pdf. 54 Felzmann, et al., 'Towards transparency' 19.

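The relational dimension, ie that the same processing must be explained differently to different audiences, can be made concrete with a toy sketch. This is our own illustration; the audiences, the wording, and the 'DPIA #12' reference are all hypothetical:

```python
# audience-targeted disclosures instead of one undifferentiated dump
NOTICES = {
    'data subject': 'We use your purchase history to rank product offers. '
                    'You can opt out and request a human review.',
    'regulator': 'Processing purpose: offer ranking. Legal basis: Art 6(1)(f) '
                 'GDPR. Art 22 safeguards documented in DPIA #12.',
    'developer': 'Model: gradient-boosted trees; features logged in the '
                 'offers feature store; retrained weekly.',
}

def transparency_notice(audience: str) -> str:
    """Return the disclosure tailored to the audience; an unknown
    audience is an error, not a reason to fall back on boilerplate."""
    try:
        return NOTICES[audience]
    except KeyError:
        raise ValueError(f'No notice defined for audience: {audience!r}')

print(transparency_notice('regulator'))
```

The design choice worth noticing is that the audience is an explicit key: leaving it implicit is exactly what produces the information overflow described above.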

IV.  The Role of Art in Promoting Algorithmic Accountability and Transparency

Based on the fruitful discussions in the Artountability panel and further reflections after the panel, we identified different pathways through which art can have a strong voice in promoting algorithmic accountability and transparency. This section presents an overview of five functions art can have in fostering algorithmic accountability before demonstrating, with concrete cases, how different artworks creatively engage with AI systems, scrutinise their inherent biases, and engage with transparency and accountability issues. The five functions are: cognitive, affective, societal, educational, and ethical.

A.  The Functions of Art

i.  Cognitive Function

The cognitive function of art in promoting algorithmic accountability and transparency lies in its potential to foster awareness and prompt audiences to reflect on aspects they might otherwise overlook. Thus, art has the power to evoke thought and challenge existing narratives that may be harmful or considered inappropriate in contemporary times. Such existing narratives have been thematised in the literature on AI imaginaries.56,57 The studies show how critical interventions, often with an artistic focus, can contribute to 'consciousness-raising', for example, within the Museum of Random Memories (MoRM) project, a series of eight public engagement experiments across five countries.58 In reflecting on the MoRM project, Markham (p 390) writes that 'once the placid surfaces of everyday interfaces were broken through, participants immediately began to think more critically about how platforms or AI actually operate'. Through artistic research, the MoRM is a project wherein a speculative understanding is created about how our perception of reality and memory is mediated and changed through machine vision.

55 ibid. 56 Becky Kazansky and Stefania Milan, 'Bodies not templates: Contesting dominant algorithmic imaginaries.' New Media & Society 23(2), (2021): 363–381. https://doi.org/10.1177/1461444820929316. 57 Astrid Mager and Christian Katzenbach, 'Future imaginaries in the making and governing of digital technology: Multiple, contested, commodified.' New Media & Society 23(2), (2021): 223–236. https://doi.org/10.1177/1461444820929321. 58 Annette Markham, 'The limits of the imaginary: Challenges to intervening in future speculations of memory, data, and algorithms.' New Media & Society 23(3), (2020): 382–405. https://doi.org/10.1177/1461444820929322.

Thus, the MoRM project succeeded in fostering algorithmic transparency, attesting to the cognitive function of art. However, the conclusions relating to accountability are less straightforward and more pessimistic, especially where AI artifacts are already strongly naturalised and embedded into our everyday lives. In such cases, the participants failed to imagine alternative options to the dominant paradigm. Moreover, 'through MoRM, we can see how the naturalization process removes agency from all parties, human or not, effectively making it impossible to assign responsibility, blame, or accountability' (p 398).

ii.  Affective Function

Technology-inspired artworks can help overcome biases by challenging current narratives, stimulating our imagination, and providing us with diverse viewpoints on potential future scenarios and developments and their implications for society.59 It thus seems that the cognitive function alone might not be enough to foster algorithmic accountability. Art also has an emotional or affective function. It can evoke feelings and emotions, thus enabling a deeper connection to the subject. Aesthetic theory has stressed art's role in pleasing the audience or making it feel a certain way.60,61 Artwork that engages with AI systems can, for example, foster empathy and allow individuals to feel what it is like to be affected by an AI system. In this sense, art broadens perspectives and can potentially create transparency in a more grounded, personal, and relational manner. Participatory and socially engaged art is particularly conducive to this function.

iii.  Societal Function

In addition to the cognitive and affective functions, we also discussed societal aspects or functions of art in the panel and the ensuing reflections. According to this function, art brings together different stakeholders around a topic and creates novel communities. By doing so, art might interest new audiences in AI, transparency, and accountability or engage people emotionally. Thus, the societal function is connected to the other functions, and the functions are generally not mutually exclusive. Through new communities of practice, transparency could be enhanced as AI systems are scrutinised from more perspectives. The societal function can also lead to increased accountability, especially if the artwork manages to reach those with a say in the AI system's design or implementation.

59 Sabine Roeser, Veronica Alfano and Caroline Nevejan. 'The role of art in emotional-moral reflection on risky and controversial technologies: The case of BNCI.' Ethical Theory and Moral Practice, 21(2), (2018): 275–289. 60 Simon O'Sullivan, 'The aesthetics of affect: Thinking art beyond representation.' Angelaki: Journal of Theoretical Humanities, 6(3), (2001): 125–135. https://doi.org/10.1080/09697250120087987. 61 Ernst Van Alphen, 'Affective operations of art and literature.' RES: Anthropology and Aesthetics, 53(1), (2008): 20–30. https://doi.org/10.1086/RESvn1ms25608806.


iv.  Educational Function

Another aspect of art that should be stressed is its educational function. Both the creation and reception of art come with practical learning experiences. In terms of AI systems, artists' engagement with AI in their artwork leads to a deeper understanding of the systems themselves. From the artist's point of view, increased digital skills (eg, programming, video editing, 3-D printing) support engagement with new technologies and AI systems. These skills enable the artist to understand systems more meaningfully and deeply, offering personal transparency. The skills can then also be taught to other interested parties through participatory learning in art and design. An example of a project at the intersection of art and design highlighting the educational function is Project Alias. This open-source smart parasite can modify smart speakers (eg, Google Home, Amazon Echo), making them less dependent on the vendor's data ecosystem. In many cases, artists who leverage digital technologies also operate within an ethos of openness and make their code and protocols freely available for others to reuse. Thus, the educational function of art fosters the artist's personal development and a deeper understanding of transparency, enabling alternative design visions that can hold different parties to account and create communities of practice.
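The gating idea behind Project Alias, keeping the commercial speaker effectively deaf until a locally recognised wake word opens the channel, can be sketched as follows. This is our simplified reconstruction of the published concept, not the project's actual code; all callables are stand-ins for hardware and on-device machine-learning components:

```python
import time

def alias_loop(heard_local_wake_word, play_noise, stop_noise,
               speak_vendor_wake_word, cycles=3):
    """Conceptual gating loop: the speaker's microphones are masked with
    noise until a *locally* detected wake word opens the channel."""
    for _ in range(cycles):
        play_noise()                   # the speaker cannot overhear the room
        if heard_local_wake_word():    # on-device detection, no cloud involved
            stop_noise()
            speak_vendor_wake_word()   # e.g. plays 'Alexa' into the speaker's mics
            time.sleep(1)              # window for the user's actual query

# toy stubs so the sketch runs end-to-end
events = iter([False, True, False])
alias_loop(lambda: next(events, False),
           lambda: print('noise on'),
           lambda: print('noise off'),
           lambda: print('vendor wake word spoken'))
```

What the sketch makes visible, and what the educational function of such projects trades on, is that the privacy-relevant decision (when the vendor may listen) can be moved from the vendor's cloud to a device the user controls.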

v.  Ethical Function

Closely related to the above functions of art in promoting algorithmic transparency and accountability through better cognitive, affective, societal, and educational understandings is the ethical function. As a practice of producing meaning (here, of articulating what it means to be subject to what algorithms do), art can create an understanding of whether what algorithms do is correct and, more importantly, whether the way algorithms function is ethically justifiable. The 'how' relates to the way art can play a role in avoiding bias and shaping how normalcy is understood. The Normalising Machine, by Mushon Zer-Aviv, is an artwork that, by questioning what normalcy means, playfully and interactively shows how algorithms can function in discriminatory ways and how difficult it must be for self-learning algorithms to grasp the general rules by which humans decide what is normal and what is not. The audience is invited to point at an interactive screen and choose which of the two previous visitors to the installation looked more 'normal', thereby training the algorithm. What can be learned from this artwork is that algorithms can be discriminatory and morally and ethically ambivalent, and that careful consideration is needed for a successful societal implementation.


Figure 1  The Normalising Machine – An experiment in machine learning and algorithmic prejudice (Installation view) Mushon Zer-Aviv (2018), Software developed by Dan Stavy and Eran Weissenstern with additional help from Ingo Randolf using OpenFrameworks. Supported by Shenkar College, Print Screen Festival, Ars Electronica Festival. The 2012 version was developed with Yonatan Ben Simhon and with additional help from Stepan Boltalin

B.  Specific Art Projects to Promote Accountability and Transparency

Whether transparency translates into accountability depends on many factors. An often forgotten part of transparency is its demand side: information needs not only to be disclosed but also to be received, interpreted, and understood by the respective audience.62 For that, information needs to be adapted to the audience's needs. Given the previous section on the power of art in explicating power dynamics and raising awareness of societal issues, art could be instrumental in shaping a collective and societal understanding of the societal consequences of AI, increasing the throughput from transparency to accountability. Art could thus help realise the transparency goals of Article 13 of the proposed EU AI Act (2021), which stresses that high-risk AI systems need to be designed and developed in a way that ensures their operation is sufficiently transparent to enable users to understand and control how the AI system produces its output. We do not claim that art fulfils the transparency criteria of Article 13 AI Act directly; rather, it helps build greater sensitivity to AI-related logics and dangers. Art's educational role thus fosters algorithmic awareness, sensitivity, and literacy in the general public. This literacy is, in turn, required to effectively interpret and assess the information that users of AI are provided with.



62 Felzmann, et al., 'Robots and transparency'.


i.  Art for Enhancing the Transparency of AI

One of the most illustrative examinations of a single AI system's workings is 'Anatomy of an AI' by Kate Crawford and Vladan Joler.63 Taking the case of 'Alexa', a commonplace personal listening and speaking platform created by Amazon, Crawford and Joler dissect the complex multiscalar processes behind the production, training, and operation of the hardware entities. They illustrate the various actions set in motion through a single voice command, describing how 'in this fleeting moment of interaction, a vast matrix of capacities is invoked: interlaced chains of resource extraction, human labor and algorithmic processing across networks of mining, logistics, distribution, processing, prediction, and optimization'. While this study breaks down the visible infrastructural components, it also strives to shed light on the invisible processes that contribute to constituting devices with AI capabilities.

Figure 2  Kate Crawford and Vladan Joler, ‘Anatomy of an AI System: The Amazon Echo as an Anatomical Map of Human Labor, Data and Planetary Resources,’ AI Now Institute and Share Lab, (September 7, 2018) https://anatomyof.ai

The illustrated essay highlights how, as everyday physical objects are imbued with algorithms and computation, these processes go unnoticed by the user. As the authors state, '[w]hile consumers become accustomed to a small hardware device in their living rooms, or a phone app, or a semi-autonomous car, the real work is being done within machine learning systems that are generally remote from the user and utterly invisible to her'.

63 Kate Crawford, and Vladan Joler. 'Anatomy of AI Systems', https://anatomyof.ai/.

If Alexa is thus 'a disembodied listening agent that never shows its deep connections to remote systems', they aim to illustrate the 'black box' nature of the device by parsing the multiple opaque components that allow an AI to function. Within this visual compendium, Crawford and Joler synthesise and problematise the multiple unaccounted-for forces (economic, environmental, and socio-cultural) that have shaped the AI, including the materials, waste, energy, human labour, training data, machine learning, and neural nets. Ultimately, as an artwork, Anatomy of an AI calls upon us to pay attention to how AI systems have inputs and implications across multiple temporal and geographic scales, encouraging us to consider the complicated and often opaque entanglement of labour, resources, and data contained within.

ii.  Art Against Facial Recognition & Surveillance

Facial recognition applications can misrepresent reality and affect society. The Computer Vision Dazzle Camouflage (CV Dazzle) and WeAlgo art projects illustrate the dangers of these applications and show how important it is to raise awareness through participatory art practice.64 Adam Harvey's project CV Dazzle presents instructions and examples for employing make-up to thwart facial recognition technologies. Amongst other things, users of this camouflage tool are encouraged to obscure the nose bridge area and apply colours that contrast with their skin tone. While CV Dazzle's techniques may be ineffective against more sophisticated facial recognition technologies, the project's value lies in the educational outcomes that emerge through participation in an ongoing process of reverse engineering an all too imperceptible technology. Monahan (p 172) writes that 'discourses of the "right to hide" are weak variations of the "right to privacy"',65 and so it would be wrong to reduce artistic initiatives like CV Dazzle to an aesthetics of anti-surveillance. As an artwork that people wear, modify, and discuss, the project's political and societal value lies in the attention and awareness it draws to the threats of AI and surveillance technologies. The Netherlands-based collaborative project WeAlgo takes the dialogue on how surveillance technologies read us further. Launched during the COVID-19 crisis, when much communication occurred over video streaming services, WeAlgo offers a free video streaming alternative to commercial products. Importantly, it strips facial and background data down to only what facial technologies need to read and locate us. Which is to say, the mask that supplants our face when we talk on WeAlgo is how the machine sees us.
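How a simple detector 'sees' a face, and therefore what camouflage such as CV Dazzle tries to disrupt, can be illustrated with OpenCV's stock frontal-face detector. This is a minimal sketch of our own, using only classical Haar-cascade detection, which is far weaker than the modern recognition systems the project itself warns about; 'portrait.jpg' is a hypothetical input image:

```python
import cv2

# OpenCV ships a pretrained Haar cascade for frontal faces
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def count_faces(image_path: str) -> int:
    """Return how many face-like regions the detector finds. A successful
    'dazzle' look would drive this count to zero for this particular
    detector, and only for this one."""
    grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if grey is None:
        raise FileNotFoundError(image_path)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

print(count_faces('portrait.jpg'))
```

Running such a test before and after applying the make-up is, in effect, the reverse-engineering exercise the text describes: the participant probes an otherwise imperceptible classification process.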

64 Christopher Sonn and Alison Baker. 'Creating inclusive knowledges: Exploring the transformative potential of arts and cultural practice.' International Journal of Inclusive Education, 20(3), (2016): 215–228. https://doi.org/10.1080/13603116.2015.1047663. 65 Torin Monahan, 'The Right to Hide? Anti-Surveillance Camouflage and the Aestheticization of Resistance.' Communication and Critical/Cultural Studies, 12(2), (2016): 159–178. https://doi.org/10.1080/14791420.2015.1006646.

Arguably to a greater extent than CV Dazzle, WeAlgo takes its participants on a journey that is more than a prompt for the awareness that we are being reduced to bits of processed information. In illustrating the points that machines read on our faces, the art project strides towards showing 'the machines waving back at us'.66 As such, WeAlgo aligns with what Bridle has coined the 'New Aesthetic', a state in which the virtual world increasingly appears in the physical world, or, as Paul describes it, 'the process of seeing like and being seen through digital devices'.67 Like CV Dazzle, WeAlgo is not a direct attack on facial recognition technologies. Instead, it functions as a platform for consciousness-raising and a site to enact a more privacy-oriented society. Artist Martine Stig of the WeAlgo collaboration pointed out that WeAlgo's facial recognition technology, apart from carrying the problematic relation between privacy and surveillance algorithms, simultaneously makes users aware that technology can support privacy between users by using patterns instead of recognisable video feeds of human participants (see fig. 3).

Figure 3  Screenshot of wealgo.org, a project by Martine Stig, Stef Kolman, Tomas van der Wansum and Roel Noback (2021)

As works that rely on participation for their outcomes, both projects are limited by their audiences' size and demographic breadth. Since camouflage make-up takes considerable time to apply, and the WeAlgo platform cannot functionally compete with commercial streaming services, the challenge is to achieve consciousness-raising beyond an art audience.
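WeAlgo's core move, transmitting only the pattern a machine extracts from a face rather than the face itself, can be approximated in a few lines. The sketch below is our own illustration using MediaPipe's face-mesh model, which we do not claim is the technology WeAlgo actually uses; 'frame.jpg' stands in for a video frame:

```python
import cv2
import mediapipe as mp
import numpy as np

# FaceMesh extracts several hundred facial landmarks locally, without any cloud call
mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

img = cv2.imread('frame.jpg')           # hypothetical video frame
result = mesh.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

canvas = np.zeros_like(img)             # black background: no face, no room
if result.multi_face_landmarks:
    h, w = img.shape[:2]
    for lm in result.multi_face_landmarks[0].landmark:
        cv2.circle(canvas, (int(lm.x * w), int(lm.y * h)), 1, (255, 255, 255), -1)

cv2.imwrite('mask_only.jpg', canvas)    # what the other party would see
```

The output image contains only the landmark pattern, which is exactly the 'machine's view' the project turns back on its users.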

iii.  Art Against Algorithm’s Power to Exacerbate Biases Joy Boulawmini is an artist and researcher who addresses the topic of harmful algorithmic bias in her work. She is working to give a voice to people on this topic, 66 James Bridle. ‘Waving at the machines’ (transcript of a keynote presented at Web Directions South, Sydney, 2011). www.webdirections.org/resources/james-bridle-waving-at-the-machines/. 67 Christian Paul. ‘From immateriality to neomateriality: Art and the conditions of digital materiality.’ In ISEA 2015 – Proceedings of the 21st International Symposium on Electronic Art, 4. 2015, https:// isea2015.org/proceeding/submissions/ISEA2015_submission_154.pdf.

Buolamwini uses art and poetry to highlight the conscious and unconscious prejudices of machine learning and artificial intelligence applications, a phenomenon she refers to as 'the coded gaze'.69 One such artwork is a spoken-word visual poem entitled 'AI, Ain't I A Woman?'. It focuses on the biases and failures of artificial intelligence in recognising iconic African American women such as Oprah Winfrey, Serena Williams, and Michelle Obama. More recently, Buolamwini has also featured in the feature-length film 'Coded Bias', which charts her discovery of algorithmic racial bias in facial recognition technology.70 Collaborative work by artist Trevor Paglen and AI researcher Kate Crawford highlighted issues of bias embedded in image-based machine vision applications. Using classification systems from ImageNet (one of the largest publicly accessible online databases of images), in 2018 they developed the online artwork entitled 'ImageNet Roulette'.71 It enabled members of the public to upload personal photos to reveal the troubling ways in which the category 'person' within the ImageNet database classified people, often exposing the system as racist or sexist. After the launch of ImageNet Roulette, ImageNet announced the removal of 600,000 images of people from its database. On the aim of the artwork, they wrote:

We created ImageNet Roulette as a provocation: it acts as a window into some of the racist, misogynistic, cruel, and simply absurd categorizations embedded within ImageNet. It lets the training set 'speak for itself,' and in doing so, highlights why classifying people in this way is unscientific at best and deeply harmful at worst.72

For Paglen and Crawford, leveraging public artwork like this is significant for extending critical discourse on the potentially problematic workings of AI technologies beyond academic circles and into mainstream public discussions. In 2017, Caroline Sinders started developing her art and research project Feminist Dataset, intending to create an ideological dataset that protests the current gender-based biases and discrimination in the datasets utilised in AI systems.73 Also relevant to research on more representative datasets is the work of Mimi Onuoha: her artwork 'The Library of Missing Datasets'74 provides evidence for the phenomena that are usually absent from relevant datasets and shows how these absences reflect the priorities that society assigns to different individuals and different contexts.

68 Algorithmic Justice League, www.ajl.org/. 69 Joy Buolamwini. 'Poet of Code', www.poetofcode.com/. 70 Shalini Kantayya. 'Coded Bias', Independent Lens, www.pbs.org/independentlens/films/coded-bias/. 71 Kate Crawford, and Trevor Paglen. 'Excavating AI – The Politics of Images in Machine Learning Training Sets', https://excavating.ai/. 72 Christina Ruiz, 'Leading online database to remove 600,000 images after art project reveals its racist bias', The Art Newspaper, 23 September 2019, www.theartnewspaper.com/news/trevor-paglen. 73 Caroline Sinders, 'Feminist Dataset', https://carolinesinders.com/wp-content/uploads/2020/05/Feminist-Data-Set-Final-Draft-2020-0526.pdf. 74 Mimi Onuoha, 'The Library of Missing Datasets', https://mimionuoha.com/the-library-of-missing-datasets-v-20.
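The mechanism ImageNet Roulette exposed, a classifier confidently projecting its training labels onto any uploaded photo, can be demonstrated with any pretrained ImageNet model. The sketch below is a generic illustration using torchvision, not the artwork's actual code; ImageNet Roulette specifically used the troubling 'person' categories, which the standard torchvision label set does not include, and 'portrait.jpg' is a hypothetical input:

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open('portrait.jpg').convert('RGB')  # hypothetical input photo
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    # the model always answers with *some* label, however ill-fitting;
    # this unconditional confidence is what ImageNet Roulette made visible
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.2%}")
```

Letting the training set 'speak for itself' in this way makes the point the artists intended: whatever the image, the system returns labels whose provenance and appropriateness the user has no way to inspect.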


iv.  Art against Algorithm’s Power to Govern Our Future Algorithmic biases can involve people’s lives from different points of view. Work is one of the sectors in which systematic detriment of specific categories of persons can more closely affect our society. In the context of labour, algorithms can impact our lives when, for example, they are used as a support for deciding who is hired, promoted, or, even more radically, who gets access to a particular education. In 2018, collective no:topia proposed an interactive art installation addressing this topic, AI Oracle (see Fig. 4).75 This installation, exhibited in different locations worldwide, aimed to engage visitors to reflect on the power of automated decision-making when it comes to choices relating to their future. It is common to underestimate this issue if we believe that it only concerns marginal aspects of our lives.

Figure 4  Pictures of a visitor of AI Oracle during the 2019 exhibition in FUTURIUM museum (Berlin). Art installation by Collective Notopia (represented by Piera Riccio and Shirley Ogolla), structure design by Vincenzo Werner, photos by Stephan Melchior

The visitors to the installation are immersed in a dystopian scenario in which a machine scans them and simulates collecting all of their data: age, gender, sex, education, financial records, usage of social media and dating applications, as well as data about friends, pets, colleagues, and relatives. This scanning process finally leads to a decision regarding their future job. The visitors are confronted with a reality in which an algorithm chooses a career for them according to what the processing of their data suggests would suit them best. The dystopian exaggeration of the decision-making process has the main objective of inviting visitors to ask 'why?' and to be critical of the job proposed. Is this machine being fair towards everyone? Is it finding the best employment for all? Does it suffer from the stereotypical biases that we are still trying to fight today? In such a context, what would be the role of personal dreams and passion?



75 Shirley Ogolla, and Piera Riccio (Collective no:topia). 'AI Oracle', https://ai-oracle.info/.


V.  Discussion and Conclusion

Individuals' characteristics and preferences are used to build profiles and, combined with automated decision-making, to make products and services more precise and effective. While there may be benefits, algorithmic profiling and automated decision-making processes can impact individuals and society at large, for example, by invading individual privacy or increasing inequality. Given the complicated processes in such algorithmic systems and the existing legal framework, a responsibility gap may arise. Increasing transparency, embedding privacy into the technologies, and raising awareness are commonly proposed means to increase accountability in this area. The European Union, for instance, has begun an 'Algorithmic Awareness-Building' exercise to inform policy-making on the challenges and opportunities of algorithmic decision-making. An often overlooked area that could nevertheless contribute is art. This chapter thus explored how art can have a strong voice in promoting algorithmic accountability and transparency in the public debate. After introducing the intersection of algorithmic accountability and art, we elaborated upon the concept of algorithmic accountability and what exactly it constitutes. Algorithmic accountability needs to follow the system's life cycle, where ex-ante, in medias res, and ex-post considerations should be taken into account. Accountability, wherever it is located in a system's life cycle, is facilitated and operationalised through other principles such as traceability, reviewability, fairness, and the more abstract principle of transparency. The present contribution, in assessing the role of art regarding algorithmic accountability, focuses on the more abstract level of transparency. We then analysed the concept of transparency and its multiple dimensions. Transparency as information makes underlying operations perceivable and detectable, and entails the ideas of verifiability, explainability, and inspectability. While this understanding of transparency enables accountability and auditability, it should be embedded within the other dimensions of transparency. One such dimension is performative transparency: organisations perform transparency by outlining how they process data, which they can do in a transparent or opaque manner. Another critical dimension, related to the performativity element, is the relational dimension of transparency. Only with the audience in mind, ie, with the information provided targeted towards the relevant audience, can transparency be effectively performed and lead to accountability. Finally, transparency is embedded within a legal, regulatory, and organisational context. While the GDPR mandates transparency and accountability, a 'transparency by design' approach may help address its shortcomings.

While attempts to increase the legibility of algorithmic decision-making to bring about accountability are critical, it is also essential to acknowledge the limits of transparency as a responsive action. As Ananny and Crawford have noted concerning calls for transparency around algorithmic decision-making systems, transparency on its own will not mean users can change or influence these systems, as transparency can promote seeing without knowing.76 Next, the chapter delved into the role of art in promoting algorithmic accountability and transparency. The five functions of art – cognitive, affective, societal, educational, and ethical – were examined. The cognitive function of art in promoting algorithmic accountability and transparency lies in its potential to foster awareness, evoke thought, and challenge existing narratives that may be harmful or considered inappropriate in contemporary times. An important project in this area is the MoRM, which challenges participants to think critically about how AI systems operate. Art also has an emotional or affective function. Artwork that engages with AI systems can, for example, foster empathy and allow individuals to feel what it is like to be affected by an AI system. In this sense, art broadens perspectives and can potentially create transparency in a more grounded, personal, and relational manner. Art also has a societal function: it brings together different stakeholders around a topic and creates novel communities. By doing so, art might interest new audiences in AI, transparency, and accountability or engage people emotionally. As such, the societal function is connected to the other functions of art. The educational function of art fosters the artist's personal development and a deeper understanding of transparency, enabling alternative design visions that can hold different parties to account and create communities of practice. The ethical function of art addresses what it means to be subject to what algorithms do, creating an understanding of whether what algorithms do is correct and, more importantly, whether the way algorithms function is ethically justifiable. Art has the power to explicate power dynamics and raise awareness of societal issues, and it could be instrumental in shaping a collective and societal understanding of the societal consequences of AI, increasing the throughput from transparency to accountability. Art can also bring many different stakeholders together, from young audiences to engineers, to policymakers, to vulnerable groups subject to the discriminatory effects of such systems. Because of this multiplicity of stakeholders and art projects, artwork in general 'works' on several levels: first, by making use of the AI technologies that engineers and the industry use; second, by creating meaning and understanding about the ambiguous, complex, and controversial elements of how AI is developed and how it works in context; and third, by offering itself as an object to reflect upon, being present as a way to question without answering (art being, as it should be, useful in its uselessness).

76 Mike Ananny and Kate Crawford. 'Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.' New Media & Society, 20(3), (2018): 973–989. https://doi.org/10.1177/1461444816676645.

In the final section, we examined specific art projects that promote accountability and transparency. One example of art enhancing the transparency of AI is the 'Anatomy of an AI' project by Kate Crawford and Vladan Joler, which focuses on Amazon's Alexa. It breaks down the visible infrastructural components and aims to shed light on the invisible processes that contribute to constituting devices with AI capabilities. Ultimately, the Anatomy of an AI project encourages us to consider the complicated and often opaque entanglement of labour, resources, and data contained within. As for art against facial recognition and surveillance, the Computer Vision Dazzle Camouflage (CV Dazzle) and WeAlgo art projects illustrate the dangers of these applications and show how important it is to raise awareness through participatory art practice. CV Dazzle presents instructions and examples for employing make-up to thwart facial recognition technologies. WeAlgo offers a free video streaming alternative that strips away all facial and background data except what is necessary for facial technologies to read and locate us. There has also been art against algorithms' power to exacerbate biases. Buolamwini of the Algorithmic Justice League uses art and poetry to highlight the conscious and unconscious prejudices of machine learning and artificial intelligence applications. Collaborative work by artist Trevor Paglen and AI researcher Kate Crawford highlights issues of bias embedded in image-based machine vision applications, and has been essential for extending critical discourse on the potentially problematic workings of AI technologies beyond academic circles and into mainstream public discussions. Caroline Sinders developed her art and research project Feminist Dataset to protest the current gender-based biases and discrimination in the datasets utilised in AI systems. Finally, regarding art against algorithms' power to govern our future, the collective no:topia proposed an interactive art installation, AI Oracle, which aimed to engage visitors to reflect on the power of automated decision-making when it comes to choices relating to their future.

References

Algorithmic Justice League, www.ajl.org/.
Ananny, Mike, and Kate Crawford. 'Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability.' New Media & Society, 20(3), (2018): 973–989. https://doi.org/10.1177/1461444816676645.
Buolamwini, Joy, 'Poet of Code', www.poetofcode.com/.
Bovens, Mark. 'Analysing and assessing accountability: A conceptual framework.' European Law Journal, 13(4), (2007): 447–468. https://doi.org/10.1111/j.1468-0386.2007.00378.x.
Büchi, Moritz, Eduard Fosch-Villaronga, Christoph Lutz, Aurelia Tamò-Larrieux, Shruthi Velidi, and Salome Viljoen. 'The Chilling Effects of Algorithmic Profiling: Mapping the Issues.' Computer Law & Security Review 36 (2020): 105367. https://doi.org/10.1016/j.clsr.2019.105367.

Bridle, James. 'Waving at the machines' (transcript of a keynote presented at Web Directions South, Sydney, 2011). www.webdirections.org/resources/james-bridle-waving-at-the-machines/.
Busuioc, Madalina. 'Accountable artificial intelligence: Holding algorithms to account.' Public Administration Review, 2020. https://doi.org/10.1111/puar.13293.
Carlsson, Bo, 'Transparency of innovation policy.' The Oxford handbook of economic and institutional transparency (2014): 219–238.
Cobbe, Jennifer, Michelle Seng Ah Lee and Jatinder Singh. 'Reviewable automated decision-making: A framework for accountable algorithmic systems.' FAccT '21: ACM Conference on Fairness, Accountability, and Transparency, March 2021, Toronto, CA (Virtual): 598–609. https://doi.org/10.1145/3442188.3445921.
Crawford, Kate, and Vladan Joler. 'Anatomy of AI Systems', https://anatomyof.ai/.
Crawford, Kate, and Trevor Paglen. 'Excavating AI – The Politics of Images in Machine Learning Training Sets', https://excavating.ai/.
Diakopoulos, Nicholas. 'Accountability in algorithmic decision making.' Communications of the ACM, 59(2), (2016): 56–62. https://doi.org/10.1145/2844110.
Endsley, Mica R., and D. G. Jones. Designing for situation awareness: An approach to user-centered design. CRC Press, 2016.
European AI Act (2021). Retrieved from https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence (last accessed 2 June 2021).
Felzmann, Heike, Eduard Fosch-Villaronga, Christoph Lutz, and Aurelia Tamò-Larrieux. 'Robots and transparency: The multiple dimensions of transparency in the context of robot technologies.' IEEE Robotics & Automation Magazine, 26(2), (2019): 71–78. https://doi.org/10.1109/MRA.2019.2904644.
—— 'Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns.' Big Data & Society, 6(1), (2019): 1–14. https://doi.org/10.1177/2053951719860542.
—— 'Towards transparency by design for artificial intelligence.' Science and Engineering Ethics, 26, (2020): 3333–3361. https://doi.org/10.1007/s11948-020-00276-4.
Forssbaeck, Jens, and Lars Oxelheim. The Oxford handbook of economic and institutional transparency. Oxford Handbooks, 2014.
Friedman, Batya, and Helen Nissenbaum. 'Bias in computer systems.' ACM Transactions on Information Systems, 14(3), (1996): 330–347. https://doi.org/10.1145/249170.249184.
Fröhlich, Peter, Matthias Baldauf, Thomas Meneweger, Ingrid Erickson, Manfred Tscheligi, Thomas Gable, Boris de Ruyter, and Fabio Paternò. 'Everyday automation experience: non-expert users encountering ubiquitous automated systems.' Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, ACM (May 2019): 1–8. https://doi.org/10.1145/3290607.3299013.
Galetsi, Panagiota, Korina Katsaliaki, and Sameer Kumar. 'Big data analytics in health sector: Theoretical framework, techniques and prospects.' International Journal of Information Management, 50, (2020): 206–216. https://doi.org/10.1016/j.ijinfomgt.2019.05.003.
Hamilton, R. H., and William A. Sodeman. 'The questions we ask: Opportunities and challenges for using big data analytics to strategically manage human capital resources.' Business Horizons, 63(1), (2020): 85–95. https://doi.org/10.1016/j.bushor.2019.10.001.
Hampton, Lelia Marie, 'Black feminist musings on algorithmic oppression.' arXiv preprint (2021) arXiv:2101.09869. https://doi.org/10.1145/3442188.3445929.

Henley, Jon, and Robert Booth. 'Welfare surveillance system violates human rights, Dutch court rules.' The Guardian (5 February 2020). www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules.
Johnson, Deborah G., and Helen Fay Nissenbaum. Computers, ethics & social values. Prentice-Hall, 1995.
Kantayya, Shalini, 'Coded Bias', Independent Lens, www.pbs.org/independentlens/films/coded-bias/.
Kasy, Maximilian, and Rediet Abebe. 'Fairness, equality, and power in algorithmic decision-making.' Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021): 576–586. https://doi.org/10.1145/3442188.3445919.
Kazansky, Becky, and Stefania Milan. 'Bodies not templates: Contesting dominant algorithmic imaginaries.' New Media & Society, 23(2), (2021): 363–381. https://doi.org/10.1177/1461444820929316.
Knuth, D. E. 'Literate programming.' The Computer Journal, 27(2), (1984): 97–111. https://doi.org/10.1093/comjnl/27.2.97.
Kroll, Joshua A. 'Accountability in computer systems.' In The Oxford Handbook of the Ethics of AI, edited by M. D. Dubber, F. Pasquale, and S. Das, 181–196. Oxford University Press, 2020. Available at SSRN: https://ssrn.com/abstract=3608468.
Kroll, Joshua A. 'Outlining traceability: A principle for operationalizing accountability in computing systems.' FAccT '21: ACM Conference on Fairness, Accountability, and Transparency, March 2021, Toronto, CA (Virtual), (2021): 758–771. https://doi.org/10.1145/3442188.3445937.
Laiden, 'Accountability, AI and Art', www.laiden.org/events/accountability-ai-art.
Lessig, Lawrence. Code: and other laws of cyberspace. Basic Books, 1999.
Mager, Astrid, and Christian Katzenbach. 'Future imaginaries in the making and governing of digital technology: Multiple, contested, commodified.' New Media & Society, 23(2), (2021): 223–236. https://doi.org/10.1177/1461444820929321.
Markham, Annette, 'The limits of the imaginary: Challenges to intervening in future speculations of memory, data, and algorithms.' New Media & Society, 23(3), (2020): 382–405. https://doi.org/10.1177/1461444820929322.
Monahan, Torin. 'The Right to Hide? Anti-Surveillance Camouflage and the Aestheticization of Resistance.' Communication and Critical/Cultural Studies, 12(2), (2016): 159–178. https://doi.org/10.1080/14791420.2015.1006646.
Nissenbaum, Helen. 'Computing and accountability.' Communications of the ACM, 37(1), (1994): 72–80. https://doi.org/10.1145/175222.175228.
Ogolla, Shirley, and Piera Riccio (Collective no:topia). 'AI Oracle', https://ai-oracle.info/.
Onuoha, Mimi, 'The Library of Missing Datasets', https://mimionuoha.com/the-library-of-missing-datasets-v-20.
O'Sullivan, Simon. 'The aesthetics of affect: Thinking art beyond representation.' Angelaki: Journal of Theoretical Humanities, 6(3), (2001): 125–135. https://doi.org/10.1080/09697250120087987.
Pasquale, Frank. The black box society: the secret algorithms that control money and information. Harvard University Press, 2015.
Paul, Christian. 'From immateriality to neomateriality: Art and the conditions of digital materiality.' In ISEA 2015 – Proceedings of the 21st International Symposium on Electronic Art, 4. 2015. https://isea2015.org/proceeding/submissions/ISEA2015_submission_154.pdf.

66  Peter Booth et al Roeser, Sabine, Veronica Alfano, and Caroline Nevejan. ‘The role of art in emotional-moral reflection on risky and controversial technologies: The case of BNCI.’ Ethical Theory and Moral Practice, 21(2), (2018): 275–289. Rosenblat, Alex, Tamara Kneese, and Danah Boyd. ‘Algorithmic accountability.’ In The Social, Cultural & Ethical Dimensions of Big Data, New York: Data & Society Research Institute, 2014. https://datasociety.net/pubs/2014-0317/AlgorithmicAccountabilityPrimer.pdf. Ruiz, Christina, ‘Leading online database to remove 600,000 images after art project reveals its racist bias’, The Art Newspaper, 23 September 2019, www.theartnewspaper.com/news/ trevor-paglen. Sanders, Caroline, ‘Feminist Dataset’, https://carolinesinders.com/wp-content/uploads/ 2020/05/Feminist-Data-Set-Final-Draft-2020-0526.pdf. Sonn, Cristopher, and Alison Baker. ‘Creating inclusive knowledges: Exploring the transformative potential of arts and cultural practice.’ International Journal of Inclusive Education, 20(3), (2016): 215–228. https://doi.org/10.1080/13603116.2015.1047663. Tamò-Larrieux, Aurelia. Designing for privacy and its legal framework. Springer, 2018. Van Alphen, Ernst. ‘Affective operations of art and literature. RES: Anthropology and Aesthetics’, 53(1), (2008): 20–30. https://doi.org/10.1086/RESvn1ms25608806. Veale, Michael, Max Van Kleek, and Reuben Binns. ‘Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making.’ CHI ‘18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Paper No. 440, pp, 1–14). ACM. https://doi.org/10.1145/3173574.3174014. Vedder, Anton, and Laurens Naudts. ‘Accountability for the use of algorithms in a big data environment.’ International Review of Law, Computers and Technology, 31(2), (2017): 206–224. https://doi.org/10.1080/13600869.2017.1298547. Wachter, Sandra, and Brent Mittelstadt. ‘A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI.’ Columbia Business Law Review 2, (2019):494–620. https://doi.org/10.7916/cblr.v2019i2.3424. Wieringa, Maranke A. ‘What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability’. ACM Conference on Fairness, Accountability, and Transparency (FAT* ’20), Barcelona ACM, 2020:1–18, https://doi. org/10.1145/3351095.3372833. Zarsky, Tal Z. ‘Transparent predictions.’ University of Illinois Law Review, 4, (2013), 1503–1570. www.illinoislawreview.org/wp-content/ilr-content/articles/2013/4/Zarsky.pdf.

3
Expectations of Privacy: The Three Tests Deployed by the European Court of Human Rights
BART VAN DER SLOOT1

Abstract

The reasonable expectation of privacy is a concept deeply ingrained in the American approach to privacy protection. In order to determine whether a plaintiff can rely on the right to privacy, both a subjective test (the applicant’s expectation) and an objective test (the reasonableness of that expectation) are deployed. Though the test is lauded for its adaptivity and its connection to a common understanding of what is reasonable, critics have argued that judges will substitute the ‘common man’s’ feelings on a matter for their own legal reasoning, that the doctrine has been used to deny citizens privacy claims in public places, and that when privacy rights are structurally undermined, courts will deem it unreasonable for citizens to expect any privacy, leading to a normalisation of privacy infringements. Not surprisingly, the European Court of Human Rights has been reluctant to use the doctrine, only doing so in a very small number of cases, mostly in order to find that even in public and professional settings, citizens can reasonably expect to have privacy. The Court has adopted two additional tests, which aim to guarantee citizens’ privacy, even when such would not be ‘reasonable’ stricto sensu. First, governments must ensure citizens can always reasonably foresee when and under which circumstances their privacy might be limited and, more importantly, when not. Laws cannot grant such broad and wide powers to the executive branch that citizens can no longer reasonably expect any privacy. Second, even when it might no longer be ‘reasonable’ to expect any privacy, the Court will still honour claims to the extent that applicants have a ‘legitimate expectation of privacy’. For example, when royalty or celebrities are structurally harassed by the press and could not ‘reasonably’ expect to have any privacy in the public domain, the Court will grant their claim to privacy when their expectations were ‘legitimate’.

1 Associate Professor, Tilburg Institute for Law, Technology and Society, Tilburg University, Netherlands.


Keywords: Reasonable expectation; privacy; chilling effect; third party doctrine; European Convention on Human Rights.

I. Introduction

The term ‘reasonable expectation’ of privacy was first adopted in the US in the famous Katz case, which was relatively straightforward.2 Katz was suspected of engaging in criminal activities; he used a telephone booth just outside his home for those activities, which the police had wiretapped. Katz invoked his Fourth Amendment right, which holds:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.3

The Court found that Katz’s right to privacy had been violated: the Fourth Amendment protects people, not places, so Katz had not relinquished his right to privacy merely by making his calls in a public space, from a public telephone booth.4 ‘What a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection. But what he seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.’5 It was Justice John Marshall Harlan II, in his concurring opinion, who introduced the standard approach to determining the reasonable expectation of privacy:

My understanding of the rule that has emerged from prior decisions is that there is a twofold requirement, first that a person have exhibited an actual (subjective) expectation of privacy and, second, that the expectation be one that society is prepared to recognize as ‘reasonable.’ Thus a man’s home is, for most purposes, a place where he expects privacy, but objects, activities, or statements that he exposes to the ‘plain view’ of outsiders are not ‘protected’ because no intention to keep them to himself has been exhibited. On the other hand, conversations in the open would not be protected against being overheard, for the expectation of privacy under the circumstances would be unreasonable.6

2 Colin Shaff, ‘Is the Court Allergic to Katz? Problems Posed By New Methods of Electronic Surveillance to the Reasonable-Expectation-of-Privacy Test.’ Southern California Interdisciplinary Law Journal 23 (2014): 409. 3 Amendments to the Constitution of the United States of America. 4 Alicia Shelton, ‘A Reasonable Expectation of Privacy Online: Do Not Track Legislation.’ University of Baltimore Law Forum 45 (2014): 35. 5 Katz v United States 389 U.S. 347 (1967).

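Read schematically, Harlan’s concurrence defines a conjunctive two-stage test: protection requires both an actual (subjective) expectation and societal (objective) acceptance of that expectation as reasonable. A minimal sketch of that structure follows; the boolean parameters are hypothetical stand-ins for what are, in practice, contested judicial findings, not operational legal criteria.

```python
# A minimal schematic of Harlan's two-fold Katz test, for illustration only.
# Both parameters are hypothetical stand-ins for judicial assessments.

def katz_protected(subjective_expectation: bool,
                   objectively_reasonable: bool) -> bool:
    """Fourth Amendment protection under Katz requires both prongs."""
    return subjective_expectation and objectively_reasonable

# A conversation held in the open fails the objective prong,
# however sincerely the speaker expected privacy:
assert katz_protected(subjective_expectation=True,
                      objectively_reasonable=False) is False
```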

American courts have adopted the ‘reasonable expectation’ test in thousands of cases since, establishing it as the standard approach to privacy protection in the US.7 Though it has been adopted in most other common law jurisdictions, such as Australia,8 Canada,9 New Zealand,10 South Africa11 and the UK,12 the test has received mixed reviews.13 It is lauded for being intuitive and adaptive,14 for being directly linked to the common understanding of what is reasonable, so that it connects well to the general sentiment in society, and for allowing courts to deny claims where applicants have clearly relinquished their privacy (the infamous third party doctrine).15

The ‘third-party doctrine’ originated with two Supreme Court decisions in the late 1970s, United States v. Miller and Smith v. Maryland. Miller involved government access to financial records held by a bank, and Smith involved government access to telephone records held by a telephone company. Together, they are known for the rule that there is no Fourth Amendment interest in information knowingly and voluntarily revealed to ‘third parties.’ In this context, a ‘third party’ includes any non-governmental institution or entity established by law. Thus, under an aggressive reading of the third-party doctrine, the Fourth Amendment would not guarantee the privacy of any personal data held by any private company. This would include virtually all records of electronic communications, web browsing activity, and cloud data, to name just a few examples. In practice, congressional alarm over implications of this theory has resulted in legislation affording privacy to some categories of third-party records. In 1978, Congress passed the Right to Financial Privacy Act in response to Miller, and in 1986, it passed the Electronic Communications Privacy Act (ECPA) following Smith. Lawmakers have also created more targeted laws aimed at protecting the privacy of cable subscribers and video store customers. But these efforts have been scattershot and often hobbled by changing technologies. ECPA, for example, was ahead of its time in many respects, but it is now woefully outdated and in need of an overhaul to account for changes in the communications technology we use and how we use it.16

6 See further: Haley Plourde-Cole, ‘Back to Katz: Reasonable Expectation of Privacy in the Facebook Age.’ Fordham Urban Law Journal 38 (2010): 571. David C. Roth, ‘Florida v. Jardines: Trespassing on the Reasonable Expectation of Privacy.’ Denver University Law Review 91 (2013): 551. 7 The first time in Smith v Maryland 442 U.S. 735 (1979). 8 Mamoun Alazab, Seung-Hun Hong and Jenny Ng, ‘Louder bark with no bite: Privacy protection through the regulation of mandatory data breach notification in Australia.’ Future Generation Computer Systems 116 (2021). 9 Krista Boa, ‘Privacy outside the castle: Surveillance technologies and reasonable expectations of privacy in Canadian judicial reasoning.’ Surveillance & Society 4.4 (2007). 10 Tony Bennett, ‘Emerging privacy torts in Canada and New Zealand: an English perspective.’ European Intellectual Property Review 36.5 (2014): 298–305. 11 Jonathan Burchell, ‘The legal protection of privacy in South Africa: A transplantable hybrid.’ Electronic Journal of Comparative Law 13.1 (2009). 12 Nick Taylor, ‘A Conceptual Legal Framework for privacy, accountability and transparency in visual surveillance systems.’ Surveillance & Society 8.4 (2011): 455–470. 13 Samantha Arrington, ‘Expansion of the Katz Reasonable Expectation of Privacy Test Is Necessary to Perpetuate a Majoritarian View of the Reasonable Expectation of Privacy in Electronic Communications to Third Parties.’ University of Detroit Mercy Law Review 90 (2012): 179. 14 Samantha Arrington, ‘Expansion of the Katz Reasonable Expectation of Privacy Test Is Necessary to Perpetuate a Majoritarian View of the Reasonable Expectation of Privacy in Electronic Communications to Third Parties.’ University of Detroit Mercy Law Review 90 (2012): 179. Interestingly, Kerr has argued that in practice, the subjective test plays no role of significance. Orin S. Kerr, ‘‘Katz’ Has Only One Step: The Irrelevance of Subjective Expectations.’ The University of Chicago Law Review (2015): 113–134. 15 Neil Richards, ‘The Third Party Doctrine and the Future of the Cloud.’ Washington University Law Review 94 (2016): 1441. Richard A. Epstein, ‘Privacy and the Third Hand: Lessons from the Common Law of Reasonable Expectations.’ Berkeley Technical Law Journal 24 (2009): 1199.


However, critics have pointed to a number of drawbacks as well. First, the link to the general societal understanding of what is or is not reasonable to expect in terms of privacy means that when the common sentiment is that other values, such as economic or security-related interests, should prevail, privacy interferences might not meet the objective reasonableness test. The use of this concept ‘strongly suggests that society’s views, not the Court’s, are the most important determinants of the privacy afforded by the Fourth Amendment []’.17 Second, the doctrine is considered an ‘all-or-nothing concept; if a person has waived or ceded privacy to one person she has ceded it to all’.18 Another binary distinction applied by courts, namely that if a person has passively disclosed information in public areas, such as her face, she can no longer reasonably claim any privacy in this respect, such as when facial recognition systems are deployed, is also deemed too abstract and out of touch with modern-day society.19 Third, when governmental organisations or companies structurally undermine citizens’ privacy and citizens are aware of that, it might no longer be deemed reasonable for them to expect any privacy. This has been a problem especially with new monitoring techniques, such as mass surveillance, and might mean that the reasonable expectation of privacy doctrine normalises structural privacy infringements.20 Fourth and finally, the test, according to critics, places an unreasonable burden on the citizen to take precautionary measures.21 In Pineda-Moreno, for example, police officers attached a tracking device while a suspect’s car was parked in the driveway, and the court found that because the man had not taken steps to exclude individuals from his driveway, he did not have a reasonable expectation of privacy.22

16 Michael W. Price, ‘Rethinking Privacy: Fourth Amendment Papers and the Third-Party Doctrine.’ Journal of National Security Law & Policy 8 (2015): 247. 17 Christopher Slobogin, Privacy at risk: The new government surveillance and the Fourth Amendment (University of Chicago Press, 2008). See also: Christine S. Scott-Hayward, Henry F. Fradella and Ryan G. Fischer, ‘Does privacy require secrecy: Societal expectations of privacy in the digital age.’ American Journal of Criminal Law 43 (2015): 19. 18 Laurent Sacharoff, ‘The Relational Nature of Privacy.’ Lewis & Clark Law Review 16 (2012): 1249. 19 Andrew D. Selbst, ‘Contextual expectations of privacy.’ Cardozo Law Review 35 (2013): 643. 20 Matthew R. Koerner, ‘Drones and the fourth amendment: Redefining expectations of privacy.’ Duke Law Journal 64 (2014): 1129. 21 Madelaine Virginia Ford, ‘Mosaic Theory and the Fourth Amendment: How Jones Can Save Privacy in the Face of Evolving Technology.’ American University Journal of Gender, Social Policy & the Law 19.4 (2011): 18.

The doctrine of reasonable expectation of privacy might have worked well in the 1960s, critics have argued, in an epoch in which the boundary and demarcation between the private and the public domain, between what is personal and what is disclosed, or between what is communication data and what is metadata, were relatively fixed, and in which technologies to structurally monitor public spaces were non-existent or highly expensive and had limited functionalities. But it has led to weak privacy protection in current society, even though in recent times courts have been willing to take a somewhat more liberal interpretation of the test.23 Inter alia, the doctrine is said to have had as a consequence that social media messages, even when posted on private, password-protected pages, enjoy no privacy protection,24 that mass surveillance activities often do not fall under the protective scope of the right to privacy, that drone surveillance is not covered by the Fourth Amendment,25 that privacy in public places is often absent,26 that lawyer-client and other types of professional confidentiality can be limited,27 and that, in general, the ‘reasonable expectation’ doctrine leads to weak privacy rights in the digital domain.28

In Europe, privacy protection has traditionally been approached in a more legalistic manner, rooted in the continental rule of law approach to human rights.29 In the wake of the Second World War, countries were vividly aware of the dangers of continuous privacy violations and monitoring practices by totalitarian regimes, still in place at that time in countries such as Spain and the Soviet Union. In that light, the European Convention on Human Rights (ECHR), which is overseen by the European Court of Human Rights (ECtHR), was adopted in 1950, specifying in Article 8:

1. Everyone has the right to respect for his private and family life, his home and his correspondence.
2. There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.30

22 Pineda-Moreno, 591 F.3d, at 1216. 23 See in particular: Riley v California, 573 U.S. 373 (2014). 24 Brian Mund, ‘Social media searches and the reasonable expectation of privacy.’ Yale Journal of Law & Technology 19 (2017): 238. Stephen E. Henderson, ‘Expectations of privacy in social media.’ Mississippi College Law Review 31 (2012): 227. 25 Taly Matiteyahu, ‘Drone regulations and fourth amendment rights: The interaction of state drone statutes and the reasonable expectation of privacy.’ Columbia Journal of Law and Social Problems 48 (2014): 265. 26 Mariko Hirose, ‘Privacy in public spaces: The reasonable expectation of privacy against the dragnet use of facial recognition technology.’ Connecticut Law Review 49 (2016): 1591. Joel R. Reidenberg, ‘Privacy in public.’ University of Miami Law Review 69 (2014): 141. Teresa Scassa, ‘Information privacy in public space: location data, data protection and the reasonable expectation of privacy.’ Canadian Journal of Law and Technology 7.1 (2010): 7. 27 Edward J. Imwinkelried, ‘Dangerous Trend Blurring the Distinction between a Reasonable Expectation of Confidentiality in Privilege Law and a Reasonable Expectation of Privacy in Fourth Amendment Jurisprudence.’ Loyola of Los Angeles Law Review 57 (2011): 1. See also: Paul Stanley, The Law of Confidentiality: a Restatement (Bloomsbury Publishing, 2008). 28 Brandon T. Crowther, ‘(Un)reasonable expectation of digital privacy.’ Brigham Young University Law Review (2012): 343. 29 Jernej Letnar Černič, ‘Impact of the European Court of Human Rights on the rule of law in Central and Eastern Europe.’ Hague Journal on the Rule of Law 10.1 (2018): 111–137.


The ECtHR has been very hesitant to use the reasonable expectation doctrine. So far, as section II of this chapter will show, it has done so only in a small number of cases, mostly limited to one specific context: workplace privacy. When doing so, it has interpreted the doctrine in a way that is in many respects diametrically opposed to how it is used in the US. In addition, it has adopted two safety nets to avert the perils of the American approach. First, as section III will explain, the Court has stressed that governments must ensure that citizens are always able to reasonably foresee when their privacy will be interfered with and, importantly, when not. Thus, governments cannot adopt such broad and privacy-invasive legislation that citizens can no longer reasonably expect any privacy. This especially applies when governments adopt new technologies to structurally monitor public places or digital communications. Second, section IV will show that the Court stresses that when private organisations, in particular the press, violate citizens’ privacy, the question is not whether their privacy expectations were reasonable, but whether their expectations were legitimate. Thus, even when royalty or celebrities are structurally harassed by the press and could not ‘reasonably’ expect to have any privacy in the public domain, the Court will grant them a claim to privacy when their expectations were ‘legitimate’. Finally, section V will provide a brief conclusion and analysis.

The first test, discussed in section II, concerns the question of to what extent people can invoke the right to privacy in public or professional places or when they have disclosed information about themselves. The second and the third tests concern the question of whether, when an applicant can rely on her right to privacy, the interference with her right was legitimate or not. There are three requirements for legitimately curtailing the right to privacy under the ECHR: the interference must have a legal basis, it must serve a public interest and it must be necessary in a democratic society. The second test relates to the requirement of having a legal basis, the third test to the requirement of being necessary in a democratic society. The necessity requirement is where the Court also deploys the concepts of proportionality and subsidiarity.
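Read schematically, the three tests map onto the structure of Article 8 as a small decision procedure. The sketch below is an illustrative model only; the boolean fields are hypothetical stand-ins for judicial assessments, not operational legal criteria.

```python
# A schematic model of how the three tests map onto Article 8 ECHR.
# Illustration only: each field is a hypothetical stand-in for a judicial finding.

from dataclasses import dataclass

@dataclass
class Complaint:
    reasonable_expectation: bool   # Test 1 (Art 8 § 1): can the applicant invoke privacy?
    law_foreseeable: bool          # Test 2 (Art 8 § 2): 'in accordance with the law'
    legitimate_aim: bool           # Art 8 § 2: serves a listed public interest
    necessary_in_democracy: bool   # Test 3 (Art 8 § 2): necessity, proportionality, subsidiarity

def article_8_outcome(c: Complaint) -> str:
    if not c.reasonable_expectation:
        return "Article 8 § 1 not engaged: no claim to privacy"
    if not c.law_foreseeable:
        return "violation: interference not 'in accordance with the law'"
    if not c.legitimate_aim:
        return "violation: interference serves no legitimate aim"
    if not c.necessary_in_democracy:
        return "violation: not 'necessary in a democratic society'"
    return "no violation: interference justified under Article 8 § 2"
```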



30 Convention for the Protection of Human Rights and Fundamental Freedoms, Rome, 4.XI.1950.


II. Test 1: Reasonable Expectation (as per the question of whether the applicant can invoke the right to privacy Article 8 § 1)

Of the 8,000 or so judgments, decisions, advisory opinions, reports and resolutions on Article 8 ECHR that have been issued so far, the term ‘reasonable expectation of privacy’ has been used by the Court in only about 20.31 Most of these judgments are issued against countries that use the concept in their national legal system,32 in particular the UK,33 with 9 of the 20 cases being issued against that country.34 It might be argued that the Court simply adopts the language that has already been used in the national legal procedure. Consequently, although in scholarly literature it is sometimes suggested that the ‘reasonable expectation’ test is one of the established ways in which the European Court addresses matters concerning privacy,35 this is very much an overstatement. This is evidenced not only by the small number of cases and the fact that these cases are predominantly issued against only one of the 47 Member States of the Council of Europe, but also by the fact that the doctrine was only introduced late in the jurisprudence of the Court and that it has been mostly limited to one specific context: workplace privacy.36

31 The term is used more often in cases where it is referred to by applicants in their pleas, but not adopted by the Court. See inter alia: ECtHR, Martin v United Kingdom, app no. 63608/00, 27 March 2003. ECtHR, Sorvisto v Finland, app no. 19348/04, 13 January 2009. ECtHR, Azukaitiené v Lithuania, app no. 59764/13, 22 October 2019. It is sometimes referred to by governments, eg: ECtHR, Neulinger and Shuruk v Switzerland, app no. 41615/07, 06 July 2010. It is sometimes referred to in separate opinions, eg: ECtHR, Lambert v France, app no. 23618/94, 01 July 1997. And it is used when the Court refers to national legal rules, eg: ECtHR, Mosley v United Kingdom, app no. 48009/08, 10 May 2011. 32 Interestingly, the same does not hold true for Ireland, where the doctrine is also used. Bruce Carolan, ‘The Implications of the Right to Privacy under the European Convention on Human Rights for Irish Personal Injury Claims.’ Irish Journal of European Law 4 (1995): 161. 33 Emily Whittaker, ‘Campbell v Mirror Group Newspapers LTD [2004] UKHL 22: The Relationship between the European Convention on Human Rights and Privacy in the Common Law.’ UK Law Students Review 1 (2012): 94. Mark Thomson, ‘The Increasing Protection of Personal Privacy.’ Convergence 4 (2008): 257. Marc P. Misthal, ‘Reigning in the Paparazzi: The Human Rights Act, The European Convention on Human Rights and Fundamental Freedoms, and the Rights of Privacy and Publicity in England.’ International Legal Perspective 10 (1998): 287. 34 Michael Ford, ‘Two conceptions of worker privacy.’ Industrial Law Journal 31.2 (2002): 135–155. Eric Barendt, ‘Problems with the ‘reasonable expectation of privacy’ test.’ Journal of Media Law 8.2 (2016): 129–137. 35 Eg H. Tomás Gómez-Arostegui, ‘Defining private life under the European convention on human rights by referring to reasonable expectations.’ California Western International Law Journal 35 (2004): 153. 36 Sjaak Nouwt, Berend R. De Vries, and Corien Prins, Reasonable expectations of privacy?: eleven country reports on camera surveillance and workplace privacy (Cambridge University Press, 2005). See also for the American approach to workplace privacy: Nonnie L. Shivers, ‘Firing Immoral Public Employees: If Article 8 of the European Convention on Human Rights Protects Employee Privacy Rights, Then Why Can’t We,’ Arizona Journal of International and Comparative Law 21 (2004): 621. John D.R. Craig and Hazel D. Oliver, ‘The Right to Privacy in the Public Workplace: Should the Private Sector be Concerned?’ Industrial Law Journal 27.1 (1998): 49–59. Keith David Ewing, ‘The Human Rights Act and Labour Law,’ Industrial Law Journal 27.4 (1998): 275–292.

The first time the ‘reasonable expectation’ doctrine was used in a case before the ECtHR was by the British Government, in the admissibility decision of Halford v UK (1995), in which the applicant alleged that as a result of her complaints of sex discrimination, she was subjected to surveillance, including the bugging of her office and the interception of her calls on her private home telephone and her office telephones. The applicant had two telephones in her office: one telephone with an external number for personal calls and one telephone for police work. The calls from both these telephones were paid for by the police, her employer. Consequently, the government argued that ‘the applicant had no reasonable expectation of privacy in relation to those telephones’.37 The Court (1997), however, stressed that business premises may, under certain circumstances, constitute a person’s ‘home’, as protected under paragraph 1 of Article 8 ECHR, that ‘private life’, another term mentioned in that paragraph, might extend to a person’s workplace, and that ‘correspondence’ includes communication both from private and from business telephones.38

There is no evidence of any warning having been given to Ms Halford, as a user of the internal telecommunications system operated at the Merseyside police headquarters, that calls made on that system would be liable to interception. She would, the Court considers, have had a reasonable expectation of privacy for such calls, which expectation was moreover reinforced by a number of factors. As Assistant Chief Constable she had sole use of her office where there were two telephones, one of which was specifically designated for her private use. Furthermore, she had been given the assurance, in response to a memorandum, that she could use her office telephones for the purposes of her sex-discrimination case. For all of the above reasons, the Court concludes that the conversations held by Ms Halford on her office telephones fell within the scope of the notions of ‘private life’ and ‘correspondence’ and that Article 8 is therefore applicable to this part of the complaint.39

In P.G. and J.H. v the United Kingdom (2001), the Court went one step further, when the applicants complained that listening devices had been used while they were at the police station in order to obtain voice samples. The Court explicitly stressed that ‘a person’s reasonable expectations as to privacy may be a significant, although not necessarily conclusive, factor’ and stressed that people should have privacy in public spaces as well, especially when systematic or permanent records were made.40 Consequently, the Court found that although it was true that, when being charged, the applicants answered formal questions in a place where police officers were listening to them, the recording and analysis of their voices on this occasion must still be regarded as an interference with their privacy.41

37 ECtHR, Halford v United Kingdom, app no. 20605/92, 02 March 1995. See also: ECtHR, Halford v United Kingdom, app no. 20605/92, 18 April 1996. 38 Elena Sychenko and Daria Chernyaeva, ‘The Impact of the ECHR on Employee’s Privacy Protection’ Italian Labour Law e-Journal 12.2 (2019): 171–188. 39 ECtHR, Halford v United Kingdom, app no. 20605/92, 25 June 1997. 40 ECtHR, P.G. and J.H. v United Kingdom, app no. 44787/98, 25 September 2001. 41 This line was confirmed in: ECtHR, Perry v United Kingdom, app no. 63737/00, 17 July 2003. See for a case in which this doctrine is applied to professional secrecy in lawyer-client relationships: ECtHR, Aalmoes and others v Netherlands, app no. 16269/02, 25 November 2004.

Subsequently, in Copland v the United Kingdom (2007), the applicant’s telephone, e-mail and internet usage were subjected to monitoring by her employer, Carmarthenshire College, a statutory body administered by the state. The Court stressed that the applicant ‘had been given no warning that her calls would be liable to monitoring, therefore she had a reasonable expectation as to the privacy of calls made from her work telephone. The same expectation should apply in relation to the applicant’s e-mail and Internet usage’.42 Slowly but surely, the Court deployed the reasonable expectation of privacy doctrine in cases other than those against the UK, though mainly keeping it to professional and work-related settings. In Peev v Bulgaria (2007), the applicant complained that his office had been searched and his draft resignation letter seized. Primarily referring to the subjective side of the expectation of privacy test, the Court held:

in the Court’s opinion, the applicant did have a reasonable expectation of privacy, if not in respect of the entirety of his office, at least in respect of his desk and his filing cabinets. This is shown by the great number of personal belongings that he kept there. Moreover, such an arrangement is implicit in habitual employer-employee relations and there is nothing in the particular circumstances of the case – such as a regulation or stated policy of the applicant’s employer discouraging employees from storing personal papers and effects in their desks or filing cabinets – to suggest that the applicant’s expectation was unwarranted or unreasonable. The fact that he was employed by a public authority and that his office was located on government premises does not of itself alter this conclusion, especially considering that the applicant was not a prosecutor, but a criminology expert employed by the Prosecutor’s Office. Therefore, a search which extended to the applicant’s desk and filing cabinets must be regarded as an interference with his private life.43

In Steeg v Germany (2008), the Court extended the reasonable expectation of privacy to the contents of documents and electronic storage media.44 A remarkable next step was taken in Pay v UK (2008), in which the applicant had joined the Lancashire Probation Service and was involved in the treatment of sex offenders. He was also the director of Roissy, an organisation that advertised its services on the internet as the builder and supplier of BDSM products and the organiser of BDSM events and performances. A photograph of the applicant, wearing a mask, with two semi-naked women was also circulated. Roissy was registered at the applicant’s address and its website included links to a number of BDSM websites which advertised various events and included photographs of the applicant and others, semi-naked, performing acts which the accompanying text indicated had taken place at a local private members’ club and involved male domination over submissive women. He was fired from his work because his behaviour was seen as incompatible with his treatment of sex offenders. The Court acknowledged that the nature of the applicant’s acts was shown in the internet photographs and referred to in advertisements.



42 ECtHR, Copland v United Kingdom, app no. 62617/00, 03 April 2007. 43 ECtHR, Peev v Bulgaria, app no. 64209/01, 26 July 2007. 44 ECtHR, Steeg v Germany, app nos. 9676/05, 10744/05 and 41349/06, 03 June 2008.

It acknowledged that the applicant’s conduct and openness about it could give rise to doubts as to whether the applicant’s activities may be said to fall within the scope of private life and, if so, whether [] there has been a waiver or forfeiture of the rights guaranteed by Article 8. The Court notes, however, that the applicant’s performances took place in a nightclub which was likely to be frequented only by a self-selecting group of like-minded people and that the photographs of his act which were published on the internet were anonymised.45

That is why the ECtHR was prepared to assess the case on the assumption that the applicant had a reasonable expectation of privacy. Though in Friend and others v the UK (2009),46 J.S. v the United Kingdom (2015)47 and Garamukanwa v UK (2019),48 the Court set out limitations to its approach, in Gillan and Quinton v the UK (2010),49 Köpke v Germany (2010)50 and Vukota-Bojic v Switzerland (2016),51 it reaffirmed its wide interpretation of the ‘reasonable expectation of privacy’ doctrine. In other cases, it even extended its scope. For example, in Barbulescu v Romania (2016), the Court had to examine whether the applicant had a reasonable expectation of privacy when communicating from the Yahoo Messenger account that he had registered at his employer’s request. The Court noted that the applicant’s employer’s internal regulations strictly prohibited employees from using the company’s computers and resources for personal purposes. Therefore, there was an important difference from Halford and Copland, in which the personal use of an office telephone was allowed or, at least, tolerated, and from Peev, in which the employer’s regulations did not forbid employees to keep personal belongings in their professional office. The object of the applicant’s complaint before the Court was limited to the monitoring of his communications within the framework of disciplinary proceedings; the employer’s decision to terminate the applicant’s contract was based neither on the actual content of his communications nor on the fact of their eventual disclosure. The Court therefore had to determine whether, in view of the general prohibition imposed by his employer, the applicant retained a reasonable expectation that his communications would not be monitored. He did, the ECtHR found, pointing to the fact that transcripts of his communications were used in the domestic labour court proceedings and that some of the messages were of a private and intimate nature.52 The Grand Chamber (2017) reaffirmed that finding, stressing that ‘an employer’s instructions cannot reduce private social life in the workplace to zero’.53

45 ECtHR, Pay v United Kingdom, app no. 32792/05, 16 September 2008. 46 ECtHR, Friend and others v United Kingdom, app nos. 16072/06 and 27809/08, 24 November 2009. See also: ECtHR, Haldimann and others v Switzerland, app no. 21830/09, 24 February 2015. 47 ECtHR, J.S. v United Kingdom, app no. 445/10, 03 March 2015. 48 ECtHR, Garamukanwa v United Kingdom, app no. 70573/17, 14 May 2019. 49 ECtHR, Gillan and Quinton v United Kingdom, app no. 4158/05, 12 January 2010. 50 ECtHR, Köpke v Germany, app no. 420/07, 05 October 2010. 51 ECtHR, Vukota-Bojic v Switzerland, app no. 61838/10, 18 October 2016. 52 ECtHR, Barbulescu v Romania, app no. 61496/08, 12 January 2016. See also the Partly Dissenting Opinion of Judge Pinto De Albuquerque. 53 ECtHR, Barbulescu v Romania, app no. 61496/08, 05 September 2017.

In Akhlyustin v Russia (2017), this line was confirmed,54 and in Antovic and Mirkovic v Montenegro (2017), the ECtHR extended its approach to recordings made in classrooms by university teachers, possibly to assess their teaching quality; classrooms, although public rather than private spaces, are workplaces where social relationships are developed.55 In their joint concurring opinion, judges Vucinic and Lemmens explained that there is a reasonable expectation not only to keep private things private, even in professional or public spaces, but also to keep information and social interactions shared between a secluded number of persons restricted to that group.56 This may be viewed as a direct response to the infamous American ‘third party doctrine’.

A final important step was taken in Benedik v Slovenia (2018), in which the Court extended its approach beyond the workplace in a controversial matter. The applicant’s internet connection was monitored, because he was spreading child pornography. Remarkably, the Court stressed that from a subjective angle, the applicant expected that his activities would remain private and that his identity would not be disclosed. Although the ECtHR accepted that the applicant did not hide his dynamic IP address, it also underlined that this cannot be decisive in the assessment of whether his expectation of privacy was reasonable from an objective standpoint. On that point, the Court reiterated that anonymity on the internet is an important aspect of the right to privacy, that a dynamic IP address, even if visible to other users of the network, cannot be traced to the specific computer without the ISP’s verification, and concluded that it was sufficient to note that Article 37 of the Constitution guaranteed the privacy of correspondence and of communications and required that any interference with this right be based on a court order. Therefore, also from the standpoint of the legislation in force at the relevant time, the applicant’s expectation of privacy with respect to his online activity could not be said to be unwarranted or unreasonable.57

54 ECtHR, Akhlyustin v Russia, app no. 21200/05, 07 November 2017. 55 ECtHR, Antovic and Mirkovic v Montenegro, app no. 70838/13, 28 November 2017. 56 ‘It seems to us that in such an interaction the teacher may have an expectation of privacy, in the sense that he or she may normally expect that what is going on in the classroom can be followed only by those who are entitled to attend the class and who actually attend it. No ‘unwanted attention’ from others, who have nothing to do with the class. [] It seems to us that at least in an academic environment, where both the teaching and the learning activities are covered by academic freedom, the said expectation of privacy can be considered a ‘reasonable’ one. Surveillance as a measure of control by the dean is, in our opinion, not something a teacher should normally expect.’ In their dissenting opinion, judges Spano, Bianku and Kjolbro disagreed with the finding of an interference with the applicants’ private lives. They found conclusive that the video monitoring took place at the university auditoriums, that the applicants had been notified of the video surveillance, that what was monitored was the applicants’ professional activity, that the surveillance was remote, that there was no audio recording and thus no recording of the teaching or discussions, that the pictures were blurred and the persons could not easily be recognised, that the video recordings were only accessible to the dean and were automatically deleted after 30 days, and that the data or information was not subsequently used. The ECtHR subsequently reaffirmed its lenient approach: ECtHR, López Ribalda and others v Spain, app nos. 1874/13 and 8567/13, 09 January 2018. ECtHR, López Ribalda and others v Spain, app nos. 1874/13 and 8567/13, 17 October 2019. ECtHR, Libert v France, app no. 588/13, 22 February 2018. 57 ECtHR, Benedik v Slovenia, app no. 62357/14, 24 April 2018. Judge Yudkivska, joined by judge Bosnjak, in a concurring opinion, stressed that the majority had not been firm enough in its stance with respect to the reasonable expectation of privacy: ‘there should be no doubt that his expectations of privacy were perfectly legitimate, notwithstanding the abhorrently illegal character of his activity []’. Judge Vehabovic, in a dissenting opinion, to the contrary, felt that the applicant did not have a reasonable expectation of privacy, specifically disagreeing with the Court’s approach whereby ‘the subjective angle of the applicant on his expectation for privacy should be taken into account where a criminal activity is under consideration. In nearly all cases, criminals would not wish their activities to be known to others. This kind of expectation of privacy would not be reasonable when based on an unlawful, or in this case a criminal, incentive. An expectation to hide criminal activity should not be considered as reasonable. On a second issue concerning the reasonable expectation of privacy, the applicant exchanged files including child pornography through a public network account which was visible to others. The applicant therefore knew, or ought to have known, that his actions were not anonymous. The applicant did not intend to conceal his activity at the time of commission of the offence.’


III. Test 2: Reasonable Foreseeable (as per the ‘in accordance with the law’ requirement Article 8 § 2)

From the previous section, it is clear that the ECtHR uses the ‘reasonable expectation of privacy’ doctrine in a fundamentally different way from how it is applied in the US. Rather than stressing that in public places, people have no or limited privacy, the Court will find that citizens do have privacy, even in public or professional arenas; rather than finding that when citizens have voluntarily shared their data with others, they can no longer invoke a right to privacy, the ECtHR has stressed that even when citizens advertise and post on public websites certain extreme sexual behaviour, they have a reasonable expectation of privacy; rather than finding that when people know of and have consented to employers monitoring their behaviour, they have relinquished their privacy, the Court has stressed that no agreement can ever reduce a reasonable expectation of privacy in the workplace; and rather than testing whether subjective expectations were ‘reasonable’ to the common man, the ECtHR will rather quickly find that if applicants believed that their documents, information or conduct would remain secret, they can successfully rely on Article 8 ECHR, even when they engage in clearly illegitimate conduct.58

In addition to this fundamental reinterpretation of the American doctrine, the ECtHR has stressed that citizens must always be able to reasonably foresee when their right to privacy might be interfered with, and, more importantly, when not. Governments are not allowed to adopt laws that grant such wide powers at the discretion of the executive power that citizens are no longer able to assess when they can reasonably expect to enjoy privacy. Rather, governments should specify in detail when, how and under which circumstances citizens’ privacy might be interfered with. This doctrine is linked to paragraph 2 of Article 8 ECHR, which specifies that the right to privacy can be limited if three requirements are met: an interference is prescribed by law, serves a legitimate interest and is necessary in a democratic society.

58 A more elaborate analysis of the topic discussed in this section can be found in: Bart van der Sloot, ‘The quality of law: How the European Court of Human Rights gradually became a European Constitutional Court for privacy cases.’ Journal of Intellectual Property, Information Technology and Electronic Commerce Law 11 (2020): 160.

The first time the Court set out the contours of this doctrine was in the case of the Sunday Times v the UK (1979), in which the applicants argued, inter alia, that the law of contempt of court was so vague and uncertain, and the principles enunciated in a decision at national level so novel, that the restraint imposed on them could not be regarded as ‘prescribed by law’. The Court stressed that two requirements derive from the requirement to have a legal basis for interferences with human rights:

Firstly, the law must be adequately accessible: the citizen must be able to have an indication that is adequate in the circumstances of the legal rules applicable to a given case. Secondly, a norm cannot be regarded as a ‘law’ unless it is formulated with sufficient precision to enable the citizen to regulate his conduct: he must be able – if need be with appropriate advice – to foresee, to a degree that is reasonable in the circumstances, the consequences which a given action may entail. Those consequences need not be foreseeable with absolute certainty: experience shows this to be unattainable. Again, whilst certainty is highly desirable, it may bring in its train excessive rigidity and the law must be able to keep pace with changing circumstances.59

Although the ECtHR was initially hesitant to apply the principles of accessibility and foreseeability to matters concerning the right to privacy, it was in cases on Article 8 ECHR that this doctrine gained significance.60 In Malone v the UK (1984), the Court stressed that the notion of foreseeability cannot be exactly the same in the special context of interception of communications for the purposes of police investigations, but it also stressed that the phrase ‘in accordance with the law’ does not merely refer back to domestic law but also relates to the quality of the law, requiring it to be compatible with the rule of law, which is expressly mentioned in the preamble to the Convention. The phrase thus implies – and this follows from the object and purpose of Article 8 – that there must be a measure of legal protection in domestic law against arbitrary interferences by public authorities with the rights safeguarded by paragraph 1. Especially where a power of the executive is exercised in secret, the risks of arbitrariness are evident.61

The ‘quality of law’ doctrine implies that citizens must always be able to assess when their privacy might be at risk and, conversely, when not. Laws cannot give such broad powers to the executive branch that citizens can never know when to expect a limitation of their privacy.62 The Court can find a violation of Article 8 ECHR when the national law does not specify in detail when and how privacy-invasive powers might be used.

59 ECtHR, Sunday Times v the United Kingdom, app no. 6538/74, 26 April 1979, § 49. 60 ECtHR, Silver and others v the United Kingdom, app nos. 5947/72, 6205/73, 7052/75, 7061/75, 7107/75, 7113/75 and 7136/75, 25 March 1983, § 85. 61 ECtHR, Malone v the United Kingdom, app no. 8691/79, 2 August 1984, § 67. 62 ECtHR, Leander v Sweden, app no. 9248/81, 26 March 1987. ECmHR, Mersch and others v Luxembourg, app nos. 10439/83, 10440/83, 10441/83, 10452/83, 10512/83 and 10513/83, 10 May 1985.

For example, when dealing with a law that sanctioned police monitoring activities, it referred to the following deficiencies (rendered schematically as a checklist at the end of this section):

• Unclarity with respect to the categories of people liable to have their telephones tapped;
• Unclarity with respect to the nature of the offences which may give rise to such an order;
• Absence of a limit on the duration of telephone tapping;
• No procedure for drawing up the summary reports containing intercepted conversations;
• Unclarity on the point of the precautions to be taken in order to communicate the recordings for possible inspection by the judge and the defence;
• Unclarity about the circumstances in which recordings may or must be erased.63

While the Court normally assesses whether the executive branch has stayed within the limits set out by the law (eg, when the police wire-tap a person’s telephone, it will assess whether the law granted the police the authority to wire-tap the person in question, whether the police respected the limits as to the duration of the wire-tap contained in that law, whether the police obtained a court order before wire-tapping as required by the law, etc), in cases where it uses the ‘quality of law’ doctrine, it focusses on the legislative branch and assesses whether the law itself allows citizens to reasonably foresee when their privacy might be limited (eg does the law specify in detail who may use what power under which circumstances, does it set out limits to the use of those powers, and are there sufficient checks and balances in place, such as judicial oversight?).64 Consequently, while the Court normally focusses on the question of whether there has been an actual interference with a person’s privacy and, if so, whether that interference was legitimate (ie in accordance with the law), it now focusses on the question of whether the interference was arbitrary.65

The ECtHR has developed this doctrine because it is alarmed by the legislative branches of a number of European countries, if not all, that have granted the executive power such broad powers and laid down so few limitations to the use of those powers that it is nearly impossible for the executive branch to violate the law: there are virtually no boundaries on what the intelligence agencies in a number of countries are legally allowed to do in terms of undermining citizens’ privacy.

63 ECtHR, Kruslin v France, app no. 11801/85, 24 April 1990. ECtHR, Huvig v France, app no. 11105/84, 24 April 1990. 64 ECtHR, Kopp v Switzerland, app no. 23224/94, 25 March 1998. ECtHR, Amann v Switzerland, app no. 27798/95, 16 February 2000. ECtHR, Valenzuela Contreras v Spain, app no. 27671/95, 30 July 1998. ECtHR, Rotaru v Romania, app no. 28341/95, 04 May 2000. ECtHR, Hasan and Chaush v Bulgaria, app no. 30985/96, 26 October 2000. ECtHR, Gorzelik and others v Poland, app no. 44158/98, 17 February 2004. ECtHR, Bordovskiy v Russia, app no. 49491/99, 08 February 2005. ECtHR, Weber and Saravia v Germany, app no. 54934/00, 29 June 2006, § 94–95. 65 Jacob Livingston Slosser, ‘Components of Legal Concepts: Quality of Law, Evaluative Judgement, and Metaphorical Framing of Article 8 ECHR.’ European Law Journal 25.6 (2019): 593–607.

Expectations of Privacy: The Three Tests Deployed by the ECtHR  81 there are virtually no boundaries on what the intelligence agencies in a number of countries are legally allowed to do in terms of undermining citizens’ privacy. Thus, nearly all interferences with privacy based on those laws would be ‘legitimate’, stricto sensu, although they are arbitrary, because it is fully at the discretion of the executive branch to use its powers when and how it sees fit. Because under its standard approach, the Court could not find a violation on this point (even the ‘necessary in a democratic society’ test often did not offer relief, because the Court acknowledges that national security and safeguarding the public order are especially weighty interests), the last number of years the ECtHR has used its ‘quality of law’ test with increased frequency to find privacy violations when laws do not set sufficient safeguards against the arbitrary use of power by the executive power. Thus, the very fact that citizens cannot reasonably foresee when their privacy is violated or not is in itself a privacy violation. This line of reasoning was reenforced by the Court in the case of Zakharov v Russia (2015). In all cases in which the ECtHR deployed the ‘quality of law’ doctrine, it still required that the applicant could demonstrate that she had actually and individually suffered harm from an action by the executive branch based on that law, eg that the police had wire-tapped her telecommunication. With the rise of mass surveillance activities, the question that became increasingly urgent was: what if the laws are so broad that everyone’s communication could or will be monitored and allow intelligence agencies not to inform citizens whether their communications have been monitored out of, in itself legitimate, concerns over national security, so that citizens neither know whether their privacy has been interfered with nor have any way of finding out. In such circumstances the threat of surveillance can be claimed in itself to restrict free communication through the postal and telecommunication services, thereby constituting for all users or potential users a direct interference with the right guaranteed by Article 8. [] In such cases, the individual may claim to be a victim of a violation occasioned by the mere existence of secret measures or of legislation permitting secret measures only if he is able to show that, due to his personal situation, he is potentially at risk of being subjected to such measures.66

That means that, rather than finding that because laws authorise everyone’s privacy to be interfered with arbitrarily, citizens can no longer reasonably expect to enjoy their privacy, the exact opposite is true. The very fact that citizens cannot reasonably foresee when their privacy will be respected or not is in itself an interference with their privacy; consequently, everyone can ask the ECtHR to assess whether a law sets sufficient checks and balances against the arbitrary use of power by the executive branch and, if not, this in itself is enough to find a violation, even though the powers have not been used vis-à-vis the applicant.67

66 ECtHR, Roman Zakharov v Russia, app no. 47143/06, 04 December 2015, § 171. 67 The Court took a similar approach in: ECtHR, Centrum för Rättvisa v Sweden, app no. 35252/08, 19 June 2018. ECtHR, Big Brother Watch and others v the United Kingdom, app nos. 58170/13, 62322/14 and 24960/15, 13 September 2018. The Court has also applied the same requirements to laws granting the executive power other powers, such as placing children out of home, a law on the professional assistance with home births and a law on the rights of detainees to receive family visits. ECtHR, Olsson v Sweden, app no. 10465/83, 24 March 1988. ECtHR, Ternovsky v Hungary, app no. 67545/09, 14 December 2010. ECtHR, Kungurov v Russia, app no. 70468/17, 18 February 2020.
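Before turning to the third test, the Kruslin/Huvig-style deficiencies listed earlier in this section can be read as a safeguards checklist that a law authorising secret monitoring must satisfy in order to be foreseeable. The sketch below renders that checklist for illustration only; the factor descriptions are paraphrases, not the Court’s operative wording, and the all-or-nothing reading is a simplifying assumption.

```python
# Illustrative sketch: the 'quality of law' factors as a checklist.
# The factor names are paraphrases for illustration, not legal criteria.

QUALITY_OF_LAW_FACTORS = [
    "categories of people liable to be monitored are defined",
    "nature of the offences that may trigger monitoring is defined",
    "duration of the measure is limited",
    "procedure for drawing up summary reports is laid down",
    "precautions for communicating recordings to judge and defence are specified",
    "circumstances in which recordings must be erased are specified",
]

def law_is_foreseeable(safeguards_present: set[str]) -> bool:
    """On this schematic reading, one missing safeguard already makes
    the law fail the quality-of-law test."""
    return all(f in safeguards_present for f in QUALITY_OF_LAW_FACTORS)
```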


IV. Test 3: Legitimate Expectation (as per the ‘necessary in a democratic society’ requirement Article 8 § 2)

Section II showed that when using the ‘reasonable expectation of privacy’ test, the ECtHR deploys it in a manner in many respects opposite to how it is understood in the US.68 Rather than limiting citizens’ privacy in public places, the ECtHR has found that even in public and professional environments, even when advertising extreme sexual behaviour, even when engaging in clearly illegitimate activities, even when already disclosing information to a significant crowd, even when knowing of and explicitly agreeing to their privacy being limited by their employer, citizens can still claim to have a reasonable expectation of privacy. Section III showed that the Court has found that states must adopt laws that allow citizens to reasonably foresee when their privacy will be respected and when not. When laws grant such broad powers to the executive branch that it is no longer reasonable for citizens to expect any privacy, for example with respect to their online communications, this in itself will be deemed a violation of Article 8 ECHR, the right to privacy. This section will show that the ECtHR has adopted a final test to ensure that even public figures enjoy a right to privacy, even in public places, even when they engage in activities that would be relevant for the public, even when they have themselves sought the limelight.

Seven years after the ECtHR first adopted the ‘reasonable expectation of privacy’ doctrine in Halford v UK (1997), the Court adopted a new doctrine, namely that of the ‘legitimate expectation of privacy’, in which it combined strands from both the ‘reasonable expectation’ and the ‘reasonable foreseeability’ privacy tests as discussed in sections II and III. The notion of ‘legitimate expectation’ is predominantly applied in cases that revolve around a conflict between two private parties, where one invokes the right to privacy and the other relies on the freedom of expression. Though under the ECHR natural persons can only submit a complaint against a Member State, and not against another citizen or private organisation, they are allowed to challenge judgments issued at the national level in which a national court resolved a conflict between these two rights.


national court resolved a conflict between these two rights. To determine whether an interference with one of these rights (eg the government did or did not sanction a speech act that is believed to have infringed the right to privacy, to the dissatisfaction of one or the other party) was necessary in a democratic society, the third condition for legitimately curtailing the right to privacy or the freedom of expression, the ECtHR will determine, inter alia, whether the party invoking her right to privacy had a legitimate expectation of privacy. Thus, it explicitly substitutes this notion for that of 'reasonable expectation', inter alia, because it fears that applying an objective test of reasonableness might entail that celebrities and royalty could not rely on the right to privacy when they are structurally monitored and surveilled, this time not by the state, but by the press. As with the notion of reasonable foreseeability, the Court finds that the fact that people are structurally monitored cannot mean that they would be denied a claim to privacy because a reasonable person could no longer expect to have any. The first time the doctrine of legitimate expectation of privacy was used was in Von Hannover v Germany (2004).69 Caroline Von Hannover, the princess of Monaco, had spent more than ten years in unsuccessful litigation in German courts. She alleged that as soon as she left her house, she was constantly hounded by paparazzi who followed her every daily movement, be it crossing the road, fetching her children from school, doing her shopping, going out walking, engaging in sport or going on holiday. Accepting that there had been an interference with the applicant's private life, the Court assessed whether the domestic courts had struck a fair balance between the applicant's right to privacy and the press' right to freedom of speech. It considered that anyone, even if they are known to the general public, must be able to enjoy a 'legitimate expectation' of privacy. Increased vigilance in protecting private life, it stressed, is necessary to contend with new communication technologies which make it possible to store and reproduce personal data, adding that

the distinction drawn between figures of contemporary society 'par excellence' and 'relatively' public figures has to be clear and obvious so that, in a State governed by the rule of law, the individual has precise indications as to the behaviour he or she should adopt. Above all, they need to know exactly when and where they are in a protected sphere or, on the contrary, in a sphere in which they must expect interference from others, especially the tabloid press.70

Thus, similar to its ‘reasonable foreseeability’ doctrine with respect to privacy interferences by governmental organisations, in order to guarantee a ‘legitimate expectation of privacy’, the legal regime of a Member State must be so precise that anyone can reasonably foresee when their privacy might be interfered with by

69 See further: Michael Doherty, 'Politicians as a species of "Public Figure" and the Right to Privacy,' Humanitas Journal of European Studies 1(1) (2007): 35–56.
70 ECtHR, Von Hannover v Germany, app no. 59320/00, 24 June 2004, § 69.

private parties, also ensuring that there is always an expectation of privacy, even for public figures.71 In Standard Verlags GMBH v Austria (No. 2) (2009),72 the Court went one step further, therewith also distancing itself from the American focus on the freedom of speech and the First Amendment and showing that in Europe, the right to privacy is at least as important as the freedom of expression, if not more.73 It observed that the impugned article dealt with rumours about the politicians' private and family life and stressed that politicians too have a 'legitimate expectation of privacy'. It distinguished a politician's alleged marital problems from her state of health which, though belonging to the personal sphere, can have a bearing on the exercise of her functions. Because the presidential couple's private life had not played a role during the president's second term in office and because the alleged extramarital relationship of the First Lady did not have any link with the President's public functions and responsibilities, the Court found that the publication did not contribute to any public debate in respect of which the press has to fulfil its role of 'public watchdog', but merely served to satisfy the curiosity of a certain readership. The fact that the alleged extra-marital affair of the First Lady was with the leader of the extreme right or neo-fascist party, which on many accounts, if true, could be deemed politically sensitive, did not alter that fact. Thus, the ECtHR found that the newspapers' obligation to pay compensation to the claimants was not disproportionate to the legitimate aim of protecting public figures' privacy.74

71 Interestingly, in his Concurring Opinion, Judge Zupancic made a plea for using the 'reasonable expectation of privacy' test instead of the 'legitimate expectation of privacy' test. 'I agree with the outcome of this case. However, I would suggest a different determinative test: the one we have used in Halford v. the United Kingdom, which speaks of "reasonable expectation of privacy". [T]he proposed criterion of reasonable expectation of privacy permits a nuanced approach to every new case. [T]he "reasonableness" of the expectation of privacy could be reduced to the aforementioned balancing test. But reasonableness is also an allusion to informed common sense, which tells us that he who lives in a glass house may not have the right to throw stones.' In another concurring opinion, however, Judge Cabral Barreto doubted whether, in the case of Ms Von Hannover, there was a reasonable expectation of privacy.
72 ECtHR, Standard Verlags GMBH v Austria (No. 2), app no. 21277/05, 04 June 2009. See also the dissenting opinion in: ECtHR, Verlagsgruppe News GMBH v Austria (No. 2), app no. 10520/02, 14 December 2006.
73 Bilyana Petkova, 'Privacy as Europe's first Amendment.' European Law Journal 25.2 (2019): 140–154.
74 Both Axel Springer AG v Germany (2012) and the second Von Hannover v Germany case (2012) showed that there are limits to what it is 'legitimate' for public figures to expect in terms of privacy, as did the three consecutive judgments against Austria, all on the same underlying matter, in which the privacy of non-public individuals was discussed. ECtHR, Axel Springer AG v Germany, app no. 39954/08, 07 February 2012. ECtHR, Von Hannover (2) v Germany, app nos. 40660/08 and 60641/08, 07 February 2012. ECtHR, Rothe v Austria, app no. 6490/07, 04 December 2012. ECtHR, Küchl v Austria, app no. 51151/06, 04 December 2012.
ECtHR, Verlagsgruppe News GMBH and Bobi v Austria, app no. 59631/09, 04 December 2012. It is unclear whether it is a coincidence that the 'reasonable expectation of privacy' doctrine was used primarily in cases vis-à-vis the UK, fitting the utilitarian balancing act intrinsic to many common law countries, while the 'legitimate expectation of privacy' test, focussing more on legitimacy and normative minimum requirements of the rule of law, was used against German-speaking countries. In any case, the Court gradually started to use the latter concept in cases against a number of northern European countries as well. See also for a more hesitant approach: ECtHR, Wegrzynowski and Smolczewski v Poland, app no. 33846/07, 16 July 2013. ECtHR, Ristamäki and Korvola v Finland, app no. 66456/09, 29 October 2013.

In Ruusunen v Finland (2014), the Court even condoned a criminal conviction of a former girlfriend of the Prime Minister who wrote an autobiographical book about her relationship. The ECtHR stressed that the facts set out in the book were not in dispute and were presented in a compassionate manner, and that the style was not provocative or exaggerated. The Prime Minister was clearly a public figure and he had even consented to the use of his photograph on the cover of the book.75 The Court stressed that even though the emphasis in the book was on the applicant's private life, it nevertheless contained elements of public interest, and that the majority of the information concerning the Prime Minister's private life had already been widely disclosed. In spite of all these considerations, the ECtHR found the criminal conviction reasonable, because some information in the book concerned his sexual and intimate life, which had not yet been disclosed. The Court took a similar approach in Ojala and Etukeno Oy v Finland (2014) and Salumäki v Finland (2014).76 This means that criminal convictions for publishing intimate details about a politician will not be deemed a violation of the freedom of speech if a person's 'legitimate expectation of privacy' is at stake. In Alpha Doryforiki Tileorasi Anonymi Etairia v Greece (2018),77 the Court took a remarkable next step,78 when

75 ECtHR, Ruusunen v Finland, app no. 73579/10, 14 January 2014.
76 ECtHR, Ojala and Etukeno Oy v Finland, app no. 69939/10, 14 January 2014. ECtHR, Salumäki v Finland, app no. 23605/09, 29 April 2014. See for limits on the 'legitimate expectation of privacy' doctrine: ECtHR, Lillo-Stenberg and Saether v Norway, app no. 13258/09, 16 January 2014. In two cases against Germany, the Court stressed that when a person actively seeks the limelight, her reasonable expectation of privacy may be limited. ECtHR, Bohlen v Germany, app no. 53495/09, 19 February 2015. Judge Zupancic again stressed that he would have preferred the 'reasonable expectation of privacy' test. 'It is a tautology to say that everybody who has ever published a book has sought publicity. This is what publishing anything is all about. However, for the Federal Court of Justice to maintain that Mr Bohlen himself courted publicity to further his own interests, and was consequently hostage to adverse publicity because he had published a book, is going too far.' ECtHR, Ernst August von Hannover v Germany, app no. 53649/09, 19 February 2015. Again, Zupancic disagreed and stressed that to maintain that Mr von Hannover deserved the 'particularly clever' negative publicity on account of his bellicose character, and simultaneously that because this was 'only' a cigarette advertisement, it was not injurious to his personality rights, was going too far. See however also: ECtHR, Faludy-Kovács v Hungary, app no. 20487/13, 23 January 2018.
77 ECtHR, Alpha Doryforiki Tileorasi Anonymi Etairia v Greece, app no. 72562/10, 22 February 2018. See also: ECtHR, Petkeviciute v Lithuania, app no. 57676/11, 27 February 2018. See however for a more lenient approach: ECtHR, M.L. and W.W. v Germany, app nos. 60798/10 and 65599/10, 28 June 2018.
78 In Couderc and Hachette Filipacchi Associés v France (2015), the ECtHR took a more cautious approach, though stressing at the same time that an 'individual's alleged or real previous tolerance or accommodation with regard to publications touching on his or her private life does not necessarily deprive the person concerned of the right to privacy'. ECtHR, Couderc and Hachette Filipacchi Associes v France, app no. 40454/07, 10 November 2015, § 130. See also ECtHR, Sousa Goucha v Portugal, app no. 70434/12, 22 March 2016, § 55. And the Dissenting Opinion of Judges Sajó and Karakas in ECtHR, Satakunnan Markkinapörssi Oy and Satamedia Oy v Finland, app no. 931/13, 27 June 2017. In Halldorsson v Iceland (2017), the applicant, a journalist, broadcast a television programme on an Icelandic company, a shelf company in Panama and some dubious transactions. The applicant speculated on who the business partners might be, and pictures were shown, inter alia, of A, a prominent businessman in Iceland. The Court again found that a fine the Icelandic courts had imposed on the applicant was not unreasonable. ECtHR, Halldórsson v Iceland, app no. 44322/13, 04 July 2017.

a broadcasting company had shown a television programme named 'Jungle' in which three videos that had been filmed with a hidden camera were broadcast. In the first video, A.C., then a member of the Hellenic Parliament and chairman of the inter-party committee on electronic gambling, was shown entering a gambling arcade and playing on two machines. The second video showed a meeting between A.C. and associates of the television host of 'Jungle', M.T., during which the first video was shown to A.C. The third video showed a meeting between A.C. and M.T. in the latter's office. Although the ECtHR observed that the report in question was not without political importance, the parliamentarian had not been informed that he was being filmed, and the ECtHR was unconvinced that secret filming was necessary for the second and third videos. Consequently, it found that the sanction of €100,000 was not only in conformity with the Convention, but lenient, given the legitimate expectation of privacy of A.C. Thus, the Court went one step further than in Standard Verlags GMBH, because in the Greek case, the publication was directly related to the public office held by the member of Parliament. Still, the ECtHR found that it was only necessary to secretly film that person for the first video and not for the second or third. To provide a final example, in Khadija Ismayilova v Azerbaijan (no. 3) (2020), a video of the applicant, a well-known investigative journalist, in which she was shown having sex with her then boyfriend, was leaked to the press. One newspaper did not publish the material, but merely referenced it and commented on it in a negative manner. The government argued that the applicant had no legitimate expectation of privacy, not only because she was a public figure, but also because the material that was referenced in the report had already been made public by other outlets. The Court disagreed.

It is true that, once a person's privacy has been breached and the information about it has entered into public domain, the damage is already done and it is virtually impossible to restore the situation to when the breach had never happened. However, while responsible reporting on matters of public interest in accordance with the ethics of journalism is protected by the Convention, there can be no legitimate public interest in exploiting an existing breach of a person's privacy for the purpose of satisfying the prurient curiosity of a certain readership, publicly ridiculing the victim and causing them further harm.79

Consequently, even when sensitive information is already widely circulating on the internet and quoted in several media outlets and even when a specific outlet does not publish that information itself, but only references it, this still might be deemed a violation of a public figure’s ‘legitimate expectation of privacy’, according to the ECtHR.



79 ECtHR, Khadija Ismayilova v Azerbaijan (no. 3), app no. 35283/14, 07 May 2020.

V. Conclusion

Even under the most extreme conditions, where no privacy is to be expected, people will find ways to create a private sphere. Goffman studied people's behaviour in what he called 'total institutions', such as asylums, mental institutions, prisons and camps. People put a blanket over their heads to create a personal space when denied a private room; they create personal belongings, even if only small figures made out of toilet paper; they spend free time in lavatories to enjoy a moment of silence, retreat into their heads and design fictitious dream worlds which they will not share with anyone; or they create personal storage spaces for their belongings, if not in their mattress, then in their own body. 'These storage spaces protect the object from damage, misuse, and misappropriation, and allow the user to conceal what he possesses from others. More important, these places can represent an extension of the self and its autonomy, becoming more important as the individual forgoes other repositories of selfhood.'80 Though present-day society is obviously not comparable to total institutions as defined by Goffman, on certain points less privacy is guaranteed now than when living behind the walls of such an institution in the 1960s. For example, CCTV cameras used by the police or placed in and outside shops and private homes are increasingly equipped with facial recognition software, which can not only be used to identify a person and follow her every movement in public or connect publicly available information about that person to the image. Facial recognition can also be used to recognise and potentially manipulate emotions. In smart cities, for example, emotion detection as a specific form of facial recognition can 'play a role in security and control, such as when certain emotions such as fear and anger are automatically recognized and used to act quickly and prevent escalation'.81 In addition, in the retail sector, companies can scan their customers' faces for emotions (and heartbeat through pupil dilation, revealing excitement) in order to personalise products and offers. These are emotions people are not necessarily (consciously) aware of themselves, while their body may show implicit signals. And this is only one contemporary technique deployed in present-day society. The parallel to a total institution such as a prison is telling because, obviously, it can be argued that depriving prisoners of privacy and freedom is part of the raison d'être of a prison facility, just as, on a different scale, the raison d'être of the state may be to keep its citizens safe, for which interferences with their privacy, such as through surveillance activities, may be necessary. Indeed, the ECtHR initially found that prisoners were not, or only to a limited extent, allowed to submit a complaint about a lack of freedom or privacy. Over time, however, the ECtHR has changed its position. Under the ECHR, prisoners currently do have all the

80 Erving Goffman, Asylums: Essays on the social situation of mental patients and other inmates (AldineTransaction, 1968) 220–221.
81 Esther Keymolen et al., 'At first sight', WODC, 2020.

rights ordinary citizens have and, what is more, because they are fully dependent on the government for their life and pursuit of happiness, the state has additional positive obligations to ensure that their rights are not only respected, but can also be effectively exercised.82 For example, with respect to reproductive privacy, the Court has underlined that when prisoners are not allowed to receive conjugal visits, the government must ensure that prisoners can reproduce with their spouse through artificial insemination.83 The reorientation of the Court's position on prisoners' rights is illustrative. It finds that whatever the circumstances, there must always be a reasonable expectation of privacy, be it in public, at the office, in prison or at home. Every person must have such an expectation, whether a private or a public figure, a politician, a celebrity, royalty or even a person who advertises himself on the internet as a dominant master over submissive women. Every activity should enjoy a minimum level of privacy, even if it is university teachers conversing with students in an auditorium, even if it is using the employer's communication channels for private and intimate messages against the explicit orders of the employer, even if it is spreading child pornography through web services. Countries must ensure that citizens have such a reasonable expectation of privacy. When citizens cannot ascertain whether their privacy has been interfered with, they might restrict or chasten their behaviour. Even this chilling effect itself, in the absence of any actual interference with the right to privacy, can lead to a violation of the ECHR.84 Thus, in contrast to the American approach, if citizens have no reasonable expectation of privacy, this is an interference with their privacy in itself. Not surprisingly, the ECtHR has only hesitantly used the American concept of 'reasonable expectation of privacy' in its jurisprudence. Rather than stressing that in public places, people have no or limited privacy, the Court will find that citizens do have privacy, even in public or professional arenas; rather than finding that when citizens have voluntarily shared their data with others, they can no longer invoke a right to privacy, the ECtHR has stressed that even when citizens advertise and post certain extreme sexual behaviour on public websites, they have a reasonable expectation of privacy; rather than finding that when people know that and have consented to employers monitoring their behaviour, they have relinquished their privacy, the Court has stressed that no agreement can ever reduce a reasonable expectation of privacy in the workplace; and rather than testing whether subjective expectations were 'reasonable' to the common man, the ECtHR will rather quickly find that if applicants believed that their documents, information or conduct would remain secret, they can successfully rely on Article 8 ECHR, even when engaging in clearly illegitimate conduct. In addition to this fundamental reinterpretation of the American doctrine, the Court has adopted two additional tests, which aim to guarantee citizens' privacy,

82 www.echr.coe.int/Documents/Guide_Prisoners_rights_ENG.pdf.
83 ECtHR, Dickson v the UK, app no. 44362/04, 04 December 2007.
84 ECtHR, Colon v the Netherlands, app no. 49458/06, 15 May 2012.

even when such an expectation would not be 'reasonable' stricto sensu. First, the ECtHR has stressed that when the government limits citizens' privacy, it must ensure that citizens can reasonably foresee when such measures will be deployed. Governments must specify in detail in their law when, how and under which circumstances citizens' privacy might be interfered with. If they cannot, this in itself will be deemed a privacy violation. Second, the ECtHR has stressed that the legal regime must be equally clear with respect to privacy interferences by private parties, especially the media. The Court substituted the 'legitimate expectation of privacy' doctrine for the 'reasonable expectation', because it fears that the latter might entail that celebrities and royalty could not rely on the right to privacy when their privacy is structurally undermined. Thus, governments must ensure that even public figures can enjoy a sense of privacy, even in public places, even when they have themselves sought the limelight and even when their 'expectations' of privacy would no longer be objectively reasonable to the common man. As many commentators have signalled before, it seems as though the American 'reasonable expectation of privacy' doctrine is unfit for the twenty-first century. The classic conceptions of what is public and what is private, of the public sphere and the private sphere, and of public/professional and private life are increasingly difficult to apply in a world in which people live in smart homes and carry smart phones in public places which hold the key to many of their most intimate details, in which people work from home and use social media at work, in which communication is never private but always mediated, and in which personal data are a key asset to most private and public organisations and are made available via open access models for purposes of transparency and re-use for commercial practices.85 This chapter has discussed the European approach to this concept, which could be taken as a source of inspiration for revising and reshaping the American 'reasonable expectation of privacy' doctrine, something that many experts have called for. But more fundamentally, the question should be posed whether a model based on expectations is viable, even under the European interpretation, in the twenty-first century. Although the ECtHR adopts a broad understanding of what is a legitimate privacy claim, including instances where people have themselves disclosed data about themselves or have breached contractual agreements not to use professional equipment for personal communication, the Court also takes into account the extent to which people have been warned, have alternatives available and could have expected privacy interferences. Thus, in time, this could have a consequence, as is already the case in the US, where people's legitimate claim to privacy is limited when they are made aware that they are monitored at work, in public spaces and even at home, and can be monitored when using communication channels (Article 8 § 1 ECHR). Similarly, although the ECtHR has stressed that citizens must be able to reasonably foresee when their privacy will be interfered

85 Bert-Jaap Koops, Jaap-Henk Hoepman and Ronald Leenes. ‘Open-source intelligence and privacy by design.’ Computer Law & Security Review 29.6 (2013): 676–688.

with and that the ground for such interference must be laid down in a law (Article 8 § 2 ECHR), the ECtHR has allowed for several exceptions, for example when governmental agencies claim that they cannot disclose when, why and how they will interfere with the right to privacy of citizens, as this would undermine the effectiveness of their operations. Finally, although the ECtHR has found that under the necessity and proportionality requirement (Article 8 § 2 ECHR), even public figures should be able to invoke a right to privacy in public places, these rights are limited and the right of the public to know private details about politicians, royalty and celebrities will often prevail. In a certain sense, in the twenty-first century, everyone is a celebrity. Many people have social media accounts, use Twitter, TikTok or Instagram to disclose information about themselves to the public at large and follow each other in online places and communities. Who is and who is not a public figure will become increasingly blurry, which may have significant repercussions for the legitimacy of privacy interferences. Instead of working with a model that is based on subjective expectations of rights bearers, the ECtHR could consider working with objective tests that are binary in nature instead of fluid and gradual. This would align with the European model, which is based on the rule of law and inspired by deontological theory, rather than the American model, which is rooted in a utilitarian and consequentialist tradition.86 Although the European Court has been able to steer away from many of the worst aspects of the American approach to privacy, it is unclear why it would want to use a model that is based on expectations in the first place. Human rights aim to provide a barrier against consequentialist or utilitarian reasoning. The scope of these rules and obligations can be narrow, because the essential goal is to protect the absolute minimum conditions of a democratic society. Building on deontological ethics, human rights provide minimum rules and obligations, which must be respected in every situation.

References

Alazab, Mamoun, Seung-Hun Hong, and Jenny Ng. 'Louder bark with no bite: Privacy protection through the regulation of mandatory data breach notification in Australia.' Future Generation Computer Systems 116 (2021).
Arrington, Samantha. 'Expansion of the Katz Reasonable Expectation of Privacy Test Is Necessary to Perpetuate a Majoritarian View of the Reasonable Expectation of Privacy in Electronic Communications to Third Parties.' University of Detroit Mercy Law Review 90 (2012): 179.
Barendt, Eric. 'Problems with the "reasonable expectation of privacy" test.' Journal of Media Law 8.2 (2016): 129–137.

86 Bart van der Sloot, 'The Practical and Theoretical Problems with "Balancing": Delfi, Coty and the Redundancy of the Human Rights Framework.' Maastricht Journal of European and Comparative Law 23.3 (2016): 439–459.

Bennett, Tony. 'Emerging privacy torts in Canada and New Zealand: an English perspective.' European Intellectual Property Review 36.5 (2014): 298–305.
Boa, Krista. 'Privacy outside the castle: Surveillance technologies and reasonable expectations of privacy in Canadian judicial reasoning.' Surveillance & Society 4.4 (2007).
Burchell, Jonathan. 'The legal protection of privacy in South Africa: A transplantable hybrid.' Electron J Comp Law 13.1 (2009).
Carolan, Bruce. 'The Implications of the Right to Privacy under the European Convention on Human Rights for Irish Personal Injury Claims.' Irish Journal of European Law 4 (1995): 161.
Craig, John DR and Hazel D. Oliver. 'The Right to Privacy in the Public Workplace: Should the Private Sector be Concerned?' Industrial Law Journal 27.1 (1998): 49–59.
Crowther, Brandon T. '(Un)reasonable expectation of digital privacy.' Brigham Young University Law Review (2012): 343.
Doherty, Michael. 'Politicians as a species of "Public Figure" and the Right to Privacy.' Humanitas Journal of European Studies 1(1) (2007): 35–56.
ECtHR, López Ribalda and others v Spain, app nos. 1874/13 and 8567/13, 9 January 2018.
—— Aalmoes and others v Netherlands, app no. 16269/02, 25 November 2004.
—— Akhlyustin v Russia, app no. 21200/05, 7 November 2017.
—— Alpha Doryforiki Tileorasi Anonymi Etairia v Greece, app no. 72562/10, 22 February 2018.
—— Amann v Switzerland, app no. 27798/95, 16 February 2000.
—— Antovic and Mirkovic v Montenegro, app no. 70838/13, 28 November 2017.
—— Axel Springer AG v Germany, app no. 39954/08, 7 February 2012.
—— Azukaitiené v Lithuania, app no. 59764/13, 22 October 2019.
—— Barbulescu v Romania, app no. 61496/08, 5 September 2017.
—— Barbulescu v Romania, app no. 61496/08, 12 January 2016.
—— Benedik v Slovenia, app no. 62357/14, 24 April 2018.
—— Big Brother Watch and others v the United Kingdom, app nos. 58170/13, 62322/14 and 24960/15, 13 September 2018.
—— Bohlen v Germany, app no. 53495/09, 19 February 2015.
—— Bordovskiy v Russia, app no. 49491/99, 08 February 2005.
—— Bukota-Bojic v Switzerland, app no. 61838/10, 18 October 2016.
—— Centrum för Rättvisa v Sweden, app no. 35252/08, 19 June 2018.
—— Colon v the Netherlands, app no. 49458/06, 15 May 2012.
—— Copland v United Kingdom, app no. 62617/00, 03 April 2007.
—— Couderc and Hachette Filipacchi Associes v France, app no. 40454/07, 10 November 2015.
—— Dickson v the UK, app no. 44362/04, 4 December 2007.
—— Ernst August von Hannover v Germany, app no. 53649/09, 19 February 2015.
—— Faludy-Kovács v Hungary, app no. 20487/13, 23 January 2018.
—— Friend and others v United Kingdom, app nos. 16072/06 and 27809/08, 24 November 2009.
—— Garamukanwa v United Kingdom, app no. 70573/17, 14 May 2019.
—— Gillan and Quinton v United Kingdom, app no. 4158/05, 12 January 2010.
—— Gorzelik and others v Poland, app no. 44158/98, 17 February 2004.
—— Haldimann and others v Switzerland, app no. 21830/09, 24 February 2015.
—— Halford v United Kingdom, app no. 20605/92, 2 March 1995.
—— Halford v United Kingdom, app no. 20605/92, 18 April 1996.

—— Halford v United Kingdom, app no. 20605/92, 25 June 1997.
—— Halldórsson v Iceland, app no. 44322/13, 4 July 2017.
—— Hasan and Chaush v Bulgaria, app no. 30985/96, 26 October 2000.
—— Huvig v France, app no. 11105/84, 24 April 1990.
—— J.S. v United Kingdom, app no. 445/10, 3 March 2015.
—— Khadija Ismayilova v Azerbaijan (no. 3), app no. 35283/14, 7 May 2020.
—— Köpke v Germany, app no. 420/07, 5 October 2010.
—— Kopp v Switzerland, app no. 23224/94, 25 March 1998.
—— Kruslin v France, app no. 11801/85, 24 April 1990.
—— Küchl v Austria, app no. 51151/06, 4 December 2012.
—— Kungurov v Russia, app no. 70468/17, 18 February 2020.
—— Lambert v France, app no. 23618/94, 1 July 1997.
—— Leander v Sweden, app no. 9248/81, 26 March 1987.
ECmHR, Mersch and others v Luxembourg, app nos. 10439/83, 10440/83, 10441/83, 10452/83, 10512/83 and 10513/83, 10 May 1985.
ECtHR, Libert v France, app no. 588/13, 22 February 2018.
—— Lillo-Stenberg and Saether v Norway, app no. 13258/09, 16 January 2014.
—— López Ribalda and others v Spain, app nos. 1874/13 and 8567/13, 17 October 2019.
—— M.L. and W.W. v Germany, app nos. 60798/10 and 65599/10, 28 June 2018.
—— Malone v the United Kingdom, app no. 8691/79, 2 August 1984.
—— Martin v United Kingdom, app no. 63608/00, 27 March 2003.
—— Mosely v United Kingdom, app no. 48009/08, 10 May 2011.
—— Neulinger and Shuruk v Switzerland, app no. 41615/07, 6 July 2010.
—— Ojala and Etukeno Oy v Finland, app no. 69939/10, 14 January 2014.
—— Olsson v Sweden, app no. 10465/83, 24 March 1988.
—— P.G. and J.H. v United Kingdom, app no. 44787/98, 25 September 2001.
—— Pay v United Kingdom, app no. 32792/05, 16 September 2008.
—— Peev v Bulgaria, app no. 64209/01, 26 July 2007.
—— Perry v United Kingdom, app no. 63737/00, 17 July 2003.
—— Petkeviciute v Lithuania, app no. 57676/11, 27 February 2018.
—— Ristamäki and Korvola v Finland, app no. 66456/09, 29 October 2013.
—— Roman Zakharov v Russia, app no. 47143/06, 04 December 2015.
—— Rotaru v Romania, app no. 28341/95, 4 May 2000.
—— Rothe v Austria, app no. 6490/07, 4 December 2012.
—— Ruusunen v Finland, app no. 73579/10, 14 January 2014.
—— Salumäki v Finland, app no. 23605/09, 29 April 2014.
—— Satakunnan Markkinapörssi Oy and Satamedia Oy v Finland, app no. 931/13, 27 June 2017.
—— Silver and others v the United Kingdom, app nos. 5947/72, 6205/73, 7052/75, 7061/75, 7107/75, 7113/75 and 7136/75, 25 March 1983.
—— Sorvisto v Finland, app no. 19348/04, 13 January 2009.
—— Sousa Goucha v Portugal, app no. 70434/12, 22 March 2016.
—— Standard Verlags GMBH v Austria (No. 2), app no. 21277/05, 4 June 2009.
—— Steeg v Germany, app nos. 9676/05, 10744/05 and 41349/06, 3 June 2008.
—— Sunday Times v the United Kingdom, app no. 6538/74, 26 April 1979.
—— Ternovsky v Hungary, app no. 67545/09, 14 December 2010.
—— Valenzuela Contreras v Spain, app no. 27671/95, 30 July 1998.

—— Verlagsgruppe News GMBH and Bobi v Austria, app no. 59631/09, 4 December 2012.
—— Verlagsgruppe News GMBH v Austria (No. 2), app no. 10520/02, 14 December 2006.
—— Von Hannover (2) v Germany, app nos. 40660/08 and 60641/08, 07 February 2012.
—— Von Hannover v Germany, app no. 59320/00, 24 June 2004.
—— Weber and Saravia v Germany, app no. 54934/00, 29 June 2006.
—— Wegrzynowski and Smolczewski v Poland, app no. 33846/07, 16 July 2013.
Edwards, Lilian and Lachlan Urquhart. 'Privacy in public spaces: what expectations of privacy do we have in social media intelligence?' International Journal of Law and Information Technology 24.3 (2016): 279–310.
Epstein, Richard A. 'Privacy and the Third Hand: Lessons from the Common Law of Reasonable Expectations.' Berkeley Technology Law Journal 24 (2009): 1199.
Ewing, Keith David. 'The Human Rights Act and Labour Law.' Industrial Law Journal 27.4 (1998): 275–292.
Ford, Madelaine Virginia. 'Mosaic Theory and the Fourth Amendment: How Jones Can Save Privacy in the Face of Evolving Technology.' American University Journal of Gender, Social Policy & the Law 19.4 (2011): 18.
Ford, Michael. 'Two conceptions of worker privacy.' Industrial Law Journal 31.2 (2002): 135–155.
Gligorijević, Jelena. 'Children's Privacy: The Role of Parental Control and Consent.' Human Rights Law Review 19.2 (2019): 201–229.
Goffman, Erving. Asylums: Essays on the social situation of mental patients and other inmates. AldineTransaction, 1968, 220–221.
Henderson, Stephen E. 'Expectations of privacy in social media.' Mississippi College Law Review 31 (2012): 227.
Hirose, Mariko. 'Privacy in public spaces: The reasonable expectation of privacy against the dragnet use of facial recognition technology.' Connecticut Law Review 49 (2016): 1591.
Imwinkelried, Edward J. 'Dangerous Trend Blurring the Distinction between a Reasonable Expectation of Confidentiality in Privilege Law and a Reasonable Expectation of Privacy in Fourth Amendment Jurisprudence.' Loyola of Los Angeles Law Review 57 (2011).
Černič, Jernej Letnar. 'Impact of the European Court of Human Rights on the rule of law in Central and Eastern Europe.' Hague Journal on the Rule of Law 10.1 (2018): 111–37.
Katz v United States, 389 U.S. 347 (1967).
Kerr, Orin S. '"Katz" Has Only One Step: The Irrelevance of Subjective Expectations.' The University of Chicago Law Review (2015): 113–134.
Keymolen, Esther et al. 'At first sight', WODC, 2020.
Koerner, Matthew R. 'Drones and the fourth amendment: Redefining expectations of privacy.' Duke Law Journal 64 (2014): 1129.
Koops, Bert-Jaap, Jaap-Henk Hoepman and Ronald Leenes. 'Open-source intelligence and privacy by design.' Computer Law & Security Review 29.6 (2013): 676–688.
Matiteyahu, Taly. 'Drone regulations and fourth amendment rights: The interaction of state drone statutes and the reasonable expectation of privacy.' Columbia Journal of Law and Social Problems 48 (2014): 265.
Misthal, Marc P. 'Reigning in the Paparazzi: The Human Rights Act, The European Convention on Human Rights and Fundamental Freedoms, and the Rights of Privacy and Publicity in England.' Int'l Legal Perspective 10 (1998): 287.
Mund, Brian. 'Social media searches and the reasonable expectation of privacy.' Yale Journal of Law & Technology 19 (2017): 238.

Nouwt, Sjaak, Berend R. De Vries and Corien Prins. Reasonable expectations of privacy?: eleven country reports on camera surveillance and workplace privacy. Cambridge University Press, 2005.
Petkova, Bilyana. 'Privacy as Europe's first Amendment.' European Law Journal 25.2 (2019): 140–154.
Pineda-Moreno, 591 F.3d at 1216 (2010).
Pinto, Timothy. 'Who controls the Naomi Campbell information flow? A practical analysis of the law of privacy.' Journal of Intellectual Property Law & Practice 1.5 (2006): 354–361.
Plourde-Cole, Haley. 'Back to Katz: Reasonable Expectation of Privacy in the Facebook Age.' Fordham Urban Law Journal 38 (2010): 571.
Price, Michael W. 'Rethinking Privacy: Fourth Amendment Papers and the Third-Party Doctrine.' Journal of National Security Law & Policy 8 (2015): 247.
Purshouse, Joe. 'The Reasonable Expectation of Privacy and the Criminal Suspect.' The Modern Law Review 79.5 (2016): 871–884.
Reidenberg, Joel R. 'Privacy in public.' University of Miami Law Review 69 (2014): 141.
Richards, Neil. 'The Third Party Doctrine and the Future of the Cloud.' Washington University Law Review 94 (2016): 1441.
Riley v California, 573 U.S. 373 (2014).
Roth, David C. 'Florida v. Jardines: Trespassing on the Reasonable Expectation of Privacy.' Denver University Law Review 91 (2013): 551.
Sacharoff, Laurent. 'The Relational Nature of Privacy.' Lewis & Clark Law Review 16 (2012): 1249.
Salami, Emmanuel. 'The Case of Copland vs United Kingdom: A Cursory Look at the Right to Privacy Through the European Convention on Human Rights.' Available at SSRN 2973303 (2017).
Scassa, Teresa. 'Information privacy in public space: location data, data protection and the reasonable expectation of privacy.' Canadian Journal of Law and Technology 7.1 (2010): 7.
Scott-Hayward, Christine S, Henry F. Fradella and Ryan G. Fischer. 'Does privacy require secrecy: Societal expectations of privacy in the digital age.' American Journal of Criminal Law 43 (2015): 19.
Selbst, Andrew D. 'Contextual expectations of privacy.' Cardozo Law Review 35 (2013): 643.
Shaff, Colin. 'Is the Court Allergic to Katz? Problems Posed By New Methods of Electronic Surveillance to the Reasonable-Expectation-of-Privacy Test.' Southern California Interdisciplinary Law Journal 23 (2014): 409.
Shelton, Alicia. 'A Reasonable Expectation of Privacy Online: Do Not Track Legislation.' University of Baltimore Law Forum 45 (2014): 35.
Shivers, Nonnie L. 'Firing Immoral Public Employees: If Article 8 of the European Convention on Human Rights Protects Employee Privacy Rights, Then Why Can't We.' Arizona Journal of International and Comparative Law 21 (2004).
Slobogin, Christopher. Privacy at risk: The new government surveillance and the Fourth Amendment. University of Chicago Press, 2008.
van der Sloot, Bart. 'The Practical and Theoretical Problems with "Balancing": Delfi, Coty and the Redundancy of the Human Rights Framework.' Maastricht Journal of European and Comparative Law 23.3 (2016): 439–459.
—— 'The quality of law: How the European Court of Human Rights gradually became a European Constitutional Court for privacy cases.' Journal of Intellectual Property, Information Technology and Electronic Commerce Law 11 (2020): 160.

Slosser, Jacob Livingston. 'Components of Legal Concepts: Quality of Law, Evaluative Judgement, and Metaphorical Framing of Article 8 ECHR.' European Law Journal 25.6 (2019): 593–607.
Smith v Maryland, 442 U.S. 735 (1979).
Stanley, Paul. The law of confidentiality: a restatement. Bloomsbury Publishing, 2008.
Sychenko, Elena and Daria Chernyaeva. 'The Impact of the ECHR on Employee's Privacy Protection.' Italian Labour Law e-Journal 12.2 (2019): 171–188.
Taylor, Mark J and James Wilson. 'Reasonable expectations of privacy and disclosure of health data.' Medical Law Review 27.3 (2019): 432–460.
Taylor, Nick. 'A Conceptual Legal Framework for privacy, accountability and transparency in visual surveillance systems.' Surveillance & Society 8.4 (2011): 455–470.
Thomson, Mark. 'The Increasing Protection of Personal Privacy.' Convergence 4 (2008): 257.
Tomás Gómez-Arostegui, H. 'Defining private life under the European convention on human rights by referring to reasonable expectations.' California Western International Law Journal 35 (2004): 153.
Whittaker, Emily. 'Campbell v Mirror Group Newspapers LTD [2004] UKHL 22: The Relationship between the European Convention on Human Rights and Privacy in the Common Law.' UK Law Students Review 1 (2012): 94.


4
Multistakeholderism in the Brazilian General Data Protection Law: History and Learnings

BRUNO BIONI1 AND MARIANA RIELLI2

Abstract

The multistakeholder approach has been a striking feature of Internet governance in the last few decades, a field in which Brazil has played a pivotal role. This article analyses how multistakeholderism gained particular relevance in the process of drafting and enacting the Brazilian General Data Protection Law, passed in 2018 and known by the Portuguese acronym 'LGPD'. It explores the antecedents of this process, as well as the dynamics and relationships between stakeholders throughout the 10 years between the first version of the Ministry of Justice's draft Bill and the coming into force of LGPD, in September 2020. The article gives special attention to the formation of a tactical coalition between a significant number of players with the common goal of pressuring the Brazilian Senate to pass the LGPD Bill without substantial changes. It investigates the origins and the context behind this temporary coalition, how it was articulated and why it eventually broke down. The last section of this chapter also analyses how multistakeholderism affected the content of the norm, particularly the provisions related to the lawful bases of consent and legitimate interests and the institutional arrangement

1 Bruno Bioni is a PhD candidate in Commercial Law and holds a master's degree in Civil Law from the Faculty of Law of the University of São Paulo (USP). He was a visiting researcher at the European Data Protection Board/EDPB and the Council of Europe Data Protection Department. Bruno was also a visiting researcher at the Research Centre for Law, Technology and Society of the University of Ottawa Faculty of Law and a government relations adviser at the Brazilian Internet Steering Committee/CGI.br and the Brazilian Network Information Center/NIC.br. He is a member of the Latin American Network of Surveillance, Technology and Society Studies/LAVITS. He is a professor and founder of Data Privacy Brasil.
2 Mariana Rielli holds a bachelor's degree in law from the Faculty of Law of the University of São Paulo (USP). She has a professional background in NGOs, working primarily with freedom of expression and information and digital rights. She is currently the head of projects at the Data Privacy Brasil Research Association.

established by LGPD for its own enforcement. The LGPD created a solid data protection regime in Brazil, undeniably inspired by its European counterpart. Its most emblematic feature, however, is how it mobilised Brazilian civil society (in the broad sense) over the last decade, leaving a strong legacy of participation and democratic growth.

Keywords

Multistakeholderism, Brazilian General Data Protection Law, LGPD, data protection, privacy, rough consensus, consent, legitimate interests, enforcement.

I. Introduction

'LGPD is a lesson on participatory democracy, but above all it is a lesson on compromise, on how to build quality public policy with antagonistic parties that are ultimately motivated (…) by the best interest of Brazil', reports Sergio Gallindo, Brasscom's3 executive director and one of the people who articulated the multistakeholder coalition in favour of passing LGPD. 'I don't remember having a recent bill that had this kind of negotiation process (…) inside a House representative's office (…) in the light of day (…) in such an open way', confesses Marcelo Bechara, director of government relations at Grupo Globo and one of the most active players on this agenda. 'There was a detachment from the various stakeholders involved who agreed to engage in (…) a dialogue to build consensus,' recalls Renata Mielli, a member of the collective Barão de Itararé and one of the founders of the Coalition Rights in the Network. 'Suddenly there were people from civil society, clearly with a left-wing bias, drafting a text together with representatives of banks, clearly with a right-wing bias (…)', recalls House representative Orlando Silva, who was one of the rapporteurs of the Bill that later became LGPD. 'It was a democratic process (…) a collective process (…) of clashing ideas (…) for a common denominator', summarises Bruno Bioni, who at the time was a researcher at GPoPAI-USP and was also one of the founders of the Coalition Rights in the Network. These statements, from representatives of the private sector, NGOs and collectives, the public sector and academia, symbolise that multistakeholderism is in the very DNA of Act No. 13.709/2018 (LGPD). While the authors acknowledge that even the definition of multistakeholderism can be difficult and that there are important debates concerning the idea of a 'stakeholder', who defines who has a stake in something and the different levels

3 Brasscom is the Association of Information and Communication Technology (ICT) and Digital Technologies Companies.

of involvement that are required,4 this chapter employs the term to qualify the different processes, both formal and informal, in which representatives of various sectors – namely the private sector, public sector, academia and NGOs – interacted and participated in drafting, debating and passing LGPD. The following sections will show that the specificities of these processes and relationships changed over the years and depending on the forum where the debate was taking place at the time. Therefore, the common denominator that qualifies multistakeholderism for the purposes of this chapter is the involvement of representatives of different sectors, whether by contributing to a public consultation or public hearing, participating in meetings or engaging in 'behind the scenes' articulations. Having established that, the main thesis of the article is that LGPD was characterised by a multi-participative process that was particularly successful in forming the 'rough consensus'5 that permeated the articulations behind its passage in August 2018. This is one of the findings of the research project 'History of LGPD',6 which gathered over 10 hours of testimonies from 18 interviewees who took part directly in the process. Still, seeking to assess how representative of multistakeholderism LGPD is, the research method that gave rise to this article cross-references the content of the History of LGPD with the written contributions to the two Public Consultations on the Draft Data Protection Act, as well as the various versions of LGPD which were the subject of years of debate, both as a draft Bill and in Congress. Comprehensive data protection laws are regulatory frameworks of quite complex scope. This is because their object is not a specific sector but, on the contrary, any and all economic activity that involves the processing of personal data, encompassing both public and private sectors. This cross-cutting character attracts different interests and makes the structuring of standards for an appropriate information flow a difficult task. Hence the relevance of investigating the process of framing this type of legislation. In the case of LGPD, the multistakeholder negotiation process left public traces that are further investigated in this chapter. In the first part, the process of articulation and eventual passage of LGPD, through the collaboration and engagement of different players, is explored. In the

4 William J. Drake addresses some of these issues in: William J. Drake, 'Review of Multistakeholderism: External Limitations and Internal Limits,' Multistakeholder Internet Dialogue 2 (2011): 68–72.
5 The term, described in further detail in the following sections of this chapter, refers to a concept developed in the Internet governance literature to address situations where, although there is no pure consensus, in the sense of direct agreement between the members of a certain group on all matters, there is a sense of general agreement based on the consideration and discussion of all points of dissent. See Allen Buchanan and Robert O. Keohane, 'The Legitimacy of Global Governance Institutions,' Ethics & International Affairs 20, no. 4 (2006): 405–37. doi:10.1111/j.1747-7093.2006.00043.x.
6 History of LGPD was the first project of the Observatory on Privacy and Data Protection. It is a collection of several hours of interviews with individuals who were part, as representatives of different sectors, of the process that culminated in passing LGPD. It is presented in a multimedia platform composed of short videos interspersed with explanatory texts. Available at: http://35.227.175.13/en/memory/.

second part, a specific look is directed at some parts of the law that reflect how this negotiation process reached mediated solutions, to the point of carving out a peculiar normative design. Both in its form and content, the history of LGPD provides rich learnings about multistakeholderism.

II. Multistakeholderism in the Form of LGPD: Multiple Hands and Different Policy Spaces

A. The Long Road to Building a Multistakeholder Coalition

On 13 July 2018, Brasscom launched a Manifesto for the Passage of the Brazilian General Data Protection Law.7 On that occasion, 80 signatories, the vast majority organisations, met around a common objective: to pressure the Brazilian Senate to pass the then Bill No. 53,8 which eventually became Act No. 13.709/2018. The group was diverse, comprised of companies from different productive sectors, research centres, third sector entities such as NGOs9 and even public bodies, such as Procons – an administrative agency for consumer protection. However, the confluence of interests materialised in this document was not a given at the beginning of the process that culminated in the Brazilian General Data Protection Law. How, then, was a broad coalition formed, composed of groups that traditionally have no affinity and that even disputed different versions of the draft text? A look at the eight years that passed between the first antecedents of what came to be LGPD and its effective passage, in August 2018, may provide clues about this phenomenon.

B. Antecedents and Public Consultations

The normative framework of personal data protection in Brazil goes beyond a comprehensive law. Its pillars were built over decades of sectoral norms such as Act No. 8.078/1990 (Consumer Protection Code), Act No. 12.965/2014 (Brazilian Civil

7 'Manifesto pela aprovação da lei de proteção de dados pessoais', Brasscom, accessed 20 January 2021, https://brasscom.org.br/manifesto-pela-aprovacao-da-lei-de-protecao-de-dados-pessoais/.
8 At that time, the formation of a broad coalition was considered crucial: after the long and complex process of passing the Bill in the House of Representatives, which involved a provisional consensus among the various sectors involved, it was essential to pass the Bill in the Senate without substantial changes, as such changes would imply the return of the text to the House and a consequent delay in the process and possible loss of 'momentum' for the final passage of the law. See item 19, chapter 'An Astral Conjunction', of the History of LGPD, from the Observatory on Privacy and Data Protection. Available at: https://observatorioprivácia.com.br/memoria/2018-uma-conjuncao-astral/.
9 The Coalition Rights in the Network, formed at the time by 29 entities, signed the manifesto.

Rights Framework for the Internet) and Act No. 12.414/2011 (Credit Registration Act, amended by Complementary Act No. 166/2019).10 However, the relevance of a norm that would regulate the matter in a cross-sector manner was already perceived by certain stakeholders at least since the 1970s and 80s, as demonstrated by legislative initiatives responding to the project of a Brazilian National Registry of Natural People (RENAPE)11 and to the exponential processing of personal data by means of public databases,12 as illustrated by Bill No. 2.796/1980, authored by then Congresswoman Cristina Tavares.13 Then merely a seed, this idea gained greater traction from the mid-2000s, with the participation of Brazil in internal Mercosur negotiations,14 in which countries like Argentina15 pressed for the drafting of a common data protection regulation. Although the idea was never realised, the debates triggered by this proposal served as fuel for the internalisation of the subject by the Brazilian Executive Branch. The process of materialising this interest into the creation of a Brazilian general law began at the Secretariat for Legislative Affairs, in partnership with the Department of Consumer Protection and Defence, both from the Ministry of Justice, which, under the coordination of Laura Schertel Mendes and with the collaboration of then consultant Danilo Doneda, prepared a draft Data Protection Bill16 and, in December 2010, submitted it for public consultation,17

10 Danilo Doneda, 'Panorama histórico da proteção de dados pessoais', in Tratado de Proteção de Dados Pessoais, eds. Bruno Bioni, Danilo Doneda, Laura Schertel Mendes, Otávio Luiz Rodrigues Junior, Ingo Wolfgang Sarlet (Rio de Janeiro: Forense, 2021), 20-03.
11 Marcelo Viana, 'Um novo "1984"? O projeto RENAPE e as discussões tecnopolíticas no campo da informática brasileira durante os governos militares na década de 1970,' Oficina do Historiador, Special supplement, I EPHIS/PUCRS (May 2014): 1148–1171, http://revistaseletronicas.pucrs.br/ojs/index.php/oficinadohistoriador/article/view/18998/12057.
12 As Rafael Zanatta explains in the first episode of the History of LGPD: 'The first generation is a generation that brought up the discussion about personal data during the military dictatorship. So, for example, there was a project by the Federal Government at the time of Geisel to implement a national system of natural persons, called RENAPE, which brought a very strong reaction from lawyers such as Raymundo Faoro, René Ariel Dotti – including, the first bill on personal data in Brazil, in 78, which was authored by José Roberto Faria Lima (…), a failed attempt to make data protection legislation.' Available at: https://observatorioprivácia.com.br/memoria/2010-2015-o-tema-entra-em-pauta/.
13 Danilo Doneda, 'Panorama histórico da proteção de dados pessoais,' in Tratado de Proteção de Dados Pessoais, eds. Bruno Bioni, Danilo Doneda, Laura Schertel Mendes, Otávio Luiz Rodrigues Junior, Ingo Wolfgang Sarlet (Rio de Janeiro: Forense, 2021), 20-03.
14 Danilo Doneda explains that Working Subgroup number 13 (SGT13), referring to Electronic Commerce, received a proposal from the Republic of Argentina, in 2004, for common regulation.
From that moment on, there was a 'discreet but growing' debate on the subject within the Brazilian Government, one of whose milestones was the 'First International Seminar on Protection of Personal Data', promoted by the Ministry of Development, Industry and Foreign Trade.
15 Argentina was the first country in the bloc to pass a general data protection law, the Personal Data Protection Act No. 25.326 (PDPA), in 2000.
16 Available at: http://culturadigital.br/dadospessoais/files/2010/11/PL-Protecao-de-Dados.pdf.
17 The public consultation mechanism as an instrument of political participation is regulated by Act No. 9.784, of 29 January 1999, which regulates the administrative process within the scope of the Federal Public Administration (Direct and Indirect): 'Art. 31. When the matter of the process involves a matter of general interest, the competent body may, by means of a motivated order, open a period of public consultation for the manifestation of third parties (…)'.

following the mould of Act No. 12.965/2014 (Brazilian Civil Rights Framework for the Internet).18 On that occasion, the system received 794 contributions19 in the form of comments on each of the articles, paragraphs and items of the preliminary draft. The group of entities that participated already reflected a multistakeholder interest in the matter, since representatives from all sectors were present, although with uneven distribution:20 nine business associations,21 three NGOs22 and one research centre;23 from the private sector, four companies;24 from the public sector, one foundation25 and one committee from a sui generis public entity.26 According to Danilo Doneda, in an interview for the History of LGPD27 project, this first public consultation ‘wasn’t very fruitful’ from a technical standpoint. Instead, it revealed that at that time data protection was still a relatively unknown subject, confined mainly to academia. The years following that first Ministry of Justice initiative were marked by an environment that ‘delayed’ any substantial advance of a general data protection law, but at the same time helped build the contextual and conceptual basis on which this norm would come to be established years later. Thus, between 2011 and 2015, other pieces of legislation belonging to the data protection microsystem were passed, such as Act Nos. 12.414/2011 (Credit Registration Act), 12.527/2011 (Access to Information Act) and 12.965/2014 (Brazilian Civil Rights Framework for the Internet), the latter driven directly by the revelations made by Edward Snowden about the mass surveillance system built by the US Government.28 18 The Brazilian Civil Rights Framework for the Internet (MCI), in the draft Bill phase, was submitted to a public consultation promoted by the Secretariat for Legislative Affairs (SAL) of the Ministry of Justice (MJ) through the portal Cultura Digital between April and May 2010. The first consultation on the Draft Data Protection Law, in fact, took advantage of the platform created for discussions on the MCI. 19 Available at: http://culturadigital.br/dadospessoais/. 20 Such systematisation was taken from a report by the Brazilian Association of Direct Marketing (ABEMD), available at: www.abemd.org.br/interno/DadosPessoais_ContribuicoesdasEntidades.pdf. 21 ABEMD (Brazilian Association of Direct Marketing); ABRAREC (Brazilian Association of Company Customer Relations); ABA (Brazilian Association of Advertisers); QIBRAS (Quality of Information Brazil); Computer, Internet and Technology Commission – CIIT of the Brazilian Association of Computer and Telecommunications Law – ABDI; ABTA (Brazilian Pay-TV Association); SindiTelebrasil (National Union of Telephony and Cellular and Personal Mobile Service Companies); Brazilian Association of Credit Card and Service Companies; National Confederation of Financial Institutions. 22 IDEC (Brazilian Institute for Consumer Protection); Hacker Transparency Organization; PROTESTE (Brazilian Consumer Protection Association). 23 Research Group on Public Policies for Access to Information at the University of São Paulo. 24 Equifax Brasil; Nokia S.A.; Telemar Norte-Leste S.A. (‘Oi’); Morrison & Foerster (MoFo) – Global Privacy Alliance (GPA). 25 PROCON – SP Foundation. 26 OAB/SP Science and Technology Commission. 27 See item 6 of Episode 1 ‘The subject becomes part of the public agenda’ of the History of LGPD. Available at: https://observatorioprivacidade.com.br/memoria/2010-2015-o-tema-entra-em-pauta/. 
28 Francisco Brito Cruz describes how the ‘Snowden effect’ directly influenced the text of the Brazilian Civil Rights Framework for the Internet regarding privacy and data protection. Francisco Carvalho de Brito Cruz, Direito, democracia e cultura digital: a experiência de elaboração legislativa do Marco Civil da Internet (master’s thesis, University of São Paulo, 2015), 138.

In the meantime, two comprehensive data protection Bills were presented to Congress: in the House, Bill 4060/2012,29 authored by then House representative Milton Monti (PR-SP); in the Senate, Bill 330/2013,30 authored by then senator Antônio Carlos Valadares (PSB/SE). The latter would come to ‘compete’ with the proposal that was eventually sent by the Executive. This busy context left the Ministry of Justice draft Bill in the background for the period and also highlighted the need to ‘work’ the text and adapt it to the normative and contextual changes that were occurring. Internationally, it was in that 2012–2016 period that the European General Data Protection Regulation (GDPR), widely recognised as a strong influence on LGPD, was discussed and passed. Thus, on 28 January 2015, the Ministry of Justice submitted a new version of the draft Bill for a second public consultation,31 which lasted until 7 July of the same year and received more than 1,800 contributions, divided into: i) comments on each part of the text (articles, items, paragraphs); ii) general comments divided by thematic areas; and iii) written contributions in PDF format. In addition, the system allowed both individuals and companies to participate.32 The sectoral distribution on that occasion, taking into account only registered legal entities, was as follows:33 11 business associations and the like,34 four non-governmental organisations35 and two research centres;36 in the private sector, six companies;37 in the public sector, a ministerial secretariat.38 Finally, it was 29 Dispõe sobre o tratamento de dados pessoais, e dá outras providências, Bill No. 4060, Brasília, DF, (2012), www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=548066. 30 Dispõe sobre a proteção, o tratamento e o uso dos dados pessoais, e dá outras providências, Bill No. 330, Brasília, DF, (2013), www25.senado.leg.br/web/atividade/materias/-/materia/113947. 31 Available at: http://pensando.mj.gov.br/dadospessoais/. 32 For identification, an e-mail address, a password and a username were required. 33 Such systematisation was extracted from the report ‘What is at stake in the debate on personal data in Brazil? Final report on the public debate promoted by the Ministry of Justice on the draft law on the protection of personal data’, from InternetLab. Internetlab, O que está em jogo no debate sobre dados pessoais no Brasil? Relatório final sobre o debate público promovido pelo Ministério da Justiça sobre o Anteprojeto de Lei de Proteção de Dados Pessoais (São Paulo, 2016), www.internetlab.org.br/wp-content/uploads/2016/05/reporta_apl_dados_pessoais_final.pdf. 34 ABA (Brazilian Association of Advertisers); ABDTIC (Brazilian Association of Information and Communications Technology Law); ABEMD (Brazilian Association of Direct Marketing); ABEP (Brazilian Association of Research Companies); ABINEE (Brazilian Association of the Electrical and Electronic Industry); ABRANET (Brazilian Internet Association); Association of Religious Liberty and Business; CNI (National Confederation of Industry); CNseg (National Confederation of General Insurance Companies, Private Pension and Life, Supplementary Health and Capitalization); Febraban (Brazilian Federation of Banks); FIESP (Federation of Industries of the State of São Paulo); SindiTeleBrasil (National Union of Telephony and Cellular and Personal Mobile Service Companies). 
35 CTS-FGV (Fundação Getúlio Vargas Technology and Society Center); Intervozes; ITS-Rio (Institute of Technology and Society of Rio de Janeiro) and Proteste (Brazilian Association for Consumer Protection). 36 GEPI-FGV (Teaching and Research Group on Innovation of the Getúlio Vargas Foundation) and GPoPAI (Research Group on Public Policies on Access to Information). 37 3M do Brasil; Boa Vista Serviços; Cisco (Cisco Systems, Inc); Claro; Sky and Vivo. 38 SEAE/MF (Economic Monitoring Secretariat of the Ministry of Finance).

possible to note a more significant participation, in this process, of foreign entities representing different sectors.39 With the maturation of the subject over the years and its evident relevance and horizontality, in addition to the fact that it is a relatively technical matter, the second public consultation is considered a ‘milestone’ by the stakeholders who participated in it: a movement of ‘professionalisation’ could be observed, mainly among the third sector and the public policy teams of companies, something that was considered necessary for these groups to be ‘up to the standard’ of the debate. While the two public consultations carried out by the Ministry of Justice in 2010–2011 and in 2015 did not yet represent the culmination of multistakeholderism in the process of drafting the Brazilian General Data Protection Law, they were an important first step, built on the learnings of previous processes, such as the Brazilian Civil Rights Framework for the Internet, to gather different perspectives that eventually had repercussions on the content of the proposal.

C.  The Brazilian Internet Steering Committee Seminar

Another element that helped shape LGPD and the multistakeholder character that it incorporated was the Internet Steering Committee (CGI.br) and, specifically, its Privacy and Data Protection Seminar, whose first edition took place in 2010, in partnership with the Information and Communication Technologies Working Group – GTTIC, of the Federal Public Prosecutor’s Office and the Getúlio Vargas Foundation of São Paulo – FGV/SP.40 The multistakeholder model of CGI.br, whose origins can be found in the early days of the Internet in Brazil,41 was soon incorporated by the Seminar, which, throughout its 11 years of existence, has consolidated itself as the main policy space42 40 An important antecedent of the Seminar, which influenced the entire debate on Internet governance in the following decade, was the Internet Decalogue, launched in 2009, which contains ten principles that should guide the use of the network. Item number 1 concerns Freedom, privacy and human rights: ‘The use of the Internet must be guided by the principles of freedom of expression, privacy of the individual and respect for human rights, recognizing them as fundamental for the preservation of a just and democratic society.’ 41 A process that had, at different times, mutual support between academic entities, such as Fapesp, third sector entities, such as IBASE, and the Brazilian Government, in addition to the companies that eventually promoted the commercial distribution of the Internet in the country. Raquel Gatto and Demi Getschko, ‘Governança da Internet: conceitos, evolução e abrangência’, Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos, 27 (2009): 37, http://ce-resd.facom.ufms.br/sbrc/2009/081.pdf. 42 Mayer conceptualises a policy space as a space for public debate that brings together stakeholders, instruments, objectives and other elements for the elaboration of policies on a given subject. In the area of Internet governance, Levinson summarises it as a space for articulation and learning of the subjects thereby discussed. Levinson and Marzouki understand that today, these spaces are complex, as they involve several stakeholders such as state agencies and organisations, in short, players who can have different interests and can cross national and regional borders. See Nanette Levinson and Meryem Marzouki, ‘International Organizations and Global Internet Governance: Interorganizational Architecture’, in The Turn to Infrastructure in Internet Governance, eds. Laura DeNardis, Francesca Musiani, Nanette S. Levinson and Derrick L. Cogburn (New York: Palgrave Macmillan, 2016), 47–71. Jörg Mayer, ‘Policy space: what, for what, and where?’ Development Policy Review, v. 27, no. 4 (2009): 373–395, www.wto.org/english/res_e/reser_e/gtdw_e/wkshop08_e/mayer_e.pdf.

on privacy and data protection matters in the country. The privileged position of the Seminar arises not only from the high level of discussions and articulations held in it, but also from the fact that this knowledge is produced collectively by representatives of all sectors affected by the subject – business, academia, NGOs and government – a characteristic that grew even stronger over the years.43 As a qualified forum for discussion on privacy and data protection in general, the CGI.br Seminar hosted debates about the Bills (as well as the Ministry of Justice draft Bill) that would eventually become the LGPD and about certain aspects of the law after its enactment in 2018.44 Thus, over almost a decade, panelists representing different sectors, as well as the audience, were able to have contact with the specifics of the process and the disputes over the content of the legislation. Nevertheless, perhaps the most direct influence of the Seminar, and specifically of the multistakeholder model of the Internet Steering Committee (CGI.br), over the LGPD process has been on the role of the rapporteur of the Bills that were presented to the House of Representatives, Congressman Orlando Silva (PCdoB-SP). As described in the next item, Silva conducted the works in a highly participatory manner that was, according to himself, inspired by the format of the Seminar.45 43 In the 2020 edition, for example, all roundtables were composed of at least one representative from each sector. See: https://seminarioprivacidade.cgi.br/. 44 Throughout its editions, the CGI.br and NIC.br Privacy Seminar addressed the topic of privacy and data protection regulation in the country in several instances, including, in addition to LGPD itself, discussions related to its preliminary draft Bill and Congressional Bills. 
Examples are the panels ‘Palestra Magna – Regulatory panorama of privacy in Brazil’ (2012), ‘Roundtable VI “Global and national perspectives on the protection of personal data”’ (2013), ‘Roundtable V “Public and private databases: potential risks for the protection of privacy”’ (2013), ‘Seminar II: The operationalization/application of Brazilian law: legal imperatives in interface with technological imperatives’ (2014), ‘Seminar III: Risks and prospects for the protection of privacy and personal data’ (2014), ‘Session 7: Public Debate on Draft Bills and the APL on Privacy and Data Protection’ (2015), ‘Session 3: Seminar Forms of Consent and Protection of Fundamental Rights and Freedoms’ (2015), ‘Session 4: Debate cocktail: Legislative initiatives on personal data protection’ (2016), ‘Session 2: Seminar – Sharing economy: what is the impact of personal data protection on this (new) business model?’ (2016), ‘Session 4: Seminar Regulatory bodies, inspection and enforcement of personal data protection laws: an overview from the foreign experience’ (2017), ‘Session 3: Seminar Protection of personal data as an element of innovation and competitiveness: the challenges of building a “Digital Agenda” for Brazil’ (2017), ‘Panel 3: The role of the private sector in protection of privacy and personal data’ (2018), ‘Debate Cocktail on the Brazilian Situation: Regulatory Models for the application and enforcement of personal data protection laws’ (2018), ‘Panel 1: General Data Protection Regulation and Convention 108: first impressions and expectations about the process of modernizing personal data protection standards’ (2018), ‘Cocktail of debates – Perspectives for the implementation of data protection in Brazil and the foreign experience: cooperation between regulator, regulated and data subjects’ (2019), ‘Panel 2 – Allocating responsibilities, rights and duties of data ecosystem agents: a cross-sectional look at LGPD’ (2019), ‘Panel 3 – Security guarantees for compliance with the General Data Protection Law’ (2020). 45 Orlando Silva speaks at the opening of the VIII Seminar on Protection of Privacy and Data Protection of the Internet Steering Committee (CGI.br). Available at: www.youtube.com/watch?v=GKMul1c4YYU&list=PLQq8-9yVHyOZVGYJeegT8I-mHrWOPIiYh&index=1&t=6s&ab_channel=NICbrvideos.


D.  Special Commission and Public Hearings

Continuing the chronological history of LGPD, after the second public consultation promoted by the Ministry of Justice, between January and July 2015, the government presented the final version of the draft Bill46 on 20 October of the same year. Even so, some time passed before the draft was finally sent to the House, which occurred only on 12 May 2016, as one of the last acts of then President Dilma Rousseff before her definitive removal from office due to the impeachment process. This moment marks the beginning of a broader, deeper and more diverse debate on the subject within the Legislative, which, before 2016, was focused on the two Bills in progress until then – Bills 4060/2012 and 330/2013. However, the ‘new’ proposal, which received the number 5276/2016, carried at least five years of previous debates, with contributions from all sectors. The groups and individuals who had already engaged in the drafting process through public consultations would now become involved in a new phase of discussions, in a new forum, with its own specific set of rules. It was at that moment, in fact, that NGOs and collectives of activists (not all active in the process until then) who had already experienced the articulations for the passage of the Brazilian Civil Rights Framework for the Internet decided to formalise a network of organisations focused on digital rights advocacy, named Coalition Rights in the Network.47 Its first mission was precisely to guarantee that the interests of broader civil society would be incorporated into LGPD, which was then under discussion in Congress.48 The Coalition’s influence on the ongoing process was evident in the articulation promoted by some of the organisations for the choice of a rapporteur who, at the same time, understood the importance of the matter and was willing to listen to all interested parties. Such articulation resulted in the appointment of House representative Orlando Silva (PCdoB-SP) to report Bill 5276/2016, now attached to Bill 4060/2012, still within the scope of the Labor, Administration and Public Service Commission (CTASP). Acknowledging the need to expand and deepen the debate, while maintaining relative autonomy, Silva requested that the Bill be processed by four House Commissions, which, according to the House’s Internal Regulations, automatically triggers the creation of a Special Commission exclusively to discuss the matter. Thus, the Special Commission on Processing and Protection of Personal

46 Ministério da Justiça, ‘MJ apresenta nova versão do Anteprojeto de Lei de Proteção de Dados Pessoais’, Pensando o Direito, 21 October 2015, http://pensando.mj.gov.br/2015/10/21/mj-apresenta-nova-versao-do-anteprojeto-de-lei-de-protecao-de-dados-pessoais/. 47 Available at: https://direitosnarede.org.br/. 48 See item 2 of ch 2 ‘The Draft arrives at the House’ of the History of LGPD. Available at: https://observatorioprivacidade.com.br/memoria/2016-2017-o-anteprojeto-chega-a-camara/.

Data was installed on 26 October 2016, under the presidency of House representative Bruna Furlan (PSDB-SP).49 Orlando Silva was once again appointed rapporteur. The Special Commission then became the new forum for this second phase of multistakeholder discussion around the Bill. A few months after its formalisation, the first50 of a total of 11 thematic public hearings51 was held (on topics such as fundamental data protection concepts, regulation models, civil liability models, the definition of personal, sensitive and anonymised data, and the lawful basis of legitimate interests, among others). Two international seminars were also organised, and all events managed to replicate the successful multistakeholder logic of the CGI.br Privacy and Data Protection Seminar.52,53 By the end of the cycle of public hearings, in July 2017, there was a substantial gain in collective knowledge on the subject (and even a certain ‘levelling’ in relation to parliamentarians and other players who engaged in the discussion at a later time). There was also a reduction in tensions between representatives of NGOs and academia and representatives of the private sector. Bia Barbosa, also

49 An important point is that parliamentarians who were very familiar with the subject and who were considered strategic due to their good relations with various parties and actors were appointed to the three vice-presidencies of the Special Commission of LGPD (Art 39 of the Internal Regulations of the Chamber of Deputies), which facilitated negotiations. They were: 1st Vice-President: André Figueiredo (PDT/CE), 2nd Vice-President: Alessandro Molon (PSB/RJ), 3rd Vice-President: Milton Monti (PR/SP). Available at: www2.camara.leg.br/atividade-legislativa/comissoes/comissoes-temporarias/especiais/55a-legislatura/pl-4060-12-tratamento-e-protecao-de-dados-pessoais/conheca-a-comissao/membros-da-comissao. 50 In his work plan, presented on 22 November 2016, Congressman Orlando Silva presented as the first item of activities to be carried out by the Commission: ‘a) Conduct hearings of invited experts, civil servants and public authorities who can contribute to the carrying out of this work, including members of the Judiciary, the Public Prosecutor’s Office and parliamentarians reporting on bills related to the subject; Law operators in general; and representatives of society organizations specialized in the subject, among others; (…)’. Available at: www2.camara.leg.br/atividade-legislativa/comissoes/comissoes-temporarias/especiais/55a-legislatura/pl-4060-12-tratamento-e-protecao-de-dados-pessoais/documentos/outros-documentos/roteiro-de-trabalho-apresentado-em-22-11-2016. 51 Part of the presentations by the participants of the 11 public hearings can be accessed here: www2.camara.leg.br/atividade-legislativa/comissoes/comissoes-temporarias/especiais/55a-legislatura/pl-4060-12-tratamento-e-protecao-de-dados-pessoais/documentos/audiencias-e-eventos. 52 Orlando Silva, in an interview, stated that multistakeholderism was not just an incidental feature of the process, but a requirement for each of the roundtables. See Item 14 of ch 2, ‘The Draft Arrives at the House’, from the History of LGPD. 53 Between the first and the last public hearing, about eight months elapsed and, in this period, participants in the process report that other articulations developed: on the side of organised civil society, for example, a campaign was carried out to bring the debate on data protection closer to society as a whole (the ‘Your data is you’ campaign); at the same time, NGOs, representatives of the private sector and members of academia all moved to provide Commission members with information, from the most basic to the most complex, on the subject; Brasscom, the largest association of companies in the ICT sector, organised a manifesto during this period (https://brasscom.org.br/manifesto-sobre-a-futura-lei-de-protecao-de-dados-pessoais/), in which it sought to summarise its positions on basic topics of the legislation, such as the concept of personal data, consent and other lawful bases, regulatory models, etc. At this point in the debate, Rafael Zanatta states that the relationship between NGOs and the private sector/associations representing companies was one of disagreement and antagonism, not articulation. See more in Episode 2 ‘The Draft arrives at the House’ of the History of LGPD.

interviewed by the History of LGPD project and one of the founders of the Coalition Rights in the Network, explained that it was during this process that NGOs and activists realised that the private sector was not ‘monolithic’ and that, within it, there was also a myriad of different interests. As a consequence, it was absolutely vital to promote an open dialogue with all parties, particularly because a Brazilian General Data Protection Law, by its very nature, would affect everyone.54

E.  2018: ‘Astral Conjunction’ and the Coordinated Movement to Pass LGPD

After two ‘phases’ of multistakeholder dialogue around a Brazilian General Data Protection Law, at the end of 2017 there was still no concrete prospect of passing the law.55 It took what Doneda calls an ‘astral conjunction’ for that to become, in fact, possible. In summary, this favourable context can be attributed to at least four prominent factors, which occurred concomitantly in Brazil and internationally and contributed to a scenario that was highly auspicious for the passage of a Brazilian General Data Protection Law. They were: i) the Cambridge Analytica scandal, which expanded a debate previously restricted to specific circles, turning it into a ‘hot topic’ for the mainstream media and the general public; ii) the entry into force, in May 2018, of the GDPR, which increased the need for greater legal certainty regarding data processing in Brazil; iii) Brazil’s expressed desire to join the Organization for Economic Cooperation and Development (OECD), which requires, as good practice, the regulation of the use of personal data, as well as an independent and autonomous supervisory body; and, finally, iv) an internal articulation within the House of Representatives for the passage of the amendments to the Credit Registration Act, which involved the passage of the General Data Protection Law as an indispensable condition. Given this scenario of a true ‘ultimatum’ for Congress to pass the law, rapporteur Orlando Silva noted the need to finally overcome some remaining disagreements over key provisions. To address this, the Congressman resorted to a ‘third phase’ of multistakeholder dialogues, bringing together representatives of all sectors for ‘working sessions’ with the following methodology, described in an interview: the reading aloud of every single provision of the latest version of the Bill, with highlights of specific points of the text on which any participant raised a disagreement, followed by arguments against and in favour of the highlighted section and a vote to decide the final version.56 54 See Item 17 of ch 2 ‘The Draft arrives at the House’ of the History of LGPD. 55 Doneda recalls that ‘in many moments it was thought that nothing was going to happen’, but that, at the same time, today he considers that this long period of debates was essential for the maturation of the law. See Item 17 of ch 2 ‘The Draft arrives at the House’ of the History of LGPD. 56 See Item 11 of ch 3 ‘An astral conjunction’ of the History of LGPD. Available at: https://observatorioprivacidade.com.br/memoria/2018-uma-conjuncao-astral/. In the same item, Sérgio Gallindo,

This method was not adopted by chance; rather, it was transplanted from the deliberations of student movements such as the National Student Union (UNE), of which Rapporteur Orlando Silva was the first black President, elected in 1995. It materialises the idea of a ‘rough consensus’, applied by the literature57 to the context of Internet governance, which refers to situations in which, although there is no pure consensus, in the sense of direct agreement between the members of a certain group on all matters, there is a general agreement based on the consideration and discussion of all points of dissent. According to Raquel Gatto, when studying this concept within the scope of the Internet Engineering Task Force (IETF), ‘rough consensus is reached when all issues have been discussed, but not necessarily included in the final text’.58 Another element of the IETF concept that translates well into Silva’s method is the central role played by the decision-maker himself, in that case, the Rapporteur. Besides managing the process, the idea of a rough consensus presupposes a final decision, informed by the ‘dominant view’ of the group. In sum, in order to reach a rough consensus, all matters and positions must be considered and the final version, while not necessarily accommodating all points of disagreement, must be representative of a broader sense of agreement based on the legitimacy of the process itself and of the one conducting it. About this process, Marcel Leonardi, who was Google’s public policy manager at the time, understands that, had it not been for the somewhat innovative initiative of Congressman Silva, the Bill would have followed the traditional trajectory of Congress: after the presentation of a basic text by the rapporteur, each sector would articulate to see its interests represented in amendments or other types of alterations typical of the legislative process, without a dialogue or search for consensus of any kind.59 Similarly, Marcelo Bechara notes that the process of drafting the LGPD had an unusual level of ‘openness’, including detailed negotiations that occurred ‘in the light of day’, accessible to anyone interested.60

from Brasscom, describes another rule of the process established by Silva: matters without a chance of consensus would be decided by him. He also claims never to have witnessed a legislative process with ‘such a degree of transparency and such a degree of internal democracy’, an opinion shared by Marcelo Bechara, also in an interview for the project. 57 See Buchanan and Keohane, ‘The Legitimacy of Global Governance Institutions’. 58 Raquel Gatto, A perspectiva contratualista na construção do consenso da sociedade na internet (PhD dissertation, University of São Paulo, 2016). Available at: https://sapientia.pucsp.br/bitstream/handle/18852/2/Raquel%20Fortes%20Gatto.pdf. 59 See Item 12 of ch 3 ‘An astral conjunction’ of the History of LGPD. 60 ‘I don’t remember having a recent bill that had a negotiation process like that, in a room inside the Congressman’s office, in the light of day, in such an open way. I don’t remember recently having a project where everyone sat down to discuss the text, in front of each other, article by article. I don’t think it’s the first time it happened, I hope it won’t be the last, but I don’t remember, recently, this model, opening it up for everyone, to have happened.’

Once this last round of discussions on the text of the House Bill was concluded, there was, for a brief period, a ‘dispute’ between this version and the Bill that had been underway since 2013 in the Senate (Bill 330/2013). The latter was admittedly more favourable to sectors that make more intensive use of personal data for different purposes. Thus, Marcelo Bechara and Aloysio Nunes report that, in parallel to the apparent consensus obtained in the House, there were articulations ‘behind the curtains’ to pass the Senate Bill first.61 This dispute ended, at least provisionally, on 29 May 2018, with the House passing the Bill that had originated seven years earlier in the first draft by the Ministry of Justice. This was a crucial moment for the diverse group of stakeholders who were involved in the process: the proximity of the 2018 World Cup and the parliamentary recess, as well as the weight of several years of articulations and efforts to reach a rough consensus, created pressure for there to be no substantial changes in the Senate once it received the text (changes which, in turn, would require a return to the House and delay the process). It was in this very context that, finally, a tactical coalition emerged, materialised in the manifesto headed by Brasscom, whose objective was, primarily, to create sufficient political pressure for the Senate to pass the Bill without material changes. To that end, Sérgio Gallindo points out, it was necessary to show the senators that the text had already been the subject of wide discussion and, more than that, of a reasonable consensus.62 In other words, it was a matter of showing that this Bill was the result of mutual compromises between different sectors of Brazilian society. Thus, under the leadership of then senator Ricardo Ferraço, the Bill was passed with minor textual changes and could therefore be sent to be signed into law by then President Michel Temer. All the initiatives to get different sectors of society engaged in the drafting process, as well as the proactive accommodation of interests that were initially antagonistic but settled along the way, culminated in this last joint movement to guarantee that the efforts of so many years would not be lost near the ‘finish line’.

F. Tactical Coalitions: What they are and How they are Formed63

The idea of a tactical coalition is not new, particularly in the fields of political science and international relations, and it is related to the distinction between the concepts of ‘strategy’ and ‘tactics’, the first representing a set of approaches aimed 61 See Item 14 of ch 3 ‘An astral conjunction’ of the History of LGPD. 62 See Item 19 of ch 3 ‘An astral conjunction’ of the History of LGPD. Andriei Gutierrez, in the same item, affirms that, for the private sector, the claims had already ‘run out’ and there was a certain general acceptance that, for the sake of the legal certainty that a general law would bring, it was more strategic to leave specific disagreements aside and join other sectors in defence of Senate approval. 63 We are particularly grateful for Rafael Zanatta’s comments on this subject, pointing us to the literature reviewed here.

at fulfilling long-term objectives and the second, the specific actions considered necessary to achieve short-term goals, informed by other, broader aspirations. Studying political alliances in the twenty-first century, Jeremy Ghez defines the objectives of tactical alliances as follows: ‘The primary purpose of a tactical alliance is to counter an immediate threat or adversary that has the potential to challenge a state’s most vital interests’.64 Tactics, in that sense, are pragmatic in essence, precisely because the highly specific and temporary nature of such relations often derives from the fact that there is no ideological approximation between the players that would justify a permanent amalgamation. Research on environmental initiatives65 also reveals that tactical activities, including alliances or coalitions, tend to require less ‘substantial investment’ from organisations than strategic, long-term ones, which may involve an ‘organisational or managerial shift’. In the specific case of the Brazilian General Data Protection Law, its ‘horizontal’ scope, which encompasses virtually all sectors, placed the passage of the law on the strategic agendas of organisations and stakeholders that otherwise pursue very different, and in some cases conflicting, objectives. The tactical option for a temporary coalition occurred despite these differences, based on a contextual analysis that a ‘sum of forces’ was necessary, at that moment, to achieve a common goal. In addition to the specific moment in time at which the tactical coalition emerged, it is also important to take into account the previous history among the various stakeholders that formed it. As described in the previous items of this chapter, these players interacted at different times during the processing of LGPD, whose participatory nature allowed them to grow closer through, for example, the public consultations and public hearings, in addition to informal spaces. At the same time, the strong culture of multistakeholderism in Internet governance, led in Brazil by the Internet Steering Committee and materialised in the aforementioned seminars, can be understood as a relevant factor in the constitution of the tactical coalition, as it created, over the years, a privileged space in which players from different sectors, with different and potentially conflicting views, could not only coexist but effectively dialogue and contribute to the policy-making process. The two public consultations should likewise be seen as a public forum for such open and multiparticipative dialogue. As will be pointed out later, the second public consultation played a key role. The Ministry of Justice, which ‘held the pen’ at that moment, could clearly see the existence of very polarised positions and 64 Jeremy Ghez, ‘Alliances in the 21st Century: Implications for the US-European partnership’, Rand Europe (United Kingdom, 2011), 10, www.rand.org/content/dam/rand/pubs/occasional_papers/2011/RAND_OP340.pdf. 65 See Michael Polonsky, Kathryn Lefroy, Romana Garman and Norman Chia, ‘Strategic and Tactical Alliances: Do Environmental Non-Profits Manage Them Differently?’ Australasian Marketing Journal 19 (2011): 43–51.

understood the need to find common ground between them before sending the draft Bill to Congress. The formation of a tactical coalition does not presuppose an absence of conflicts; on the contrary, as previously seen, the process that culminated in the passage of LGPD was long and tortuous. The ultimate association of players that traditionally find themselves at opposite poles of the political spectrum or, more specifically, players with starkly divergent approaches to privacy and data protection, derived from a very specific confluence of factors which, together, precipitated the creation of this alliance. The tactical nature of the coalition, marked by both pragmatism and transience, is even more evident when one observes that the next acts of this long trajectory to establish a comprehensive regulation of data protection in Brazil did not maintain that same characteristic. Absent the context that triggered the union of the 80 entities in favour of Senate passage of the Bill without substantial changes, the multistakeholder coalition broke down, a movement that can be clearly observed in several emblematic moments that followed the enactment of LGPD.

G. The Breakdown of the Coalition after the Passage of LGPD: Vetoes, the Authority Saga and Debates Around Validity

The undeniable victory of a unanimous approval of the law in both the House and the Senate did not, however, mark the end of LGPD’s ‘via crucis’. This is because there was still at least one step left in the constitutional process of lawmaking: presidential sanction, which, in this case, would come from a government that was no longer the same as the one that had introduced the draft Bill. Thus, Miriam Wimmer reports that at that time several public bodies that had not actively participated in the legislative process met to discuss and propose a series of vetoes, due to fears about how the data protection regulation could negatively affect them.66 Faced with this movement, the various stakeholders who had been engaging in the process mobilised to pressure Temer67 and demonstrate the relevance of LGPD and even how it could eventually become a ‘legacy’ of his government.68 66 See Item 22, ch 3 ‘An astral conjunction’ of the History of LGPD. 67 In item 22, ch 3 ‘An astral conjunction’ of the History of LGPD, Bechara explains: ‘there was a reaction from several ministries when the bill passed, including some of them asking for a full veto. Then a figure emerges, who was the Deputy Chief of Legal Affairs of the Civil House at the time, Dr. Gustavo Rocha, who was a “lion”. He managed to show the President of the Republic the importance of the law, and the president understood. And practically against everything and everyone, the bill was signed into law, except for the vetoes on the National Authority (…). Gustavo was a strong, firm guy, he acted, worked, had an important defense in relation to the sanction of the law. And the few vetoes they had were thanks to him, because otherwise there would have been much more’. 68 Gutierrez in item 23, ch 3 ‘An astral conjunction’ of the History of LGPD.

Orlando Silva’s remarks about the period are emblematic: he states that, despite all the multistakeholder efforts made to reach a relative consensus on the provisions of the law, when a new opportunity arose to negotiate vetoes of specific sections, players who had ‘sat at the table’ with his office and other stakeholders resorted to pressuring the government to obtain a result more favourable to their particular interests.69 Notably, in the interval in which the possible vetoes were discussed, neither the Coalition Rights in the Network nor Brasscom released public statements; in this case, the negotiations took place ‘behind the scenes’. Details on the content of the vetoes made by Michel Temer are not the subject of this chapter, but it is noteworthy that it was at that time that the proposal to create a National Data Protection Authority with a regulatory agency structure, with autonomy and administrative, functional and financial independence, was suppressed and replaced by a model of a government body subordinated to the Presidency of the Republic, according to Provisional Measure 869/2018.70 The next ‘phase’ of LGPD was the debates on Conversion Bill 07/2019, which originated from MPV 869/2018 and altered LGPD in order to create the new Authority. According to the Brazilian Constitution, Provisional Measures have immediate (but provisional, as the name suggests) force of law, but must later be analysed by Congress and definitively converted, or not, into ordinary legislation.71 This process allows the content of the Measure to be changed by the Legislative, through a conversion Bill, which created a new opportunity to discuss the content of LGPD at that time.72 It so happens that, in 2019, a new legislature started, changing the composition of Congress, and there was no longer the ‘momentum’ or the ‘astral conjunction’ that had enabled the formation of a tactical coalition. Thus, the reports made by stakeholders who took part in this process73 show that, except for the consensus on the need to maintain the independent character of the Data Protection Authority – which was the subject of yet another manifesto headed by Brasscom74 – there was again a split between sectors, who defended their own proposals separately. 69 ibid. 70 Altera a Lei No. 13.709, de 14 de agosto de 2018, para dispor sobre a proteção de dados pessoais e para criar a Autoridade Nacional de Proteção de Dados, e dá outras providências, Provisional Measure No. 869, Brasília, DF, (2018), www.congressonacional.leg.br/materias/medidas-provisorias/-/mpv/135062. 71 ‘Entenda a tramitação da Medida Provisória’, Congresso Nacional, accessed 20 January 2021, www.congressonacional.leg.br/materias/medidas-provisorias/entenda-a-tramitacao-da-medida-provisoria. 72 On that occasion, Congressman Orlando Silva was again appointed rapporteur in the Commission charged with delivering an opinion on Provisional Measure 869/2018 within 45 days, under penalty of blocking the agenda of the House. 73 Sérgio Gallindo, for example, affirms the following: ‘At that moment, business representatives felt it was right to advocate improvements, just as civil society also advocated its own improvements. So, in the course of Provisional Measure 869/2018, there was no coalition per se. 
The parties resolved to address the issues from the point of view of improvement.’ See Item 3, ch 4, ‘The Authority saga’ of the History of LGPD. Available at: https://observatorioprivacidade.com.br/memoria/2019-a-saga-da-autoridade/. 74 ‘Manifesto pela criação imediata da Autoridade Nacional de Proteção de Dados Pessoais – ANPD’, Brasscom, accessed 20 January 2021, https://brasscom.org.br/manifesto-pela-criacao-imediata-da-autoridade-nacional-de-protecao-de-dados-pessoais-anpd/.


H.  Summarising: LGPD as an Example of Democratic Engagement

The long period that preceded the passage of the Brazilian General Data Protection Law, in 2018, as well as the one that followed it and lasted until the definition of its entry into force, in September 2020, was marked by a multi-participative nature, which enriched the process and brought it greater legitimacy. Multistakeholderism, however, is about bringing together different perspectives, not necessarily about achieving a definitive consensus. The analysis of this decade of debates allowed us to identify that a tactical coalition in favour of passing LGPD in fact existed only at a specific moment, permeated by an extremely favourable context. This finding does not diminish, in any way, the relevance of the active participation of multiple sectors in the process. In addition to strengthening the democratic DNA of the law, the multistakeholder character of the debates also had consequences for the content of the norm.

III.  Multistakeholderism in the Content of LGPD

How was this multistakeholder, emblematic nature of the process that culminated in the passage of LGPD reflected in its content? As LGPD is an extensive and complex law, this chapter does not intend to examine such reflection across all its provisions, choosing instead to focus on two specific aspects: first, the dispute over one of LGPD’s ‘pillars’, the lawful bases of Article 7 and, specifically, consent and legitimate interests; second, the institutional arrangement created by the law to promote, among other things, its enforcement. These aspects were chosen not only because they illustrate well the central idea of the chapter, but also because both are among the most discussed topics in the implementation of the norm since its enactment.

A.  Consent and Legitimate Interests: Forging Article 7 through Dissents and Agreements

A review of the main milestones in the process of drafting LGPD was made in the previous section of this chapter. This section now aims to highlight how two of the lawful grounds for processing personal data, consent and legitimate interests, were addressed during this process and, especially, how multistakeholderism affected this specific section of the law. The first text submitted for public consultation, still in 2010, provided in its Article 975 that the processing of personal data would rely, as a rule, on the 75 ‘Art. 9º – The processing of personal data can only occur after the free, express and informed consent of the data subject, which can be given in writing or by other means that certifies it, after prior notification to the data subject regarding the information contained in art. 11.’

obtaining of the ‘free, express and informed’ consent of the data subject, and that the other lawful grounds76 would be exceptions to this rule.77 At that time, there was no legitimate interests lawful basis and, in the contributions to the first public consultation, only one comment corresponding to the idea of legitimate interests that was later incorporated into the law could be identified.78 At this first stage, therefore, there was a focus on, and a prevalence of, the data subject’s consent as a lawful ground for the processing of personal data, an approach consistent with other domestic legislative choices, such as the Brazilian Civil Rights Framework for the Internet,79 but distant, for example, from the model adopted in Europe.80 The change in this scenario came with the presentation of the first substitute amendment by then senator Aloysio Nunes in reference to the Bills that he reported in the Senate in 2015. On that occasion, for the first time, the lawful bases for the processing of personal data were included in the form of parallel items, without preponderance of one over the other, and with the presence of the legitimate interests ground. There are indications that this very relevant movement did not occur in a vacuum: in the same period, the second public consultation on the Ministry of Justice’s draft Bill was taking place and, unlike in the first round, on that occasion there were significant contributions not only on the need to change the lawful bases regime, but also to include the option of legitimate interests and to regulate it further. Although these were different processes, at that point dialogue had already been established between the Senate and the process behind the Bill that would later be introduced in the House.81 76 In the Bill these lawful grounds were: contractual or legal obligation, data of unrestricted public access, exercise of functions proper to the powers of the state, historical, scientific or statistical research, protection of life or physical safety of the data subject or third party, when consent is not given, and exercise of the right of defence. 77 Regarding initial texts, there is also mention of the version of Bill 4060/2012, by then Congressman Milton Monti, which mentioned consent at two moments: when referring to sensitive data and when dealing with children’s personal data; and the version of Bill 330/2013, authored by then senator Antônio Carlos Valadares, which provided for the ‘prior and express consent of the data subject as a requirement for collection, in the case of sensitive data or international data interconnection carried out by a private data bank’. Aside from sensitive data, whose lawful bases are addressed by the Bill, the only other mention of lawful bases, or exemption from them, is set out in Art 4, item V, which provides for the obligation of ‘prior knowledge of the data subject, in the case of data for which express consent is not required’. 78 According to the report: ‘Finally, Nokia also stresses that it is necessary that the cases of exemption also cover cases where processing is necessary for the legitimate interests of the data controller or of the third party to whom the data is communicated, provided the fundamental rights and freedoms of the data subject are always preserved’. ‘Dados pessoais – contribuições das entidades’, Associação Brasileira de Marketing Direto, accessed 20 January 2021, www.abemd.org.br/interno/DadosPessoais_ContribuicoesdasEntidades.pdf. 
79 Whose provisions regarding the processing of personal data only provide for the lawful basis of consent, with qualifiers even more demanding than those of LGPD: Art 7, VII, IX. 80 Both Directive 95/46/EC, which regulated data protection in the bloc until 2018, and the GDPR establish a list of hierarchically equal bases for the processing of personal data. 81 One clue in this regard was the participation of then senator Aloysio Nunes, rapporteur of Bill 330/2013, in the launch event of the post-public consultation version of the Executive’s draft Bill, when he stated that the Executive’s text should be forwarded, with constitutional urgency, to the Legislative.

Thus, with the incorporation of comments that indicated the need to adapt the text to international best practices, the Executive’s version was sent to the House providing for nine hierarchically equivalent lawful bases, so that, as of May 2016, both ‘competing’ Bills had wording different from the original draft, that is, without the prevalence of consent and with the addition of legitimate interests as a lawful ground for processing. More relevant, however, was the dispute that followed this decision: as shown by an InternetLab study82 on the Ministry of Justice’s second public consultation, there was, at that time, a split in positions regarding legitimate interests, although no entity explicitly rejected the proposal as a whole. On the one hand, companies such as Claro, Vivo and Sky and business associations such as Febraban and Brasscom, among others, defended the existence of a ‘consent fatigue’, that is, a disproportionate burden placed on data subjects by the need to make an informed decision concerning the processing of their personal data. Thus, the companies pointed out that having legitimate interests as a lawful basis could facilitate processing activities in situations in which there would be no undue impact on the rights of individuals, without, however, entering into more in-depth considerations about these rights and interests of data subjects. Other contributions, such as that of academic Marcel Leonardi, highlighted the fact that the lawful basis of legitimate interests had been a part of European regulations on the matter since 1995, so that there would be a gap and a delay if Brazil chose not to follow this same path. Representatives of NGOs and academia, entities such as ITS Rio and the Research Group on Public Policies for Access to Information (GPoPAI), added to the debate, choosing to focus not on whether or not legitimate interests should be included in the legislation, but rather on the point that its eventual inclusion would need to be accompanied by a complementary article establishing parameters for its application, in order to ensure the rights of data subjects. Thus, suggestions such as the inclusion of mandatory data anonymisation as a safeguard were made.83 Such contributions reverberated in the choices made by the Ministry of Justice, since the post-public consultation draft that was sent to the House, in addition to establishing legitimate interests as a lawful basis for the processing of personal data, also created a series of paragraphs that made this provision more robust and balanced, encompassing elements such as the legitimate expectations of the data subject (Article 10, paragraph 1), the need for transparency measures and the possibility to opt out (Article 10, paragraph 2), the principle of necessity and

82 Internetlab, ‘O que está em jogo no debate sobre dados pessoais no Brasil? Relatório final sobre o debate público promovido pelo Ministério da Justiça sobre o Anteprojeto de Lei de Proteção de Dados Pessoais’. São Paulo, 2016. www.internetlab.org.br/wp-content/uploads/2016/05/reporta_apl_dados_pessoais_final.pdf. 83 ibid.

anonymisation, when compatible with the purpose of the processing (Article 10, paragraph 3), and, finally, the possibility of the ‘competent body’ requesting a data protection impact assessment.84 Looking at this moment of the 2015 public consultation, it is possible to notice, on the one hand, a movement of consensus concerning the deleterious effects of maintaining consent as the main lawful basis and of not having a more ‘flexible’ ground for processing,85 and, on the other hand, a movement of dissent between companies and the associations that represent them, on one side, and NGOs and academic entities, on the other, the latter defending the imposition of certain constraints on the application of legitimate interests. During the legislative process that followed this period, there were very few changes to the legitimate interests provision, but reports from those involved in the articulations reveal that the dispute between part of the private sector and representatives of NGOs, academia and activists over the conditions for the application of legitimate interests ran through the public hearings and rounds of discussion and reached the very last moments of the proceedings in the House of Representatives. This suggests that, although there were several mutual concessions during the process, the dispute around certain interests continued until the end. Beatriz Barbosa elaborates on this by sharing how she pressured Orlando Silva and the legislative consultant who assisted him to include a different, more protective version of the legitimate interests provision literally hours before the final report by Silva was issued and sent to the House Plenary.86 84 Dispõe sobre o tratamento de dados pessoais para a garantia do livre desenvolvimento da personalidade e da dignidade da pessoa natural, Bill No. 5276, Brasília, DF, (2016), www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=2084378. 85 In this sense, mention is made of contributions from companies, academia and NGOs: FGV Direito Rio Technology and Society Center; Marcel Leonardi; Institute of Technology and Society of Rio (ITS-Rio), Research Group on Public Policies for Access to Information (GPoPAI) from USP, Center for Information Policy Leadership, InternetLab, Claro SA, Telefônica Brasil SA (Vivo) and National Confederation of Industry (CNI). 86 Memória da LGPD – Capítulo 5, ‘Como a lei mudou desde 2010’, Associação Data Privacy de Pesquisa, accessed 20 January 2021, https://observatorioprivacidade.com.br/memoria/como-a-lei-mudou-desde-2010/. Transcript of the testimony of Beatriz Barbosa, Coordinator of Intervozes. ‘There was an article that established the lawful bases for processing personal data and that, for civil society, was a major concern, which is the legitimate interest of companies in processing that data. This has always been a concern for civil society, that this lawful basis should not be a blank check for companies to process data the way they wanted, so we wanted to put some constraints on this section. We had already gone to the negotiating table and it had not happened; I had not been able to include this because the companies had not allowed it at the negotiating table, but there were sectors of civil society very concerned. 
I remember that, half an hour before Congressman Orlando filed his text, the final version of the substitute amendment after the round of negotiations, he was in a committee discussing another topic, talking to a consultant from the House who was going to write the final wording of the substitute amendment for him. I arrived and said to him: “Orlando, this will not work, this section cannot pass like this; if it passes like this, civil society will criticise your report and it will be very bad for it to arrive at the plenary with strong criticism from civil society.” He said: “All right, how do you want it?” Then I took a little piece of paper from the pad I had in my bag, wrote it out, highlighted the two passages that needed to be included and handed it to Orlando. He took it and gave it to the consultant, who made a not very happy face. He made the inclusion at our request and that was one of the parts of the bill that survived this whole process.’

This chapter does not intend to carry out an exercise in ‘reverse futurology’ and predict whether the Ministry of Justice draft Bill, or the Bills that originated in the House and the Senate, would have reached the same result had it not been for the public consultations, especially that of 2015, and the multistakeholder debates that took place throughout the legislative process. However, it is undeniable, and even recognised by the parties involved, that it was this dynamic of multiple interests that informed the non-technical actors responsible for conducting the process and resulted in a ‘more balanced’ version of the text. The dispute over the lawful bases of consent and legitimate interests demonstrates: (i) the strength of a consensus reached between traditionally antagonistic sectors on the need to overcome the consent hierarchy; (ii) the existence of disagreements over the content of the legitimate interests provision, ie whether it should be more or less ‘protective’ of the data subject’s interests; and (iii) the proposal of an alternative, mainly by academia, that would guarantee the inclusion of a legitimate interests provision while also creating parameters for its application. It also reinforces the idea that, despite the strong multistakeholderism throughout the entire trajectory of LGPD, the tactical coalition itself crystallised only at a specific moment, and that, until the end of the proceedings in the House and afterwards, each sector kept trying to assert its own particular interests.

B. Institutional Arrangement for the Enforcement of LGPD

i.  ANPD does not have a Monopoly: The Process of Putting LGPD into Practice Involves Multiple Parties

During the first public consultation, promoted by the Ministry of Justice in 2010 and 2011, there were disagreements about the need for a data protection authority,87 but also mistrust around the idea of the law encouraging the private sector to collaborate with the regulation process (eg through the creation of codes of conduct).88 In the second public consultation (with a more mature debate), the private sector, until then the biggest opponent of the creation of a National Data Protection

87 According to Rafael Zanatta, Brazilian legal culture and institutional arrangement will encourage a process of competition among institutions in the implementation of LGPD, as explained in item 12 of the episode ‘The saga of the Authority’ of the History of LGPD. Available at: https://observatorioprivacidade.com.br/memoria/2019-a-saga-da-autoridade/. 88 In the chapter entitled ‘The saga of the Authority’ in the History of LGPD, Doneda adds, in item 15: ‘Certainly it is Orlando Silva’s insight to realise that it was possible to do something that did not seem possible: get everyone arguing at the table to arrive at a final text, legitimised by several sectors, to the point of passing unanimously in both House and Senate. Perhaps, individually, it is by far the greatest contribution to having it passed at that time’. Available at: https://observatorioprivacidade.com.br/memoria/2019-a-saga-da-autoridade/.

Authority, and NGOs and research centres,89 until then afraid of a possible ‘privatisation’ of the regulation process,90 began to ‘trim’ these edges. The final version of LGPD relies on what is called co-regulation,91 precisely a strategy that represents a middle ground between purely state control and entirely private self-regulation. Through this choice, the law recognises the complexity of the regulatory task in question and distributes it among several players. In this sense, LGPD establishes that:
a) ANPD must cooperate with other regulatory bodies (Article 55-J, items IX, XXI, XXII, XXIII and paragraphs 3 and 4);
b) the traditional system of protection of diffuse and collective rights in Brazil may also be activated, both in the administrative and judicial spheres (respectively, Procons and the National Consumer Protection Secretariat, and public and collective civil actions) (Article 22);92
c) data processing agents, within the private and public sectors, are encouraged to organise themselves through codes of conduct (Articles 32, 46, 49);
d) seals, certifications and other private contractual instruments are mechanisms for international transfers (Article 33, II, d; and Article 35), so that this self-organisation of private agents can be rewarded by ANPD to unlock cross-border data flows (Article 55-J, XIV);
e) it is the duty of ANPD to carry out public consultations, as well as regulatory impact assessments, so that the parties affected by the exercise of its regulatory power are effectively heard (Article 55-J, paragraph 2).
In addition, it is important to note that the final version of LGPD created the National Data Protection Council (CNPD). In a multistakeholder format,93 with

89 See the contribution made by the Research Group on Public Policies for Access to Information (GPoPAI) to the second Public Consultation on the Draft, promoted by the Ministry of Justice in 2015. Available at: http://pensando.mj.gov.br/dadospessoais/wp-content/uploads/sites/3/2015/07/07c449c076fabbb00f3d3b850e063417.pdf. 90 Julie Cohen, Between truth and power: The legal constructions of informational capitalism (Oxford University Press, 2019); Julie Cohen, Configuring the networked self: Law, code, and the play of everyday practice (Yale University Press, 2012). In the report on the second public consultation prepared by InternetLab, it is worth noting, for example, the dissent regarding the operationalisation of data processing and control of personal data by citizens, seen by the private sector as a possible bureaucratic obstacle, and by the third sector as an effective instrument of empowerment. Internetlab, ‘O que está em jogo no debate sobre dados pessoais no Brasil? Relatório final sobre o debate público promovido pelo Ministério da Justiça sobre o Anteprojeto de Lei de Proteção de Dados Pessoais’ (São Paulo, 2016). 91 Miriam Wimmer, ‘Os desafios do enforcement na LGPD: fiscalização, aplicação de sanções administrativas e coordenação intergovernamental’, in Tratado de Proteção de Dados Pessoais, eds. Bruno Bioni, Danilo Doneda, Laura Schertel Mendes, Otávio Luiz Rodrigues Junior, Ingo Wolfgang Sarlet (Rio de Janeiro: Forense, 2021); Rafael A. F. Zanatta, ‘A Proteção de Dados Pessoais entre Leis, Códigos e Programação: os limites do Marco Civil da Internet’, in Direito e Internet III: Marco Civil da Internet, eds. Newton de Lucca, Adalberto Simão Filho, Cíntia Rosa Pereira de Lima (São Paulo: Quartier Latin, 2015), 447–470. 92 Hunter Dorwart, Mariana Rielli and Rafael Zanatta, ‘Concerning the institutional arrangements in Brazil for the defense of collective and diffuse rights’, Future of Privacy Forum, 21 December 2021, https://fpf.org/blog/the-complex-landscape-of-enforcing-the-lgpd-in-brazil-public-prosecutors-courts-and-the-national-system-of-consumer-defense/. 93 The Council is composed of government representatives (Casa Civil, Ministries, Institutional Security Office); civil society (organisations with proven experience in the protection of personal data);

a seat reserved for CGI.br, CNPD is an advisory body to the ANPD94 with the power to propose guidelines and carry out studies and public hearings, as well as to prepare annual reports on the execution of the National Data Protection Policy. To summarise, LGPD’s institutional arrangement does not ‘bet’ on a supervisory system in which a single authority centralises all actions. On the contrary, it establishes a networked governance system in which competences are distributed among a series of players, both private and public. This is precisely one of the definitions of multistakeholderism: a model in which this web of players stands in contrast to a regulatory strategy monopolised by the state.

IV. Conclusion

Comprehensive data protection laws cover a wide range of interests and must find a relative consensus on what the appropriate information flow is for multiple stakeholders. LGPD has, among all its merits, the fact that this reality is directly reflected in both its form (policy making) and its normative content. The articulations that resulted in the Brazilian Data Protection Law present a very rich case study of correlations of forces. In particular, it was possible to observe the active participation of different sectors that, while representing their own interests, also contributed to the elaboration of a balanced text and even to the formation of a tactical coalition that pushed the Senate to pass the law, a result of the confluence of efforts by companies, business associations, NGOs, academic entities and public bodies. The content of LGPD is also representative of its multistakeholder character. In this sense, this chapter explored disputes over the content of the lawful bases for the processing of personal data (Article 7 in the final version of the law), especially consent and legitimate interests, and addressed what is perhaps the greatest example of this characteristic in the normative content of the law: the design of inter-institutional cooperation, involving both the public and private sectors, for the interpretation and enforcement of LGPD. Multistakeholderism in LGPD has no end date: a norm that was developed by many hands, forged through a long process of dissent and eventual consensus, will also have its practical application built through dialogue between different players: businesses, academia, NGOs, activists, public bodies. This is an arrangement prescribed and encouraged by LGPD itself.

the academic community (scientific and technological institutions); the business sector (related to the area of processing personal data and union entities); and a seat reserved for the Brazilian Internet Steering Committee. 94 ‘Autoridade Nacional de Proteção de Dados’, Gov.br, accessed 6 January 2021, www.gov.br/anpd/pt-br.


References

Argentina. ‘Ley 25.326/2000 – Ley de Protección de los Datos Personales. Disposiciones Generales. Principios generales relativos a la protección de datos. Derechos de los titulares de datos. Usuarios y responsables de archivos, registros y bancos de datos. Control. Sanciones. Acción de protección de los datos personales’. Accessed 20 January 2021. http://servicios.infoleg.gob.ar/infolegInternet/anexos/60000-64999/64790/norma.htm.
Associação Brasileira de Marketing Direto. ‘Dados pessoais – contribuições das entidades’. Accessed 21 January 2021. www.abemd.org.br/interno/DadosPessoais_ContribuicoesdasEntidades.pdf.
Associação Data Privacy Brasil de Pesquisa. Observatório da Privacidade. Memória LGPD. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memorias/.
——. Memória da LGPD – Capítulo ‘O tema entra em pauta’ – item 1. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2010-2015-o-tema-entra-em-pauta/.
——. Memória da LGPD – Capítulo ‘O tema entra em pauta’ – item 6. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2010-2015-o-tema-entra-em-pauta/.
——. Memória da LGPD – Capítulo ‘O anteprojeto chega à Câmara’ – item 2. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2016-2017-o-anteprojeto-chega-a-camara/.
——. Memória da LGPD – Capítulo ‘O anteprojeto chega à Câmara’ – item 14. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2016-2017-o-anteprojeto-chega-a-camara/.
——. Memória da LGPD – Capítulo ‘O anteprojeto chega à Câmara’ – item 17. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2016-2017-o-anteprojeto-chega-a-camara/.
——. Memória da LGPD – Capítulo ‘Uma Conjunção Astral’ – item 19. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2018-uma-conjuncao-astral/.
——. Memória da LGPD – Capítulo ‘Uma Conjunção Astral’ – item 11. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2018-uma-conjuncao-astral/.
——. Memória da LGPD – Capítulo ‘Uma Conjunção Astral’ – item 12. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2018-uma-conjuncao-astral/.
——. Memória da LGPD – Capítulo ‘Uma Conjunção Astral’ – item 14. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2018-uma-conjuncao-astral/.
——. Memória da LGPD – Capítulo ‘Uma Conjunção Astral’ – item 22. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2018-uma-conjuncao-astral/.
——. Memória da LGPD – Capítulo ‘A saga da Autoridade’ – item 3. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2019-a-saga-da-autoridade/.
——. Memória da LGPD – Capítulo ‘A saga da Autoridade’ – item 12. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2019-a-saga-da-autoridade/.
——. Memória da LGPD – Capítulo ‘A saga da Autoridade’ – item 15. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/2019-a-saga-da-autoridade/.
——. Memória da LGPD – Capítulo ‘Como a lei mudou desde 2010’. Accessed 21 January 2021. https://observatorioprivacidade.com.br/memoria/como-a-lei-mudou-desde-2010/.

Buchanan, Allen, and Robert O. Keohane. ‘The Legitimacy of Global Governance Institutions.’ Ethics & International Affairs 20, no. 4 (2006): 405–37. doi:10.1111/j.1747-7093.2006.00043.x.
Brasil. Dispõe sobre a proteção do consumidor e dá outras providências. Act No. 8.078, Brasília, DF, (1990). www.planalto.gov.br/ccivil_03/leis/L8078compilado.htm.
——. Regula o processo administrativo no âmbito da Administração Pública Federal. Act No. 9.784, Brasília, DF, (1999). www.planalto.gov.br/ccivil_03/leis/l9784.htm.
——. Disciplina a formação e consulta a bancos de dados com informações de adimplemento, de pessoas naturais ou de pessoas jurídicas, para formação de histórico de crédito. Act No. 12.414, Brasília, DF, (2011). www.planalto.gov.br/ccivil_03/_Ato2011-2014/2011/Lei/L12414.htm.
——. Regula o acesso a informações previsto no inciso XXXIII do art. 5º, no inciso II do § 3º do art. 37 e no § 2º do art. 216 da Constituição Federal; altera a Lei nº 8.112, de 11 de dezembro de 1990; revoga a Lei nº 11.111, de 5 de maio de 2005, e dispositivos da Lei nº 8.159, de 8 de janeiro de 1991; e dá outras providências. Act No. 12.527, Brasília, DF, (2011). www.planalto.gov.br/ccivil_03/_ato2011-2014/2011/lei/l12527.htm.
——. Estabelece princípios, garantias, direitos e deveres para o uso da Internet no Brasil. Act No. 12.965, Brasília, DF, (2014). www.planalto.gov.br/ccivil_03/_ato2011-2014/2014/lei/l12965.htm.
——. Lei Geral de Proteção de Dados Pessoais (LGPD). Act No. 13.709, Brasília, DF, (2018). www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/L13709.htm.
——. Lei Complementar nº 166, de 08 de abril de 2019. Altera a Lei Complementar nº 105, de 10 de janeiro de 2001, e a Lei nº 12.414, de 9 de junho de 2011, para dispor sobre os cadastros positivos de crédito e regular a responsabilidade civil dos operadores. Act No. 166, Brasília, DF, (2019). www.planalto.gov.br/ccivil_03/leis/lcp/Lcp166.htm.
Brasscom. ‘Manifesto pela aprovação da lei de proteção de dados pessoais’. Accessed 20 January 2021. https://brasscom.org.br/manifesto-pela-aprovacao-da-lei-de-protecao-de-dados-pessoais/.
——. ‘Manifesto pela criação imediata da Autoridade Nacional de Proteção de Dados Pessoais – ANPD’. Accessed 20 January 2021. https://brasscom.org.br/manifesto-pela-criacao-imediata-da-autoridade-nacional-de-protecao-de-dados-pessoais-anpd/.
——. ‘Manifesto sobre a Futura Lei de Proteção de Dados Pessoais’. Accessed 20 January 2021. https://brasscom.org.br/manifesto-sobre-a-futura-lei-de-protecao-de-dados-pessoais/.
Câmara dos Deputados. ‘Comissão Especial destinada a proferir parecer ao Projeto de Lei nº 4.060, de 2012, do Dep. Milton Monti, que dispõe sobre o tratamento de dados pessoais e dá outras providências, e apensos. Comissão de Proteção de Dados. Roteiro de trabalho. Proposta do Relator – Deputado Orlando Silva’. Accessed 20 January 2021. www2.camara.leg.br/atividade-legislativa/comissoes/comissoes-temporarias/especiais/55a-legislatura/pl-4060-12-tratamento-e-protecao-de-dados-pessoais/documentos/outros-documentos/roteiro-de-trabalho-apresentado-em-22-11-2016.
——. ‘Apresentações de palestrantes. Arquivos apresentados pelos palestrantes nas Audiências e Eventos da Comissão Especial’. Comissão de Proteção de Dados. Accessed 20 January 2021. www2.camara.leg.br/atividade-legislativa/comissoes/comissoes-temporarias/especiais/55a-legislatura/pl-4060-12-tratamento-e-protecao-de-dados-pessoais/documentos/audiencias-e-eventos.
——. Assegura aos cidadãos acesso às informações sobre sua pessoa constantes de bancos de dados e dá outras providências. Bill No. 2796/1980, Brasília, DF, (2014). www.camara.leg.br/proposicoesWeb/prop_mostrarintegra;jsessionid=FBF15270DD557906FEB1829EFEA68AED.proposicoesWeb1?codteor=1172300&filename=Avulso+-PL+2796/1980.

——. Projeto de Lei de Conversão 07/2019. Altera a Lei nº 13.709, de 14 de agosto de 2018, para dispor sobre a proteção de dados pessoais e para criar a Autoridade Nacional de Proteção de Dados, e dá outras providências. Bill No. 07/2019, Brasília, DF, (2019). www.camara.leg.br/propostas-legislativas/2201766.
Coalizão Direitos na Rede. Accessed 20 January 2021. https://direitosnarede.org.br/.
Cohen, Julie. Between truth and power: The legal constructions of informational capitalism. Oxford University Press, 2019.
——. Configuring the networked self: Law, code, and the play of everyday practice. Yale University Press, 2012.
Comitê Gestor de Internet no Brasil (CGI.br). ‘Resolução CGI.br/RES/2009/003/P. Princípios para a governança e uso da internet no Brasil’. Accessed 20 January 2021. www.cgi.br/resolucoes/documento/2009/003/.
——. ‘Seminário de Proteção à Privacidade e aos Dados Pessoais’. Accessed 20 January 2021. https://seminarioprivacidade.cgi.br/.
——. ‘[IX Seminário de Privacidade] Mesa de Abertura’. 2018. (39m05s). Accessed 20 January 2021. www.youtube.com/watch?v=GKMul1c4YYU&list=PLQq8-9yVHyOZVGYJeegT8ImHrWOPIiYh&index=1&t=6s&ab_channel=NICbrvideos.
Congresso Nacional. ‘Entenda a tramitação da Medida Provisória’. Accessed 20 January 2021. www.congressonacional.leg.br/materias/medidas-provisorias/entenda-a-tramitacao-da-medida-provisoria.
——. Medida Provisória 869/2018 (Proteção de dados pessoais). Altera a Lei nº 13.709, de 14 de agosto de 2018, para dispor sobre a proteção de dados pessoais e para criar a Autoridade Nacional de Proteção de Dados, e dá outras providências. Bill No. 869, Brasília, DF, (2018). www.congressonacional.leg.br/materias/medidas-provisorias/-/mpv/135062.
——. Dispõe sobre o tratamento de dados pessoais, e dá outras providências. Bill No. 4060, Brasília, DF, (2012). www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=548066.
——. Dispõe sobre o tratamento de dados pessoais para a garantia do livre desenvolvimento da personalidade e da dignidade da pessoa natural. Bill No. 5276, Brasília, DF, (2016). www.camara.leg.br/proposicoesWeb/fichadetramitacao?idProposicao=2084378.
Cruz, Francisco Carvalho de Brito. ‘Direito, democracia e cultura digital: a experiência de elaboração legislativa do Marco Civil da Internet’. Master’s thesis, University of São Paulo, 2015.
DeNardis, Laura. ‘Introduction: internet governance as an object of research inquiry’. In Researching internet governance: methods, frameworks, futures. MIT Press, 2020.
Doneda, Danilo. ‘Panorama histórico da proteção de dados pessoais’. In Tratado de Proteção de Dados Pessoais, edited by Bruno Bioni, Danilo Doneda, Laura Schertel Mendes, Otávio Luiz Rodrigues Junior, Ingo Wolfgang Sarlet. Rio de Janeiro: Forense, 2021.
Dorwart, Hunter, Mariana Rielli and Rafael Zanatta. ‘Concerning the institutional arrangements in Brazil for the defense of collective and diffuse rights’. Future of Privacy Forum, 21 December 2021. https://fpf.org/blog/the-complex-landscape-of-enforcing-the-lgpd-in-brazil-public-prosecutors-courts-and-the-national-system-of-consumer-defense/.
Drake, William J. ‘Review of Multistakeholderism: External Limitations and Internal Limits.’ Multistakeholder Internet Dialogue 2 (2011): 68–72.

European Data Protection Supervisor. ‘The history of the General Data Protection Regulation’. Accessed 20 January 2021. https://edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en.
Gatto, Raquel. A perspectiva contratualista na construção do consenso da sociedade na internet. PhD dissertation, University of São Paulo, 2016.
Gatto, Raquel and Getschko, Demi. ‘Governança da Internet: conceitos, evolução e abrangência.’ Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos 27 (2009): 61–97. http://ce-resd.facom.ufms.br/sbrc/2009/081.pdf.
Ghez, Jeremy. ‘Alliances in the 21st Century: Implications for the US-European partnership’. Rand Europe, United Kingdom, 2011. www.rand.org/content/dam/rand/pubs/occasional_papers/2011/RAND_OP340.pdf.
Gov.br. ‘Autoridade Nacional de Proteção de Dados’. Accessed 6 January 2021. www.gov.br/anpd/pt-br.
Internetlab. ‘O que está em jogo no debate sobre dados pessoais no Brasil? Relatório final sobre o debate público promovido pelo Ministério da Justiça sobre o Anteprojeto de Lei de Proteção de Dados Pessoais’. São Paulo, 2016. www.internetlab.org.br/wp-content/uploads/2016/05/reporta_apl_dados_pessoais_final.pdf.
Levinson, Nanette and Marzouki, Meryem. ‘International Organizations and Global Internet Governance: Interorganizational Architecture’. In The Turn to Infrastructure in Internet Governance, edited by Laura DeNardis, Francesca Musiani, Nanette S. Levinson and Derrick L. Cogburn, 47–71. New York: Palgrave Macmillan, 2016.
Machado Meyer. ‘Tabela comparativa – Lei Geral de Proteção de Dados’. Accessed 21 January 2021. www.machadomeyer.com.br/images/Alteracoes-na-Lei-Geral-de-Protecao-de-Dados.pdf.
Mayer, Jörg. ‘Policy space: what, for what, and where?’ Development Policy Review 27, no. 4 (2009): 373–395. www.wto.org/english/res_e/reser_e/gtdw_e/wkshop08_e/mayer_e.pdf.
Ministério da Cultura. ‘Dados pessoais’. Accessed 20 January 2021. http://culturadigital.br/dadospessoais/.
——. ‘PL de proteção de dados’. Accessed 20 January 2021. http://culturadigital.br/dadospessoais/files/2010/11/PL-Protecao-de-Dados.pdf.
Ministério da Justiça. ‘Anteprojeto de Lei de Proteção de Dados’. Accessed 20 January 2021. http://culturadigital.br/dadospessoais/files/2010/11/PL-Protecao-de-Dados.pdf.
——. ‘MJ apresenta nova versão do Anteprojeto de Lei de Proteção de Dados Pessoais’. Pensando o Direito, 21 October 2015. http://pensando.mj.gov.br/2015/10/21/mj-apresenta-nova-versao-do-anteprojeto-de-lei-de-protecao-de-dados-pessoais/.
——. ‘Proteção de Dados Pessoais’. Accessed 21 January 2021. http://pensando.mj.gov.br/dadospessoais/.
Nissenbaum, Helen. Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press, 2009.
Polonsky, Michael, Kathryn Lefroy, Romana Garman and Norman Chia. ‘Strategic and Tactical Alliances: Do Environmental Non-Profits Manage Them Differently?’ Australasian Marketing Journal 19 (2011): 43–51. doi:10.1016/j.ausmj.2010.11.006.
Parlamento Europeu. ‘Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)’. Accessed 21 January 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679.

——. ‘Directiva 95/46/CE do Parlamento Europeu e do Conselho, de 24 de Outubro de 1995, relativa à protecção das pessoas singulares no que diz respeito ao tratamento de dados pessoais e à livre circulação desses dados’. Accessed 20 January 2021. https://eur-lex.europa.eu/legal-content/PT/TXT/?uri=celex%3A31995L0046.
Projeto de Lei do Senado n° 330, de 2013. Dispõe sobre a proteção, o tratamento e o uso dos dados pessoais, e dá outras providências. Bill No. 330, Brasília, DF, (2013). www25.senado.leg.br/web/atividade/materias/-/materia/113947.
Viana, Marcelo. ‘Um novo ‘1984’? O projeto RENAPE e as discussões tecnopolíticas no campo da informática brasileira durante os governos militares na década de 1970’. Oficina do Historiador, Special supplement, I EPHIS/PUCRS (May 2014): 1148–1171. http://revistaseletronicas.pucrs.br/ojs/index.php/oficinadohistoriador/article/view/18998/12057.
Wimmer, Miriam. ‘Os desafios do enforcement na LGPD: fiscalização, aplicação de sanções administrativas e coordenação intergovernamental’. In Tratado de Proteção de Dados Pessoais, edited by Bruno Bioni, Danilo Doneda, Laura Schertel Mendes, Otávio Luiz Rodrigues Junior, Ingo Wolfgang Sarlet. Rio de Janeiro: Forense, 2021.
Zanatta, Rafael A. F. ‘A Proteção de Dados Pessoais entre Leis, Códigos e Programação: os limites do Marco Civil da Internet’. In Direito e Internet III: Marco Civil da Internet, edited by Newton de Lucca, Adalberto Simão Filho, Cíntia Rosa Pereira de Lima, 447–470. São Paulo: Quartier Latin, 2015.


5

The Dual Function of Explanations: Why Computing Explanations is of Value

NIKO TSAKALAKIS,α SOPHIE STALLA-BOURDILLON,α LAURA CARMICHAEL,α DONG HUYNH,β LUC MOREAUβ AND AYAH HELALβ

Abstract

The increasing dependence of decision-making on some level of automation has naturally led to discussions about the trustworthiness of such automation, calls for transparent automated decision-making and the emergence of ‘explainable Artificial Intelligence’ (XAI). Although XAI research has produced a number of taxonomies for the explanation of Artificial Intelligence (AI) and Machine Learning (ML) models, the legal debate has so far mainly focused on whether a ‘right to explanation’ exists in the GDPR. Lately, a growing body of interdisciplinary literature has concentrated on the goals and substance of explanations produced for automated decision-making, with a view to clarifying their role and improving their value against unfairness, discrimination and opacity for the purposes of ensuring compliance with Article 22 of the GDPR. At the same time, several researchers have warned that transparency of the algorithmic processes is in itself not enough and that tools for better and easier assessment and review of the whole socio-technical system that includes automated decision-making are needed. In this chapter, we suggest that generating computed explanations would be useful for most of the obligations set forth by the GDPR and can assist towards a holistic compliance strategy when used as detective controls. Computing explanations to support the detection of data protection breaches facilitates the monitoring and auditing of automated decision-making pipelines. Carefully constructed explanations can

α Southampton Law School, University of Southampton, University Road, Southampton SO17 1BJ, UK. β Department of Informatics, King’s College London, Strand, London WC2R 2LS, UK.

empower both the data controller and external recipients such as data subjects and regulators, and should be seen as key controls in order to meet accountability and data protection-by-design obligations. To illustrate this claim, this chapter presents the work undertaken by the Provenance-driven and Legally-grounded Explanations for Automated Decisions (PLEAD) project towards ‘explainable-by-design’ socio-technical systems. PLEAD acknowledges the dual function of explanations as internal detective controls (to benefit data controllers) and external detective controls (to benefit data subjects) and leverages provenance-based technology to compute explanations and support the deployment of systematic compliance strategies.

Keywords

Automated decision-making; data protection; accountability; machine learning; transparency; explainability; interpretability; counterfactuals; artificial intelligence; GDPR.

I. Introduction

Impactful decision-making is increasingly supported by AI and ML systems.1 Put simply, ML-based AI techniques leverage historical training data to create (mathematical) models from discovered patterns and correlations in the data. Such models are then applied to newly inputted data so that the algorithms can generate predictions about future behaviour.2 In the same way that human decision-making is prone to error, bias and prejudice,3 there are various well-known examples where automated decision-making, despite its various benefits,4 has been found to be unreliable, defective, and even discriminatory against those subject to such decisions.5 The need for tools and frameworks to help understand and interpret AI

1 AI is defined by the ICO as ‘an umbrella term for a range of algorithm-based technologies that often try to mimic human thought to solve complex tasks’: ICO, Explaining decisions made with AI – Part 1: The basics of explaining AI (2019), 4, https://ico.org.uk/media/2616434/explaining-ai-decisions-part-1.pdf. It is important to note that, in the European Union, Article 22 of the GDPR places a general prohibition on solely automated decision-making that results in serious impactful effects, ie wholly accomplished by an AI system without a human in the loop. The exceptions are discussed below. 2 ICO, Explaining decisions made with AI – Part 1: The basics of explaining AI. 3 For further information see Daniel Brown, ‘2 – The limits of human and automated decision-making,’ in Mastering Information Retrieval and Probabilistic Decision Intelligence Technology, ed. Daniel Brown (Chandos Publishing, 2004); Ari Ezra Waldman, ‘Power, Process, and Automated Decision-Making,’ Fordham Law Review 88 (2019). 4 Eg: ‘improved efficiencies’ and ‘resource savings’: Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 5. 5 For examples see S. Lowry and G. Macpherson, ‘A blot on the profession,’ British medical journal (Clinical research ed.) 296, no. 6623 (1988), https://doi.org/10.1136/bmj.296.6623.657,

behaviour has been recognised by those involved in the development of AI systems since the 1960s and 1970s.6 Paradigms underlying this need led to the emergence of ‘Explainable AI’ (XAI).7 The purpose of XAI research is to contribute towards the explainability of AI models, by providing explanations of the steps that the AI system took to arrive at a decision.8 However, it is doubtful whether explainability approaches alone are sufficient to provide understanding to the recipients of algorithmic decisions, which are usually opaque.9 Research commissioned by IBM10 on the adoption and exploration of AI by businesses in 2019 found that 83% of the (4,514) senior business decision-makers11 who responded to their survey agreed: ‘[b]eing able to explain how AI arrived at a decision is universally important’.12 But the information expected by applicable legal rules to adequately explain algorithmic decisions goes beyond the technical operation of the AI system. This can be aptly demonstrated in the case of data protection law. Article 22 of the EU General Data Protection Regulation (GDPR) has been the focus of attention, leading to debates about the existence of a ‘right to explanation’.13

https://pubmed.ncbi.nlm.nih.gov/3128356, www.ncbi.nlm.nih.gov/pmc/articles/PMC2545288/; Ian Sample, ‘AI watchdog needed to regulate automated decision-making, say experts,’ The Guardian, 27 January 2017, www.theguardian.com/technology/2017/jan/27/ai-artificial-intelligence-watchdog-needed-to-prevent-discriminatory-automated-decisions; Bill Turque, ‘‘Creative … motivating’ and fired,’ The Washington Post, 6 March 2012. 6 Andreas Holzinger et al., ‘Causability and explainability of artificial intelligence in medicine,’ WIREs Data Mining and Knowledge Discovery 9, no. 4 (2019), https://doi.org/10.1002/widm.1312; Alun Preece, ‘Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges,’ Intelligent Systems in Accounting, Finance and Management 25 (2018). 7 Alejandro Barredo Arrieta et al., ‘Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,’ Information Fusion 58 (2020), https://doi.org/10.1016/j.inffus.2019.12.012. 8 Swati Sachan et al., ‘An explainable AI decision-support-system to automate loan underwriting,’ Expert Systems with Applications 144 (2020): 2, https://doi.org/10.1016/j.eswa.2019.113100, www.sciencedirect.com/science/article/pii/S0957417419308176. Explainability is often used interchangeably with the concept of interpretability, ‘the understanding of working logic of an AI-based decision-making system’: ibid. 9 Jenna Burrell, ‘How the machine ‘thinks’: Understanding opacity in machine learning algorithms,’ Big Data & Society 3, no. 1 (2016), https://doi.org/10.1177/2053951715622512. 10 IBM and Morning Consult, From Roadblock to Scale: The Global Sprint Towards AI (2020), http://filecache.mediaroom.com/mr5mr_ibmnews/183710/Roadblock-to-Scale-exec-summary.pdf. 11 IBM and Morning Consult, From Roadblock to Scale: The Global Sprint Towards AI, 5.
12 ibid, 4; see also Greg Satell and Josh Sutton, ‘We Need AI That Is Explainable, Auditable, and Transparent,’ Harvard Business Review, updated 28 October 2019, accessed 5 September 2020, https://hbr.org/2019/10/we-need-ai-that-is-explainable-auditable-and-transparent: ‘What’s far more insidious and pervasive are the more subtle glitches that go unnoticed, but have very real effects on people’s lives. […] Once you get on the wrong side of an algorithm, your life immediately becomes more difficult. Unable to get into a good school or to get a job, you earn less money and live in a worse neighbourhood. Those facts get fed into new algorithms and your situation degrades even further. Each step of your descent is documented, measured, and evaluated.’; Art 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. 13 For instance, see: Bryce Goodman and Seth Flaxman, ‘European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”,’ AI Magazine 38, no. 3 (2017); Andrew D Selbst and Julia Powles, ‘Meaningful information and the right to explanation,’ International Data Privacy Law 7, no. 4 (2017), https://doi.org/10.1093/idpl/ipx022; Lilian Edwards and Michael Veale, ‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You Are Looking For,’ Duke Law & Technology Review 16 (2017), https://doi.org/10.2139/ssrn.2972855;
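To make the train-then-predict pattern described at the opening of this introduction concrete, the following minimal sketch uses Python and scikit-learn. It is an illustration only, not part of PLEAD or of any system discussed in this chapter: the features, figures and choice of model are all hypothetical.

```python
# A minimal sketch of ML-based decision-making, assuming hypothetical
# loan data: patterns in historical cases are distilled into a model,
# which is then applied to newly inputted data to generate a prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical training data: one row per past applicant
# (income in GBP thousands, existing debt in GBP thousands, years employed).
X_train = np.array([
    [52, 4, 6],
    [18, 9, 1],
    [75, 2, 10],
    [23, 12, 2],
])
y_train = np.array([1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

# Discovered patterns and correlations become a (mathematical) model.
model = LogisticRegression().fit(X_train, y_train)

# The model is applied to newly inputted data (a fresh application)
# to generate a prediction about future behaviour.
new_applicant = np.array([[40, 5, 3]])
print(model.predict(new_applicant))        # predicted class, eg array([1])
print(model.predict_proba(new_applicant))  # associated probabilities
```

Nothing in this sketch explains why the prediction was made; that gap is precisely what the rest of this chapter addresses.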

Whether such a right exists or not, explanations for data protection purposes will need to differ from approaches in XAI, where the focus often is on explaining the inner workings of the ‘black box’.14 For data protection purposes, explanations need to be relevant to their recipients (usually the data subjects). Achieving this will often require information beyond the behaviour of the ‘black box’15 and an understanding of how the decision will impact the rights and freedoms of the individuals. Selbst and Powles, affirming that Article 22 should give rise to a right to explanations, consider that the role of explanations is to provide information that is meaningful for the exercise of data subjects’ rights.16 Edwards and Veale note that XAI-generated explanations are usually too technical and therefore not necessarily useful, but that ‘subject-centric’ explanations, which focus on particular regions of a model and create explanations around the ‘black box’ rather than opening it, can be effective.17 Such explanations, for example, based on ‘causal chains’ allowing the querying of specific events about the algorithmic process, are currently in development.18 The limits of XAI approaches to explanations have been noted in the literature. XAI explanations are often ‘not tailored to individuals’ understanding and comprehensibility’.19 Explanations that attempt to decompose ML models focus on the model rather than on the recipient of the explanation, are unlikely to provide information that is meaningful for the data subjects, and run the risk of infringing trade secrets. In addition, explanations that interpret the ‘logic’20 of the system focus on the processing performed inside the ‘black box’,21 overlooking the impact of other factors such as the training data or the deployment context of the ML model.22 Holding a complex socio-technical process to

Sandra Wachter, Brent Mittelstadt and Luciano Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation,’ International Data Privacy Law 7, no. 2 (2017), https://doi.org/10.1093/idpl/ipx005. 14 Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679; Algorithm Watch and Bertelsmann Stiftung, Automating Society: Taking Stock of Automated Decision Making in the EU (2019), 8, https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf. 15 Jennifer Cobbe and Jatinder Singh, ‘Reviewable Automated Decision-Making,’ Computer Law & Security Review 39 (2020), https://doi.org/10.1016/j.clsr.2020.105475. 16 Selbst and Powles, ‘Meaningful information and the right to explanation.’ 17 Edwards and Veale, ‘Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For,’ 81. 18 Anna Collins, Daniele Magazzeni and Simon Parsons, ‘Towards an Argumentation-Based Approach to Explainable Planning’ (paper presented at the 2nd ICAPS Workshop on Explainable Planning, Berkeley, CA, 2019). 19 Gianclaudio Malgieri and Giovanni Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation,’ International Data Privacy Law 7, no. 4 (2017): 245, https://doi.org/10.1093/idpl/ipx019. 20 GDPR Art 14(2)(f). 21 Edwards and Veale, ‘Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For.’ 22 Cobbe and Singh, ‘Reviewable Automated Decision-Making.’

account requires a holistic view of events that happened before and after the application of the algorithmic models.23 Notwithstanding explanations for the mechanics within ‘black boxes’, explanations that are based on a holistic view of the decision-making process have mainly been considered for their potential to empower data subjects in the exercise of their rights. This approach is problematic for two reasons. First, because of the narrow scope of some key articles such as Article 22: Article 22 only governs decisions that have been taken ‘solely’ by automated means, so the value of explainability for partially automated decision-making has been underexplored. Second, because this rights-based approach overlooks the potential of explanations, which should also be conceived as risk mitigation measures or controls that ought to be implemented as part of a data protection by design approach. The need to develop a systematic and iterative approach in order to achieve GDPR compliance has been discussed extensively in the literature, but mostly as a high-level compliance goal24 or strategy.25 Suggestions to operationalise compliance have had less coverage,26 leaving decisions about implementation to the industry. A systematic strategy for demonstrating GDPR compliance, stemming from the principle of accountability27 combined with the requirement of data protection by design,28 should require the implementation of a variety of controls: preventive, detective and corrective controls. Preventive controls refer to measures that aim to prevent incidents, ie ex ante.29 Detective controls, on the other hand, aim to detect incidents that occur during the runtime of a process.30 Corrective

23 Jennifer Cobbe, ‘Administrative law and the machines of government: judicial review of automated public-sector decision-making,’ Legal Studies 39, no. 4 (2019), https://doi.org/10.1017/lst.2019.9. 24 See, eg: Y. Martin and A. Kung, ‘Methods and Tools for GDPR Compliance Through Privacy and Data Protection Engineering’ (paper presented at the 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 23–27 April 2018); Clément Labadie and Christine Legner, ‘Understanding Data Protection Regulations from a Data Management Perspective: A Capability-Based Approach to EU-GDPR’ (paper presented at the 14th International Conference on Wirtschaftsinformatik, Siegen, Germany, 24–27 February 2019). 25 For example, Kristian Beckers et al., A Problem-Based Approach for Computer-Aided Privacy Threat Identification (Berlin, Heidelberg, 2014); Mina Deng et al., ‘A privacy threat analysis framework: supporting the elicitation and fulfillment of privacy requirements,’ Requirements Engineering 16, no. 1 (2011), https://doi.org/10.1007/s00766-010-0115-7; N. Notario et al., ‘PRIPARE: Integrating Privacy Best Practices into a Privacy Engineering Methodology’ (paper presented at the 2015 IEEE Security and Privacy Workshops, 21–22 May 2015). 26 See, eg: Marcelo Corrales, Paulius Jurčys and George Kousiouris, ‘Smart Contracts and Smart Disclosure: Coding a GDPR Compliance Framework,’ in Legal Tech, Smart Contracts and Blockchain, eds. Marcelo Corrales, Mark Fenwick and Helena Haapio (Singapore: Springer Singapore, 2019); Margot E.
Kaminski and Gianclaudio Malgieri, ‘Multi-layered explanations from algorithmic impact assessments in the GDPR’ (Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, Association for Computing Machinery, 2020). 27 GDPR Art 5(2) ‘… be able to demonstrate compliance …’. 28 GDPR Art 25. 29 Yousef Kh. Majdalawi and Faten Hamad, ‘Security, Privacy Risks and Challenges that Face Using of Cloud Computing,’ International Journal of Advanced Science and Technology 13, no. 3 (2019). 30 Majdalawi and Hamad, ‘Security, Privacy Risks and Challenges that Face Using of Cloud Computing.’

controls aim to reduce the consequences of an incident once it has occurred, ie ex post.31 Incidents in a data protection law context refer to violations of the principles related to the processing of personal data and other related obligations, and of the rights and freedoms of individuals.32 This chapter suggests that within complex decision-making pipelines, explanations have the potential to act as detective controls, offering the recipients the opportunity to check the performance of the decision-making process and seek corrective measures if needed. Monitoring processing activities while in execution, ie runtime compliance,33 is a key component of a systematic compliance strategy, and regularly triggering explanations at different nodes of the process should help with identifying incidents. The same is true for auditing34 and demonstrating accountability. Further, measures to systematically monitor, interpret and audit algorithmic processing are set to become increasingly important, at least within the European market, following the EU’s proposal for an ‘Artificial Intelligence Act’.35 The proposal introduces stricter regulation around the use of AI tools, including the interpretation of algorithmic output,36 ongoing monitoring of operations and automated log-keeping.37 Against this background, the PLEAD project seeks to assist towards a holistic approach to systematic compliance by automating the generation of provenance-based explanations to support compliance strategies. PLEAD does not attempt to open the ‘black box’ just yet. It instead aims to explain the decision-making process as a whole, from its input through to its output and impact, relying on provenance data recorded by the decision-making processes. Provenance records can capture trails of actions, where each action can be attributed to a specific actor, entity and activity. This audit trail, therefore, is capable of showing a complete record of the origin of a decision, such as the data considered during the decision-making process and who provided them. The explanations that are produced by PLEAD’s approach can be used to explain the details of algorithmic processing in the context of automated or semi-automated decision-making even before a resolution is acted upon, ie even before a decision is taken. Where provenance

31 ibid. 32 See GDPR Recs 39 and 75. 33 John Paul Kasse et al., The Need for Compliance Verification in Collaborative Business Processes (Cham, 2018). 34 The ICO defines an audit as the process ‘to determine whether the organisation has implemented policies and procedures to regulate the processing of personal data and whether that processing is carried out in accordance with such policies and procedures. When an organisation complies with its data protection requirements, it is effectively identifying and controlling risks to prevent personal data breaches.’: ICO, A guide to ICO audits (2018), 3, https://ico.org.uk/media/for-organisations/documents/2787/guide-to-data-protection-audits.pdf. 35 ‘Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’ COM(2021) 206 final, 2021 [hereinafter ‘Proposal for an AI Act’]. 36 Proposal for an AI Act, Art 13. 37 ibid, Art 14.

data about processing inside the ‘black box’ have been recorded, eg as a result of an XAI approach, PLEAD is able to take these into account. PLEAD explanations, therefore, do not solely focus on the decision itself but refer to the broader decision-making process. Explanations are thus conceived both as external detective controls, which empower data subjects and help them determine when to exercise their rights, and as internal detective controls, aimed at putting controllers in a position to monitor and audit processing activities and ultimately demonstrate compliance, independently from the reception of a data subject request. By introducing and commenting upon PLEAD’s approach, this chapter seeks to assess the potential of computable explanations as a means to produce ‘explainable-by-design’ socio-technical systems. This chapter is thus organised into two main parts. Section II unpacks the dual function of explanations as external and internal detective controls and therefore shows that explanation generation does not only serve data subject empowerment. Section III then unfolds the approach of the PLEAD project, built to effectively support accountability obligations through explanation automation, and illustrates the potential of computable explanations in context, while highlighting remaining challenges.
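As a rough illustration of the provenance trails just described, the sketch below records a hypothetical loan decision using the W3C PROV data model through the Python prov package. The scenario, identifiers and attributes are our own assumptions for illustration; PLEAD’s actual templates and vocabulary may differ.

```python
# A sketch of a provenance trail for one decision, assuming a
# hypothetical credit-scoring pipeline. Each action is attributed to a
# specific agent, entity and activity, as described in the text.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/')

# Entities: the data considered and the decision produced.
application = doc.entity('ex:loan-application-123')
decision = doc.entity('ex:decision-123', {'ex:outcome': 'refused'})

# Activity: one run of the decision-making pipeline.
scoring = doc.activity('ex:credit-scoring-run-456')

# Agents: who provided the data and who performed the activity.
applicant = doc.agent('ex:applicant-alice')
service = doc.agent('ex:scoring-service-v2')

# The audit trail showing the origin of the decision.
doc.wasAttributedTo(application, applicant)  # who provided the input data
doc.used(scoring, application)               # what the pipeline considered
doc.wasAssociatedWith(scoring, service)      # who ran the pipeline
doc.wasGeneratedBy(decision, scoring)        # where the decision came from
doc.wasDerivedFrom(decision, application)    # the decision's origin

print(doc.get_provn())  # a human-readable PROV-N rendering of the trail
```

Queried at different nodes of the pipeline, records of this kind are what would allow an explanation to be computed before or after a decision is acted upon.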

II.  The Dual Function of Explanations

Explainability refers to ‘the ability for the human user to understand the agent’s logic’.38 An agent should be understood as an algorithmic system that is involved in a decision-making process, eg a recommendation system, a training and tutoring system, a robot or a self-driving car.39 A decision could be defined as any actioned-upon resolution or, more simply, an action taken after consideration of input information comprising the algorithmic output,40 eg the approval of the loan, the display of the news article or the archival of the email in the spam folder.41 The action will in some cases be performed by the decision-making process automatically and forms part of the algorithmic output. In such cases, the decision-making process is described as being solely automated. In other cases, the algorithmic output is used as part of a wider process where humans take it into account and

38 Avi Rosenfeld and Ariella Richardson, ‘Explainability in human–agent systems,’ Autonomous Agents and Multi-Agent Systems 33, no. 6 (2019): 678, https://doi.org/10.1007/s10458-019-09408-y. 39 Rosenfeld and Richardson, ‘Explainability in human–agent systems.’ 40 ICO, Explaining decisions made with AI – Part 1: The basics of explaining AI, 5. 41 Or, in the words of Castelluccia and Le Métayer, an ‘analysis of large amounts of personal data to infer correlations or, more generally, to derive information deemed useful to make decisions […] such as on access to credit, employment, medical treatment, judicial sentences, among other things’: Claude Castelluccia and Daniel Le Métayer, Understanding algorithmic decision-making: Opportunities and challenges (2019), 1, www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf.

combine it with other information in order to make a resolution and act upon it.42 The means to effect explainability is an explanation. An ‘explanation’, generally, should be understood as one or more statements that provide a reason or a justification for a decision or the decision-making process.43 However, the definition differs slightly depending on the field at stake. In the field of XAI, explanations aim to collect ‘features of the interpretable domain […] that have contributed for a given example to produce a decision’.44 In other words, an explanation comprises statements that aim to interpret the behaviour of an algorithm based on its training data, ie the data used to develop the algorithm, its input or newly inputted data, eg data about a specific case, and its output, ie the decision in that specific case.45 In contrast, the term ‘explanation’ used throughout this chapter differs slightly from the term as used in the field of XAI. For data protection purposes, what matters is not to explain how the algorithm works,46 but to justify why the algorithm works the way it does.47 The GDPR does not contain a definition of explanations, although Recital 71 refers to them. Obligations to generate some form of account stem from a combination of Articles 13–15 and 22. Purely automated decision-making, defined as ‘a decision based solely on automated processing, including profiling, which produces legal effects […] or similarly significantly affects [them]’,48 is generally prohibited. It is only permissible under the exceptions set forth in Article 22(2).49 Where automated decision-making takes place, data controllers have an obligation to provide certain information to the data subject both ex ante and ex post of the

42 What is referred to as ‘a human in the loop’: ICO, Explaining decisions made with AI – Part 1: The basics of explaining AI, 6. 43 Alun Preece, ‘Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges,’ Intelligent Systems in Accounting, Finance and Management 25, no. 2 (2018), https://doi.org/10.1002/isaf.1422. The author further distinguishes between ‘transparency’ that justifies a decision by reference to the way the algorithm behind it works, and ‘post-hoc interpretations’ that justify a decision without reference to the inner workings of an algorithm. 44 Grégoire Montavon, Wojciech Samek and Klaus-Robert Müller, ‘Methods for interpreting and understanding deep neural networks,’ Digital Signal Processing 73 (2018): 2, https://doi.org/10.1016/j.dsp.2017.10.011. 45 Rosenfeld and Richardson, ‘Explainability in human–agent systems.’ See table 1. 46 ‘What matters is to justify why the rules are the way they are; explaining what the rules are must further this end’: Talia B. Gillis and Joshua Simons, ‘Explanation < Justification: GDPR and the Perils of Privacy,’ Pennsylvania Journal of Law and Innovation 2 (2019): 71, 76. 47 ‘In the end, controllers must be able to show that the correlations applied in the algorithm can legitimately be used as a justification for the automated decisions’: Lokke Moerel and Marijn Storm, ‘Automated decisions based on profiling: Information, explanation or justification – that is the question!’ in Autonomous Systems and the Law, eds. Nikita Aggarwal, Horst Eidenmüller, Luca Enriques, Jennifer Payne and Kristin van Zwieten (Verlag C.H. Beck, 2019), 94. 48 GDPR Art 22(1). 49 GDPR Art 22(2): ‘(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent.’

processing. Before the processing takes place, data controllers must inform the data subjects about ‘the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject’.50 After the processing, data controllers must ensure access to information confirming the existence of automated decision-making under Article 15(1)(h), and when purely automated decision-making is happening based upon a valid exception, appropriate safeguards, ie additional rights, must be granted, such as the right to obtain human intervention, to express an opinion and to contest the decision.51 Recital 71 specifies that these safeguards include ‘specific information to the data subject’ and a right ‘to obtain an explanation of the decision reached’. The fact that explanations are mentioned in a non-binding recital is the main reason behind the debate about the existence of a right to explanation.52 However, recitals are intended to assist in the interpretation of the binding part of EU regulations, and without an explanation the right to contest the decision is likely to be purely formal. In the field of data protection law, explanations are therefore not necessarily focused upon the way the algorithm actually works. They are defined in relation to their functionality and are necessarily linked to the exercise of what we term ‘corrective rights’, which enable data subjects to proactively or retroactively terminate or amend processing activities, such as the right to withdraw consent, the right to object, the right to rectify, the right to erasure or the right to contest purely automated decisions. By way of example, to enable data subjects to contest purely automated decisions, at a minimum information should be given relating to whether the decision was purely automated, to the applicable exception, to the relevance and accuracy of the data used as input, and to the fairness of the treatment. If explanations are the objects of data subject rights, and in that sense could be seen as external detective controls enabling data subjects to detect violations of the framework and react, they are also necessarily the objects of obligations imposed upon controllers, usually formulated in terms of principles, ie data minimisation, accuracy, fairness. With the introduction of the principle of accountability in GDPR Article 5(2) and the requirement of data protection by design in GDPR Article 25, explanations thus have the potential to become internal detective controls not necessarily connected to data subjects’ requests. The explanations that will be referenced in this chapter are justifications of a decision taken, ‘showing the rationale behind each step in the decision’.53

50 The provision is replicated across Arts 13(2)(f) and 14(2)(g). 51 GDPR Art 22(3). See also Art 15(1)(h) of the GDPR. 52 See Selbst and Powles, ‘Meaningful information and the right to explanation,’ 235. 53 O Biran and Courtenay Cotton, Explanation and Justification in Machine Learning: A Survey (2017), 1.
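The minimum items of information just listed for contesting a purely automated decision can be pictured as a single structured record. The sketch below is our own illustration; the record type and field names are hypothetical and are not a schema mandated by the GDPR or defined by the PLEAD project.

```python
# A sketch of the minimum information needed to contest a purely
# automated decision, collected into one hypothetical record.
from dataclasses import dataclass, field

@dataclass
class ContestationExplanation:
    solely_automated: bool        # was the decision purely automated?
    art22_exception: str          # 'contract', 'law' or 'explicit consent'
    input_data: dict              # the data used as input ...
    input_accuracy_checked: bool  # ... and whether it was relevant and accurate
    fairness_notes: str           # information on the fairness of the treatment
    rationale: list = field(default_factory=list)  # rationale behind each step

explanation = ContestationExplanation(
    solely_automated=True,
    art22_exception='explicit consent',
    input_data={'income': 40_000, 'existing_debt': 5_000},
    input_accuracy_checked=True,
    fairness_notes='model audited for indirect discrimination',
    rationale=['score below threshold', 'threshold set by credit policy'],
)
```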


A.  Explanations as External Detective Controls

As attested by the formulation of Article 22(1), automated decision-making falls within the scope of the GDPR when impactful decisions are produced.54 The Article 29 Data Protection Working Party (WP29)55 in its opinion on automated decision-making considers two categories of decisions as impactful: decisions that affect someone's legal rights, citing as examples the cancellation of contracts, the denial of social benefits or the refusal of citizenship; and decisions that significantly affect someone's circumstances, citing decisions that impact on finance, health, employment or education opportunities.56 The latter category is interpreted contextually, by reference to the actual effect on the individual rather than the type of processing operation.57 For impactful decisions, automated decision-making is permissible only under the three exceptions narrowly defined in Article 22(2), ie to enter into, or for the performance of, a contract; if it is authorised by law; or if the data subject gives explicit consent. Such processing is, however, subject to suitable safeguards. Generally speaking, suitable safeguards, as already mentioned, are required both ex ante and ex post. Ex ante, explanations can assist in providing meaningful information about the logic involved and the significance and consequences of the processing prior to its start.58 They are usually explanative statements provided in static information pages, and aim to assist data subjects in exercising their right to be informed. Ex ante explanations must at a minimum include sufficient detail for the data subject to understand the criteria used to reach a decision59 and real, tangible examples of the possible effects.60 WP29 illustrates this with the example of a credit scoring process used to assess a loan application, as follows. Let us assume a scenario where a bank (the data controller) offers loans to its customers. In order for a customer to secure a loan, the customer has to fill out an application. The application requests a number of details, from contact information to financial data and spending habits. The answers on the application are used by a credit reference agency (CRA) which collaborates with the bank to calculate a credit report on behalf of the bank.
54 'which produces legal effects concerning him or her or similarly significantly affects him or her.'
55 As of 25 May 2018 WP29, which was previously responsible for monitoring the application of data protection law across the EU, has been replaced by the European Data Protection Board (EDPB). The EDPB has since endorsed all Opinions of the WP29.
56 Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 21–22.
57 WP29 cites examples of processing that, even though not significant for the general population, might prove significant when addressing minorities or vulnerable groups, or might be triggered by the actions of others. For instance, WP29 gives the example of a credit card provider that decides to lower the credit limit of a customer based not on their credit history but on the profile of customers who live in the same area or shop in the same businesses. See Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 22.
58 GDPR Arts 13(2)(f), 14(2)(g) and 15(1)(h).
59 Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 25.
60 ibid, 26.

The filled-out application is factored in along with additional information that the CRA has gathered about the applicant. An algorithmic model then calculates a credit report, which is sent back to the bank. The bank incorporates this credit report into its own decision-making process, which typically weighs the credit report along with other information about the applicant that the bank holds to calculate a creditworthiness score. If the creditworthiness score is below a certain threshold, the application is automatically rejected. Above that threshold, the application is forwarded to an employee of the bank who assesses the case, adjusts the size and the interest rate of the loan if needed and approves or refuses the application.61 Although the CRA is not directly captured by the obligations of Article 22 (these apply to the bank, which is the data controller taking the decision), in practice the CRA will have to assist with explainability obligations. This is either because the CRA will be considered a joint controller62 or because, as a data processor, such an obligation stems from the contractual agreement between the two parties.63 Also note that the recent EU proposal for an 'AI Act',64 if introduced in its current form, will impose similar obligations on credit institutions.65 Based on the above scenario, the ex ante explanations of a controller that performs credit scoring should, according to WP29, contain details of the main characteristics of reaching a decision; the sources of the data used and their relevance; assurances that the methods used remain fair, effective and unbiased; and contact details to request reconsideration of declined decisions.66 The possible effects of the processing could be illustrated with examples of how different credit report values would influence a decision to grant or deny the loan.67 Ex post explanations, on the other hand, should be accounts that provide specific information to justify the decision reached and help data subjects detect potential violations of the framework. The aim of these explanations is to provide data subjects with an adequate understanding of the decision so that they can exercise their corrective rights,68 ie to express their point of view, request human intervention and challenge the decision.
61 This is a simplified, generic version of a loan application process. WP29 does not provide any specifics as to the exact process they had in mind.
62 Where the CRA co-determines the purposes and means of processing, for example when the decision is solely based on the CRA report. See GDPR Art 26(1).
63 And specifically the processor's obligation to assist the controller in responding to requests for the exercise of data subject rights under GDPR Art 28(3)(e).
64 Proposal for an AI Act, COM(2021) 206 final.
65 AI systems used to evaluate the creditworthiness of natural persons are considered 'high-risk' under ANNEX III 5(b) of the Proposal for an AI Act. High-risk systems will face stricter monitoring and auditing obligations (see Article 14 of the Proposal for an AI Act).
66 Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 25–26.
67 See also ICO, Automated decision-making and profiling (2018), 18, https://ico.org.uk/media/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling-1-1.pdf, citing as examples how creditworthiness can affect payment options.
68 ‘in a way that is useful, intelligible, and actionable to the data subject.’ Selbst and Powles, ‘Meaningful information and the right to explanation,’ 242.

As WP29 notes, '[t]he data subject will only be able to challenge a decision or express their view if they fully understand how it has been made and on what basis.' Such explanations should be meaningful to the data subject, ie meaningful to a human, presumably without requiring technical expertise. Using the credit scoring example from above, an ex post explanation should clarify to the data subject not only which data sources were used but also the weight each of them carried in calculating the credit report. It should additionally clarify what impact that credit report had on the loan application and what its effect is, for example in relation to the interest rate. These explanations can include elements of accountability notifications,69 providing the recipients with a clear picture of the remedies available to them.70 Ex post explanations can act as external detective controls, and could then be followed by corrective actions, such as actions based on Articles 7771 or 79.72

B.  Explanations as Internal Detective Controls

So far, explanations have been approached as a tool to explain impactful decisions to the data subject and assist them in exercising their corrective rights. We suggest that explanations also have the potential to function as internal detective controls. Internal detective controls can assist data controllers in demonstrating compliance under the principle of accountability introduced in GDPR Article 5(2), which should be read together with Article 25 and the requirement of data protection by design. The principle of accountability implies that data controllers are in charge of leading the compliance effort and should be in a position to demonstrate compliance with the whole data protection framework.73 To do so, data controllers will need to document their processing activities in records and keep these records up to date. Where high risks are anticipated, Article 35(3)(a) of the GDPR obliges controllers to perform 'a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling',74 ie a Data Protection Impact Assessment (DPIA).75
69 See in section D below the example where information about the date and time of human review and the details of the human reviewer are included.
70 The concept of combining (algorithmic) explanations with accountability notifications is also present in Kaminski and Malgieri, 'Multi-layered explanations from algorithmic impact assessments in the GDPR'; Dillon Reisman and others, 'Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability' (AI Now, April 2018).
71 'Right to lodge a complaint with a supervisory authority'.
72 'Right to an effective judicial remedy against a controller or processor'.
73 In the words of the EDPS, 'You are accountable for what you do and why you do it the way you do it.' [emphasis in the original] European Data Protection Supervisor (EDPS), Accountability on the ground: Guidance on documenting processing operations for EU institutions, bodies and agencies – Summary (2019), 6, https://edps.europa.eu/sites/edp/files/publication/19-07-17_summary_accountability_guidelines_en.pdf. Although aimed at clarifying Regulation (EU) 2018/1725 for EU institutions, it mirrors the approach of the GDPR and hence the guidance is relevant. GDPR Art 5(2): 'The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1 ("accountability").'

DPIAs enable data controllers to document the details of the processing, assess any risks it might pose to the rights and freedoms of individuals, and show that suitable controls have been put in place to mitigate those risks. DPIAs, therefore, are key to demonstrating how data controllers meet their accountability obligations within the meaning of Article 5(2).76 The GDPR's data protection-by-design requirement implies that the mitigation of processing risks is an iterative exercise requiring the implementation of organisational and technical safeguards as early as possible.77 Both provisions highlight that the accountability of the data controller requires an ongoing monitoring process that begins very early on. Ongoing accountability in the context of algorithmic decisions requires monitoring of the entire data lifecycle. Technical controls can complement policy and organisational measures to offer a hybrid accountability approach. Internal detective controls are internal measures facilitating the detection of potential compliance issues,78 implemented as part of a systematic compliance strategy. The role of internal detective controls, in other words, is to support the monitoring of, and reporting on, how data controllers meet their obligations under the GDPR. Computable explanations can become internal detective controls and assist in demonstrating compliance with many obligations imposed upon data controllers. For example, the storage limitation principle places an obligation to retain data only for as long as necessary for the processing activities.79 To demonstrate compliance, a controller would have to explain how long a piece of data is retained and why such retention is necessary. Similarly, to demonstrate compliance with the accuracy principle,80 a controller would have to demonstrate that the information they process is accurate and up to date.

74 GDPR Art 35(3)(a).
75 Note that WP29's interpretation is that 'based on' should be taken to encompass processing and profiling that is not solely automated. Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 29.
76 'The controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1 ("accountability").'
77 GDPR Art 25: 'Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.'
78 Majdalawi and Hamad, 'Security, Privacy Risks and Challenges that Face Using of Cloud Computing.'
79 GDPR Art 5(1)(e).
80 GDPR Art 5(1)(d).

In both cases, explanations can be used to collect such justifications. An explanation linking a certain piece of data to its data source and its date of creation can serve as an indicator of its accuracy. Further linking the data to the processing purpose for which it was collected and to the applied retention policy can help demonstrate compliance with storage limitation. Such explanations will assist a data governance team or an auditor when monitoring runtime compliance. Importantly, computable explanations can complement other systematic compliance efforts, eg using AI systems to monitor breaches, where the output of such efforts is included in the explanations constructed by PLEAD. Once the dual function of explanations is acknowledged, it becomes clearer why automating explanations can assist in building systematic and comprehensive compliance strategies. Explanations have the potential to be useful for demonstrating compliance with all the data protection principles listed in GDPR Article 5 and related obligations. That said, automating explanations has limits, and it does not necessarily follow that computed explanations will substitute explanations produced by humans. What is more, computed explanations will need to be actioned upon by humans. Methodologies for computing explanations should acknowledge these limits and aim to support and reduce human intervention rather than eliminate it.
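To illustrate, the following minimal sketch shows what such an internal detective control might look like in practice: each stored item carries provenance-style metadata, from which a storage-limitation report is generated. The field names and the retention period are hypothetical assumptions for exposition, not structures drawn from PLEAD.

```python
# A hedged sketch of an internal detective control: data items carry
# provenance-style metadata, and a report flags retention overruns.
# Field names and retention periods are illustrative assumptions.

from datetime import date

records = [
    {"field": "monthly_spending", "source": "loan application form",
     "collected": date(2020, 1, 10), "purpose": "credit scoring",
     "retention_days": 365},
]

def storage_limitation_report(records, today: date):
    """Explain, per item, how long it has been held and whether that exceeds policy."""
    for r in records:
        held_for = (today - r["collected"]).days
        status = "OVERDUE FOR ERASURE" if held_for > r["retention_days"] else "within policy"
        yield (f"'{r['field']}' from {r['source']} was collected on {r['collected']} "
               f"for the purpose of {r['purpose']}; held {held_for} days "
               f"(policy: {r['retention_days']} days) – {status}.")

for line in storage_limitation_report(records, date(2021, 6, 1)):
    print(line)
```

An auditor or data governance team could run such a report at any time, independently of any data subject request, which is precisely what distinguishes an internal from an external detective control.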

III.  PLEAD's Approach to Explanation Automation

Computing explanations has proved to be a challenging task, for different reasons. A distinction should probably be drawn between accounts and standalone explanations. A standalone explanation is able to provide an answer to a question without the need for further clarification. While accounts do not necessarily provide complete, meaningful answers to the 'how' and the 'why' questions, they should ideally contribute towards a standalone explanation. That being said, this distinction does not imply that accounts do not facilitate the exercise of a right or compliance with an obligation. This is the reason why computing explanations remains a useful exercise, and one that necessitates following a methodology that can precisely define the legal context (ie the legal obligation to meet or the right to enable), the audience, and the timing at which the explanation should be generated within the decision-making process. PLEAD's contribution lies in leveraging provenance-based technology to compute a wide range of explanations, which can serve both as external and as internal detective controls.

A.  The Limits of Explanation Automation

Focusing only upon how well automated explanations can explain details of the algorithmic processing is not necessarily the right approach from a legal perspective: the focus should be on whether and how explanations meet the needs of their recipients.81

It is true that in many cases explanations appear too technical to their recipients.82 Still, some authors recognise that technical explanations look promising and are welcome as an important part of the governance jigsaw.83 In the literature, the terms 'algorithmic transparency', 'systemic accountability' and 'reviewability' have been used either interchangeably or as extensions of one another.84 Coming back to the loan application scenario, consider the following explanation:

We regret to inform you that your application was unsuccessful. After a review of your application, we have concluded that your current financial situation precludes this institution from extending credit to you at this time. Your credit score of 350 introduces a high level of risk exposure. When your financial picture changes we would be happy to reconsider your application.

The explanation presents the decision – a refusal – and the reason for the refusal. To achieve an understanding of the decision, however, the explanation should, depending upon its audience, provide more tailored information. A refused applicant might want to know the significance of the credit report for the financial risk; any mitigating circumstances or future steps the applicant could take to reduce this risk; whether the credit report was the sole reason for the refusal; whether the review of the application was a thorough one and whether a human was involved; and who to turn to next to discuss the consequences of this refusal. An auditor, whose job is to assess the performance of the application algorithm, might want to know how the credit score was calculated; which data sources were collated; which checks were performed to validate the results; what percentage of similar applications have been approved or rejected; and so on. A supervisory authority might want to know, for example, how the decision was communicated to the applicant; whether human review was performed and to what extent it was meaningful; whether the applicant was given an opportunity to contest the decision; and what safeguards exist to ensure that fairness of treatment has been implemented.

81 See, eg: Algorithm Watch and Bertelsmann Stiftung, Automating Society: Taking Stock of Automated Decision Making in the EU; Brown, '2 – The limits of human and automated decision-making'; Edwards and Veale, 'Slave to the Algorithm? Why a "Right to an Explanation" Is Probably Not the Remedy You Are Looking For.'
82 Gillis and Simons, 'Explanation < Justification: GDPR and the Perils of Privacy.'
83 See, eg: Marion Oswald, 'Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power,' Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2128 (2018), https://doi.org/10.1098/rsta.2017.0359.
84 Cobbe, 'Administrative law and the machines of government: judicial review of automated public-sector decision-making'; Gillis and Simons, 'Explanation < Justification: GDPR and the Perils of Privacy'; Satell and Sutton, 'We Need AI That Is Explainable, Auditable, and Transparent.'
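This audience dependence can be mocked up as a simple filter over one and the same decision record, as in the hedged sketch below. The record keys and the per-audience field selections merely paraphrase the concerns just listed; they are illustrative assumptions, not a normative catalogue.

```python
# An illustrative sketch: one decision record, filtered per audience.
# All keys, values and selections are hypothetical.

DECISION_RECORD = {
    "outcome": "refused",
    "credit_score": 350,
    "score_weight": "primary factor",
    "human_review": None,
    "data_sources": ["application form", "CRA records"],
    "validation_checks": ["input range check", "model drift check"],
    "approval_rate_similar_cases": 0.42,
    "contest_contact": "appeals@bank.example",
}

AUDIENCE_FIELDS = {
    "data_subject": ["outcome", "credit_score", "score_weight", "contest_contact"],
    "auditor": ["data_sources", "validation_checks", "approval_rate_similar_cases"],
    "supervisory_authority": ["outcome", "human_review", "contest_contact"],
}

def explain_for(audience: str, record: dict) -> dict:
    """Select only the facts relevant to the given recipient."""
    return {k: record[k] for k in AUDIENCE_FIELDS[audience]}

print(explain_for("auditor", DECISION_RECORD))
```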

Evidently, an adequate understanding of a decision depends upon its recipient and its aim, ie it depends 'on who is justifying what to whom'.85 Current approaches to explainability are criticised in the literature precisely because of this lack of conditionality. Besides, it has been noted that the GDPR's transparency requirements fall short of mandating individualised explanations.86 Moreover, computable explanations have mainly been able to explain the process by which a decision has been reached, ie the 'how' of a decision. The 'why' of a decision, ie the correlations between the data and the reasons that led to the specific decision, has been harder to document meaningfully in computable explanations.87 At the same time, current approaches appear limited even in relation to the 'how' question: Singh et al. note that these approaches can present an account of the process but neglect details such as the training of the model, the sources of the training data and any assumptions in the design of the decision-making pipeline.88 For effective ongoing accountability, which Cobbe refers to with the term 'reviewability' of a system,89 it is important to convey how each individual process fits within the wider socio-technical system of the controller.90 Explanations can offer a solution, provided that they capture enough details about the design, implementation and performance of the system. Finally, approaches to computable explanations have so far failed to show the versatility necessary for their audience to achieve an understanding of the decision and its potential impacts on their lives. In complex scenarios of automated decision-making involving multiple actors (for example, our previous scenario with the bank and the CRA), each actor must provide their own explanation, and combining these explanations is not necessarily straightforward.91 Most importantly, to be able to meaningfully justify algorithmic decision-making, explainability approaches must distinguish between: (a) who offers the explanation; (b) what is explained; and (c) to whom the explanation is offered, ie who the recipient is.92

85 Gillis and Simons, 'Explanation < Justification: GDPR and the Perils of Privacy,' 92.
86 Bryan Casey, Ashkon Farhangi and Roland Vogl, 'Rethinking Explainable Machines: The GDPR's "Right to Explanation" Debate and the Rise of Algorithmic Audits in Enterprise,' Berkeley Technology Law Journal 34 (2018).
87 Sandra Wachter, Brent Mittelstadt and Chris Russell, 'Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR,' Harvard Journal of Law & Technology (Harvard JOLT) 31, no. 2 (2017–2018), 888.
88 Jatinder Singh, Jennifer Cobbe and Chris Norval, 'Decision Provenance: Harnessing Data Flow for Accountable Systems,' IEEE Access 7 (2019).
89 Cobbe, 'Administrative law and the machines of government: judicial review of automated public-sector decision-making.'
90 Singh, Cobbe and Norval, 'Decision Provenance: Harnessing Data Flow for Accountable Systems.'
91 Michael Hind, 'Explaining explainable AI,' XRDS 25, no. 3 (2019), https://doi.org/10.1145/3313096.
92 Hind, 'Explaining explainable AI'; Kaminski and Malgieri, 'Multi-layered explanations from algorithmic impact assessments in the GDPR'; Mireia Ribera and Àgata Lapedriza, 'Can we do better explanations? A proposal of user-centered explainable AI' (paper presented at the Joint Proceedings of the ACM IUI 2019 Workshops, Los Angeles, US, 20 March 2019).


B.  Provenance-Based Explanations

PLEAD has developed a methodology to build 'explainable-by-design' decision-making socio-technical systems. PLEAD's approach uses provenance trails as the basis of the explanations. Of note, the provenance-based approach to explanations being further developed in PLEAD has been acknowledged in the ICO's guidance on 'Explaining decisions made with AI'.93 Computable explanations can document the design, implementation and performance of the decision-making process and support the organisation in demonstrating compliance. The position of the PLEAD project, therefore, is that supportive automation – such as computable explanations – is crucial to help scale compliance efforts within organisations. Recording the full provenance of a decision-making pipeline is one approach that can be used to increase the traceability of its decisions as part of an overall strategy for the management of decision-making. Specifically, decision-making systems should be explainable-by-design, with explanations implemented as early as possible from the design stage.94 The provenance of a decision-making process is a recorded audit trail.95 The provenance of a decision provides an account of the actions a system performed to produce that decision, in the form of a knowledge graph. It is 'a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering'96 a decision, including data attribution and data derivations. Evidently, provenance in the context of decision-making can provide valuable information about the factors that influenced a decision, be they individuals, organisations or data. Provenance enables us to trace a decision back to its input data and to identify the responsibility for each activity during the decision-making process. Suitably recording the full audit trail of all processes that led to a decision, ie its provenance, allows us to take a holistic view that considers all of the above aspects when constructing explanations. For the purpose of constructing explanations, the provenance of a decision-making process must first be recorded in sufficient detail. Although the details are specified per category of explanations to be supported,97 generally speaking, provenance must allow (a sketch of such a trail follows Figure 1):

• to identify the various types of data of the universe of discourse, ie a loan application, an applicant, an automated decision or a decision after human review, etc;
• to trace the outcomes back to their influencers, including the algorithmic outputs;
• to attribute or assign responsibility to software systems or humans for their actions or outputs; and
• to identify the various activities, their respective timing and their contribution to the outcomes.

93 ICO, Explaining decisions made with AI – Part 2: Explaining AI in practice (2020), 59–60, https://ico.org.uk/media/about-the-ico/consultations/2616433/explaining-ai-decisions-part-2.pdf.
94 Eg explanations that enable recipients to take action (such as making corrections to erroneous information) appear to be effective detective controls.
95 Luc Moreau and Paolo Missier, PROV-DM: The PROV Data Model (2013), www.w3.org/TR/2013/REC-prov-dm-20130430/.
96 ibid, 1.
97 See section IV.C.

Figure 1  Generating explanation narratives from provenance – an overview
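A minimal sketch of a trail meeting these four requirements follows, written against the W3C PROV data model via the Python prov package. The plead: namespace, the identifiers and the type names are illustrative stand-ins for PLEAD's actual vocabulary, not the project's real definitions.

```python
# A hedged sketch of recording a decision's provenance with W3C PROV,
# using the Python 'prov' package; all identifiers and types are
# illustrative stand-ins for PLEAD's actual vocabulary.

from datetime import datetime
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("plead", "https://plead.example/ns#")

# Agents: to whom responsibility can be attributed or assigned.
bank = doc.agent("plead:Bank", {"prov:type": "plead:DataController"})
customer = doc.agent("plead:Customer", {"prov:type": "plead:DataSubject"})

# Entities: the types of data of the universe of discourse.
application = doc.entity("plead:application-42", {"prov:type": "plead:LoanApplication"})
decision = doc.entity("plead:decision-42", {"prov:type": "plead:AutomatedDecision"})

# Activity: a step in the pipeline, with its timing.
scoring = doc.activity("plead:scoring-42",
                       datetime(2021, 3, 1, 10, 0, 0), datetime(2021, 3, 1, 10, 0, 2))

# Influence relations: trace the outcome back to inputs and responsibility.
doc.wasAttributedTo(application, customer)
doc.used(scoring, application)
doc.wasAssociatedWith(scoring, bank)
doc.wasGeneratedBy(decision, scoring)
doc.wasDerivedFrom(decision, application)

print(doc.serialize(indent=2))   # PROV-JSON serialisation of the recorded trail
```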

PLEAD has developed a specific provenance vocabulary for decision-making-related concepts (eg agents of type:DataController or type:DataSubject; entities of type:Request for activities of type:Erasure or type:Rectification, etc) that can be used to tag the recorded provenance. The provenance traces can then be used to generate explanations in two steps. First, specific parts of the full provenance graph are extracted into smaller provenance graphs (sub-graphs), based on a query looking for a specific graph pattern specified using terms from the PLEAD vocabulary as part of an explanation template.98 Then, the information contained within the extracted sub-graph is used to complete the corresponding narrative, contained in the same explanation template. The result is processed by a natural language generation engine99 to construct the sentences that constitute the explanation (Figure 1). This process has been incorporated into a wider framework whose goal is to identify the legal and governance requirements that can benefit from explanations and to classify them into explanation templates together with associated provenance requirements. These templates relate to different parts of the decision-making process, serve different goals and address different audiences.

98 See 'socio-technical specification' in section IV.B below.
99 Based on the SimpleNLG library: Albert Gatt and Ehud Reiter, 'SimpleNLG: A Realisation Engine for Practical Applications' (paper presented at the Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009), Athens, Greece, March 2009).

Altogether, they are collected in a framework we call the 'socio-technical specification', which powers the automation of the computable explanations. The socio-technical specification is described in the section that follows.
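The two-step generation just described can be mimicked in a toy sketch: a template's graph pattern is matched against provenance triples and the resulting bindings complete the template's narrative. The triple representation, the naive matcher and the string substitution are simplifying assumptions; PLEAD queries full PROV graphs and renders sentences with an NLG engine (SimpleNLG) rather than by string formatting.

```python
# A toy sketch of template-driven explanation generation: match a graph
# pattern, then fill the narrative with the bindings. A deliberate
# simplification of PLEAD's provenance queries and NLG step.

provenance = [  # (subject, relation, object) triples extracted from a trail
    ("plead:decision-42", "wasGeneratedBy", "plead:scoring-42"),
    ("plead:scoring-42", "used", "plead:application-42"),
    ("plead:scoring-42", "wasAssociatedWith", "plead:Bank"),
]

template = {
    "pattern": [("?decision", "wasGeneratedBy", "?activity"),
                ("?activity", "used", "?input"),
                ("?activity", "wasAssociatedWith", "?agent")],
    "narrative": "{decision} was produced by {agent} using {input}.",
}

def generate(template: dict, triples: list) -> str:
    bindings = {}
    for subj_var, relation, obj_var in template["pattern"]:
        for subj, rel, obj in triples:   # naive matching on relation names only
            if rel == relation:
                bindings[subj_var.strip("?")] = subj
                bindings[obj_var.strip("?")] = obj
    return template["narrative"].format(**bindings)

print(generate(template, provenance))
# -> plead:decision-42 was produced by plead:Bank using plead:application-42.
```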

C.  Explanation Automation

In its authoritative guidance on explainability for AI decisions, the ICO sets out seven steps for determining the explanations required.100 Although this has been a starting point when considering the automation of explanations, clarification and specificity are required for most of these steps. The computable explanations generated by the PLEAD project are driven by practical requirements drawn from selected use cases. This use-case approach is beneficial for explanation generation, as it allows for an in-depth examination of explanations in their real-life context. For PLEAD, therefore, the first step was to prioritise explanations by focusing on those whose generation is critical for achieving legal compliance and other organisational needs. We term these explanations 'legally-grounded'. Legally-grounded explanations are required either explicitly, as a direct legal and/or governance obligation to provide an explanation, or implicitly, where an explanation would facilitate compliance with some legal and/or governance obligation. An example of an explicit legal obligation is the obligation of a bank to explain how an automatically created creditworthiness score influenced the approval or refusal of a loan, as this obligation stems directly from Article 22 of the GDPR.101 In contrast, an explanation generated to detect when a loan applicant read the information notice about the logic of the system serves an implicit obligation. Here, the explicit obligations are the bank's notification and accountability obligations. The bank is required to display certain information to the applicant before the decision-making.102 This requirement is served by an information notice that is typically displayed on a web page prior to access to the application. However, the bank must also be in a position to demonstrate compliance under the principle of accountability.103 The bank is free to determine how it will demonstrate such compliance. The generation of an explanation detecting that the applicant was shown the information notice as part of the process of submitting an application would in this case be seen as good practice for achieving compliance with its notification obligations.

100 1. 'Select priority explanations by considering the domain, use case and impact on the individual'; 2. 'Collect the information you need for each explanation type'; 3. 'Build your rationale explanation to provide meaningful information about the underlying logic of your AI system'; 4. 'Translate the rationale of your system's results into useable and easily understandable reasons'; 5. 'Prepare implementers to deploy your AI system'; 6. 'Consider contextual factors when you deliver your explanation'; 7. 'Consider how to present your explanation'. ICO, Explaining decisions made with AI – Part 2: Explaining AI in practice, 3–7.
101 In combination with GDPR Arts 13, 14 and 15, as has been previously explained.
102 GDPR Art 13(2).
103 GDPR Art 5(2).

PLEAD has determined the key requirements for legally-grounded explanations from three main areas: applicable laws, ie primary requirements for explanations; authoritative guidance and standards by expert groups and bodies, ie secondary requirements for explanations; and internal compliance functions, ie tertiary requirements for explanations. For each requirement, PLEAD identifies the building blocks needed to construct an explanation. Building blocks are the goals of the explanation, its minimum required content, the intended recipients and responsible agents, the underlying questions or concerns it addresses, and when and how it is to be triggered. The building blocks determine not only the content of the explanation but also its time-scale, format and visuals. We gather these building blocks into explanation generation templates that we call the 'socio-technical specification'. The socio-technical specification comprises source tables that are then used to inform the engineering of the explanation automation in iterative feedback rounds. The technology that underpins PLEAD's contribution is provenance. Provenance, and specifically its standard PROV, describes how a piece of information or data was created and what influenced its production.104 Within recorded provenance trails, we can retrace automated decisions to provide answers to questions such as what data were used to support a decision; who or what organisation was responsible for the data; and who else might have been impacted. Importantly, provenance can also be used to record actions that fall outside the strict decision-making process, for example when a certain version of an information notice is created, uploaded to the website and displayed before the decision-making session begins. This is paramount, as it allows us to capture information that relates to accountability but would have been impossible to capture with other XAI methods, because it relates to processes that happen before processing for the automated decision-making has begun. There seems to be an assumption in the literature that the added complexity of AI/ML systems that produce explainable decisions sacrifices effectiveness.105 PLEAD overcomes this assumption by outsourcing the explanation generation to a separate component that we call the 'Explanation Assistant'. For each explanation in the socio-technical specification, we match its building blocks to the queries, provenance data and provenance mark-ups required for the generation of an explanation. A provenance vocabulary is created alongside this process to express which information a system should record in provenance. The socio-technical specification is translated into rules for the automatic generation of explanations that are richer than current approaches.
104 Luc Moreau and Paolo Missier, 'PROV-DM: The PROV Data Model,' 30 April 2013, W3C Recommendation, www.w3.org/TR/2013/REC-prov-dm-20130430/.
105 Wachter and others seem to treat ML systems as inherently probabilistic, suggesting that 'the use of complex probabilistic analytics' is a hindrance to the explanation of specific decisions, even where it does not similarly hinder explanations of system functionality.

The Explanation Assistant is responsible for collecting the recorded provenance of each step of the decision-making process and using it, in accordance with the rules of the socio-technical specification, to automatically compute explanations. Because the Explanation Assistant lives outside the decision-making pipeline, it is able to report not only on explanations about the automated decision but also on explanations about the processes that exist in the wider environment of the organisation. For example, the Explanation Assistant can generate explanations about the design decisions that shaped the decision-making pipeline, provided it has rules and provenance data about them. An explainable-by-design system, designed by following PLEAD's methodology, is able to compute explanations that address the shortcomings that have so far been highlighted in the literature.106
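Architecturally, this decoupling might be sketched as follows, under stated assumptions: the Assistant is a separate component that ingests whatever provenance records the pipeline (or any other organisational process) emits and applies the rules registered from the socio-technical specification. The rule and record formats here are hypothetical, not PLEAD's actual interfaces.

```python
# An architectural sketch of an Assistant decoupled from the pipeline.
# Rule and record formats are hypothetical.

class ExplanationAssistant:
    def __init__(self):
        self.rules = []                            # (predicate, renderer) pairs

    def register_rule(self, applies_to, render):
        """A rule derived from one template of the socio-technical specification."""
        self.rules.append((applies_to, render))

    def ingest(self, record: dict):
        """Called with a provenance record from any process; returns triggered explanations."""
        return [render(record) for applies_to, render in self.rules if applies_to(record)]

assistant = ExplanationAssistant()
# A rule about a process outside the decision pipeline: notice publication.
assistant.register_rule(
    applies_to=lambda rec: rec.get("type") == "NoticeDisplayed",
    render=lambda rec: f"Information notice v{rec['version']} was displayed at {rec['time']}.",
)
print(assistant.ingest({"type": "NoticeDisplayed", "version": 3, "time": "2021-03-01T09:59"}))
```

Because the Assistant only consumes records, any component able to emit provenance in the agreed vocabulary can be covered, which is what makes the approach agnostic to the system's architecture.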

D.  An 'Explainable-By-Design' Loan Application Use Case

The added value of PLEAD's computable explanations can be illustrated by referring to the loan application scenario (Figure 2). To recap the scenario: a bank ('Bank') decides whether to accept or reject a customer's ('Customer') application for a loan based on a creditworthiness score. The creditworthiness score is calculated by weighing the data that the Customer has entered into the application against data from other sources and a credit report about the Customer received from a credit reference agency ('CRA'). The CRA's credit report has also been calculated automatically, based on the data entered by the Customer on the application form and other data sources available to the CRA.107 For the calculation of the credit report and the creditworthiness score, the CRA and the Bank use automated decision-making. As a result, they must satisfy the obligations of Article 22 of the GDPR. The creditworthiness score of the Bank has a numeric value. A numeric value below a certain threshold triggers an automatic rejection, as the Customer is considered unreliable. In this case, a decision by solely automated means has been taken and the Bank needs to satisfy the obligations of Article 22 of the GDPR. In parallel, the process used by the CRA to calculate the credit reports constitutes processing solely by automated means. It would appear that the CRA is not directly captured by the obligations of Article 22 in relation to the calculation of the credit report prior to the Bank's decision, although the matter is debated.108
106 See above, section III.C.
107 But using a different algorithmic model than the one used by the Bank.
108 WP29 is of the opinion that credit scoring constitutes profiling, which Art 22 classifies as automated processing of personal data. See Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (2018), 8: 'Decisions that are not solely automated might also include profiling. For example, before granting a mortgage, a bank may consider the credit score of the borrower, with additional meaningful intervention carried out by humans before any decision is applied to an individual.'

However, arguably, once the credit report has been used in a decision, the CRA would need to demonstrate directly, or assist the Bank in demonstrating, how and why the calculations in the report were performed.109 They must be able to verify the results, provide a simple explanation of the rationale behind them and illustrate the key decision points that formed the basis for the decision.110 Additionally, they need to have processes in place for the Customer to contest or appeal the decision.111

Figure 2  The flowchart of the simulated loan decision pipeline
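The pipeline of Figure 2 can be made concrete with a few lines of Python. The weights, the rejection threshold and the record structures below are illustrative assumptions for exposition, not details of the project's actual simulation.

```python
# A hedged sketch of the simulated loan pipeline: the CRA computes a credit
# report, the Bank computes a creditworthiness score, and a below-threshold
# score triggers a solely automated rejection. All numbers are illustrative.

def cra_credit_report(form: dict, cra_records: dict) -> int:
    """The CRA combines the Customer's form with its own records into a credit report."""
    base = cra_records.get(form["customer_id"], 500)
    adjustment = (form["declared_income"] - 12 * form["monthly_spending"]) / 1000
    return int(max(0, min(1000, base + adjustment)))

def bank_decision(form: dict, credit_report: int, bank_records: dict) -> str:
    """The Bank weighs the CRA report with its own data into a creditworthiness score."""
    loyalty = bank_records.get(form["customer_id"], 0)   # eg years as a customer
    creditworthiness = 0.8 * credit_report + 20 * loyalty
    if creditworthiness < 400:           # below threshold: solely automated rejection
        return "REJECTED (solely automated; Article 22 safeguards apply)"
    return "FORWARDED to a Bank employee for human review"

form = {"customer_id": "c-42", "declared_income": 30000, "monthly_spending": 2600}
report = cra_credit_report(form, cra_records={"c-42": 430})
print(report, bank_decision(form, report, bank_records={}))
```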

PLEAD is able to generate explanations that cover these obligations even before the Bank issues an automated decision. Regarding the calculation of the report, the provenance trail has captured the data sources that were used and the exact values drawn from these sources. As a result, PLEAD is able to construct explanations that go beyond listing the data sources to explaining which precise values impacted the result. By extension, counterfactual explanations showing how different values would have changed the result can also be constructed. Counterfactuals are valuable when addressing data subjects because they can reveal the 'why' behind a decision:112 a counterfactual explanation shows why the decision was chosen over an alternative by contrasting the hypothetical outputs.113 Regarding the verification of the results, details about the processing performed within the decision-making pipeline can be documented by the processing application(s) in the provenance trail, allowing later reporting on the precision of the process.
109 Whether this is a direct obligation under Art 22 of the GDPR or one arising out of the contractual obligation between the CRA and the Bank is a matter of interpretation. See section II.A.
110 ICO, Guidance on automated decision making and profiling (2018), 19, https://ico.org.uk/media/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling-1-1.pdf. See also Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 31: 'the categories of data that have been or will be used in the profiling or decision-making process; why these categories are considered pertinent; how any profile used in the automated decision-making process is built, including any statistics used in the analysis; why this profile is relevant to the automated decision-making process; and how it is used for a decision concerning the data subject.'
111 ibid.
112 Wachter, Mittelstadt and Russell, 'Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR.'
113 Miller has shown that in practice humans understand cause not based on dependence, ie when event C is always followed by event D, but based on counterfactuals, ie relative to an imagined alternative case. So, the observation of the co-occurrence of events C and D produces no meaningful causal information. Instead, D is caused by C if event D would not occur unless C occurred. Tim Miller, 'Explanation in artificial intelligence: Insights from the social sciences,' Artificial Intelligence 267 (2019), https://doi.org/10.1016/j.artint.2018.07.007.

Therefore, explanations regarding the verification of the results are possible. In addition, details such as the date and time of the data sources can provide information as to the accuracy of the data. If the decision has been reviewed by a human, details about the time of the review and the identity of the reviewer (eg their employee number) can be captured. These additional details are valuable for increasing trust in the process and in case the Customer wishes to appeal the decision. Similar individualised explanations can be generated on behalf of the Bank. One of the benefits of common rules of explanation generation is that these explanations can be constructed on the fly, at the point of delivery of the decision. This means that the CRA will be able to forward the relevant provenance to the Bank along with the credit report. The Bank will be able to construct all explanations about the CRA using its own Explanation Assistant. A customer who wishes to understand the decision better will be able to access both sets of explanations from the Bank, satisfying critiques that explanations from different entities about the same process should be combined. Because the explanations are machine-constructed, hence machine-readable, they can also be queried. It is possible, therefore, to present the Customer with the explanations that the Bank considers most relevant, but to allow them to query the data. Introducing an element of interactivity prevents customers from being overwhelmed but still allows them to further question the provided explanations to deepen their understanding. PLEAD is able to construct explanations for other needs as well. For example, by capturing provenance data about the published privacy policy it can construct explanations about the CRA's notification obligations. These explanations are useful to demonstrate accountability to the supervisory authority and for auditing and reporting purposes within the company.114 In other words, PLEAD is capable of constructing modular explanations for different purposes and different audiences. In addition, similar explanations can be constructed for any process that can be depicted in provenance. As a result, PLEAD's explanations are capable of documenting not only the processes that take place within the decision-making pipeline but also wider system processes such as, for example, the training processes for the decision-making pipeline.

114 Observing WP29's guidance on 'clear and comprehensive ways to deliver the information' for 'algorithmic auditing – testing the algorithms used and developed by machine learning systems to prove that they are actually performing as intended, and not producing discriminatory, erroneous or unjustified results; provide the auditor with all necessary information about how the algorithm or machine learning system works'. Article 29 Data Protection Working Party, Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, 31.

Such explanations can answer questions on the bias and fairness of the process and address the call for better 'reviewability' of automated decision-making systems. Additionally, because provenance is recorded throughout all processes of the system, PLEAD is able to construct explanations before a decision has been taken. This is important since it allows the generated explanations to demonstrate compliance for actions in the system that do not necessarily constitute decisions,115 eg to show compliance with the 'right to information' before processing begins. Because the Explanation Assistant is conceived as a separate sub-system, it is agnostic as to the architecture of the system. An organisation wishing to incorporate the Explanation Assistant into its system only has to determine which provenance data to capture according to the provenance vocabulary determined by the Assistant. As a result, organisations are at liberty to determine how the Explanation Assistant will be implemented and to calibrate how the explanations will be communicated to their stakeholders.
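For the counterfactual explanations mentioned above, a hedged sketch of how one might be computed for the loan scenario follows: search for the smallest change to the credit report that would flip the outcome. The scoring rule and the threshold reuse the illustrative numbers from the earlier pipeline sketch and are not PLEAD's actual model.

```python
# A hedged sketch of a threshold counterfactual, reusing the illustrative
# scoring rule from the pipeline sketch above.

def creditworthiness(credit_report: int, loyalty: int = 0) -> float:
    return 0.8 * credit_report + 20 * loyalty

def counterfactual_report(current_report: int, threshold: float = 400):
    """Find the nearest higher credit report value that would cross the threshold."""
    for candidate in range(current_report, 1001):
        if creditworthiness(candidate) >= threshold:
            return candidate
    return None  # no attainable report value flips the decision

current = 350
needed = counterfactual_report(current)
print(f"Refused with a credit report of {current}; a report of {needed} "
      f"would have crossed the approval threshold.")   # needed == 500
```

Contrasting the hypothetical output ('a report of 500 would have been approved') with the actual one is what, following Miller, gives the data subject a usable sense of the 'why' behind the refusal.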

E.  Remaining Challenges

Although the project is still in progress, a series of challenges must be acknowledged. First, although provenance offers flexibility, since rules to record provenance can easily be adapted for most processing operations, provenance-based explanations by definition require the capture of provenance-related information. Yet there will be a number of cases in which provenance-related information cannot be recorded, for example before the processing begins. In some of these cases, it is possible to overcome the limitation through the use of proxies. For example, generating explanations to substitute or complement the information that should be provided under Article 13 of the GDPR,116 which is typically contained in privacy policies, is nearly impossible. In order to overcome this, it is possible to capture provenance about the publication of, and updates to, privacy policies. Explanations can then be created, referring to such provenance, to infer that the data subjects have had knowledge of the privacy policy's contents. The use of proxies will not be possible in every conceivable case, however, and will limit the meaningfulness of explanations. In relation to the above, 'tagging' provenance trails with specific provenance types117 entails that organisations will have to enrich the types of metadata they utilise for correct provenance recording. As an example, in order to capture provenance about the processing purpose, the data controller will have to inject a new entity type, 'type:Purpose'. Because the Explanation Assistant could be used in different fields, formulating in advance an exhaustive list of metadata is impossible.

115 In the narrow definition of the decision as the outcome of an AI process; see section II.
116 'Information to be provided to the data subject' GDPR Art 13.
117 See III.B above.

Hence, organisations will have to decide whether to introduce new metadata depending on their needs. In deep learning systems and neural networks, where the algorithmic processing is inherently opaque,118 determining the factors that have influenced a decision is extremely difficult.119 Since PLEAD does not aim to explain the processes inside the black box, but rather to provide explanations about the link between specific inputs and outputs, it relies on advances in XAI for black-box explainability. Attempts at extracting explanations from the inner algorithms120 are actively explored, and once such progress has been made PLEAD will be able to incorporate such data in its explanations. In the meantime, systems taking advantage of deep learning and neural networks are still able to document all the inputs and outputs of their black boxes. In these cases, PLEAD is still able to construct explanations about the decision-making process without needing to look inside the black box. In addition, it is unlikely that PLEAD will be able to produce holistic explanations that can substitute a human explanation in every case. PLEAD's main contribution is in empowering humans (eg an employee of the data controller) to provide better explanations by offering relevant and meaningful information about the decision-making process. Finally, and perhaps most importantly, the effectiveness of a legally-grounded explanation is conditional upon the precision of the underlying legal concept. As a result, the generation of explanations to meet certain obligations might prove difficult. For example, explanations relating to fairness are challenging to generate because fairness does not always translate to provenance data. PLEAD may assist in demonstrating compliance with procedural fairness,121 with explanations that justify the processing's timeliness, transparency and absence of legally sanctioned discrimination.122 Fairness, however, also encompasses elements of 'fair balancing'. According to the ICO, data controllers should balance the effects of processing against the expectations of individuals. Such balancing exercises will not be captured by provenance and, thus, cannot be translated into PLEAD explanations.
118 Andreas Holzinger, André Carrington and Heimo Müller, 'Measuring the Quality of Explanations: The System Causability Scale (SCS),' KI – Künstliche Intelligenz 34 (2020): 193, 194.
119 Gabriëlle Ras, Marcel van Gerven and Pim Haselager, 'Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges' in Explainable and Interpretable Models in Computer Vision and Machine Learning, eds HJ Escalante and others (Springer International Publishing 2018).
120 See, eg: Gabriele Ciravegna and others, 'Human-Driven FOL Explanations of Deep Learning' (IJCAI-PRICAI 2020 – 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence).
121 Procedural fairness refers to the obligations of the data controller under the GDPR in relation to the timeliness of processing, the transparency of the processing operations and the controller's burden of care towards the data subjects. In contrast, fairness as 'fair balancing' refers to the controllers' obligations to justify the proportionality and necessity of the processing. See Damian Clifford and Jef Ausloos, 'Data Protection and the Role of Fairness,' Yearbook of European Law 37 (2018): 130, 179.
122 Gianclaudio Malgieri, 'The concept of fairness in the GDPR: a linguistic and contextual interpretation' (Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency), 163: 'the protection of individual vulnerabilities and the prevention of vulnerability exploitation, as consequences of significant imbalance between individuals'.

PLEAD is currently in the process of refining its methodology and developing the Explanation Assistant. Key explanation requirements for three selected use cases have been identified and analysed according to a classification framework based on a long list of categories, such as audience, purpose, format, timing, etc. PLEAD's next steps will be to apply the socio-technical specification to simulated decision pipelines constructed from sample data provided by project partners. PLEAD will then be able to test the explanation generation for the selected use cases and assess their compliance and effectiveness.

IV. Conclusion

Automatic explanation generation has been explored in prior work as a means to empower data subjects against algorithmic bias, discrimination and unfairness. However, explanations can also be used as internal detective controls and help put data controllers in a position to demonstrate compliance before the receipt of a data subject request, and even before and beyond the taking of socially sensitive automated decisions. Critics of current XAI approaches nonetheless note that the effectiveness of computable explanations suffers from a lack of modularity, interactivity and detail. PLEAD proposes to overcome these limitations by designing computable explanations that are legally grounded and provenance driven. Tracking the full provenance of decisions increases the traceability of the decision-making pipeline as part of an overall strategy for the management of decision-making. Specifically, using PLEAD's socio-technical specification, organisations that perform automated decision-making should be able to build explainable-by-design socio-technical decision-making systems. Explanations generated by PLEAD's Explanation Assistant are implemented as early as possible from the design stage and are able to address the needs of different audiences. This chapter has demonstrated the added value of computable explanations based on provenance. By analysing a loan application scenario, it has shown that PLEAD explanations are able to address different groups, can offer individualised justifications, can be interactive and expandable to increase understanding, and can be used to demonstrate compliance with a variety of obligations. Further, because they are designed to be technology agnostic, PLEAD explanations can be deployed in a configuration that suits the organisation in question, and can serve as standalone explanations for individuals impacted by decisions or as a source of detailed information for the employees of the organisation. While challenges related to explanation integration and the descriptive capabilities of provenance remain, PLEAD shows that computing explanations can benefit a wide range of organisations relying upon complex decision-making processing and seeking to scale their compliance strategies.


References

Algorithm Watch, and Bertelsmann Stiftung. Automating Society: Taking Stock of Automated Decision Making in the EU. (2019). https://algorithmwatch.org/wp-content/uploads/2019/01/Automating_Society_Report_2019.pdf.
Article 29 Data Protection Working Party. Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. (2018). https://ec.europa.eu/newsroom/article29/document.cfm?action=display&doc_id=49826.
Barredo Arrieta, Alejandro, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, et al. 'Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.' Information Fusion 58 (2020): 82–115. https://doi.org/10.1016/j.inffus.2019.12.012.
Beckers, Kristian, Stephan Faßbender, Maritta Heisel, and Rene Meis. A Problem-Based Approach for Computer-Aided Privacy Threat Identification. Berlin, Heidelberg, 2014.
Biran, Or, and Courtenay Cotton. Explanation and Justification in Machine Learning: A Survey. (2017).
Brown, Daniel. '2 – The limits of human and automated decision-making.' In Mastering Information Retrieval and Probabilistic Decision Intelligence Technology, edited by Daniel Brown, 17–25. Chandos Publishing, 2004.
Burrell, Jenna. 'How the machine "thinks": Understanding opacity in machine learning algorithms.' Big Data & Society 3, no. 1 (2016). https://doi.org/10.1177/2053951715622512.
Casey, Bryan, Ashkon Farhangi, and Roland Vogl. 'Rethinking Explainable Machines: The GDPR's "Right to Explanation" Debate and the Rise of Algorithmic Audits in Enterprise.' Berkeley Technology Law Journal 34 (2018).
Castelluccia, Claude, and Daniel Le Métayer. Understanding algorithmic decision-making: Opportunities and challenges. (2019). www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf.
Ciravegna, Gabriele, and others. 'Human-Driven FOL Explanations of Deep Learning.' IJCAI-PRICAI 2020 – 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence.
Clifford, Damian, and Jef Ausloos. 'Data Protection and the Role of Fairness.' Yearbook of European Law 37 (2018): 130.
Cobbe, Jennifer. 'Administrative law and the machines of government: judicial review of automated public-sector decision-making.' Legal Studies 39, no. 4 (2019): 636–55. https://doi.org/10.1017/lst.2019.9.
Cobbe, Jennifer, and Jatinder Singh. 'Reviewable Automated Decision-Making.' Computer Law & Security Review 39 (2020). https://doi.org/10.1016/j.clsr.2020.105475.
Collins, Anna, Daniele Magazzeni, and Simon Parsons. 'Towards an Argumentation-Based Approach to Explainable Planning.' Paper presented at the 2nd ICAPS Workshop on Explainable Planning, Berkeley, CA, 2019.
Corrales, Marcelo, Paulius Jurčys, and George Kousiouris. 'Smart Contracts and Smart Disclosure: Coding a GDPR Compliance Framework.' In Legal Tech, Smart Contracts

154  Niko Tsakalakis et al and Blockchain, edited by Marcelo Corrales, Mark Fenwick and Helena Haapio, 189–220. Singapore: Springer Singapore, 2019. Deng, Mina, Kim Wuyts, Riccardo Scandariato, Bart Preneel, and Wouter Joosen. ‘A privacy threat analysis framework: supporting the elicitation and fulfillment of privacy requirements.’ Requirements Engineering 16, no. 1 (2011/03/01 2011): 3–32. https://doi. org/10.1007/s00766-010-0115-7. https://doi.org/10.1007/s00766-010-0115-7. De Vos, Marina, Sabrina Kirrane, Julian Padget, and Ken Satoh. ‘ODRL Policy Modelling and Compliance Checking.’ Cham, 2019. Edwards, Lilian, and Michael Veale. ‘Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For.’ Duke Law & Technology Review 16 (2017): 18. https://doi.org/http://dx.doi.org/10.2139/ssrn.2972855. European Data Protection Supervisor (EDPS). Accountability on the ground: Guidance on documenting processing operations for EU institutions, bodies and agencies – Summary. (2019). https://edps.europa.eu/sites/edp/files/publication/19-07-17_summary_ accountability_guidelines_en.pdf. Gatt, Albert, and Ehud Reiter. ‘SimpleNLG: A Realisation Engine for Practical Applications.’ Paper presented at the Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009), Athens, Greece, March 2009. Gillis, Talia B., and Joshua Simons. ‘Explanation < Justification: GDPR and the Perils of Privacy.’ Pennsylvania Journal of Law and Innovation 2 (2019): 71. Goodman, Bryce, and Seth Flaxman. ‘European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation.’.’ AI Magazine 38, no. 3 (2017): 50–57. Hind, Michael. ‘Explaining explainable AI.’ XRDS 25, no. 3 (2019): 16–19. https://doi. org/10.1145/3313096. https://doi.org/10.1145/3313096. Holzinger A, Carrington A and Müller H, ‘Measuring the Quality of Explanations: The System Causability Scale (SCS).’ KI – Künstliche Intelligenz 34 (2020):193. Holzinger, Andreas, Georg Langs, Helmut Denk, Kurt Zatloukal, and Heimo Müller. ‘Causability and explainability of artificial intelligence in medicine.’ WIREs Data Mining and Knowledge Discovery 9, no. 4 (2019): e1312. https://doi.org/10.1002/widm.1312. IBM, and Morning Consult. From Roadblock to Scale: The Global Sprint Towards AI. (2020). http://filecache.mediaroom.com/mr5mr_ibmnews/183710/Roadblock-to-Scaleexec-summary.pdf. ICO. Automated decision-making and profiling. (2018). https://ico.org.uk/ media/for-organisations/guide-to-data-protection/guide-to-the-general-dat a-protection-regulation-gdpr/automated-decision-making-and-profiling-1-1.pdf. ——. Explaining decisions made with AI – Part 1: The basics of explaining AI. (2019). https:// ico.org.uk/media/2616434/explaining-ai-decisions-part-1.pdf. ——. Explaining decisions made with AI – Part 2: Explaining AI in practice. (2020). https:// ico.org.uk/media/about-the-ico/consultations/2616433/explaining-ai-decisionspart-2.pdf. ——. Guidance on automated decision making and profiling. (2018). https://ico.org. uk/media/for-organisations/guide-to-data-protection/guide-to-the-general-dat a-protection-regulation-gdpr/automated-decision-making-and-profiling-1-1.pdf. ——. A guide to ICO audits (2018). https://ico.org.uk/media/for-organisations/ documents/2787/guide-to-data-protection-audits.pdf. Kaminski, Margot E., and Gianclaudio Malgieri. ‘Multi-layered explanations from algorithmic impact assessments in the GDPR.’ Proceedings of the 2020 Conference on

The Dual Function of Explanations  155 Fairness, Accountability, and Transparency, Barcelona, Spain, Association for Computing Machinery, 2020. Kasse, John Paul, Lai Xu, Paul deVrieze, and Yuewei Bai. The Need for Compliance Verification in Collaborative Business Processes. Cham, 2018. Labadie, Clément, and Christine Legner. ‘Understanding Data Protection Regulations from a Data Management Perspective: A Capability-Based Approach to EU-GDPR’ Paper presented at the 14th International Conference on Wirtschaftsinformatik, Siegen, Germany 24–27 February 2019. Lowry, S., and G. Macpherson. ‘A blot on the profession.’ [In eng]. British medical journal (Clinical research ed.) 296, no. 6623 (1988): 657–58. https://doi.org/10.1136/ bmj.296.6623.657. Majdalawi, Yousef Kh., and Faten Hamad. ‘Security, Privacy Risks and Challenges that Face Using of Cloud Computing.’ International Journal of Advanced Science and Technology 13, no. 3 (2019): 156–65. Malgieri G, ‘The concept of fairness in the GDPR: a linguistic and contextual interpretation’ (Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency). Malgieri, Gianclaudio, and Giovanni Comandé. ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation.’ International Data Privacy Law 7, no. 4 (2017): 243–65. https://doi.org/10.1093/idpl/ipx019. https://doi. org/10.1093/idpl/ipx019. Martin, Y., and A. Kung. ‘Methods and Tools for GDPR Compliance Through Privacy and Data Protection Engineering.’ Paper presented at the 2018 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 23–27 April 2018 2018. Miller, Tim. ‘Explanation in artificial intelligence: Insights from the social sciences.’ Artificial Intelligence 267 (2019/02/01/ 2019): 1–38. https://doi.org/https://doi.org/10.1016/ j.artint.2018.07.007. Moerel L and Storm M, ‘Automated decisions based on profiling: Information, explanation or justification That is the question!’ In Autonomous systems and the law, edited by NE Aggerwal, Horst Enriques, Luca Payne, Jennifer van Zwieten, Kristin (Verlag C.H. Beck 2019). Montavon, Grégoire, Wojciech Samek, and Klaus-Robert Müller. ‘Methods for interpreting and understanding deep neural networks.’ Digital Signal Processing 73 (2018/02/01/ 2018): 1–15. https://doi.org/https://doi.org/10.1016/j.dsp.2017.10.011. Moreau, Luc, and Paolo Missier. PROV-DM: The PROV Data Model. (2013). www.w3.org/ TR/2013/REC-prov-dm-20130430/. Notario, N., A. Crespo, Y. S. Martin, J. M. Del Alamo, D. L. Metayer, T. Antignac, A. Kung, I. Kroener, and D. Wright. ‘PRIPARE: Integrating Privacy Best Practices into a Privacy Engineering Methodology.’ Paper presented at the 2015 IEEE Security and Privacy Workshops, 21–22 May 2015. Oswald, Marion. ‘Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power.’ Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2128 (2018): 20170359. https://doi.org/doi:10.1098/rsta.2017.0359. Preece, Alun. ‘Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges.’ Intelligent Systems in Accounting, Finance and Management 25, no. 2 (2018): 63–72. https://doi.org/10.1002/isaf.1422. ——. ‘Asking ‘Why’ in AI: Explainability of intelligent systems – perspectives and challenges.’ Intelligent Systems in Accounting, Finance and Management 25 (2018): 63.

156  Niko Tsakalakis et al Ras G, van Gerven M and Haselager P, ‘Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges.’ In Explainable and Interpretable Models in Computer Vision and Machine Learning, edited by HJ Escalante and others, (Springer International Publishing 2018). Reisman D, Schultz J, Crawford K and Whittaker M, ‘Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability.’ (AI Now, April 2018). Ribera, Mireia, and Àgata Lapedriza. ‘Can we do better explanations? A proposal of user-centered explainable AI.’ Paper presented at the Joint Proceedings of the ACM IUI 2019 Workshops, Los Angeles, USA, 20 March 2019. Rosenfeld, Avi, and Ariella Richardson. ‘Explainability in human–agent systems.’ Autonomous Agents and Multi-Agent Systems 33, no. 6 (2019): 673–705. https://doi. org/10.1007/s10458-019-09408-y. Sachan, Swati, Jian-Bo Yang, Dong-Ling Xu, David Eraso Benavides, and Yang Li. ‘An explainable AI decision-support-system to automate loan underwriting.’ Expert Systems with Applications 144 (2020/04/15/ 2020): 113100. https://doi.org/https://doi.org/10.1016/ j.eswa.2019.113100. www.sciencedirect.com/science/article/pii/S0957417419308176. Sample, Ian. ‘AI watchdog needed to regulate automated decision-making, say experts.’ The Guardian, 27 January 2019. www.theguardian.com/technology/2017/jan/27/ai-artificia l-intelligence-watchdog-needed-to-prevent-discriminatory-automated-decisions. Satell, Greg, and Josh Sutton ‘We Need AI That Is Explainable, Auditable, and Transparent.’ Harvard Business Review, Updated 28 October, 2019, accessed 5 September, 2020, https:// hbr.org/2019/10/we-need-ai-that-is-explainable-auditable-and-transparent. Selbst, Andrew D, and Julia Powles. ‘Meaningful information and the right to explanation.’ International Data Privacy Law 7, no. 4 (2017): 233–42. https://doi.org/10.1093/idpl/ ipx022. https://doi.org/10.1093/idpl/ipx022. Singh, Jatinder, Jennifer Cobbe, and Chris Norval. ‘Decision Provenance: Harnessing Data Flow for Accountable Systems.’ IEEE Access 7 (2019): 6562–74. Turque, Bill. ‘Creative … motivating’ and fired.’ The Washington Post, 6 March 2012. Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation.’ International Data Privacy Law 7, no. 2 (2017): 76–99. https://doi.org/10.1093/idpl/ ipx005. Wachter, Sandra, Brent Mittelstadt, and Chris Russell. ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR.’ [In eng]. Harvard Journal of Law & Technology (Harvard JOLT) 31, no. 2 (2017–2018): 841–88. 888. Waldman, Ari Ezra. ‘Power, Process, and Automated Decision-Making.’ Fordham Law Review 88 (2019): 613.

6
COVID-19 Pandemic and GDPR: When Scientific Research becomes a Component of Public Deliberation

LUDOVICA PASERI1

Abstract

The chapter carries out a critical assessment of the impact of the General Data Protection Regulation (GDPR) on the management, dissemination and re-use of research data, as the basis of public deliberation in the scenario outlined by the COVID-19 pandemic. In this emergency situation, data have been fundamental and crucial in the life of citizens, and for institutions and politics as well. The Open Science projects presented by the European institutions seem to endorse the idea that scientific knowledge should be at the basis of public deliberation: such projects set up a model of openness, re-use and sharing of data relevant to both European researchers and politics. Correspondingly, the focus of this chapter is on the compliance of these Open Science projects with the provisions of the GDPR and, in particular, the 'exceptions' of Article 89, in order to determine whether and to what extent scientific research forms the basis of public deliberation, and thus a prerequisite for political decision.

Keywords

Scientific research, GDPR, COVID-19 pandemic, public deliberation, open science, European institutions, European Open Science Cloud, data protection.

I. Introduction

The year 2020 will be remembered in history as the year of the COVID-19 pandemic emergency: the crisis has pushed – if not forced – society to become aware of a changing world and has clearly shown us what the philosopher Luciano Floridi describes in terms of the infosphere,2 namely the informational environment in which we live and act as informational entities, among other informational entities, human and non-human.3

1 CIRSFID, University of Bologna, Via Galliera 3, 40121 Bologna, Italy: [email protected].

Our reality has already become onlife, meaning that the boundary between online and offline tends to blur, breaking down the barriers between real and virtual:4 performing work tasks away from our offices, meeting friends on online platforms and making the majority of our purchases remotely, for example, are not an option but the only way to carry out activities that would otherwise be impossible.5 In this scenario, the primary role played by data is undeniable. Data are increasingly at the centre of our lives: for instance, the data-driven economy, ie the economy based on the analysis of big data, might – if transformed from roadmaps into real agendas6 – be considered the new frontier of the economy; data relating to the development of the virus have been fundamental7 and at the core of every policy choice made by nation states during the COVID-19 pandemic; and the protection of individuals' personal data, a fundamental right enshrined in Article 8 of the Charter of Fundamental Rights of the European Union,8 has increasingly become a concrete need perceived by every human being in everyday life, as well as the heart of many debates on tracking apps during the last year.9

It can, therefore, be said that data have represented the most important resource in the hands of institutions and politics in fighting the COVID-19 pandemic. But is this picture completely accurate? The link between data collection and political decision making is complex. Data are collected for scientific research purposes and, subsequently, manipulated and aggregated with the aim of establishing scientific evidence. This scientific evidence is then the basis for the elaboration of scientific knowledge, and this scientific knowledge should in turn provide the foundation for the construction of a public deliberation:10 the latter refers to a space for critical reflection and argumentation, and represents the sphere of public discussion.

2 Luciano Floridi, The fourth revolution: How the infosphere is reshaping human reality (Oxford: OUP, 2014), 99–100, iBooks.
3 Luciano Floridi, Information: A Very Short Introduction (Oxford: OUP, 2014), 13.
4 Luciano Floridi, The onlife manifesto: Being human in a hyperconnected era (Cham: Springer Nature, 2015), 99.
5 In this regard, the OECD, in its 2020 report, explicitly underlined the crucial role played by Internet access, also highlighting that the gap in the use of the Internet and technology still persists considerably, both among the different G20 countries and within them, between different categories of society, as shown in Figure 2.2, p 13. 'Roadmap toward a Common Framework for Measuring the Digital Economy', Report for the G20 Digital Economy Task Force, OECD, 2020, www.oecd.org/sti/roadmap-toward-a-common-framework-for-measuring-the-digital-economy.pdf.
6 José María Cavanillas, Edward Curry, Wolfgang Wahlster, New horizons for a data-driven economy: a roadmap for usage and exploitation of big data in Europe (Cham: Springer Nature, 2016), 290.
7 Chenghu Zhou, et al., 'COVID-19: Challenges to GIS with big data.' Geography and Sustainability 1 (2020): 77–87, doi:10.1016/j.geosus.2020.03.005.
8 Charter of Fundamental Rights of the European Union, [2012] OJ C326, 391–407, ELI: http://data.europa.eu/eli/treaty/char_2012/oj.
9 On data protection and user acceptability aspects of tracing apps see, eg: Johannes Abeler, et al., 'COVID-19 contact tracing and data protection can go together.' JMIR mHealth and uHealth 8.4 (2020): 1–5, doi:10.2196/19359; for a first overview of the various types of tracing apps see: Nadeem Ahmed, et al., 'A survey of COVID-19 contact tracing apps.' IEEE Access 8 (2020): 134577–134601, doi:10.1109/ACCESS.2020.3010226.

Drawing on this space of critical debate, the task of politics will thus be to identify a specific political decision. The data collected for scientific purposes – data which, in some instances, can represent the premise for public deliberation – necessarily include personal data, and this is even truer in relation to the COVID-19 pandemic. This emergency scenario has put the issue of data collection for scientific purposes, and of compliance with the GDPR,11 in the spotlight.

Accordingly, this chapter aims to conduct a critical assessment of the impact of the GDPR on the management, dissemination and re-use of research data, representing the basis for public deliberation, in the scenario outlined by the COVID-19 pandemic. The gist of the GDPR is, indeed, crystal clear in the field of scientific research: Article 1 of the GDPR, on the one hand, aims to protect the fundamental rights and freedoms of individuals, in particular the right to the protection of personal data (paragraph 2) and, on the other hand, expressly states that 'the free movement of personal data within the Union shall be neither restricted nor prohibited for reasons connected with the protection of natural persons with regard to the processing of personal data' (paragraph 3). In addition, we may say that scientific research needs to foster the dissemination of data as much as possible in order to promote its own development, and yet it has to protect the personal data of the individuals involved. In the words of the former European Data Protection Supervisor (EDPS), Giovanni Buttarelli:

Only a couple of generations ago most Europeans suffered the effects upon categorised and identifiable individuals of information databases which, although initially populated for benign purposes, were put to the service of totalitarian regimes with catastrophic consequences. Safeguarding privacy and the protection of personal data are the means by which individuals in the age of hyper-connectivity can insure themselves against similarly unexpected consequences which are sadly inevitable.12

A few years later, it seems fair to admit that in the governance of data in general, and of data resulting from scientific research in particular, a special effort is required to strike a fair balance between the opposing interests at stake.

10 It should be pointed out, however, that the aim here is not to introduce the broad and intricate debate on the relationship between science and law, although this statement would seem to be in line with the philosopher Habermas' reflections on the concept of participatory democracy. See: Jürgen Habermas, Between facts and norms (Cambridge, Massachusetts: The MIT Press, 1996), 289–302. The intention here is merely to emphasise the peculiarities of the complex situation brought about by the fight against the SARS-CoV-2 virus: in this context – given the unprecedented problems – politics, through the use of data, has inevitably had to focus its attention on scientific knowledge.
11 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), [2016] OJ L119/1–88, ELI: http://data.europa.eu/eli/reg/2016/679/oj.
12 Giovanni Buttarelli, 'Privacy matters: updating human rights for the digital society.' Health Technology 7 (2017): 328, doi:10.1007/s12553-017-0198-y.

Accordingly, attention should be drawn to the role played by the European institutions in proposing a data governance capable of relaunching the economy and innovation while protecting the fundamental rights of individuals. In this regard, in January 2020 the European Commission issued a Communication entitled 'A European Strategy for Data'.13 The aim is to lay the foundations for a digital Europe that 'should reflect the best of Europe – open, fair, diverse, democratic, and confident'.14 In the conceptual and operational framework proposed by the European Commission for this strategy, particular attention is paid to the 'European Open Science Cloud' (EOSC). The EOSC was launched on 23 November 2018, in Vienna.15 It aims to set up a trusted environment that coordinates the Open Science projects implemented in recent years, in order to facilitate the sharing and re-use of scientific research, both in terms of research data and in terms of research results, namely publications.16

This chapter will initially untangle the epistemological facets of our analysis, which starts with the collection of data for scientific purposes, translated into scientific evidence as the basis for scientific knowledge; the focus is on whether such scientific knowledge should be understood as a prerequisite for public deliberation and political decisions, that is, a sort of interface between facts and politics (section II). Remarkably, over the past few years, lawmakers have intended to flesh out policies able to ensure that science is as open as possible: in this regard we will examine the EU Data Strategy (section III) and the crucial EOSC project, which is part of it (section III.A). On this basis, section IV dwells on two fundamental rights: the right to the free movement of knowledge, in accordance with the paradigm of Open Science, and the right to the protection of personal data. One of the main assumptions of this chapter is that such rights are not conflicting but, rather, complementary. As an illustration, consider the role that the sharing not only of data, but more generally of knowledge, has played in the initial and more turbulent period of the COVID-19 pandemic health emergency. In particular, attention is drawn to the role of a European Strategy for Data management – also from an infrastructural point of view – in connection with a recent ruling by the European Court of Justice (ECJ), on 16 July 2020, in the so-called Schrems II case.17

13 European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, A European strategy for data, COM/2020/66 final, ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0066. Hereinafter, reference to this strategy will be – equally – by the title 'European Strategy for Data' or by the phrase 'EU Data Strategy'.
14 European Commission, 'A European strategy for data', COM/2020/66, 1.
15 On the same occasion, The Vienna Declaration on the European Open Science Cloud was released: 'The Vienna Declaration on the European Open Science Cloud', European Union, 2018, https://eosclaunch.eu/fileadmin/user_upload/k_eosc_launch/EOSC_Vienna_Declaration_2018.pdf. For further in-depth examination of the content of the Vienna Declaration see: Paolo Budroni, Jean-Claude Burgelman and Michel Schouppe, 'Architectures of Knowledge: The European Open Science Cloud.' ABI Technik 39.2 (2019): 135–136, doi:10.1515/abitech-2019-2006.
16 Paul Ayris, et al., 'Realising the European Open Science Cloud', European Union Publications (2016): 6, doi:10.2777/940154.

This perspective casts light on some challenges specifically related to the protection of personal data in scientific research and the EOSC (section V). By stressing the merits and limits of today's European Open Science strategies, section VI draws the conclusions of the analysis, addressing the open questions concerning the protection of personal data in scientific research.

II. Scientific Knowledge: A Basis for Public Deliberation

In 2020, as never before, it was deeply understood that politics cannot be limited to communication: it must entail, first and foremost, public and democratic deliberation. And again, as never before, it has become clear how essential it is, in some circumstances, that such public deliberation should be based on technical-scientific knowledge. Our deliberative democracy, as pointed out by Boniolo,18 needs public deliberation as the basis for every operative decision, with the aim of making the latter rational:

Deliberation, then, does not imply merely talking about a problem, but rather the critical debate of all its aspects, the proposal of different solutions derived from different perspectives, and the careful evaluation of the supporting justifications of these solutions. Offering a justification is, indeed, the central stage of a rational debate: no position should be accepted solely on the basis that someone (whether an individual or a group) proposes it. It has to be justified too, that is, reasons, preferably good reasons, have to be offered in its support.19

If a political decision imposes measures restricting personal freedom, such as lockdowns or curfews, there is a fortiori a need for such decisions to be based on rational grounds.

A. From Research Data to Political Decision

In certain circumstances, such as those triggered by the COVID-19 pandemic and the political measures taken to deal with this emergency, scientific knowledge plays a key role as a sort of interface between factual evidence and political decisions.

17 Case C-311/18 Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems (2020) EUCJ, ECLI:EU:C:2020:559.
18 Giovanni Boniolo, The art of deliberating: democracy, deliberation and the life sciences between history and theory (Berlin, Heidelberg: Springer Science & Business Media, 2012), 155–171, doi:10.1007/978-3-642-31954-9.
19 Boniolo, The art of deliberating, 10.

The first step of the analysis thus has to do with the collection and processing of research data. A vast amount of data is indeed collected for scientific purposes, although data do not speak for themselves.20 It is, therefore, necessary to proceed to the analysis or manipulation of such data, in order to transform the raw data into scientific evidence. Scientific knowledge is built up through the identification of theories and the validation of models related to that evidence. What is relevant for this study is that, under certain circumstances, eg the COVID-19 crisis, scientific knowledge appears as the necessary precondition for democratic and political deliberation. Correspondingly, science takes the form of an interface between our current information about reality, that is, factual evidence, and the sphere of politics, ie of applicative decision making, or information for reality.21

In this study, the expressions 'scientific research' and 'scientific knowledge' are meant to identify very broad concepts, which ontologically are not suitable for easy definition. In general, knowledge is scientific if it represents the result of an investigation conducted according to the so-called 'scientific method'. This primarily means that scientific knowledge is – and should always be – characterised by verifiability: the results of the scientific method must be reproducible and verifiable. Nowadays, scientific knowledge no longer involves only the traditional scientific community. The private sector, ie companies, is also part of it and is involved in various guises: eg, direct participation in research projects, or implementation of technical solutions theoretically defined by the scientific community. These examples show how the boundaries between private and scientific research – as traditionally understood – are increasingly blurred. During the COVID-19 pandemic, a real convergence of spheres has taken place, and the development of vaccines has been emblematic in this direction.22

20 Despite the claims of those who, like Chris Anderson, not so many years ago, believed that it was enough to get a huge amount of data to make the results speak for themselves. See Chris Anderson, 'The end of theory: The data deluge makes the scientific method obsolete', Wired magazine, 2008, 1–3, www.wired.com/2008/06/pb-theory/. On this topic, on the worthlessness of data considered alone and on the concept of the computational power actually extractable from data processing, as a resource, see: Massimo Durante, Computational Power: The Impact of ICT on Law, Society and Knowledge (New York: Routledge, 2021), 50–75.
21 On the distinction between information as reality, for reality and about reality see: Floridi, Information: A Very Short Introduction, 65; and also, Ugo Pagallo and Massimo Durante, 'The philosophy of law in an information society,' in The Routledge Handbook of Philosophy of Information, ed. Luciano Floridi (New York: Routledge, 2016): 398–399; and, finally, Durante, Computational Power, 69–70.
22 The European lawmaker is fully aware of the breadth of the concept of 'scientific research': this emerges clearly in the data protection framework. See, in section II.B, the wording of Recital 159 GDPR.
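To make the step from raw data to scientific evidence tangible, consider a deliberately stylised example (the figures below are invented for illustration): raw daily case counts mean little on their own, but even a simple aggregation such as a seven-day moving average turns them into an evidential signal that a trend is rising or falling – the kind of processed evidence on which models, and ultimately deliberation, can be built.

# Stylised illustration: raw data -> scientific evidence. The daily counts
# below are invented; the point is the transformation, not the numbers.
raw_daily_cases = [120, 135, 160, 150, 180, 210, 240, 260, 300, 320, 350, 400]

def moving_average(series, window=7):
    """Smooth noisy daily counts into a trend signal."""
    return [round(sum(series[i - window + 1:i + 1]) / window, 1)
            for i in range(window - 1, len(series))]

trend = moving_average(raw_daily_cases)
growth = trend[-1] / trend[0]          # crude growth factor over the period

print('7-day averages:', trend)
print(f'Trend grew by a factor of {growth:.2f} -> evidence of an accelerating outbreak')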

Considering the process from research data to political decision, through the interface of scientific knowledge, three specifications are required.

First, we should stress that democratic deliberation cannot, and must never, be confused with political decisions: the latter can be understood as the reductio ad unum realised by those who are legitimated to take the resolution that is actually applied; deliberation, on the other hand, is a much broader process which, thanks to its nature, might be considered the cradle of public opinion, involving a debate among many different actors. Political decisions must necessarily maintain their autonomy. In the case of data-driven decisions, politics cannot be a mere empty container for data collected by others: it should represent a balance between several different interests as a whole, and not the consequence of a blind reading of data. Political decisions based on raw data can lead to a dangerously distorted representation of reality. Take the COVID-19 pandemic as an example: the datafication of science, in the most delicate period of the crisis, represented an obstacle to grasping the complexity of the situation, causing distortions in the understanding of reality.23

The second specification has to do with sensitive situations, such as the current global health emergency. If public deliberation were not based on scientific knowledge, we would end up with the behaviour that the Greek philosopher Plato – through the words of Socrates – already condemned in the rhetor Gorgias: persuasion based on belief without knowledge is simply an unvalidated opinion.24 On this aspect, the EDPS, in a recent Opinion on scientific research, stated: 'Scientific research serves a valuable function in a democratic society to hold powerful players to account […].'25 Accordingly, the 46th President of the US, Joe Biden, in a memorandum '[…] on restoring trust in government through scientific integrity and evidence-based policymaking', emphasised precisely the need for scientific research to be independent from political interference, but especially to support policy choices.26

The third specification concerns the type of scientific research the public authority is dealing with in public deliberation. Two possible options can be identified: on the one hand, scientific research may be developed at the request and under the mandate of the public body involved (eg, in relation to COVID-19, research developed on the basis of data resulting from tracking apps); on the other hand, scientific research freely conducted by the scientific community may be taken into account during the process of public deliberation (eg, the studies that made it possible to develop appropriate treatments for COVID-19).

23 Ugo Pagallo, 'Sovereigns, Viruses, and the Law', Law in Context: A Socio-legal Journal 37.1 (2020): 8, doi:10.26826/law-in-context.v37i1.117.
24 Plato, Gorgias (London: Penguin Classics, 2004), 454e–459d, 16–23; Giovanni Boniolo, Conoscere per vivere. Istruzioni per sopravvivere all'ignoranza (Milano: Meltemi, 2018), 120; the reasoning, however, is not dissimilar from that proposed by the philosopher Luciano Floridi when, at the basis of the so-called 'good ideas of politics', he identifies three factors, one of which is reasonableness, defined as ranging 'from common sense to logic, from the correct use of facts to probabilistic reasoning', Luciano Floridi, Il verde e il blu. Idee ingenue per migliorare la politica (Milano: Cortina Editore, 2020), 235.
25 European Data Protection Supervisor (EDPS), 'A Preliminary Opinion on data protection and scientific research' (January 2020), 1, https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf.
26 Joe R. Biden, 'Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking', The White House, 27 January 2021, www.whitehouse.gov/briefing-room/presidential-actions/2021/01/27/memorandum-on-restoring-trust-in-government-through-scientific-integrity-and-evidence-based-policymaking/.

In the context of public deliberation, in the first case, the actors hypothetically involved may have access to both the data and the results; in the second case, it depends on the type of material released, which may or may not include data-related reports.27

B. The Opposing Interests in Political Decisions

Political decisions always need to take into account a multiplicity of competing interests. In the current global crisis, while it is of primary importance to guarantee 'freedom from disease', a strong debate has emerged concerning the need to protect other fundamental rights: the right to the protection of personal data and, also, the freedoms underlying the GDPR, ie those freedoms of which data protection is a means, pursuant, for example, to Recital 75 of the Regulation. Here, a peculiar aspect of the scenario under investigation is crucial: the difference between the business sector and the field of scientific research. Consider the business models of big companies and so-called surveillance capitalism:28 there is considerable strain between the pursuit of the main goal of these companies – their economic freedoms – and the fundamental right to the protection of personal data as a means by which to protect other fundamental rights. By contrast, the field of scientific research, especially in COVID-19 times, raises a problem of its own: while science generally pursues the aim of helping the progress of society and innovation, nowadays science is also the means by which we tackle the disease. Against this backdrop, we may say that, contrary to the business sector and its data-driven core activities, scientific research should enjoy a greater degree of flexibility.

An illustration of this greater flexibility is the regime established by the GDPR for the processing of personal data for scientific research purposes. In the wording of Recital 159, 'the processing of personal data for scientific research purposes should be interpreted in a broad manner including for example technological development and demonstration, fundamental research, applied research and privately funded research'. Moreover, the text of the GDPR itself reflects the convergence and intertwining of the public and private sectors in the scientific domain, a convergence empirically revealed during the COVID-19 pandemic. The European institutions seem to be particularly aware of these dynamics, considering the data governance projects – and specifically those concerning research data – promoted in recent months.

27 Although the legal regime for the protection of personal data may vary according to the type of material involved in public deliberation, this study is limited to issues related to the protection of personal data in processing for scientific research purposes in an Open Science scenario, as described in section III.
28 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019); 'Out of Control: Failing EU Laws for Digital Surveillance Export', Amnesty International, 21 September 2020, www.amnesty.org/en/documents/eur01/2556/2020/en/.

It is therefore worth examining these projects: the intention is to investigate how the compliance of scientific research with the GDPR fits into the complex scenario of data governance projects at the European level – a scenario made even more complex by the COVID-19 pandemic currently being faced.

III. A European Strategy for Data: The Open Science Institutionalisation

In light of the mechanism described in section II, according to which, in some instances, public deliberation can (or even should) be rooted in scientific knowledge, it is now time to review the European Commission's European Strategy for Data.29 This strategy is interpreted here as an institutionalisation of Open Science: where public deliberation hinges on scientific knowledge, science must be as open as possible, in order to allow for the richest and most comprehensive debate. The idea of scientific research being as open as possible can therefore be understood as an expression of the Open Science paradigm.

But let us first explore the scenario in which the EU Data Strategy was conceived. A recent World Economic Forum study shows that, in just one minute, '41,666,667 WhatsApp messages are sent, 1,388,889 video and voice calls are made, 69,444 people apply for jobs on LinkedIn and TikTok is downloaded 2,704 times'.30 Likewise, consider the field of e-commerce: already growing steadily over the last few years, this field is now booming, so much so that every minute €1 million is spent buying online, and Amazon sends 6,659 packages to meet the unceasing market demand.31 The data collected and produced are of the most varied and disparate types and, above all, represent what scholars dub 'datafication',32 ie the way reality has been transformed, described and characterised by such 'Big Data'. In this scenario, it is relevant to underline that data governance is a matter of power. The control of data33 involves a multiplicity of actors: states, companies, universities, public administrations and individuals.34 It is precisely against this backdrop of immense data availability that the European Commission conceived the European Strategy for Data, expressing its own direction of change and its future policies.

29 European Commission, 'A European strategy for data', COM/2020/66.
30 'Here's what happens every minute on the internet in 2020', World Economic Forum, last modified 21 September 2020, www.weforum.org/agenda/2020/09/internet-social-media-downloads-uploads-facebook-twitter-youtube-instagram-tiktok/.
31 ibid.
32 Lorena Elena Stanescu, Raluca Onufreiciuc, 'Some Reflections on "Datafication": Data Governance and Legal Challenges.' European Journal of Law and Public Administration 7.1 (2020): 100–115, doi:10.18662/eljpa/7.1/118.
33 By 'control of data' we refer to the statement of the philosopher Luciano Floridi: '[…] by "control" I mean here the ability to influence something (e.g. its occurrence, creation, or destructions) and its dynamics (e.g. its behaviour, development, operations, interactions), including the ability to check and correct for any deviation from such influence.' in Luciano Floridi, 'The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU.' Philosophy & Technology 33.3 (2020): 371, doi:10.1007/s13347-020-00423-6.
34 Data governance implies a multiplicity of actors and, therefore, a multiplicity of responsibilities. This clearly emerges from the provisions of the GDPR; indeed, perhaps this awareness is rooted in the distinction between privacy and data protection and can somehow be considered formalised, for the first time, with the introduction of the different national data protection authorities.

Starting from the assumption that data-driven innovation has the potential to bring enormous benefits to individuals, the Strategy makes it clear that the plan of action is only possible in accordance with fundamental rights and European values.35 In addition, the European Commission makes explicit reference, from the very beginning of its strategy, to the environmental issue. The approach leaves room for a certain cautious optimism in identifying a possible synergy between the EU Data Strategy and the European Green Deal.36 On this basis, one of the main objectives pursued by the European institutions is to create a large and composite European data space:

The infrastructures should support the creation of European data pools enabling Big Data analytics and machine learning, in a manner compliant with data protection legislation and competition law, allowing the emergence of data-driven ecosystems.

In other words, the European Union seems to propose a data governance model that promotes innovation and development as much as possible but, at the same time, is compliant with regulatory disciplines, first of all the discipline of data protection. This approach appears to be the same as the one already identified by the Digital Single Market Strategy of 2015,37 and has been confirmed by the European Commission's 2020 Communication on 'Shaping Europe's Digital Future'.38 As regards its structure, the EU Data Strategy identifies the main issues that limit the full exploitation of the potential of the data economy.39 Against such issues, the European Commission describes the four pillars on which the Strategy is based:

(1) The first pillar, 'A cross-sectoral governance framework for data access and use', expresses the European intention to prefer a flexible governance, able to consider the multiple actors in the field, but always aimed at reducing the fragmentation of disciplines between the different countries.

35 European Commission, 'A European strategy for data', COM/2020/66, 1.
36 European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, The European Green Deal, COM/2019/640 final, ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52019DC0640, presented on 11 December 2019, similar to what has been proposed in the US with the 'Green New Deal' presented on 7 February 2019: www.congress.gov/bill/116th-congress/house-resolution/109/text.
37 European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, A Digital Single Market Strategy for Europe, COM/2015/0192 final, ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52015DC0192.
38 European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Shaping Europe's digital future, COM/2020/67 final, ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2020:67:FIN.
39 The identified problems are: availability of data; imbalances in market power; data interoperability and quality; data governance; data infrastructures and technologies; empowering individuals to exercise their rights; skills and data literacy; and cybersecurity.

(2) The second pillar, 'Enablers', consists of a series of investments strengthening the European infrastructures necessary for data processing: that is, a reinforcement of existing computing capacities (such as High-Performance Computing, HPC) and the promotion of federated-model initiatives (such as the EOSC, for research data, or Gaia-X, defined as a European cloud for industries40).

(3) The third pillar, 'Competences', aims to strengthen the control of individuals over their own data, invest in digital competences training, and relaunch initiatives to support SMEs and start-ups.

(4) Finally, the fourth pillar is dedicated to 'Common European data spaces in strategic sectors and domains of public interest', as a complementary initiative to the cross-sectoral governance framework.

It should be stressed that the final part of the Strategy is dedicated to the EOSC, the aforementioned federation of infrastructures aimed at the dissemination of research data, primarily among European researchers and universities. It is relevant to underline the connection between the EOSC and the fourth pillar of the Strategy: the EOSC is explicitly represented as an example to follow in the implementation of the European Common Data Spaces in strategic sectors. The EOSC, as discussed in the next section, is the result of a shared effort between the academic community and the institutions, which has been ongoing for a few years now:41 this experience is intended as a model for data sharing and re-use in the implementation of the European Common Data Spaces. The European institutions specifically intend to continue investing in the sharing and re-use of research data, an aspect worthy of further discussion.

A. The European Open Science Cloud – EOSC

The European Open Science Cloud is not a traditional cloud infrastructure in the proper sense, but rather a federated environment of trusted and certified repositories, which aims to disseminate data resulting from scientific research in accordance with the principle 'as open as possible, as closed as necessary'.42

40 Neal Kushwaha, Przemysław Roguski, Bruce W. Watson, 'Up in the Air: Ensuring Government Data Sovereignty in the Cloud.' 2020 12th International Conference on Cyber Conflict (CyCon), Estonia (2020): 43–61, doi:10.23919/CyCon49761.2020.9131718.
41 For a more detailed analysis of the background to the project see: Jean-Claude Burgelman, 'Politics and Open Science: How the European Open Science Cloud Became Reality (the Untold Story).' Data Intelligence 3.1 (2021): 5–19, doi:10.1162/dint_a_00069.
42 Commission Recommendation (EU) 2018/790 of 25 April 2018 on access to and preservation of scientific information, C/2018/2375, [2018] OJ L134/12–18, ELI: http://data.europa.eu/eli/reco/2018/790/oj.

One of the most important features of the EOSC is that such data must be rigorous from a structural point of view: in order to be part of the EOSC it is, in fact, mandatory to produce FAIR data, an acronym that stands for Findable, Accessible, Interoperable and Reusable43 data. The EOSC has been defined as 'a process of making research data in Europe accessible to all researchers under the same terms of use and distribution'.44 In general, the EOSC is part of an even bigger process: the move towards a new paradigm for science, the Open Science paradigm.

Open Science can be defined as a new way of conducting science, prompted by the digital revolution and the ongoing advent of digital Information and Communication Technologies (ICTs).45 Open Science is a very broad term that encompasses multiple concepts and consists in the openness of each phase of the scientific research cycle,46 from the collection of empirical and raw data, through the use of open techniques and methodologies of analysis, to the publication of research results in open access. The European institutions have repeatedly argued that 'The European Union will not remain competitive at the global level unless it promotes Open Science'47 and, already in 2012, the then European Commissioner for the Digital Agenda, Neelie Kroes, argued that 'The best thing about the Internet is that it is open. In every field it let us share and innovate. […] In science, openness is essential. Open Science doesn't mean ignoring economic reality. […] Let's make science open',48 indirectly acknowledging the role played by the Internet and ICTs in the science paradigm shift we are dealing with. Open Science therefore represents a change in the practice of science that is not sudden,49 but unavoidable: in the long term, science will become open by default, making the adjective 'open' superfluous to its qualification.50

43 Mark D. Wilkinson, et al., 'The FAIR Guiding Principles for scientific data management and stewardship.' Scientific Data 3.1 (2016): 1–9, doi:10.1038/sdata.2016.18. Furthermore, an explicit reference to the FAIR nature of the data is made in the EU Data Strategy, analysing the cross-sectoral governance framework for data access and use: 'This [the cross-sectoral governance framework] could include a mechanism to prioritise standardisation activities and to work towards a more harmonised description and overview of datasets, data objects and identifiers to foster data interoperability (i.e. their usability at a technical level) between sectors and, where relevant, within sectors. This can be done in line with the principles on Findability, Accessibility, Interoperability and Reusability (FAIR) of data taking into account the developments and decisions of sector-specific authorities', European Commission, 'A European strategy for data', COM/2020/66, 12.
44 Budroni, Burgelman, Schouppe, 'Architectures of Knowledge', 130.
45 Jean-Claude Burgelman, et al., 'Open science, open data and open scholarship: European policies to make science fit for the 21st century.' Frontiers in Big Data 2 (2019): 43, doi:10.3389/fdata.2019.00043.
46 Sönke Bartling, Sascha Friesike, Opening science: The evolving guide on how the internet is changing research, collaboration and scholarly publishing (Cham: Springer Nature, 2014), 17–19, doi:10.1007/978-3-319-00026-8.
47 European Commission, OSPP-REC Open Science Policy Platform Recommendations (Brussels: European Commission, 2018), 4, doi:10.2777/958647.
48 Neelie Kroes, 'The benefits of being open online', filmed June 2012 at the Danish Royal Library for 'Open Science: Structural Frameworks for Open, Digital Research – Strategy, Policy & Infrastructure', www.youtube.com/watch?v=6sJbi2eaPXc&list=PL579F6BE69794EAEF&index=1&feature=plpp_video.
49 Already in 2012, the Royal Society described the reasons for a necessary change in: Royal Society, Science as an Open Enterprise: Open Data for Open Science (London: Royal Society, 2012), 24–27.
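The FAIR requirement mentioned above is, in practice, a constraint on how datasets are described and exposed. As a purely illustrative sketch (a hypothetical, simplified record; neither the EOSC nor the FAIR principles mandate this exact schema, and all identifiers and URLs below are invented), a minimal machine-readable metadata record might map onto the four principles as follows:

# A hypothetical, simplified FAIR-style metadata record (illustrative only;
# not an official EOSC or DataCite schema). Comments map fields to the four
# FAIR principles: Findable, Accessible, Interoperable, Reusable.
import json

record = {
    # Findable: a globally unique, persistent identifier plus rich metadata.
    "identifier": {"scheme": "DOI", "value": "10.1234/covid-mobility-2020"},
    "title": "Anonymised mobility indicators, March-June 2020",
    "creators": [{"name": "Rossi, Maria", "orcid": "0000-0000-0000-0000"}],
    "keywords": ["COVID-19", "mobility", "open science"],
    # Accessible: a standard protocol (HTTPS) and explicit access conditions.
    "access": {"url": "https://repository.example.org/datasets/42",
               "protocol": "HTTPS",
               "conditions": "open, after pseudonymisation review"},
    # Interoperable: open formats and a shared vocabulary for the variables.
    "distribution": {"media_type": "text/csv",
                     "schema": "https://example.org/vocab/mobility#"},
    # Reusable: a clear licence and provenance so others can verify and build on it.
    "license": "CC-BY-4.0",
    "provenance": "Derived from operator-level counts; processing scripts at "
                  "https://repository.example.org/code/42",
}

print(json.dumps(record, indent=2))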

Open Science, however, does not subvert the founding pillars of science and, therefore, ethics remains an integral and fundamental part of the scientific process.51 In the multiplicity of codes of conduct and charters that summarise the ethical principles characterising scientific research, one of the pillars that can constantly be found is the protection of the human beings involved in the scientific process, with specific reference to their privacy and the protection of their personal data. The fact that Open Science promotes the widest possible openness does not mean that there are no limits: the fundamental right to the protection of personal data is one of these constraints.

At first glance,52 Open Science and the GDPR seem marked by a tendency to move in opposite directions: the former towards the widest possible openness and sharing, the latter towards the protection of individuals' personal data, with a precise set of safeguards. However, both Open Science and the GDPR are also characterised by flexibility and, in this way, they become complementary: Open Science does not promote blind and absolute openness but, according to the aforementioned principle 'as open as possible, as closed as necessary', establishes limits based on the balance between different interests; and the GDPR, in line with its dual nature expressed in Article 1 mentioned above, protects the personal data of individuals but does not limit the free movement of data. In particular, the GDPR does not restrict the free movement of data processed for scientific research purposes, for which it has expressly provided a set of exceptions.53 It is therefore worthwhile to focus on the impact that the European Strategy for Data and, specifically, the part of it relating to scientific research and the EOSC, could have on the protection of personal data and the development of science and innovation in Europe.
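The research exceptions just mentioned are conditional on appropriate safeguards, which Article 89(1) GDPR says may include pseudonymisation. As a purely illustrative sketch of such a technical safeguard (one possible approach, not a method the GDPR mandates; the dataset, field names and key-handling arrangement are hypothetical), direct identifiers in a research dataset can be replaced with keyed pseudonyms, so that re-identification requires access to a secret held apart from the data:

# Illustrative pseudonymisation of direct identifiers with a keyed hash
# (HMAC-SHA256). One possible safeguard in the spirit of Article 89(1) GDPR,
# not a legally mandated technique; records and field names are hypothetical.
# The key must be stored separately from the data and access-controlled.
import hmac
import hashlib
import secrets

SECRET_KEY = secrets.token_bytes(32)  # in practice: held by a separate key custodian

def pseudonymise(identifier):
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

raw_records = [
    {"name": "Maria Rossi", "tax_id": "RSSMRA80A41H501X", "age": 41, "positive_test": True},
    {"name": "Jan Novak", "tax_id": "NVKJNA75C15Z156Y", "age": 46, "positive_test": False},
]

# The research copy keeps only the pseudonym and the variables needed for analysis.
research_records = [
    {"pseudonym": pseudonymise(r["tax_id"]), "age": r["age"],
     "positive_test": r["positive_test"]}
    for r in raw_records
]

for r in research_records:
    print(r)

Because the same identifier always maps to the same pseudonym, records can still be linked across datasets for longitudinal research, while additional information (the key) is required to attribute the data to a specific person – which tracks the definition of pseudonymisation in Article 4(5) GDPR.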

IV. The Convergence of Data Protection and the Free Flow of Knowledge

The sharing of data promoted by the European projects just described can be considered an expression of the mechanism explained in section II, by which, in some instances – such as the COVID-19 pandemic – public deliberation should be rooted in scientific knowledge.

50 Burgelman, et al., 'Open science, open data and open scholarship', 43.
51 Bernard Rentier, Open Science, the challenge of transparency (Brussels: Académie royale de Belgique, 2019), 35–36.
52 On the opinion that there is a conflict between Open Science and data protection, see: Mark Phillips, Bartha M. Knoppers, 'Whose Commons? Data Protection as a Legal Limit of Open Science.' The Journal of Law, Medicine & Ethics 47.1 (2019): 106–111.
53 On this aspect see: Ugo Pagallo, Eleonora Bassi, 'Open Data Protection: Challenges, Perspectives, and Tools for the Reuse of PSI', in Digital Enlightenment Yearbook 2013, eds. Mireille Hildebrandt, Kieron O'Hara, Michael Waidner (Amsterdam: IOS Press, 2013): 179–189, doi:10.3233/978-1-61499-295-0-179.

These new and ambitious European projects can be interpreted as an expression of the convergence of two needs, related respectively to two important rights: the openness of data in support of science and, more generally, the free flow of knowledge, in accordance with Article 27 of the Universal Declaration of Human Rights;54 and the protection of personal data, a fundamental right enshrined in Article 8 of the Charter of Fundamental Rights of the European Union and strongly perceived by European citizens.55 This convergence between the free movement of knowledge and the protection of personal data can be explored in light of three points: (1) the intrinsic nature of science, as also recognised by the European institutions; (2) the feasibility of the openness of science without undermining data protection; and (3) the impact of the ruling of the ECJ in the Schrems II case.

A. Sharing of Ideas: A Feature of Science

The first aspect to take into account is the very nature of science. Scientific research is based on sharing: science is built on debates and exchanges of ideas, and the image of scientists locked in their ivory tower is now commonly rejected.56 In this scenario, the re-use of data can be what allows a real advancement of science, promoting the verifiability of results by others through the repeatability of experiments, or simply enabling data to be re-used to conduct new and different investigations.57

This aspect also seems to be recognised and supported by the European institutions. As an illustration, consider what the current President of the European Commission, Ursula von der Leyen, stressed in her talk at the World Economic Forum in Davos in January 2020: 'We consider data as a renewable resource such as wind or sun. Every 18 months we double the amount of data we produce […] 85 percent of which is never used.'58 On the same occasion, the President also added that Europe would 'co-create a framework to allow the use of these data' through the EOSC, in order to create advantage and sustainability from these data, defined as a 'hidden treasure'.59

54 UN General Assembly, Universal Declaration of Human Rights, 302(2), Paris (1948).
55 A survey carried out on behalf of the European Commission by the organisation Gallup showed that, already in 2008, the majority of European citizens (64% of respondents) had concerns about how their personal data were processed by various organisations: Eurobarometer Flash, 'Data Protection in the European Union: Citizens' perceptions.' Flash Eurobarometer 225 (2008): 5, http://uploadi.www.ris.org/editor/1234110594fl_225_sum_en.pdf.
56 Kimberly A. Neuendorf, et al., 'The view from the ivory tower: Evaluating doctoral programs in communication.' Communication Reports 20.1 (2007): 24–41, doi:10.1080/08934210601180747; Michael A. Peters, et al., 'Towards a philosophy of academic publishing.' Educational Philosophy and Theory 48.14 (2016): 1401–1425, doi:10.1080/00131857.2016.1240987; Juan Carlos De Martin, Università futura: tra democrazia e bit (Torino: Codice, 2017), 227.
57 Graham Pryor, 'Why manage research data?' in Managing research data, ed. G. Pryor (London: Facet Publishing, 2012), 1–16.
58 'Ursula von der Leyen speaks at the World Economic Forum', Euronews, YouTube, live 22 January 2020, www.youtube.com/watch?v=_A7Q514z_dw&feature=youtu.be&t=649.

COVID-19 Pandemic and GDPR  171 added that Europe would ‘co-create a framework to allow the use of these data’ through the EOSC, in order to create advantage and sustainability from these data, defined as a ‘hidden treasure’.59 By explicitly mentioning the EOSC, the willingness of the institutions to enhance the re-use and sharing of data emerges crystal clear, in accordance with the intrinsic nature of science as collaboration and exchange.

B. The Feasibility of the Openness

One of the lessons to be learned from the global COVID-19 crisis is that sharing data and knowledge is absolutely necessary60 to address the major challenges facing the world's entire population. The real novelty, perhaps, is not that sharing is necessary, but that it is also feasible.61 An editorial in Nature, published in February 2020, explicitly reaffirmed the need for maximum data sharing to tackle the virus, describing an agreement among publishers to guarantee the fastest and most effective circulation of data and information.62

A necessary clarification, however, relates to the content of the sharing that Open Science wants to foster: this circulation must not refer only and exclusively to the data of science but, more comprehensively, also to the results of scientific research, ie scientific publications as an expression of knowledge.63 Although the discussions have recently focused mainly on data, this does not mean that encouraging discussion of knowledge is a moot point: a greater diffusion of data does not automatically mean a greater production of knowledge.64

59 ‘The President’s highlighting of the EOSC at the World Economic Forum signals the importance of the EOSC initiative to the EC’s policy priorities. There is no doubt the EOSC will play an important role in Europe’s future’: ‘EC President Ursula von der Leyen talks EOSC in Davos’, EOSC Portal, 22 January 2020, www.eosc-portal.eu/news/ec-president-ursula-von-der-leyen-talks-eosc-davos. 60 See, for instance, the action plan ‘ERAvsCORONA’ which provides for the close cooperation of multiple Countries. European Commission: First ERAvsCorona action plan, https://ec.europa.eu/info/ sites/info/files/research_and_innovation/research_by_area/documents/ec_rtd_era-vs-corona_0.pdf. 61 See, for instance, Aileen Fyfe, ‘The production, circulation, consumption and ownership of scientific knowledge: historical perspectives’, CREATe Working Paper 4 (2020): 1–21, doi:10.5281/ zenodo.3859492. This paper, proposing a historical analysis of the journal ‘Philosophical transaction’ from its founding year, 1665, to the present day argues that scientific journals of private publishers have not always been the traditional means by which scientific knowledge is circulated, but rather proposes them as one of the business models to be pursued. 62 ‘Researchers must ensure that their work on this outbreak is shared rapidly and openly. […] Nature and its publisher Springer Nature have now signed a joint statement with other publishers, funders and scientific societies to ensure the rapid sharing of research data and findings relevant to the coronavirus. […] For researchers, the message is simple: work hard to understand and combat this infectious disease; make that work of the highest standard; and make results quickly available to the world.’: ‘Calling all coronavirus researchers: keep sharing, stay open’ in Nature, 576 (2020): 7, doi:10.1038/d41586-020-00307-x. 63 The concepts of data, information and knowledge are partially different. This chapter does not intend to examine the differences, nor to take into account the debate in literature on this point, but will limit itself to the distinction identified by the philosopher Luciano Floridi, who defines information as data with interpretation, and knowledge as a set of information, inserted within a network of questions and answers, bringing out its organised aspect. See, Luciano Floridi, The philosophy of ­information (Oxford: OUP, 2013), 80–85; Luciano Floridi, The logic of information: A theory of ­philosophy as ­conceptual design (Oxford: OUP, 2019), 28–49.

172  Ludovica Paseri on data, this does not mean that encouraging discussions on knowledge is a moot point: a greater diffusion of data does not automatically mean a greater production of knowledge.64 The circulation of knowledge, understood as the result of research (ie scientific publications) should represent the subject of serious debate. Consider as an example the use of the so-called ‘pre-prints’ during the first months of the pandemic. The ‘pre-prints’ are the last draft of scientific articles as sent to publishers, which have not yet been subjected to the peer review process. In this period, there has been a considerable increase in the use of the ‘pre-prints’, with all the benefits and drawbacks of fast, but unreviewed, sharing.65 Without intending to investigate the arguments in support of or against the use of ‘pre-prints’, their widespread dissemination during the crucial months of the COVID-19 pandemic has shown an interesting aspect: their increased adoption has demonstrated, on the one hand, the strong need to circulate scientific knowledge more rapidly than traditional mechanisms and, on the other, the fact that this faster dissemination may actually find ways of becoming feasible.

C. The Impact of the Schrems II Case

Although scientific research necessarily requires exchange and sharing, both of knowledge in general and of data in particular, an important innovation was introduced by the European Court of Justice's ruling of 16 July 2020, Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems, known as the Schrems II case.66 This judgment invalidated the agreement concluded between the European Union and the US, the so-called 'Privacy Shield', concerning a system of mechanisms intended to ensure that commercial operations involving the processing of personal data and their transfer to third countries comply with the safeguards provided by the rules on data protection. This agreement had been concluded in 2016, following the Schrems I judgment,67 which invalidated the previous agreement, the so-called 'Safe Harbor' agreement. In the Schrems II case, the judges of the ECJ denied data controllers the possibility of relying on the 'Privacy Shield' agreement as the legal basis for having personal data processed by, or transferred to, US data processors. The ratio behind the Court's decision is that the US does not ensure an adequate level of protection for personal data transferred from the European Union to organisations established in that third state, contrary to Article 1(1) of the 'Privacy Shield'. In paragraph 199 of the Schrems II ruling, the 'Privacy Shield' is declared incompatible with: (i) Article 45 of the GDPR, 'Transfers on the basis of an adequacy decision'; (ii) Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (respectively concerning the right to privacy and the right to the protection of personal data); (iii) Article 52 of the Charter of Fundamental Rights of the European Union, concerning 'Scope and interpretation of rights and principles'.68 It is significant that the ECJ declared the agreement invalid precisely in light of the fundamental rights at the basis of the European architecture, thereby strengthening the system of safeguards put in place to protect individuals' rights.

Although the Privacy Shield under scrutiny in Schrems II refers to the transfer of data for commercial purposes, it is worth remarking upon the impact of this ruling more generally on the whole framework of European data protection policies and, hence, also in relation to the field of scientific research. Consider the mechanism required by the GDPR for the transfer of personal data to a third country or an international organisation in the field of scientific research. Even for research data, in the absence of an agreement pursuant to Article 45(3) of the GDPR, the transfer can only take place if there are adequate guarantees providing that the data subjects have effective rights and remedies, as set out in Article 46 of the GDPR. If such 'appropriate safeguards' are not provided, the transfer will only be possible if one of the exceptions provided for in Article 49 applies. However, as clearly underlined by the European Data Protection Board (EDPB) in the guidelines 'on the processing of data concerning health for the purpose of scientific research in the context of the COVID-19 outbreak', adopted on 21 April 2020, 'the derogations of Art. 49 of the GDPR do have exceptional character only'.69 Against this complex procedural background, a twofold remark should therefore be made. First, as described above, sharing and transfer in an Open Science scenario are essential and, frequently, the data involved can be personal data (eg medical, health-related or genetic data). Second, the Schrems II judgment has made the transfer of personal data to third countries uncertain and considerably complex.

It should be stressed that the President of the European Commission, von der Leyen, in her aforementioned speech at the 2020 World Economic Forum, explicitly raised the possibility of opening up the EOSC to international players in the near future.70 Considering that, after Schrems II, the scenario has become uncertain and complicated from a legal point of view, how should the EOSC be situated within it? The implementation of the EOSC Project, which aims to promote the exchange of, and access to, research data for '1.7 million European researchers and 70 million professionals in science and technology',71 may really represent a game changer.72 However, it is a sensitive matter: opening the EOSC beyond the EU may pose new challenges in relation to the Schrems II judgment; or it could become an opportunity to trigger a concerted effort to clarify a legally complex context. It may represent a first step towards sharing research data in a safeguarded environment within Europe, which may subsequently become a model for data sharing open to international actors. Furthermore, the scope of the Schrems II judgment goes well beyond the purely legal aspects and opens a debate on ethics and on the business models to be adopted. This ruling seems, in fact, to suggest to the European institutions that it may now be time to review their digital development plans and '[…] invest in projects that allow the EU to become more reliant in resources and platforms developed and delivered by the internal markets'.73 Although the EOSC is not a real cloud from a technical point of view, it represents a virtual environment for the sharing and re-use of FAIR data: this can indicate the direction in which to foster Open Science in Europe and to promote a governance of research data that will bring benefits primarily to researchers, but also, in the future, to European companies and public administrations. The EOSC represents a sort of Internet of FAIR data and services,74 and, like the web created at CERN in Geneva and released in open source, it embodies the nature of openness while guaranteeing the protection of the personal data of the individuals involved. Having analysed the aspects that characterise a context in which the free flow of knowledge and the protection of personal data coexist, attention should now be drawn to some problematic aspects that remain: the focus will be on personal data protection in scientific research, specifically in relation to the implementation of the EOSC as part of the EU Data Strategy.

66 Case C-311/18.
67 Case C-362/14 Maximillian Schrems v Data Protection Commissioner (2015) ECJ, ECLI:EU:C:2015:650.
68 Art 52(1)–(2) of the Charter of Fundamental Rights of the European Union: '1. Any limitation on the exercise of the rights and freedoms recognised by this Charter must be provided for by law and respect the essence of those rights and freedoms. Subject to the principle of proportionality, limitations may be made only if they are necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others. 2. Rights recognised by this Charter for which provision is made in the Treaties shall be exercised under the conditions and within the limits defined by those Treaties.'
69 European Data Protection Board (EDPB), Guidelines on the processing of data concerning health for the purpose of scientific research in the context of the COVID-19 outbreak, 03 (2020): 14, https://edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-032020-processing-data-concerning-health-purpose_en.
70 'EC President Ursula von der Leyen talks EOSC in Davos', EOSC Portal.
71 European Commission, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, European Cloud Initiative – Building a competitive data and knowledge economy in Europe, COM/2016/178 final, 6, ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52016DC0178; Budroni, Burgelman, Schouppe, 'Architectures of Knowledge', 132.
72 See EDPS, 'Opinion on the European strategy for data', 3/2020, 11: 'The common European data spaces, based on the European values and fundamental rights with the human being at the centre, could also serve as evidence of the viability of alternative models to the current concentration of data in the hands of a few private corporations based outside the EU which play the role of self-appointed gatekeepers of internet or big IT solution providers. Therefore, the envisaged European data spaces should serve as an example with regard to transparency, effective accountability and proper balance between the interests of the data subjects and the shared interest of the society as a whole.'
73 Paolo Vecchi, 'On the inadequacy of US Cloud providers', Joinup, European Union, 30 July 2020, https://joinup.ec.europa.eu/collection/joinup/news/privacy-shield-invalidation.
74 Budroni, Burgelman, Schouppe, 'Architectures of Knowledge', 136.


V. Some Issues Still Open for Data Protection in Scientific Research and in the EOSC

The EU Data Strategy emphasises that 'The vision of a common European data space implies an open, but assertive approach to international data flows, based on European values'.75 Considering the business model of the big players in the digital world, a tension immediately emerges between the aims they pursue, ie economic freedoms, and the rights underlying the GDPR. The situation appears to be completely different in relation to scientific research. In this context, the GDPR itself already provides for flexibility: it identifies a set of exceptions that apply when personal data processing is carried out for scientific research purposes. While in recent months, in relation to the COVID-19 pandemic, it has been absolutely necessary to reiterate the importance of certain untouchable pillars of the European data protection framework (consider the debate on virus-tracking apps), the situation seems different in the field of scientific research. Here compliance with the GDPR, through its derogations, drives the promotion of research itself. Compliance with the GDPR becomes a means in support of valid scientific research that respects the fundamental rights of the individuals involved. It is therefore crucial to deal with the open issues of this discipline. In particular, the focus should be, first of all, on the challenges related to the EOSC, which represents a model for the other European data spaces that the European Commission intends to implement in the near future, according to the EU Data Strategy. These challenges can be summarised in three sets of issues: (1) legal framework issues; (2) fragmentation issues; and (3) compliance issues.

A. Legal Framework Issues

Since its introduction, the EU Data Strategy seems to have repeatedly drawn a fairly clear distinction between personal and non-personal data.76 Although this distinction is fundamental to identifying the scope of application of the European disciplines (the GDPR for personal data and the Regulation on the free flow of non-personal data77) and makes sense (not all data are personal), what is questioned here is precisely the sharpness of this distinction. It should not be assumed that these categories of data belong to two separate areas that never meet, as if they were two non-communicating silos. The same applies to the two legal frameworks of reference: concrete reality makes it necessary to avoid this sharp separation when considering data. In fact, the nature of data is often ambiguous: it is not easy to distinguish personal from non-personal data, and very often we are dealing with so-called mixed data.78 In addition, a clear distinction between personal and non-personal data is even more difficult in the field of scientific research. It has been demonstrated that even when anonymisation, a technique frequently applied to data involved in scientific research, is implemented, there always remains a residual risk79 of possible re-identification.80 Moreover, in the daily activity of researchers, it may be difficult to identify the nature of data that have largely been subject to manipulation or aggregation. In this regard, Opinion 3/2020 of the EDPS, specifically concerning the European Strategy for Data, underlines that '[…] aggregated data is not necessarily non-personal data, since aggregated data might still be related to an identified or identifiable individual'.81 In this scenario, therefore, current EU strategies on data governance should work in two directions: first, governance must be able to balance the opposing interests at stake,82 which are often characterised by blurred boundaries, intersections and overlaps; second, it is crucial to strive for legal coordination, enabling the coexistence of different disciplines and safeguarding opposing interests.

75 European Commission, 'A European strategy for data', COM/2020/66, 23.
76 See, eg: 'Citizens will trust and embrace data-driven innovations only if they are confident that any personal data sharing in the EU will be subject to full compliance with the EU's strict data protection rules. At the same time, the increasing volume of non-personal industrial data and public data in Europe, combined with technological change in how the data is stored and processed, will constitute a potential source of growth and innovation that should be tapped.' European Commission, A European strategy for data, COM/2020/66, 1.
77 Regulation (EU) 2018/1807 of the European Parliament and of the Council of 14 November 2018 on a framework for the free flow of non-personal data in the European Union, [2018] OJ L303/59–68, ELI: http://data.europa.eu/eli/reg/2018/1807/oj.
78 The discipline to be applied to 'mixed data' has been a source of multiple concerns. In this regard, see the UK Court of Appeal judgment Dr B v The General Medical Council, although it is still based on the rules in force prior to the GDPR: UK Court of Appeal DB v GMC [2018] EWCA Civ 1497, www.bailii.org/ew/cases/EWCA/Civ/2018/1497.html.
79 Michèle Finck and Frank Pallas, 'They who must not be Identified – Distinguishing Personal from Non-Personal Data under the GDPR', Max Planck Institute for Innovation & Competition Research Paper 19, no. 14 (2019): 17. In addition, the paper, adopting a computer science perspective, also focuses on the uncertainty concerning the standard of identifiability, which appears to differ between the GDPR, the WP29 and the various Member States' supervisory authorities.
80 Eleonora Bassi, 'PSI, protezione dei dati personali, anonimizzazione', Informatica e diritto 37.1–2 (2011): 75. The author states that: 'The boundary between personal data and anonymous data, in fact, is variable and reversible in relation to the technical tools that allow the anonymisation and de-anonymisation of data, and its variability depends on the increase in the probability of a data being led back to a physical person, within a given context. Hence, the greater the possible contextual identification links, the more the data should be considered personal.' [translation from the Italian original text].
81 EDPS, 'Opinion on the European strategy for data', 3/2020, 8.
82 Ugo Pagallo, 'Sovereigns, Viruses, and the Law', 8. The paper states: 'What governance is called to do in addition to national governments and their international organizations is to set the proper level of abstraction for the balances that shall be struck between multiple regulatory systems in competition.' It refers to the regulatory challenges raised by the global crisis of the COVID-19 pandemic, but these can be extended to a more general situation of multi-stakeholder governance with different, or even opposing, interests in the field.


B. Fragmentation Issues

One of the topics that frequently characterises the debate on, and analysis of, personal data protection in scientific research is the fragmentation of the discipline. This is due to the fact that the European Union has competence to establish data protection provisions for activities falling within the scope of EU law, pursuant to Article 16(2) of the Treaty on the Functioning of the European Union (TFEU),83 as reiterated in Article 2(2) of the GDPR, dedicated to the definition of the material scope of the Regulation. As regards the field of scientific research specifically, the combined interpretation of Article 6 TFEU and Title XIX TFEU, entitled 'Research and Technological Development and Space', assigns the European Union a role of support, coordination and complementarity in relation to the action of the Member States in this field.84 The consequence is that, for scientific research, the GDPR, in Article 89, leaves wide discretion to the Member States. With different countries legislating in the field of research, there is a risk of a fragmented framework at the European level.85 This assumption has also been confirmed by the EDPS, in the aforementioned Opinion 3/2020, in relation to the creation of European data spaces:

The European legislator has a responsibility to put in place additional legal safeguards in situations where the Strategy would lead to increased availability and reuse of personal data. The need for further specifications of the general rules contained in the GDPR at the EU level seems most pressing in relation to the sharing of health data86 and for scientific research in general. At the same time, such specification, aiming at the harmonization at the maximum possible extent of the rules on the processing of personal data for scientific research in particular, may further foster the sharing of data.87

Given that the EOSC is based on data sharing and re-use, this statement acquires fundamental relevance. In other words, by stating that the highest possible degree of harmonisation of the rules on the processing of personal data for scientific research may further foster the sharing of data, the EDPS indirectly admits that the fragmentation of the discipline in this field may be an obstacle.88

83 Consolidated version of the Treaty on the Functioning of the European Union, [2012] OJ C326/47–390, ELI: http://data.europa.eu/eli/treaty/tfeu_2012/oj.
84 On the competences of the European Union, ex multis, see: Maria Luisa Manis, 'The processing of personal data in the context of scientific research. The new regime under the EU-GDPR.' BioLaw Journal 11.3 (2017): 325–354, doi:10.15168/2284-4503-259.
85 This aspect of national adaptations and the consequent risk of fragmentation is clearly analysed in: Paola Aurucci, 'Legal Issues in Regulating Observational Studies: The Impact of the GDPR on Italian Biomedical Research', European Data Protection Law Review 5(2) (2019): 197–208, doi:10.21552/edpl/2019/2/9; and Rossana Ducato, 'Data protection, scientific research, and the role of information', Computer Law & Security Review 37 (2020): 4, doi:10.1016/j.clsr.2020.105412.
86 See European Data Protection Supervisor (EDPS), 'Preliminary Opinion on the European Health Data Space', 8/2020 (November 2020), 14, https://edps.europa.eu/sites/edp/files/publication/20-11-17_preliminary_opinion_european_health_data_space_en.pdf: 'Considers that the forthcoming legislative initiative on the EHDS should also aim at contributing to a mitigation of the current fragmentation of rules applicable to the processing of health data and to scientific research, thus also aimed at guaranteeing a lawful and ethical use and re-use of the data within the EHDS.'
87 EDPS, 'Opinion on the European strategy for data', 3/2020, 11.
88 On the risks related to the fragmentation of regulation, see: Ugo Pagallo, 'The Legal Challenges of Big Data: Putting Secondary Rules First in the Field of EU Data Protection', European Data Protection Law Review 3.1 (2017): 42–43, doi:10.21552/edpl/2017/1/7.

C. Compliance Issues

A third set of problems is related to the effective implementation of the GDPR and the practical compliance of European initiatives with its provisions. The EDPS, in the aforementioned Opinion 3/2020, focuses specifically on three profiles: the principles, the special categories of personal data and, finally, the rights of the data subject.89 Regarding the principles laid down in Article 5 of the GDPR, it is reiterated on several occasions90 that scientific research cannot avoid them and that they must therefore be considered fully applicable to the processing of personal data for scientific research purposes. These statements, however, partially conflict with the exceptions provided by the combined provisions of Articles 5 and 89 of the GDPR, specifically the exceptions to the principles of purpose limitation and storage limitation for the processing of personal data for scientific research purposes.91 In addition, a wide re-use of research data, if it involves personal data or, a fortiori, special categories of personal data pursuant to Article 9 of the GDPR, necessarily requires the adoption of specific and appropriate safeguards.92 This aspect will certainly have to be taken into account in the concrete implementation of the EOSC, which is based precisely on the re-use and sharing of data, to the extent that such data can be personal.93

Finally, as regards the rights of the data subject, the EDPS seems to envisage a narrowing of the scope of the derogations provided for scientific research. It states:

[…] this special regime cannot be applied in such a way that the essence of the right to data protection is compromised. Derogations from these data subject rights must be subject to a particularly high level of scrutiny. They require a case-by-case analysis, balancing of interests and rights at stake, and a flexible multi-factor assessment. Any limitation to fundamental rights in law has to be interpreted restrictively and should not be abused.94

This stance, aimed at ensuring robust protection for the individuals involved, was confirmed by the EDPB in its 'Guidelines on scientific research in the context of the COVID-19 outbreak', which state that 'In principle, situations as the current COVID-19 outbreak do not suspend or restrict the possibility of data subjects to exercise their rights pursuant to Art. 12 to 22 of the GDPR.'95 Yet, as a matter of fact, different rules for scientific purposes already exist in the different Member States, owing to the derogations permitted by Article 89(2) of the GDPR. In light of these open issues, it can therefore be argued that although the GDPR provides an extremely flexible framework for the domain under investigation, this is not in itself sufficient to guarantee the fullest development of scientific research. It is precisely the relevance of science and, specifically, its role as a component of public deliberation in democratic societies that requires a further, shared and concerted effort.96 In other words, building on the flexibility provided by the GDPR, it is crucial to proceed in a way that includes collaboration between the different actors involved at various levels (from the scientific community, first and foremost, to the institutional level, both European and national). The aim is to ensure compliance with the European data protection framework, which is essential to foster the sharing and re-use of data and scientific knowledge, while guaranteeing the fundamental rights of the individuals involved.

VI. Conclusion

This chapter aimed to critically analyse the impact of the GDPR on the management, dissemination and re-use of research data, representing the basis for public deliberation, in the scenario outlined by the COVID-19 pandemic. Specifically, with regard to the management, dissemination and re-use of research data, we referred to the recent European Strategy for Data, proposed by the European Commission, paying particular attention to scientific research and the European Open Science Cloud (EOSC) project. Section II untangled the epistemological facets of our analysis, exploring the role of scientific knowledge as a prerequisite for public and democratic deliberation in specific circumstances such as those generated by the pandemic. In section III, after examining the structure of the EU Data Strategy and the need underlying it, ie the construction of an effective data governance in Europe, we focused on the EOSC as an expression of the emerging new paradigm of science, the so-called Open Science paradigm. In particular, we proposed an interpretation of the EU Data Strategy as an institutionalisation of the Open Science paradigm: the aim was to investigate the convergence between the so-called fifth European freedom, ie the 'freedom of knowledge'97 and data, and the European data protection framework. Section IV scrutinised this convergence in light of three aspects: (1) the intrinsic nature of science, as also recognised by the European institutions; (2) the feasibility of the openness of science without undermining data protection; and (3) the impact of the ruling of the ECJ in the Schrems II case. The lessons learned from the Schrems II case include the importance of investing in projects for the development of an open European research ecosystem, which does not, however, exclude international players, in the name of knowledge being shared as much as possible. As stressed in section IV, however, such an open research ecosystem should guarantee the protection of fundamental rights and European values, first and foremost the protection of individuals' personal data. In order to strike a fair balance between these rights and interests, we need policies that coordinate the different disciplines: the purpose should be both to tackle the fragmented nature of the discipline of data protection in the scientific field and to guarantee the essential compliance with the GDPR. This Regulation, thanks to its intrinsic flexibility, seems able to adapt to different scenarios without curtailing individuals' safeguards. On the basis of this flexibility, however, a concerted effort between the different actors involved in the scientific research field is required. In order to further promote scientific research as a component of public deliberation in democratic societies, a coordinated effort is essential: a few open challenges persist in the area under investigation, as analysed in section V. These have been grouped into three macro-sets of issues, related to the legal framework in general, the fragmentation of scientific research disciplines and, finally, the concrete compliance of European projects with the GDPR.

Admittedly, the current COVID-19 pandemic may either exacerbate these issues or offer us an extraordinary opportunity. On the one hand, the crisis is showing us the relevance of Article 27 of the Universal Declaration of Human Rights, which states that 'Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.' Since, as some scholars claim,98 knowledge can become a potential element of discrimination between those who have access to it and those who do not, it is essential that institutions ensure that access to knowledge is as broad as possible. On the other hand, scientific research, although it generally represents the basis for the development of society and innovation, nowadays also represents the means by which to guarantee 'freedom from disease': it is in the scientific field that the fight against the virus takes place. This chapter has shown that the realisation of the Open Science paradigm is an ethical stance and not a moral prejudice,99 and that there is no conflict between the free movement of knowledge and the protection of personal data: hence, it is essential to identify policies capable of ensuring that scientific research is as open as possible, a fortiori in a situation such as the COVID-19 pandemic, in which it should represent the basis for public and democratic deliberation.

89 In addition, the issue of the legal basis of the processing could also be considered an interesting topic. While, in relation to scientific research, reference is generally made to the public interest (according to Art 6(1)(e) of the GDPR), in relation to the COVID-19 pandemic it has been questioned whether this legal basis is actually the best one. On this aspect, among others, see: Regina Becker, et al., 'COVID-19 Research: Navigating the European General Data Protection Regulation.' Journal of Medical Internet Research 22.8 (2020): e19799, doi:10.2196/19799.
90 EDPS, 'Opinion on the European strategy for data', 3/2020, 10; EDPS, 'A Preliminary Opinion on data protection and scientific research' (January 2020), 16, which states: 'Each of the principles under Article 5 of the GDPR apply to all data processing, including processing for research purposes.'
91 For a detailed analysis of these exceptions see: Ducato, 'Data protection, scientific research', 5; Marcello Ienca, et al., 'How the General Data Protection Regulation changes the rules for scientific research', European Parliamentary Research Service (EPRS), Scientific Foresight Unit (STOA) (2019), 21, doi:10.2861/17421.
92 EDPS, 'Opinion on the European strategy for data', 3/2020, 10.
93 This issue is closely linked to the concept of 'secondary use of research data'. On this matter, for a more in-depth analysis, see: Aiste Gerybaite and Paola Aurucci, 'Big Data and Pandemics: How to Strike Balance with Data Protection in the Age of GDPR.' Intelligent Environments 2020. IOS Press, 2020, 115–124; Sumantra Sarkar, 'Using secondary data to tell a new story: A cautionary tale in health information technology research', Communications of the Association for Information Systems 47.1 (2020): 47–112, doi:10.17705/1CAIS.04705.
94 EDPS, 'Opinion on the European strategy for data', 3/2020, 10.
95 EDPB, 'Guidelines on scientific research in the context of the COVID-19 outbreak', 11.
96 Similarly, in relation to the setting up of one of the common European Data Spaces, specifically the European Health Data Space, the EDPS has recently stressed a related consideration. Regarding the possibility of sharing and reusing data, it has in fact emphasised the need for 'the establishment of a strong data governance mechanism that provides for sufficient assurances of a lawful, responsible, ethical management anchored in EU values, including respect for fundamental rights', see: EDPS, 'Preliminary opinion on the European Health Data Space', 8/2020, 1.
97 The phrase was coined by Janez Potočnik, European Commissioner for Research from 2004 to 2010, consecrated at the Council in Ljubljana in 2008 and relaunched with the European Commission's communication entitled 'A Reinforced European Research Area Partnership for Excellence and Growth', COM(2012) 392 final, which pursues the objective of ensuring that '[…] new knowledge-intensive products and services contribute substantially to growth and jobs, but a genuinely world class science base is crucial to achieving this aim', p 2.
98 Stefano Rodotà, Il diritto di avere diritti (Roma-Bari: Laterza & Figli Spa, 2012), 121; Stefano Quintarelli, Capitalismo immateriale: le tecnologie digitali e il nuovo conflitto sociale (Torino: Bollati Boringhieri, 2019), 128.
99 On the distinction between ethical stance and moral prejudice, see: Boniolo, The art of deliberating, 149: '[…] an ethical stance is a view on human behaviour that allows expression of judgment on whether behaviour is good or bad on the basis of good reasons. In the absence of justificatory good reasons, we only have a moral prejudice and not an ethical stance'.

References

Abeler, Johannes, et al. 'COVID-19 contact tracing and data protection can go together.' JMIR mHealth and uHealth 8.4 (2020): 1–5. doi:10.2196/19359.
Ahmed, Nadeem, et al. 'A survey of COVID-19 contact tracing apps.' IEEE Access 8 (2020): 134577–134601. doi:10.1109/ACCESS.2020.3010226.
Amnesty International. 'Out of Control: Failing EU Laws for Digital Surveillance Export.' 21 September 2020. www.amnesty.org/en/documents/eur01/2556/2020/en/.
Anderson, Chris. 'The end of theory: The data deluge makes the scientific method obsolete.' Wired magazine 2008. www.wired.com/2008/06/pb-theory/.
UN General Assembly. Universal Declaration of Human Rights. 302(2), Paris (1948).
Aurucci, Paola. 'Legal Issues in Regulating Observational Studies: The Impact of the GDPR on Italian Biomedical Research.' European Data Protection Law Review 5(2) (2019): 197–208. doi:10.21552/edpl/2019/2/9.
Ayris, Paul, et al. 'Realising the European Open Science Cloud.' European Union Publications (2016): 1–19. doi:10.2777/940154.


Bartling, Sönke, Sascha Friesike. Opening science: The evolving guide on how the internet is changing research, collaboration and scholarly publishing. Cham: Springer Nature, 2014. doi:10.1007/978-3-319-00026-8.
Bassi, Eleonora. 'PSI, protezione dei dati personali, anonimizzazione.' Informatica e diritto 37.1–2 (2011): 65–83.
Becker, Regina, et al. 'COVID-19 Research: Navigating the European General Data Protection Regulation.' Journal of Medical Internet Research 22.8 (2020): e19799. doi:10.2196/19799.
Boniolo, Giovanni. Conoscere per vivere. Istruzioni per sopravvivere all'ignoranza. Milano: Meltemi, 2018.
—— The art of deliberating: democracy, deliberation and the life sciences between history and theory. Berlin, Heidelberg: Springer Science & Business Media, 2012. doi:10.1007/978-3-642-31954-9.
Budroni, Paolo, Jean-Claude Burgelman, Michel Schouppe. 'Architectures of Knowledge: The European Open Science Cloud.' ABI Technik 39.2 (2019): 130–141. doi:10.1515/abitech-2019-2006.
Burgelman, Jean-Claude, et al. 'Open science, open data and open scholarship: European policies to make science fit for the 21st century.' Frontiers in Big Data 2 (2019): 1–6. doi:10.3389/fdata.2019.00043.
—— 'Politics and Open Science: How the European Open Science Cloud Became Reality (the Untold Story).' Data Intelligence 3.1 (2021): 5–19. doi:10.1162/dint_a_00069.
Buttarelli, Giovanni. 'Privacy matters: updating human rights for the digital society.' Health Technol. 7 (2017): 325–328. doi:10.1007/s12553-017-0198-y.
'Calling all coronavirus researchers: keep sharing, stay open.' Nature 576 (2020): 7. doi:10.1038/d41586-020-00307-x.
Cavanillas, José María, Edward Curry, Wolfgang Wahlster. New horizons for a data-driven economy: a roadmap for usage and exploitation of big data in Europe. Cham: Springer Nature, 2016.
Charter of Fundamental Rights of the European Union, [2012] OJ C326/391–407. ELI: http://data.europa.eu/eli/treaty/char_2012/oj.
De Martin, Juan Carlos. Università futura: tra democrazia e bit. Torino: Codice, 2017.
Ducato, Rossana. 'Data protection, scientific research, and the role of information.' Computer Law & Security Review 37 (2020): 1–16. doi:10.1016/j.clsr.2020.105412.
Durante, Massimo. Computational Power: The Impact of ICT on Law, Society and Knowledge. New York: Routledge, 2021.
ECJ. Case C-362/14 Maximillian Schrems v Data Protection Commissioner (2015). ECLI:EU:C:2015:650.
ECJ. Case C-311/18 Facebook Ireland v Schrems (2020). ECLI:EU:C:2020:559.
EOSC Portal. 'EC President Ursula von der Leyen talks EOSC in Davos.' 22 January 2020. www.eosc-portal.eu/news/ec-president-ursula-von-der-leyen-talks-eosc-davos.
Eurobarometer Flash. 'Data Protection in the European Union Citizens' perceptions.' Flash Eurobarometer 225 (2008): 1–22. http://uploadi.www.ris.org/editor/1234110594fl_225_sum_en.pdf.
Euronews. 'Ursula von der Leyen speaks at the World Economic Forum.' Youtube, live 22 January 2020. www.youtube.com/watch?v=_A7Q514z_dw&feature=youtu.be&t=649.
European Commission. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, A Digital Single Market Strategy for Europe. COM/2015/0192 final. ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52015DC0192.
—— Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, A European strategy for data. COM/2020/66 final. ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52020DC0066.
—— Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, A Reinforced European Research Area Partnership for Excellence and Growth. COM/2012/392 final. ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52012DC0392.
—— Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, European Cloud Initiative – Building a competitive data and knowledge economy in Europe. COM/2016/178 final. ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52016DC0178.
—— Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Shaping Europe's digital future. COM/2020/67 final. ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2020:67:FIN.
—— Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, The European Green Deal. COM/2019/640 final. ELI: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52019DC0640.
—— OSPP-REC Open Science Policy Platform Recommendations. Brussels: European Commission, 2018, 1–14. doi:10.2777/958647.
—— Recommendation (EU) 2018/790 of 25 April 2018, On access to and preservation of scientific information. C/2018/2375, [2018] OJ L134. ELI: http://data.europa.eu/eli/reco/2018/790/oj.
—— 'The Vienna Declaration on the European Open Science Cloud.' 2018. https://eosc-launch.eu/fileadmin/user_upload/k_eosc_launch/EOSC_Vienna_Declaration_2018.pdf.
European Data Protection Board (EDPB). Guidelines on the processing of data concerning health for the purpose of scientific research in the context of the COVID-19 outbreak. 03 (2020). https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202003_healthdatascientificresearchcovid19_en.pdf.
European Data Protection Supervisor (EDPS). 'A Preliminary Opinion on data protection and scientific research.' (January 2020). https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf.
—— 'Preliminary Opinion 8/2020 on the European Health Data Space.' (November 2020). https://edps.europa.eu/sites/edp/files/publication/20-11-17_preliminary_opinion_european_health_data_space_en.pdf.
European Union. Consolidated version of the Treaty on the Functioning of the European Union, OJ C 326, 26.10.2012, 47–390. ELI: http://data.europa.eu/eli/treaty/tfeu_2012/oj.
—— 'First ERAvsCorona action plan.' 2020. https://ec.europa.eu/info/sites/info/files/research_and_innovation/research_by_area/documents/ec_rtd_era-vs-corona_0.pdf.
Finck, Michèle, Frank Pallas. 'They who must not be Identified – Distinguishing Personal from Non-Personal Data under the GDPR.' Max Planck Institute for Innovation & Competition Research Paper 19, no. 14 (2019): 1–47.

Floridi, Luciano. Il verde e il blu. Idee ingenue per migliorare la politica. Milano: Cortina Editore, 2020.
—— Information: A Very Short Introduction. Oxford: OUP, 2014.
—— 'The Fight for Digital Sovereignty: What It Is, and Why It Matters, Especially for the EU.' Philosophy & Technology 33.3 (2020): 369–378. doi:10.1007/s13347-020-00423-6.
—— The fourth revolution: How the infosphere is reshaping human reality. Oxford: OUP, 2014.
—— The logic of information: A theory of philosophy as conceptual design. Oxford: OUP, 2019.
—— The onlife manifesto: Being human in a hyperconnected era. Cham: Springer Nature, 2015.
—— The philosophy of information. Oxford: OUP, 2013.
Fyfe, Aileen. 'The production, circulation, consumption and ownership of scientific knowledge: historical perspectives.' CREATe Working Paper 4 (2020): 1–21. doi:10.5281/zenodo.3859492.
Gerybaite, Aiste, Paola Aurucci. 'Big Data and Pandemics: How to Strike Balance with Data Protection in the Age of GDPR.' Intelligent Environments 2020. IOS Press, 2020, 115–124.
Ienca, Marcello, et al. 'How the General Data Protection Regulation changes the rules for scientific research.' European Parliamentary Research Service (EPRS), Scientific Foresight Unit (STOA) (2019): 1–91. doi:10.2861/17421.
Kroes, Neelie. 'The benefits of being open online', filmed June 2012 at Danish Royal Library for 'Open Science: Structural Frameworks for Open, Digital Research – Strategy, Policy & Infrastructure'. www.youtube.com/watch?v=6sJbi2eaPXc&list=PL579F6BE69794EAEF&index=1&feature=plpp_video.
Kushwaha, Neal, Przemysław Roguski, Bruce W. Watson. 'Up in the Air: Ensuring Government Data Sovereignty in the Cloud.' 2020 12th International Conference on Cyber Conflict (CyCon), Estonia (2020): 43–61. doi:10.23919/CyCon49761.2020.9131718.
Lynch, Michael P. The Internet of Us. Knowing More and Understanding Less in the Age of Big Data. London-New York: Liveright, 2016.
Manis, Maria Luisa. 'The processing of personal data in the context of scientific research. The new regime under the EU-GDPR.' BioLaw Journal 11.3 (2017): 325–354. doi:10.15168/2284-4503-259.
Neuendorf, Kimberly A, et al. 'The view from the ivory tower: Evaluating doctoral programs in communication.' Communication Reports 20.1 (2007): 24–41. doi:10.1080/08934210601180747.
OECD. 'Roadmap toward a Common Framework for Measuring the Digital Economy.' Report for the G20 Digital Economy Task Force. 2020. www.oecd.org/sti/roadmap-toward-a-common-framework-for-measuring-the-digital-economy.pdf.
Pagallo, Ugo, Eleonora Bassi. 'Open Data Protection: Challenges, Perspectives, and Tools for the Reuse of PSI.' In Digital Enlightenment Yearbook 2013, edited by Mireille Hildebrandt, Kieron O'Hara, Michael Waidner, 179–189. Amsterdam: IOS Press, 2013. doi:10.3233/978-1-61499-295-0-179.
Pagallo, Ugo, Massimo Durante. 'The philosophy of law in an information society.' In The Routledge Handbook of Philosophy of Information, edited by Luciano Floridi, 396–407. New York: Routledge, 2016.
Pagallo, Ugo. 'The Legal Challenges of Big Data: Putting Secondary Rules First in the Field of EU Data Protection.' European Data Protection Law Review 3.1 (2017): 34–46. doi:10.21552/edpl/2017/1/7.

—— 'Sovereigns, Viruses, and the Law.' Law in Context. A Socio-legal Journal 37.1 (2020): 8. doi:10.26826/law-in-context.v37i1.117.
Peters, Michael A., et al. 'Towards a philosophy of academic publishing.' Educational Philosophy and Theory 48.14 (2016): 1401–1425. doi:10.1080/00131857.2016.1240987.
Phillips, Mark, Bartha M. Knoppers. 'Whose Commons? Data Protection as a Legal Limit of Open Science.' The Journal of Law, Medicine & Ethics 47.1 (2019): 106–111.
Plato. Gorgias. London: Penguin Classics, 2004.
Pryor, Graham. 'Why manage research data?' In Managing research data, edited by Graham Pryor, 1–16. London: Facet Publishing, 2012.
Quintarelli, Stefano. Capitalismo immateriale: le tecnologie digitali e il nuovo conflitto sociale. Torino: Bollati Boringhieri, 2019.
Regulation (EU) 2018/1807 of the European Parliament and of the Council of 14 November 2018, on a framework for the free flow of non-personal data in the European Union, [2018] OJ L303/59–68. ELI: http://data.europa.eu/eli/reg/2018/1807/oj.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016, on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), [2016] OJ L119/1–88. ELI: http://data.europa.eu/eli/reg/2016/679/oj.
Rentier, Bernard. Open Science, the challenge of transparency. Brussels: Académie royale de Belgique, 2019.
Rodotà, Stefano. Il diritto di avere diritti. Roma-Bari: Laterza & Figli Spa, 2012.
Royal Society. Science as an Open Enterprise: Open Data for Open Science. London: Royal Society, 2012.
Sarkar, Sumantra. 'Using secondary data to tell a new story: A cautionary tale in health information technology research.' Communications of the Association for Information Systems 47.1 (2020): 47–112. doi:10.17705/1CAIS.04705.
Stanescu, Lorena Elena, Raluca Onufreiciuc. 'Some Reflections on "Datafication": Data Governance and Legal Challenges.' European Journal of Law and Public Administration 7.1 (2020): 100–115. doi:10.18662/eljpa/7.1/118.
UK Court of Appeal DB v GMC (2018) EWCA Civ 1497. www.bailii.org/ew/cases/EWCA/Civ/2018/1497.html.
Vecchi, Paolo. 'On the inadequacy of US Cloud providers.' Joinup, European Union, 30 July 2020. https://joinup.ec.europa.eu/collection/joinup/news/privacy-shield-invalidation.
Vlasschaert, Caitlyn, Joel Topf, Swapnil Hiremath. 'Proliferation of papers and preprints during the COVID-19 pandemic: Progress or problems with peer review?' Advances in Chronic Kidney Disease 27.5 (2020): 418–426. doi:10.1053/j.ackd.2020.08.003.
Wilkinson, Mark D., et al. 'The FAIR Guiding Principles for scientific data management and stewardship.' Scientific data 3.1 (2016): 1–9. doi:10.1038/sdata.2016.18.
World Economic Forum. 'Here's what happens every minute on the internet in 2020.' September 21, 2020. www.weforum.org/agenda/2020/09/internet-social-media-downloads-uploads-facebook-twitter-youtube-instagram-tiktok/.
Zhou, Chenghu, et al. 'COVID-19: Challenges to GIS with big data.' Geography and Sustainability 1 (2020): 77–87. doi:10.1016/j.geosus.2020.03.005.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.


7

The Pandemic Crisis as Test Case to Verify the European Union's Personal Data Protection System Ability to Support Scientific Research

VALENTINA COLCELLI*

Abstract

The pandemic crisis currently sweeping the world provides the European Union's personal data protection system with a test of its ability to support scientific research to tackle the health emergency. This chapter aims to analyse the main questions arising from the Guidelines 03/2020 in the context of the COVID-19 outbreak, adopted by the European Data Protection Board (EDPB) on 21 April 2020, which are relevant in the framework of the use of health data in the context of scientific research. According to the Guidelines, the provisions of Regulation (EU) 2016/679 (GDPR) on scientific research, if correctly applied, are able to protect personal health data in the context of COVID-19 research activities, subject to certain particular conditions regarding their application. While it is true that the GDPR includes several provisions on scientific research that favour such research (or, rather, that favour an understanding of its specific needs), the application of this Regulation is not always easy in the context of research, or at least is not well understood by researchers themselves. Thus, the research community needs reactions and specific suggestions from the European Union (EU) authorities in order to harmonise and strengthen, across the EU, the application of the GDPR to research activity using health data. This chapter introduces some suggestions for the EDPB with a view to harmonising, across the EU, the application of the GDPR to research activity that uses health data.1

* Researcher of the Italian National Research Council, PhD, Institute CNR-IFAC, Italy.
1 The present work has been carried out as part of the activities of the Umbria Biobank: Start up per una Biobanca in Umbria Project, n. PRJ-1506, with the support of the POR-FESR 2014–2020 Azione 2.3.1 Programme.


Keywords

GDPR; COVID-19 outbreak; research activity using health data; difficulties in application of GDPR

I. Introduction This chapter aims to analyse Guidelines 03/2020 in the context of the COVID-19 outbreak (hereinafter the Guidelines) adopted by the EDPB on 21 April 2020.2 The Guidelines aim to shed light on legal questions concerning the use of health data in the context of scientific research, and to analyse the application of the GDPR to the processing of health data for scientific research purposes against the background of the COVID-19 pandemic. According to the Guidelines, the GDPR provides special rules for the processing of health data for scientific research purposes that are also applicable in the context of the COVID-19 pandemic.3 The Guidelines do not revolve around the processing of personal data for epidemiological surveillance, and in any case developing further and more detailed guidance for the processing of health data for the purpose of scientific research is part of the annual work plan of the EDPB. Thus, the Guidelines aim to shed light on the most urgent questions related to the use of health data in research activity arising from the pandemic crisis, including the legal basis for this use, the implementation of adequate safeguards for the processing of health data and the exercise of subjects’ rights, and the application of the GDPR to the processing of health data for scientific research purposes in the context of the COVID-19 pandemic. In the above-mentioned framework, the findings of the Guidelines can be summarised as follows: 1. The GDPR provides special rules for the processing of health data for the purpose of scientific research, and these are also applicable in the context of the COVID-19 pandemic. 2. The national legislator of each Member State may enact specific laws pursuant to Article 9(2)(i) and (j) GDPR to enable the processing of health data for scientific research purposes. The processing of health data for the purpose of scientific research must also be covered by one of the legal bases in Article 6(1) GDPR. Therefore, the conditions for, and the extent of, such processing vary depending on the laws enacted in the particular Member State. 2 Guidelines 03/2020 on the processing of data concerning health for the purpose of scientific research in the context of the COVID-19 outbreak, adopted by the European Data Protection Board on 21 April 2020, at https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202003_ healthdatascientificresearchcovid19_en.pdf (retrieved 04.01.2021). [Guidelines 03/2020] 3 Guidelines 03/2020, p 14.

The Pandemic Crisis as Test Case  189 3. All enacted laws based on Article 9(2)(i) and (j) GDPR must be interpreted in the light of the principles set out in Article 5 GDPR and the jurisprudence of the European Court of Justice (ECJ). In particular, derogations and limitations in relation to the protection of data as permitted in Article 9(2)(j) and Article 89(2) GDPR must apply only insofar as is strictly necessary. 4. As regards the processing risks in the context of the COVID-19 outbreak, there must be strong emphasis on compliance with Articles 5(1)(f), 32(1) and 89(1) GDPR. There must be an assessment if a data protection impact assessment (DPIA) pursuant to Article 35 GDPR has to be carried out. 5. Storage periods (timelines) must be set and must be proportionate. In order to define such storage periods, criteria such as the length and the purpose of the research should be taken into account. National provisions may also stipulate rules concerning storage periods and must therefore be considered. 6. In principle, situations such as the current COVID-19 outbreak do not suspend the rights of data subjects or restrict the possibility of them exercising those rights pursuant to Articles 12 to 22 GDPR. However, Article 89(2) GDPR allows the national legislator to restrict (some) of the rights of data subjects, as set out in Chapter 3 of the GDPR. Because of this, the restrictions on the rights of data subjects may vary depending on the laws enacted in the particular Member State. 7. With respect to international transfers, in the absence of an adequacy decision pursuant to Article 45(3) GDPR or appropriate safeguards pursuant to Article 46 GDPR, public authorities and private entities may rely upon the applicable derogations pursuant to Article 49 GDPR. However, the derogations of Article 49 GDPR do only have an exceptional character. Therefore, this chapter tries to underline some specific aspects of the GDPR and its application to research activity, by summarising the way in which much of the content of the Guidelines can be extended into a more general reflection on the issue of scientific research and health data.4 As a matter of fact, the GDPR foresees a specific derogation to the prohibition on the processing of certain special categories of personal data, such as health data, when this is necessary for the purposes of scientific research (Articles 9(2)(j) and 89(2) GDPR). Thus, while it is true that Regulation (EU) 2016/679 is a wide-ranging piece of legislation and includes several provisions related to scientific research that favour such research (or, rather, that favour an understanding of the specific needs of such research), its application is not always easy in the context of research, or at least is not well understood by researchers themselves. Thus, the research community needs some reaction and specific suggestions from the EU authorities in order to harmonise 4 Niamh Clarke, Gillian Vale, Emer P Reeves, Mary Kirwan, David Smith, Michael Farrell, Gerard Hurl, Noel G McElvaney, ‘GDPR: An impediment to research?’, Irish Journal of Medical Science 188 (2019): 1129–1135.

and strengthen, across the EU, the application of the GDPR to research activity that uses health data.5
For a better understanding of the aims of the Guidelines, we have to bear in mind that, on the issue of processing health data for scientific research purposes and in the case of clinical trials, the GDPR should be applied simultaneously with the Clinical Trials Regulation in the EU, which contains specific relevant provisions but no derogations from the GDPR. The Guidelines use the definition of personal health data contained in Article 4(15) of the GDPR, but they emphasise that a broad interpretation of the notion of data concerning health, as referred to in the case law of the Court of Justice of the European Union (CJEU) and supplemented by the notion of mixed data as set out in Regulation (EU) 2018/1807 (datasets composed of both personal data and non-personal data where the former are inextricably linked to the latter), must be taken into account.
Thus, starting from the statement above, this chapter will analyse some of the main questions arising from the Guidelines 03/2020 that are relevant to the use of health data in the context of scientific research. To reach this goal, the chapter begins in section II by analysing the meaning of 'health data' by looking at the definitions in law, as well as examples from the literature. A consideration of the definition of 'health data' is necessary to understand the analysis of the Guidelines, because if the GDPR is to be correctly applied then the wording of the legislation (Article 4(15) GDPR) is not enough. The notion of 'health data' is complex, and, for instance, the notion of mixed data, as set out in Regulation (EU) 2018/1807, also has to be taken into account.
The Guidelines also analyse the legal basis for processing personal health data. Thus, section III defines this legal basis, with a focus on 'consent' and references to hard law and soft law. 'Consent' seems to remain the most relevant legal basis for the processing of personal data for scientific purposes even in this emergency situation: this seems to stem from a normative link that is drawn by some between consent as a research ethics principle and consent as a lawful basis in data protection law.6 In any event, the Guidelines also take into consideration alternative legal bases for the processing of health data, specifically in the context of the COVID-19 crisis. The next sub-section analyses the public interest and the legitimate interest of the data controller as legal bases, alternative to consent, for the processing of health data, with a focus on the Clinical Trials Regulation. Provided appropriate safeguards are in place and processing for scientific research has a basis in EU or Member State law, researchers can, among other things, keep health-related data stored for a long time, refuse to delete personal data even if the data subject withdraws their consent for participating in the research project, and use data from one research project for others.7

5 Kärt Pormeister, 'Genetic research and applicable law: The intra-EU conflict of laws as a regulatory challenge to cross-border genetic research', Journal of Law and the Biosciences 5(3) (2018): 706–723.
6 Edward S Dove and Jiahong Chen, 'Should consent for data processing be privileged in health research? A comparative legal analysis', International Data Privacy Law 10(2) (May 2020): 117–131.
7 Dove and Chen, 'Should consent for data processing be privileged' 117.

Sections III.A.i, III.A.ii and III.A.iii consider the problem of the withdrawal of consent and its impact on research activity and clinical trials; the EDPB reminds us in the Guidelines that the withdrawal of informed consent under Article 28(3) of the Clinical Trials Regulation should not be confused with the withdrawal of consent under the GDPR, and we must also avoid the 'consent misconception'.8 The Guidelines underline several aspects that could be defined as 'data protection principles' and that should be taken into consideration in support of research activities during the COVID-19 crisis. We try to summarise these. Thus, section IV analyses other key questions derived from the Guidelines in the COVID-19 context, such as transparency, information, the retention period and international data transfers taking place because of COVID-19 research.
In our opinion, the core statement of the Guidelines is the following:

Data protection rules (such as the GDPR) do not hinder measures taken in the fight against the COVID-19 pandemic. The GDPR is a broad piece of legislation and provides for several provisions that allow to handle the processing of personal data for the purpose of scientific research connected to the COVID-19 pandemic in compliance with the fundamental rights to privacy and personal data protection. The GDPR also foresees a specific derogation to the prohibition of processing of certain special categories of personal data, such as health data, where it is necessary for these purposes of scientific research.9

In response to this statement, section V introduces some elements of criticism and makes some suggestions to the EDPB with a view to harmonising, across the EU, the application of the GDPR to research activity using health data,10 in the light of the technological neutrality of the GDPR. Section VI concludes.

II.  The Meaning of Personal Health Data

The Guidelines analyse the notion of personal health data according to Article 4(15) GDPR and emphasise that the broad interpretation of the notion of health data, derived for instance from the case law of the CJEU, can also be taken into account. According to Article 4(15) GDPR, 'data concerning health' means 'personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status'; these data could thus include those covered by the ECJ's wide interpretation

8 Dove and Chen, 'Should consent for data processing be privileged' 117.
9 Guidelines 03/2020, p 4.
10 The suggestions in section V (Conclusions) also arise from the working group 'GDPR and regulatory issues' of the European, Middle Eastern, and African Society for Biopreservation and Biobanking (ESBB).

of 'data concerning health' derived from different sources, for example C-101/01 (Lindqvist) paragraph 50:11
1. Information collected by a health care provider in a patient record (such as medical history and results of examinations and treatments).
2. Information that becomes health data by cross-referencing with other data, thus revealing the state of health or health risks (such as the assumption that a person has a higher risk of suffering heart attacks based on high blood pressure measured over a certain period of time).
3. Information from a 'self-check' survey, where data subjects answer questions related to their health (such as stating symptoms).
4. Information that becomes health data because of its usage in a specific context (such as information regarding a recent trip to or presence in a region affected by COVID-19 processed by a medical professional to obtain a diagnosis).

Health data also include genetic information (Article 4(13) GDPR) 'kept for diagnostic and health purposes and for medical and other scientific research purposes' (International Declaration on Human Genetic Data),12 as well as information on people's lifestyle and environment if linked to health information, clinical data and genealogical data.13 The Guidelines thus specify that health data include information collected by a healthcare professional in a patient's medical record (such as medical history and results of examinations and treatments); information that becomes health data by cross-referencing with other data, thus revealing health status or health risks (such as the hypothesis that a person has an increased risk of suffering heart attacks based on high blood pressure measured over a certain period of time); information from a 'self-monitoring' survey, where data subjects answer questions related to their health (such as a declaration of symptoms); and information that becomes health data because of its use in a specific context (such as information about a recent trip to or presence in a region affected by COVID-19 processed by a doctor to make a diagnosis).

III.  Legal Basis for Processing Health Personal Data

According to the Guidelines:

The consent of the data subject, collected pursuant to Article 6(1)(a) and Article 9(2)(a) GDPR, may provide a legal basis for the processing of data concerning health in the COVID-19 context.

11 CJEU 6.11.2003, C-101/01 (Lindqvist), paragraph 50.
12 Roberto Cippitani, 'Genetic research and exceptions to the protection of personal data'. In Genetic Information and Individual Rights, edited by R Arnold, R Cippitani and V Colcelli, 54–79 (Regensburg: Universität Regensburg, 2018).
13 Jean V McHale, 'Regulating genetic databases: some legal and ethical issues', Medical Law Review 12 (2004): 70–96, 72.

However, it has to be noted that all the conditions for explicit consent, particularly those found in Article 4(11), Article 6(1)(a), Article 7 and Article 9(2)(a) GDPR, must be fulfilled. Notably, consent must be freely given, specific, informed, and unambiguous, and it must be made by way of a statement or 'clear affirmative action'.14

Informed consent is the fundamental ethical and legal requirement that makes it possible for donors to participate democratically and consciously in the activities of biomedical research and biobanks. The consent of the data subject, collected pursuant to Article 6(1)(a) and Article 9(2)(a) GDPR, may provide a legal basis for the processing of data concerning health in the COVID-19 context. It has to be noted that all the conditions for explicit consent, particularly those found in Article 4(11), Article 6(1)(a), Article 7 and Article 9(2)(a) GDPR, must be fulfilled. Notably, consent must be freely given, specific, informed, and unambiguous, and it must be made by way of a statement or 'clear affirmative action'.15
Recital 33 GDPR indicates that the EU legal system may use so-called 'broad consent' to collect personal data for processing in the context of scientific research purposes. Before the entry into force of the GDPR, broad consent was not taken into immediate consideration by EU law, even though the international ethical and legal frameworks in the health and clinical fields had always pushed for its use: the World Medical Association, as well as the Council for International Organizations of Medical Sciences/World Health Organization (CIOMS/WHO), approved its use in 2016. In the EU, its regulation was only implemented in 2018 with the GDPR.16 Recital 33 states that 'Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose', if – and only if – the research project is 'in keeping with recognized ethical standards for scientific research'. When research purposes cannot be fully specified, the controller must seek other ways to ensure the essence of the consent requirements, for example by allowing data subjects to consent to a research purpose in more general terms and to specific stages of a research project that are already known to take place at the outset. As the research advances, consent for subsequent steps in the project, or for a given scientific area, can be obtained before that next stage begins. Such consent should nevertheless remain in line with the applicable ethical standards for scientific research. Taking into consideration this last affirmation and the meaning of scientific research in the EU framework, we can identify the possibility of dealing with the problem of future research.

194  Valentina Colcelli The GDPR covers broad areas of scientific research, provided that they are genuine scientific studies. As stated by the European Data Protection Supervisor, genuine scientific research does not equal ethical scientific research.17 The Guidelines on Consent under Regulation 2016/679 realised by the Article 29 Work Group define the notion of scientific research in the framework of GDPR as a research project established by relevant sector-related methodological and ethical standards, in conformity with good practice.18 This means, first, that personal data could be treated only if connected with a research project developed in the framework of the activities of public or private research organisations. The project has to define and adopt the property measures according to the GDPR and indicate the data treatment controllers. We must also take into consideration the difference between informed consent according to Article 28 of Regulation (EU) no 536/2014 of the European Parliament and of the Council of 16 April 2014 on Clinical Trials on Medicinal products for human use, and repealing Directive 2001/20/EC (hereinafter Regulation of Clinical Trial) from the explicit consent established in Article 7 of Regulation (EU) no 679/2016 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing the GDPR. Moreover, the legal basis grounding the treatment of personal data has an impact on how to manage the withdrawal of consent.

A.  Alternative Legal Basis to Consent: Public Interest or Legitimate Interest of the Data Controller The legal basis for processing changes depending on the purpose of both when data of a health nature are processed for scientific research ‘or/as well as’ for ­scientific research related to clinical trials, on the assumption that they are connected to a research project, has been drawn up in accordance with the methodological standards of the relevant disciplinary field. The legal basis for processing is not consent in the context of the Clinical Trials Regulation if the processing of personal data of a health nature is considered necessary in order to comply with the legal obligations to which the sponsor and/or the investigator are subject (such as in the case of a safety communication or inspection by the national competent authority, the storage of clinical trial data in accordance with the archiving obligations set out in the Clinical

17 European Data Protection Supervisor, Fifth World Congress for Freedom of Scientific Research.
18 Guidelines on Consent under Regulation 2016/679 (wp259rev.01), adopted on 28 November 2017 and revised on 10 April 2018, pp 27–31, at https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=623051.

Trials Regulation, or as the case may be in the relevant national legislation). Thus, for reasons of reliability and safety in the case of clinical trials, the legal basis is then Article 6(1)(c), ie, the fulfilment of a legal obligation to which the data controller is subject. In contrast, research-related processing in the context of a clinical trial may fall under three different legal bases: consent (Article 6(1)(a), read in conjunction with Article 9(2)(a)); public interest (Article 6(1)(e)); or a basis that is necessary to pursue the legitimate interest of the controller (Article 6(1)(f), read in conjunction with Article 9(2)(i) or (j)). Additionally, the Guidelines state that in the COVID-19 context – if there were any need to restate the general approach underlying the GDPR – the processing of health data for scientific research purposes must fall within one of the legal bases referred to in Article 6(1) of the GDPR, without prejudice to national law. Furthermore, the consent of the data subject, collected pursuant to Article 6(1)(a) and Article 9(2)(a) GDPR, may provide a legal basis for the processing of data concerning health in the COVID-19 context. In any event, the processing of health data for research purposes may also be based on legal bases other than consent, ie, on the conditions laid down in Article 6(1)(e) or (f): public interest or legitimate interest of the data controller. The Guidelines consider that, depending on the specific circumstances of the clinical trial, the appropriate condition in Article 9 for all processing of sensitive data for exclusively research purposes could be:
(a) 'reasons of public interest in the field of public health […] based on the law […] of the Member States' (Article 9(2)(i)) or,
(b) 'purposes of […] scientific research […] in accordance with Article 89(1), on the basis of Union or national law' (Article 9(2)(j)).
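The branching just described can be summarised schematically. The short Python sketch below is a didactic simplification of ours, not a method given in the Guidelines: the purpose labels are hypothetical, and a real determination is a legal assessment rather than a lookup. Safety-related obligations under the Clinical Trials Regulation point to Article 6(1)(c), while research-related processing may rest on consent, public interest or legitimate interest.

def legal_basis(purpose: str, has_basis_in_law: bool = False) -> str:
    # Safety communications, inspections, archiving: legal obligation.
    if purpose == "safety_obligation":
        return "Art 6(1)(c) GDPR - legal obligation"
    # Research-related processing in a clinical trial: three possibilities.
    if purpose == "research_consent":
        return "Art 6(1)(a) with Art 9(2)(a) GDPR - explicit consent"
    if purpose == "research_public_interest" and has_basis_in_law:
        return "Art 6(1)(e) with Art 9(2)(i) or (j) GDPR - public interest"
    if purpose == "research_legitimate_interest":
        return "Art 6(1)(f) with Art 9(2)(i) or (j) GDPR - legitimate interest"
    raise ValueError("no lawful basis identified: processing must not proceed")

print(legal_basis("safety_obligation"))
print(legal_basis("research_public_interest", has_basis_in_law=True))

The public interest branch is gated behind a flag because, as noted below, that basis must be laid down by Union or Member State law.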

Moreover, Article 168 of the Treaty on the Functioning of the European Union identifies a 'high level of human health protection' as one of the most important EU objectives that 'shall be ensured in the definition and implementation of all Union policies and activities'. Union action shall cover the fight against all major health issues by promoting research into their causes, their transmission and their prevention, and by monitoring, providing early warning of and combating serious cross-border threats to health. On this basis, in the context of a clinical trial, as well as the COVID-19 outbreak, processing personal data 'is necessary for reasons of public interest in the area of public health'.19 Consequently, an appropriate condition for the lawfulness of

19 Some examples of national legislation: the German higher federal authorities (the Federal Institute for Medicinal Products and Medical Devices (BfArM) and the Paul-Ehrlich-Institut (PEI)) have issued supplemental national guidance regarding required changes to clinical trial protocols, the supply of investigational medicinal products (IMP), and the conduct of site monitoring during the COVID-19 pandemic. On 12 March 2020, due to the COVID-19 outbreak and the consequent lockdown affecting non-essential services in Italy, including some health care services, the Italian Medicines Agency (AIFA) issued guidance ('Guidance') to provide directions to sponsors,

the processing of special categories of data in the context of such obligations is Article 9(2)(i), if it assures
'[…] high standards of quality and safety of health care and of medicinal products and medical devices, on the basis of Union or Member State law providing for appropriate and specific measures to protect the rights and freedoms of data subjects, notably professional secrecy'.20

It is understood that the two aforementioned public interest hypotheses must find their basis in Union or Member State law, and thus be based on rules providing for appropriate and specific measures to protect the rights and freedoms of data subjects. A legal basis qualifying as public interest or legitimate interest of the data controller is applicable because of Article 9 GDPR, which provides for a specific derogation to the general prohibition on processing special categories of data. This is indeed a foreseen possibility, with the indication that the derogating rules, be they European or enacted at Member State level, will have to pay special attention to professional secrecy with regard to the hypothesis referred to in letter (i) of Article 9, and apply only to the extent strictly necessary (Article 89(2) GDPR) with regard to the processing necessary for archiving in the public interest, or for scientific or historical research or statistical purposes. Recital 45 of the GDPR clarifies that the Regulation does not require there to be a specific legislative act for each individual processing operation (ie, for each clinical trial). A legislative act serving as a basis for several processing operations based on the performance of a task in the public interest may be sufficient.21 In any event, all laws enacted based on Article 9(2)(i) and (j) of the GDPR must be interpreted in light of the principles under Article 5 of the GDPR and in consideration of the case law of the CJEU.

non-profit organisations, and contract research organisations (CROs) involved in clinical trials. The Guidance tackles difficulties that sponsors are experiencing in clinical trials due to the restrictions on patients' access to the trial sites. AIFA sets out conditions for alternative arrangements in compliance with Good Clinical Practice (GCP) with a view toward ensuring therapeutic continuity in clinical trials to the benefit of the patients. The Guidance covers all the various phases of the clinical trial, from the electronic submission of applications and amendments to the management of the clinical trial's activities – such as the delivery of the IMP directly to the patients or the remote monitoring of the trial, including circumstances where trial sites have been closed and trials must be temporarily halted. The UK Medicines and Healthcare products Regulatory Agency (MHRA) has issued guidance on the management of clinical trials in the context of coronavirus. Trial protocol and standard operating procedure deviations are already starting to occur in the UK as a result of the coronavirus. Many of these deviations relate to missed visits due to patients who are infected or in self-isolation, or who have been advised not to, or would prefer not to, attend public places such as hospitals. Changes in trial processes are also becoming necessary as a result of staff involved in running the trial working from home. (See www.engage.hoganlovells.com/knowledgeservices/insights/italys-aifa-issues-clinical-trial-management-guidance-during-covid-19-outbreak).
20 Opinion 3/2019 concerning the Questions and Answers on the interplay between the Clinical Trials Regulation (CTR) and the General Data Protection Regulation (GDPR) (Art 70.1.b)), adopted on 23 January 2019, p 5.
21 Opinion 3/2019, cited, p 8. See also Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC, adopted 9 April 2014, Article 29 Working Party, p 22.

In particular, the exemptions and limitations in relation to data protection provided for in Article 9(2)(j) and Article 89(2) of the GDPR should only apply to the extent strictly necessary. So, for instance, a large population-based study conducted on COVID-19 patient records established by national rule certainly falls within the legal basis of public interest.22

For all other situations where the conduct of clinical trials cannot be regarded as necessary for the performance of public interest tasks conferred on the controller by law, the Committee considers that the processing of personal data could be 'necessary for the purposes of pursuing the legitimate interests of the controller or a third party, provided that the interests or the fundamental rights and freedoms of the data subject are not overridden' under Article 6(1)(f) of the General Data Protection Regulation.23

i.  Some Elements to Clarify the Meaning of Withdrawal in Light of the GDPR

In light of the Guidelines 03/2020, withdrawal of consent is one of the data subject's rights that cannot be limited. According to Article 7(3) of the GDPR:

The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent.

This statement shows how withdrawal conditions the validity of explicit consent: if the withdrawal mechanism does not meet the requirements dictated by the law, the consent mechanism does not comply with the GDPR. Therefore, to acquire consent, the controller must inform the data subject about her/his right to withdraw and how to exercise that right.24 Moreover, the data subject may, without any resulting detriment and without having to provide any justification, withdraw from the purely research-related activities or from the clinical trials at any time by revoking her/his informed consent. Among the requirements mentioned above, the ease of withdrawal is one of the most important: withdrawing consent has to be as easy as giving it. For the withdrawal to be 'easy', it could be realised in the same way in which the data subject provided the consent. The GDPR does not require the same action, but this could be important in the case of consent for research activities where the consent is obtained by technological instruments, as in the case of dynamic consent given through a website in a biobank. The Article 29 Working Party analysed such a situation as a general case.

When consent is obtained via electronic means through only one mouse-click, swipe, or keystroke, data subjects must, in practice, be able to withdraw that consent equally as easily. Where consent is obtained through use of a service-specific user interface (for example, via a website, an app, a log-on account, the interface of an IoT device or by e-mail), there is no doubt a data subject must be able to withdraw consent via the same electronic interface, as switching to another interface for the sole reason of withdrawing consent would require undue effort.25 The cases described will be common in the near future with the development of instruments to implement dynamic consent and the use of blockchain in biobanking activities. Using technological instruments, such as an app or website, to obtain the consent of patients or participants in research means adopting a way to withdraw that is as easy as, and uses the same actions as, giving consent. The consent form should always specify how to withdraw, such as by email. Although the jurisprudence of the CJEU26 notes that any restriction of the rights of the data subjects must be applied only to the extent necessary, the Guidelines underline that Article 89(2) of the GDPR allows the national legislator to limit (some of) the data subject's rights set out in Chapter 3 of that regulation. However, the restrictions of data subjects' rights may vary depending on the legislative provisions enacted in each Member State. In this situation, the current COVID-19 outbreak does not suspend or restrict the possibility for data subjects to exercise their rights under Articles 12 to 22 of the GDPR, and the right to withdraw cannot be limited.
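As a purely illustrative sketch of this parity requirement (the interface and method names below are hypothetical, not taken from the GDPR or the Guidelines), a consent service exposing withdrawal through the same one-step channel used to give consent might look as follows in Python. Note that past events stay recorded: processing carried out before withdrawal remains lawful under Article 7(3) GDPR, and only future processing must stop.

from datetime import datetime, timezone

class ConsentChannel:
    """One interface (eg a biobank website) for giving and withdrawing."""

    def __init__(self) -> None:
        self.events = []          # append-only record of consent events
        self.active = False

    def give(self) -> None:
        self.active = True
        self.events.append((datetime.now(timezone.utc), "given"))

    def withdraw(self) -> None:
        # Deliberately symmetrical with give(): one step, same interface.
        self.active = False
        self.events.append((datetime.now(timezone.utc), "withdrawn"))

channel = ConsentChannel()
channel.give()
channel.withdraw()
assert not channel.active                            # no further processing
assert any(e == "given" for _, e in channel.events)  # history preserved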

ii.  The Impact of Withdrawal on Purely Research-Related Activities and/or Clinical Trials

The EDPB considers that the withdrawal of informed consent under Article 28(3) of the Clinical Trials Regulation should not be confused with the withdrawal of consent under the General Data Protection Regulation,27 because in the context of clinical trials, the consent of the data subject is limited to processing exclusively related to research activities.28 If consent is used as a legal basis for processing health-related data for research purposes, the data subject is always entitled to withdraw her/his consent at any time, pursuant to Article 7(3) of the GDPR.

25 Guidelines on Consent under Regulation 2016/679, Article 29 Working Party, published 28 November 2017, WP259, available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=623051, p 21.
26 CJEU 14.02.2019, C-345/17 (Buivids), recital 64.
27 Opinion 3/2019 concerning the Questions and Answers on the interplay between the Clinical Trials Regulation (CTR) and the General Data Protection Regulation (GDPR) (Art 70.1.b)), adopted on 23 January 2019, https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_opinionctrq_a_final_it.pdf.
28 André P Gabriel and Charles P Mercado, 'Data retention after a patient withdraws consent in clinical trials', Open Access Journal of Clinical Trials 3 (2011): 15–19.

The withdrawal of consent shall not affect the lawfulness of processing based on consent before the withdrawal. This is a general rule that does not provide for any exception for scientific research.29 If consent is used as a legal basis for processing health-related data for research purposes, the data subject must have the opportunity to withdraw the consent at any time, pursuant to Article 7(3) of the GDPR. If consent is withdrawn, all processing operations that were based on consent remain lawful, in accordance with the GDPR, but Article 17(1)(b) and (3) of the GDPR reminds us that the controller must cease the processing activities in question and, in the absence of any other legal basis justifying retention for further processing (eg, further storage), the data should be deleted. In the context of a clinical trial, the data subject may, without any resulting detriment and without having to provide any justification, withdraw from the clinical trial at any time by revoking her/his informed consent. As mentioned above, in the context of clinical trials the consent of the data subject is limited to processing exclusively related to research activities. This implies that, in the event of withdrawal of consent by the data subject, all research activities carried out with the clinical trial data relating to that person must cease. Thus, withdrawal of consent shall not affect processing operations that are based on other legal grounds, in particular legal obligations to which the sponsor/investigator is subject, such as those related to safety purposes.30 For clinical trial activity grounded on consent, according to Article 28(3) of the Clinical Trials Regulation, 'the withdrawal of the informed consent shall not affect the activities already carried out and the use of data obtained based on informed consent before its withdrawal'.

iii.  The Case of Withdrawal of Consent and Other Legal Bases for Personal Data

Controllers have an obligation to delete data that was processed on the basis of consent once that consent is withdrawn, assuming that there is no other purpose justifying the continued retention. Besides this situation, covered in Article 17(1)(b), an individual data subject may request erasure of other data concerning him that is processed on another lawful basis, eg on the basis of Article 6(1)(b).31

This provision must also take into account the Clinical Trials Regulation and the different nature of the consent underlying the research activity. In the case of a clinical trial, processing operations purely related to research activities must be distinguished from processing operations related to the

29 Guidelines on Consent under Regulation 2016/679, Article 29 Working Party, cited.
30 Opinion 3/2019 concerning the Questions and Answers on the interplay between the Clinical Trials Regulation (CTR) and the General Data Protection Regulation (GDPR) (Art 70.1.b)), adopted on 23 January 2019, https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_opinionctrq_a_final_it.pdf.
31 Guidelines on Consent under Regulation 2016/679, Article 29 Working Party, cited.

purposes of protection of health, while setting standards of quality and safety for medicinal products by generating reliable and robust data (reliability- and safety-related purposes). These two main categories of processing activities fall under different legal bases, distinct from consent. So, the withdrawal of consent does not affect the processing operations related to the purposes of protection of health. Therefore, a safety communication or inspection by the competent national authority, or the retention of clinical trial data in accordance with the archiving obligations established by the Clinical Trials Regulation or, as the case may be, by the relevant national legislation, must be considered necessary to comply with the legal obligations to which the sponsor and/or the investigator32 are subject. The legal basis for the processing is then not 'consent' but compliance with a legal obligation (Article 6(1)(c) GDPR). At the same time, the processing of personal data by data controllers could also be considered 'necessary for the performance of a task carried out in the public interest' pursuant to Article 6(1)(e) GDPR if the conduct of the clinical trial directly falls within the mandate, mission, and task vested in a public or private body by national law.33 The legal basis shall be laid down by Union or Member State law (Article 6(3) GDPR). In this case, as in the case of purely research-related activity falling outside the application of consent as a legal basis, Article 6 lays down the conditions for lawful processing of personal data and describes six legal bases on which a controller may rely:

The application of one of these six bases must be established prior to the processing activity and in relation to a specific purpose. (…) In cases where the data subject withdraws his/her consent and the controller wishes to continue to process the personal data on another lawful basis, they cannot silently migrate from consent (which is withdrawn) to this other lawful basis. Any change in the lawful basis for processing must be notified to a data subject in accordance with the information requirements in Articles 13 and 14 and under the general principle of transparency.34

IV.  Management of Personal Data for Scientific Research Purposes Related to the Covid-19 Pandemic: Transparency, Information, Retention Period

The principles relating to processing of personal data pursuant to Article 5 GDPR shall be respected by the controller and processor, especially considering that a great amount of personal data may be processed for the purpose of scientific research. Considering

32 Opinion 3/2019, cited, p 5.
33 Questions and Answers on the interplay between the Clinical Trials Regulation and the General Data Protection Regulation, available at: https://ec.europa.eu/health/sites/health/files/files/documents/qa_clinicaltrials_gdpr_en.pdf.
34 Guidelines on Consent under Regulation 2016/679, Article 29 Working Party, cited.

the context of the present guidelines, the most important aspects of these principles are addressed in the following.35

In the context of the COVID-19 outbreak, one must pay particular attention to measures for the safeguarding and security of personal health data, and the Guidelines place specific focus on these elements. According to Article 89(1) GDPR, data processing for research 'shall be subject to appropriate safeguards': these safeguards shall ensure that technical and organisational measures are put in place, in particular to ensure that the principle of data minimisation is respected.36 The requirements of Article 89(1) GDPR emphasise the importance of the principle of data minimisation and the principle of integrity and confidentiality, as well as the principle of data protection both by design and by default. Data must be anonymised wherever possible to carry out scientific research; otherwise, appropriate safeguards for the data subject are ensured by compliance with the principle of minimisation. In most cases in scientific research, it is possible to comply with the principle of minimisation by specifying the research questions and assessing the type and amount of information needed to adequately answer those questions. Such measures may include pseudonymisation, provided these purposes can be met in that way, or the use of encryption, non-disclosure agreements and strict authorisation, and restriction and record-keeping provisions. Considering the sensitive nature of health data and the risks when reusing such data for scientific research purposes, rigorous measures must be implemented in order to ensure an appropriate level of security as required by Article 32(1) GDPR. There must also be an appreciation of whether or not a data protection impact assessment (DPIA) must be carried out pursuant to Article 35 GDPR when the processing 'is likely to present a high risk to the rights and freedoms of natural persons' (Recital 75 GDPR). International data transfers may be a risk factor to be considered in the context of a DPIA under Article 35 GDPR. The Guidelines remind us that the main innovation introduced by the GDPR is the principle of accountability, which aims to ensure compliance with data protection principles and implies a cultural change that endorses transparent data protection, privacy policies and user control, internal clarity and procedures for privacy operations, and high-level, demonstrable accountability to external stakeholders and data protection authorities. The GDPR requires that the data controller is responsible for ensuring that all privacy principles are respected. In addition, the GDPR requires the organisation and body to demonstrate compliance with all the principles of the Regulation; namely, the principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation,

35 Guidelines 03/2020, cited.
36 See Communication from the Commission, Guidance on Apps supporting the fight against the COVID-19 pandemic in relation to data protection (2020/C 124 I/01).

and integrity and confidentiality. The appointment of a Data Protection Officer (DPO) is one way of incorporating the principle of accountability. The principles of both accountability and minimisation bring with them respect for the principle of proportionate storage periods. Proportionate retention periods (terms) must be established. In order to define such retention periods, criteria such as the duration of the research and its purpose must be taken into account. National provisions can also establish rules on the storage period and, therefore, must be considered. In the current pandemic situation, researchers may process health-related data that they have not obtained directly from the data subject, for example because they use data from medical records or from abroad. According to the GDPR, when data is collected from the data subject, information must be provided at the time the personal data is obtained and when the information is updated, in accordance with the general principles of fair and transparent processing (Articles 13 and 14). When data is collected through third parties and/or used for secondary purposes, information must be provided 'within a reasonable period after obtaining the personal data, but at the latest within one month' (Article 14 GDPR). In general, the obligation to provide information does not apply where and to the extent that the data subject has already received information about the processing. Where personal data have been obtained not from the data subject but from another source, there is an exemption from the obligation to provide information to the person if:
– the provision of such information proves impossible,
– the provision of such information would involve a disproportionate effort, and/or
– the obligation is likely to render impossible or seriously prejudice the attainment of the research objectives of the processing of personal data.
In the context of the emergency caused by COVID-19, to guarantee the rights and freedoms of data subjects, the Guidelines expressly call for consultations with 'data protection officers regarding the processing of health data for scientific research purposes'. In the field of pandemic control, international cooperation cannot be disregarded, involving international transfers of health data for scientific research purposes outside the EU (eg, to identify treatments and/or develop vaccines). International data transfers occasioned by COVID-19 research may, in the absence of an adequacy decision pursuant to Article 45(3) GDPR or appropriate safeguards pursuant to Article 46 GDPR, be carried out in accordance with Article 49 GDPR. That Article envisages certain specific situations under which transfers of personal data can take place as an exception. The derogations enshrined in Article 49 GDPR are thus

exemptions from the general rule and, therefore, must be interpreted restrictively, and on a case-by-case basis. Applied to the current COVID-19 crisis, the derogations addressed in Article 49(1)(d) ('transfer necessary for important reasons of public interest') and (a) ('explicit consent') may apply.
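One of the Article 89(1) safeguards discussed in section IV, pseudonymisation, can be illustrated with a short Python sketch. The particular technique shown (keyed hashing with HMAC-SHA256, with the key held separately from the research dataset) is an assumption of ours for illustration only: the Guidelines name pseudonymisation as a safeguard but do not prescribe any specific mechanism.

import hashlib
import hmac

# Hypothetical key, held by the controller separately from the researchers.
SECRET_KEY = b"kept-outside-the-research-dataset"

def pseudonymise(identifier: str) -> str:
    # Stable pseudonym: the same input yields the same output, enabling
    # record linkage, but reversal is infeasible without the separately
    # held key.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

clinical_record = {"patient_id": "IT-RM-000123", "diagnosis": "COVID-19"}
research_record = {
    "pid": pseudonymise(clinical_record["patient_id"]),  # identifier replaced
    "diagnosis": clinical_record["diagnosis"],
}
print(research_record)

The design point, echoed later in this chapter when discussing Recital 26 GDPR, is that whoever holds only the research dataset, and not the key, cannot re-identify the data subjects on their own.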

V.  Criticisms of the Application of the GDPR

In the Guidelines, the EDPB affirms that the GDPR rules for the processing of health data for scientific research purposes are also fully applicable in the context of the COVID-19 pandemic, and thus are sufficient to apply to the associated research and data; they can then – a fortiori – be more usefully applied to research activity beyond this particular period. This affirmation also seems to follow the idea of the technological neutrality of the GDPR, as described in Recital 15:

In order to prevent creating a serious risk of circumvention, the protection of natural persons should be technologically neutral and should not depend on the techniques used.

From the legal point of view, this neutrality is stated in defence of the principles of equality and solidarity.37 To put it more generally, technology neutrality means that:
1) technical standards designed to limit negative externalities (eg radio interference, pollution, safety) should describe the result to be achieved, but should leave companies free to adopt whatever technology is most appropriate to achieve the result;
2) the same regulatory principles should apply regardless of the technology used, and regulations should not be drafted in technological silos; and
3) regulators should refrain from using regulations as a means to push the market toward a particular structure that the regulators consider optimal.38
In the light of the GDPR, personal data must be protected regardless of the technology used or the way in which the personal data is stored. The GDPR can cover any technological implementation through the fundamental rights, because the regulatory principles apply regardless of the technology used. No further law is required. Thus, no special rules are necessary for the processing of data in research activity as part of the pandemic crisis, nor for scientific research efforts in the fight against SARS-CoV-2 aimed at producing

37 Giovanna De Minico, 'Net Neutrality. Come Diritto Fondamentale Di Chi Verrà', Costituzionalismo.it, 1 (2016): 1–40.
38 Winston Maxwell and Marc Bourreau, 'Technology neutrality in internet, telecoms and data protection regulation', Computer and Telecommunications Law Review (2014): 1.

research results as fast as possible; this is because the GDPR foresees a specific derogation to the prohibition on the processing of certain special categories of personal data, such as health data, where this is necessary for the purposes of scientific research (Article 9(2)(j) GDPR and Article 89(2) GDPR). Following the logic of the EDPB, it can be argued that if the GDPR rules for the processing of health data for scientific research purposes are sufficient to cover research and data in the face of the current pandemic, these rules can be more meaningfully applied to research activities beyond this exceptional period.39 The breadth of the GDPR's provisions might not allow for 'full' legal certainty, and so, while no further legislation is needed at EU level to address the pandemic, the national legislator in each Member State may still adopt specific legislative provisions in accordance with Article 9(2)(i) and (j) GDPR to allow the processing of health data for scientific research purposes. This follows another general approach laid down in Article 9 GDPR, according to which the Member States may maintain or introduce more specific provisions adapting the application of the rules of the Regulation with regard to scientific research. Thus, regardless of the coordinating efforts of the GDPR, research regulations remain fragmented as regards the data protection framework. This is mainly due to the vast discretion granted to Member States in the GDPR in this area. This situation produces a fragmentation of the application of the GDPR at the national level that has an impact on research activity in several ways. For instance, although the GDPR enables data flows for research cooperation in the EU, the rules at national level regarding research exemptions create a hurdle for cross-border research, since there is no direct referral rule for the intra-EU conflict of laws that inevitably arises in a fragmented regulatory framework: ways to solve the dilemma of which national law is applicable under the GDPR in cross-border research activity are needed, to avoid either the interests of data subjects or the interests of researchers being compromised. A common legal framework is also needed in relation to the ownership of biological samples. The ownership rights in human biological material are ethically and legally challenging in the context of research activity. At the moment, there is no legal uniformity in the answer to the question of the ownership of biological samples and the rights that can be exercised over them. We believe it is time for a common normative framework shaped by a balanced respect for both individual and scientific/societal interests and a specific legal framework as part of property rights. It is also difficult to ascertain what the legal requirements are in each country across Europe when it comes to the 'secondary use' of tissue. The GDPR does not deal with the secondary use of personal data for research purposes. Secondary

39 Laura R Bradford, Mateo Aboy and Kathleen Liddell, ‘COVID-19 Contact Tracing Apps: A Stress Test for Privacy, the GDPR and Data Protection Regimes’, Journal of Law and the Biosciences (2020), University of Cambridge Faculty of Law Research Paper No. 23/2020.

use refers to the use of data or samples for a purpose that is different from that for which they were originally collected. This is an aspect on which the GDPR is silent. One would hope that this silence does not preclude a secondary use that was impossible to anticipate but which is essential for the research activity. Often, a new purpose is only known after the initial processing of personal health data, and the reality is that all data that are derived from studies associated with the genome, and broad population studies that increasingly often use electronic health records (EHRs) and/or electronic medical records, will fall under this legislation. Although there is no mention in the GDPR of pseudonymisation or anonymisation as matters falling under these derogations or under Member State law, the point at which anonymised personal data are re-identified is one of the problems that must be solved in the light of national law, according to the different approaches possible at national level (for instance, taking into consideration the different positions taken by the national Data Protection Authorities).40 As a matter of fact, Recital 26 GDPR does not make it clear that pseudonymised data should not be considered to be personal data in the hands of an entity that does not possess the code to re-identify the data, provided that appropriate measures are put in place to prevent the holder of the data from gaining access to the key or otherwise linking the dataset to other data sources that may permit re-identification. In the case of a clinical trial, the GDPR must be applied simultaneously with the Clinical Trials Regulation, which contains specific relevant provisions but no derogations from the GDPR. The legal basis for the processing of health data may change according to the purpose, whether the health data are processed for scientific research as such or for scientific research related to a clinical trial. Broad consent is a type of consent where a participant expresses his/her general consent that his/her own personal information, including bio-medical or health-related information and/or tissue samples, can be used in further research, without a new explicit consent being given by him/her. The emergence of biobanks as a vital research tool in medical sciences has led to widespread debate in the literature about how best to handle the informed consent procedures governing the enrolment of participants in research, and the subsequent use of participant

40 Statement on the processing of personal data in the context of the COVID-19 outbreak, by the European Data Protection Board on 19 March 2020, at https://edpb.europa.eu/sites/default/files/files/news/edpb_statement_2020_processingpersonaldataandcovid-19_en.pdf. Because personal data protection rules do not apply to data that has been appropriately anonymised, the EDPB in its Statement on the processing of personal data in the context of the COVID-19 outbreak suggests: 'in some Member States, governments envisage using mobile location data as a possible way to monitor, contain or mitigate the spread of COVID-19. This would imply, for instance, the possibility to geolocate individuals or to send public health messages to individuals in a specific area by phone or text message. Public authorities should first seek to process location data in an anonymous way (ie processing data aggregated in a way that individuals cannot be re-identified), which could enable generating reports on the concentration of mobile devices at a certain location ("cartography")'.

samples and data in other studies. When the broad consent model is applied, general consent is gathered at the time of enrolment (subject to a set of limitations and restrictions that are formulated by the biobank and/or a regulatory authority and are stated in the consent form). Subsequently, samples stored in the biobank can be used for new studies that fall within the scope of the consent without re-obtaining consent from the participants. Medical researchers defend the broad model by arguing that it is the best way to make large-scale biobank research feasible. However, a clarification about broad consent is strongly recommended to support research activities. The question is whether or not broad consent includes the dual use of biomaterial samples and their associated personal data. Additionally, in light of the fight against COVID-19, the processing of health data for research purposes may be based on legal grounds other than consent: public interest or the legitimate interest of the data controller (Article 6(1)(e) or (f)). In the context of both a clinical trial and the COVID-19 outbreak, 'processing is necessary for reasons of public interest in the area of public health', where the public interest lies in high standards of quality and safety of health care and of medicinal products and medical devices, if Union or Member State laws provide appropriate and specific measures to protect the rights and freedoms of data subjects. Indeed, Member States may maintain or introduce more specific provisions adapting the application of the rules of the Regulation with regard to processing, when such processing is necessary for compliance with a legal obligation to which the controller is subject or is necessary in order to protect the vital interests of the data subject or of another natural person, by determining more specific requirements for processing and other measures to ensure that the processing is lawful and accurate. In the field of pandemic control, international cooperation, involving international transfers of health data for scientific research purposes outside the EU (eg to identify treatments and/or develop vaccines), cannot be disregarded. So, when data are to be transferred to 'third countries' (ie countries which are not part of the EU), or to international organisations, the GDPR contains several restrictions, including in the case of scientific activities. There is no mention of the transfer of personal data from third countries to EU Member States: this could be a problem for the controller. Taking into account the risks posed by data processing in the context of the COVID-19 outbreak, it is necessary to ensure compliance with Article 5(1)(f), Article 32(1) and Article 89(1) GDPR. Moreover, the relevance of carrying out a data protection impact assessment in accordance with Article 35 of the GDPR must be strongly considered. In the framework of research activity on COVID-19, data retention periods should be proportionate, and criteria such as the duration and purpose of the research should be examined; a worked sketch follows below. National provisions, in this case too, may regulate the retention period, and this should be reviewed.
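By way of illustration of proportionate retention (the figures and parameter names below are placeholders of ours, not values taken from the GDPR or from any national law), a retention check driven by the declared duration of the research and capped by a stricter national rule might be sketched in Python as follows:

from datetime import date, timedelta
from typing import Optional

def retention_deadline(collected: date, study_years: int,
                       national_cap_years: Optional[int] = None) -> date:
    # The permitted period follows the declared duration and purpose of
    # the research; a stricter national ceiling, if any, prevails.
    years = (study_years if national_cap_years is None
             else min(study_years, national_cap_years))
    return collected + timedelta(days=365 * years)

def must_delete(today: date, deadline: date) -> bool:
    return today > deadline

deadline = retention_deadline(date(2020, 4, 21), study_years=5,
                              national_cap_years=3)
print(deadline, must_delete(date(2024, 1, 1), deadline))  # True: past the cap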

The Pandemic Crisis as Test Case  207 More generally, all systems can be strongly influenced by legislation enacted at Member State level, given the constant reference to this possibility by the GDPR in Article 9(2)(i) and (j). Harmonisation is definitely necessary to avoid the uncertainty in application that we are living with beyond this exceptional period.

VI. Conclusion

The pandemic crisis currently sweeping the world provides the EU’s personal data protection system with a test case to verify its ability to support scientific research in tackling the health emergency. The EDPB Guidelines analyse the legal bases for the processing of health data for scientific research. The GDPR already provides special rules for the processing of health-related data for scientific research purposes, rules that are fully applicable in the context of the COVID-19 outbreak and are sufficient to allow research and the use of data in the face of the current pandemic. Following the logic of the EDPB, it can be argued that, if the rules of the GDPR for the processing of health-related data for scientific research purposes are sufficient to allow research and the use of data in the face of the current pandemic, such rules can – a fortiori – be meaningfully applied to research activities beyond this exceptional period.

As regards the processing of health-related data for scientific research purposes in the case of clinical trials, the GDPR should be applied in conjunction with the EU Clinical Trials Regulation, which contains specific relevant provisions but no derogations from the GDPR. The legal basis for processing depends on the purpose, both when the health data are processed for scientific research purposes and as concerns scientific research related to clinical trials, on the assumption that personal data can only be processed for research purposes if they are linked to a research project that is being carried out in accordance with the methodological standards of the relevant disciplinary field. The legal basis for processing is not consent in the context of the Clinical Trials Regulation if the processing of the personal health data is considered necessary to fulfil the legal obligations of the sponsor and/or the investigator (such as in the case of a safety notification or an inspection by the national competent authority, or the storage of clinical trial data in accordance with archiving obligations laid down in the Clinical Trials Regulation or in the relevant national legislation, as the case may be). Therefore, for reasons of trial reliability and safety in the case of a clinical trial, the legal basis is Article 6(1)(c), that is, compliance with a legal obligation to which the controller is subject. By contrast, processing operations related to research in a clinical trial may be subject to three different legal bases: consent (Article 6(1)(a) in conjunction with Article 9(2)(a)); public interest (Article 6(1)(e)); or the legitimate interest of the controller (Article 6(1)(f) in conjunction with Article 9(2)(i) or (j)).

The difference between these consents also has implications in the case of the withdrawal of consent: in the context of clinical trials, the data subject’s consent is limited to processing operations exclusively related to the research activities. While Article 89(2) GDPR allows the national legislator to limit (in some cases) the data subject’s rights under Chapter 3 of the Regulation, the right of withdrawal of consent by the data subject is not affected. Consent cannot constitute a valid legal basis for the processing of personal data if there is a clear imbalance between the data subject and the controller. The EDPB underlines that consent cannot be considered to have been given freely where such an imbalance exists (Recital 43 GDPR); this is especially relevant in the context of clinical trials related exclusively to research activities, where reference should be made to other legal bases.

As regards international transfers, in the absence of an adequacy decision pursuant to Article 45(3) GDPR or appropriate safeguards pursuant to Article 46 GDPR, the public and private sector may make use of the exemptions applicable under Article 49 GDPR. However, the exceptions provided for in Article 49 GDPR are of an absolutely exceptional nature. International data transfers may be a risk factor to be considered in the context of a data protection impact assessment under Article 35 GDPR. The system may be strongly influenced by legislation enacted at Member State level, given the constant reference to this possibility in the GDPR.

References

Bradford, Laura R, Mateo Aboy and Kathleen Liddell. ‘COVID-19 Contact Tracing Apps: A Stress Test for Privacy, the GDPR and Data Protection Regimes’, Journal of Law and the Biosciences (2020), University of Cambridge Faculty of Law Research Paper No. 23/2020.
Cippitani, Roberto. ‘Genetic research and exceptions to the protection of personal data’. In Genetic Information and Individual Rights, edited by R. Arnold, R. Cippitani and V. Colcelli, 54–79. Regensburg: Universität Regensburg, 2018.
CJEU, judgment of 14 February 2019, C-345/17 (Buivids), para 64.
Clarke, Niamh, Gillian Vale, Emer P. Reeves, et al. ‘GDPR: an impediment to research?’ Irish Journal of Medical Science 188 (2019): 1129–1135.
Communication from the Commission, Guidance on Apps supporting the fight against the COVID-19 pandemic in relation to data protection (2020/C 124 I/01).
De Minico, Giovanna. ‘Net Neutrality. Come Diritto Fondamentale Di Chi Verrà’, Costituzionalismo.it 1 (2016): 1–40.
Dove, Edward S and Jiahong Chen. ‘Should consent for data processing be privileged in health research? A comparative legal analysis.’ International Data Privacy Law 10, no. 2 (2020): 117–131.
European Data Protection Supervisor. Intervention at the Fifth World Congress for Freedom of Scientific Research, 12 April 2018.
Gabriel, André P and Charles P. Mercado. ‘Data retention after a patient withdraws consent in clinical trials’, Open Access Journal of Clinical Trials 3 (2011): 15–19.
Guidelines on consent under Regulation 2016/679, Article 29 Working Party, published 28 November 2017, WP259, retrieved from https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=623051.

Guidelines 03/2020 on the processing of data concerning health for the purpose of scientific research in the context of the COVID-19 outbreak, adopted by the European Data Protection Board on 21 April 2020, https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202003_healthdatascientificresearchcovid19_en.pdf.
Guidelines on Consent under Regulation 2016/679 (wp259rev.01), adopted on 28 November 2017 and revised on 10 April 2018, 27–31, https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=623051.
Hallinan, Dara. ‘Broad consent under the GDPR: an optimistic perspective on a bright future’, Life Sciences, Society and Policy 16, no. 1 (2020).
Maxwell, Winston and Marc Bourreau. ‘Technology Neutrality in Internet, Telecoms and Data Protection Regulation’, Computer and Telecommunications Law Review (2014), available at SSRN: https://ssrn.com/abstract=2529680 or http://dx.doi.org/10.2139/ssrn.2529680.
McHale, Jean V. ‘Regulating genetic databases: some legal and ethical issues’. Medical Law Review 12 (2004): 70–96, 72.
Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC, adopted 9 April 2014, WP29.
Opinion 3/2019 concerning the Questions and Answers on the interplay between the Clinical Trials Regulation (CTR) and the General Data Protection Regulation (GDPR) (Art 70(1)(b)), adopted on 23 January 2019, https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_opinionctrq_a_final_it.pdf.
Pormeister, Kärt. ‘Genetic research and applicable law: the intra-EU conflict of laws as a regulatory challenge to cross-border genetic research’. Journal of Law and the Biosciences (2018): 706–723.
Questions and Answers on the interplay between the Clinical Trials Regulation and the General Data Protection Regulation, retrieved from https://ec.europa.eu/health/sites/health/files/files/documents/qa_clinicaltrials_gdpr_en.pdf.
Regulation (EU) No 536/2014 of the European Parliament and of the Council of 16 April 2014 on clinical trials on medicinal products for human use, and repealing Directive 2001/20/EC.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
Statement on the processing of personal data in the context of the COVID-19 outbreak, adopted by the European Data Protection Board on 19 March 2020, https://edpb.europa.eu/sites/default/files/files/news/edpb_statement_2020_processingpersonaldataandcovid-19_en.pdf.
CJEU, judgment of 6 November 2003, C-101/01 (Lindqvist), para 50.


8

Data Protection Law and the EU Digital COVID Certificate Framework

DANIELA DZURAKOVA (NÉE GALATOVA)1 AND OLGA GKOTSOPOULOU2

Abstract

The Proposals for the two EU Digital COVID Certificate Regulations were introduced by the European Commission in March 2021. The Regulations were adopted by the European Parliament and the Council in June 2021, introducing three types of certificates to be used as proof of vaccination, testing and recovery from 1 July 2021 in the context of the on-going COVID-19 pandemic. The primary aim of the framework is to enable free movement through an EU-coordinated approach. This chapter presents the policy debate up to the adoption of the Proposals, including a brief historical overview and a taxonomy of terms, such as ‘immunity passports’ and ‘COVID-19 health passes’. The chapter then sketches out the most important elements of the adopted Regulations and accompanying documentation in relation to EU data protection law. The chapter concludes with a set of broader considerations on the topic and recommendations as to the implementation of the framework.

Keywords

COVID-19, privacy, vaccination, EU COVID certificate, proportionality, legitimacy, legality, GDPR, data protection law.



1 Pan-European University, Slovakia.
2 Vrije Universiteit Brussel, Belgium.


I. Introduction

After more than a year of lockdowns, quarantines, curfews and other restrictive measures deployed to combat the COVID-19 global health crisis, governments all over the world have expressed their interest in introducing certification schemes, with the promise of lifting some (or all) restrictions, re-starting the economy, business and education, re-opening the social realm as a whole and facilitating regional and international travel. The early calls for ‘immunity passports’ date back to the first months of the pandemic – the first of the algorithmic age3 – with preliminary studies about their practical, scientific, legal and ethical implications having been published as early as spring 2020.4 Initial speculations included, amongst others: considerations as to what the format of those certificates would be (physical, digital or hybrid);5 who would be eligible for such certificates and for which purposes; whether those certificates would be combined with the infrastructure built for the national contact tracing and warning apps launched in several Member States or would require an ad hoc system and an app to run; and whether states and regions would issue their own certificate schemes or a unified approach would be followed. From an early stage, one of the main criticisms that supporters of this solution faced was – and still remains – the lack of knowledge, in conjunction with the ever-evolving nature of the pandemic, which does not allow the specifications of such certification schemes to be properly assessed.6

Following those debates, on 17 March 2021, the European Commission (the EC or the Commission) proposed two regulations, establishing the initially named ‘Digital Green Certificate’ framework.7 The first Proposal was for a Regulation on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the

3 Ada Lovelace Institute, ‘COVID-19 Rapid Evidence Review: Exit through the App Store?’, 20 April 2020, 15, www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store/.
4 See: Ada Lovelace Institute, ‘COVID-19 Rapid Evidence Review: Exit through the App Store?’. See also: Françoise Baylis and Natalie Kofler, ‘COVID-19 Immunity Testing’, Issues in Science and Technology (blog), 29 April 2020, https://issues.org/covid-19-immunity-testing-passports/ and Iñigo de Miguel Beriain, ‘Immunity passports’, blogdroiteuropéen (blog), 27 June 2020, https://blogdroiteuropeen.com/2020/06/27/immunity-passports-by-inigo-de-miguel-beriain/.
5 Alexandra L Phelan, ‘COVID-19 Immunity Passports and Vaccination Certificates: Scientific, Equitable, and Legal Challenges’, The Lancet 395, no. 10237 (May 2020): 1596, https://doi.org/10.1016/S0140-6736(20)31034-5.
6 Privacy International, ‘The Looming Disaster of Immunity Passports and Digital Identity’, Privacy International (blog), 21 July 2020, http://privacyinternational.org/long-read/4074/looming-disaster-immunity-passports-and-digital-identity.
7 ‘COVID-19: Digital Green Certificates (Factsheet)’, European Commission, March 2021, https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response/safe-covid-19-vaccines-europeans/covid-19-digital-green-certificates_en.

COVID-19 pandemic (Digital Green Certificate) (DGC I Proposal),8 whereas the second was for a Regulation on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to third-country nationals legally staying or legally residing in the territories of Member States during the COVID-19 pandemic (Digital Green Certificate) (DGC II Proposal).9 The latter basically consisted of three articles which repeated the provisions of the DGC I Proposal as applicable to third-country nationals.10

During the interinstitutional negotiations, the name of the framework changed from ‘Digital Green Certificate’ to ‘EU Digital COVID Certificate’, as agreed between the European Parliament and the Council on 20 May 2021 and signed on 14 June 2021. The whole legislative process was completed in the record time of 62 days, while, in parallel with the negotiations, the EC and the Member States worked on a common interoperability infrastructure for the authentication of the system, which became operational on 1 June 2021, before the negotiations were concluded. The Regulations (EU) 2021/953 (DCC I Regulation or DCC I)11 and (EU) 2021/954 (DCC II Regulation or DCC II)12 – concerning EU citizens and third-country nationals legally staying or residing in the territories of Member States respectively – apply from 1 July 2021 and remain valid until 30 June 2022.

Apart from its rather ambitious character, the EU Digital COVID Certificate framework raises several questions from a fundamental rights aspect, as well as from the perspective of EU competences. In this chapter, the authors focus on the data protection and privacy implications of the framework under EU data protection law in the context of a global health emergency.

8 European Commission, Proposal for a Regulation of the European Parliament and the Council on a Framework for the Issuance, Verification and Acceptance of Interoperable Certificates on Vaccination, Testing and Recovery to Facilitate Free Movement during the COVID-19 Pandemic (Digital Green Certificate), Brussels, 17.3.2021, COM(2021) 130 final, 2021/0068 (COD), https://ec.europa.eu/info/sites/info/files/en_green_certif_just_reg130_final.pdf.
9 European Commission, Proposal for a Regulation of the European Parliament and of the Council on a Framework for the Issuance, Verification and Acceptance of Interoperable Certificates on Vaccination, Testing and Recovery to Third-Country Nationals Legally Staying or Legally Residing in the Territories of Member States during the COVID-19 Pandemic (Digital Green Certificate), Brussels, 17.3.2021, COM(2021) 140 final, 2021/0071 (COD), https://ec.europa.eu/info/sites/info/files/en_green_certif_tcn_home_reg140final.pdf.
10 The legal basis for the first proposed regulation is Art 21 of the Treaty on the Functioning of the European Union (TFEU), whereas the legal basis for the second one is Art 77(2)(c) TFEU.
11 Regulation (EU) 2021/953 of the European Parliament and of the Council of 14 June 2021 on a framework for the issuance, verification and acceptance of interoperable COVID-19 vaccination, test and recovery certificates (EU Digital COVID Certificate) to facilitate free movement during the COVID-19 pandemic, PE/25/2021/REV/1, [2021] OJ L211/1–22.
12 Regulation (EU) 2021/954 of the European Parliament and of the Council of 14 June 2021 on a framework for the issuance, verification and acceptance of interoperable COVID-19 vaccination, test and recovery certificates (EU Digital COVID Certificate) with regard to third-country nationals legally staying or residing in the territories of Member States during the COVID-19 pandemic, PE/26/2021/REV/1, [2021] OJ L211/24–28.

The chapter is structured as follows: section II outlines the evolution of the policy debate from the beginning of the pandemic outbreak until the adoption of the proposed EU Digital COVID Certificate Regulations, including a reference to previous similar schemes and a taxonomy clarifying the terminology landscape. In section III, the main analysis of the data protection dimensions takes place, focusing on the principles of lawfulness, transparency, data minimisation, and purpose and storage limitation, and including a discussion with respect to data controllership and data access. Section IV adds a high-level consideration of broader issues, beyond data protection law. Section V recaps open matters and challenges concerning the uptake of the system by the Member States. It lastly concludes the analysis with final remarks.

Developments concerning the containment of the COVID-19 pandemic are so rapid that information becomes obsolete in the blink of an eye. Hence, the authors bring to the reader’s attention that this chapter includes updates up to 30 June 2021.

II. From Pre-COVID Health Certificates and Immunity Passports to the EU Digital COVID Certificate Framework

A. State-of-the-Art

Health certificates are not a new concept. Vaccination certification, both for domestic and travel use, is as old as vaccination itself. In the 1930s, the International Sanitary Convention for Aerial Navigation introduced certificates to help contain severe diseases such as cholera, yellow fever, typhus and smallpox. In 2005, the World Health Organization (WHO) established a paper-based International Certificate of Vaccination or Prophylaxis through its International Health Regulations, which was launched in 2007.13 This certificate is also known as the Yellow Card (Carte Jaune) due to the yellow colour of the paper booklet.14 It demonstrates that its holder has been vaccinated against certain diseases before or upon entering a country, or before leaving it in order to enter another. The certificate includes the following information in bilingual (English and French) and trilingual (English, French and Arabic) versions: full name, date of birth, sex, nationality, national identification document (if applicable), the individual’s signature, name of disease or condition, name of vaccine or prophylaxis, date of vaccine administration, signature and professional status of the supervising clinician (by hand), manufacturer and batch number of vaccine or prophylaxis, validity dates of the certificate (from, until) and the official stamp of the administering centre.

Paper certificates have been widely used over the years as a simple and universal way to authenticate and prove vaccination and prophylaxis. However, they have also received criticism as being vulnerable to falsification and difficult to replace in case of loss.15 Hence, triggered by the repeated Ebola virus outbreaks and the declaration of polio as a public health emergency of international concern in 2014, discussions began around a potential digital optimisation of the paper-based certificate system.16 Those discussions intensified during the on-going COVID-19 pandemic and, in October 2020, the WHO agreed with Estonia to develop a digital International Certificate of Vaccination – a so-called ‘smart yellow card’ – to strengthen the effectiveness of the COVAX initiative.17 The latter aims to ensure and accelerate universal access to COVID-19 vaccines.18

13 Kumanan Wilson, Katherine M. Atkinson and Cameron P. Bell, ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene 94, no. 3 (2 March 2016): 485–88, https://doi.org/10.4269/ajtmh.15-0510.
14 The booklet can be found on the WHO website www.who.int/ihr/IVC200_06_26.pdf.

B.  A Taxonomy of Terms and Practices

The perception around health certification schemes evolves quickly as new knowledge about the COVID-19 pandemic, the efficacy of the vaccines, the tests, the treatments and recovery is attained. In the next paragraphs, we survey some of the terms used in the media, as well as in stakeholders’ communications, which reflect to some extent puzzlement, misconception and confusion. Earlier studies and reports appear to use several terms as synonymous, ignoring the purpose and function of these – we could say – credential tools. Gradually, it becomes apparent that the different terms indeed refer to, or can refer to, the same or different tools, even when they are used interchangeably.

On 24 April 2020, the WHO issued its scientific brief on ‘immunity passports’ in the context of COVID-19.19 In another document, the international organisation referred to them as ‘immunity certificates’ or ‘risk-free certificates’.20 Risk-free because, given the absence of a vaccine back then, those ‘immunity passports’ could only rely on the detection of antibodies to SARS-CoV-2 in

15 Wilson, Atkinson and Bell, ‘Travel Vaccines Enter the Digital Age’.
16 ibid.
17 For more information, see: www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax.
18 More information on the COVAX initiative can be found here: www.who.int/initiatives/act-accelerator/covax.
19 ‘Scientific Brief: “Immunity Passports” in the Context of COVID-19’ (World Health Organization, 24 April 2020) www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19.
20 Privacy International, ‘Immunity Passports and Covid-19: An Explainer’, Privacy International (blog), 21 July 2020, www.privacyinternational.org/explainer/4075/immunity-passports-and-covid-19-explainer.

the immune system of an individual who had fully recovered from the disease. Such a document would allegedly permit its holder to skip some of the imposed restrictions, such as quarantine. However, the WHO elucidated that, at that time, no study had concluded that the presence of antibodies conferred ‘immunity’.21 Indeed, up to the time the present chapter was written, there had not been sufficient scientific evidence as to the levels and duration of immunity reached after previous infection,22 or even after vaccination. ‘Immunity passports’, however, dominated the news and led to several debates among academics, activists, businesses and policy makers.

Less than a year after those initial considerations, a number of COVID-19 vaccines were developed and received market authorisation. Population-wide vaccination campaigns were launched all over the world. Terms shifted focus from ‘immunity’ to ‘vaccination’ (‘vaccine passports’23). Some countries started producing pocket-sized, paper-based or digital ‘vaccination record cards’, which permit their individual holders, healthcare staff and the competent authorities to keep track of the administered vaccine doses, given that the different currently offered types of vaccines require more than one dose and varying intervals between them.

Soon, it became clear that, despite the ambition of states to speed up their vaccination strategies, the vaccination of large percentages of the population is a long and complex task. At the same time, new variants of the virus started to make their appearance. Moreover, newer studies demonstrated that not all vaccines are appropriate for all population segments. For instance, many EU states provide different vaccines depending on age group. In addition, it is still not clear whether children under 12 years old and people with certain medical conditions can be vaccinated. Consequently, states left in place previous restrictions, to be lifted gradually. To enable the transition from the pre- to the post-vaccine era, some states started requesting proof of vaccination or, in the absence of it, proof of a negative test or of recovery, in order to allow entry into their territory.

The taxonomy shifts once again. Media outlets refer to those credential tools with more generic terms, such as (digital) ‘health passports’ or ‘health passes’.24 Israel was one of the first countries to launch the so-called ‘green passes’ for its vaccinated and recovered population, provided on the Traffic Light smartphone

21 ‘Scientific Brief: “Immunity Passports” in the Context of COVID-19’.
22 eHealth Network, ‘Guidelines on COVID-19 Citizen Recovery Interoperable Certificates – Minimum Dataset (Release 1)’, 15 March 2021, 5, https://ec.europa.eu/health/sites/health/files/ehealth/docs/citizen_recovery-interoperable-certificates_en.pdf.
23 Shannon McMahon, ‘Everything Travelers Need to Know about Vaccine Passports’, Washington Post, 3 March 2021, www.washingtonpost.com/travel/2020/12/08/vaccine-passport-immunity-app-covid/.
24 Umberto Bacchi, ‘Coronavirus Vaccine Passports: What You Need to Know’, Thomson Reuters Foundation News, 9 March 2021, https://news.trust.org/item/20201221125814-sv8ea/.

app.25 Those passes, based on a vaccination, test or recovery certificate and valid for a specific period of time, allow their holders to enter a number of establishments, such as gyms, theatres and restaurants, upon simultaneous presentation of an identification document.26

A number of EU/EEA Member States started developing certificate systems prior to the EC’s Proposals. Cyprus was the first EU country to announce the establishment of a certification system, in December 2020, scheduling its official release for March 2021 – ahead of the opening of the summer tourist season.27 In January 2021, Denmark also declared that it had started working on a vaccination certificate system.28 The Czech Republic, Germany, Greece, Italy, Portugal, Hungary, Slovakia, Poland, Spain, Sweden and Iceland had also either expressed their political will to introduce a system of COVID-19 health certificates domestically and/or had started to conceptualise such systems at national level.29 Some countries used pre-existing infrastructure. Germany, for instance, encouraged its citizens and residents to document the successful administration of their vaccine doses in their WHO paper-based International Certificates of Vaccination or Prophylaxis (‘Impfpass’ in German, or ‘Yellow Card’ as seen earlier).

In addition to the state-led initiatives, private actors such as the Vaccination Credential Initiative,30 consisting of big tech corporations, healthcare organisations and other stakeholders, grasped the opportunity to develop solutions for ‘digital COVID-19 vaccination records’ that can be accessed, stored and shared through the digital wallets of smartphones or paper-based QR barcodes.31 Other private or semi-private initiatives were rolled out in January 2021 in the form of trials, including the CommonPass, part of the Commons Project, and the International Air Travel Association (IATA) travel pass, implemented by airlines on selected flights.32

25 Oliver Holmes and Quique Kierzenbaum, ‘Green Pass: How Are Covid Vaccine Passports Working for Israel?’, The Guardian, 28 February 2021, sec. World news, www.theguardian.com/world/2021/feb/28/green-pass-how-are-vaccine-passports-working-in-israel. See also the official website of the Israeli Ministry of Health: https://corona.health.gov.il/en/directives/vaccination-certificate/.
26 For more information see: https://corona.health.gov.il/en/directives/green-pass-info/.
27 ‘Cyprus Plans to Open Borders in March for Travellers Vaccinated for COVID-19’, SchengenVisaInfo.Com, 4 December 2020, www.schengenvisainfo.com/news/cyprus-plans-to-open-borders-in-march-for-travellers-vaccinated-for-covid-19/.
28 ‘Denmark Plans to Introduce “Vaccine Passports” for Travelers Soon’, SchengenVisaInfo.Com, 12 January 2021, www.schengenvisainfo.com/news/denmark-plans-to-introduce-vaccine-passports-for-travelers-soon/.
29 For more information per country, follow: Ada Lovelace Institute, ‘International Monitor: Vaccine Passports and COVID Status Apps’, accessed 19 March 2021, www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/.
30 Manas Mishra and Amruta Khandekar, ‘Tech and Health Giants Push for Digital COVID “Vaccine Passports”’, Thomson Reuters Foundation News, 14 January 2021, https://news.trust.org/item/20210114121756-i3lpf/.
31 Cat Zakrzewski, ‘Analysis | The Technology 202: Tech Giants Are Teaming up to Build Digital Vaccine Records’, Washington Post, 15 January 2021, www.washingtonpost.com/politics/2021/01/15/technology-202-tech-giants-are-teaming-up-build-digital-vaccine-records/.
32 McMahon, ‘Everything Travelers Need to Know about Vaccine Passports’.

Therefore, on the basis of the previous analysis, it is necessary to clarify that:
(a) terms used to refer to COVID-19 related certificates are not necessarily synonymous, and thus using one term in the place of the other may be inaccurate and misleading;33
(b) the lexical framing of those tools can provoke diverse impressions regarding the rights and obligations derived from them; for instance, a passport is often seen as a means of identification or a travel document, and in this sense it is different from a certificate, which is understood as a formal document issued by a competent authority or institution, such as a birth certificate, or from a pass, which may refer to a ticket, a boarding pass or a festival pass.

Moreover, apart from the EU framework, which focuses on the facilitation of free movement, the situation is the following:
(a) there are a number of tools which appear to be only of domestic or sectorial use (in one particular country or field), while other credential schemes, at least in principle, appear to be exclusively needed for international travel (within a particular area, to a particular country or worldwide). Nevertheless, it was not, and is still not, clear whether and how those schemes – private or public – could potentially interact at some point. The same observation is relevant for the purposes behind the usage of such systems, which currently appear to be open-ended;
(b) credential tools at that time were either official state-led initiatives, or projects directed by international organisations and fora, or private/commercial proposals. Many efforts seemed to take place in parallel, were deployed by diverse actors and, to some extent, overlapped.

C.  The EU Digital COVID Certificate Framework

As seen earlier, in its Proposal the EC initially favoured the term ‘Digital Green Certificate’.34 However, in the adopted text, the framework was renamed the ‘EU Digital COVID Certificate’. The EU Digital COVID Certificate can be compared to other certificate schemes adopted in the context of previous or concurrent epidemics. In line with the definition in Article 2(3) of DCC I, ‘EU Digital COVID Certificate’ means interoperable certificates containing information about

33 Nóra Ní Loideain, ‘COVID-19 and “Immunity Passports” – The Way Forward’, blogdroiteuropéen (blog), 8 October 2020, https://blogdroiteuropeen.com/2020/10/08/covid-19-and-immunity-passports-the-way-forward-by-nora-ni-loideain/.
34 The term ‘green’ may refer to the choice of the word in the previous communication by the EC with respect to the ‘green lane border crossings’, as established under the Guidelines for border management measures to protect health and ensure the availability of goods and essential services, 2020/C 96 I/01, C/2020/1897, [2020] OJ C96I/1–7.

the vaccination, test result or recovery of the holder issued in the context of the COVID-19 pandemic. In other words, the term is used as an umbrella expression to describe a framework rather than a certificate, as the authors will explain in the next sub-sections.

i.  Pragmatics: The Certificate Scheme

a. The Certificates

In fact, the framework encompasses three types of certificates:35
(a) ‘vaccination certificates’ – certificates confirming that the holder has received a COVID-19 vaccine in the Member State which issues the certificate;
(b) ‘test certificates’ – certificates indicating the holder’s result and date of a NAAT test or of a rapid antigen test carried out by skilled personnel; and
(c) ‘certificates of recovery’ – certificates confirming that the holder has recovered from a SARS-CoV-2 infection following a positive NAAT test or a positive rapid antigen test.

Only tests included in the common, updated list of COVID-19 rapid antigen tests established on the basis of the Council Recommendation 2021/C 24/0136 are recognised. In a nutshell, those mutually accepted certificates aim to facilitate free movement within the EU for EU Member State nationals and other third-country nationals who are legally staying or legally residing in the territory of an EU Member State and who are entitled to travel to another Member State in accordance with Union law. They will be provided free of charge.

The vaccination and test certificates can be issued automatically or upon request by the persons concerned,37 whereas, as a logical consequence, the certificates of recovery can only be issued upon request.38 Moreover, the prospective holders of vaccination and test certificates shall be informed of their right to a vaccination or a test certificate respectively. The DCC I Regulation does not include reference to a similar right with respect to recovery certificates. The three types of certificates can be issued in digital or paper-based format, or both. The prospective holders are entitled to receive them in the format of their choice.39 Irrespective of the format, all certificates have a digital component (hence the ‘digital’ in their name), meaning that they should display an interoperable barcode for verification purposes, specifically for authenticity, validity and integrity. All the information should additionally be shown in human-readable form, at least in the official language(s) of the issuing state and in English.40

35 Art 3, para 1 DCC I.
36 ‘Council Recommendation on a Common Framework for the Use and Validation of Rapid Antigen Tests and the Mutual Recognition of COVID-19 Test Results in the EU’, 2021/C 24/01, [2021] OJ C24/1–5, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32021H0122(01).
37 Art 5, para 1 and Art 6, para 1 DCC I.
38 Art 7, para 1 DCC I.
39 Recital 18 DCC I and Art 3, para 2 DCC I.
40 Art 3, para 2 DCC I.
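The Regulations leave the barcode technology to the technical specifications discussed below, but the publicly documented encoding (an ‘HC1:’ context identifier followed by Base45-encoded, zlib-compressed COSE/CWT data) gives a concrete sense of what verification ‘for authenticity, validity and integrity’ involves in practice. What follows is a minimal decoding sketch in Python, not the official reference implementation: the base45 and cbor2 package names and the CWT claim key -260 are assumptions drawn from the eHealth Network’s public specifications, and the all-important signature check is deliberately omitted.

    import zlib

    import base45  # assumed third-party package implementing the Base45 draft
    import cbor2


    def decode_certificate_payload(qr_content: str) -> dict:
        """Unpack the health certificate carried in the interoperable barcode."""
        if not qr_content.startswith("HC1:"):
            raise ValueError("not an EU Digital COVID Certificate barcode")
        # Base45 is used because it maps cleanly onto QR alphanumeric mode.
        compressed = base45.b45decode(qr_content[4:])
        cose_message = zlib.decompress(compressed)
        # COSE_Sign1 is a CBOR-tagged array: [protected, unprotected, payload, signature].
        protected, unprotected, payload, signature = cbor2.loads(cose_message).value
        # NOTE: a real verifier must first validate 'signature' against the issuing
        # country's document-signing certificate; only then may the payload be trusted.
        claims = cbor2.loads(payload)
        return claims[-260][1]  # CWT claim -260 ('hcert'), sub-key 1: the certificate data

Authenticity and integrity thus rest entirely on the omitted signature check, which is where the trust framework described in the next sub-section comes in.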

The described procedure recalls the currently established procedures for checks of e-passports and boarding passes (digital or paper-based). The EC has clarified that travel should not be prevented on the basis of not having a vaccination certificate,41 making sure that persons who cannot be vaccinated for age or health reasons, who do not want to be vaccinated or who are not part of the target group of the COVID-19 vaccines will be able to travel, following the measures indicated by each state or using another type of certificate.42 If necessary, the certificates should be issued to another person on behalf of the vaccinated, tested or recovered person, for example to the legal guardian on behalf of legally incapacitated persons or to parents on behalf of their children.43 The certificates should not be subject to legalisation or any other similar formalities. The DCC I Regulation highlights that no discrimination should occur on the basis of the certificate, ie all types of certificates are equally valid for travel purposes.44

The Regulation addresses the uncoordinated approach described earlier in this chapter by providing that the validity of other proofs of vaccination, test result or recovery issued before 1 July 2021, or for other purposes, in particular for medical purposes, should not be affected by the new type of certificates introduced herein.45 In this regard, Article 15 DCC I foresees a phase-in period: certificates issued before 1 July 2021, as well as certificates which do not comply with the DCC Regulations, will be accepted by Member States until 12 August 2021. Concerning the types of vaccines, the Member States will have to accept all vaccines that have received EU marketing authorisation. On the other hand, whether a vaccine not authorised by the EU will be accepted depends on the decision-making powers of each Member State. The same goes for whether Member States will accept vaccination certificates after one or more doses.46

The DCC I Regulation also left room for manoeuvre for Member States to keep other restrictions (such as quarantine, testing, etc) for already vaccinated persons and holders of the other two types of certificates, upon epidemiological evaluation.47 However, as opposed to the Proposal, the Member State which imposes the restrictions does not have an obligation of ex ante ‘notification’48 but of ‘information exchange’49 with other Member States and the Commission. Specifically, the Member State shall inform the Commission and the other Member States accordingly, if possible 48 hours in advance of the introduction of such new

41 Art 3, para 6 DCC I.
42 European Commission, DGC I, 3.
43 Recitals 23 and 36 DCC I.
44 Art 3, para 7 DCC I.
45 Art 3, para 8 DCC I.
46 European Commission, ‘EU Digital COVID Certificate’, https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response/safe-covid-19-vaccines-europeans/eu-digital-covid-certificate_en.
47 Recital 7 DCC I.
48 Notification procedure, Art 10 DGC I Proposal.
49 Art 11 DCC I.

restrictions, stating the reasons why those restrictions are in place, their scope and their duration. Member States shall also provide the public with clear, comprehensive and timely information, as a general rule 24 hours before the imposition of such restrictions.

b.  The Trust Framework and Common Infrastructure

The Proposal envisaged the establishment of a trust framework laying out the rules on, and infrastructure for, the reliable and secure issuance and verification of COVID-19 certificates, to support the certificate framework technically and administratively.50 As was the case with the contact tracing and warning apps, the eHealth Network, even before the publication of the Commission’s Proposal, provided a number of recommendations and guidelines with respect to the establishment of an overall trust framework for interoperable and reliable COVID-19 health certificates (updated on 12 March 2021),51 basic interoperability requirements for verifiable vaccination certificates for medical purposes (12 March 2021)52 and a minimum dataset for recovery certificates (15 March 2021),53 reflecting the preliminary discussions and agreements among the Member States. The conditions for the set-up were additionally coordinated with the European Centre for Disease Prevention and Control (ECDC), the Health Security Committee,54 the WHO, the International Civil Aviation Organization (ICAO) and other stakeholders, including third countries and international organisations, in order to enable international interoperability.

Since then, the eHealth Network has published a series of reference implementations as open-source software and has elaborated technical specifications for the mutual recognition and interoperability of the vaccination, test and recovery certificates. This documentation encompasses specifications on the formats and trust management, the gateway, the 2D barcode, the applications, the public key certificate governance, and the technical structure and value sets to be used in the contents of the EU Digital COVID Certificates.55

50 Recital 22 DCC I.
51 eHealth Network, ‘Interoperability of Health Certificates: Trust Framework (v1.0)’, 12 March 2021, https://ec.europa.eu/health/sites/health/files/ehealth/docs/trust-framework_interoperability_certificates_en.pdf. On p. 3, a mock-up picture of a digital and paper vaccination certificate is presented.
52 eHealth Network, ‘Guidelines on Verifiable Vaccination Certificates – Basic Interoperability Elements’, 12 March 2021, https://ec.europa.eu/health/sites/health/files/ehealth/docs/vaccination-proof_interoperability-guidelines_en.pdf.
53 eHealth Network, ‘Guidelines on COVID-19 Citizen Recovery Interoperable Certificates – Minimum Dataset (Release 1)’.
54 The EU Health Security Committee was set up in 2001. It is mandated to reinforce the coordination and sharing of best practice and information on national preparedness activities. For more information: https://ec.europa.eu/health/preparedness_response/risk_management/hsc/members.
55 The specifications can be found on the eHealth Network website, https://ec.europa.eu/health/ehealth/covid-19_en.
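To illustrate the public key certificate governance listed among those specifications: each issuing country signs its certificates with document-signing certificates, and verifiers obtain the corresponding public keys through the gateway. A conceptual sketch of the resulting trust-list lookup, under the assumption (taken from the published COSE specifications, not from the Regulation) that the key identifier travels in COSE header label 4 (‘kid’), could look as follows; the data structures and names are illustrative, not the official API.

    from dataclasses import dataclass

    import cbor2


    @dataclass(frozen=True)
    class TrustListEntry:
        country: str        # issuing Member State
        kid: bytes          # key identifier of a document-signing certificate
        public_key: bytes   # key material distributed to verifiers via the gateway


    def find_signing_key(cose_message: bytes, trust_list: list) -> TrustListEntry:
        """Locate the document-signing certificate referenced by a barcode payload."""
        protected, unprotected, _payload, _signature = cbor2.loads(cose_message).value
        headers = cbor2.loads(protected) if protected else {}
        kid = headers.get(4, unprotected.get(4))  # COSE header label 4 = 'kid'
        for entry in trust_list:
            if entry.kid == kid:
                return entry  # the COSE signature is then verified against this key
        raise LookupError("issuer key unknown: certificate cannot be trusted")

Because verification requires only these public keys, the holder’s personal data never needs to leave the certificate itself – a design choice the data protection analysis in section III returns to.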


ii.  The Global Health Emergency and its Effect on the EU Legislative and Non-Legislative Initiatives

Due to the urgency of the matter, which calls for particular flexibility, the EC did not prepare an inception impact assessment for the DGC I and II Proposals.56 This spirit of ‘flexibility’ was kept and even fortified in the adopted text of the DCC I Regulation. The latter reminds Member States in Article 11, paragraph 4, that ‘some flexibility is required for epidemiological emergencies’. Indeed, flexibility and urgency57 reflect the legislator’s initial intention to fast-track the process of the EU Digital COVID Certificate framework. Below, we will examine three aspects: the principle of subsidiarity, the lack of a regulatory impact assessment and the empowerment of the EC to adopt non-legislative acts.

a. Subsidiarity

According to Recital 61 of the DCC I Regulation and the Explanatory Memorandum of the DGC I Proposal, facilitating the exercise of the right to free movement during the COVID-19 pandemic within the Union cannot be sufficiently achieved by Member States. Thus, the EU is entitled, ‘by reason of the scale and effects of the proposed action’, to adopt measures based on the principle of subsidiarity, in order to prevent a situation where different approaches are implemented at Member State level in an uncoordinated manner. Article 5(3) of the Treaty on European Union (TEU)58 and Protocol (No 2) provide for the application of the principles of subsidiarity and proportionality. Both principles define the exercise of the EU’s competences. Specifically, the principle of subsidiarity applies only to the areas in which competence is shared between the Union and the Member States, and defines areas where the Union does not have exclusive powers. In this case, the basic principle of subsidiarity overrode the persisting discussion on where the line between shared59 and coordinating60 EU competence lies in the field of health. In addition, matters falling under the shared competence concerning cross-border health threats have so far been administered through legal tools such as decisions,61 recommendations and directives.62 In this context, and before the DCC I Regulation, the EC had tabled another legislative proposal for a regulation in the sphere of cross-border health threats.63

56 European Commission, DGC I, 5.
57 Note of the authors: the word ‘urgency’ appears nine times in the adopted text.
58 Treaty on European Union, [2012] OJ C326/13–390.
59 Art 4(2)(k) TFEU.
60 Art 6(a) TFEU.
61 Decision No 1082/2013/EU of the European Parliament and of the Council of 22 October 2013 on serious cross-border threats to health and repealing Decision No 2119/98/EC (Text with EEA relevance), [2013] OJ L293/1–15.
62 Directive 2011/24/EU of the European Parliament and of the Council of 9 March 2011 on the application of patients’ rights in cross-border healthcare, [2011] OJ L88/45–65.
63 Proposal for a Regulation of the European Parliament and of the Council on serious cross-border threats to health and repealing Decision No 1082/2013/EU, COM(2020) 727 final.

For the purpose of this latter proposal, the EC had again used the subsidiarity principle.64

b. The Lack of a Regulatory Impact Assessment

The EU Digital COVID Certificate regulatory Proposals were submitted to the ordinary legislative procedure.65 The Opinion of the European Data Protection Supervisor (EDPS) was sought,66 and the latter delivered it jointly with the European Data Protection Board (EDPB) on 31 March 2021.67 The Economic and Social Committee also submitted an Opinion almost a month later, on 27 April 2021.68 The eHealth Network, as seen earlier, also provided guidelines and recommendations, which it updated regularly during the inter-institutional negotiations.

All in all, impact assessments by the EC, with respect to the legislative process, are required where a legislative initiative is likely to have a significant economic, environmental or societal impact – which, one could argue, would be the case for initiatives promoting the adoption of large-scale data-driven innovative solutions during a pandemic – with the reservation that an exception may apply.69 An impact assessment is key to policy making, as it provides the necessary evidence for informed decision making and allows for the preparation of review mechanisms. Although impact assessments are tailor-made for each legislative initiative, they are expected to address questions of paramount importance. They present in detail the problem to be addressed, the objectives, the specific impacts, the risks and the persons/groups to be impacted, as well as a comparative analysis of their effectiveness, efficiency and coherence. They also include a synopsis as to how the monitoring and the evaluation of the initiative will be organised.70 For instance, it is a given that compliance with

64 Explanatory Memoranda of the Proposal for a Regulation of the European Parliament and of the Council on serious cross-border threats to health and repealing Decision No 1082/2013/EU (point 2).
65 European Commission, DGC I, 4.
66 Recital 47 DGC I.
67 EDPB-EDPS, ‘Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a Framework for the Issuance, Verification and Acceptance of Interoperable Certificates on Vaccination, Testing and Recovery to Facilitate Free Movement during the COVID-19 Pandemic (Digital Green Certificate) (Version 1.1)’, 8 April 2021, https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_dcg_en.pdf.
68 European Economic and Social Committee, ‘Opinion “Digital Green Certificate”’, 27 April 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=PI_EESC%3AEESC-2021-01771-AC. In its Opinion (point 4.17) the Committee highlighted that, in addition to the three types of certificates, self-tests and blood tests for COVID antibodies should also allow individuals to receive digital COVID certificates. As to personal data protection, the Committee suggested that only the traveller should have access to sensitive personal medical data, including their status as regards vaccination, antibodies or testing; a third person can only see that they fulfil a specific condition (point 4.16).
69 European Commission, ‘Guidelines on Impact Assessment’, 15, https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-impact-assessment.pdf.
70 European Commission, 17.

subsidiarity and proportionality can only be fully verified after the impacts of the alternative options have been studied in view of the set objectives.71

The EC appeared to try to include some elements of a small-scale impact assessment in the explanatory memoranda of the two proposed Regulations. However, the effort did not go beyond a first, iterative step, without providing more specific answers and explanations. In addition, in Article 14 of the DGC I Proposal, a review mechanism was foreseen, consisting of a report post-suspension of the proposed Regulation. The review would assess, on one hand, the impact of the Regulations on free movement and data protection and, on the other, would summarise the lessons learnt, one year after the WHO declared the SARS-CoV-2 pandemic to have ended. In the agreed text of the DCC I Regulation, this post-suspension review mechanism was replaced by the obligation of the Commission to submit a report to the European Parliament and to the Council on the application of this Regulation.72 This ex-post report shall contain, in addition to an assessment of the impact of this Regulation on the facilitation of free movement and on the protection of personal data during the COVID-19 pandemic, an assessment of its impact on travel and tourism, on the acceptance of the different types of vaccine, and on fundamental rights and non-discrimination. The report may be accompanied by legislative proposals to extend the period of application of this Regulation, taking into account the evolution of the epidemiological situation with regard to the COVID-19 pandemic.

c.  Implementing and Delegated Acts

Additionally, in certain circumstances, the EC is assigned the power to issue non-legislative acts, ie implementing and delegated acts.73 Articles 12 and 13 of the DCC I Regulation set down the conditions for the adoption of delegated acts, for instance in case of modifying the Annex that includes the certificate datasets in line with Articles 5, 6 and 7. In the DGC I Proposal, where a specific end date of the application of the Regulations was not foreseen, those delegated acts also referred to the ability of the Commission to specify a suspension date or post-suspension re-enactment of the Regulation. With respect to delegated acts, the power to be delegated to the EC (Article 12, paragraph 2 of the DCC I Regulation) would be for a determinate period of 12 months, starting from 1 July 2021. Moreover, an urgent procedure for delegated acts has been further foreseen in Article 13 of the DCC I Regulation. It may be applied in cases of imperative grounds of urgency to accommodate ‘newly emerging scientific evidence or to ensure interoperability with international



71 European Commission, 17.
72 Art 16, para 2 DCC I.
73 See also Arts 290 and 291 TFEU.

standards and technological systems’.74 This procedure entails that a delegated act would exceptionally enter into force without delay and would remain in force until an objection is communicated by the European Parliament or the Council. However, Article 13 on the urgency procedure points mistakenly to Article 11, paragraph 6, with respect to the procedural aspects; this is a previous article of the proposed Regulation, which was erased from the final agreed text of the DCC I Regulation.

As for the implementing acts, the Commission is vested with the power to adopt them on different occasions.75 For example, it can adopt implementing acts in order to establish that COVID-19 certificates issued by a third country are equivalent to the EU Digital COVID Certificates under specific conditions.76 Implementing acts can be adopted in order to ensure uniform conditions for the implementation of the trust framework, containing the technical specifications and rules.77 In this regard, under ‘duly justified imperative grounds of urgency’, the Commission is further empowered to adopt immediately applicable implementing acts, which will be in force only for the duration of application of the DCC I Regulation.78

The implementing acts are adopted through the ‘comitology’ procedures, in other words, after consultation with a committee with the participation of all EU Member States. The delegated acts are adopted by the EC on the basis of a delegation granted in the text of EU law – in the present case, the DCC I Regulation – after consulting expert groups composed of representatives of each EU Member State. The scope and conditions of the delegated acts are stricter compared to those of the implementing acts.79 In both cases, though, in line with the EC’s better regulation guidelines, the draft of an implementing or delegated act can be opened for public consultation before its adoption. Such consultation may be restricted in cases of emergency, as in the case of a pandemic.80 Nevertheless, Recital 59 of the DCC I Regulation recalls the importance of ‘appropriate consultation’ in the preparatory stage and the demand for equal participation of the European Parliament and Council representatives, as well as Member States’ experts, ensuring a higher

74 See: Art 5, para 4, Art 6, para 4 and Art 7, para 4 DCC I. They are identical paragraphs, applying to all three different types of certificates.
75 Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by Member States of the Commission’s exercise of implementing powers, [2011] OJ L55/13.
76 Art 3, para 10 and Art 8, para 3 DCC I.
77 Art 9, para 1 DCC I.
78 Art 9, para 3 DCC I.
79 For instance, the delegated acts enter into force if, after two months of their adoption by the Commission, the European Parliament and the Council do not bring any objections. Comparatively, the implementing acts are accepted or rejected based on a committee vote. See: https://ec.europa.eu/info/law/law-making-process/adopting-eu-law/implementing-and-delegated-acts_en.
80 European Commission, ‘Better Regulation in the Commission’, 3, https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-better-regulation-commission.pdf.

degree of transparency. Additionally, Recital 60 of the DCC I Regulation provides that the Commission should consult the European Data Protection Supervisor when preparing delegated acts or implementing acts that have an impact on the protection of individuals’ rights and freedoms with regard to the processing of personal data. Consultation with the European Data Protection Board is only recommended in the most important of those cases.

III.  The Interplay between the Regulations and EU Data Protection Law

Despite the lack of an impact assessment with respect to the two Regulations, the explanatory memorandum of the DGC I Proposal states that an impact on Article 7 of the Charter of Fundamental Rights of the European Union81 on the right to respect for private life and on Article 8 of the Charter on the right to the protection of personal data is indeed anticipated.82 In addition, it was underlined that, as concerns the processing of personal data, including health data, the provisions of the General Data Protection Regulation (GDPR)83 continue to apply.84 In the next subsections, the authors will present their observations with respect to the EU Digital COVID Certificate framework, studied through the lens of EU data protection law.

A.  Lawfulness: Legal Obligation and Substantial Public Interest

On the basis of the information provided in Recital 48 of the DCC I Regulation and in accordance with the lawfulness principle (Article 5(1)(a) GDPR), the legal grounds for the processing of personal data, including health data, necessary for the issuance and verification of the interoperable certificates are Articles 6(1)(c) GDPR (processing in compliance with a legal obligation) and 9(2)(g) GDPR (processing for reasons of substantial public interest).

In relation to Article 6(1)(c) GDPR, it is required that the obligation derive directly from a legal provision, even if not entirely specified in law.85 In this

81 ibid.
82 European Commission, DGC I, 6.
83 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA Relevance), Pub. L. No. 32016R0679, [2016] OJ L119/679, http://data.europa.eu/eli/reg/2016/679/oj/eng.
84 Recital 48 DCC I.
85 Christopher Kuner, Lee A. Bygrave and Christopher Docksey, The EU General Data Protection Regulation (GDPR): A Commentary (Oxford: Oxford University Press, 2019), 333.

case, the provisions would be found in the new Regulations. With respect to Article 9(2)(g) GDPR, for this exception to apply and lift the prohibition of processing, a balancing between the public interest and the risks for the data subjects is required.86 The public interest must be substantial. As explained in Recital 46 GDPR, ‘monitoring epidemics and their spread’ constitutes an important ground of public interest. This observation, in combination with the planning of large-scale processing operations of special categories of data, which could potentially interfere with the rights and freedoms of every individual, renders the conduct of a data protection impact assessment and a wider impact assessment at Member State level absolutely necessary prior to the implementation of a given system.87

The authors did not locate – when writing this chapter – publicly available data protection impact assessments with respect to the EU Digital COVID Certificates, either at EU level in line with Article 39 of Regulation (EU) 2018/172588,89 or at Member State level. However, potential risks for the data subjects and their respective severity and likelihood were preliminarily considered by the EDPB and the EDPS in their joint Opinion. Looking for parallels in the context of contact tracing apps, many Member States performed data protection impact assessments or even opted for a form of public consultation prior to the deployment of such apps, for instance in the context of the Belgian app ‘coronalert’.90

National law may provide for a legal basis for data processing for other purposes (eg, maintenance of individual personal health records).91 After the negotiations, Recital 48 of the DCC I Regulation – as compared to Recital 37 of the DGC I Proposal – elaborated the lawfulness principle in more detail, stating that a legal basis for the processing of data for other purposes provided in national law must comply with Union data protection law and the principles of effectiveness, necessity and proportionality, and should contain provisions clearly identifying the scope and extent of the processing, the specific purpose involved, the categories of entity that can verify the certificate as well as the relevant safeguards to prevent discrimination and abuse, taking into account the risks to the rights and freedoms of data subjects.

Unlike the DGC I Proposal,92 the DCC I Regulation does not require the transmission or exchange of the personal data on the individual certificates across borders. The actual communication of the data would be
86 ibid 379. 87 Ní Loideain, ‘COVID-19 and “Immunity Passports” – The Way Forward’. 88 ‘Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC, PE/31/2018/REV/1’, [2018] OJ L295/39–98. 89 Oskar Josef Gstrein, ‘The EU Digital COVID Certificate: A Preliminary Data Protection Impact Assessment’, SSRN Electronic Journal (2021), 5, https://doi.org/10.2139/ssrn.3853689. 90 For more information, see: www.esat.kuleuven.be/cosic/sites/corona-app/. 91 Recital 48 DGC I. 92 Recital 39 DGC I.

done via a public-key infrastructure.93 In addition, should a Member State use a processor, no transfer from the processor to a third country shall take place.94
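To make the mechanics of such a public-key infrastructure concrete, the sketch below shows a verifier checking a certificate's digital signature against an issuer's public key. It is a minimal illustration under stated assumptions only: the actual DCC trust framework relies on COSE-signed CBOR payloads distributed through a central gateway, and the function and parameter names here are ours, not the framework's.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def verify_certificate(payload: bytes, signature: bytes,
                       issuer_public_key_pem: bytes) -> bool:
    """Check a certificate's digital signature against the issuer's public key.

    Only the issuer's *public* key needs to cross the border; the personal
    data contained in the certificate payload is never transmitted."""
    public_key = serialization.load_pem_public_key(issuer_public_key_pem)
    try:
        # Assumes an ECDSA P-256 key with SHA-256, a common signing scheme.
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```

The design choice matters for data protection: verification is possible wherever the issuers' public keys are available, so no health data has to flow between Member States.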

B.  Transparency and Fairness

Articles 5, 6 and 7 of the DCC I Regulation outline the types of personal data to be included in each certificate. The information on the certificate should be included in human-readable format (printed or displayed as plain text). The layout should be easily understandable, simple and user-friendly, which could be viewed in line with Recital 39 GDPR. Moreover, the information should be in at least two languages, the language of the issuing Member State and English, as seen above. The question arises as to how an individual who speaks neither language will be able to verify, with reference to the data accuracy principle, that the information contained within is accurate and correctly updated, in order to request a new certificate.95 Furthermore, particular attention would need to be paid to make those certificates accessible to persons with disabilities. Unlike the Proposal, the adopted text specifies that a new certificate shall be issued for each vaccination, test or recovery, without transferring data from previous issuances. However, no clarifications are provided on how data subjects will be empowered to exercise their rights in cross-border cases. Additionally, no information is provided as to whether the information will be provided in child-friendly language, given that children will be eligible to receive such certificates through their parents or a legal guardian. In its communication from 17 March 2021, the Commission states that the use of the certificates ‘should be accompanied by clear and transparent communication to citizens to explain its scope, use, clarify the safeguards to personal data protection’.96

C.  Types of Processed Data and Data Minimisation Considerations

Articles 5, 6 and 7 of the DCC I Regulation outline the types of personal data proposed to be included in those certificates. The data fields are explicitly stated in the Annex of the DCC I Regulation. The certificates shall contain some common and some distinct elements. In particular, all three types of certificates shall contain the identification of the holder and certificate metadata, such
93 Recital 51 DCC I. 94 Art 10, para 8 DCC I. 95 Art 3, para 4 DCC I. 96 ‘Communication from the Commission to the European Parliament, the European Council and the Council: A Common Path to Safe and Sustained Re-Opening’ (Brussels, 17 March 2021), 5, https://ec.europa.eu/info/sites/info/files/communication-safe-sustained-reopening_en.pdf.

as the certificate issuer or a unique certificate identifier. The vaccination certificates are expected to contain information about the vaccine administered; the test certificates information about the test carried out; and the certificates of recovery information about past SARS-CoV-2 infection. The proposed Regulation contemplates all those categories of data as personal data. Moreover, the EC is empowered to adopt delegated acts to add, modify or remove data fields on the categories of personal data, whenever necessary, as seen earlier. It is highlighted in the explanatory memorandum of the DGC I Proposal that the Regulation does not intend to establish a European database on vaccination, testing or recovery with respect to COVID-19. This is repeated in Recital 52 of the DCC I Regulation. The personal data will only be included in the issued certificate and not in another common database.97 Instead, a decentralised verification scheme of digitally signed interoperable certificates is envisaged.98 The eHealth Network – in relation to vaccination certificates – refers to a Unique Vaccination Certificate/assertion identifier (UVCI), ‘which could be used as a link to the underlying data registry’ but should not contain personal data.99 In line with the principle of data minimisation as determined in Article 5(1)(c) GDPR, the rationale is to process data that is adequate, relevant and limited to the purpose of the processing. In addition, Recital 39 of the GDPR further specifies that the personal data should only be processed if the purposes cannot be reasonably fulfilled by other means. The processing should abide by the ‘limited to what is necessary’ principle, referring not only to quantity but also to quality of data.100 In that context, one may process neither a large amount of data nor even a single datum that is not necessary for the established purpose. Consequently, one needs to take into account the content of a certificate (of any category) as a whole. Thus, as long as a certificate includes the full name of a person, is assigned a unique identifier, and its verification requires some form of ID binding,101 it would be difficult to demonstrate that the UVCI does not qualify as personal data, even if its specific elements do not include personal data as such. ID binding itself raises questions as to what kind of data will additionally have to be processed, for example biometric data included in the identification document the certificate holder will have to present for verification reasons. As previously stated, cross-border personal data exchange is not expected according to Recital 51 of the DCC I Regulation. Only the accompanying public keys of the issuers need to be transferred or accessed across borders. However, it is still uncertain whether data concerning one data subject collected from various

97 European Commission, DGC I, 6. 98 European Commission, DGC I, 3. 99 eHealth Network, ‘Guidelines on Verifiable Vaccination Certificates – Basic Interoperability Elements’, 6. 100 Kuner, Bygrave and Docksey, The EU General Data Protection Regulation (GDPR), 317. 101 eHealth Network, ‘Interoperability of Health Certificates: Trust Framework (v1.0)’, 7.

certificates would be somehow interlinked (for instance the test certificate and a vaccination certificate of the same person, or a series of test certificates of the same person), in order to create a potential medical record of an individual with respect to COVID-19 infections, recovery or testing, or to detect fraudulent behaviour. If a record were to be kept for each individual in a data registry, it would seem rather disproportionate to the initial purpose of issuance of certificates solely for free movement purposes. However, it remains unclear how authorities will be able to charge fees for the issuance of a new certificate in cases of repeated loss without the establishment of a data registry. Perhaps the revocation lists are envisaged to be used for this purpose as well, as part of the overall aim to prevent fraud. Specifically, Recital 19 and Article 4 of the DCC I Regulation provide for Member States to establish and exchange with other Member States certificate revocation lists in limited cases. Limited cases are interpreted as cases of fraud, errors, or the suspension of a defective COVID-19 vaccine batch. Those certificate revocation lists should not contain any personal data other than the unique certificate identifiers. The cross-border exchange of those lists constitutes, thus, an exception to the overall prohibition of cross-border personal data exchanges.
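The data minimisation logic described above can be illustrated with a simplified sketch of a certificate payload and a revocation check. The field names below are loose, hypothetical simplifications, not the authoritative dataset of the Annex; the point is that the revocation lists exchanged between Member States carry only unique certificate identifiers, never names or health data.

```python
# Illustrative, simplified vaccination certificate payload; field names
# are hypothetical and do not reproduce the Annex of the DCC I Regulation.
vaccination_certificate = {
    "holder": {"family_name": "Doe", "given_name": "Jane",
               "date_of_birth": "1980-01-01"},
    "vaccination": {
        "disease_targeted": "COVID-19",
        "dose_number": 2,
        "total_doses": 2,
        "date_of_vaccination": "2021-05-01",
        "member_state_of_vaccination": "BE",
    },
    "issuer": "BE Ministry of Health",
    "uvci": "URN:UVCI:01:BE:EXAMPLE0001",  # unique certificate identifier
}

# A revocation list, as exchanged between Member States in limited cases,
# contains only UVCIs: revoking a certificate discloses no personal data.
revoked_uvcis = {"URN:UVCI:01:BE:FRAUD0001", "URN:UVCI:01:BE:BADBATCH42"}

def is_revoked(certificate: dict) -> bool:
    return certificate["uvci"] in revoked_uvcis
```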

D.  Purpose Limitation and Storage Limitation

Another principle to be met is that of purpose limitation, which will be discussed together with the principle of storage limitation. Article 5(1)(b) GDPR specifies that personal data may be processed only for a specified, explicit and legitimate purpose. The purpose must be defined by the controller, in clear language, prior to the processing. The purpose of the EU COVID Digital Certificate framework is to facilitate the exercise of free movement stemming from the Treaties. Specifically, the objective of the proposed Regulations is the facilitation of free movement within the EU during the COVID-19 pandemic for EU citizens and third country nationals staying or residing in the EU.102 In this regard, it is stated that the Regulations would enable the concerned parties to ‘demonstrate that they fulfil public health requirements imposed, in compliance with EU law, by the Member State of destination’.103 Interestingly, although the explanatory memoranda of the DGC I and II Proposals present the free movement of individuals within the EU as the primary and sole aim, in one of the earliest, non-binding eHealth Network Guidelines on verifiable vaccination certificates the stakeholders considered proof of vaccination only for medical purposes.104 As part of this proof,
102 European Commission, DGC I, 4. 103 ibid. 104 eHealth Network, ‘Guidelines on Verifiable Vaccination Certificates – Basic Interoperability Elements’, 5.

the Network considers the example of a person receiving two doses of a vaccine in two different states or having to receive medical treatment in one Member State based on information included in a vaccination certificate provided by another Member State. The use of vaccination certificates for travel purposes in that case fell within what the Network calls ‘other purposes’, to be decided individually by Member States. In the DCC I Regulation, as seen earlier, it is also clearly stated that Member States may process the personal data included in the certificates issued by them for other purposes, but ‘the legal basis for processing for other purposes is to be provided for in national law, which must comply with Union data protection legislation’.105 Member States have demonstrated interest in processing the personal data for other purposes, for example with respect to the re-opening of the hospitality sector. Concerning the storage limitation principle,106 personal data cannot be kept beyond the time necessary to achieve the purpose of the processing. In line with Article 10, paragraph 3 of the DCC I Regulation, the personal data processed for issuing the certificates shall not be retained longer than strictly necessary for their purpose and in no case longer than the period for which the certificates may be used to exercise the right to free movement. In addition, the final text includes a provision limiting the retention of certificate revocation lists to a maximum period coinciding with the end of the application of the Regulation. In this regard, this provision opposes the idea originally expressed in the Proposal that the certificate regime would be ‘suspended’ once the WHO declares the end of the pandemic.107 This is of paramount importance, given that many Member States operate on emergency regimes that function on the principle of derogation. Thus, the initially proposed mechanism would have established a precedent to be used in the future, as part of a readiness mechanism108 which could resume ‘if the WHO declares another pandemic due to an outbreak of SARS-CoV-2, a variant thereof, or similar infectious diseases with epidemic potential’.109 Privacy International had referred to a potential ‘function creep’ of such certification mechanisms, turning them into a de facto digital identity management system.110 Like tracing applications, which did not fade out despite the lack of scientific evidence of their efficiency, the vaccination certificates could become a
105 Recital 48 DCC I. 106 Art 5(1)(e) in line with Art 25(2) GDPR. 107 Recital 42 DGC I. 108 ‘Communication from the Commission to the European Parliament, the European Council and the Council: A Common Path to Safe and Sustained Re-Opening’, 10. 109 European Commission, DGC I, 5. 110 Privacy International, ‘“Anytime and Anywhere”: Vaccination, Immunity Certificates, and the Permanent Pandemic’, European Digital Rights (EDRi) (blog), 10 March 2021, https://edri.org/our-work/anytime-anywhere-vaccination-immunity-certificates-pandemic/.

similar system, or they could even lead to the establishment of a health information system outside the scope of a health emergency. The DCC I Regulation also refers to two cases where data retention is not allowed. First, whenever certificates are used for non-medical purposes based on national legislation, no personal data retention should occur. Further, it provides that the retention of personal data obtained from the certificate by the Member State of destination, or by the cross-border passenger transport services operators required by national law to implement certain public health measures during the COVID-19 pandemic, should be prohibited.111 By contrast, this means that the data can be retained by the state of issuance. And this is indeed confirmed by the eHealth Network:

A repository used by Country A for storing health information and information about the issued health certificates. The system may be part of an Immunization Information System (IIS), a laboratory system or it may be stored by national, regional or local electronic health record systems, or on paper. The system may be centralised on the national level or it may be largely distributed.112

The proposed text did not set time limits for erasure. The agreed text, specifically Article 10(4) DCC I,113 provides that personal data are not to be kept longer than the period for which the certificates may be used. Moreover, paragraph 5 of the same Article limits the retention period for certificate revocation lists to the period of validity of the Regulation, ie until 30 June 2022.
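A hypothetical sketch of how an issuing authority might operationalise these limits is given below. The record structure, field names and helper function are our assumptions for illustration; the Regulation itself prescribes only the outcome, not an implementation.

```python
from datetime import date

# Period of validity of the DCC I Regulation: under Art 10(5), revocation
# lists may be retained no longer than this.
REGULATION_END = date(2022, 6, 30)

def purge_expired(issuance_records: list[dict], revocation_list: set[str],
                  today: date) -> tuple[list[dict], set[str]]:
    """Drop data once it can no longer serve the free-movement purpose."""
    if today > REGULATION_END:
        # Once the Regulation ceases to apply, nothing may be retained.
        return [], set()
    # Art 10(4): keep issuance data no longer than the certificate is usable.
    kept = [r for r in issuance_records
            if date.fromisoformat(r["certificate_valid_until"]) >= today]
    return kept, revocation_list
```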

E.  Data Controllership and Accountability

As established by the GDPR, a controller is an entity that determines the purposes and means of the processing of data.114 In practical terms, this means that a controller is the one who determines the reasons justifying the processing and the means of protecting the personal data. The EC’s Proposal determines that the authorities responsible for issuing the certificates shall be considered data controllers.115 Those authorities will most likely be public health authorities, which have to cooperate closely with the health organisations authorised to issue those certificates.116 According to Article 9 of the DCC I Regulation,117 the EC is empowered to adopt implementing acts to lay out the responsibilities of the data controllers and the data processors with respect to the implementation of the trust framework. This is an important observation, given that the role of the European Commission with respect to the

111 Recital 52 DCC I (previously, Recital 40 DGC I). 112 eHealth Network, ‘Interoperability of Health Certificates: Trust Framework (v1.0)’, 10–11. 113 Previously Art 9(3) DGC I Proposal. 114 Art 4(7) GDPR. 115 Recital 54 DCC I. 116 eHealth Network, ‘Interoperability of Health Certificates: Trust Framework (v1.0)’, 8. 117 Previously, Art 8 DGC I Proposal.

trust framework is not clarified in the adopted text, even though it is vested with a number of powers to adopt non-legislative acts. The fact that the adopted text hints that there will be at least 27 controllers, while no processors are precisely defined at this stage, draws attention to the risk already articulated by Advocate General Bobek: ‘Making everyone responsible means that no-one will in fact be responsible’.118 In addition, in practice, it may prove difficult to keep the number of processors involved very limited. As indicated in Article 10, paragraph 3 of the DCC I Regulation, competent authorities of the Member State of destination as well as transit or cross-border transport services would be able to process the personal data obtained from the certificate, without retention, if required by national law to implement certain health measures. The provisions prohibiting the transmission or transfer of personal data across borders119 seem not to affect this. Although intended to be minimised, the flow of sensitive categories of personal data could become rather vast, with respect also to the bilateral exchanges of revocation lists. Another question that arises is who will have access to the data included in those certificates and eventually process them. According to the DCC I Regulation, the term ‘competent authority’ refers to a competent authority under Directive 2001/83/EC120 assigned to each Member State. Even though not defined precisely by the text of Directive 2001/83/EC, those authorities regulate medicines approved under the national, decentralised or mutual recognition procedures, and are largely responsible for the enforcement of the medicines legislation.121 They form an EU network cooperating with the European Medicines Agency.122 As mentioned earlier, it seems that the EU COVID Digital Certificate scheme will be implemented in a way analogous to boarding passes and passports. The certificates could be checked before, during or after the journey together with an ID card or passport, for instance before entering the terminal, at check-in, at the security check, at border control and at the boarding gate during the journey or upon arrival.123 The eHealth Network distinguishes between offline and online verification possibilities. It further provides scenarios for verification, although these are at the discretion of the Member States.124
118 Case C-40/17, Fashion ID, Advocate General Opinion, para 92. 119 Recital 51 DCC I. 120 ‘Directive 2001/83/EC of the European Parliament and of the Council of 6 November 2001 on the Community code relating to medicinal products for human use’, [2001] OJ L311/67–128. 121 Richard Kingham, The Life Sciences Law Review (Law Business Research Ltd., 2017). 122 For more information, see: www.ema.europa.eu/en/partners-networks/eu-partners/eu-member-states/national-competent-authorities-human#list-of-national-competent-authorities-in-the-eea-section. 123 eHealth Network, ‘Guidelines on Validation of EU Digital COVID Certificates in the Context of Air Transport’, 30 June 2021, https://ec.europa.eu/health/sites/default/files/ehealth/docs/covid-certificate_air-transport_en.pdf. 124 eHealth Network, ‘Interoperability of Health Certificates: Trust Framework (v1.0)’, 12.

In a more recent publication, it provided detailed workflows and three use cases (a generic one, one in France and one in Spain). The use cases refer respectively to verification that takes place at an airport, via an airline website and via the portal of a Member State.125 The eHealth Network had also previously drafted four user stories, reflecting how the procedure of issuing a certificate, importing it into a wallet app, and verifying it offline or by an airline would look in practice.126 Nevertheless, there is a difference between the checking of boarding passes by a transport provider and the checking of passports and other identification documents by the transport provider and/or law enforcement or other border control officials. It should be recalled that even if the data are only automatically or instantly processed and not retained or in any way recorded, this can still qualify as processing under the GDPR, if other conditions are met.
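Bringing the verification elements discussed in this section together, the sketch below outlines an offline check of the kind contemplated in the eHealth Network use cases: the verifier holds cached issuer public keys and a revocation list, accepts or rejects the certificate, and retains nothing. It reuses the hypothetical verify_certificate helper and revocation-list structure from the earlier sketches; all names remain illustrative.

```python
def verify_at_gate(certificate: dict, payload: bytes, signature: bytes,
                   trusted_issuer_keys: dict, revoked_uvcis: set) -> bool:
    """Offline verification: signature check, then revocation check.

    Even this transient check constitutes 'processing' under the GDPR,
    although no personal data is retained or recorded by the verifier."""
    issuer_key = trusted_issuer_keys.get(certificate["issuer"])
    if issuer_key is None:
        return False  # unknown issuer: the certificate cannot be trusted
    if not verify_certificate(payload, signature, issuer_key):
        return False  # forged or tampered certificate
    if certificate["uvci"] in revoked_uvcis:
        return False  # revoked: fraud, error or defective vaccine batch
    return True       # admissible; nothing is stored by the verifier
```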

F.  Data Security

Security of the certificates issued by Member States was one of the specific objectives of the Proposal.127 Financial support from the EU to Member States is also foreseen for the adoption of secure infrastructures at national level, including piloting activities and security audits.128 The EC is specifically empowered to adopt implementing acts under Article 9 of the DCC I Regulation with respect to the security of the personal data, in relation to the nature of the data. In the adopted text, Recitals 53 and 54 of the DCC I Regulation repeat the need for data controllers and processors to take appropriate technical and organisational measures to ensure a level of security appropriate to the risk of the processing. Data controllers are also expected to establish security audit mechanisms. Security specifications are further analysed in the eHealth Network documentation.

IV.  Broader Legal, Societal and Ethical Considerations

A.  Open Issues and Concerns

Apart from open issues in relation to data protection law, scholars and civil society representatives have already raised concerns of broader interest. One pressing subject is whether the certificates will be somehow interconnected with the
125 eHealth Network, ‘Guidelines on Validation of EU Digital COVID Certificates in the Context of Air Transport’. For example, see Appendix: Spain has established the verification web portal www.spth.gob.es/. 126 eHealth Network, ‘Guidelines on Technical Specifications for Digital Green Certificates Volume 4 – European Digital Green Certificate Applications (Version 1.3)’, 21 April 2021, 10–14, https://ec.europa.eu/health/sites/default/files/ehealth/docs/digital-green-certificates_v4_en.pdf. 127 European Commission, DGC I, 28. 128 European Commission, DGC I, 30.

national official contact tracing and warning apps, or other apps developed during the pandemic. Even though Member States keep a degree of flexibility as to the medium and the format of the certificate they wish to implement, there has so far been no suggestion hinting at such intentions, which feature only by way of comparison and lessons learnt. In this regard, Member States remain free to decide on the particular characteristics of the certificates as long as they meet the minimum requirements introduced in the Regulation. Another concern relates to discrimination through the formation of safe and risky profiles. The certificates appear to create two distinct categories of individuals based on sensitive health data: those vaccinated, tested or recovered, and therefore ‘safe’ for the public; and, on the other hand, those not eligible or able for whatever reason to receive a certificate, and therefore ‘unsafe’. Such a distinction constitutes a black and white ‘binary indicator of risk’,129 ignoring other important risk factors,130 and leading to a two-tier society.131 Nevertheless, as of the time of writing, there is not enough epidemiological evidence with respect to vaccination to prove reduced transmissibility – as opposed to the well-established reduced risk of severe infection – or efficacy against new, more aggressive variants. This could be particularly discriminatory and even lead to an ‘immuno-privilege’ on the one hand and stigma and fear on the other for those persons who cannot get vaccinated for health reasons, cannot afford to be tested,132 or take several weeks or months to fully recover.133 A right to receive the certificates envisaged in the DCC framework is established. However, the impact that the Regulations may have on already disadvantaged groups, should vaccination or tests not become available to them, cannot be excluded.134 In addition, persons already vaccinated and holders of said certificates might still be required to undergo quarantine, self-isolation or a test after entry into a Member State’s territory, given that the epidemiological information changes fast and conditions of entry might change as well. In that case, Member States will be required to act swiftly through the established information exchange mechanism to prevent the rights of certificate holders from being hindered and the purpose of the Regulations thereby voided. It is yet to be seen what will happen with vaccinated individuals, after the phase-in period, in the case of countries, such as Hungary, which initially chose not to include information about the type of vaccine on their vaccination certificates.135
129 David Child, ‘“A Can of Worms”: Experts Weigh in on the Vaccine Passport Debate’, 14 March 2021, www.aljazeera.com/news/2021/3/14/vaccine-passport-qa. 130 Imogen Parker and Elliot Jones, ‘Something to Declare? Surfacing Issues with Immunity Certificates’, Ada Lovelace Institute (blog), 2 June 2020, www.adalovelaceinstitute.org/blog/something-to-declare-surfacing-issues-with-immunity-certificates/. 131 Ada Lovelace Institute, ‘COVID-19 Rapid Evidence Review: Exit through the App Store?’, 51. 132 Recital 39 DCC I. 133 Parker and Jones, ‘Something to Declare?’ 134 Recital 36 DCC I. 135 Vlagyiszlav Makszimov, ‘No Trace: Hungary Deletes Vaccine Origin from Jab Certificate’, www.euractiv.com, 1 March 2021, sec. Coronavirus, www.euractiv.com/section/coronavirus/news/no-trace-hungary-deletes-vaccine-origin-from-jab-certificate/.

Last but not least, establishing a certificate system that can be suspended rather than abolished strips it of its temporary character and may lead to a normalisation of health data checks for travel purposes within the EU, outside the scope of an emergency.

B.  Necessity, Legality and Proportionality: Insights from Recent ECtHR Case Law on Vaccination

The latest development of the DCC framework might have been supported by the recent case-law136 of the European Court of Human Rights (ECtHR) relating to joint applications concerning a direct or indirect vaccination duty. Although the Court highlighted that compulsory vaccination represents an interference with the right to respect for private life within the meaning of Article 8 of the European Convention on Human Rights,137 the Grand Chamber of the ECtHR did not limit itself to the subject matter of each case, but evaluated the cases from a broader perspective by assessing the necessity, legality and proportionality of the interference with fundamental rights caused by a direct or indirect vaccination duty. With regard to necessity and legality, the Court decided that an interference would be considered ‘necessary in a democratic society’ where it pursues a legitimate aim – combating diseases that represent a threat to public safety or economic well-being, or the prevention of disorder – by means of a legal tool that need not be of a primary law character.138 In addition, it concluded that a state may impose a compulsory vaccination policy in order to achieve an appropriate level of protection against serious diseases if this is not yet achieved through herd immunity.139 Concerning proportionality, the Court considered that a compulsory vaccination strategy deployed by a state could not be considered disproportionate when applied as a protective measure for the sake of the small number of vulnerable persons who cannot benefit from the vaccination. Importantly, the notional availability of less intrusive means to achieve the legitimate aim would not play a role in the proportionality element.140 Applying the rationale of the judgment to COVID-19 vaccination certification, it would be rather difficult to demonstrate the unnecessity, illegitimacy or disproportionality of the demand to have at one’s disposal some proof of vaccination (or, in its absence, another type of certificate). However, the question that remains unanswered is whether the processing of personal data as

136 ECtHR, Vavřička and Others v the Czech Republic App nos 47621/13, 3867/14, 73094/14 et al, Judgment 8.4.2021 [GC]. 137 Council of Europe, European Convention for the Protection of Human Rights and Fundamental Freedoms, as amended by Protocols Nos. 11 and 14, ETS 5, 4 November 1950. 138 Paragraph 272 of the Judgment. 139 Paragraph 288 of the Judgment. 140 Paragraph 306 of the Judgment.

established by the DCC framework could be considered necessary, legal and proportionate. The necessity element might be justified by the steadily growing scientific evidence of the favourable effects of vaccination and by the need to prevent the pandemic situation from continuing to impede the exercise of the right of free movement. On the other hand, the creation of a de facto ‘digital identity’ bearing information about any COVID-19-related intervention is clearly not easily justified. In this context, the question remains whether such a tool is necessary, considering that a cross-border health threat can also be combatted across the various national systems by quarantine or tests, which do not necessarily involve digitalisation. The aspect of legality shows points of divergence from the already existing systems of vaccination and the processing of personal data therein. While the previous vaccination regimes were based on already existing legal frameworks at national level and the flow of personal data across borders was rather limited, the legal framework enabling the processing of personal data in relation to COVID-19 vaccination has been established at EU rather than national level and was developed from scratch as a consequence of the pandemic, leaving aside pre-existing domestic vaccination registries. It is therefore questionable whether legality means the application of an already existing legal framework, and thus one reasonably foreseeable by society, or the creation of a legal basis in response to a particular situation. In addition, the EU’s interference in national health systems has often been criticised as exceeding its regulatory legitimacy.141 Although so far applied mainly in the fields of environment and trade, the precautionary principle142 can usefully be applied when assessing the proportionality of the creation of the DCC framework, given the large scale of the collection and processing of data involved. The precautionary principle is established by the Communication from the Commission on the precautionary principle143 and rests in particular on proportionality, non-discrimination and consistency. The lack of a regulatory impact assessment and the inconsistency of the regulatory framework could mean that the preconditions for applying the precautionary principle are not met; and where the precautionary principle cannot be applied, the system might be considered disproportionate. Still with regard to proportionality, it is necessary to return to the purpose of the DCC I and II Regulations, ie the harmonisation of a system at EU level in order to facilitate free movement and enable EU citizens to exercise the rights stemming
141 Andreas Føllesdal and Simon Hix, ‘Why There is a Democratic Deficit in the EU: A Response to Majone and Moravcsik,’ 44 Journal of Common Market Studies (2006) 533. 142 The precautionary principle presupposes that potentially dangerous effects deriving from a phenomenon, product or process have been identified, and that scientific evaluation does not allow the risk to be determined with sufficient certainty. The implementation of an approach based on the precautionary principle should start with a scientific evaluation, as complete as possible, and where possible, identifying at each stage the degree of scientific uncertainty. 143 Communication from the Commission on the precautionary principle, COM/2000/0001 final.

from the Treaties. However, the provisions of the Regulations leave some discretion to Member States to decide further. This may involve decisions on the acceptance of vaccines beyond those approved by the European Medicines Agency and other measures in case the epidemiological situation worsens. Applying the logic that a system not efficiently established from its roots may not be proportionate, one could conclude that if a system’s purpose is accomplished only partially, it cannot be efficient and is thus not proportionate. However, Recital 59 of the DCC I Regulation states that the Regulation does not go beyond what is necessary in order to achieve its objectives.

V.  Conclusion

A call for an updated EU Roadmap to enable the re-opening of borders and the lifting of restrictions – voiced especially by the transport and tourism sector and including demands for the scalability and interoperability of any given solution – has become more pressing than ever.144 Are there other alternative options to a digital certificate scheme? This would have been a question to be scrutinised in a regulatory impact assessment. This was not, however, the case. On the basis of a comparison of the proposed and adopted texts, it seems that a number of issues that would have been addressed in an impact assessment were nonetheless keenly debated during the trilogues, especially the introduction of a termination date for the two Regulations and the issues of retention and limitations on the cross-border exchange of personal data. On the other hand, the aspects of proportionality and necessity145 seemed not to be sufficiently addressed. In practice, the use of non-standardised practices for travel between Member States (as well as to and from third countries) has created confusion and obstacles to the free movement of citizens, on top of the other restrictions imposed in the context of the COVID-19 pandemic. Especially given the lack of trust and the inability to prove authenticity due to the use of different languages and formats,146 coordination and uniformity indeed appear to be of particular importance. Although the Regulation attempts to establish a temporary system, exceptional and applicable in this particular case of emergency, it is not clear – especially with reference to the initial intention of the Commission’s text to create a regime that would be suspended rather than terminated – whether this framework will not serve as a precedent for creating a blanket

144 See the Tourism Manifesto of 23 February 2021, available at: www.aci-europe.org/downloads/resources/Travel%20and%20Tourism%20Exit%20Strategy_final.pdf. 145 Ní Loideain, ‘COVID-19 and “Immunity Passports” – The Way Forward’. 146 EUROPOL, ‘Early Warning Notification – The Illicit Sales of False Negative COVID-19 Test Certificates’, 1 February 2021, www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-covid-19-test-certificates.

application regime through non-legislative acts or subsequent re-implementations. In addition, even though (a) the legislators, Member States and competent authorities applied knowledge derived from lessons learnt from their experience with the contact tracing and warning apps framework, and (b) the whole approach, also with reference to the eHealth Network’s extensive documentation, appears more mature, the Regulations seem to lack a strong sense of direction due to the time constraints. As we saw earlier, flexibility and urgency define the provisions of the framework to a large extent, and bring inconsistencies with them. For example, a reading of the proposed and adopted Regulations and the relevant documentation reveals some discrepancies in the language and terms used, for instance between the eHealth Network guidelines and the text of the Regulations, as discussed earlier. The eHealth Network, in cooperation with the EC and other key stakeholders, should establish a pool of common terms and definitions to enhance clarity and conformity and to support implementation by Member States. Member States should consult with the national supervisory authorities and perform a data protection impact assessment before (or at least as early as possible in) implementing their national systems, especially if they wish to use the certificates for purposes additional to those regulated by the Regulations. The role of the national supervisory authorities is also highlighted in the DCC I Regulation (Recital 54 DCC I). In the Proposal, financial support from the Union specifically for data protection impact assessments is foreseen, as necessary.147 Particular emphasis should be put on the safeguards surrounding such a solution. In this chapter, the authors presented the policy debate at EU level that led to the introduction by the EC of the two Regulatory Proposals for the establishment of a Digital Green Certificate scheme and its adoption as an EU COVID Digital Certificate scheme. The authors presented and critically commented on the main elements of the two Regulations in relation to the EU data protection and privacy law requirements. The authors conclude that the established system is still confronted with open issues, including but not limited to: technical and practical difficulties and complexities; the indirectly envisaged permanence of the system through allowing Member States to use it for other purposes; the enhanced powers of the EC to adopt acts through urgency procedures; the potential addition of more personal data to the certificate dataset; and the risks of discrimination and stigma. These open issues will in all probability lead to future debates and may become the subject of case-law. Even taking account of the shortage of time, the fact that the Proposals were not accompanied by an impact assessment has also been identified as an issue of concern. A regulatory impact assessment is an essential element of each legislative procedure. This deficiency should remain an exceptional circumstance and should not become a commonplace situation,

147 European Commission, DGC I, 31.

combined with the issuance of non-legislative implementing and delegated acts, which could de facto replace regular procedures in the name of urgency. The authors await the different domestic frameworks established by Member States in parallel with the coordinated scheme, for purposes other than travel, as well as the response of the European Data Protection Supervisor, the European Data Protection Board and the national supervisory authorities to such initiatives. This chapter does not study the potential implications of Article 22 GDPR, and especially the prohibition of decisions based on sensitive health data. However, especially with respect to verification, this question may deserve further attention, in particular since the prohibition can be lifted under the condition that suitable measures have been taken, in case Article 9(2)(g) GDPR is applicable, in conjunction with Article 6(1)(c) GDPR.

References

Ada Lovelace Institute. ‘COVID-19 Rapid Evidence Review: Exit through the App Store?’, 20 April 2020. www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store/. ——. ‘International Monitor: Vaccine Passports and COVID Status Apps’. Accessed 19 March 2021. www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/. Bacchi, Thomson Reuters. ‘Coronavirus Vaccine Passports: What You Need to Know’. Thomson Reuters Foundation News, 9 March 2021. https://news.trust.org/item/20201221125814-sv8ea/. Baylis, Françoise, and Natalie Kofler. ‘COVID-19 Immunity Testing’. Issues in Science and Technology (blog), 29 April 2020. https://issues.org/covid-19-immunity-testing-passports/. Charter of Fundamental Rights of the European Union, [2012] OJ C326/391–407 (n.d.). Child, David. ‘“A Can of Worms”: Experts Weigh in on the Vaccine Passport Debate’, 14 March 2021. www.aljazeera.com/news/2021/3/14/vaccine-passport-qa. ‘Communication from the Commission to the European Parliament, the European Council and the Council: A Common Path to Safe and Sustained Re-Opening’. Brussels, 17 March 2021. https://ec.europa.eu/info/sites/info/files/communication-safe-sustained-reopening_en.pdf. Council Recommendation on a common framework for the use and validation of rapid antigen tests and the mutual recognition of COVID-19 test results in the EU 2021/C 24/01, [2021] OJ C24/1–5. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32021H0122(01). Council of Europe, European Convention for the Protection of Human Rights and Fundamental Freedoms, as amended by Protocols Nos. 11 and 14, ETS 5, 4 November 1950. ‘Cyprus Plans to Open Borders in March for Travellers Vaccinated for COVID-19’. SchengenVisaInfo.Com, 4 December 2020. www.schengenvisainfo.com/news/cyprus-plans-to-open-borders-in-march-for-travellers-vaccinated-for-covid-19/.

‘Denmark Plans to Introduce “Vaccine Passports” for Travelers Soon’. SchengenVisaInfo.Com, 12 January 2021. www.schengenvisainfo.com/news/denmark-plans-to-introduce-vaccine-passports-for-travelers-soon/. Decision No 1082/2013/EU of the European Parliament and of the Council of 22 October 2013 on serious cross-border threats to health and repealing Decision No 2119/98/EC [2013] OJ L293/1–15. Directive 2001/83/EC of the European Parliament and of the Council of 6 November 2001 on the Community code relating to medicinal products for human use, [2001] OJ L311/67–128. Directive 2011/24/EU of the European Parliament and of the Council of 9 March 2011 on the application of patients’ rights in cross-border healthcare, [2011] OJ L88/45–65. EDPB-EDPS. ‘Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a Framework for the Issuance, Verification and Acceptance of Interoperable Certificates on Vaccination, Testing and Recovery to Facilitate Free Movement during the COVID-19 Pandemic (Digital Green Certificate) (Version 1.1)’, 8 April 2021. https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_dcg_en.pdf. eHealth Network. ‘Guidelines on COVID-19 Citizen Recovery Interoperable Certificates – Minimum Dataset (Release 1)’, 15 March 2021. https://ec.europa.eu/health/sites/health/files/ehealth/docs/citizen_recovery-interoperable-certificates_en.pdf. ——. ‘Guidelines on Technical Specifications for Digital Green Certificates Volume 4 – European Digital Green Certificate Applications (Version 1.3)’, 21 April 2021. https://ec.europa.eu/health/sites/default/files/ehealth/docs/digital-green-certificates_v4_en.pdf. ——. ‘Guidelines on Validation of EU Digital COVID Certificates in the Context of Air Transport’, 30 June 2021. https://ec.europa.eu/health/sites/default/files/ehealth/docs/covid-certificate_air-transport_en.pdf. ——. ‘Guidelines on Verifiable Vaccination Certificates – Basic Interoperability Elements’, 12 March 2021. https://ec.europa.eu/health/sites/health/files/ehealth/docs/vaccination-proof_interoperability-guidelines_en.pdf. ——. ‘Interoperability of Health Certificates: Trust Framework (v1.0)’, 12 March 2021. https://ec.europa.eu/health/sites/health/files/ehealth/docs/trust-framework_interoperability_certificates_en.pdf. European Commission. ‘Better Regulation in the Commission’. https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-better-regulation-commission.pdf. ——. Explanatory Memoranda of the Proposal for a Regulation of the European Parliament and of the Council on serious cross-border threats to health and repealing Decision No 1082/2013/EU. ——. ‘EU Digital COVID Certificate’. Text. https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response/safe-covid-19-vaccines-europeans/eu-digital-covid-certificate_en. ——. ‘Guidelines on Impact Assessment’. https://ec.europa.eu/info/sites/info/files/better-regulation-guidelines-impact-assessment.pdf. ——. ‘COVID-19: Digital Green Certificates (Factsheet)’, March 2021. https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response/safe-covid-19-vaccines-europeans/covid-19-digital-green-certificates_en. ——. Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on

vaccination, testing and recovery to third-country nationals legally staying or legally residing in the territories of Member States during the COVID-19 pandemic (Digital Green Certificate), Brussels, 17.3.2021 COM(2021) 140 final 2021/0071 (COD). https://ec.europa.eu/info/sites/info/files/en_green_certif_tcn_home_reg140final.pdf. ——. Proposal for a Regulation of the European Parliament and the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate), Brussels, 17.3.2021 COM(2021) 130 final 2021/0068 (COD). https://ec.europa.eu/info/sites/info/files/en_green_certif_just_reg130_final.pdf. ——. Proposal for a Regulation of the European Parliament and of the Council on serious cross-border threats to health and repealing Decision No 1082/2013/EU, COM(2020) 727 final. European Court of Human Rights. ‘Guide on Article 8 of the European Convention on Human Rights – Right to Respect for Private and Family Life, Home and Correspondence’. Strasbourg: Council of Europe, 31 August 2019. www.echr.coe.int/Documents/Guide_Art_8_ENG.pdf. European Economic and Social Committee. ‘Opinion “Digital Green Certificate”’, 27 April 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=PI_EESC%3AEESC-2021-01771-AC. EUROPOL. ‘Early Warning Notification – The Illicit Sales of False Negative COVID-19 Test Certificates’, 1 February 2021. www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-covid-19-test-certificates. Gstrein, Oskar Josef. ‘The EU Digital COVID Certificate: A Preliminary Data Protection Impact Assessment’. SSRN Electronic Journal, 2021. https://doi.org/10.2139/ssrn.3853689. Holmes, Oliver, and Quique Kierzenbaum. ‘Green Pass: How Are Covid Vaccine Passports Working for Israel?’ The Guardian, 28 February 2021, sec. World news. www.theguardian.com/world/2021/feb/28/green-pass-how-are-vaccine-passports-working-in-israel. Kingham, Richard. The Life Sciences Law Review. Law Business Research Ltd., 2017. Kuner, Christopher, Lee A. Bygrave, and Christopher Docksey, eds. The EU General Data Protection Regulation (GDPR): A Commentary. Oxford: Oxford University Press, 2019. Makszimov, Vlagyiszlav. ‘No Trace: Hungary Deletes Vaccine Origin from Jab Certificate’. www.euractiv.com, 1 March 2021, sec. Coronavirus. www.euractiv.com/section/coronavirus/news/no-trace-hungary-deletes-vaccine-origin-from-jab-certificate/. McMahon, Shannon. ‘Everything Travelers Need to Know about Vaccine Passports’. Washington Post, 3 March 2021. www.washingtonpost.com/travel/2020/12/08/vaccine-passport-immunity-app-covid/. Mishra, Manas, and Amruta Khandekar. ‘Tech and Health Giants Push for Digital COVID “Vaccine Passports”’. Thomson Reuters Foundation News, 14 January 2021. https://news.trust.org/item/20210114121756-i3lpf/. Ní Loideain, Nóra. ‘COVID-19 and “Immunity Passports” – The Way Forward’. blogdroiteuropéen (blog), 8 October 2020. https://blogdroiteuropeen.com/2020/10/08/covid-19-and-immunity-passports-the-way-forward-by-nora-ni-loideain/. Parker, Imogen, and Elliot Jones. ‘Something to Declare? Surfacing Issues with Immunity Certificates’. Ada Lovelace Institute (blog), 2 June 2020. www.adalovelaceinstitute.org/blog/something-to-declare-surfacing-issues-with-immunity-certificates/.

Phelan, Alexandra L. ‘COVID-19 Immunity Passports and Vaccination Certificates: Scientific, Equitable, and Legal Challenges’. The Lancet 395, no. 10237 (May 2020): 1595–98. https://doi.org/10.1016/S0140-6736(20)31034-5. Privacy International. ‘“Anytime and Anywhere”: Vaccination, Immunity Certificates, and the Permanent Pandemic’. European Digital Rights (EDRi) (blog), 10 March 2021. https://edri.org/our-work/anytime-anywhere-vaccination-immunity-certificates-pandemic/. ——. ‘Immunity Passports and Covid-19: An Explainer’. Privacy International (blog), 21 July 2020. www.privacyinternational.org/explainer/4075/immunity-passports-and-covid-19-explainer. ——. ‘The Looming Disaster of Immunity Passports and Digital Identity’. Privacy International (blog), 21 July 2020. http://privacyinternational.org/long-read/4074/looming-disaster-immunity-passports-and-digital-identity. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance), Pub. L. No. 32016R0679, [2016] OJ L119. Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC, PE/31/2018/REV/1, [2018] OJ L295/39–98. Regulation (EU) 2021/953 of the European Parliament and of the Council of 14 June 2021 on a framework for the issuance, verification and acceptance of interoperable COVID-19 vaccination, test and recovery certificates (EU Digital COVID Certificate) to facilitate free movement during the COVID-19 pandemic, PE/25/2021/REV/1, [2021] OJ L211/1–22. Regulation (EU) 2021/954 of the European Parliament and of the Council of 14 June 2021 on a framework for the issuance, verification and acceptance of interoperable COVID-19 vaccination, test and recovery certificates (EU Digital COVID Certificate) with regard to third-country nationals legally staying or residing in the territories of Member States during the COVID-19 pandemic, PE/26/2021/REV/1, [2021] OJ L211/24–28. Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by Member States of the Commission’s exercise of implementing powers ([2011] OJ L55/13). World Health Organization. ‘Scientific Brief: “Immunity Passports” in the Context of COVID-19’, 24 April 2020. www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19. Wilson, Kumanan, Katherine M. Atkinson, and Cameron P. Bell. ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’. The American Journal of Tropical Medicine and Hygiene 94, no. 3 (2 March 2016): 485–88. https://doi.org/10.4269/ajtmh.15-0510. Zakrzewski, Cat. ‘Analysis | The Technology 202: Tech Giants Are Teaming up to Build Digital Vaccine Records’. Washington Post, 15 January 2021. www.washingtonpost.com/politics/2021/01/15/technology-202-tech-giants-are-teaming-up-build-digital-vaccine-records/.


9

The DPIA: Clashing Stakeholder Interests in the Smart City?

LAURENS VANDERCRUYSSE,1 MICHAËL DOOMS2 AND CAROLINE BUTS3

Abstract

The development of a Smart City hinges on the collaboration of various stakeholder groups with diverging interests. From a technological perspective, the Smart City is often considered the pinnacle of urban efficiency. From a business perspective, there is a straightforward financial incentive to subscribe to that narrative. However, from a societal perspective, there is a clear friction regarding several values and norms. Data protection constitutes one such important area of conflict. This chapter focuses on the data protection impact assessment (DPIA) in the context of smart city service development, and identifies and evaluates the various stakeholder interests that feature during this particular data protection interaction. Through 16 data protection expert interviews, we identify eleven pertinent, distinct interests: i. compliance; ii. cost control; iii. data management; iv. efficiency; v. generation of trust; vi. income acquisition; vii. limiting impact on service performance; viii. reputation building; ix. risk management; x. safeguarding competition; and xi. safeguarding data security. The interests can be categorised into financial, competition/competitiveness, social, and risk themes. Using the analytical hierarchy process (AHP) method, we evaluate the salience of the interests in the context of the DPIA. We find generation of trust, safeguarding data security, and risk management to be the most salient. The relative lack of importance of financial and competition/competitiveness interests is remarkable. The results of this exploratory research support all Smart City stakeholders in clarifying and rationalising their data protection interactions, and serve as indicators for potential DPIA pitfalls. Furthermore, stakeholders can use these

1 Department of Business, Vrije Universiteit Brussel, Brussels, Belgium.
2 Department of Business, Vrije Universiteit Brussel, Brussels, Belgium.
3 Department of Applied Economics, Vrije Universiteit Brussel, Brussels, Belgium.

findings to maximise their influence on the DPIA through coalition building. Data protection authorities can utilise this explorative study as a starting point to develop participation guidelines. Diligent stakeholder interest consideration is central to sustainable Smart City development; this chapter lays the groundwork for further research.

Keywords

Smart Cities, stakeholder interests, data protection, data protection impact assessment (DPIA).

I.  Introduction

The development of Smart City (SC) services requires the cooperation of multiple stakeholders.4 There can be no SC service without a Smart City service provider (SCSP) nor without an urban area to provide the service in.5 Furthermore, it is evident that attracting and retaining users is key to any service provision. Additionally, as SC services sit at the nexus of service provision, public governance and technological evolution, it is important that SC growth be duly managed and regulated. This intricate ecosystem thus also requires regulation to guide stakeholders, adding a regulator to monitor compliance. In general, the development of SC services is hence characterised by interactions between a wide variety of stakeholders, and the associated data protection interactions are no exception.6 This chapter focuses on one specific data protection interaction, namely the DPIA. The DPIA, which was introduced by the General Data Protection Regulation (GDPR), allocates the responsibility for assessing the impact of certain data processing activities on data protection to the data controller.7 From a stakeholder analysis perspective, the DPIA constitutes an interesting case for two main reasons. First, the DPIA is a relatively new data protection interaction and, as a result, can be expected to allow a more flexible development of stakeholder interactions than more established, formalised processes. The novelty leads to situations in which the various stakeholders still have to find their place and voice.

4 Robert Wilhelm Siegfried Ruhlandt, ‘The Governance of Smart Cities: A Systematic Literature Review,’ Cities 81 (November 2018): 1–23. 5 SCSP should here be interpreted broadly as any actor who provides a SC service as defined below. This, however, explicitly excludes any actor merely selling a SC solution and thus not providing a service. 6 Data protection interaction should here be interpreted as any interaction between various SC stakeholders concerning the topic of data protection. 7 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35.

Second, while the responsibility for the DPIA rests firmly on the data controller, the legislator chose to distribute some accountability to other stakeholders.8 This adds importance to the DPIA from the perspective of multiple stakeholders, and high-stakes interactions tend to be characterised by more truthful revelation of interests.9 Taking into account the particular set-up of SC services, we conduct an explorative, descriptive analysis of stakeholder interests. First, we investigate which stakeholder interests emerge during the DPIA (RQ1). To that end, we carry out 16 interviews with data protection experts representing an array of different SC stakeholder groups. Second, we evaluate the salience of the identified stakeholder interests during the DPIA (RQ2) by applying the analytical hierarchy process (AHP) method. As a case-study, we chose the region of Flanders, Belgium. Focusing on SCs in Flanders as a single case-study ensures elevated added value because: i) DPIA research with a regional focus is scarce; ii) data protection expertise in the region is limited;10 and iii) Flanders constitutes an archetypal example in terms of urbanisation in the European context.11 Stakeholders involved in DPIAs can build on our findings to organise ex-ante reflections on their interests, and on how to balance these interests with those of other stakeholder groups. This could lead to more efficient DPIA processes and the formalisation of coalitions of stakeholders with similar interests. The results also allow for more normative research in the direction of sustainability guidelines for SC data protection. Inclusive SC governance has been identified as a central success factor for durable and sustainable SC development;12 our findings offer novel insights in that regard. This chapter is structured as follows: section II presents the fundamentals; section III comprises the methodology; section IV lays out the analysis; section V discusses the wider implications of the findings; and section VI concludes.

II. Fundamentals

A. DPIAs

The GDPR forms the backbone of the legislative framework concerning data protection in the European Union (EU). The regulation has updated the data protection tools and procedures relative to the 1995 Data Protection Directive (DPD) in response to the booming digital age.13

8 ibid. 9 Vernon Smith and James Walker, ‘Monetary Rewards and Decision Cost in Experimental Economics,’ Economic Inquiry 31, no. 2 (April 1993): 245–261. 10 Jonathan Desdemoustier and Nathalie Crutzen, ‘Etat des lieux sur la dynamique « Smart City » en Belgique : Un baromètre quantitatif,’ Smart City Institute report, 2017. Accessed 9 November 2019. 11 Barbara Tempels, Thomas Verbeek, Ann Pisman, and Georges Allaert, Verstedelijking in de Vlaamse open ruimte: een vergelijkende studie naar vijf transformaties (Heverlee: Steunpunt Ruimte en Wonen, 2012). 12 Prathivadi Anand and Julio Navío-Marco, ‘Governance and Economics of Smart Cities: Opportunities and Challenges,’ Telecommunications Policy, Smart Cities: Governance and Economics, 42, no. 10 (November 2018): 795–799.

Overarching GDPR principles include, among others, transparency, fairness and accountability.14 The European legislator aspired to a shift towards the responsibilisation of stakeholders handling personal data.15 One specific measure of this responsibilisation shift is the DPIA. This assessment of the impact of data processing operations on data protection is a new task attributed to the data controller. The privacy impact assessment (PIA), sometimes referred to as the forerunner of the DPIA, has been in use as a data protection tool since the 1990s.16 The EU introduced its own variant of the DPIA in Article 35 of the GDPR. In case an envisaged data processing activity is anticipated to pose ‘significant risks to the fundamental rights and freedoms of data subjects’, a DPIA is to be performed.17 Following Vandercruysse, Buts, and Dooms (2020), a SC service constitutes ‘a solution to a societal demand based on technology that interacts with the physical world, where data collection and data use are central and several stakeholders, both public and private, are involved.’18 This definition includes large-scale, systematic monitoring of public space, which is explicitly mentioned in paragraph 3 of Article 35 as a processing operation requiring a DPIA.19 Consequently, at least a subset of SC services is expected to be subject to the DPIA obligation.20 Article 35 stipulates the specific requirements for such an assessment. It should as a minimum contain the following elements: i) a description of the foreseen data processing operation; ii) a description and justification of the necessity and proportionality of the operation; iii) an identification and assessment of the data protection risks resulting from the introduction of the operation; and iv) a plan for the mitigation of the risks identified in the previous step.21

13 Lawrence Ryz, and Lauren Grest, ‘A New Era in Data Protection,’ Computer Fraud & Security 2016, no. 3 (March 2016): 18–20. 14 Michelle Goddard, ‘The EU General Data Protection Regulation (GDPR): European Regulation That Has a Global Impact,’ International Journal of Market Research 59, no. 6 (November 2017): 703–705. 15 Damian Clifford, and Jef Ausloos, ‘Data Protection and the Role of Fairness,’ Yearbook of European Law 37 (January 2018): 130–187. 16 David Tancock, Siani Pearson, and Andrew Charlesworth, ‘Analysis of Privacy Impact Assessments within Major Jurisdictions,’ in 2010 Eighth International Conference on Privacy, Security and Trust, 118–125, 2010. 17 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35, para 1. 18 Paolo Neirotti, Alberto De Marco, Anna Corinna Cagliano, Giulio Mangano, and Francesco Scorrano, ‘Current Trends in Smart City Initiatives: Some Stylised Facts,’ Cities 38 (June 2014): 25–36; Laurens Vandercruysse, Caroline Buts, and Michaël Dooms, ‘A Typology of Smart City Services: The Case of Data Protection Impact Assessment,’ Cities 104 (September 2020): 102731; Nils Walravens, and Pieter Ballon, ‘Platform Business Models for Smart Cities: From Control and Value to Governance and Public Value,’ IEEE Communications Magazine 51, no. 6 (June 2013): 72–79. 19 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35, para 3. 20 Shakila Bu-Pasha, ‘The Controller’s Role in Determining ‘High Risk’ and Data Protection Impact Assessment (DPIA) in Developing Digital Smart City,’ Information & Communications Technology Law 29, no. 3 (September 2020): 391–402.

The DPIA process aspires to ownership of data protection choices, and of the ensuing risks, by data controllers. However, the design of the actual DPIA process is largely left to the stakeholders in charge.22 It is exactly this freedom of movement in terms of processes, and more particularly the freedom to place distinct emphases related to stakeholder interests, that is under investigation in this chapter. It is vital to note that both the appropriate methodology and the scope of the DPIA are subject to debate. Several of the most used methodologies differ quite considerably in terms of approach and focus.23 In addition, some were explicitly developed to suit local or sectoral needs.24 Regarding scope, for example, certain scholars assess the DPIA to be almost a full-blown fundamental rights assessment,25 while others imply that the DPIA should still focus primarily on (the risks to) the fundamental right to data protection.26 In many cases the debates on methodology and scope interlink, as they should. In practice, where in-depth expertise is often lacking and time is a real constraint, scope might follow from the methodology rather than vice versa. In that sense, the choice of methodology becomes all the more important. The sheer variety of methodologies further underlines the value of this research, since this chapter abstracts from concrete methodologies to focus on the underlying interests. Deeper insight into these interests might uncover motives for choosing certain methodologies over others. In addition, insights into the generic interests that feature during the DPIA, as well as their relative salience, should render the exercise more efficient, and could offer regulators a stepping stone towards developing suitable (participation) guidelines.

21 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35, para 7. 22 Belgian Data Protection Authority, ‘Aanbeveling nr. 01/2018 van 28 februari 2018.’ 23 Tamas Bisztray, and Nils Gruschka, ‘Privacy Impact Assessment: Comparing Methodologies with a Focus on Practicality,’ in Secure IT Systems, ed. Aslan Askarov, René Rydhof Hansen, and Willard Rafnsson. Lecture Notes in Computer Science. (Cham: Springer International Publishing, 2019); Raphaël Gellert, ‘Understanding the Notion of Risk in the General Data Protection Regulation,’ Computer Law & Security Review 34, no. 2 (April 2018): 279–88; Stephen Hart, Anna Lisa Ferrara, and Federica Paci, ‘Fuzzy-Based Approach to Assess and Prioritize Privacy Risks,’ Soft Computing 24, no. 3 (February 2020): 1553–63; David Wright, Rachel Finn, and Rowena Rodrigues, ‘A Comparative Analysis of Privacy Impact Assessment in Six Countries,’ Journal of Contemporary European Research 9, no. 1 (January 2013). 24 Dimitra Georgiou, and Costas Lambrinoudakis, ‘Data Protection Impact Assessment (DPIA) for Cloud-Based Health Organisations,’ Future Internet 13, no. 3 (March 2021): 66; Marco Todde, Marco Beltrame, Sara Marceglia, and Cinzia Spagno, ‘Methodology and Workflow to Perform the Data Protection Impact Assessment in Healthcare Information Systems,’ Informatics in Medicine Unlocked 19 (January 2020): 100361. 25 Dara Hallinan, and Nicholas Martin, ‘Fundamental Rights, the Normative Keystone of DPIA,’ European Data Protection Law Review 6, no. 2 (June 2020): 178–193. 26 Katerina Demetzou, ‘Data Protection Impact Assessment: A Tool for Accountability and the Unclarified Concept of ‘High Risk’ in the General Data Protection Regulation,’ Computer Law & Security Review 35, no. 6 (November 2019): 105342.


B. DPIA Stakeholder Groups

Stakeholders are traditionally identified through their stakes in a focal organisation.27 While this approach allows for, and has explicitly been combined with, a focus on the particular issues that this central organisation is faced with, it has been argued that the emphasis on the organisation unduly reduces the complexity of current organisational issues.28 This is especially problematic when multiple, conflicting stakeholder perspectives and values are relevant.29 Given the fundamental rights dimension of data protection, it is clear that the issues of data protection and the DPIA are broader than the boundaries of any focal organisation. For the purposes of this chapter, DPIA stakeholders are thus to be interpreted following an issue-focused stakeholder management approach,30 the issue at hand being ‘the DPIA on a SC service’. In this context, a stakeholder can be defined as in Roloff (2008): ‘a stakeholder is any group or individual who can affect or is affected by the approach to the issue.’31 Since the SC is an intricate multi-stakeholder environment, identifying an encompassing stakeholder list for SC issues can be a complex task.32 In response, a wide variety of stakeholder categorisations have been developed.33 However, with regard to the DPIA, these classifications tend to be impractical as they gloss over highly relevant inherent DPIA characteristics (eg stakeholders concurrently belonging to different stakeholder groups).

27 Jeff Frooman, ‘The Issue Network: Reshaping the Stakeholder Model,’ Canadian Journal of Administrative Sciences / Revue Canadienne Des Sciences de l’Administration 27, no. 2 (June 2010): 161–173. 28 Pernille Eskerod, Martina Huemann, and Grant Savage, ‘Project Stakeholder Management – Past and Present,’ Project Management Journal 46, no. 6 (December 2015): 6–14; Domenec Mele, Not only Stakeholder Interests. The Firm Oriented toward the Common Good (Notre Dame: University of Notre Dame Press, 2002). 29 Julia Roloff, ‘Learning from Multi-Stakeholder Networks: Issue-Focused Stakeholder Management,’ Journal of Business Ethics 82, no. 1 (September 2008): 233–250. 30 ibid. 31 ibid 238. 32 Igor Calzada, ‘Smart Citizens from Data Providers to Decision-Makers? The Case Study of Barcelona.’ Sustainability 10, no. 9 (September 2018): 3252. 33 ibid; Victoria Fernandez-Anez, José Miguel Fernández-Güell, and Rudolf Giffinger, ‘Smart City Implementation and Discourses: An Integrated Conceptual Model. The Case of Vienna,’ Cities 78 (August 2018): 4–16; Dagmar Haase, Neele Larondelle, Erik Andersson, Martina Artmann, Sara Borgström, Jürgen Breuste, Erik Gomez-Baggethun, et al, ‘A Quantitative Review of Urban Ecosystem Service Assessments: Concepts, Models, and Implementation,’ AMBIO 43, no. 4 (May 2014): 413–433; Zaheer Khan, Zeeshan Pervez, and Abdul Ghafoor Abbasi, ‘Towards a Secure Service Provisioning Framework in a Smart City Environment,’ Future Generation Computer Systems 77 (December 2017): 112–135; Mauricio Marrone, and Mara Hammerle, ‘Smart Cities: A Review and Analysis of Stakeholders’ Literature,’ Business & Information Systems Engineering 60, no. 3 (June 2018): 197–213; Jérémy Robert, Sylvain Kubler, Niklas Kolbe, Alessandro Cerioni, Emmanuel Gastaud, and Kary Främling, ‘Open IoT Ecosystem for Enhanced Interoperability in Smart Cities – Example of Métropole De Lyon,’ Sensors 17, no. 12 (December 2017): 2849.

Table 1  DPIA stakeholder groups according to A29WP34 and Belgian35 DPA. Own creation based on respective guidelines

DPIA stakeholder groups according to A29WP | DPIA stakeholder groups according to Belgian DPA | DPIA stakeholder groups according to chapter
Data controllers | Controllers | (Joint) data controllers
Processors | Processors | Data (sub-)processors
Specialized consultants (internal, e.g. DPO and CISO, or external) | DPO | Specialized consultants/researchers
Data subjects | Population concerned or their representatives | Citizens
— | Population at large | Citizens
Regulators | — | Data protection authorities

Guidelines by regulators indicate which stakeholders play a role during the DPIA. Two guidelines are particularly relevant for the Flemish SC. On the one hand, there is WP 248 by the Article 29 Working Party (A29WP). The A29WP is the predecessor of the European Data Protection Board (EDPB), which is the umbrella organisation of all European data protection regulators. On the other hand, the national Belgian Data Protection Authority (DPA) issued Recommendation 01/2018. Table 1 shows that the stakeholder groups comprised in the two guidelines overlap considerably but are not identical. We distill our own list of stakeholder groups from the two guidelines, as shown in the third column of Table 1. It is important to note that one stakeholder can, even simultaneously, be a different type of stakeholder in different data processing operations. The list of stakeholders should therefore be interpreted as firmly grounded in the specific data processing operation. Per data processing operation, we distinguish five major idiosyncratic stakeholder groups: i) (joint) data controllers; ii) data (sub-)processors; iii) specialised consultants/researchers; iv) citizens; and v) the data protection authorities. What follows is a description of these different stakeholder groups.

i. (Joint) Data Controllers

A data controller decides independently on the purposes and means of the processing operation.36 Article 24 GDPR details the responsibilities of the data controller. These include the installation of appropriate safeguards, both at a technical and at an operational level, to guarantee that the operation complies with the regulation along the entire processing chain.

34 Article 29 Working party, ‘Guidelines on Data Protection Impact Assessment (DPIA) (WP 248).’ 35 Belgian Data Protection Authority, ‘Aanbeveling nr. 01/2018 van 28 februari 2018.’ 36 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 4, para 7.

The DPIA is one of the specific obligations resting on the data controller under the GDPR.37 When more than one stakeholder decides on the purposes and means of a processing operation, they become joint controllers.38 The regulation stipulates that a division of responsibilities should be outlined contractually between the partners and that this division should be clearly communicated. With regard to the DPIA, representatives of the different controllers can thus take all decisions jointly, or tasks can be divided to some extent.39

ii. Data (Sub-)Processors

Data processors can be used to process data that the controller has chosen to collect.40 It is instructive to think of the processor as an instrument of the controller. Article 28 of the GDPR demarcates the responsibilities of the data processor.41 The processor is contracted by the controller to achieve predefined purposes through predefined means. An example could be a data controller that wants to map traffic flow densities through the use of cameras, and therefore enters into a contract with a data processor that will execute the idea. Note that a data processor in turn might outsource part of its processing to one or more sub-processors, creating a third processing layer.42 With respect to the DPIA, paragraph 3 of Article 28 clearly states that the processor needs to assist the data controller in conducting this assessment by providing the necessary information.43 Technical information concerning the processing set-up used by the processor is needed to conduct a full DPIA; without this information, risk estimations are deficient.44

37 Luca Marelli, and Giuseppe Testa, ‘Scrutinizing the EU General Data Protection Regulation,’ Science 360, no. 6388 (May 2018): 496–498. 38 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 26, para 1. 39 Laurens Vandercruysse, Caroline Buts, and Michaël Dooms, ‘Practitioner’s Corner Data Control in Smart City Services: Pitfalls and How to Resolve Them,’ European Data Protection Law Review 5, no. 4 (December 2019): 554–560. 40 Christina Tikkinen-Piri, Anna Rohunen, and Jouni Markkula. ‘EU General Data Protection Regulation: Changes and Implications for Personal Data Collecting Companies.’ Computer Law & Security Review 34, no. 1 (February 2018): 134–153. 41 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 28. 42 ibid, Art 28, para 4. 43 ibid, Art 28, para 3. 44 Jenna Lindqvist, ‘New Challenges to Personal Data Processing Agreements: Is the GDPR Fit to Deal with Contract, Accountability and Liability in a World of the Internet of Things?’ International Journal of Law and Information Technology 26, no. 1 (March 2018): 45–63.


iii. Specialised Consultants/Researchers

Specialised consultants and researchers can play a role in an advisory or control capacity. Paragraph 2 of Article 35 GDPR explicitly requires the data protection officer (DPO) to be consulted in the DPIA process.45 Even though the explicit references to specialised consultants stop there, it might be useful to involve other specialists as well. In the context of Flemish city administrations, a lack of data protection knowledge leads to the consultation of external expertise,46 eg data protection lawyers and information security specialists. While it is not standard procedure, some organisations regularly outsource substantial parts of, or even the entire, DPIA process.47 The European regulator explicitly foresees a situation where an external consultant becomes the leader of a DPIA.48

iv. Citizens

Citizens are stakeholders in the DPIA as, ‘where appropriate’, the controller should seek their views.49 However, the controller can argue that such a consultation is in contravention of ‘commercial or public interests or the security of processing operations’.50 In contrast to the A29WP, which refers to ‘data subjects’, and the Belgian DPA, which differentiates between the ‘population concerned or their representatives’ and the ‘general population’, we opt for the more neutral ‘citizens’. At the time of conducting the DPIA, ie ex ante the processing, any differentiation or specification is void of practical use. Especially regarding SC services, it is often impossible to narrow the ‘general population’ down to the ‘population concerned’ or to a subset of potential ‘data subjects’ at this stage (an exception would be smart grids51). Following the definition above, data collection in the context of a SC service regularly takes place in the public space. Therefore, one could argue that everyone who could ever utilise this public space, ie every citizen, is a ‘concerned person’ or ‘data subject.’

45 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35, para 2. 46 Desdemoustier and Crutzen, ‘Etat des lieux sur la dynamique « Smart City » en Belgique.’ 47 Joshua Blume, ‘A Contextual Extraterritoriality Analysis of the DPIA and DPO Provisions in the GDPR Notes.’ Georgetown Journal of International Law 49, no. 4 (Summer 2018): 1425–1460; Dariusz Kloza, Alessandra Calvi, Simone Casiraghi, Sergi Vazquez Maymir, Nikolaos Ioannidis, Alessia Tanas, and Niels van Dijk, ‘Data protection impact assessment in the European Union: developing a template for a report from the assessment process,’ Report, 2020. Accessed 14 December 2020. 48 Article 29 Working party, ‘Guidelines on Data Protection Impact Assessment (DPIA) (WP 248).’ 49 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35, para 9. 50 ibid. 51 Abdulrahaman Okino Otuoze, Mohd Wazir Mustafa, and Raja Masood Larik, ‘Smart Grids Security Challenges: Classification by Sources of Threats,’ Journal of Electrical Systems and Information Technology 5, no. 3 (December 2018): 468–83.

In short, citizens do not have a fixed role in the DPIA process, despite their data being directly or indirectly used in the data processing operation. The feasibility of collecting a representative sample, as well as of holding a public consultation on the DPIA altogether, remains an open question. The use of civil society organisations as a proxy could constitute an elegant solution to the sampling problem.

v. Data Protection Authorities

Finally, the national DPA can be an important player as the regulator overseeing compliance with the GDPR and the national data protection laws.52 In that regard, the DPA has consultation and control competences over the DPIAs performed within its jurisdiction.53 Additionally, paragraph 1 of Article 36 provides for data controllers to consult the DPA when the controller believes that risks remain after implementation of the mitigation measures indicated in the provisional DPIA.54 When asked for a consultation, the DPA can provide advice in written form.55 Knowing the consequences of going against the advice of the authority, the controller should seriously consider its remarks.56

C. Diverging and Dynamic Interests

Research has shown that stakeholder interactions tend to be more problematic when parties’ interests differ.57 Generally, previous research on SC issues pits business interests against (human) values such as transparency and trust.58 However, too often these discussions merely set a neoliberal dystopia, ie a SC solely created by and for business, against a co-created, participatory-governed utopia.59

52 Belgian Data Protection Authority, ‘Meer informatie over de Autoriteit.’ 53 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 57. 54 ibid, Art 36, para 1. 55 ibid, Art 36, para 2. 56 Jan Philipp Albrecht, ‘How the GDPR Will Change the World,’ European Data Protection Law Review 2, no. 3 (October 2016): 287–89. 57 Karin Axelsson, and Malin Granath, ‘Stakeholders’ Stake and Relation to Smartness in Smart City Development: Insights from a Swedish City Planning Project,’ Government Information Quarterly 35, no. 4 (October 2018): 693–702; Giuseppe Grossi, and Daniela Pianezzi, ‘Smart Cities: Utopia or Neoliberal Ideology?’ Cities 69 (September 2017): 79–85; Tassilo Herrschel, and Yonn Dierwechter, Smart Transitions in City Regionalism: Territory, Politics and the Quest for Competitiveness and Sustainability (London: Routledge, 2018); Helder Moura, and José Cardoso Teixeira, ‘Managing Stakeholders Conflicts,’ in Construction Stakeholder Management, eds. Ezekiel Chinyio, and Paul Olomolaiye (Hoboken: Blackwell Publishing Ltd., 2009) 286–316. 58 Ignasi Capdevila, and Matías Zarlenga, ‘Smart City or Smart Citizens? The Barcelona Case,’ Journal of Strategy and Management 8, no. 3 (August 2015): 266–282; Rob Kitchin, ‘Getting smarter about smart cities: Improving data privacy and data security,’ Data Protection Unit of the Department of the Taoiseach technical report, 2016. Accessed 7 August 2019. 59 Maša Galic, and Marc Schuilenburg, ‘Reclaiming the Smart City: Towards a New Right to the City,’ in Handbook of Smart Cities, ed Juan Carlos Augusto (Switzerland: Springer Nature AG, 2020).

Both SC versions then have fierce, natural proponents among the stakeholders: private business for the former, and citizens and civil society for the latter. We argue that the simplification of different stakeholders into unidimensional actors driven by irreconcilable goals and interests is injudiciously reductionist. The eventual SC will likely be neither a dystopia nor a utopia, but rather a set-up achieved through pragmatic compromise between stakeholders along the spectrum. In that sense, recognising that the interests of the various stakeholder groups are not homogeneous is of vital importance. Moreover, the interests of stakeholders might change depending on the particular subject matter under discussion. One is thus a stakeholder in a wide variety of processes and procedures rather than simply ‘a stakeholder in the SC.’ In this chapter, we aim to provide a deeper and more granular understanding of the particular interests that feature in the DPIA and of how these interests relate to one another in terms of salience. Hence, we argue that interests concerning the DPIA do not depend solely on individual stakeholder characteristics, but also on the stakeholder group that stakeholder belongs to in the specific context of a data processing operation. Any stakeholder involved in the data processing operation will belong to one of the stakeholder groups outlined in section II.B vis-à-vis that particular data processing. Inherent interests will be filtered through the lens of the legal role the stakeholder takes up. It is instructive to think of inherent interests as the interests that are fundamental to the type of stakeholder, eg for a private business that would be income generation or cost effectiveness. The stakeholder group to which the stakeholder belongs in the context of the data processing also comes with a set of interests; these are called data processing interests. For a data controller this could be diligent data management or transparency, while for a data processor this could be cost minimisation. To perform a meaningful interest analysis, the end result of the interaction between both types of interests has to be studied, rather than either category of interests in isolation. A concrete example can be found in contrasting the situation wherein a local public administration acts as the data controller of a certain processing activity with the one where the local public administration takes on the role of a processor. While the interests inherent to the local public administration remain the same, the data processing interests change. As a data controller, the administration has an incentive to perform the risk mapping of the DPIA very thoroughly, which might include performing audits of the data (sub-)processors. However, as a data processor, the same administration has a clear financial incentive to minimise the number of audits performed on its systems and processes. This section has highlighted the importance of the insights that this chapter sets out to gather. Two interlinked research questions will be answered:

1. RQ1: What are the different interests that feature during the DPIA?
2. RQ2: What is the salience of these identified interests?


D. Sustainable SC Development

The development of SC services is currently often managed top-down, even though there is a growing body of academic literature underlining the importance of more inclusive governance modes and a generally human-centric approach.60 Of course, the arguments in favour of inclusivity transcend the specific field of SC service data protection. Nonetheless, data protection offers a natural starting point for moving towards SC development that is more inclusive, more human-centric, and more sustainable, because the GDPR legally requires an assessment that balances business interests and the protection of fundamental rights, in the form of the DPIA.61 This anchoring in law increases the probability of a successful overhaul vis-à-vis situations where proponents of the human-centric approach are condemned to appeal solely on normative grounds. It could be expected that once stakeholders have been introduced to balancing these different values, there might be a spill-over into other domains. The attainment of smarter, more livable and more sustainable cities is a common goal of all stakeholders, but it is important that strengthened sustainability is reached in a durable, inclusive manner.

III. Methodology

To answer our research questions, we opt for a single case-study. Rather than zooming in on one particular data processing operation, our case study concerns the Flemish SC environment. Specifically, interviewees as well as AHP participants were asked to reason in general terms. This case selection allows us to transcend the idiosyncratic and to gather more generalisable, multi-purpose insights.62 Flemish municipalities predominantly indicate that they are just starting the SC transition process.63 Multiple rankings show that Flanders is lagging considerably behind some regions in neighbouring countries.64 However, in an EU context, the Flemish region represents the average case, which further safeguards the generalisability of our findings.

60 Stefano Andreani, Matteo Kalchschmidt, Roberto Pinto, and Allen Sayegh, ‘Reframing Technologically Enhanced Urban Scenarios: A Design Research Model towards Human Centered Smart Cities,’ Technological Forecasting and Social Change, 142 (May 2019): 15–25; Calzada, ‘Smart Citizens,’ 3252. 61 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35. 62 Jason Seawright, and John Gerring, ‘Case Selection Techniques in Case Study Research: A Menu of Qualitative and Quantitative Options,’ Political Research Quarterly 61, no. 2 (June 2008): 294–308. 63 Sustainable Development Flanders, ‘Smart Cities Barometer.’ 64 European Smart Cities, ‘The ranking’; IMD Smart City Observatory, ‘Smart city index 2020.’

In an initial step, we identify the various interests through 16 exploratory interviews with data protection experts, combined with insights from the academic literature. In a second step, we apply an analytical hierarchy process (AHP) with input from representatives of the different stakeholder groups. This allows us to perform a balancing exercise concerning the interests gathered in step one.

A. Interviews

As the DPIA is the responsibility of the data controller, the latter acts as the DPIA decision-maker. As this controller functions as a gatekeeper for other stakeholders and their respective interests, it is imperative to study their modi operandi. The DPO, who serves as the data protection expert within an organisation,65 is probably the most knowledgeable person on internal data protection procedures. It should be noted that the GDPR stipulates that the DPO is to be independent as well as well embedded within the decision-making bodies of the organisation.66 The A29WP prescribes that the DPO is not to actually conduct the DPIA, but rather to provide the necessary advice and guidance.67 Nonetheless, the lack of expertise within organisations arguably leads to a situation whereby the DPO becomes the main steering force behind the DPIA. Furthermore, the A29WP explicitly recommends that the DPO be consulted on, among others, the methodology for the assessment, and the GDPR states that the DPO has a control function with regard to any DPIA.68 Based on the above, the DPO can currently be conceived as embodying the data protection strategy of the data controller. However, this situation is subject to change as intra-organisational awareness and expertise grow. Additionally, as organisational expertise on data protection is scarce,69 it is useful to include the views of specialised consultants and researchers. WP 248 by the A29WP clearly indicates that a controller can choose to fully outsource the DPIA process, rendering the paid consultant the de facto DPIA decision-maker.70 It should be noted that the accountability ultimately remains with the data controller.71 Nonetheless, outsourcing the complete DPIA process signals a shortage of time and/or an absence of expertise within the organisation.

65 EDPS, ‘Data Protection Officer (DPO).’ 66 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 38. 67 Article 29 Working party, ‘Guidelines on Data Protection Officers (DPOs) (WP 243 rev.01).’ 68 Article 29 Working party, ‘Guidelines on Data Protection Officers (DPOs) (WP 243 rev.01)’; Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35, para 2. 69 Desdemoustier and Crutzen, ‘Etat des lieux sur la dynamique « Smart City » en Belgique’; Vandercruysse, Buts, and Dooms, ‘Data Control in Smart City Services,’ 554–560. 70 Article 29 Working party, ‘Guidelines on Data Protection Impact Assessment (DPIA) (WP 248).’ 71 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (General Data Protection Regulation), Art 35, para 1.

Consequently, an in-depth and critical assessment of the work performed by the consultant is often lacking. As required to answer research question RQ1, the semi-structured interview allows for the bottom-up gathering of noteworthy ideas and concepts.72 As a result, it serves as a basis on which to build our stakeholder interest analysis. The individual approach permits candid conversation concerning stakeholder goals without the interference and judgment of other stakeholders.73 Concretely, we contact three different types of specialists to optimise representativeness: DPOs working at public sector data controllers, DPOs working at private sector data controllers, and specialised (research) consultants. Specialists were stratified by their place of work because, while the legal roles provide the lenses through which stakeholders regard the DPIA, one could also expect public sector DPOs, private sector DPOs and consultants to have different starting points to which the appropriate lens is subsequently applied. City administrations, SCSP communities, renowned law offices, consultancy practices, and university knowledge centers were all within our scope. A total of 240 organisations were contacted by email with the question of whether their primary stakeholder group in the SC was either that of data controller or that of data protection consultant. If so, they were requested to delegate their DPO or, when applicable, their best placed consultant, to an interview. The vast majority of organisations indicated that their experience with the DPIA process was too limited to offer any useful insights. The positive response rate of only 6.67 per cent underlines the scarcity of knowledge on the subject and highlights the added value of this explorative study. The final sample consists of five DPOs working at city administrations, five DPOs working at private SCSPs, and six data protection consultants. Face-to-face interviews took place at the offices of the interviewees between March and August 2019, and lasted between 40 and 80 minutes. Subsequently, the interviews were transcribed verbatim. The interview guide can be found in Table a.1 in the Appendix. Since most interviews were conducted in Dutch, relevant quotes were translated into English by the authors.

B. AHP

As mentioned, the interests gathered during the interviews are then ranked by importance (for the consensus over stakeholder groups as well as for some stakeholder groups individually) through an AHP. The AHP was developed by Saaty around 1980 to aid in complex decision-making.74 While the tool is mostly used to rank different policy alternatives based on a set of predefined, scorable criteria, the AHP can also be utilised to determine the salience of various interests in a specific context.75

72 Ann Blandford, ‘Semi-Structured Qualitative Studies,’ in The Encyclopedia of Human-Computer Interaction, eds. Mads Soegaard and Rikke Dam (Denmark: Interaction Design Foundation, 2013). 73 Petra Hartmann, ‘Response Behavior in Interview Settings of Limited Privacy,’ International Journal of Public Opinion Research 7, no. 4 (December 1995): 383–390. 74 Thomas Saaty, The Analytic Hierarchy Process (New York: McGraw-Hill, 1980).

As one of the premier multi-criteria decision-making (MCDM) methods allowing for the incorporation of subjective qualitative data,76 the AHP is well-suited to answering research question RQ2. Following Chen and Wang (2010) and Saaty (2008),77 conducting an AHP in order to find a relative weighting of different interests comprises seven steps. First, the list of relevant interests is compiled, and related interests are grouped together in overarching themes. This is based on the results of the 16 expert interviews. Second, a survey is circulated asking respondents to perform pairwise comparisons between the importance of the themes of interests, as well as between the importance of individual interests belonging to the same theme. In particular, survey respondents attribute a score between 1 and 9 to each comparison; 1 meaning both (themes of) interests are of equal importance, 9 meaning the former of the (themes of) interests is absolutely more important than the latter. Table a.2 in the Appendix delves deeper into the concrete scoring mechanism. Third, result matrices are developed per respondent based on the scores: one for the comparison of the themes of interests, and one per theme for the interests within it. Fourth, the leading eigenvectors of the various result matrices are calculated and standardised to sum to one. The relative weights of the themes and individual interests can now be derived. Fifth, the individual result matrices are converted into result matrices at the level of each stakeholder group; this entails calculating result matrices consisting of the element-wise geometric means of the individual result matrices. The standardised leading eigenvectors again represent the respective relative weights. Sixth, since we are interested in a global ranking of interests, we calculate the result matrices of the geometric means of the result matrices at the stakeholder group level. This method is preferred over calculating arithmetic-average result matrices because the influence of the weight attributions of individual stakeholder groups is better represented; this allows for diverging and even conflicting stakeholder interests.78 Last, the consistency ratio (CR) per matrix is calculated.79 This ratio indicates the consistency of the various rankings. A ratio of up to 0.2 is considered to indicate a tolerably consistent ranking for grouped responses.80

75 Christian Tabi Amponsah, ‘Application of Multi-Criteria Decision Making Process to Determine Critical Success Factors for Procurement of Capital Projects under Public-Private Partnerships,’ International Journal of the Analytic Hierarchy Process 3, no. 2 (December 2011); Mohammad Mehregan, Mona Jamporazmey, Mahnaz Hosseinzadeh, and Mohsen Mehrafrouz, ‘Application of Fuzzy Analytic Hierarchy Process in Ranking Modern Educational Systems’ Success Criteria,’ International Journal of E-Education, e-Business, e-Management and e-Learning 1, no. 4 (October 2011): 299–304; Arash Shahin, and Ali Mahbod, ‘Prioritisation of Key Performance Indicators: An Integration of Analytical Hierarchy Process and Goal Setting,’ International Journal of Productivity and Performance Management 56, no. 3 (January 2007): 226–240. 76 Ramakrishnan Ramanathan, and L. S. Ganesh, ‘Energy Resource Allocation Incorporating Qualitative and Quantitative Criteria: An Integrated Model Using Goal Programming and AHP,’ Socio-Economic Planning Sciences 29, no. 3 (September 1995): 197–218.
77 Mingkuen Chen, and Shih-Ching Wang, ‘The Critical Factors of Success for Information Service Industry in Developing International Market: Using Analytic Hierarchy Process (AHP) Approach,’ Expert Systems with Applications 37, no. 1 (January 2010): 694–704; Thomas Saaty, ‘Decision Making with the Analytic Hierarchy Process,’ International Journal of Services Sciences 1, no. 1 (March 2008): 83. 78 Ernest Forman, and Kirti Peniwati, ‘Aggregating Individual Judgments and Priorities with the Analytic Hierarchy Process,’ European Journal of Operational Research 108, no. 1 (July 1998): 165–169; Wolfgang Ossadnik, Stefanie Schinke, and Ralf Kaspar, ‘Group Aggregation Techniques for Analytic Hierarchy Process and Analytic Network Process: A Comparative Analysis,’ Group Decision and Negotiation 25, no. 2 (March 2016): 421–457.
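For readers who wish to see the mechanics of steps three, four and seven made concrete, the following minimal Python sketch derives theme weights from a single respondent's pairwise comparison matrix and computes its consistency ratio. The judgment scores and theme labels are purely hypothetical illustrations, not our survey data.

```python
import numpy as np

# Hypothetical 1-9 Saaty scores for the four interest themes identified in
# this chapter (financial, competition/competitiveness, social, risk).
# Entry [i, j] states how much more important theme i is than theme j;
# the matrix is reciprocal by construction: A[j, i] == 1 / A[i, j].
themes = ["financial", "competition", "social", "risk"]
A = np.array([
    [1,   2,   1/5, 1/3],
    [1/2, 1,   1/7, 1/4],
    [5,   7,   1,   2  ],
    [3,   4,   1/2, 1  ],
])

# Steps three and four: the leading (principal) eigenvector, standardised
# to sum to one, yields the relative weights of the themes.
eigvals, eigvecs = np.linalg.eig(A)
lead = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, lead].real)
weights /= weights.sum()

# Step seven: the consistency index CI = (lambda_max - n) / (n - 1), divided
# by Saaty's random index RI for a matrix of size n, gives the consistency
# ratio CR; grouped responses with CR up to 0.2 are tolerated in this study.
n = A.shape[0]
random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}
CR = ((eigvals.real[lead] - n) / (n - 1)) / random_index[n]

for theme, w in zip(themes, weights):
    print(f"{theme:12s} {w:.3f}")
print(f"consistency ratio = {CR:.3f}")
```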

Over 300 organisations were directly or indirectly contacted to fill out the AHP survey. These organisations encompass all the different stakeholder groups identified in section II.B. Specifically, we approached: multiple umbrella organisations representing over 250 private companies active in SC projects, over 40 local public administrations, 16 specialised consultants or researchers, eleven civil society organisations as a proxy for the stakeholder group ‘citizens’, and five data protection authorities. At the beginning of the survey, organisations were asked to indicate their primary stakeholder group in the SC. They were asked to fill out the survey from that perspective. In the end, eighteen organisations filled out the survey in its entirety: eleven self-identifying data controllers, one self-identifying data processor, four self-identifying specialised consultants and researchers, one self-identifying civil society organisation, and one self-identifying data protection authority. In total, 32 organisations commenced the survey, of which fourteen dropped out. As participation was typically terminated just after the research scope and objectives were explicated, the dropout is likely the result of a lack of the necessary experience. Experiences in the interview process, as well as talks with the relevant umbrella organisations, taught us that only very few SCSPs and local public administrations had concluded a DPIA at the time of writing. The sample of eighteen, while limited, is surely defensible.81 Respondents probably represent the state of the art in the Flemish region.
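Steps five and six, which aggregate such individual responses into stakeholder-group and global rankings, can be sketched in the same fashion. The matrices below are again hypothetical; the point of the sketch is the element-wise geometric mean, which, unlike an arithmetic average, preserves the reciprocal structure of the judgments.

```python
import numpy as np

def weights_from_matrix(M):
    """Standardised leading eigenvector of a pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(M)
    lead = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, lead].real)
    return w / w.sum()

def geometric_mean_matrix(matrices):
    """Element-wise geometric mean of several judgment matrices.

    Unlike an arithmetic average, the geometric mean preserves the
    reciprocal property a_ij == 1 / a_ji in the aggregate matrix.
    """
    return np.exp(np.log(np.stack(matrices)).mean(axis=0))

# Hypothetical individual matrices from respondents in one stakeholder
# group, comparing just two themes for brevity.
respondents = [
    np.array([[1, 3],   [1/3, 1]]),
    np.array([[1, 5],   [1/5, 1]]),
    np.array([[1, 1/2], [2,   1]]),
]

# Step five: aggregate individuals into a stakeholder-group matrix.
group_matrix = geometric_mean_matrix(respondents)

# Step six: applying the same aggregation across the group-level matrices
# of all stakeholder groups would yield the matrix behind the global ranking.
print(weights_from_matrix(group_matrix))
```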

79 Thomas Saaty, and Liem Tran, ‘On the Invalidity of Fuzzifying Numerical Judgments in the Analytic Hierarchy Process,’ Mathematical and Computer Modelling 46, no. 7 (October 2007): 962–975. 80 Boaz Golany, and Moshe Kress, ‘A Multicriteria Evaluation of Methods for Obtaining Weights from Ratio-Scale Matrices,’ European Journal of Operational Research 69, no. 2 (September 1993): 210–220; Daniel Ho, Graeme Newell, and Anthony Walker, ‘The importance of property‐specific attributes in assessing CBD office building quality,’ Journal of Property Investment & Finance 23, no. 5 (October 2005): 424–444; William Wedley, ‘Consistency prediction for incomplete AHP matrices,’ Mathematical and Computer Modelling 17, no. 4–5 (February 1993): 151–161. 81 Ramin Gharisadeh Beiragh, Reza Alisadeh, Saeid Shafiei Kaleibari, Fausto Cavallaro, Sarfaraz Hashemkhani Zolfani, Romualdas Bausys, and Abbas Mardani, ‘An Integrated Multi-Criteria Decision Making Model for Sustainability Performance Assessment for Insurance Companies,’ Sustainability 12, no. 3 (February 2020): 789; Soumava Boral, Ian Howard, Sanjay K. Chaturvedi, Kristoffer McKee, and Vallayil Narayana Achutha Naikan, ‘An Integrated Approach for Fuzzy Failure Modes and Effects Analysis Using Fuzzy AHP and Fuzzy MAIRCA,’ Engineering Failure Analysis 108 (January 2020): 104195; Fikri Dweiri, Sameer Kumar, Sharfuddin Ahmed Khan, and Vipul Jain, ‘Designing an Integrated AHP Based Decision Support System for Supplier Selection in Automotive Industry,’ Expert Systems with Applications 62 (November 2016): 273–283; Nicolas Haber, Mario Fargnoli, and Tomohiko Sakao, ‘Integrating QFD for Product-Service Systems with the Kano Model and Fuzzy AHP,’ Total Quality Management & Business Excellence 31, no. 9–10 (July 2020): 929–954.


IV. Analysis

A. DPIA Interests

As stated, we conducted exploratory interviews with data protection experts to compile a comprehensive list of interests that feature in the DPIA. The questioning was purposely left open so as not to influence the specialists. Interests purely related to the form of the DPIA were excluded, as they fall outside the scope of this study. The results are displayed in Table 2.

Table 2  List of interests. Own creation based on interviews

Interest | Short description | Most mentioned by
Compliance | Making sure the company is compliant with the current legislation | SCSP DPO
Cost control | Limiting investment in the DPIA process | Cons
Data management | Making sure the data protection and data management policies are aligned | SCSP DPO
Efficiency | Maximising simplicity, and efficacy, of DPIA working processes and procedures | Cons
Generation of trust | Generating trust in citizens from within the organisation | City DPO
Income acquisition | Making sure the person performing the DPIA keeps an eye on the current product offering of the company and can signal if opportunities for new, complementary products would arise from, eg, the mapping of the new data flows | Cons
Limiting impact on service performance | Limiting the impact of data protection on the performance of the service | City DPO
Reputation building | Making sure potential business partners are aware of the organisation’s data protection efforts | City DPO/SCSP DPO
Risk management | Making sure data protection risks are limited to a level the organisation is willing to carry | Cons
Safeguarding competition | Making sure data protection requirements do not impact the ability of Smart City actors to compete | Cons/SCSP DPO
Safeguarding data security | Making sure the collected data is secure | City DPO/SCSP DPO

We now briefly explain each of these interests, in alphabetical order.

i. Compliance

A first DPIA interest is pure compliance.82 Compliance of the DPIA should be interpreted as the DPIA fulfilling all explicit legal demands; each demand can be thought of as a separate box, and a compliant DPIA ticks all these boxes.83 A legal data protection consultant considered this interest to be primordial.84 While this interest might sound like a conditio sine qua non for any business procedure, in practice there sometimes is a disparity. An interviewee explained that this interest stems from an effort to rectify the historical misalignment of data processes and IT infrastructures with data protection legislation:85 An important factor would be to just comply with the legal provisions of the GDPR. Actually straightening the nonconformity from a historical point of view.86

ii. Cost Control

The importance of limiting financial investment in the DPIA process was also regularly mentioned during the interviews. Financial costs can relate to allocating man-hours, hiring legal or technical expertise, procuring technical products, or instituting novel organisational measures.87 As private sector businesses are profit-maximising, cost considerations feature in all decision-making processes. A consultant noted that cost control is indeed important, but that limiting costs has to be evaluated on a case-by-case basis, as cost control is always relative to a reference price: A very important thing is indeed to take financial cost as a reasonable consideration. You just can’t ignore that. But what it costs exactly, that depends on the project.88

Additionally, cost control was interpreted by a private sector DPO as limiting the costs that spring from performing DPIAs that are not up to par, notably because of underestimations of risk or the use of very strict legal interpretations.89

82 Juhee Kwon, and Eric Johnson, ‘Health-Care Security Strategies for Data Protection and Regulatory Compliance,’ Journal of Management Information Systems 30, no. 2 (December 2014): 41–66. 83 Jesper Zerlang, ‘GDPR: A Milestone in Convergence for Cyber-Security and Compliance,’ Network Security 2017, no. 6 (June 2017): 8–11. 84 Cons 1, Interview, 24 April 2019. 85 Capgemini, ‘Championing Data Protection and Privacy: a source of competitive advantage in the digital century,’ Report, 2019. Accessed 10 January 2020. 86 City 2, Interview, 21 March 2019. 87 Bergkamp, ‘EU Data Protection Policy,’ 31–47; Data Protection Commission, ‘Guide to Data Protection Impact Assessments (DPIAs).’ 88 Cons 3, Interview, 22 July 2019.


iii. Data Management

Third, the experts refer to the alignment of data protection and data management policies.90 The DPIA process offers an opportunity to reflect on new data streams in a very systematic way. This allows for a rationalisation of data management procedures.91 A private sector DPO stated that this interest is the most salient one in the current era of the ever-growing data realm: Certainly for the private sector, the use of a lot of Cloud and SaaS solutions makes your data management a huge exercise and it is difficult to restrain that growth. One of the biggest challenges for me today is just keeping data flows and data under control.92

It is important to note that there are benefits to be had from doing the DPIA with an eye on data management. Smart, fact-based data management facilitates efficient resource allocation because all data is secured in an optimal way, ie not ‘over-secured’.93

iv. Efficiency

Efficiency has to be interpreted as dealing with data protection in a simple, methodical way. While efficiency can have a financial component,94 it should be seen more broadly as maximising the simplicity, and efficacy, of working processes and procedures while still reaching the DPIA objectives.95 The concern about data protection, and the DPIA, becoming a hindrance for staff is shared across sectors.96 A public sector DPO mentioned weighing legal needs against practical implications: I mainly think that the practical in relation to the purely legal is a factor to be reckoned with. Pragmatism is very important to me, also in terms of budget.97

89 Rok Bojanc, and Borka Jerman-Blažič, ‘An economic modelling approach to information security risk management,’ International Journal of Information Management 28, no. 5 (October 2008): 413–422; SCSP 4, Interview, 18 March 2019. 90 Nik Thompson, Ravi Ravindran, and Salvatore Nicosia, ‘Government Data Does Not Mean Data Governance: Lessons Learned from a Public Sector Application Audit,’ Government Information Quarterly 32, no. 3 (July 2015): 316–322. 91 Martin Horák, Václav Stupka, and Martin Husák, ‘GDPR Compliance in Cybersecurity Software: A Case Study of DPIA in Information Sharing Platform,’ in Proceedings of the 14th International Conference on Availability, Reliability and Security, 36:1–36:8, 2019. 92 SCSP 3, Interview, 27 March 2019. 93 Cons 2, Interview, 16 July 2019. 94 EDPB, ‘EDPB Guidelines 4/2019 on Article 25.’ 95 SCSP 4, Interview, 18 March 2019. 96 Jillian Oderkirk, Elettra Ronchi, and Niek Klazinga, ‘International comparisons of health system performance among OECD countries: Opportunities and data privacy protection challenges,’ Health Policy 112, no. 1–2 (September 2013): 9–18. 97 City 3, Interview, 16 April 2019.

Data protection consultants are primarily concerned with ‘sensible’ efficiency. More particularly with regard to the DPIA, scoping and project management are vital to handling the DPIA process efficiently.98 Efficiency can thus also be interpreted as dealing with the DPIA obligation in a methodical way: I think efficiency is very important. You have to get the right profiles together from the start. You have to establish the right rhythm to review the DPIA. One of the biggest problems I see in doing DPIAs is the lack of a plan, there is no thought-out approach.99

v. Generation of Trust

Next, there is an interest in generating trust in SC projects.100 As trust is generated through interaction, most respondents consider this to be part of their awareness and communication campaigns.101 Trust is emitted from within the organisation; the DPIA process should thus be used as a lever to instil a data protection mindset in staff.102 One public sector DPO stated exactly that: To me, awareness among employees is most important. People who think about the concepts of data protection and privacy; that is something that has really taken off lately.103

Furthermore, the DPIA effort should not end when the document is ‘finalised’. Transparency is a central principle in the GDPR, and a data protection policy should abide by it.104 Interviewees plan to communicate clearly about their efforts, also regarding the DPIA. Citizens should know what their data is used for and what the ensuing risks are. Transparency induces trust. This interest has gained importance due to the currently perceived ‘Big Brother’ society.105 Flemish citizens regard the SC as a potential privacy threat.106 Multiple respondents mentioned that once trust is gone, it is very hard to win it back. It is clear that a lack of communication is detrimental to the survival of the SC. One private sector DPO put it strikingly: It is very important to persuade people that our services are not terrifying.107

98 Cons 2, Interview, 16 July 2019. 99 Cons 3, Interview, 22 July 2019. 100 Hsiaoping Yeh, ‘The Effects of Successful ICT-Based Smart City Services: From Citizens’ Perspectives,’ Government Information Quarterly 34, no. 3 (September 2017): 556–565. 101 City 2, Interview, 21 March 2019. 102 David Wright, ‘Making Privacy Impact Assessment More Effective,’ The Information Society 29, no. 5 (October 2013): 307–315. 103 City 2, Interview, 21 March 2019. 104 David Wright, ‘The State of the Art in Privacy Impact Assessment,’ Computer Law & Security Review 28, no. 1 (February 2012): 54–61. 105 SCSP 2, Interview, 22 March 2019; Cons 6, Interview, 28 August 2019. 106 imec-SMIT-VUB & imec.Living.Labs, ‘imec.smartcitymeter 2019.’ 107 SCSP 2, Interview, 22 March 2019.


vi. Income Acquisition

A sixth interest is the possibility of acquiring income as a result of the DPIA exercise. While a DPIA is initially costly, a well-designed DPIA could also indirectly generate income.108 Mapping novel data flows might lead to the discovery of new, complementary products that the data controller can then commercialise. However, the potential for revenue generation depends on the profile of those performing the DPIA as well as on the time available to investigate all options.109 This is a contingency that, at this time, is often not met in practice. Most businesses thus tend to see the DPIA process as a pure cost: I see the GDPR as a necessary evil instead of something that actually generates income.110

vii.  Limiting Impact on Service Performance

Pragmatism featured frequently during the interview series. Having a data protection policy and conducting a required DPIA is important. For most interviewees, however, data protection does not constitute their core business. A service is designed to perform a function, and the obligatory DPIA is another hurdle to clear.111 The idea that the central functions of the service are to be preserved and that data protection considerations should not unnecessarily impede business goals is a recurring thought.112 Even though substantial efforts and investments are required for the DPIA, the core business remains central. A well-thought-out GDPR strategy might partly resolve the tension between service performance and data protection.113 Service performance does not equal service quality, as one interviewee stated:

Realizing data protection and DPIAs in a good way is a quality, because you give the people involved a higher level of privacy.114

viii.  Reputation Building

An interest related to the competitiveness of SC stakeholders in the data economy is the development of a data protection reputation.115

108 Cons 4, Interview, 30 July 2019. 109 Cons 3, Interview, 22 July 2019. 110 SCSP 5, Interview, 10 April 2019. 111 Dawn Song, Elaine Shi, Ian Fischer, and Umesh Shankar, ‘Cloud Data Protection for the Masses,’ Computer 45, no. 1 (January 2012): 39–45. 112 Thomas Lenard, and Paul Rubin, ‘In Defense of Data: Information and the Costs of Privacy,’ Policy & Internet 2, no. 1 (January 2010): 143–177; City 2, Interview, 21 March 2019. 113 SCSP 2, Interview, 22 March 2019. 114 Cons 5, Interview, 13 August 2019. 115 Charles Fombrun, and Mark Shanley, ‘What’s in a name? Reputation building and corporate strategy,’ Academy of Management Journal 33, no. 2 (June 1990): 233–248; Wright, ‘The State of the Art,’ 54–61.

We interpret reputation building as making sure potential business partners are aware of the organisation’s data protection efforts. Evidently, a strong reputation for data protection can induce citizen trust, but building a reputation is a broader concept.116 For example, an established data controller publishing a full DPIA report on its website could garner trust from citizens who perceive this as very transparent, while business associates might judge the procedures, positively or negatively, on a more technical level. Alternatively, investing in two-factor authentication software might not necessarily prompt higher trust levels among citizens, but could boost the organisation’s reputation with potential collaborators.

ix.  Risk Management

A substantial part of the DPIA process is about mapping and mitigating uncovered risks of the data processing activity.117 Risk management in this context is to be understood as an individual interest. While risk management efforts cover fields beyond data protection, for data controllers and processors the DPIA constitutes an integral part of risk management.118 The DPIA could be used as a tool to establish the optimal level of risk for the controller or processor. A data protection consultant stated this to be the foremost criterion:

The most important criterium is risk reduction to a level that is acceptable for the organisation, to result in an acceptable level of risk for the organisation.119

x.  Safeguarding Competition

Competition is crucial in a free market economy.120 Data protection obligations can be perceived as additional hurdles to competition for data processing services.121 These markets are already notoriously concentrated.122 The DPIA, being a quite intensive and costly exercise, can present an obstacle for some small or new providers.123

116 Luis Casaló, Carlos Flavián, and Miguel Guinalíu, ‘The role of security, privacy, usability and reputation in the development of online banking,’ Online Information Review 31, no. 5 (October 2007): 583–603; City 4, Interview, 23 April 2019. 117 Article 29 Working Party, ‘Guidelines on Data Protection Impact Assessment (DPIA) (WP 248).’ 118 Pierre Trudel, ‘Privacy Protection on the Internet: Risk Management and Networked Normativity,’ in Reinventing Data Protection?, eds Serge Gutwirth, Yves Poullet, Paul De Hert, Cécile de Terwangne, and Sjaak Nouwt (Dordrecht: Springer, 2009) 317–334. 119 Cons 2, Interview, 16 July 2019. 120 John Lipczynski, John Wilson, and John Goddard, Industrial organisation: competition, strategy, policy (London: Pearson Education, 2005). 121 Barbara Engels, ‘Data Portability among Online Platforms,’ Internet Policy Review 5, no. 2 (June 2016). 122 City 3, Interview, 16 April 2019; Cons 5, Interview, 13 August 2019. 123 Cons 3, Interview, 22 July 2019.

It is possible to instrumentalise the DPIA to safeguard competition by providing some leeway as well as clear guidelines. A city DPO illustrated this as follows:

We actually have a policy that aims to attract start-ups, avoids extensive order books, and uses a shortened tendering procedure for all IT purchases. We try to lower the threshold.124

Additionally, the DPIA can be used as a signal from the SCSP demonstrating that its product is safe and that data risks have been well considered.125 Such a signal, optionally combined with a recognised certification scheme, could stimulate competition.126

xi.  Safeguarding Data Security

Three quarters of the specialists indicate safeguarding data security as an interest. Data security is important in a broad sense, regarding data protection as well as the DPIA process.127 Its significance was stressed in multiple interviews:

I think everything starts and ends with data security.128

Additionally, a 2019 citizen survey in Flanders and Brussels reveals data security as one of the most important considerations for citizens in the SC.129 Central to the IT infrastructure, data security constitutes the backbone of a good data protection policy.130 Performing a DPIA without properly securing the data is futile.131 A more specific consideration related to data security was access control: during the DPIA, access control procedures could be optimised.132

B.  DPIA Themes

It can be argued that some of the interests in section V.A are more closely related than others. Consequently, we opted to group connected interests into overarching themes. This grouping constitutes a useful overview and is necessary for the two-layered AHP ranking process in section VI.

124 City 4, Interview, 23 April 2019. 125 Wright, ‘Making Privacy Impact Assessment More Effective,’ 307–315. 126 Cons 2, Interview, 16 July 2019. 127 EDPS, ‘Information security’; Minne Van der Haak, Astrid Corinna Wolff, Ralf Brandner, Peter Drings, Michael Wannenmacher, and Thomas Wetter, ‘Data Security and Protection in Cross-Institutional Electronic Patient Records,’ International Journal of Medical Informatics 70, no. 2 (July 2003): 117–130. 128 Cons 3, Interview, 22 July 2019. 129 imec-SMIT-VUB & imec.Living.Labs, ‘imec.smartcitymeter 2019.’ 130 Case C-293/12, Digital Rights Ireland, ECLI:EU:C:2014:238 (2014); SCSP 3, Interview, 27 March 2019. 131 Kitchin, ‘Getting smarter about smart cities’; City 1, Interview, 14 March 2019. 132 SCSP 5, Interview, 10 April 2019.

Four main themes can be discerned: i) financials; ii) competition/competitiveness; iii) social; and iv) risk. The financials theme consists of three underlying interests. The common denominator is the relation to the financial situation of the organisation conducting the DPIA. Income acquisition and cost control go to the raison d’être of private business. Efficiency, though broader than just financial, relates to the avoidance of indirect costs. The theme comprising limiting the impact on service performance, reputation building and safeguarding competition is named competition/competitiveness. The former two interests concern the competitiveness of individual organisations: having a more performant service renders an organisation more competitive, just as having a better reputation does. The latter interest goes to managing broader market dynamics. Trust generation is classified under the social theme, as generating trust requires widespread internal and external interaction. Last, the risk theme encompasses four different interests: compliance, data management, risk management, and safeguarding data security. Following Gellert (2018), we consider compliance to be intrinsically related to risk in the sense that, although GDPR principles are not scalable, their implementation is to some extent.133 In practice, compliance levels do vary, and different compliance levels entail their own level of legal risk. In short, data management goes to managing operational risk. The risk management interest has to be interpreted as establishing the optimal level of overarching risk for the organisation. Evidently, safeguarding data security concerns technical risk. An overview of the composition of the themes is presented in Table 3.

Table 3  Four DPIA themes and their composition. Own creation based on interviews

Theme: Financials
Interests: Cost control; Efficiency; Income acquisition
Justification: Inherent financial component

Theme: Competition/competitiveness
Interests: Limiting impact on service performance; Reputation building; Safeguarding competition
Justification: Primarily impact competition for SC services, either directly or through effect on competitiveness of individual actors

Theme: Social
Interests: Trust generation
Justification: Communication and awareness strategies

Theme: Risk
Interests: Compliance; Data management; Risk management; Safeguarding data security
Justification: Risk management, either legally or technically

133 Gellert, ‘Understanding the Notion of Risk in the General Data Protection Regulation,’ 279–288.


C.  Coming to a Consensus

i.  The General Consensus

Eighteen individual respondents representing all five DPIA stakeholder groups identified in section II.B participated in the AHP process. While the sample is limited, it suffices for the explorative purposes of this study.134 The DPIA is indeed a relatively novel specialist process and knowledge on the subject is limited. Furthermore, a higher number of respondents does not necessarily yield a better representation of the subject, as the quality of respondents prevails in an AHP process. To find the general consensus weights for each interest, grouped responses per stakeholder group were weighted equally. We thus model an ideal situation wherein stakeholder groups are thoroughly consulted and their remarks are considered carefully; we find this situation to be the most insightful baseline for further research. In Table 4, the general consensus weights are displayed (a computational sketch of the underlying AHP mechanics follows the table).

Table 4  Results global AHP. Own creation based on AHP analysis

Theme: Financials (weight 10%)
Interests: Cost control 1%; Efficiency 6%; Income acquisition 3%

Theme: Competition/competitiveness (weight 8%)
Interests: Limiting impact on service performance 2%; Reputation building 4%; Safeguarding competition 1%

Theme: Social (weight 32%)
Interests: Generation of trust 32%

Theme: Risk (weight 50%)
Interests: Compliance 6%; Data management 7%; Risk management 15%; Safeguarding data security 22%
Note: CRs of all matrices are
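For readers unfamiliar with AHP, the following is a minimal sketch of how such consensus weights and consistency ratios (CRs) are conventionally computed, assuming the standard eigenvector method. The chapter does not publish its comparison matrices or tooling, so the matrix, the variable names and the ‘ahp_weights’ helper below are purely illustrative.

    # Illustrative AHP sketch in Python (not the study's actual data or code).
    import numpy as np

    # Saaty's random consistency indices for matrix sizes 3-5.
    RI = {3: 0.58, 4: 0.90, 5: 1.12}

    def ahp_weights(matrix):
        """Priority vector and consistency ratio (CR) of a reciprocal
        pairwise comparison matrix, via the principal eigenvector."""
        n = matrix.shape[0]
        eigvals, eigvecs = np.linalg.eig(matrix)
        k = np.argmax(eigvals.real)              # principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                             # normalise weights to sum to 1
        ci = (eigvals[k].real - n) / (n - 1)     # consistency index
        return w, ci / RI[n]                     # CR = CI / RI; < 0.10 is conventionally acceptable

    # Hypothetical theme-level comparison matrix (rows/columns: financials,
    # competition/competitiveness, social, risk), chosen only so that the
    # resulting weights are of the same order as those in Table 4.
    M = np.array([
        [1,   2,   1/4, 1/5],
        [1/2, 1,   1/4, 1/6],
        [4,   4,   1,   1/2],
        [5,   6,   2,   1  ],
    ])
    w, cr = ahp_weights(M)
    print(np.round(w, 2), round(cr, 3))  # weights near 10%, 7%, 31%, 52%; CR well below 0.10

    # Equal weighting of the five stakeholder groups could be realised by an
    # element-wise geometric mean of the groups' matrices before this step, e.g.
    #   consensus = np.prod(np.stack(group_matrices), axis=0) ** (1 / len(group_matrices))
    # (a common AHP aggregation choice; the chapter does not specify its procedure).

In the two-layered ranking, a global interest weight is the theme weight multiplied by the interest’s local weight within its theme. Inverting Table 4 for the risk theme (50%) gives local weights of 12%, 14%, 30% and 44%; for instance, safeguarding data security comes to 0.50 × 0.44 = 0.22.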