The Curious Case of Usable Privacy: Challenges, Solutions, and Prospects (Synthesis Lectures on Information Security, Privacy, and Trust) [2024 ed.] 303154157X, 9783031541575

This book journeys through the labyrinth of usable privacy, a place where the interplay of privacy and Human-Computer Interaction (HCI) reveals a myriad of challenges, solutions, and new possibilities.


English, 180 pages, 2024


Table of contents :
Preface
Contents
Acronyms
1 Introduction to Usable Privacy
1.1 Introduction
1.2 Why Privacy and Usable Privacy Matter
1.2.1 Privacy as a Fundamental Right
1.2.2 The Need for Usable Privacy Enhancing Technologies
1.3 Aims and Scope of This Book
1.4 Defining the Terms and Concepts
1.4.1 Privacy and Data Protection
1.4.2 The GDPR and Other Laws
1.4.3 GDPR Roles and Concepts
1.4.4 Technical Data Protection Goals and Terms
1.4.5 Privacy by Design
1.4.6 Privacy-Enhancing Technologies (PETs)
1.4.7 Human-Computer Interaction (HCI)
1.4.8 Usability
1.4.9 Human-Centred Design (HCD)
1.4.10 Usable Privacy
1.4.11 Metaphors
1.4.12 Mental Models
1.4.13 Nudging and Dark Patterns
1.5 Related Surveys and Books
2 Background: Privacy Laws and Technologies
2.1 Introduction
2.2 Laws for Privacy Protection
2.2.1 First Laws for Data Protection
2.2.2 The European Legal Privacy and Data Protection Framework
2.2.3 Further European Privacy Legislation
2.2.4 Privacy Legislation in Non-European Countries Including the USA
2.3 Technologies and Tools to Protect and Enhance Privacy
2.3.1 PETs to ``Minimise''
2.3.2 PETs to ``Hide''
2.3.3 PETs to ``Separate''
2.3.4 PETs to ``Aggregate''
2.3.5 PETs to ``Inform''
2.3.6 PETs to ``Control''
2.3.7 PETs to ``Enforce''
2.3.8 PETs to ``Demonstrate''
3 Overview of Usable Privacy Research: Major Themes and Research Directions
3.1 Introduction
3.2 Approach
3.2.1 Method
3.2.2 Delimitation and Further Work
3.3 Usable Privacy in the Context of IoT
3.3.1 Smart Home Devices
3.3.2 Wearables
3.3.3 Helping People Make Better Privacy Decisions in the Context of IoT
3.3.4 Gaps and Future Directions
3.4 Efforts Towards More Inclusive Privacy
3.4.1 Risk Factors Amplifying Privacy Risks of Marginalised People
3.4.2 Privacy-Protection Practices and Barriers to Effective Mechanisms
3.4.3 Recommendations for Better Privacy Protection
3.4.4 Gaps and Future Directions
3.5 Improving Privacy Through Usable Privacy for Developers
3.5.1 Developers' Barriers to Embedding Privacy
3.5.2 Developers and App Permissions
3.5.3 Privacy Views and Practices Based on Natural Conversations
3.5.4 Gaps and Future Directions
3.6 Adoption, Usability, and Users' Perceptions of PETs
3.6.1 Encryption
3.6.2 Anonymity
3.6.3 Differential Privacy
3.7 Towards Usable Privacy Notice and Choice and Better Privacy Decisions
4 Challenges of Usable Privacy
4.1 Introduction
4.2 Challenges of Conducting Usable Privacy Research
4.2.1 Challenge of Encompassing Different and Sometimes Specific Users
4.2.2 Prioritised and Conflicting Goals
4.2.3 Difficulty of Measuring the Right Thing and Privacy Paradox
4.2.4 The Issue of Ecological Validity
4.2.5 Specific Ethical and Legal Challenges
4.3 HCI Challenges Related to Privacy Technologies
4.3.1 Challenges of Explaining ``Crypto Magic'' and the Lack of Real-World Analogies
4.3.2 Challenges and Need to Cater for ``digital-World'' Analogies
4.3.3 Challenges of Usable Transparency-Enhancing Tools
4.4 HCI Challenges Related to Privacy Laws
4.4.1 The Discrepancy Between Privacy Laws and What People Need
4.4.2 Problems with Notice and Choice
5 Addressing Challenges: A Way Forward
5.1 Introduction
5.2 Human-Centred and Privacy by Design Approaches Combined
5.3 Encompassing Different Types of Users
5.3.1 Inclusive Design
5.3.2 Culture-Dependent Privacy Management Strategies and Privacy Profiles
5.4 Configuring PETs and Addressing Conflicting Goals
5.5 Privacy as a Secondary Goal—Attracting Users' Attention
5.5.1 Content, Form, Timing, and Channel of Privacy Notices
5.5.2 Engaging Users with Privacy Notices
5.6 Designing Usable Privacy Notices
5.6.1 Multi-layered Privacy Notices
5.6.2 Providing Usable Choices
5.6.3 Personalised Presentations
5.6.4 Visual Presentations
5.6.5 Informing Users About Policy Mismatches
5.6.6 Avoiding Dark Patterns
5.7 (Semi-)automated Privacy Management Based on Defaults and Dynamic Support
5.8 Explaining PETs
5.9 Usable Transparency and Control
5.10 Guidance for Mapping (GDPR) Privacy Principles to HCI Solutions
6 Lessons Learnt, Outlook, and Conclusions
6.1 Introduction
6.2 Key Takeaways
6.3 Outlook
6.4 Final Conclusions

Synthesis Lectures on Information Security, Privacy, and Trust

Simone Fischer-Hübner Farzaneh Karegar

The Curious Case of Usable Privacy Challenges, Solutions, and Prospects

Synthesis Lectures on Information Security, Privacy, and Trust

Series Editors
Elisa Bertino, Purdue University, West Lafayette, IN, USA
Elena Ferrari, University of Insubria, Como, Italy

The series publishes short books on topics pertaining to all aspects of the theory and practice of information security, privacy, and trust. In addition to the research topics, the series also solicits lectures on legal, policy, social, business, and economic issues addressed to a technical audience of scientists and engineers. Lectures on significant industry developments by leading practitioners are also solicited.

Simone Fischer-Hübner Department of Mathematics and Computer Science Karlstad University Karlstad, Sweden

Farzaneh Karegar Department of Information Systems Karlstad University Karlstad, Sweden

Department of Computer Science and Engineering Chalmers University of Technology Gothenburg, Sweden

ISSN 1945-9742  ISSN 1945-9750 (electronic)
Synthesis Lectures on Information Security, Privacy, and Trust
ISBN 978-3-031-54157-5  ISBN 978-3-031-54158-2 (eBook)
https://doi.org/10.1007/978-3-031-54158-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland. Paper in this product is recyclable.

To those whose rights to privacy and freedom have been taken away by oppressive regimes. May this book serve as a reminder that even in the darkest of times, the human spirit prevails and the fight for justice and basic human rights will never be silenced.

Preface

In this book, we welcome you to the labyrinth of usable privacy—a place where the interplay of privacy and Human-Computer Interaction (HCI) reveals a myriad of challenges, solutions, and new possibilities. The book is intended to illuminate the often shadowy corridors of this multifaceted domain by exploring usable privacy research, practices, and challenges. It also provides guidelines and solutions to address these challenges. The journey we are about to embark on does not simply focus on data protection or legislative frameworks, but also on what it takes for privacy to be safeguarded, understood, embraced, and easily practised by all.

Throughout this literary journey, we extend an open hand, inviting students who may be studying Computer Science, Information Systems, or Law, as well as researchers and practitioners working in the fields of usable privacy, privacy by design, Privacy-Enhancing Technologies (PETs), or HCI, to explore the book's objectives and aspirations and to gain knowledge of relevant terms and concepts. We deliberate upon the why—a question echoing through time and technological advancements—why does it matter? As we proceed, we explore the background of privacy tools and technologies, the evolution of privacy rules and regulations, and the backdrop upon which this narrative unfolds. But this is not merely documentation; it is an exploration of the past to inform the present and shape the future.

An important focus of our expedition is to explore the frontiers of usable privacy research—a landscape vibrant with innovation, resonating with topics ranging from the Internet of Things (IoT), the usability of PETs, and usable privacy for developers to the often-overlooked privacy narratives of marginalised communities. Here, we explore the complexities of user-centric privacy and envision a roadmap towards making this a reality.

Yet every great quest comes with ominous challenges. The chapter devoted to challenges acknowledges the hurdles that demand attention—the intricate issues related to conducting usable privacy research, HCI challenges, and problems concerning the ever-evolving world of privacy laws. To our knowledge, this is the first book that discusses and explores most of the issues associated with usable privacy research
at once. Nevertheless, challenges are fertile ground for innovation. Thus, we also provide a blueprint for addressing these hurdles and establishing pathways for a more privacy-conscious world. Our journey culminates in gleaning insights, distilling wisdom and lessons learnt, and looking to a future that inspires reflection—allowing the reader to form their own path in this evolving narrative. May this expedition leave you informed, inspired, curious, and empowered. Welcome to "The Curious Case of Usable Privacy."

Karlstad, Sweden
December 2023

Simone Fischer-Hübner Farzaneh Karegar


Acronyms

ABC  Attribute Based Credential
ACM  Association for Computing Machinery
ACM CCS  ACM Conference on Computer and Communications Security
AI  Artificial Intelligence
API  Application Programming Interface
BNT  Berlin Numeracy Test
CCPA  California Consumer Privacy Act
CHI  ACM Human Factors and Computing Systems Conference
CIA  Confidentiality, Integrity, Availability
COPPA  Children's Online Privacy Protection Rule
CPRA  California Privacy Rights Act
CSCW  ACM Conference on Computer-Supported Cooperative Work and Social Computing
CSP  Cloud Service Provider
DAD  Drag and Drop
DADA  Drag and Drop Agreement
DMA  Digital Markets Act
DP  Differential Privacy
DPbDD  Data Protection by Design and Default
DPIA  Data Protection Impact Assessment
DSA  Digital Services Act
EDPB  European Data Protection Board
EEA  European Economic Area
EHR  Electronic Health Record
ENISA  European Cybersecurity Agency
ETSI  European Telecommunications Standards Institute
EU  European Union
EuroUSEC  European Symposium on Usable Security
EU-US DPF  EU-US Data Protection Framework
E2EE  End to End Encryption
FCRA  Fair Credit Reporting Act
FE  Functional Encryption
FL  Federated Learning
FTC  Federal Trade Commission
GDPR  EU General Data Protection Regulation
GPG  GNU Privacy Guard
HCD  Human-Centred Design
HCI  Human-Computer Interaction
HE  Homomorphic Encryption
HIPAA  Health Insurance Portability and Accountability Act
HTTP  Hypertext Transfer Protocol
HTTPS  Hypertext Transfer Protocol Secure
ICT  Information and Communication Technology
IEEE  Institute of Electrical and Electronics Engineers
IFTTT  If-This-Then-That
IoT  Internet of Things
IP  Internet Protocol
IRB  Institutional Review Board
ISO  International Organization for Standardization
ISP  Internet Service Provider
IT  Information Technology
JITCTA  Just-In-Time Click Through Agreement
LED  Light Emitting Diode
LGBTQ+  Lesbian, Gay, Bisexual, Transgender, Queer or Questioning and more
LGPD  Lei Geral de Proteção de Dados (Brazil's General Data Protection Law)
LLM  Large Language Model
MHP  Medical Health Platform
ML  Machine Learning
MPC  Multi-Party Computation
NAT  Network Address Translation
NGO  Non-Government Organisation
OECD  Organization for Economic Co-operation and Development
P3P  Platform for Privacy Preferences
PA  Privacy Assistant
PbD  Privacy by Design
PET  Privacy-Enhancing Technology
PIA  Privacy Impact Assessment
PIPL  (China's) Personal Information Protection Law
POPI  (South Africa's) Protection of Personal Information (Act)
PPA  Personalized Privacy Assistant
PPL  PrimeLife Policy Language
ROI  Region of interest
SBO  Security Behavior Observatory
SCC  Standard Contractual Clause
SH-PPBs  Smart Home Privacy-Protecting Behaviours
SMPC  Secure Multi-Party Computation
SMS  Short Message Service
SoK  Systematization of Knowledge
SOUPS  Symposium on Usable Privacy and Security
SPA  Smart Home Personal Assistants
SSL  Secure Socket Layer
TET  Transparency-Enhancing Tool
TLS  Transport Layer Security
Tor  The onion router
TPE  Thumbnail-Preserving Encryption
UCD  User-Centred Design
UI  User Interface
URL  Uniform Resource Locator
US  United States
USA  United States of America
USEC  Workshop on Usable Security
UX  User eXperience
VPN  Virtual Private Network
VR  Virtual Reality
XACML  eXtensible Access Control Markup Language
XML  eXtensible Markup Language

1 Introduction to Usable Privacy

1.1 Introduction

Privacy is a basic human need and is considered a fundamental human right. In this introductory chapter, different definitions of privacy are presented, as well as the importance of privacy not only for individuals but for society as a whole. As we discuss in this chapter, usability must be considered an essential, integral part of protecting privacy. We further argue that usable Privacy-Enhancing Technologies (PETs) play a significant role in privacy protection. Several basic terms and concepts related to privacy, data protection, and usability are also introduced in this chapter, and we will refer to them throughout the book. We conclude by mentioning related textbooks and surveys on usable privacy and by showing how this book complements or extends them.

1.2 Why Privacy and Usable Privacy Matter

1.2.1 Privacy as a Fundamental Right

Privacy as a basic human need has been acknowledged for ages in different forms in different cultures. It also has historical roots in philosophical discussions from ancient times—in particular by Aristotle, who distinguished between the public sphere of politics and the private sphere of the family [1]. Also, rules for protecting privacy in different sectors of life have been in place for centuries (see also [3]). An example is the Hippocratic Oath, which dates back to Ancient Greece. It is an oath historically taken by physicians to swear to uphold specific ethical standards, including medical confidentiality. Another example is the seal of confession in the Catholic Church, which is the absolute duty of priests not to disclose anything learned during the course of the Sacrament of Confession. The secrecy
of letter correspondence is yet another instance of privacy protection, one that has been enshrined in laws in Europe for more than 300 years.

The very first definition of privacy by lawyers was given by Samuel Warren and Louis Brandeis in their famous article "The Right to Privacy" [4], which appeared in the Harvard Law Review in 1890. The two American lawyers defined privacy as "the right to be let alone" and argued that this right could be derived from the American common law. Privacy was also acknowledged by the European Convention on Human Rights, an international treaty to protect human rights and fundamental freedoms in Europe, drafted in 1950 by the Council of Europe. Article 8 of the European Convention on Human Rights states that "Everyone has the right to respect for his private and family life, his home and his correspondence".

One of the most common contemporary definitions of privacy is the one by Alan Westin from 1967 [6]. Privacy is defined by Westin as "the claim of individuals, groups and institutions to determine for themselves, when, how and to what extent information about them is communicated to others". A major concern at that time was the collection of personal information about citizens in databases on mainframe computers by the government, which were mainly needed for providing government services. According to Westin's definition, privacy is a claim by both natural persons (individuals) and legal persons (groups and institutions). In most countries, however, privacy is primarily regarded as a basic right to ensure individual autonomy and dignity, which is why this book focuses primarily on individual privacy.

Privacy has also been defined as the right to informational self-determination by the German Constitutional Court in its fundamental Census Decision of 1983. This right to informational self-determination is the right of individuals to make their own decisions regarding the disclosure and use of their personal data, and it was derived from the basic rights of human dignity and self-determination of the German Constitution (Grundgesetz). It is an expression of human dignity, which would be breached if persons were excessively monitored and profiled without limitations. Informational self-determination is not only an individual fundamental right but also an elementary prerequisite for the functioning of a free democratic society, as the German Constitutional Court elaborated back in 1983 [5]: "If it is not transparent to someone who else is knowing what about him/her and whether any deviant behaviour is monitored and registered, this person might try to behave so as not to attract attention. For example, if someone does not know if he/she is monitored while participating in a political meeting, he/she might prefer not to attend this meeting. This, however, would also affect the free democratic order, which is based on free actions and political collaboration of the citizens, who are freely expressing their opinions."

Nissenbaum [7] defined privacy as contextual integrity, the right to prevent information from flowing from one context to another: contextual integrity "ties adequate protection for privacy to norms of specific contexts", and requires that information processing and flows be appropriate to that context and meet the governing norms of distribution within that context.

Both privacy and data protection have been protected by the Charter of Fundamental Rights of the European Union since 2000. The EU General Data Protection Regulation (GDPR) has applied since 2018, with the objective of protecting the fundamental rights and freedoms of natural persons, and in particular their right to the protection of personal data, by modernising data protection rules. To this end, the GDPR particularly protects users in the global economy by safeguarding their rights when their data is processed by data controllers or processors (regardless of whether they are based inside or outside the EU) that offer goods and services to, or that monitor, individuals in the EU. Partly for this reason, the GDPR has since had a significant impact on the modernisation of data protection globally.

1.2.2 The Need for Usable Privacy Enhancing Technologies

In our networked and smart society, we cannot rely solely on laws to effectively protect privacy as a fundamental human right. There are constant reports about breaches of personal data by governments or businesses, including misuse on a large scale. Hence, privacy as a fundamental human right is increasingly at stake. Particularly, seemingly invisible personal big data collections through smart devices and data analytics based on Artificial Intelligence (AI) make it possible to derive highly sensitive personal information and to extensively profile individuals, which is not transparent to the individuals concerned. To be able to exercise their right to informational self-determination, individuals must be well informed about the extent, purposes, and consequences of the processing of their personal data. As pointed out by Patrick and Kenny, legal privacy principles have Human-Computer Interaction (HCI) implications as they "describe mental processes and behaviours that the data subject must experience to adhere to the principles" [8]. Consequently, the GDPR also requires privacy policy information to be presented to the individuals concerned (the so-called data subjects) in a "concise, transparent, intelligible and easily accessible form, using clear and plain language" (Art. 12 GDPR). It, however, remains a challenge to enforce usable privacy and transparency in practice. In contrast to this requirement, users are often "tricked" (e.g. via so-called "dark patterns") into disclosing more personal data than intended. Exercising the right to informational self-determination also requires easy-to-use controls that allow data subjects to exercise their rights to access, correct, delete, or export their data and to object to and limit the amount of personal data that is disclosed and processed about them. According to Cavoukian, the Privacy by Design (PbD) principle of Respect for Privacy extends to the need for User Interfaces (UIs) to be "human-centred, user-centric and user-friendly, so that informed privacy decisions may be reliably exercised" [2].

Privacy needs to be protected not only by law but also by PETs. PETs are in fact important building blocks for achieving data protection by design and default, which has been a legal principle of the GDPR since 2018. PETs that have been proposed or developed over more than the last 25 years as privacy self-protection and control tools are, however, not broadly in use, partly because the usable design of these tools remains a challenge for various reasons. The need for interdisciplinary expertise, which is often lacking, and the difficulty of mediating trust in the stated privacy functionalities of PETs built on counter-intuitive "crypto magic" have been posing major challenges in designing usable PETs. Moreover, the problem that privacy is often only a secondary goal for users presents another important challenge for the design of those types of PETs that require the users' attention for making informed decisions. Hence, PETs are still hardly adopted in practice, often misconfigured, and not adequately designed for usability. The usability of PETs can be crucial for PETs to be widely deployed and used, and thus for enabling technology-based privacy protection for individuals at a larger scale. In addition, usability is also essential for the correct and secure use and configuration of PETs and systems in general. The Verizon 2023 Data Breach Investigations Report showed that, as in previous years, the vast majority (74%) of breaches include a human element.1 User errors, including misconfigurations, therefore constitute a high privacy and security risk, not least because they can lead to serious data breaches with irreversible personal consequences and damages for individuals.

1.3 Aims and Scope of This Book

In the last 25 years, usable privacy has become an increasingly important topic for research [10] and in practice. This book explores the background and current state of usable privacy research and practices and the challenges for achieving usable privacy. Further, it provides guidelines and solutions for addressing such challenges. This book addresses students from different disciplines, including Computer Science, Information Systems, and Law, as well as researchers and practitioners working in the areas of usable privacy, privacy by design, PETs, or generally in HCI. This book contributes the following unique features:

• An introduction to legal privacy principles and PETs, providing legal and technical background knowledge for the usable privacy domain.
• An overview of the status quo of usable privacy research and its selected areas.
• A novel overview and exploration of HCI challenges concerning privacy.
• The state-of-the-art and novel approaches and guidance for usable privacy design.

1 https://www.verizon.com/business/resources/reports/dbir/.

1.4 Defining the Terms and Concepts

This section presents and defines basic terms and concepts related to privacy, data protection, and usability.

1.4.1 Privacy and Data Protection

Privacy has been defined in different ways, as presented above. In particular, it has been defined as the right to be let alone [4], as the right to informational self-determination [5, 6], or in terms of contextual integrity [7]. Both privacy and data protection have been protected and recognised as two separate rights by the Charter of Fundamental Rights of the European Union since 2000, which brings together in a single document the fundamental rights protected in the EU. Its Article 7—Respect for private and family life—protects privacy by requiring that "Everyone has the right to respect for his or her private and family life, home and communications." Article 8 guarantees the protection of personal data, requiring that (1) everyone has the right to the protection of personal data concerning him or her; and (2) such data must be processed fairly for specified purposes and based on the consent of the person concerned or some other legitimate basis.

The concept of privacy has several dimensions, listed below, of which the first three are the ones most related to data protection:

• Informational Privacy, which requires rules controlling whether and how personal data can be gathered, stored, processed, or selectively disseminated.
• Privacy of communications, which requires confidentiality and integrity of traditional and electronic communications.
• Spatial privacy, which requires protecting the user's virtual space against intrusions, such as spam and hacking.
• Territorial privacy, which requires protecting the close physical area surrounding a person at home and in other environments such as the workplace or public space.
• Bodily privacy, which requires protecting individuals' physical selves against invasive procedures such as physical searches and drug testing.

While privacy goes beyond data protection, the terms are often used interchangeably. In the rest of the book, for the sake of simplicity, we may also use the term privacy even if we rather refer to the narrower concept of data protection.

1.4.2 The GDPR and Other Laws

The GDPR, the EU General Data Protection Regulation, was adopted by the European Parliament and the European Council on 27 April 2016 and became applicable on 25 May 2018, replacing the EU Data Protection Directive 95/46/EC. Its objective is to harmonise data protection laws across Europe, to improve the level of compliance, and to modernise data protection rules. In particular, the GDPR strengthens the rights of individuals and gives them more control, for instance by emphasising transparency and fair data processing as well as by reinforcing data subject rights. As pointed out above, the GDPR applies to any data controller or data processor, no matter where they are based, that processes the personal data of EU citizens. In addition, it restricts transfers of personal data to countries outside Europe with no "adequate" level of data protection. It was for these two reasons that the GDPR also had an impact on the global economy and led to the release and modernisation of data protection and privacy laws by numerous countries outside the European Union that wished to achieve an appropriate level of data protection or simply understood that strengthening data protection in the digital age was necessary. Important non-European data protection and privacy laws were released after the GDPR entered into force by countries, including Japan and South Korea, that are meanwhile recognised by the EU Commission as providing adequate protection in relation to the GDPR (see the Commission's website2). The California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) are further examples of recently enacted modernised privacy laws. Further details about the GDPR and other relevant laws and their main principles will be presented in Chap. 2.

1.4.3 GDPR Roles and Concepts

The GDPR defines in its Art. 4 important data protection concepts and roles, to which we will also refer in this book, including the following.

• Personal data, which means any information relating to an identified or identifiable natural person (data subject) (Art. 4 (1)). Personal data can also belong to the class of "special categories of data", which are defined in Art. 9 (1) GDPR as "personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation", for which data processing restrictions and stricter rules are defined by the GDPR.

2 https://commission.europa.eu/law/law-topic/data-protection/international-dimension-data-protection/adequacy-decisions_en

There are different types of personal data that are not directly defined in the GDPR but to which different rules may still apply, including data explicitly disclosed by the data subject (e.g. name, delivery address), data implicitly disclosed by the data subject including metadata (e.g. IP address, MAC address, cookies, location data, traffic data), derived data (e.g. user behavioural profiles), and third-party provided data (e.g. reputation scores).

• Data Processing, which means any operation(s) performed on personal data, such as "collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction" (Art. 4 (2)).

• Consent, which is a freely given, specific, informed and unambiguous indication of the data subject's wishes, by a statement or by clear affirmative action, which signifies agreement to the processing of personal data relating to him or her (Art. 4 (11)). The GDPR also includes the concept of "explicit consent", which may be a legal basis for the processing of special categories of data, for data transfers to third countries outside Europe, or for automated individual decision-making, including profiling. According to the European Data Protection Board [11], "explicit" refers to the way consent is expressed by the data subject, i.e. the data subject must give an express statement of consent, e.g., if appropriate, via a signed written statement (see also the illustrative sketch after this list).

• Controller is the natural or legal person which, alone or jointly with others, determines the purposes and means of the processing of personal data (Art. 4 (7)).

• Processor is a natural or legal person which processes personal data on behalf of the controller (Art. 4 (8)).
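As a purely illustrative aid, not part of the GDPR text or of this book's cited sources, the following minimal Python sketch shows how an application might record the elements of consent listed above. The class and field names are hypothetical, and whether a given record actually constitutes valid consent is a legal question that code alone cannot decide.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional, Tuple

@dataclass
class ConsentRecord:
    """Hypothetical record of one consent event (illustrative only).

    The fields loosely mirror the Art. 4 (11) elements: freely given,
    specific, informed, and given by an unambiguous affirmative action.
    """
    data_subject_id: str
    purposes: Tuple[str, ...]        # "specific": narrowly enumerated purposes
    notice_shown: str                # "informed": reference to the notice presented
    affirmative_action: str          # e.g. "checkbox_ticked"; never a pre-ticked default
    freely_given: bool               # e.g. not bundled with access to an unrelated service
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None  # withdrawal must be as easy as giving consent

    def is_active(self) -> bool:
        # A crude plausibility check, not a compliance test.
        return (self.freely_given
                and bool(self.purposes)
                and self.affirmative_action != "pre_ticked_box"
                and self.withdrawn_at is None)

# Example: a user ticks an (unticked) box to receive a newsletter.
record = ConsentRecord(
    data_subject_id="user-123",
    purposes=("newsletter",),
    notice_shown="privacy-notice-v3",
    affirmative_action="checkbox_ticked",
    freely_given=True,
)
print(record.is_active())  # True

Keeping such records also supports controllers in demonstrating compliance, a goal revisited in the overview of PETs to "Demonstrate" in Chap. 2.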

1.4.4 Technical Data Protection Goals and Terms

Technical privacy and data protection goals go beyond the classical security protection goals of Confidentiality, Integrity and Availability (CIA), and also include the goals of data minimisation (which can be achieved by anonymity, pseudonymity, unlinkability or unobservability), fairness and transparency of data processing for the data subjects, and intervenability by granting individuals the right to control their personal data and to intervene in data processing (see [12]). Data minimisation can be considered the most effective privacy strategy, as privacy is best protected if no data, or as little data as possible, are processed. Data minimisation can be achieved through:

• Anonymity, which is "the state of being not identifiable within a set of subjects (e.g. set of senders or recipients), the anonymity set" [13].

Anonymisation has been defined as a "process by which personal data is irreversibly altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party" [14]. The GDPR (or other data protection legislation) only applies to the processing of personal data and thus does not apply to anonymous data.

• Pseudonymity, which is the use of pseudonyms as identifiers [13]. Pseudonymity is related to anonymity as both concepts aim at protecting the real identity of a subject. However, pseudonymity usually allows us to maintain a reference to the subject's real identity, for example, to ensure accountability. The degree of protection provided by pseudonyms in online transactions depends on how often a pseudonym is (re-)used in various contexts/for various transactions. The best privacy protection can be achieved if, for each transaction, the user uses a new so-called transaction pseudonym [13] that is unlinkable to any other transaction pseudonyms and at least also initially unlinkable to any other personal information of that user (see also the sketch at the end of this subsection). The GDPR defines pseudonymisation in Art. 4 (5) as the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, which is kept separately in a safe manner. Hence, pseudonymous data is still considered personal data by the GDPR and needs to be protected.

• Unlinkability of two or more items of interest (e.g. subjects, messages, events) from an adversary's perspective means that within the system, the adversary cannot sufficiently distinguish whether these items are related or not [13]. Anonymity and unlinkability are directly related, e.g. data is anonymous if it cannot be linked to a data subject, and sender anonymity implies that a message cannot be linked to the sender.

• Unobservability, which ensures that a user may use a resource or service without others being able to observe that the resource or service is being used, and goes beyond anonymity [15]. Unobservability is stronger than anonymity. For example, if it is unobservable that a message was sent from Alice to Bob, then Alice is anonymous as a sender and Bob is anonymous as a recipient. However, Alice could be anonymous as the sender of a message that is still observable (even though it cannot be linked to her as a sender). Hence, unobservability implies anonymity, but not vice versa.

Data protection goals may also conflict with each other, as discussed by [12] and illustrated in Fig. 1.1. Solving such conflicts may be challenging. For example, transparency for patients about who has accessed their medical records may be achieved by keeping a log of all instances of access. However, such log files create additional personal data not only about the patients but also about the healthcare personnel accessing the data, thereby conflicting with the goal of data minimisation. These log data may be very sensitive; e.g. the type of medical specialist that has accessed a patient's medical record may reveal information about the patient's treatment and medical issues.

Fig. 1.1 Data protection goals and conflicts by Hansen, Jensen, and Rost [12]
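To make these data-minimisation concepts more tangible, here is a small illustrative Python sketch (not taken from the cited works or the GDPR itself): it generates a fresh transaction pseudonym per transaction, keeps the mapping to the real identity separately as Art. 4 (5) GDPR requires, and counts the size of the anonymity set that records share for a given quasi-identifier. All names and data are invented.

import secrets
from collections import Counter

# Pseudonymisation: a fresh random pseudonym per transaction. The lookup table
# is the "additional information" of Art. 4 (5) GDPR and would have to be kept
# separately and protected; here it is just a dictionary for illustration.
pseudonym_table = {}

def transaction_pseudonym(real_id):
    """Return a new random pseudonym for one transaction.

    Each call yields an independent random value, so two transactions of the
    same person are unlinkable without access to pseudonym_table.
    """
    p = secrets.token_hex(8)
    pseudonym_table[p] = real_id
    return p

p1 = transaction_pseudonym("alice")
p2 = transaction_pseudonym("alice")
print(p1 != p2)  # True: the two transactions cannot be linked to each other

# Anonymity set: how many records are indistinguishable with respect to a
# quasi-identifier (here: zip code and year of birth).
records = [
    {"zip": "65188", "year_of_birth": 1980},
    {"zip": "65188", "year_of_birth": 1980},
    {"zip": "65188", "year_of_birth": 1980},
    {"zip": "65189", "year_of_birth": 1975},
]

def anonymity_set_sizes(rows, keys):
    """Size of each group of records sharing the same quasi-identifier values."""
    return Counter(tuple(r[k] for k in keys) for r in rows)

print(anonymity_set_sizes(records, ("zip", "year_of_birth")))
# Counter({('65188', 1980): 3, ('65189', 1975): 1}) -> the last record is unique

The last record forms an anonymity set of size one, i.e. its data subject is effectively identifiable from the quasi-identifier alone, which is why pseudonymised or supposedly "anonymous" data may still be personal data.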

1.4.5 Privacy by Design

In the late 1990s, Ann Cavoukian coined the term Privacy by Design (PbD), which she defined through seven foundational principles. PbD provides an approach for considering privacy from the very beginning of and throughout the entire system development process [2]. PbD goes beyond technical solutions and addresses organisational procedures and business models as well [16]. The seven "foundational principles" of PbD defined by Cavoukian are the following:

1. Proactive not reactive: PbD measures should be preventive rather than remedial.
2. Privacy as the default setting: Maximum privacy should be delivered by default (see also the sketch at the end of this subsection).
3. Privacy embedded into design: PbD must be embedded into the design and architecture of systems and business practices.
4. Full functionality, positive-sum, not zero-sum: PbD strives to accommodate all legitimate interests and objectives in a positive-sum "win-win" manner, avoiding any unnecessary trade-offs.
5. End-to-end security: full privacy protection during the entire life cycle of data processing.
6. Visibility and transparency: system parts and operations remain visible and transparent to both users and providers, and verifiable.
7. Respect for user privacy: "keep it user-centric" by offering measures such as strong privacy defaults, appropriate privacy notices, and empowering usable options.

Especially the seventh foundational principle of "Respect for user privacy—keep it user-centric" clearly defines usability as an integral part of PbD. In 2018, with the GDPR entering into force, privacy by design (or, more precisely, its variant of Data Protection by Design and Default (DPbDD)) became a legal requirement for data controllers.
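As a hedged illustration of the second principle, "privacy as the default setting" (and of the GDPR's data protection by default), the sketch below shows hypothetical application settings whose defaults are the most privacy-protective choice, so that the many users who never open a settings menu still receive maximum privacy. The field names and values are invented for this example.

from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Hypothetical settings whose defaults equal the most protective choice."""
    analytics_enabled: bool = False      # data collection is opt-in, never opt-out
    personalised_ads: bool = False
    location_sharing: bool = False
    profile_visibility: str = "private"  # not "public"
    retention_days: int = 30             # keep data only as long as strictly needed

# A user who never touches the settings gets the protective defaults.
defaults = PrivacySettings()
assert defaults.analytics_enabled is False and defaults.profile_visibility == "private"

# Anything beyond the default requires an explicit choice by the user,
# which an application could record, e.g. in a consent record as sketched in Sect. 1.4.3.
after_opt_in = PrivacySettings(analytics_enabled=True)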

1.4.6 Privacy-Enhancing Technologies (PETs)

PETs can be defined as technologies that enforce privacy principles in order to protect and enhance the privacy of users and/or of other individuals about whom personal data are processed (the so-called data subjects). They provide basic building blocks for achieving privacy by design. When the Dutch Data Protection Commissioner and the Privacy and Data Commissioner of Ontario coined the term "Privacy-Enhancing Technologies", and the abbreviation "PETs", in a report in 1995 [17], research and development of PETs had already started in the 1980s, when David Chaum developed the most fundamental "classical" PETs. These included data minimisation PETs based on cryptographic protocols for anonymous payment schemes or anonymous communication, for instance Mix Nets, the "conceptual ancestor" of today's Tor network, which routes pre-encrypted messages over a series of proxy servers, effectively hiding who is communicating with whom. During this century, research and development of PETs have progressed towards enforcing data minimisation and other data protection goals. The latter include Transparency-Enhancing Technologies (TETs), dashboards and (intervenability) tools for supporting users in exercising their data subject rights, and encryption tools for helping to keep personal data confidential (see also the overview in Sect. 2.3).
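The idea of routing "pre-encrypted messages over a series of proxy servers" can be illustrated with a toy layered-encryption sketch in Python (using the third-party cryptography package). Real mix networks and Tor use different and far more elaborate protocols, and here all keys sit in one process purely for the demonstration, so treat this only as an intuition for why each relay learns no more than the next hop.

# pip install cryptography  -- toy illustration of layered ("onion") encryption
from cryptography.fernet import Fernet

# One symmetric key per relay on the path; in reality each relay holds only its own key.
relay_keys = {name: Fernet(Fernet.generate_key()) for name in ("entry", "middle", "exit")}

def wrap(message, path):
    """Encrypt for the last relay first, then wrap each earlier relay's layer around it."""
    for relay in reversed(path):
        message = relay_keys[relay].encrypt(message)
    return message

def route(onion, path):
    """Each relay peels off exactly one layer; the plaintext appears only after the last hop."""
    for relay in path:
        onion = relay_keys[relay].decrypt(onion)
    return onion

path = ["entry", "middle", "exit"]
onion = wrap(b"hello, recipient", path)
assert route(onion, path) == b"hello, recipient"
# The entry relay can remove only its own layer: it sees neither the message
# nor the final recipient, which is what hides who is communicating with whom.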

1.4.7 Human-Computer Interaction (HCI)

As the term HCI implies, humans, computer systems, and the interaction between them are the three pillars of HCI. HCI is thus about studying the interaction between humans and computer systems. Generally, HCI is focused on understanding why and how interactions happen, as well as on improving interactions through better design. In other words, a large part of HCI research studies how to design computer systems, for example user interfaces, that support people in accomplishing their tasks. Such computer systems, therefore, should be usable.
Thus, besides people, computers, and their interactions, HCI has another focus: usability (see Sect. 1.4.8).

1.4.8 Usability

Usability is defined by ISO 9241-11 as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." For achieving usability, heuristics (e.g. general principles or "rules of thumb") for user interface design were defined by HCI experts [18, 19]. In particular, Jakob Nielsen's 10 usability heuristics [18], which we list below, are broadly applied and referred to when designing or evaluating systems:

1. Visibility of system status: The design should keep users informed about what is happening by providing appropriate feedback within a reasonable amount of time.
2. Match between system and the real world: The system design should "speak the users' language" and use words, phrases, and concepts (including metaphors) that are familiar to the user, instead of using technical expert expressions.
3. User control and freedom: Systems should support undo and redo and provide clearly marked ways to exit an interaction entered by mistake.
4. Consistency and standards: The learnability of a user interface can be improved by internal consistency (within a single product or product series) and external consistency with established industry conventions.
5. Error prevention: The best designs carefully prevent errors and problems from occurring in the first place. Error-prone conditions should be eliminated, or the system should check and detect them and then present users with a confirmation option before they commit to the action.
6. Recognition rather than recall: The user's memory load should be minimised by letting users recognise information in the interface by making objects, actions, and options visible (rather than having users remember ("recall") it).
7. Flexibility and efficiency of use: Allow users to tailor frequent actions.
8. Aesthetic and minimalist design: Content and visual design of user interfaces should focus on the essentials. Unnecessary elements and content may distract users and should not be included.
9. Help users recognise, diagnose, and recover from errors: Error messages should be expressed in plain language, accurately indicate the problem, and constructively propose a solution.
10. Help and documentation: Help and documentation may be needed to help users understand how to complete their tasks and should be easy to search and focused on the user's task.

1.4.9 Human-Centred Design (HCD)

According to ISO 9241-210:2019(E), “Human-centred design is an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques”. This approach enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility, and sustainability; and counteracts possible adverse effects of use on human health, safety, and performance. User-centred design (UCD), which is a term introduced by Donald Norman in 1986 [22], is a concise version of HCD, which addresses specific target users in real situations, while HCD aims to include all humans as possible users and stakeholders. Figure 1.2 illustrates the steps involved in a human-centred design process, which involves understanding and specifying the context of use, eliciting and specifying user and organisational requirements, designing solutions addressing the requirements, evaluating the designed solutions against the requirements, and possibly an iteration of these steps for improving the solutions until the requirements are well met.

Fig. 1.2 Steps involved in a human-centred design approach (ISO 13407)

1.4.10 Usable Privacy

As we clarified in Sect. 1.4.7, HCI is both about usable user interfaces of computer systems and users' experiences of such systems, including their understanding, learning, reacting,
and adapting. If the computer system in the HCI definition is a privacy system3 then the ultimate goal is to design and implement a system that is both usable and still fulfils its primary privacy missions. Therefore, the usable privacy field deals with topics on how and why users interact with privacy systems and how their interactions can be improved. Considering the intersection of privacy and HCI is important in the usable privacy field: For example, suppose we just care about privacy, and then humans are secondary constraints to privacy, making privacy metrics of interest. In the HCI and usability context, humans will be the primary constraint, privacy will be seldom thought of, and usability metrics will be used most often. However, both metrics may and should be used to determine usable privacy, and humans and privacy are both the primary constraints. Effective usable privacy enables individuals to protect their personal information and exercise their privacy rights in an increasingly digital world.

1.4.11 Metaphors

A metaphor is an expression, often found in the literature, that describes something by referring to something else that is considered to have similar or, in a certain way, the same characteristics. Metaphors from the real world can serve as a valuable user interface technique for achieving a match between the system and the real world and for simplifying interface design. They can provide an environment that users are familiar with and thereby reduce the need to develop new knowledge and understanding [8].

1.4.12 Mental Models

A mental model is a set of beliefs and ideas that we consciously or unconsciously form about how something works in the real world. HCI research can utilise the tendency of users to create their own personal models, especially for complex systems, by either guiding users to form appropriate models or by examining the models that already exist and building on them [8]. The conceptual model, which is given to the users through the system design and interface of the actual system [28], should match the users’ mental models.

1.4.13 Nudging and Dark Patterns

Thaler and Sunstein [26] first defined nudging in 2008 as “Any aspect of the choice architecture that alters people’s behaviour in a predictable way without forbidding any options or significantly changing their economic incentives.” Nudges increase the likelihood that a user will reach a wise decision by making the better choice more convenient or salient.

Thaler and Sunstein portray nudging as operating in the best interest of the user, even though this is not part of the above definition. Despite this, nudging techniques can be used not only to promote privacy-friendly choices but also to trick users into making decisions that are not in their best interests, in particular by using so-called “dark patterns”. The term “dark patterns” was first introduced in 2010 by Brignull on darkpatterns.org,4 which describes dark patterns as “tricks used in websites and apps to make you do things you didn’t mean to, like buying or signing up for something.” Dark patterns are deceptive user interfaces that are specifically designed to tempt users into doing something against their wishes, such as giving consent to disclose personal data. Researchers have categorised dark patterns to illuminate their effects (e.g. [23–25]). Among the existing categorisations, Luguri and Strahilevitz [23] summarise them as nagging, social proof, obstruction, sneaking, interface interference, forced action, scarcity, and urgency.

Hansen [27] later proposed a new definition of nudging: “A nudge is a function of (1) any attempt at influencing people’s judgement, choice or behaviour in a predictable way, that is (2) made possible because of cognitive boundaries, biases, routines and habits in individual and social decision-making posing barriers for people to perform rationally in their own self-declared interests and which (3) works by making use of those boundaries, biases, routines, and habits as integral parts of such attempts.” This definition detaches nudging from the tricky realm of libertarian paternalism, removes the subjective requirement that nudging is for the users’ benefit (which means that dark patterns can be seen as nudges), and does not require equivalence or inclusion of all options.

4 Readdressed and now available at: https://www.deceptive.design/.

1.5 Related Surveys and Books

Earlier related and recommended books or surveys on usable privacy and security include:

1. Cranor, L. F., & Garfinkel, S. (2005). Security and usability: Designing secure systems that people can use. O’Reilly Media, Inc.
2. Iachello, G., & Hong, J. (2007). End-user privacy in human–computer interaction. Foundations and Trends in Human–Computer Interaction, 1(1), 1–137.
3. Garfinkel, S., & Lipford, H. R. (2014). Usable security: History, themes, and challenges. Synthesis Lectures on Information Security, Privacy, and Trust, 5(2), 1–124.
4. Reuter, C., & Kaufhold, M. A. (2021). Informatik für Frieden-, Konflikt- und Sicherheitsforschung. In Sicherheitskritische Mensch-Computer-Interaktion: Interaktive Technologien und Soziale Medien im Krisen- und Sicherheitsmanagement (pp. 605–630). Wiesbaden: Springer Fachmedien Wiesbaden. (In German)

5. Reuter, C., Iacono, L. L., & Benlian, A. (2022). A quarter century of usable security and privacy research: Transparency, tailorability, and the road ahead. Behaviour & Information Technology, 41(10), 2035–2048.
6. Knijnenburg, B. P., Page, X., Wisniewski, P., Richter Lipford, H., Proferes, N., & Romano, J. (2022). Modern Socio-Technical Perspectives on Privacy (p. 462). Springer Nature.

The book by Cranor and Garfinkel is a collection of scientific articles and not a textbook, and the book edited by Reuter is written and only available in German. In comparison to all the books and surveys listed, which address usable security including privacy, this book focuses entirely on usable privacy and explores the challenges of usable privacy design and approaches to addressing them in more detail. Moreover, in contrast to the books by Cranor and Garfinkel and by Garfinkel and Lipford, and the survey by Iachello and Hong, this book provides an up-to-date overview of the field of usable privacy research, which has developed considerably during the last 25 years, also due to the GDPR and other significant developments in the technical and legal privacy domain that required novel usability solutions. The recent survey by Reuter, Lo Iacono, and Benlian provides a brief overview of the usable security and privacy field from a historical point of view, which complements the perspectives presented in this book. The book on “Modern Socio-Technical Perspectives on Privacy” by Knijnenburg et al. is a collection of chapters written by different authors to inform researchers and professionals about the socio-technical privacy challenges related to modern networked technologies. As opposed to the last book on the list, ours focuses more specifically on multidisciplinary aspects related to usable privacy, including socio-technical aspects. Nevertheless, it also contains topical chapters that discuss usability challenges and ways to advance privacy in selected topic areas, including the Internet of Things (IoT) domain as well as accessible privacy and privacy for vulnerable populations, for which we also provide a review and discussion of academic literature.

References

1. DeCew, J. Privacy. The Stanford Encyclopedia of Philosophy (2018). https://plato.stanford.edu/archives/spr2018/entries/privacy/
2. Cavoukian, A. & Others. Privacy by design: The 7 foundational principles. Information and Privacy Commissioner of Ontario, Canada. 5, pp. 12 (2009)
3. Holvast, J. History of privacy. The History of Information Security. Springer. pp. 737–769 (2007)
4. Warren, S. & Brandeis, L. Right to privacy. Harvard Law Review. 4, pp. 193 (1890)
5. BVerfG, Order of the First Senate of 15 December 1983 - 1 BvR 209/83 -, paras. 1–214. http://www.bverfg.de/e/rs19831215_1bvr020983en.html (1983)
6. Westin, A. Privacy and freedom. Washington and Lee Law Review. 25, 166 (1968)
7. Nissenbaum, H. Privacy as contextual integrity. Wash. L. Rev. 79, pp. 119 (2004)

8. Patrick, A. & Kenny, S. From privacy legislation to interface design: Implementing information privacy in human-computer interactions. International Workshop on Privacy Enhancing Technologies. pp. 107–124 (2003)
9. The European Parliament and the Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council (2016)
10. Reuter, C., Iacono, L. & Benlian, A. A quarter century of usable security and privacy research: transparency, tailorability, and the road ahead. Behaviour & Information Technology. 41, pp. 2035–2048 (2022)
11. The European Data Protection Board. Guidelines 05/2020 on consent under Regulation 2016/679. Version 1.1 (2020)
12. Hansen, M., Jensen, M. & Rost, M. Protection goals for privacy engineering. 2015 IEEE Security and Privacy Workshops. pp. 159–166 (2015)
13. Pfitzmann, A. & Hansen, M. A terminology for talking about privacy by data minimization: Anonymity, unlinkability, undetectability, unobservability, pseudonymity, and identity management. (Dresden, Germany, 2010)
14. International Organization for Standardization. Health informatics – Pseudonymization (2017). https://www.iso.org/standard/63553.html
15. International Organization for Standardization. Common Criteria for Information Technology Security Evaluation (2022). https://www.iso.org/standard/63553.html
16. Danezis, G., Domingo-Ferrer, J., Hansen, M., Hoepman, J., Metayer, D., Tirtea, R. & Schiffner, S. Privacy and data protection by design - from policy to engineering. ArXiv preprint arXiv:1501.03726 (2015)
17. Hes, R. & Borking, J. Privacy-Enhancing Technologies: The Path to Anonymity (1995)
18. Nielsen, J. Ten usability heuristics. http://www.nngroup.com/articles/ten-usability-heuristics/ (2005)
19. Shneiderman, B. Designing the User Interface: Strategies for Human-Computer Interaction. Reading, MA: Addison-Wesley (1998)
20. Brooke, J. & Others. SUS - A quick and dirty usability scale. Usability Evaluation in Industry. 189, pp. 4–7 (1996)
21. Pater, J., Coupe, A., Pfafman, R., Phelan, C., Toscos, T. & Jacobs, M. Standardizing reporting of participant compensation in HCI: A systematic literature review and recommendations for the field. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. pp. 1–16 (2021)
22. Norman, D. User-Centered System Design: New Perspectives on Human-Computer Interaction. (CRC Press, 1986)
23. Luguri, J. & Strahilevitz, L. Shining a light on dark patterns. Journal of Legal Analysis. 13, pp. 43–109 (2021)
24. Gray, C., Kou, Y., Battles, B., Hoggatt, J. & Toombs, A. The dark (patterns) side of UX design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. pp. 1–14 (2018)
25. Bösch, C., Erb, B., Kargl, F., Kopp, H. & Pfattheicher, S. Tales from the dark side: Privacy dark strategies and privacy dark patterns. Proc. Priv. Enhancing Technol. 2016, pp. 237–254 (2016)
26. Leonard, T. Richard H. Thaler, Cass R. Sunstein, Nudge: Improving decisions about health, wealth, and happiness. (Springer, 2008)
27. Hansen, P. The definition of nudge and libertarian paternalism: Does the hand fit the glove? European Journal of Risk Regulation. 7, pp. 155–174 (2016)
28. Weinschenk, S. 100 things every designer needs to know about people. (Pearson Education, 2011)

2 Background: Privacy Laws and Technologies

2.1 Introduction

Privacy-Enhancing Technologies (PETs) enforce legal privacy principles and data protection goals. Legal privacy principles in turn also have Human-Computer Interaction (HCI) implications, especially the principles of transparency, intervenability, and consent, which require interactions with data subjects. Designing transparency for PETs and usable PETs also requires an understanding of the core functionality of these PETs. For this reason, usable privacy researchers and developers should be familiar with both the legal and the technical background of privacy and PETs. This chapter therefore provides the background on important laws and legal principles for privacy protection, with a special focus on the European General Data Protection Regulation (GDPR), as it has been considered the new global “gold standard” for data protection regulation [57]. Moreover, we provide an introduction to PETs as well as a PET classification with relevant examples of PETs that have been researched and/or are used in practice.

2.2 Laws for Privacy Protection

This section provides an overview of privacy and data protection regulations and their history.

2.2.1 First Laws for Data Protection

The Convention for the Protection of Human Rights and Fundamental Freedoms, which entered into force in 1953 and played an important role in Europe for the development and awareness of human rights, acknowledges privacy in its Article 8: “Everyone has the right to respect for his private and family life, his home and his correspondence.”

In the 1960s and 1970s, with the advent of Information Technology (IT), the interest in privacy and data protection increased. The very first data protection law in the world was enacted in the German state of Hessen in 1970, followed by national laws in Sweden (1973), the United States (1974), Germany (1977), and France (1978). From these national laws, two important international instruments evolved: (1) the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data of 1981, and (2) the OECD’s Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data of 1980, which were updated in 2013. The OECD guidelines are, however, advisory in nature and not legally binding on member states. Both instruments articulate specific rules and principles covering the handling of electronic data, which form the core of the data protection and privacy laws of several countries.

2.2.2 The European Legal Privacy and Data Protection Framework

2.2.2.1 GDPR—Background and Objectives

In the early nineties, different levels of data protection across Europe led to a risk that the data protection rules of EU member states with a high level of legal data protection (such as Germany or Sweden at that time) could be circumvented by outsourcing the personal data processing to member states with no data protection legislation in place (such as Italy or Greece at that time). As an instrument of harmonisation and for the protection of privacy as a fundamental right of individuals in the European Union, the EU Data Protection Directive 95/46/EC was formally adopted in 1995, which also restricted personal data transfers to third countries without an adequate level of data protection. EU member states had to revise their respective national laws to comply with the requirements of the directive. It was, however, still up to each country to implement the directive, and there remained a great deal of variation between countries. To address this issue, the directive was replaced by the GDPR on 25 May 2018. As a regulation, the GDPR is directly applicable, in full, to all member states.

The GDPR additionally has the objective of modernising data protection rules, particularly for protecting users in the global economy: the GDPR is therefore applicable not only to organisations established in the EU but also to any controller or processor, no matter where it is based, that offers goods and services to, or monitors, individuals in the EU (Art. 3 (2) GDPR). The modernisation of data protection rules has also strengthened data subject rights by providing individuals with more control over their data. Moreover, the GDPR has the objective of improving levels of compliance, particularly by introducing mandatory data breach notification and more significant penalties. Organisations that fail to meet their regulatory obligations for complying with the GDPR face fines of up to 4% of their annual global turnover, or 20 million EUR, whichever is greater (Art. 83 GDPR).

In the following subsections, we will first present the general data processing principles of the GDPR, which, as shown in Sect. 5.10, can be mapped to HCI requirements and solutions for usable privacy. Moreover, we will elaborate in more detail on the rules that the GDPR provides for enhancing data subjects’ rights and controls, including rules for consent, which all require usable implementations. We will also discuss the importance of usability for the GDPR’s principle of Data Protection by Design and Default.

2.2.2.2 GDPR: Data Processing Principles

General data processing principles are defined by the GDPR in its Art. 5. These principles mostly also match the general privacy principles of the OECD privacy guidelines. As stated by Art. 5 GDPR, personal data processing shall comply with the following principles:

• Lawfulness, fairness, and transparency: data should be processed lawfully, fairly, and in a transparent manner (Art. 5 (1) (a)—cf. OECD Openness & Individual Participation Principles);
• Purpose limitation: data shall be collected for specified, explicit, and legitimate purposes and not processed in a way incompatible with those purposes (Art. 5 (1) (b)—cf. OECD Purpose Specification & Use Limitation Principles);
• Data minimisation: data should be adequate, relevant, and limited to what is necessary (Art. 5 (1) (c)—cf. OECD Data Quality Principle);
• Data accuracy: data should be accurate and kept up to date (Art. 5 (1) (d)—cf. OECD Data Quality Principle);
• Storage limitation: data should be kept in a form that allows identification of data subjects for no longer than necessary (Art. 5 (1) (e)—cf. OECD Data Quality Principle);
• Integrity and confidentiality: data should be processed in a manner that ensures appropriate security of the data (Art. 5 (1) (f)—cf. OECD Security Safeguards Principle);
• Accountability: the controller shall be responsible for the processing and demonstrate compliance (Art. 5 (2)—cf. OECD Accountability Principle).
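The purpose limitation and data minimisation principles lend themselves to simple technical checks at the point where data is collected. The following Python sketch is purely illustrative and not taken from the GDPR or from this book; the purpose names and field names are hypothetical assumptions:

    # Illustrative sketch: reject data collection that exceeds what a declared purpose needs.
    # The purpose registry below is a hypothetical example, not a legally vetted mapping.
    ALLOWED_FIELDS_PER_PURPOSE = {
        "order_fulfilment": {"name", "delivery_address", "email"},
        "newsletter": {"email"},
    }

    def check_collection(purpose: str, requested_fields: set) -> set:
        """Return the fields that may be collected for the declared purpose.

        Raises ValueError if the purpose is unknown or if more data is requested
        than the purpose requires (data minimisation, Art. 5 (1) (c) GDPR).
        """
        allowed = ALLOWED_FIELDS_PER_PURPOSE.get(purpose)
        if allowed is None:
            raise ValueError(f"No specified, explicit purpose registered for '{purpose}'")
        excess = requested_fields - allowed
        if excess:
            raise ValueError(f"Fields {sorted(excess)} exceed what '{purpose}' requires")
        return requested_fields

    # Example: requesting a phone number for the newsletter purpose would be rejected.
    check_collection("newsletter", {"email"})             # accepted
    # check_collection("newsletter", {"email", "phone"})  # would raise ValueError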

2.2.2.3 GDPR: Data Subject Rights

The GDPR puts a strong emphasis on “transparent and fair” data processing (mentioned in Art. 5 (1) (a) as one of the main principles), and at the same time strengthens the data subject rights for transparency and intervenability (Art. 12–23, 34 GDPR & OECD Openness & Individual Participation Principle). General rules for “Transparent information, communication and modalities for the exercise of the rights of the data subject” are specified in Art. 12 GDPR and need to be applied for implementing the transparency rights granted by the GDPR, which include

• the Right to Information, which specifies that data subjects should be provided with the necessary privacy policy information before data is disclosed, i.e. ex-ante (Art. 13–14 GDPR);

• the Right to Access, which specifies that data subjects should be provided with information about their data and how they have been processed after data have been disclosed, i.e. ex-post (Art. 15, 22 GDPR);
• a provision on data breach notification that, in addition, requires data controllers, under certain circumstances, to inform data subjects about security breaches related to their personal data (Art. 34 GDPR).

In its Art. 12, the GDPR puts special emphasis on the usability of transparency. Information to be provided to the data subject needs to be concise, transparent, and intelligible, and needs to be provided in an easily accessible form, using clear and plain language. To this end, the European Data Protection Board (EDPB) [2] recommends multi-layered privacy notices, which allow data subjects to navigate to a particular section of the notice rather than having to scroll through a large amount of text (see Sect. 5.6.1 for more information on multi-layered notices). Moreover, pursuant to Art. 12, information should be provided by electronic means for electronic requests. Besides, a combination of policy text with standardised and machine-readable policy icons is suggested. In addition to visually illustrating the written policy information, these icons can serve as a complement to, and not a replacement for, the written policy information.

Information that should be made transparent ex-ante (i.e. before personal data is disclosed) and ex-post (i.e. after personal data has been disclosed) comprises several items (cf. Art. 13–15 GDPR): First of all, the information that always needs to be provided to the data subject includes the identity of the controller and its contact details; the data processing purposes and legal basis for processing; the data recipients; and also information about any international data transfers to third countries. In addition, information should be provided to ensure fair and transparent processing, including information about storage retention periods; data subject rights; and the existence of automated decision-making, the logic involved, and its significance and envisioned consequences. The controller needs to provide a copy of the data in a commonly used electronic format—this may facilitate searching for data entries or visualising data traces with transparency-enhancing tools.

The intervenability rights that the GDPR grants to data subjects include

• the right to rectification (i.e. correction) of inaccurate personal data (Art. 16);
• the right to erasure, also called the “right to be forgotten”, especially in cases where data are no longer necessary or where the data processing was unlawful (Art. 17);
• the right to restriction of processing (i.e. the right to block the data processing under certain circumstances; Art. 18);
• the right to have the controller notify recipients of the data about any rectification, erasure, or restriction of processing (Art. 19);
• the right to data portability (Art. 20);

• the right to object to marketing and profiling (Art. 21–22);
• the right to lodge a complaint with a supervisory authority (Art. 57 (f), 80).

The right to be forgotten was also proclaimed in the European Court of Justice decision of Mario Costeja González versus Google Spain,1 which has forced Google to remove outdated search indices for fulfilling respective data subject rights requests.

1 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62012CJ0131.

Using the right to data portability, which was newly introduced with the GDPR, data subjects can easily export their data from one service provider to another, which may offer more privacy-friendly services. It comprises the rights to receive disclosed personal data in a structured, commonly used, and machine-readable format and to transmit those data to another controller, or to have the data transmitted directly between controllers. The right may prevent a “lock-in” to a particular service when switching to a more privacy-friendly alternative service provider would otherwise be too expensive or too complex. Data portability, however, only needs to be provided for data that are explicitly or implicitly disclosed by the data subject. In contrast to the right to data portability, the right to access and receive an electronic copy of one’s data according to Art. 15 GDPR refers to all types of data, including data that a service provider derived, e.g. via profiling, or obtained from a third party.

The GDPR also restricts “profiling” and gives data subjects significant rights to avoid profiling-based decisions. In this context, it is important to remember that data subjects have the right to object to data processing, including profiling, which is granted pursuant to Art. 21 if the legal basis for data processing is a task of public interest or the legitimate interest of the controller (Art. 6 (1) (e) or (f) GDPR), or in the case of direct marketing (where it is an absolute right with no exceptions). Besides, data subjects can also object in the case of data processing for research or statistics. Pursuant to Art. 22, data subjects also have the right not to be subject to fully automated decision-making (i.e. without human intervention) which produces legal effects concerning the data subject or similarly significantly affects them. According to [1], Art. 22 (1) establishes not only a data subject right but also a general prohibition of decision-making based solely on automated processing, which applies whether or not the data subject takes an action to actively invoke this right regarding the processing of their personal data.
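To illustrate what a “structured, commonly used, and machine-readable format” for data portability could look like in practice, the following Python sketch exports a hypothetical user profile as JSON; the field names and content are illustrative assumptions only, not prescribed by Art. 20 GDPR or by this book:

    import json
    from datetime import date

    # Hypothetical example of data explicitly disclosed by a data subject.
    user_provided_data = {
        "profile": {"name": "Alex Example", "email": "alex@example.org"},
        "preferences": {"newsletter": False, "language": "en"},
        "uploaded_posts": [
            {"date": date(2023, 5, 1).isoformat(), "text": "Hello world"},
        ],
    }

    def export_for_portability(data: dict) -> str:
        # JSON is one commonly used, machine-readable format that another
        # controller can import without manual re-entry by the data subject.
        return json.dumps(data, indent=2, ensure_ascii=False)

    print(export_for_portability(user_provided_data))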

2.2.2.4 GDPR: Consent

Personal data processing must have lawful grounds under the GDPR. Consent is one of the six legal bases listed in Art. 6 of the GDPR and should also serve as a means for enabling data subjects’ control over their data.

Art. 4 (11) of the GDPR defines “consent” of the data subject as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her”. This means that several conditions have to be fulfilled for valid consent (see also [2]). A valid consent must be

• Freely given: This requires, in particular, that there must be real and free choices, which usually do not exist if there is an imbalance of power (e.g. in an employee-employer relationship). To be freely given, there should be no negative consequences if consent is not given. Moreover, consent may not be bundled as a non-negotiable part of the terms and conditions.
• Specific: To classify as “specific”, consent must be given for one or more specific purposes and the data subject must have a choice in relation to them, i.e. a separate opt-in is needed for each purpose.
• Informed: This requirement means that the data subject has to be informed about certain elements that are crucial to making a choice. Information items that need to be provided for valid consent include the controller’s identity, the data processing purposes, the type of data, the right to withdraw consent, information on the existence of decisions based solely on automated processing, and risks of data transfers to third countries.

Furthermore, consent requires a statement or an affirmative action, which involves the data subject taking a deliberate action to consent. Silence, pre-ticked checkboxes, or inactivity should therefore not be considered valid forms of consent.
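As a purely illustrative sketch (not taken from the GDPR or from this book), the conditions above can be mirrored in how consent is recorded: one record per specific purpose, created only upon an affirmative action, with a pointer to the notice that made it informed, and with withdrawal always possible. The purpose names and fields below are hypothetical assumptions:

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class ConsentRecord:
        purpose: str                      # one record per specific purpose (separate opt-in)
        informed_via: str                 # e.g. version/anchor of the notice shown beforehand
        given_at: Optional[datetime] = None
        withdrawn_at: Optional[datetime] = None

        def give(self) -> None:
            # Only an explicit call (an affirmative action in the UI) sets consent;
            # there is no pre-ticked or default "given" state.
            self.given_at = datetime.now(timezone.utc)
            self.withdrawn_at = None

        def withdraw(self) -> None:
            # Withdrawal must be possible at any time.
            self.withdrawn_at = datetime.now(timezone.utc)

        @property
        def is_valid(self) -> bool:
            return self.given_at is not None and self.withdrawn_at is None

    # Example: separate opt-ins per purpose; "analytics" stays invalid until explicitly given.
    newsletter = ConsentRecord(purpose="newsletter", informed_via="privacy-notice-v3#newsletter")
    analytics = ConsentRecord(purpose="analytics", informed_via="privacy-notice-v3#analytics")
    newsletter.give()
    assert newsletter.is_valid and not analytics.is_valid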

2.2.2.5 GDPR: Data Protection by Design and Default

The GDPR also introduced Data Protection by Design and by Default as a legal principle in its Art. 25. As a result of the principle of Data Protection by Design, data controllers are required to implement adequate technical and organisational measures, as early as possible in the design of processing operations, ensuring that privacy and data protection principles, such as data minimisation, are respected from the start. One specific example that Art. 25 provides is the use of pseudonymisation. Another example is encryption, which means encoding messages so that only those authorised can read them. Both pseudonymised and encrypted data, however, are still classified as personal data.

Furthermore, by default, data controllers should only collect, store, and process personal data to the amount and extent necessary for specific data processing purposes. This means that the principle of data minimisation, which is also an expression of the proportionality principle, should be enforced by default. For achieving data protection by default, systems should, for instance, define the default privacy settings as the most privacy-friendly option by limiting the amount of data that is collected, processed, or accessible by others to the minimum needed.

Involving multiple disciplines, including usability design, is of high importance for achieving Data Protection by Design and Default, as the end users should ultimately profit from Data Protection by Design [3]. Also, Cavoukian emphasises that the Privacy by Design principle “Respect for Privacy” that she defined extends to the need for user interfaces to be “human-centred, user-centric and user-friendly, so that informed privacy decision may be reliably exercised” [4]. Privacy by Design and by Default can in turn also be a means for making privacy usable, especially when they reduce the interactions users need to protect their privacy, or ideally make such interactions obsolete.
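A minimal, purely illustrative sketch of what “data protection by default” can mean for default privacy settings is shown below; the setting names are hypothetical assumptions and not drawn from the GDPR or from this book:

    # Defaults are the most privacy-friendly option; users may relax them later,
    # but a new account never starts in a data-maximising state.
    DEFAULT_PRIVACY_SETTINGS = {
        "profile_visibility": "private",   # not visible to others by default
        "location_sharing": False,         # no location data collected by default
        "personalised_ads": False,         # no profiling for advertising by default
        "data_retention_days": 30,         # keep data only as long as needed
    }

    def new_account_settings(user_choices: Optional[dict] = None) -> dict:
        """Start from privacy-friendly defaults; apply only explicit user choices."""
        settings = dict(DEFAULT_PRIVACY_SETTINGS)
        settings.update(user_choices or {})
        return settings

    from typing import Optional  # required import for the annotation above

    # A new account without any explicit choices gets the restrictive defaults.
    assert new_account_settings()["personalised_ads"] is False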

2.2.2.6 GDPR: Data Protection Impact Assessment

Pursuant to Art. 35 GDPR, a Data Protection Impact Assessment (in short: DPIA, also known as Privacy Impact Assessment (PIA)) is required in case the processing is likely to result in a high risk to the rights and freedoms of the data subjects. A DPIA should be conducted at least in the case of systematic or extensive user profiling, processing of sensitive data on a large scale, or in the case of systematic monitoring of public areas on a large scale. A DPIA should also produce a deep understanding of a system’s privacy features, which can be used to codify a comprehensive privacy policy to be presented to users [5], and can thus facilitate usable privacy notices.

2.2.2.7 GDPR Restrictions for International Data Transfers

By its territorial scope, the GDPR applies to controllers and processors that are established in the EU, and also to those established outside the EU that offer goods and services to, or monitor individuals in, the EU. In addition, the GDPR restricts the transfer of personal data to third countries outside the EU, preventing companies in Europe from circumventing the GDPR’s strict data protection rules by outsourcing their data processing to countries outside of Europe with less stringent or no data protection laws. The adequacy principle of Art. 45 GDPR therefore allows personal data transfers to a third country outside the EU only if the EU Commission has decided that this third country has an adequate level of data protection. Further conditions allowing international data transfers, and thus examples of important exceptions to the adequacy rule, are defined in Art. 46–49. They include (1) contractual arrangements with the recipient of the personal data, using, for example, the standard contractual clauses (SCC) approved by the European Commission, (2) binding corporate rules that are designed to allow multinational companies to transfer personal data between company sites and for which it has been demonstrated that adequate safeguards are in place, or (3) the data subject’s explicit consent.

The US-EU Privacy Shield, which was adopted in 2016 by the EU Commission and replaced the Safe Harbor Agreement, was based on an adequacy decision and was agreed by the European Commission and the United States to establish a new framework for transatlantic data flows. However, similar to the Safe Harbor Agreement, the Privacy Shield was also declared invalid by the European Court of Justice in July 2020 in its so-called Schrems II decision, as an outcome of the lawsuits by the Austrian privacy activist Max Schrems against Facebook and its international data transfers. The Privacy Shield was invalidated due to invasive US surveillance programmes that did not satisfy the GDPR requirements for granting individuals transparency and intervenability rights over their personal data.

As expressed by the European Court of Justice, standard contractual clauses could, however, still be taken as a legal basis for data transfers outside of Europe in compliance with the GDPR, if they are complemented with adequate additional safeguards to compensate for insufficient protection in third-country legal systems (see the Schrems II judgment, https://curia.europa.eu/juris/liste.jsf?num=C-311/18). The EDPB has provided recommendations on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data. In Annex 2, the EDPB provides a list of supplementary measures (including PETs) that could be used in the context of implementing such additional safeguards [21].

To address the points raised by the European Court of Justice in its Schrems II decision, the EU and US recently negotiated the “EU-US Data Privacy Framework” (EU-US DPF), which introduced new binding safeguards. To this end, especially new obligations were introduced that should restrict data access by US intelligence agencies to what is necessary and proportionate. Moreover, an independent and impartial redress mechanism was introduced to address and resolve complaints from European citizens concerning the processing of their data for national security purposes. The framework also grants data subjects resident in the EU enhanced control rights. The EU Commission adopted its adequacy decision for the EU-US DPF on 10 July 2023 and thereby concluded that US companies participating in this Data Privacy Framework can ensure an adequate level of protection—compared to that of the EU—for personal data transferred from the EU to those US companies. However, the EU-US DPF has also been criticised by Max Schrems and others, e.g. since it does not address onward transfers of data from the US to other third countries, and hence it may soon be legally challenged again.

2.2.3 Further European Privacy Legislation

In addition to the GDPR, further privacy-related European laws exist that may, as specialised laws (lex specialis), override the rules of the GDPR or complement the GDPR with more specific rules.

In 2002, the EU ePrivacy Directive 2002/58/EC was enacted to regulate privacy in the electronic communication sector and was implemented by national laws in the member states. It deals with the regulation of several important issues such as confidentiality of communication, the protection of traffic and location data, as well as the regulation of spam and cookies. This directive was amended in 2009 with several changes, especially concerning cookies, which are now subject to prior consent. It has therefore also become known as the “Cookie Directive”. In the context of the reform of European data protection law, data protection in the electronic communication sector was also to be modernised with the ePrivacy Regulation, which was proposed in 2017 to replace the ePrivacy Directive but has not been enacted yet. The proposed regulation will in the future also apply to new players providing electronic communications services, such as video conferencing services, messengers, or email services, in contrast to the current rules of the ePrivacy Directive that apply only to traditional telecom providers. In particular, the proposed ePrivacy Regulation introduces simpler rules for cookies to avoid an overload of consent requests for Internet users. Browser settings may be used to provide an easy way to accept or refuse tracking cookies and other identifiers. In addition, no consent should be required for non-privacy-intrusive cookies that are used to improve the Internet experience, such as cookies to remember shopping cart history or to count the number of website visitors.

Further European legislation that was recently proposed or enacted with privacy and data protection dimensions includes the following. The Data Governance Act, which entered into application on 24 September 2023, creates the processes and structures to facilitate data sharing and re-use. The Data Governance Act is complemented by the Data Act (EU Regulation on harmonised rules on fair access to and use of data, adopted by the Council on 27 November 2023), which clarifies who can create value from data and under which conditions this can happen in line with EU rules.

The AI Act is a proposed EU regulation that aims to introduce a common regulatory and legal framework for artificial intelligence (AI) for assuring fundamental rights. It uses a risk-based regulatory approach to AI system providers in the EU, classifying AI applications by risk and regulating them accordingly. A ban is put on AI applications bearing unacceptable risks that contravene union values and violate fundamental rights, such as social credit scoring applications, while AI applications posing a high risk to the health and safety or fundamental rights of natural persons, such as intelligent CV-evaluation tools that rank job applicants, are subject to specific legal requirements. The European Parliament reached a provisional agreement with the European Council on the AI Act on 9 December 2023, but it still has to be formally adopted by both Parliament and Council to become EU law.

On 25 August 2023, the Digital Services Act (DSA) came into effect. It defines new rules for providers of intermediary services (e.g. cloud services, search engines, online marketplaces) and, in particular, aims to increase transparency for users, e.g. by identifying online advertising including the advertiser and sponsor, and by providing information on the main parameters used in recommender systems, including options that users have to modify or influence these parameters. Moreover, users will be able to choose options that do not include profiling.

2.2.4 Privacy Legislation in Non-European Countries Including the USA

As defined by Art. 45 GDPR, the transfer of personal data to a third country outside Europe can be authorised in case the EU Commission has decided that the third country provides an adequate level of protection for the data. This provision has motivated countries outside of Europe to revise or adopt national data protection laws for fulfilling this requirement of providing an adequate level of protection.

The European Commission had, by the end of 2023, recognised Andorra, Argentina, Canada (commercial organisations), the Faroe Islands, Guernsey, Israel, the Isle of Man, Japan, Jersey, New Zealand, the Republic of Korea, Switzerland, the UK, and Uruguay, as well as the United States (commercial organisations participating in the EU-US Data Privacy Framework—see below), as countries with privacy and data protection laws providing adequate protection.3 In recent years, further countries and jurisdictions have adopted privacy and data protection legislation, including South Africa’s Protection of Personal Information Act (POPI Act),4 the Brazilian General Data Protection Law (LGPD),5 and the Chinese data privacy framework with China’s Personal Information Protection Law (PIPL).6

The US Privacy Act of 1974, amended in 1988, provides a law for fair information practices that govern the collection, maintenance, use, and dissemination of data about individuals that are maintained in systems of records by US federal agencies. The United States has, however, no single federal regulation that covers the privacy of all types of data in different domains, including the private sector. Rather than a comprehensive national privacy law, a “patchwork” of many different laws governs privacy and data protection for specific sectors or domains, while large parts of the private sector are governed by self-regulation practices. Sector-specific laws include the Health Insurance Portability and Accountability Act (HIPAA), which contains privacy, security, and breach notification requirements applying to personal health data created, processed, or transmitted by healthcare providers, the Fair Credit Reporting Act (FCRA), protecting information in credit reports, and the Children’s Online Privacy Protection Rule (COPPA), which imposes certain privacy requirements on operators of websites or online services directed to children under the age of 13, prohibiting unfair and deceptive data processing practices.

Recently, some states in the USA, namely California, Virginia, and Colorado, have enacted different comprehensive consumer privacy laws that provide rights to the citizens of these states, while some other states have consumer data privacy proposals in progress. The California Consumer Privacy Act of 2018 (CCPA) and the California Privacy Rights Act (CPRA), which was adopted via referendum by the state of California in 2020 and amends the CCPA, are considered the strongest state privacy laws. They go beyond the principle of notice and choice and provide Californian citizens with non-waivable data subject rights to information, deletion, objection, and the right to non-discrimination for exercising their rights, as well as the right to correction and to limit the use and disclosure of sensitive personal information. The CPRA established the California Privacy Protection Agency as a new agency to implement and enforce the law.

In the US, consumer protection laws, which prohibit unfair and deceptive business practices, provide another avenue for enforcement against businesses for their privacy and security practices. At the federal level, the US Federal Trade Commission (FTC), empowered by the Federal Trade Commission Act, is the chief federal agency on privacy policy and enforcement, which protects consumers against unfair or deceptive trade practices. The FTC uses its authority to take enforcement actions and to investigate companies, e.g. for making inaccurate or misleading privacy and security statements or promises, for instance in privacy policies, or for failing to comply with applicable industry self-regulatory principles. Designing privacy policies and privacy notifications for usability therefore also plays a key role in preventing the unfair and deceptive practices that US laws for consumer and children protection try to prohibit.

With the GDPR’s territorial scope, which may also apply to non-EU-based data controllers and processors, the restrictions that the GDPR poses for data transfers to third countries without an adequate level of data protection, and the growing need and demand to protect individuals in the age of digitalisation, the GDPR is expected to continue to prompt non-EU countries to adopt or revise their data protection laws to reach higher protection standards than what they have today.

3 https://commission.europa.eu/law/law-topic/data-protection/international-dimension-data-protection/adequacy-decisions_en.
4 https://popia.co.za/.
5 http://www.planalto.gov.br/ccivil_03/_ato2015-2018/2018/lei/L13709compilado.htm.
6 https://digichina.stanford.edu/work/translation-personal-information-protection-law-of-the-peoples-republic-of-china-effective-nov-1-2021/.

2.3 Technologies and Tools to Protect and Enhance Privacy

While privacy and data protection regulations play an eminent role in protecting privacy and data protection as fundamental rights that are recognised by the EU Charter of Fundamental Rights, they cannot provide effective protection on their own, since they can be easily bypassed and are not always followed in practice. They should, therefore, be complemented by PETs, which are technologies that enforce legal privacy principles in order to protect and enhance the privacy of users of information technology and/or data subjects (see [6]). PETs provide technical building blocks for achieving privacy by design and by default and can be defined as technologies that enforce the fundamental data protection goals of unlinkability, intervenability, and transparency [7] in addition to the classical confidentiality, integrity, availability (CIA) security goals by minimising personal data collection and use, maximising data security, and empowering individuals. PETs can also function as enablers of data processing applications that without privacy enhancements/privacy by design would bear unacceptable privacy issues. For instance, for enabling European data controllers to use non-EU-based cloud services, PETs (such as those mentioned by the EDPB in Annex 2 of [21]) could be used for providing adequate additional safeguards complementing standard contractual clauses, as mandated by the European Court of Justice in its Schrems II decision.

Fig. 2.1 Privacy strategies by Hoepman [9]

Hoepman [8] introduced eight strategies for achieving privacy: minimise, hide, separate, aggregate, inform, control, enforce, and demonstrate, as illustrated in Fig. 2.1. These privacy strategies can help support privacy by design throughout the full software development life cycle and are motivated by data protection legislation, such as the GDPR. In the following subsections, we group relevant PETs under the eight strategies that they help to enforce. Several PETs enforce more than one privacy strategy at the same time, as we will show below. The objective is not to provide a complete overview of PETs; rather, we selected PETs that we consider to be of high practical relevance.

2.3.1 PETs to “Minimise”

The most fundamental privacy strategy is enforcing the legal privacy principle of data minimisation, requiring that the amount of personal information that is processed should be kept to a minimum. As mentioned in Sect. 1.4.4, data minimisation can be technically achieved in the form of anonymity, pseudonymity, unlinkability, and/or unobservability. We can categorise PETs to “minimise” by whether they enforce data minimisation at the communication level, the application level, or the data level. It is important to understand that applications (e.g. payment schemes) can only be designed and used anonymously if their users cannot already be identified at the communication level (e.g. via their IP addresses). Hence, anonymity at the communication level is a prerequisite for achieving anonymity, or more generally data minimisation, at the application level.

Data minimisation at communication level: Several protocols and tools for anonymous communication have been proposed and developed. Several are based on the fundamental idea of mix nets, which was developed by David Chaum back in 1981 [10]. Mix nets route pre-encrypted messages of a sender to a recipient over a series of proxy servers (the so-called “mix-servers”) to hide which sender is communicating with which recipient. The sender determines the route of mix-servers and pre-encrypts the message (plus the address of the next hop along the path) with the public keys of the nodes along the path in reverse order. Each mix-server receives a set of messages, decrypts/removes one layer of encryption from each message, which provides the address of the next node along the path, shuffles the messages, and forwards them in a random order to the next node (the next mix-server or the final destination). The change of the messages’ appearance via decryption and the message shuffling by each mix break the link between the source of a message and its destination, preventing eavesdroppers (and even global adversaries that can monitor all communication lines in a network) from tracing the message traffic flows. Also, each node along the path only knows the node from which it immediately received the message and the next node on the path, but cannot link the sender with the recipient of the message. Hence, relationship anonymity (unlinkability of who communicates with whom) can be provided.

The mix net protocol was originally developed for anonymous (high-latency) email communication and implemented in the form of anonymous remailers, such as Mixmaster (https://mixmaster.sourceforge.net/), and has been further enhanced in recent project implementations, such as Katzenpost (https://github.com/katzenpost) and Loopix [11].

Mix nets can also be seen as the “conceptual ancestor” of onion routing and the Tor (The Onion Router) network [17], which is the most widely used anonymous communication system today, with millions of users worldwide. In the mid-nineties, when interactive Internet services became broadly used, the concept of onion routing as a low-latency anonymous communication protocol was developed and successively enhanced to the Tor protocol. Similar to the mix net protocol, the Tor protocol also uses layers of encryption (thus the
onion metaphor) to route traffic through a path of three onion servers, called relays (the guard relay, the middle relay, and the exit relay). In contrast to mix nets, symmetric encryption with short-term session keys is used instead of public key encryption, and no message mixing takes place in the servers, in order to achieve low-latency communication. These differences also imply that Tor, in contrast to mix nets, cannot protect against traffic analysis by global adversaries. However, unlike mix nets, Tor offers forward secrecy, meaning that once the session keys used during a communication session by the onion servers are deleted, the message flows and thus the routing of a message cannot be reconstructed any longer.

Virtual Private Networks (VPNs) are very frequently used by users to hide their identities/IP addresses from their Internet Service Providers (ISPs). When using a VPN, an encrypted connection (tunnel) is established between the user’s device and the VPN server, and all the user’s traffic travels through this encrypted tunnel to the VPN’s servers and then onward to the website that the user is visiting. The websites thus only see the VPN service’s IP address instead of the user’s. The VPN server, however, decrypts the user’s data before sending it forward. Hence, VPN servers become single points of trust that could misuse their power for viewing and profiling the communication of users.

Steganographic tools can be used for implementing unobservable communication channels (see, for example, [23]). In contrast to cryptography, where the goal is to hide the content of a message while an adversary is nevertheless able to detect, intercept, and modify the (encrypted) message, the goal of steganography is to hide the very existence of a message, and thus make the communication unobservable. Most available steganographic tools use images as carriers for hidden information; e.g. simple steganographic tools can hide information in the least significant bits of an image.

Data minimisation at application level: Even if the communication channel is anonymised, users can still disclose personal and identifying data at the application level, and often users are requested to reveal more personal data than needed. Hence, data minimisation techniques are also needed at the application level. Many such data minimisation techniques are based on “crypto magic” schemes, with somewhat “surprising” and counter-intuitive properties for lay users. Cryptographic building blocks for such schemes are, for example, blind signatures and/or zero-knowledge proofs.

Blind signatures, invented by David Chaum [12] in the eighties, are an extension of digital signatures and allow someone to obtain a signature from a signer on a document without the signer being able to see the actual content of the “blinded” document. Hence, if the signer is later presented with the signed “unblinded” document, they cannot relate it to the signing session or to the person on behalf of whom they signed the document. Blind signatures were presented by David Chaum as a basic building block for anonymous electronic money (eCash), where a bank blindly signs a banknote with a randomly generated banknote number for a customer. The customer can then unblind the banknote (still including a valid signature from the bank) and spend it anonymously (as the bank cannot relate it to the banknote that
it signed, since the banknote number was hidden during the signing process). Blind signatures can also be used to achieve anonymity in other applications, such as eVoting [13].

A zero-knowledge proof is defined as an interactive proof in which a prover can prove to a verifier that a statement is true without revealing anything other than the veracity of the statement.

Blind signatures and zero-knowledge proofs are used for implementing anonymous credential systems. A traditional credential consists of a set of personal attributes, such as name, birth date, and personal number, and is signed (and thereby certified) by the certifying party (the so-called issuer) and bound to its owner by cryptographic means. A traditional credential is shared in its entirety, meaning that all attributes in the credential are revealed together if the user wants to prove certain properties to a verifier. Thus, usually more data may be revealed than needed for a specific application. Moreover, the different uses of the user’s credentials can be easily linked by the verifier. Anonymous credentials (also called Attribute-Based Credentials or ABCs) were first introduced by Chaum [14] and later enhanced by Brands [15] and by Camenisch and Lysyanskaya [16], and have stronger privacy properties than traditional credentials. They essentially allow the user to turn the credential into a new one that contains only a subset (i.e. a selection) of attributes of the original credential, which are then presented to the verifier; i.e. anonymous credentials enable selective disclosure of only those attributes required for a specific application. Moreover, instead of revealing the exact value of an attribute, anonymous credential systems also enable the user to prove only attribute properties without revealing the attribute itself. If, for instance, users want to stream adult videos from an online platform, they can use an anonymous credential (including the birth date attribute) to prove that they are older than 18 years without revealing their exact birth date or any other attributes contained in the credential.

With the Identity Mixer (idemix) protocol by Camenisch et al., which is based on zero-knowledge proofs, the issuer’s signature is transformed in such a way that the signature in the new credential cannot be linked to the original signature by the issuer. Hence, different uses of the same credential (e.g. for streaming different videos) cannot be linked by the verifier and/or issuer, which prevents them from profiling the user. Simply speaking, with idemix, the user can utilise zero-knowledge proofs to convince the verifier that they possess a signature generated by the issuer on a statement containing the selected attributes. The anonymous credential scheme by Brands (used in Microsoft’s U-Prove technology) is based on blind signatures (where the user can selectively “unblind” attributes that should be disclosed), which, however, makes the re-uses of an anonymous credential linkable. Hence, if the user wants to achieve unlinkability of transactions with the anonymous credential scheme by Brands, the user needs to present a new credential for each transaction. Anonymous credentials have been used for developing privacy-enhancing identity management and authentication systems, and for diverse privacy-preserving applications, such as anonymous e-petitions [24] or anonymous chat rooms that are accessible only to teenagers [25], who can anonymously prove that they belong to a certain age group.
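To make the blinding idea more concrete, the following is a minimal textbook-style sketch of Chaum-style blind signatures over RSA in Python, using the classic toy parameters and omitting the padding and hashing that any real scheme needs; it is illustrative only and not taken from this book or from any production library:

    # Toy Chaum-style RSA blind signature (schoolbook RSA, insecure toy parameters,
    # no padding or hashing - illustration only).
    import math, secrets

    p, q = 61, 53                      # classic textbook toy primes
    n = p * q                          # 3233
    e = 17                             # signer's public exponent
    d = pow(e, -1, (p - 1) * (q - 1))  # signer's private exponent

    m = 65                             # the "banknote number" to be signed (m < n)

    # 1. The user blinds the message with a random factor r.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    # 2. The signer signs the blinded value without seeing m.
    blinded_sig = pow(blinded, d, n)

    # 3. The user removes the blinding factor and obtains a valid signature on m.
    sig = (blinded_sig * pow(r, -1, n)) % n

    # 4. The signature verifies like an ordinary RSA signature, yet the signer
    #    cannot link (m, sig) to the blinded value it signed earlier.
    assert pow(sig, e, n) == m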

Fig. 2.2 Redactable signatures in an eHealth use case by the PRISMACLOUD project [18]

Related to anonymous credential schemes are malleable signatures (also called redactable signatures) [26], which also enable selective disclosure. In contrast to traditional digital signatures, where any changes to a signed document will invalidate the signature, malleable signatures allow the controlled redaction of certain parts of the signed data without the signature losing its validity. In an eHealth use case based on malleable signatures, developed by the PRISMACLOUD project9 and illustrated in Fig. 2.2, patients are given control and are authorised to redact certain information in their Electronic Health Records (EHRs) that are provided to them after they were discharged from hospital. First, the patient’s hospital doctor signs the EHR with a redactable signature, which is then transferred to the patient’s account on a hospital Cloud platform. The patient can then “black out” certain predefined redactable fields of information from this signed EHR copy. The signature of the doctor remains valid and the authenticity of the medical document is maintained. For instance, if the patient wants to get a second opinion on a diagnosis stored in their EHR, the diagnosis fields could be redacted from the EHR by the patient. The redacted EHR including only the raw medical values can then be made available on the Cloud portal to a second doctor of the patient’s choice. This specialist can validate the signature by the patient’s hospital doctor, and thus, verify the authenticity of the patient’s medical data. Thus, user-controlled data minimisation can be offered while retaining the authenticity of the selectively disclosed medical data (for protecting patient safety). To make the patients accountable for any redactions that they make, redactions can be implemented to require a digital signature by the patient.

9 https://prismacloud.eu/.


Data minimisation at data level: Data minimisation at the data level comprises the anonymisation and pseudonymisation of data, which are, for instance, important for protecting the privacy of individuals whose data are processed for research and statistical applications. Typical anonymisation techniques use data generalisation or suppression (e.g. k-anonymisation or variants of it) or data perturbation by adding statistical noise to aggregated data (e.g. Differential Privacy (DP)). However, it is important to note that these so-called anonymisation techniques do not necessarily lead to anonymous data: even though they aim to significantly lower the likelihood of identification of individuals, some re-identification risks usually remain that cannot be decreased to zero.

k-Anonymisation is a data generalisation technique that ensures that each combination of indirect identifiers in a dataset is shared by at least k (≥ 2) records, meaning that for each data record in the dataset, there are at least k-1 other records with the same value combination of indirect identifiers, i.e. an anonymity set of size at least k can be guaranteed. The higher we set k, the harder it will be to re-identify an individual based on the indirect identifiers. k-Anonymity as an anonymity measure was initially proposed by Samarati and Sweeney [27]. However, the main limitation of the k-anonymity approach is that it still allows inference attacks. For example, if all records with the same combination of indirect identifiers share the same sensitive values for other attributes stored in the record, these values can easily be inferred. For instance, if all of the (at least k) forty-year-old male patients in a k-anonymised database are diagnosed with cancer, someone who has access to the database and knows that the records of a 40-year-old male patient are stored in it could directly derive that this person has cancer. For addressing such inference attacks, extensions including l-diversity [29] and t-closeness [30] have been proposed.
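
As a minimal illustration of the k-anonymity property just described, the following snippet computes the size of the smallest quasi-identifier group in a toy dataset (the records and attributes are made up for the example):

```python
from collections import Counter

# Toy records: (age range, gender, ZIP prefix) are the indirect identifiers,
# the last column is the sensitive attribute.
records = [
    ("40-49", "M", "652*", "cancer"),
    ("40-49", "M", "652*", "cancer"),
    ("40-49", "M", "652*", "flu"),
    ("30-39", "F", "651*", "diabetes"),
    ("30-39", "F", "651*", "flu"),
]

def achieved_k(rows, quasi_identifier_indices=(0, 1, 2)):
    """Return the size of the smallest group sharing the same quasi-identifiers."""
    groups = Counter(tuple(row[i] for i in quasi_identifier_indices) for row in rows)
    return min(groups.values())

print(f"The dataset is {achieved_k(records)}-anonymous")   # here: 2-anonymous
# Note: k-anonymity alone does not prevent inference; if a whole group shares
# the same sensitive value, that value can still be inferred for its members.
```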
Differential privacy, initially formalised by Cynthia Dwork [31], is a mathematically rigorous definition of privacy for the calculation of statistics on a dataset. DP places a formal bound on the leakage of information about individual data points within a dataset from these statistics. Informally, for each individual who contributes their data to a differentially private data analysis, DP assures that the output of such an analysis will be approximately the same, regardless of whether their data is included in the analysis or not. Differentially private mechanisms are implemented by perturbing data (adding “noise”) in a controlled manner that allows quantifying privacy through a privacy-loss parameter epsilon (ε). Lower values of the privacy-loss parameter provide more privacy but affect the accuracy of the results more negatively. Therefore, there is a trade-off between privacy and accuracy in differentially private data analyses, and the aim is to provide data subjects with deniability while still providing sufficient data utility for data analysts. There are two main types of differential privacy, offering slightly different privacy guarantees: Global DP (e.g. used by the US Census), where noise is added by a central aggregator to the output of a query on a database, offers individuals plausible deniability of their participation or inclusion within a data source. Local DP (e.g. used by Apple) adds noise to the individual (input) data points and offers deniability of record content. Differential privacy can also be applied to machine learning models (which may contain sensitive data from individuals used to train the models) to protect them against membership inference attacks (as, e.g. presented by Shokri et al. [36]), which could reveal whether a person contributed their data for model training. If the model has, for instance, been trained to detect diseases such as cancer, model membership (i.e. the fact that someone has contributed their data to model training) can constitute very sensitive personal information.

Data pseudonymisation is another data minimisation technique, which is used, for instance, if linkability to the data subjects is still needed, e.g. if misbehaving pseudonymous users should be made accountable or if pseudonymous medical lab test results need to be linked to the respective patients. Simple data pseudonymisation techniques include a counter (where identifiers are simply substituted by a number chosen by a monotonic counter), a random number generator, a cryptographic hash function, a message authentication code, or encryption [32]; more advanced pseudonymisation techniques can be implemented with zero-knowledge proofs or secret sharing (see also below) [33].
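
The sketch below illustrates two of the data-level techniques discussed above under simplified assumptions: a differentially private count obtained with the Laplace mechanism, where the privacy-loss parameter ε controls the noise scale, and keyed-hash (HMAC) pseudonymisation of identifiers. The key and identifier values are invented for the example; this is not a production-ready implementation.

```python
import hashlib, hmac, random

def dp_count(true_count: int, epsilon: float) -> float:
    """Global DP: add Laplace noise with scale 1/epsilon (the sensitivity of a count is 1)."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)  # Laplace(0, 1/epsilon)
    return true_count + noise

# Smaller epsilon -> more noise -> more privacy, less accuracy.
print(dp_count(1000, epsilon=0.1))   # very noisy
print(dp_count(1000, epsilon=5.0))   # close to 1000

PSEUDONYMISATION_KEY = b"keep-this-secret"   # held by the controller, stored separately from the data

def pseudonymise(identifier: str) -> str:
    """Keyed hash (HMAC) pseudonym: consistent for the same input, not reversible without the key."""
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymise("patient-19750101-1234"))
```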

2.3.2 PETs to “Hide”

This strategy enforces the principle of data confidentiality by disallowing access to personal data in plain view, or by limiting access to authorised users only. Cryptographic protocols and schemes can be used to hide data that are transmitted, communicated, and/or stored, e.g. locally or in the cloud, from all parties that do not have access to the needed decryption keys. One prominent example of a cryptographic protocol is the Transport Layer Security (TLS) protocol, which operates on the transport layer and provides encrypted and thus secure communication over a computer network. It is especially used in Hypertext Transfer Protocol Secure (HTTPS) for securing web browsing between a client and a server, where the client and server infrastructure take care of the key management (without directly involving the user, and thus “transparently” to them) by means of a handshake process for establishing a symmetric session key. End-to-End Encryption (E2EE) operates on the application layer and is typically used in messaging services, such as WhatsApp or Signal, or in email applications. It encrypts data on the client’s device before transmission and allows only recipients that possess the respective decryption keys to access the original data in clear text.

Homomorphic Encryption (HE) and Functional Encryption (FE) are cryptographic schemes that allow data analysis on encrypted data; they have recently received more attention and have been increasingly applied for implementing privacy-preserving data analytics tools and platforms (see, for example, [19]). Through encryption, HE can conceal both the input data, on which the operations/data analysis will be performed, and the result of the operations/data analysis, which will also be encrypted. In contrast, FE only protects the input data, while the results of the operations/data analysis are made available in plain text (i.e. in unencrypted form). In the context of privacy-preserving data analytics platforms, HE should thus be used if the analysis results should be hidden from the data analytics platform. For example, in [20], Alaqra et al. describe a use case in which medical data (electrocardiogram signals) of a patient are first homomorphically encrypted on a Medical Health Platform (MHP) and then sent to a non-trusted cloud-based data analysis platform, which obtains the encrypted data analysis result and sends it back to the MHP. FE, on the other hand, is well suited for the analysis of encrypted and aggregated data from several users on a data analysis platform that should also be the recipient of the analysis results (in unencrypted form). Both HE and FE have, however, practical limitations in terms of performance and the type and amount of supported operations. Some data minimisation PETs can also be classified as PETs that hide information, e.g. mix networks and onion routing anonymise communication (or, more precisely, who is communicating with whom) by hiding traffic patterns.
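
To illustrate what computing on encrypted data means in the HE case, here is a toy Paillier-style additively homomorphic encryption sketch with deliberately small, insecure parameters: an analysis platform could add encrypted values without ever seeing them, and only the key holder can decrypt the aggregated result. This is a didactic sketch only; real deployments would use a vetted HE library and schemes supporting the richer operations mentioned above.

```python
import secrets
from math import gcd

# Toy Paillier key generation (insecure parameters, illustration only).
p, q = 104729, 104723
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                           # valid because g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

def add_encrypted(c1: int, c2: int) -> int:
    """Addition of plaintexts corresponds to multiplication of ciphertexts."""
    return (c1 * c2) % n2

# A hypothetical cloud platform can aggregate encrypted measurements without decrypting them.
c = add_encrypted(encrypt(120), encrypt(85))
assert decrypt(c) == 205
```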

2.3.3 PETs to “Separate”

Technologies for enhancing privacy via separation process data in a distributed manner. With the distributed processing or storage of the personal data of one individual at different locations and/or by different entities, complete profiles of that individual cannot be built by any single entity. Another good example of a data separation technique for enhancing privacy is secret sharing, a method first proposed by Shamir in 1979 [35] for distributing a secret S among a group of N participants, each of whom is allocated a share of S. The secret can be reconstructed only when at least k (≤ N) shares are combined, while individual shares (or any h < k shares) cannot be used to reconstruct S or to derive any information from it. The Archistar cloud storage system [34], for instance, utilises secret sharing for redundantly storing data over multiple independent storage clouds in a secure and privacy-friendly manner. More concretely, Archistar splits secret data into N shares, which are distributed among a group of N cloud servers. The secret data can only be reconstructed when a sufficient number k ≤ N of shares are combined. Data privacy protection is based on a non-collusion assumption between the involved storage providers, meaning that at least N − k + 1 storage providers are assumed not to collude to reconstruct the data without the permission of the data owner. Secret sharing can also be used as the basis for Secure Multi-Party Computation (SMPC/MPC), which provides a method for entities to jointly compute a function over their inputs while keeping those inputs secret [37]. The technology has gained much attention in recent years as a means to implement privacy-preserving data analytics, which for instance allows the sharing of sensitive medical data for statistical research analytics while protecting the privacy of patients.
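
A minimal sketch of Shamir's k-of-N secret sharing over a prime field, as described above: any k shares reconstruct the secret via Lagrange interpolation at zero, while fewer shares reveal essentially nothing about it. The parameters are illustrative.

```python
import secrets

PRIME = 2**127 - 1   # a Mersenne prime, large enough for the demo secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):                     # random polynomial of degree k-1 with f(0) = secret
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 31337
shares = make_shares(secret, k=3, n=5)        # e.g. five independent cloud providers
assert reconstruct(shares[:3]) == secret      # any three shares suffice
assert reconstruct(shares[1:4]) == secret
assert reconstruct(shares[:2]) != secret      # two shares yield an unrelated value
```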


Another example of enhancing privacy by separation is Federated Learning (FL) (also known as collaborative learning), introduced in [38]. FL is a machine learning technique that trains an algorithm in a distributed fashion on multiple decentralised servers holding local data samples, without sharing or exchanging those data samples. With FL, the input data for the model training is distributed and hidden from a central server, which eliminates some of the problems associated with centralised machine learning. While federated learning enhances privacy, locally trained models can still leak personal information; for instance, membership inference attacks may reveal whether an individual's data was used for training.
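
A minimal federated-averaging-style sketch with NumPy, under simplified assumptions (a linear model, equally sized clients, a handful of rounds, no secure aggregation): each client fits the model locally on its own data, and only the model parameters, never the raw samples, are sent to the server for averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One client: a few local gradient steps on a linear model; the data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients with private local datasets drawn from the same underlying model.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Server: initialise a global model and repeatedly average the clients' locally updated weights.
global_w = np.zeros(2)
for _ in range(5):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)     # federated averaging step

print(global_w)   # close to [2.0, -1.0] without the server ever seeing raw data
```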

2.3.4 PETs to “Aggregate”

Using data aggregation, PETs can also conceal data by restricting the released personal information to the group level. This makes it difficult for an adversary to identify a certain individual within the group. However, data aggregation through statistics or machine learning models (including FL) does not by itself sufficiently protect data, since correlation of statistics can still leak personal information. Therefore, complementary privacy-protecting measures, such as DP, are needed. It is also possible to enhance privacy by using synthetic data instead of real personal data by utilising aggregation, whereby the characteristics of the real-world data are derived, aggregated, and used to generate randomised data that follow the same statistical distribution as the real-world data. As a result, even though the records are synthetic, the statistical characteristics of the real-world data should still be present. Nevertheless, it has been pointed out that synthetic data can be subject to linkability and membership inference attacks, that it does not necessarily provide a better trade-off between privacy and utility than traditional anonymisation techniques, and that there are problems in predicting the privacy gain of synthetic data publishing [39].
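
A minimal sketch of the aggregation idea behind synthetic data: simple aggregate characteristics (here mean and covariance) are derived from the real data and used to sample randomised records with a similar statistical distribution. The columns are invented for the example; real synthetic data generators are considerably more sophisticated, and, as noted above, this by itself does not guarantee privacy.

```python
import numpy as np

rng = np.random.default_rng(42)

# 'Real' personal data: columns could be, say, age and weekly exercise hours (illustrative).
real = np.column_stack([
    rng.normal(45, 12, size=500),
    rng.normal(4, 1.5, size=500),
])

# Aggregate characteristics derived from the real data ...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ... are used to generate randomised records following the same statistical distribution.
synthetic = rng.multivariate_normal(mean, cov, size=500)

print(mean, synthetic.mean(axis=0))            # similar means
print(np.diag(cov), synthetic.var(axis=0))     # similar variances
```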

2.3.5 PETs to “Inform”

PETs to “inform” provide transparency to data subjects about how their data are processed. For such so-called Transparency-Enhancing Tools (TETs), usability is a critical characteristic, as transparency can only be achieved if data subjects are provided with usable policy information which, in line with Art. 12 GDPR, is intelligible and easy to understand. We will, therefore, discuss solutions for TETs in further detail in Chap. 5. TETs comprise ex-ante and ex-post TETs:

• Ex-ante TETs can provide transparency before personal data is disclosed/processed, enabling informed decisions/consent. Concepts for usable privacy policy presentations, which may appear as part of consent forms, include multi-layered policies, as proposed by the EDPB [45], which can be complemented with policy icons (see, for example, [44]); a minimal data-structure sketch of such a layered notice follows this list. Harkous et al. [43] present a framework (Polisis) for automated analysis and presentation of privacy policies using deep learning, which can automatically annotate previously unseen privacy policies and assign privacy icons to a privacy policy. To ease the burden of frequent policy and consent interactions on users, semi-automated policy management tools [41] and automated personalised policy assistants [42] have been proposed and developed. Another example of ex-ante TETs is provided by privacy product labels on packages of software or hardware products, which can inform customers about the level of privacy protection that a product provides before they purchase it. Usable solutions for IoT product privacy labels have, for instance, been researched by Emami-Naeini et al. [56] and Railean [55], and are also discussed more generally in [54].

• Ex-post TETs provide transparency, including explanations, about how data have been processed. Ex-post TETs for explainable machine learning are a current research subject (as discussed, e.g. in [47] for the area of explainable deep reinforcement learning) and of high relevance for achieving compliance with the transparency requirements of the GDPR. Other general ex-post TETs include privacy dashboards, such as the Data Track [48], for displaying what data has been processed by whom, for tracking data usages and flows, or for data export, as well as privacy notification tools (e.g. informing about breaches or risks). Murmann and Fischer-Hübner provide an overview and classification of ex-post TETs in [40].
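
As a small illustration of the multi-layered policy idea referenced in the first bullet, a notice can be modelled as a short first layer whose items point into a detailed second layer; the wording, keys, and structure below are invented for the example and do not follow any standardised format.

```python
# A layered notice: a short first layer that links each item to a detailed second layer.
layered_notice = {
    "layer1": [
        {"summary": "We collect your fitness data to show weekly statistics.",
         "details": "purpose_stats"},
        {"summary": "Data is shared with one analytics processor in the EU.",
         "details": "recipients"},
    ],
    "layer2": {
        "purpose_stats": "Step counts and heart rate are processed weekly ... (full text)",
        "recipients": "Processor: ExampleAnalytics AB, Sweden ... (full text)",
    },
}

def render_first_layer(notice):
    """Show the short layer first; details are only one step away."""
    for i, item in enumerate(notice["layer1"], start=1):
        print(f"{i}. {item['summary']}  [more: {item['details']}]")

def expand(notice, key):
    return notice["layer2"][key]

render_first_layer(layered_notice)
print(expand(layered_notice, "purpose_stats"))
```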

2.3.6 PETs to “Control”

Various intervenability tools have been researched or developed to enhance the control of data subjects over their data. These tools include, for instance:

• Privacy policy tools/languages allowing data subjects to negotiate privacy policies with data controllers (see, for example, the overview in [50]), and

• Privacy dashboards or platforms which allow data subjects to request (from data controllers) data deletions, corrections, or exports of their personal data for exercising their right to data portability (see, for example, [48, 49]), and/or to easily revoke or revise their consent online.

2.3.7 PETs to “Enforce”

PETs to “enforce” should guarantee that a privacy policy that complies with data protection/privacy laws and that has been agreed upon with the respective data subject is technically enforced. Access control systems, e.g. based on attribute-based access control, can provide means for defining and technically enforcing privacy access control policies. Sticky policies are machine-readable policies, usually negotiated with the data subjects, that can stick to data to define allowed usage and obligations when the data are forwarded or outsourced to other parties (see, for example, [51]).
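
A minimal sketch of the sticky-policy idea: a machine-readable policy travels together with the data record, and a receiving party technically checks the policy before using the data. The policy fields and the enforcement function are illustrative assumptions, not a standardised sticky-policy format.

```python
from datetime import date

# The policy "sticks" to the data: both travel together when forwarded or outsourced.
record = {
    "data": {"email": "jane@example.org"},
    "policy": {
        "allowed_purposes": {"service_delivery", "billing"},
        "allowed_recipients": {"controller.example", "processor.example"},
        "delete_after": date(2026, 12, 31),
        "obligations": ["notify_subject_on_forwarding"],
    },
}

def use_data(record, recipient: str, purpose: str, today: date):
    """Technically enforce the sticky policy before releasing the data."""
    p = record["policy"]
    if recipient not in p["allowed_recipients"]:
        raise PermissionError(f"{recipient} is not an allowed recipient")
    if purpose not in p["allowed_purposes"]:
        raise PermissionError(f"purpose '{purpose}' not permitted")
    if today > p["delete_after"]:
        raise PermissionError("retention period expired; data must be deleted")
    return record["data"]

print(use_data(record, "processor.example", "billing", date(2025, 1, 1)))     # allowed
# use_data(record, "ad-network.example", "marketing", date(2025, 1, 1))       # would raise
```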

2.3.8 PETs to “Demonstrate”

The privacy strategy to “demonstrate” requires the controller to demonstrate compliance and enable accountability. Tools and means for supporting this strategy include:

• Logging, monitoring, and compliance auditing tools, which monitor activities and check compliance with users’ expectations, business policies, and regulations (see, for example, [52]), as well as privacy intrusion detection systems, which allow the detection of privacy breaches as a means for making attackers accountable and for demonstrating compliance (as, e.g. surveyed in [53]),

• Privacy certification schemes for PETs to demonstrate compliance in terms of privacy functionality and assurance by independent certification bodies. The certification results can take the form of a (certified) privacy seal (see, for example, the Europrivacy seal10 or the EuroPrise seal11) that mediates and assures privacy protection levels of products to users/customers,

• Tools for consent management, which allow data controllers to prove, for instance to regulators or data subjects, that valid consent has been obtained.

10 https://www.europrivacy.org.
11 https://www.euprivacyseal.com/EPS-en/Home.

References

1. European Union Agency for Fundamental Rights (FRA) Handbook on European data protection law. (Luxembourg: Publications Office of the European Union, 2018) 2. European Data Protection Board Guidelines 05/2020 on consent under Regulation 2016/679, Version 1.1, Adopted on 4 May 2020. (2020), https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf 3. Tsormpatzoudi, P., Berendt, B. & Coudert, F. Privacy by design: from research and policy to practice–the challenge of multi-disciplinarity. Annual Privacy Forum. pp. 199–212 (2016)


4. Cavoukian, A. & Others Privacy by design: The 7 foundational principles. Information And Privacy Commissioner Of Ontario, Canada. 5 pp. 2009 (2009) 5. Schaub, F., Balebako, R., Durity, A. & Cranor, L. A design space for effective privacy notices. Eleventh Symposium On Usable Privacy And Security (SOUPS 2015). pp. 1–17 (2015) 6. Fischer-Hübner, S. Privacy-Enhancing Technologies. Encyclopedia Of Database Systems. pp. 2142–2147 (2009), https://doi.org/10.1007/978-0-387-39940-9_271 7. Hansen, M., Jensen, M. & Rost, M. Protection goals for privacy engineering. 2015 IEEE Security And Privacy Workshops. pp. 159–166 (2015) 8. Hoepman, J. Privacy design strategies. IFIP International Information Security Conference. pp. 446–459 (2014) 9. Hoepman, J. Privacy Design Strategies (The little blue book). https://www.cs.ru.nl/~jhh/ publications/pds-booklet.pdf (2022) 10. Chaum, D. Untraceable electronic mail, return addresses, and digital pseudonyms. Communications Of The ACM. 24, 84–90 (1981) 11. Piotrowska, A., Hayes, J., Elahi, T., Meiser, S. & Danezis, G. The loopix anonymity system. 26th USENIX Security Symposium (USENIX Security 17). pp. 1199–1216 (2017) 12. Chaum, D. Blind signatures for untraceable payments. Advances In Cryptology. pp. 199–203 (1983) 13. Ibrahim, S., Kamat, M., Salleh, M. & Aziz, S. Secure E-voting with blind signature. 4th National Conference Of Telecommunication Technology, 2003. NCTT 2003 Proceedings.. pp. 193–197 (2003) 14. Chaum, D. Security without identification: Transaction systems to make big brother obsolete. Communications Of The ACM. 28, 1030–1044 (1985) 15. Brands, S. Rethinking public key infrastructures and digital certificates: building in privacy. (Mit Press, 2000) 16. Camenisch, J. & Lysyanskaya, A. An efficient system for non-transferable anonymous credentials with optional anonymity revocation. International Conference On The Theory And Applications Of Cryptographic Techniques. pp. 93–118 (2001) 17. Dingledine, R., Mathewson, N. & Syverson, P. Tor: The second-generation onion router. (Naval Research Lab Washington DC, 2004) 18. Alaqra, A., Fischer-Hübner, S. & Framner, E. Enhancing Privacy Controls for Patients via a Selective Authentic Electronic Health Record Exchange Service: Qualitative Study of Perspectives by Medical Professionals and Patients. J Med Internet Res. 20, e10954 (2018, 12), https:// www.jmir.org/2018/12/e10954/ 19. Ciceri, E., Mosconi, M., Önen, M. & Ermis, O. PAPAYA: A platform for privacy preserving data analytics. ERCIM News. 118 (2019) 20. Alaqra, A., Kane, B. & Fischer-Hübner, S. Machine Learning–Based Analysis of Encrypted Medical Data in the Cloud: Qualitative Study of Expert Stakeholders’ Perspectives. JMIR Hum Factors. 8, e21810 (2021, 9), https://humanfactors.jmir.org/2021/3/e21810/ 21. European Data Protection Board Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data. (2020), https://edpb.europa.eu/system/files/2021-06/edpb_recommendations_202001vo. 2.0_supplementarymeasurestransferstools_en.pdf 22. Art. 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. (2018) 23. Nagaraja, S., Houmansadr, A., Piyawongwisal, P., Singh, V., Agarwal, P. & Borisov, N. Stegobot: construction of an unobservable communication network leveraging social behavior. CoRR. abs/1107.2031 (2011), arXiv:1107.2031 24. Diaz, C., Kosta, E., Dekeyser, H., Kohlweiss, M. & Nigusse, G. 
Privacy preserving electronic petitions. Identity In The Information Society. 1, 203–219 (2008)


25. Sabouri, A., Bcheri, S., Lerch, J., Schlehahn, E. & Tesfay, W. School Community Interaction Platform: the Soderhamn Pilot of ABC4Trust. Attribute-based Credentials For Trust. pp. 163– 195 (2015) 26. Derler, D., Pöhls, H., Samelin, K. & Slamanig, D. A general framework for redactable signatures and new constructions. ICISC 2015. pp. 3–19 (2015) 27. Samarati, P. & Sweeney, L. Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. (Technical report, SRI International, 1998) 28. ARTICLE 29 DATA PROTECTION WORKING PARTY 0829/14/EN WP216 Opinion 05/2014 on Anonymisation Techniques. Adopted on 10 April 2014. (2014) 29. Machanavajjhala, A., Kifer, D., Gehrke, J. & Venkitasubramaniam, M. l-diversity: Privacy beyond k-anonymity. ACM Transactions On Knowledge Discovery From Data (TKDD). 1, 3-es (2007) 30. Li, N., Li, T. & Venkatasubramanian, S. t-closeness: Privacy beyond k-anonymity and l-diversity. 2007 IEEE 23rd International Conference On Data Engineering. pp. 106–115 (2006) 31. Dwork, C. Differential privacy: A survey of results. International Conference On Theory And Applications Of Models Of Computation. pp. 1–19 (2008) 32. Jensen, M., Lauradoux, C. & Limniotis, K. Pseudonymisation techniques and best practices. Recommendations on shaping technology according to data protection and privacy provisions. European Union Agency For Cybersecurity (ENISA). (2019) 33. Lauradoux, C., Limniotis, K., Hansen, M., Jensen, M. & Eftasthopoulos, P. Data pseudonymisation: advanced techniques and use cases. (Technical Report. European Union Agency for Cybersecurity (ENISA). https.., 2021) 34. Loruenser, T., Happe, A. & Slamanig, D. ARCHISTAR: towards secure and robust cloud based data sharing. 2015 IEEE 7th International Conference On Cloud Computing Technology And Science (CloudCom). pp. 371–378 35. Shamir, A. How to share a secret. Communications Of The ACM. 22, 612–613 (1979) 36. Shokri, R., Stronati, M., Song, C. & Shmatikov, V. Membership inference attacks against machine learning models. 2017 IEEE Symposium On Security And Privacy (SP). pp. 3–18 (2017) 37. Canetti, R., Feige, U., Goldreich, O. & Naor, M. Adaptively secure multi-party computation. Proceedings Of The Twenty-eighth Annual ACM Symposium On Theory Of Computing. pp. 639– 648 (1996) 38. McMahan, H., Moore, E., Ramage, D. & Arcas, B. Federated Learning of Deep Networks using Model Averaging. CoRR. abs/1602.05629 (2016), arXiv:1602.05629 39. Stadler, T., Oprisanu, B. & Troncoso, C. Synthetic data–anonymisation groundhog day. ArXiv Preprint arXiv:2011.07018. (2021) 40. Murmann, P. & Fischer-Hübner, S. Tools for achieving usable ex post transparency: a survey. IEEE Access. 5 pp. 22965–22991 (2017) 41. Angulo, J., Fischer-Hübner, S., Wästlund, E. & Pulls, T. Towards usable privacy policy display and management. Information Management & Computer Security. (2012) 42. Das, A., Degeling, M., Smullen, D. & Sadeh, N. Personalized privacy assistants for the internet of things: Providing users with notice and choice. IEEE Pervasive Computing. 17, 35–46 (2018) 43. Harkous, H., Fawaz, K., Lebret, R., Schaub, F., Shin, K. & Aberer, K. Polisis: Automated analysis and presentation of privacy policies using deep learning. 27th USENIX Security Symposium (USENIX Security 18). pp. 531–548 (2018) 44. Holtz, L., Zwingelberg, H. & Hansen, M. Privacy policy icons. Privacy And Identity Management For Life. pp. 279–285 (2011) 45. 
European Data Protection Board Guidelines on Transparency under Regulation 2016/679. WP260 Rev. 1 (2018)


46. Korhonen, T. & Garcia, J. Exploring Ranked Local Selectors for Stable Explanations of ML Models. 2021 Second International Conference On Intelligent Data Science Technologies And Applications (IDSTA). pp. 122–129 (2021) 47. Vouros, G. Explainable Deep Reinforcement Learning: State of the Art and Challenges. ACM Comput. Surv.. 55 (2022, 12), https://doi.org/10.1145/3527448 48. Fischer-Hübner, S., Angulo, J., Karegar, F. & Pulls, T. Transparency, privacy and trust– Technology for tracking and controlling my data disclosures: Does this work?. IFIP International Conference On Trust Management. pp. 3–14 (2016) 49. Karegar, F., Pulls, T. & Fischer-Hübner, S. Visualizing exports of personal data by exercising the right of data portability in the data track-are people ready for this?. IFIP International Summer School On Privacy And Identity Management. pp. 164–181 (2016) 50. Leicht, J. & Heisel, M. A Survey on Privacy Policy Languages: Expressiveness Concerning Data Protection Regulations. 2019 12th CMI Conference On Cybersecurity And Privacy (CMI). pp. 1–6 (2019) 51. Pearson, S. & Casassa-Mont, M. Sticky Policies: An Approach for Managing Privacy across Multiple Parties. Computer. 44, 60–68 (2011) 52. Pearson, S., Tountopoulos, V., Catteddu, D., Südholt, M., Molva, R., Reich, C., Fischer-Hübner, S., Millard, C., Lotz, V., Jaatun, M. & Others Accountability for cloud and other future internet services. 4th IEEE International Conference On Cloud Computing Technology And Science Proceedings. pp. 629–632 (2012) 53. Reuben, J., Martucci, L. & Fischer-Hübner, S. Automated log audits for privacy compliance validation: a literature survey. IFIP International Summer School On Privacy And Identity Management. pp. 312–326 (2015) 54. Johansen, J., Pedersen, T., Fischer-Hübner, S., Johansen, C., Schneider, G., Roosendaal, A., Zwingelberg, H., Sivesind, A. & Noll, J. A multidisciplinary definition of privacy labels. Information & Computer Security. (2022) 55. Railean, A. Improving IoT device transparency by means of privacy labels. (2022) 56. Emami-Naeini, P., Agarwal, Y., Cranor, L. & Hibshi, H. Ask the experts: What should be on an IoT privacy and security label?. 2020 IEEE Symposium On Security And Privacy (SP). pp. 447–464 (2020) 57. Mantelero, A. The future of data protection: Gold standard vs. global standard. Computer Law & Security Review. 40 pp. 105500 (2021)

3 Overview of Usable Privacy Research: Major Themes and Research Directions

3.1 Introduction

Since the late nineties, usable privacy and security has emerged and developed considerably as an important research domain. The literature on privacy abounds and is dispersed across multiple communities and disciplines, including Human-Computer Interaction (HCI), computer science and networking, requirements engineering, management of information systems, social sciences and humanities, marketing and business, and jurisprudence, to name a few. Even within HCI, the privacy literature is spread out, since HCI has been expanding from its origin to incorporate multiple disciplines, such as computer science, cognitive science, and human-factors engineering. In this chapter, the intent is not to provide an exhaustive list of references to HCI-related privacy literature in general. Rather, with a focus on the interaction and usability pillars of HCI and drawing on the definition of usable privacy in Sect. 1.4.10, we explored the literature on usable privacy research, i.e. work that investigates how and why users interact with systems that have privacy implications and how these interactions can be improved, in order to classify major themes and trends in usable privacy research. In the subsequent sections, we provide an overview of the usable privacy literature under the following major themes of our classification: usable privacy for the Internet of Things (IoT), inclusive privacy, usable privacy for developers, usability and users’ perceptions of Privacy-Enhancing Technologies (PETs), and usable privacy notice and decisions. Given the volume of published research results in usable privacy, we have selected these increasingly relevant fields and explore them in more detail below. In Sect. 3.2.2, we provide references to work on other related themes that we decided to exclude, for reasons explained in that section.



3.2 Approach

3.2.1 Method

To survey the field of usable privacy research for this chapter, we first narrowed our search to publications in reputable, high-quality journals and venues in the context of usable privacy and security research, from their introduction until 2022, in order to categorise the literature on usable privacy research. The venues included the Symposium on Usable Privacy and Security (SOUPS), the Workshop on Usable Security (USEC) and its European version (EuroUSEC), the ACM Conference on Human Factors in Computing Systems (CHI), the ACM Conference on Computer and Communications Security (ACM CCS), the IEEE Security and Privacy (S&P) and its European version (EuroS&P) conferences, the USENIX Security Symposium, and the Privacy-Enhancing Technologies Symposium (PETS). We also performed an in-depth literature review using ACM Transactions on Privacy and Security (TOPS), ACM Transactions on Computer-Human Interaction (TOCHI), Proceedings of the ACM on Human-Computer Interaction (PACMHCI), the International Journal of Human-Computer Studies (Int. J. Hum. Comput. Stud.), and the journal Behaviour and Information Technology (BIT). In the first round of reviewing all publications in the mentioned sources to find usable privacy papers, 863 unique papers were identified. Further examination of those publications by reading their abstracts resulted in excluding 229 papers, leaving 634 papers relevant to usable privacy topics, which we then categorised. Figure 3.1 shows the word cloud created from the titles and abstracts of the 634 remaining papers. In an iterative process, we reviewed and labelled the remaining papers. Our goal is not to provide an exhaustive reference list for each category. Instead, we want to illustrate each line of research using selected references from the field. For each line of research, we also traced references and conducted snowballing to include prominent and important articles published in venues and journals not originally sampled, as well as more recent articles published after 2022.

3.2.2 Delimitation and Further Work

In order to discuss major themes in recent usable privacy research in this chapter, we excluded work in which authors focus on methods to measure privacy constructs including privacy concerns (e.g. [2, 3]), work on privacy personas and on the shortcomings of existing segmentation indices (e.g. [1, 5]), as well as work explaining the reasons for the privacy paradox (e.g. [4]). Such work facilitates research on usable privacy but is not directly considered usable privacy research in this chapter.

Fig. 3.1 Word cloud based on titles and abstracts of papers we used to extract trends and themes in usable privacy research

We acknowledge that privacy research is closely intertwined with security research, especially if, based on our definition, a privacy system is considered to be any system that has privacy implications. As illustrated in the previous chapters, privacy and General Data Protection Regulation (GDPR) data protection goals also include the “classical” security protection goals of confidentiality, integrity, and availability (CIA) (see also Fig. 1.1). However, the scope of this book focuses on the usability of privacy and data protection technologies that go beyond addressing the classical security goals. Therefore, the usability of passwords and other authentication methods (e.g. [18–22]), human aspects of secure emailing (e.g. [15–17, 23]), firewalls (e.g. [6, 7]), secure communication protocols (e.g. [8–10]), as well as anti-phishing efforts (e.g. [11–14]) are not further discussed. The articles referenced above as well as the book Usable Security by Garfinkel and Lipford [24] are recommended for those interested in usable security.

The literature review that we conducted also revealed further major themes related to usable privacy in addition to those reported and discussed in this chapter. These research themes, social network and mobile privacy, are, despite their importance and relevance, not included in this chapter since they have already been covered in Garfinkel and Lipford's book [24]; for that reason, we prioritised and focused in this short synthesis book on research themes and domains, including usable privacy for the IoT, that have emerged more recently. Nonetheless, below we provide a brief classification with references to important work in these areas: Major topics on social network privacy include the exploration of privacy concerns, attitudes, experiences, and behaviours of social network users (e.g. [307–316]), account sharing and interdependent privacy (e.g. [317–321]), and privacy-related
settings and control of information (e.g. [322–326]). The most important mobile privacy topics include studies on perceptions of and decisions related to location privacy (e.g. [342–346]), exploration of users' decisions regarding which apps to use and which app permissions to grant, together with efforts towards helping users make better decisions (e.g. [327–336, 341]), and, more recently, attitudes, concerns, and experiences concerning contact tracing apps [337–340]. Readers looking for more information on social network privacy and mobile privacy may consult the Usable Security book [24] and the papers cited here. Moreover, usable privacy aspects, including user perspectives related to location privacy and the sharing of data via social networks and with peers, are also discussed below in Sect. 3.3.2 in the context of IoT wearables. In addition, we also want to refer the reader to the recently published book “Modern Socio-Technical Perspectives on Privacy” by Knijnenburg et al. [352], which includes a chapter (Chap. 7) on “Social Media and Privacy” by Page et al. [353]. Chapter 7 of that book specifically also discusses usable privacy aspects, including the need to design social media with individual differences in mind.

3.3 Usable Privacy in the Context of IoT

We live in the Internet of Things (IoT) era, in which devices (things) connect to each other and to the Internet. There are many different types of these things, ranging from consumer products like mobile phones and wearables to industrial sensors and actuators. IoT services and solutions collect and analyse various types of sensor data about their users, through both sensors on personal devices and sensors built into users' surrounding environments. Sensor data can serve meaningful purposes in many cases, such as assisting users or ensuring public safety. However, the presence of sensors and the data they collect can raise privacy issues as well. In many cases, users are not even aware that sensors exist, so they cannot avoid being exposed to them. Lipford et al. provided a detailed discussion of the privacy challenges raised by IoT devices [354], which also surfaced in our literature review on IoT privacy research: (a) lack of awareness of IoT device privacy practices, (b) unknown implications of sensitive information due to the accumulation of large amounts of data, (c) the complexity of privacy needs and access controls due to multiple users in different environments, (d) limited controls for users (e.g. bystanders) to manage their privacy, and (e) inadequate or ineffective security mechanisms to protect the devices and the data collected. Privacy concerns about IoT technologies keep increasing, which is why researchers are exploring user privacy concerns, attitudes, perceptions, and experiences regarding IoT technologies. In this context, for instance, smart speakers, smart homes, and wearables have been the focus of many studies (see Sects. 3.3.1 and 3.3.2), while, in comparison, autonomous vehicles and smart grids have been the focus of only a few (e.g. [95–98]). Further, researchers are investigating how to design privacy-friendly IoT
systems and services and how to help individuals make better privacy choices in the context of IoT, which we report on in the following sections.

3.3.1 Smart Home Devices

Generally, smart homes provide residents with useful services and benefits via smart devices that can connect to the internet or to each other. Users can further customise their homes by creating rules or routines for their smart devices. A user might, for example, have their thermostat turn on automatically when they arrive home or have their smart lights turn on automatically when the sun sets. There are end-user programming tools such as If-This-Then-That (IFTTT)1 allowing users to create rules (called applets in IFTTT) that dictate interactions between devices and services. These features allow users to control smart devices in a fine-grained, automated way. Cobb and colleagues [63] assessed the risks associated with the real-life use of IFTTT. According to Cobb et al. [63], significantly fewer applets pose a threat to users' secrecy or integrity than previously believed. Participants were generally unconcerned about potential harms, even when these were explained to them [63]. Nevertheless, Cobb et al. discovered new types of potential harm not previously considered in automated analyses. For example, many applets used with smart home devices track incidental users, including family, friends, and neighbours, who may interact with others' devices without realising it [63].

Broadly speaking, we can define two kinds of stakeholders in a smart home in terms of ownership of smart devices, direct usage, and control: primary users and incidental users, who are bystanders. Those who do not own or directly use smart devices but may potentially be affected by and involved in the use of smart home devices are bystanders, such as other family members who did not purchase the devices, guests, tenants, and passersby [26]. Smart home bystanders can thus also be subject to data collection. They may or may not be aware of the installed smart devices or of the functions of these devices. It is also the primary user who sets up and configures smart devices, as privacy control mechanisms are primarily designed for device administrators, not bystanders. This means that primary users can add new smart functionality to their homes, while bystanders may use existing functionality either knowingly or without realising it. Various complex privacy impacts can arise from asymmetric knowledge and experience, as well as from power dynamics between different groups of users. Due to these unique features of smart home environments and their capability to collect data at unprecedented levels from residents and bystanders, privacy concerns are being raised, including concerns about consent practices for the collection and use of personal data, the protection of personal data by companies and third parties, and the misuse of smart technology by users against other family members (e.g. controlling gadgets to perpetrate domestic abuse [40]).

1 https://ifttt.com/.

Consequently, usable privacy
researchers have extensively studied privacy concerns and experiences with smart homes from both angles, i.e. considering users and bystanders.

Studies related to privacy concerns and attitudes of primary users: Data collection, sharing, and analysis, as well as hacking of smart homes, are concerns of smart home users [27, 28]. However, researchers reported that smart homeowners in general are not very concerned about potential threats [28]. Users are not particularly concerned with the content of the data, but rather with how it is processed [29]. Smart homeowners desire more transparency and control regarding how their data is used, according to Tabassum et al. [30]. Various factors can influence people's perceptions of privacy in smart homes, including the type of device, the context, the type of data collection, the purpose of data collection, and the trustworthiness of devices or service providers [31–34]. For example, users perceive data collection in public spaces as less important, and they also perceive environmental data as less critical than personal data [32]. Users have expressed concerns about their data being shared with third parties by smart speakers [35], although these concerns might be mitigated if they trusted the speakers' manufacturers [36]. According to Abdi et al. [64], some smart speaker users do not utilise the full capabilities of their devices due to privacy concerns. Data collection, data usage, and data sharing with third parties are all common concerns for people when they are not sure how smart TVs handle their data [37]. A study conducted in 2022 by Jin et al. [38] aimed to understand people's smart home privacy-protecting behaviours (SH-PPBs) in order to help them manage their privacy better. Jin et al. [38] describe 33 unique types of smart home privacy-protecting behaviours and report that users heavily rely on ad hoc methods at the physical layer (e.g. physical blocking, manual powering off). Four key factors are reported in [38] to be considered in the design of SH-PPB tools, as users prefer tools that are (1) simple, (2) proactive, and (3) preventative, and that (4) offer more control.

Studies related to privacy concerns and attitudes of bystanders: Researchers found that non-owners, similar to owners of smart TVs, were unsure of how smart TVs handled personal data, such as what data was collected and how that data was used, re-purposed, and shared with third parties [37]. Some studies have demonstrated the appeal of visitor modes on smart devices for users [42, 43]. Despite privacy concerns and discomfort over surveillance expressed by the domestic workers (nannies) interviewed by Bernd et al. [44], they concurred that power dynamics led them to accept their employers' decisions to install cameras. Most nannies, however, believed that their employers are disrespectful if they fail to inform them about the existence, location, and purpose of cameras [44]. In a subsequent study, Bernd et al. [45] concluded that the purposes and manner of use of cameras are particularly influential on nannies' attitudes towards cameras, because those factors both affect and reflect their relationships with their employers. Micromanagement and excessive monitoring by employers, for instance, signal a lack of trust and disrespect [45]. Bystander privacy can be breached by smart home devices that collect data from the environment. For example, bystanders' data may be collected by smart speakers, posing privacy risks [36]. Therefore, bystanders should be provided with strong privacy assurances by smart devices that clearly and unambiguously convey sensor states to them, according to Ahmad et al. [46].


Different emerging technologies, like wearable cameras and augmented reality [42, 47–49], have also been studied with respect to bystander privacy. According to these studies, bystanders are more likely to share information if they have control over it, and their level of privacy concern depends on the context, e.g. whether the technology is used in a house (private space) or at a metro station (public space). Due to a lack of awareness, knowledge, and coping strategies, visitors to unfamiliar smart environments cannot protect their privacy [50]. By following the five steps proposed by Marky et al. [50], visitors can exert control over their privacy: (1) becoming aware, (2) becoming knowledgeable, (3) evaluating data sensitivity, (4) making decisions, and (5) communicating those decisions.

Users versus bystanders: Bystanders and users have been found to differ in their perceptions from two perspectives, i.e. device control and privacy expectations. As far as privacy expectations and mitigation strategies are concerned, bystanders prioritise their relationship with users and potential social confrontations over their own privacy [26]. As far as smart home device controls are concerned, primary users have greater control over smart devices compared to bystanders [51]. It is not uncommon for primary users to restrict others' access to and control of certain devices [28]. Nonetheless, they sometimes give trusted family and friends remote access to their devices [52]. Users generally prefer straightforward privacy mechanisms, while bystanders generally prefer a channel through which they can negotiate their privacy needs with users [26, 27]. Researchers have also explored the prevalence and impact of relationships between smart home users who introduce new functionality (pilot users) and those who do not and just use the functionality (passenger users) [58]. Passenger users have difficulty incorporating smart homes into their daily lives, since the smart homes reflect their pilot users' values. Passenger users are often less comfortable using technology and rely on the pilot users in their homes for device management and information about devices, which limits their ability to learn about new features [58]. In the context of rental houses, Mare et al. reported that Airbnb guests and hosts have different expectations regarding data access, even though they have similar views on data collection. As an example, 90% of the guests preferred not to share their browsing history; nevertheless, 20% of hosts wanted access to that information [53].

Despite the differences in their privacy expectations and control, it is challenging for both users and bystanders to learn about the data practices of smart home IoT devices. Some insights can be gathered from IoT privacy policies for smart homes, but these policies, like those in other contexts, are not necessarily practical for users [54–56]. Most data practices are opaque to the general public. The provision of privacy notices to smart home users and bystanders is one way to combat opaque data practices in smart homes and increase their awareness. However, how to deliver these notices effectively remains an open question. In a recent study, Thakkar et al. [57] surveyed 136 smart home users and 123 smart home bystanders to understand their preferences for receiving privacy notices in smart homes, their perceptions of four privacy notice mechanisms (e.g. a data dashboard), as well as their choice of mechanisms in different scenarios (e.g. when friends visit). The results demonstrated the advantages and disadvantages of each privacy awareness mechanism as well as the
similarities and mismatches in users' perceptions of them. For example, the data dashboard appeared to be able to help reduce bystanders' dependence on users [57]. Thakkar et al. also found some unique benefits of each mechanism; ambient light, for instance, could provide unobtrusive privacy awareness. Future privacy awareness mechanisms must consider four key design dimensions, as described in [57]:

• Easy access: Both users and bystanders should have easy access to privacy awareness mechanisms. Data practices are not straightforward to learn for bystanders, especially when they do not have prior knowledge of smart home devices. One concrete idea is to have a bystander mode, according to Thakkar et al. [57]. In an IoT app with a bystander mode, for instance, bystanders should be able to view the data processing practices related to them, not to users.

• Unobtrusive modality: Researchers and practitioners can explore ways to raise privacy awareness unobtrusively. Unobtrusive notifications such as ambient light and privacy speakers may be preferred in situations such as when working from home or when having visitors.

• Privacy awareness at the smart home level: Privacy-related notifications should be delivered at the smart home level (i.e. a centralised place, such as a dashboard informing about the data practices of all surrounding smart home devices), rather than at the individual device level. In smart homes, however, most privacy awareness mechanisms concern only individual devices, unless the devices are all made by the same company.

• Enabling privacy controls along with raising awareness: Thakkar et al. reported that both users and bystanders indicated that they needed some privacy controls once they were more aware of their privacy situations [57]. Participants, especially bystanders, preferred to have privacy notices integrated with privacy choices, since they could then take action if they had privacy concerns.

Smart Home Personal Assistants (SPAs): With SPAs, users can perform a variety of tasks using voice commands while concentrating on other things. In addition to cloud-based processing and interactions with other smart devices, SPAs typically include (i) a voice-based intelligent personal assistant, like Amazon's Alexa and Apple's Siri, and (ii) a smart speaker, like Amazon's Echo and Google's Home. As soon as a user speaks into the smart speaker, the SPA provider's cloud processes the request to understand the user's speech. Requests are delegated by the SPA provider to Skills (Amazon's terminology) or Actions (Google's terminology).2
2 Other SPA platforms may refer to them differently.
Among the functions that users can perform with Skills are playing music, checking weather updates, shopping, and controlling smart home devices. Skills fall into two main categories: built-in Skills, provided by the SPA provider (e.g. weather updates, setting alarms), or third-party Skills, supplied by third-party developers
(e.g. Spotify and Uber) [64]. SPA providers generate spoken responses from Skill outputs, which are then sent to the smart speaker that plays the response.

Researchers have provided an analysis and classification of the security and privacy concerns associated with SPAs, showing how a wide range of malicious actors can abuse them to harm end-users' security and privacy [25]. It has been noted by Edu et al. [25] that the interaction between users and SPA devices is the weakest link. They have also identified a wide range of attacks that are potentially damaging to users. Their survey found that most technical countermeasures did not explicitly consider usability. Thus, efforts should be made to assess how usable the already proposed countermeasures are and to take usability into account from the beginning for new SPA security and privacy mechanisms. Perceptions of the SPA ecosystem were examined by Abdi et al. [64] through semi-structured interviews regarding four main use cases with unique architectural elements and stakeholders (built-in Skills, third-party Skills, managing other smart home devices, and shopping). SPA users misperceive where their data is stored, processed, and shared as a result of incomplete mental models, according to Abdi and co-workers [64]. Users rarely understand the SPA ecosystem beyond their household and SPA vendor, even when using third-party Skills or managing smart home devices. The result is incomplete threat models (limited forms of threats and sources of attack) and non-technical coping strategies [64]. Additionally, Abdi et al. report that users have security and privacy concerns regarding SPAs and that, while SPAs are viewed as intelligent and capable of learning, users would not like SPAs to learn everything about them [64].

Huang et al. [65] investigated users' concerns about their shared smart speakers and about external entities, and explored the differences in users' corresponding coping strategies. In Huang et al.'s study, participants were primarily concerned with their housemates gaining unauthorised access to personal information and misusing the device [65]. The users adopted all-or-nothing strategies regardless of whether they dealt with external entities or housemates, which indicates a lack of effective risk management [65]. Huang et al. recommend that future smart speakers should personalise the sharing experience, improve voice recognition technology so that false positives are minimised, and communicate optimal risk management methods to end users so that they can cope better with perceived risks [65]. Customisable privacy settings for smart speakers are one way in which vendors have tried to mitigate users' privacy and security concerns, e.g. regarding “always-on” smart speakers. Cho et al. [66] investigated whether customising privacy preferences affects trust in smart speakers and the user experience. The ability to customise privacy settings (by having the option to delete voice history for privacy reasons at the cost of better-personalised services) boosts trust and usability for regular users, while it adversely affects power users [66]. When privacy settings can only be modified without customisation of content (i.e. speech speed, content length, and information source), users with higher privacy concerns are less trusting [66]. While privacy customisation may cause a negative reaction among power users, allowing privacy-concerned individuals to simultaneously alter content can alleviate any negative impact on trust [66].


There is no clarity on privacy rights for SPA and smart home users, and culturally and socially acceptable privacy practices and norms are continuously evolving. Moreover, in the SPA ecosystem, information flows through several entities, including SPA providers and third-party Skill providers, and the extent to which different users find these information flows acceptable is not clear. Abdi and Zhan studied privacy norms in SPAs using Contextual Integrity [67] and examined the influence of the Contextual Integrity parameters (five primary parameters are considered in Contextual Integrity: sender, recipient, information type (topic, attributes), information subject, and transmission principles) [71, 72] and of personal factors on privacy norms. Additionally, they identified similarities between the studied privacy norms in terms of the Contextual Integrity parameters, which could be utilised to establish appropriate privacy defaults in SPAs [67]. To elicit privacy norms in the IoT ecosystem, however, more research should be conducted that takes into consideration a broader and more inclusive set of scenarios based on the Contextual Integrity parameters.

3.3.2 Wearables

In recent years, the use of wearable sensing devices to monitor health and fitness has become ubiquitous. By sensing personal data, such as step count and sleep quality, or by monitoring medical conditions (e.g. diabetes), these devices can track activities and help people improve their health. Furthermore, users can share the wide range of health and fitness information gathered by such devices with different people and organisations for a variety of purposes, such as meeting fitness goals, through various communication channels, such as popular messaging apps (like WhatsApp), popular social networks (like Facebook and Twitter), or external fitness applications (like Strava). Data collected by wearable devices may get exposed in an unwanted manner, posing new privacy and security challenges. For example, recent studies [74, 79] showed that endpoint privacy zones, which conceal portions of exercise routes that fall within a certain radius of protected locations and are commonly used on fitness tracking social networks such as Strava, are not always reliable in protecting sensitive locations from malicious actors, and the hidden locations can still be found. Aside from data disclosures and their consequences, recent studies have shown that machine learning techniques can be used to infer sensitive information from fitness tracker data [78, 90, 91]. Research has shown that attackers can precisely infer sensitive information, such as alcohol intoxication [91] and illegal drug use [78], padlock codes [92], and smartphone keystrokes [93]. Therefore, wearable devices must provide usable transparency and inform users about possible threats and consequences so that they can make informed decisions about sharing their data. This requires us to know how and why users share information gathered by wearables, how they feel about disclosing that information across multiple platforms, whether they are aware of data processing practices, privacy threats, and consequences, and whether they have specific privacy concerns or requirements.

Research studies have been conducted on users’ motivations and behaviours surrounding their use of fitness trackers in various domains (e.g. [75, 77, 81–83]) as well as their privacy concerns and the effects of sharing fitness tracker data with others (e.g. [76, 78–80]). Data sharing is widely reported to be influenced by the type and recipient of information, as well as by the reasons for and benefits of sharing in different contexts. In the context of wearable devices including fitness trackers, movement data such as step count is not considered sensitive [85], while weight and sleep data (depending on the audience) [86], location information [85, 87], and quantified-self data related to finances [87] are considered sensitive. The literature on users’ perception of sharing data collected by fitness trackers shows that while users are generally better at understanding the risks and implications of sharing location than other types of data, they may still underestimate the extent to which it can be compromised and leak information [75, 77, 79, 84]. Additionally, people are less comfortable sharing graph data (e.g. weight graphs) than aggregate statistics such as lifetime floors climbed or personal information such as birthdays [75]. Users would also be more concerned if their data could be used for identification [80, 84]. The results of interviews conducted by Alqhatani et al., however, show that fitness data is assumed to be a type of data that does not need to be kept secret. They further report that a key driver of users’ choices in what information to share and whom to share it with is not concern over data sensitivity, but norms and self-presentation [77]. In other words, Alqhatani et al. argue that users’ data sharing decisions are influenced by what they expect to achieve by sharing data with different entities, and how well those entities can support those expectations and goals [77]. In addition to data sensitivity, people worry that their fitness tracker data will be used in unintended ways and that they will not have control over their information [88]. As for recipients of the data, users are more concerned about third parties such as insurance companies [75, 77], employers [75, 77, 80], and advertising agencies [75]. In Zufferey et al.’s study, almost half of wearable activity tracker users underestimated the number of third-party applications they had given access to, and around 60% shared data with at least one third-party application they no longer used [94]. The same study found that close to 30% of the participants (n = 628) do not revoke access to their data from third-party applications because they forgot it was given to them, and 8% did not even know they could revoke access to their data [94].

As described above, fitness tracker users have some privacy concerns about data practices and information distribution. However, researchers have found that in general they do not have much concern about privacy issues [84] and have little idea of how fitness device manufacturers handle data [89]. Most users are confident that no one would be interested in their data [84] or they fully trust the service provider to protect their data [77, 84]. Several factors can contribute to the lack of privacy concerns, including ignorance of inference attacks or unawareness of how their data is collected and used. Users believe that data inference is technically possible yet unlikely to happen [75]. In a study by Velykoivanenko et al. [76], it was found that fitness tracker users were aware that the data collected could be used to infer some types of information. Using heart rate data, the participants correctly
inferred sexual activity, but they did not realise that non-physiological information could also be inferred. Researchers also report that users of fitness trackers have a poor understanding of the fitness tracker ecosystem, that their mental models show substantial gaps in their understanding of the data sharing process, and that, consequently, their misconceptions can adversely affect their privacy [76, 94]. The results of usable privacy work in the context of wearables show the importance of educating and informing users about privacy threats, what could be inferred from their data, data sharing consequences, and existing privacy features and their limitations in wearable devices and their accompanying apps. Also, more efforts are needed to design and evaluate fitness tracking privacy mechanisms.

3.3.3 Helping People Make Better Privacy Decisions in the Context of IoT

In IoT, notice and consent mechanisms are difficult to implement or are ineffective. IoT devices often do not have traditional user interfaces or controls. Given how many IoT devices there are, informing users and asking their consent to share data for every device they encounter can be overwhelming. Lipford et al. advocate for granular, on-device control [354]. Nonetheless, as they also acknowledge, fine-grained controls can be challenging for users. IoT users may therefore benefit from context-aware and user-tailored privacy solutions that provide sufficient control over their privacy without imposing undue burdens on them. This part discusses a selection of works that provide such solutions for helping IoT users make informed decisions about sharing their data.

Users face increasingly complex and diverse data collection and use practices, and Privacy Assistants (PAs) have been proposed as a solution to help them manage their privacy. PAs can help users manage an increasingly large number of privacy decisions while selectively notifying them about data practices they would want to know about. PAs can be personalised by building them on decision-making models that account for user preferences. Personalised Privacy Assistants (PPAs) for IoT can help overcome the lack of awareness of data collection and the lack of control over IoT devices. Lee and Kobsa investigated how people’s privacy awareness impacts their privacy decisions in IoT [61]. Based on the results of their study, Lee and Kobsa report that when people are aware of the potential privacy risks associated with IoT services, they tend to make conservative and confident decisions. Additionally, their machine learning experiment showed that an individual’s overall level of privacy awareness was the most significant feature in predicting their privacy choices. ML models trained on confident privacy decisions can produce highly accurate privacy recommendations. Lee and Kobsa also suggested strategies for better designing and developing PPAs aimed at increasing privacy awareness among IoT users and providing them with ML-based privacy recommendations.
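
As a rough illustration of the kind of decision model such recommendations could rest on, the following sketch trains a classifier on hypothetical past allow/deny decisions, with the user’s self-reported privacy awareness included as a feature. The feature names, data, and scikit-learn pipeline are assumptions for illustration only and do not reproduce Lee and Kobsa’s experiment.

import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical log of past privacy decisions: contextual attributes of a data
# request plus the user's overall privacy awareness (e.g. a 1-5 self-report scale).
decisions = pd.DataFrame({
    "data_type": ["location", "video", "temperature", "location", "video"],
    "recipient": ["retailer", "government", "building owner", "own device", "retailer"],
    "awareness": [4, 5, 2, 3, 5],
    "allowed":   [0, 0, 1, 1, 0],   # the user's past allow/deny choices
})

features = decisions[["data_type", "recipient", "awareness"]]
labels = decisions["allowed"]

# One-hot encode the categorical context, pass awareness through, fit a simple classifier.
model = make_pipeline(
    make_column_transformer(
        (OneHotEncoder(handle_unknown="ignore"), ["data_type", "recipient"]),
        remainder="passthrough",
    ),
    LogisticRegression(),
)
model.fit(features, labels)

# Recommend a decision for a new data-collection request.
new_request = pd.DataFrame([{"data_type": "location", "recipient": "retailer", "awareness": 4}])
print(model.predict(new_request))        # e.g. [0] would mean: recommend denying this request
print(model.predict_proba(new_request))  # confidence of the recommendation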


Colnago et al. studied user perceptions of three increasingly autonomous PPA implementations, i.e. Notification, Recommendation, and Auto, determining their benefits and issues [59]. Notification PPAs can identify nearby devices and notify their users of their presence and requests, allowing them to accept or decline them. In Recommendation PPAs, users are given recommendations on how to share data based on their preferences. Users still have full control, but the system has more autonomy in analysing information. Auto PPAs eliminate users’ cognitive burden by making decisions for them but also remove their control over data sharing. According to Colnago et al. [59], participants generally liked the different implementations but expressed concerns, which differed with the automation level. They frequently weighed the trade-off between being aware and in control versus feeling overwhelmed by notifications about data collection. Colnago et al. recommend that PPAs provide configurable automation levels. In this way, users can balance their desire for control and their need to minimise cognitive load. Nonetheless, it is important to note that the use of PPAs raises several fundamental open issues related to infrastructure and decision-making, including issues about the collection of public data, legal and ethical issues of automated consent, resignation, and communities at risk, which will influence the adoption of PPAs [59].

When users make decisions about IoT devices, relevant security and privacy information is essential since their attention capacity is limited. Emami-Naeini et al. [60] explored and tested the design space for an IoT privacy and security label to help end users make better decisions. They elicited experts’ opinions on the contents of IoT privacy and security labels. Based on experts’ assessment of each factor’s criticality in conveying risk to consumers, Emami-Naeini and colleagues divided the label content into two layers (primary and secondary). The primary layer should appear on a product package or prominently on a website; the secondary layer should be available online via a QR code or a web link. Their proposed privacy and security label for IoT devices was iteratively improved by incorporating feedback from IoT consumers. Despite its potential transparency benefits, the proposed label suffers from the same problems as multi-layered privacy notices, such as a brevity that may hinder transparency and users disregarding the secondary layer or skipping the label altogether. The label should be tested further to identify areas of improvement as well as how consumers use it in context.
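
A minimal sketch of what such a two-layer label could look like as a data structure is given below. The field names and values are hypothetical and simplified; they are not the actual label specification developed in [60].

from dataclasses import dataclass, field

# Hypothetical, simplified two-layer IoT privacy and security label.
@dataclass
class PrimaryLayer:
    # Printed on the product package or shown prominently on a website.
    device: str
    data_collected: list[str]
    purposes: list[str]
    shared_with: list[str]
    security_update_period: str

@dataclass
class IoTLabel:
    primary: PrimaryLayer
    # The secondary layer holds more detail and is reached via a QR code or link.
    secondary_layer_url: str
    secondary_details: dict = field(default_factory=dict)

label = IoTLabel(
    primary=PrimaryLayer(
        device="Example smart camera",
        data_collected=["video", "audio"],
        purposes=["motion alerts"],
        shared_with=["cloud provider"],
        security_update_period="until 2027",
    ),
    secondary_layer_url="https://example.com/label/smart-camera",
    secondary_details={"data retention": "30 days", "encryption in transit": "yes"},
)
print(label.primary.device, "->", label.secondary_layer_url)
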
Our environment is increasingly populated with sensors that monitor sensitive data, yet users are often unaware of this, which violates their privacy. Prange et al. present PriView [62], which visualises sensor locations, provides information about each sensor (e.g. data collection and sharing), and highlights potential privacy intrusions (e.g. video recording). Prange and co-workers developed two prototypes, namely a mobile application and a VR head-mounted display showing mock-ups of various scenes [62]. In general, users appreciated the idea of PriView and saw interesting use cases, including and beyond privacy protection. Although participants prefer more detailed visualisations for unfamiliar private places where the purpose of data collection may be unclear, simple indications are appreciated to get a basic overview of a new scene [62]. To use PriView in practice, several issues must be addressed. PriView requires gathering the relevant information to visualise. It is therefore necessary to investigate how these types of data can be gathered for viewing in PriView, how they can be handled in a way that maintains the privacy of different users
(e.g. device owners, bystanders), and how we can select the relevant information for users based on the circumstances.

3.3.4 Gaps and Future Directions

Although IoT security and privacy are a fast-moving field, they are still in their infancy, particularly as they relate to user privacy needs and challenges. Despite several researchers considering how usability impacts privacy mechanisms, examining users’ privacy perceptions and concerns in the IoT ecosystem, and studying how to help people make more informed privacy decisions (see Sects. 3.3.1–3.3.3), we still lag behind the speed of smart technology development. To future-proof the research, as also discussed in [354], we should not just focus on the possibilities that IoT technologies currently offer but also anticipate the socio-technical impacts of those possibilities. More work is required, for example, to identify how different preferences can be accommodated in complex scenarios, considering primary users as well as bystanders with distinct privacy expectations even within each group, to fully address the needs and concerns of all stakeholders. Although most attention has been paid to popular SPAs like Amazon Alexa and Google Assistant, others (such as Microsoft Cortana) may have unique characteristics and user requirements that are not addressed in the literature. The perspectives of non-Westerners, minorities, marginalised groups, and the elderly have also largely been overlooked, although a little research has been conducted in these contexts (e.g. [68, 69]) and the gendering of SPAs has been explored [70]. Furthermore, it is crucial to provide more context-based privacy solutions to enhance privacy protection.

3.4 Efforts Towards More Inclusive Privacy

Even though some technologies, like assistive technology and safe disclosure spaces, can promote equity and freedom and reduce discrimination, other technologies can serve as mechanisms that strengthen marginalisation processes. Technology may fail to adequately protect marginalised groups’ privacy interests, for example, by introducing disproportionate privacy threats or by exclusion. There is usually a generic population in mind when designing privacy or security mechanisms. This means that privacy and security mechanisms often fail to include the experiences and challenges of many understudied and marginalised subpopulations and to address their needs. Further, many existing privacy frameworks do not account for marginalisation [73]. Yet marginalised people have unique privacy-related needs and behaviours that need to be considered by HCI research and development work. Researchers have conducted several studies in an attempt to fill this gap and develop effective privacy solutions and tools for the broadest range of users possible, i.e. moving towards more inclusive privacy. Marginalised groups include all populations whose needs are overlooked when designing and developing technology, including, but not limited to, immigrants
and asylum seekers, children, the elderly, those with disabilities, those with specific diseases, members of disadvantaged social groups, activists, journalists, LGBTQ+ people, victims of crimes, and those from developing nations. The mentioned groups can suffer disproportionate harm when their privacy is violated. Activists living under repressive regimes, for example, can face far more severe and life-threatening consequences of re-identification risks than typical users. Meanwhile, marginalised individuals face complex decisions regarding identity management, data disclosure, and technology use, while their resources and power to protect their privacy and to enforce countermeasures when a breach occurs are limited compared to the general population. For example, the resources available to those in economically disadvantaged communities tend to be limited. This section reports and discusses marginalised groups’ privacy and HCI problems and challenges, solutions to these challenges, and finally future directions in this field, based on two recent SoK (systematisation of knowledge) articles [73, 224]. In our discussion of inclusive privacy, marginalised populations are defined broadly, including, but not limited to, people with disabilities, adolescents, and groups that are vulnerable based on their ethnicity, gender, race, etc., and their privacy needs and challenges are described along with possible solutions. However, readers who are particularly interested in reading about privacy in specific populations can consult the book on socio-technical perspectives on privacy by Knijnenburg et al. [352] and find dedicated chapters on accessible privacy (with a focus on the needs and challenges of people with disabilities) [357], privacy in adolescence [356], and the privacy of vulnerable populations [355].

3.4.1 Risk Factors Amplifying Privacy Risks of Marginalised People

If researchers and designers understand the risk factors associated with marginalised populations, they can develop tailored educational materials to promote awareness of risks and countermeasures and create more inclusive and effective PETs. Furthermore, policymakers who recognise such privacy risks can develop policies that address the systemic issues that lead to privacy violations of marginalised people, and give marginalised communities additional protections and resources. The framework developed by Warford et al. [224] simplifies the process of reasoning about marginalised populations. A risk factor enhances or amplifies the likelihood that a user will be digitally attacked or will suffer disproportionate harm [224]; such users include, for instance, activists, LGBTQ+ people, or people living in repressive regions. To emphasise the external risks these users face, Warford et al. use the term at-risk population. The framework [224] explains 10 contextual risk factors that enhance or magnify security or privacy risks. Risk factors fall into three main categories based on whether they stem from at-risk users’ roles in their society and culture, i.e. societal factors (n = 3), from who at-risk users know or interact with, i.e. relationship factors (n = 3), or from who at-risk users are and their personal or professional activities, i.e. personal circumstances (n = 4) [224]. Users may reprioritise critical services in a way that exacerbates their risk of attack or the severity
of the harm they suffer under certain circumstances, such as natural disasters or war. In light of this, Warford et al. emphasise the importance of monitoring the circumstances and expanding the definitions of existing risk factors [224].

Societal risk factors: As opposed to targeting a specific individual, attacks linked to societal risk factors target everyone within a population or the entire population [224]. Law and politics, social norms, and marginalisation all contribute to societal risk. Physically locking devices or data [225, 226], bribing providers into bypassing security measures [227], spoofing trusted entities [228], or preventing internet access [231] can be used by governmental, political, or legal entities to monitor or intercept at-risk populations’ communication, which may destroy their trust [229, 230] or restrict free expression [232, 233]. Some at-risk groups may also be threatened and harassed online because of their identity, beliefs, or social status, such as LGBTQ+ people [234–237], undocumented immigrants [238], people with low socioeconomic status [240], and marginalised racial and ethnic groups [239, 241].

Risks driven by relationships: At-risk people’s relationships with attackers and with third parties are responsible for the relationship risk factors [224]. In contrast to attacks linked to societal risk factors, relationship risks target specific at-risk individuals. A variety of threats can arise from intimate and personal relationships. For example, intimate partner abusers may have physical or digital access to the survivors’ devices and accounts [242–244], and in this way, they can limit the survivors’ autonomy [242, 243, 245]. Abusers may also target the children of intimate partner abuse survivors to regain access to them [242]. People who depend on a third party for help with critical tasks, to accomplish their online goals, or for protective actions against privacy or security breaches, such as children [246–248], elderly people [249, 250], people with impairments [251, 252], and refugees [253, 254], may also be at risk. For example, children and adolescents share personal data and devices with parents and caregivers [246–248], and parental monitoring applications are popular for protecting children and teens [255–257].

Risks driven by personal circumstances: Public prominence and socioeconomic constraints can also amplify privacy and security risks for individuals [224]. Attacks based on personal circumstances mostly target marginalised populations diffusely, except in situations where someone stands out in a population because of their prominence [224], i.e. they are publicly well known or have noticeable attributes, such as celebrities, notable journalists [226, 258], activists [228, 231], and NGO staff [225]. People can experience amplified risks and a decreased ability to respond and safeguard themselves against threats if they have limited access to different resources, ranging from the technology itself to money or internet connectivity [224]. Moreover, current technology and its design do not always meet the needs of marginalised groups, such as those with disabilities, neurodiverse conditions, language barriers, or developmental disabilities, posing a threat to their security and privacy online [224]. Affordances in technology and design are critical to preventing such users from feeling anxious and concerned that they cannot protect themselves effectively.
For instance, some older adults feel insecure about protecting themselves online and seek assistance from trusted sources [250]. Wang et al. likewise reported that people with visual
impairments were concerned about safety and eavesdropping when using screen readers in public [259].

3.4.2 Privacy-Protection Practices and Barriers to Effective Mechanisms

In the absence of privacy protection, marginalised groups may face discrimination, harassment, and other forms of harm. Researchers in HCI and usability can mitigate these consequences by developing privacy-enhancing technologies, which in turn requires a proper understanding of the privacy protection practices that marginalised people employ and the barriers they face in protecting their privacy. But how do these people at risk employ practices that they believe will help them prevent, mitigate, or respond to digital safety risks? Marginalised people utilise three different types of protective measures: social strategies, distancing behaviours, and technical solutions [224]. The Privacy Responses and Costs Framework presented by Sannon and Forte [73] likewise lists ten types of responses to privacy threats observed in the literature about marginalised groups’ privacy. In addition, they list the costs and consequences these protection behaviours carry for marginalised people. Warford et al. [224] categorise responses differently, but their categories share many similarities with those of Sannon and Forte [73]. Following is a brief discussion of the types of protection behaviours described by Warford et al. We further explain which of the ten behaviours identified by Sannon and Forte [73] belong to each of Warford et al.’s categories [224]. Note that multi-strategy reliance is reported to be common among at-risk users, with none of the strategies being mutually exclusive [224].

Social strategies: At-risk users may rely on their social connections to overcome the threats they face. They may rely on trusted family, friends, or organisations for informal or formal advice and support [224]. There are certain circumstances under which users decide to control social interactions to minimise harm. Controlling disclosure (e.g. segmenting identity and audiences, multiple accounts, and privacy controls), physical workarounds (e.g. hiding the device, use of headphones), privacy lies (e.g. providing false information), asking for help (e.g. from websites, professionals, and consulting networks), collaborative privacy practices (e.g. developing shared guidelines and boundaries), and third-party protections (e.g. others taking measures to protect marginalised individuals on their behalf) are the six categories of responses surfaced in Sannon and Forte’s research [73] that can go under social strategies. Loss of autonomy, the need to trust third parties and the additional sources of risk that reliance on them introduces, restrictions on self-expression, and financial, social, cognitive, and legal repercussions are among the consequences marginalised people may face if they take such measures to protect themselves [73]. In order to engage in political activism, young Azerbaijanis maintained several accounts [260]. On social media, LGBTQ+ users manage their identities across platforms [261]. South Asian women who were harassed online sometimes turned to their families for emotional support [262, 263] and provided false information to protect themselves from online
abuse [262]. Children and teenagers seek their parents’ advice when they have strange, scary, or confusing online experiences [264, 265]. In addition to their own privacy rights, LGBTQ+ adults considered those of their families, ex-partners, and children [234]. Families co-developed privacy guidelines for shared devices in Bangladesh [266]. As a result of fears of impersonation by law enforcement or the police, undocumented immigrants restricted communications with untrusted parties until they were sufficiently vetted [238]. As a way to simplify sexual negotiations and avoid stigma, men seeking men on dating apps may disclose their HIV status publicly [236].

Distancing behaviours: At-risk people may also distance themselves from disclosing personal information, or limit their use of technology [224]. Apathy (i.e. lack of response), no-use (e.g. not using technology, or deleting an account), and withholding disclosure (e.g. information removal, self-censorship) are the three categories of responses that surfaced from Sannon and Forte’s investigation of the privacy literature concerning marginalised people [73] that can go under distancing behaviours. Exposure to risks, opportunity loss, exclusion, isolation, and restricted self-expression are among the consequences marginalised people may face if they take such measures to protect themselves [73]. As an example, some LGBTQ+ parents refrain from sharing family photos or personal information to avoid revealing themselves to outsiders [234]. A feeling that government surveillance was impossible to escape led to inaction on the part of undocumented immigrants [238]. To avoid government control, Sudanese Revolution activists used coded communications and regularly deleted sensitive data [231]. People with impaired vision avoided eavesdropping at the cost of physical safety by using headphones with screen readers [267].

Technical solutions: Technical solutions are also used by at-risk users to protect their digital safety, including secured communication and encryption, privacy settings and access control, as well as tracking applications and stronger authentication [224]. The use of PETs (e.g. encryption, cloaking) as a response to privacy threats was also a common theme that surfaced in Sannon and Forte’s investigation of the literature [73] concerning the privacy of marginalised groups. Journalists and activists have, for example, reported using encrypted chat apps to communicate over networks controlled directly by their antagonists [227, 231]. Foster parents reported limiting internet access at certain times of the day and monitoring potentially dangerous behaviours with parental control apps [269]. Several visually impaired users change their passwords frequently in case others have observed them entering them [267]. Only a small proportion of survivors of intimate partner abuse use two-factor authentication, despite facing committed malicious actors and consequently having high privacy and security needs [242]. Note that while PETs offer protections, the very people who need them may be incriminated by their use [73]. For example, in a study of shared mobile phone use in Bangladesh, Ahmed et al. reported that almost half of their respondents said that locking specific data or applications might raise suspicion among their partners [266]. Also, in situations of intimate partner violence, locking down devices or apps may result in further coercion or harm [243, 270].

Barriers to employing effective measures: Although users select protective measures based on the risks they face, those practices are not always ideal or even effective. Different users may assess the pros and cons in varying ways, resulting in inconsistent or conflicting decisions when deciding which practices are best for them. The thematic analysis conducted by Warford et al. [224] in their survey review also revealed numerous barriers to the effective adoption of the protection mechanisms and practices presented earlier. The barriers include (1) competing priorities, (2) lack of knowledge or experience, and (3) broken technology assumptions [224]. Putting online security and privacy ahead of competing, sometimes urgent, needs is not always possible, and marginalised groups especially may have a hard time adopting appropriate protective behaviours. Warford et al. [224] discuss that basic needs such as food, income, physical health and safety, social participation, compliance with social norms, and caring for others are among the competing priorities. Homeless people reported using public WiFi to obtain government assistance, housing, and employment [271]. Likewise, refugees shared account information with caseworkers when applying for social services or jobs [253]. Even though marginalised groups face greater security and privacy risks and need to understand and deploy stronger protections, they lack sufficient knowledge of digital safety [224]. Direct relationships with attackers and the power imbalance between attackers and marginalised people exacerbate this lack of security and privacy knowledge, for example, in the case of politicians, journalists, and activists at risk of government attacks [226, 230]. The assumptions designers and developers make when designing systems for improving user security and privacy may not always be valid or applicable to marginalised groups [224]. Marginalised individuals may be prevented from accessing, using, or benefiting from digital safety technologies and best practices due to these assumptions. For example, marginalised groups commonly share personal devices and accounts to receive needed monitoring [249], accomplish important tasks [253, 272], or fulfil social norms [263]. Further, marginalised people may not have access to technology to the extent assumed by designers [224], as in the case of Sudanese activists prevented from enabling 2FA on a certain social media platform due to US sanctions [231]. In general, concepts, values, and experiences are not universal, much less so for marginalised people with more complex needs. As a result of unmet accessibility needs, this mistaken assumption of universal needs becomes even more problematic [224]. Those who suffer from dyslexia or aphasia, for example, may have difficulty remembering passwords [273].

3.4.3 Recommendations for Better Privacy Protection

What are the best ways to improve technology and design for marginalised people to ensure their privacy, taking into account their reactions to privacy threats and their challenges in utilising protection mechanisms and best practices? As part of their literature survey on the privacy of marginalised people, Sannon and Forte [73] divided the recommendations derived from
88 papers into three broad categories to address this issue: (1) conceptual, (2) technological, and (3) behavioural recommendations. The recommendations in [73] can alleviate the barriers that Warford et al. [224] identified for the effective adoption of protection measures, which we discussed previously.

Conceptual changes required: In order to design privacy for marginalised individuals, designers and developers must conceptually rethink their approach. The conceptual recommendations can mitigate problems related to the competing priorities and broken technology assumptions previously discussed in Sect. 3.4.2. In light of Sannon and Forte’s findings [73], many researchers advocate recognising that technology is not a panacea, assessing key values in the design process and how they affect marginalised groups, and prioritising people’s agency to protect marginalised groups. An example is the claim made by Wan et al. that technologies must accommodate different needs in various organisational and family settings [274]. Designers should therefore decide whether specific acts, such as gendering, are actually necessary and allow users the choice not to be gendered to give them autonomy [275]. Designers should also understand how power structures shape marginalised groups’ use of technology and tailor technology accordingly [73]. As an example, a husband may be able to force a woman to unlock a locked mobile phone in a male-dominated society [266].

Technical changes required: The benefits and drawbacks of any technical intervention should be carefully assessed to minimise tensions and maximise benefits. For this reason, Warford et al. recommend that technology teams use their framework for balancing the risks and benefits of their potential designs [224]. Sannon and Forte, based on the results of their literature survey, recommend providing greater control over information, facilitating the management of communal and networked aspects of privacy, making privacy decisions easier, and building technical safeguards [73]. Granular privacy settings are a common design suggestion for technologies, as they would allow users to better control the visibility of their personal information [73]. Granular privacy controls in the form of cooperative settings and individual accounts on shared devices can help people better manage their privacy: the first, for example, helps people set up and manage their collective privacy boundaries, as in the case of dementia patients and caregivers [276], and the second helps them maintain personal privacy on shared devices [270]. However, granular settings are only effective if users can make informed decisions about how to use and set them. Therefore, for users to make informed privacy decisions and protect themselves, usable privacy settings and design for transparency are important. Warford et al. suggest balancing the need for control options and usability through layered or directed designs that help users find options and controls matched to their needs, transparency features that inform users about actions they can take in the presence of a threat, and understandable defaults that are carefully selected to support marginalised users with many competing priorities [224]. In addition, marginalised populations can be safeguarded through better technological infrastructure, beyond improving their experiences and interface features. Akter et al. [252], for example, argue that algorithms should detect the objects and context features that matter to users.
In conclusion, the technological recommendations mitigate the
barrier of a lack of knowledge and experience that marginalised people face (see Sect. 3.4.2), which impedes their adoption of effective protection mechanisms.

Behavioural changes required: It might not surprise marginalised groups that they face privacy and security threats, but they may lack the resources to respond. Thus, educating and empowering people is essential for keeping them safe and maintaining their privacy while using technology. Security consultations with a trained technologist were found to be valuable in helping survivors of intimate partner abuse, as well as in revealing security vulnerabilities that victims were not aware of [243, 245]. However, ensuring privacy education for marginalised people remains challenging due to low attendance at digital literacy programmes [73]; not all marginalised people, especially in disadvantaged areas, have private access to the Internet. Accordingly, Reichel et al. propose that “lightweight privacy onboarding interfaces” can benefit restricted users [277]. One way to thwart the disadvantages faced by those with limited connectivity is to make privacy settings accessible offline [277]. The behavioural recommendations, similar to the technological ones, can alleviate the barrier of a lack of knowledge and experience that marginalised people face (see Sect. 3.4.2).

3.4.4 Gaps and Future Directions

Despite increased attention towards inclusive privacy in recent years and dedicated panels and workshops,³ researchers are still trying to comprehend the complexities of privacy issues in marginalised contexts so that they can develop more inclusive technologies capable of protecting everyone’s privacy. However, it is also worth mentioning that even in studies on inclusive privacy, certain groups receive more attention than others. In addition, our database of attempts to achieve more inclusive privacy shows that considerable research has been conducted on specific domains such as the privacy issues of adolescents and children (e.g. [246, 248, 265, 280]) and of people with various health conditions (e.g. [249, 251, 252, 267, 268, 278, 279]). On the other hand, the privacy issues, concerns, and needs of people in crisis, asylum seekers, people living under oppressive regimes, people belonging to specific tribes and races, and marginalised populations from non-Western cultures have seldom been studied.⁴ Perhaps the sheer number of people in certain marginalised groups increases attention and even facilitates research, for example, due to the availability of funds and participants. Likewise, Warford et al. [224] advocate for more research involving at-risk individuals from geographically and culturally diverse backgrounds. Similarly, Sannon and Forte note that inclusive privacy research has focused mainly on the most common topics, such as disability and LGBTQ+ issues, but other groups, such as those with mental health conditions and queer women, are less studied [73]. Furthermore, they point out the gap in race-centred privacy research, particularly within HCI and privacy-focused venues, as well as the need to investigate the effects of race on marginalisation [73]. Until we understand the interplay between inequitable social structures, social norms, and policies and laws, marginalised groups’ privacy concerns and challenges may not be resolved. Nevertheless, as also mentioned by Sannon and Forte [73], most empirical work on inclusive privacy in the usable privacy field focuses on privacy behaviours and experiences without considering policy implications. Future research should focus more on the structural and societal factors that affect the design and use of technology as well as on policy recommendations that can correct inequities in protecting marginalised people’s privacy.

³ For example, CSCW workshops on privacy for vulnerable populations and SOUPS Workshops on Inclusive Privacy and Security (WIPS).
⁴ Generally, academic studies in usable privacy focus mainly on Western culture with participants from US and European countries, which leads to marginalised people in non-Western countries being neglected even more.

3.5 Improving Privacy Through Usable Privacy for Developers

Online technologies have a significant impact on end-users’ privacy and security, which consequently depend heavily on technology developers’ expertise concerning online privacy and security. Due to the lack of usability and the inconvenience of security libraries and tools, such as cryptographic libraries, developers may not fully utilise them, which can result in security vulnerabilities [99, 100]. Even with strict privacy laws, many developers fail to adhere to privacy best practices when it comes to online privacy [101]. Privacy tends to be a secondary objective for developers, and other factors are prioritised [104, 105]. Even so, developers who care about privacy can still struggle to meet privacy requirements due to a lack of knowledge and expertise. Developers often find it difficult to understand privacy requirements and map them to technical requirements, since they are laden with legal jargon [102, 103]. In light of the critical role developers play in protecting user privacy, more and more researchers are using human-centred approaches to study how developers protect user privacy in practice, what challenges they face, and what solutions may be available, which we discuss in this section.

3.5.1 Developers’ Barriers to Embedding Privacy

The ability to embed privacy into an app’s design is hampered by design requirements that contradict privacy requirements and by a lack of knowledge about privacy theories, as observed by Senarath and Arachchilage when developers were asked to complete a task that involved embedding privacy in the design process [106]. Tahaei et al. [102] additionally report that common barriers to implementing privacy in software design are negative privacy cultures, internal priorities, limited tool support, unclear evaluation metrics, and technical complexity, as revealed in their interviews with Privacy Champions in software development teams. A
privacy champion is someone who strongly advocates privacy and supports a development culture that respects privacy [102]. In contrast to general privacy awareness and onboarding training, Privacy Champions find code reviews and practical training to be more instructive [102]. Li et al.’s interviews with developers revealed that although some developers valued user privacy, they only had a partial understanding of it [107]. Their participants did not always keep accurate records of their own data practices [107]. Having developers annotate their apps’ privacy practices during the development process, for example with the help of an Android Studio plugin that provides real-time feedback on privacy concerns, could result in significant improvements in privacy [107].

Some research has also focused on studying developers and their privacy understanding, needs, and challenges in specific contexts, such as when working with advertising networks⁵ or regarding app permissions. The default settings of ad networks can be privacy-unfriendly [108, 109], and developers usually tend to adhere to those settings [108], which negatively affects the privacy of users. Ad networks’ privacy interfaces for developers contain dark patterns that may lead to privacy-unfriendly decisions that affect users [109]. It is possible, however, to nudge developers towards privacy-friendly decisions by, for example, integrating the implications of their choices into the interface [110]. Ad networks’ documentation does not support developers in building privacy-compliant apps [111]. It is also common that privacy information is concealed under multiple layers, inconsistent between platforms, filled with legal jargon, and lacking structure [111]. As a solution to the mentioned issues, Tahaei et al. suggest developing a privacy testing system for developers, dedicating a section of the documentation to privacy, unifying privacy regulation terms, and creating usable open-source libraries that handle privacy regulations for developers through a simple configuration interface [111].

⁵ Ad networks enable developers to create revenue from their apps, but using them may impact user privacy.

3.5.2 Developers and App Permissions

The developer’s perspective on app permissions is understudied in comparison to the end-users’ perspective. Tahaei et al. [112] examined why developers request permissions, how they conceptualise permissions, and how their perspectives differ from those of end users. As a result of confusion over the scope of a particular permission or over third-party library requirements, developer participants sometimes requested multiple permissions, as reported by Tahaei et al. [112]. In addition, most of their end-user participants believed that they were responsible for granting permissions and that it was their choice to do so. Many developers also held the same belief about end users in this context [112]. The unclear data collection practices and complicated configurations of third-party libraries can confuse developers [113]. Developers may unintentionally send sensitive data about end-users to third parties because these libraries are unclear about their purposes and permission
requests [114]. Further, developers try to satisfy app store requirements rather than laws, for example when it comes to making legally compliant child-directed apps, and they often rely on app stores and operating systems to detect privacy-related issues [113]. Tianshi Li and colleagues [115] recently examined the usability and understandability of Apple’s privacy nutrition label creation process from the developer’s perspective. The common challenges they identified for correctly and efficiently creating privacy labels are incorrect preconceptions about different concepts and misinterpreted definitions, the limitations of Apple’s documentation, a lack of team and organisational support, complexities related to communication costs between different stakeholders, and the challenges of cross-platform apps [115]. Li et al. further discussed design recommendations for platform providers such as Apple and Google to improve their design of privacy labels from developers’ perspectives. As part of their recommendations, they primarily emphasise revising definitions and providing examples, proactively clarifying common misconceptions, and using formats other than text, since the current privacy label documentation is text-heavy and difficult to read, e.g. the explanations of optional disclosure, linking, and tracking [115]. In addition, labels should be checked internally for validity and consistency because, although some concepts are interconnected per their definitions in the platform, developers used them independently [115].

3.5.3 Privacy Views and Practices Based on Natural Conversations

A different line of research has sought to understand developers’ privacy practices and attitudes by reviewing logs of online conversations that occurred naturally in various forums such as Reddit, as opposed to the interviews, surveys, and lab observations used in previous studies. This approach can provide a better understanding of developers’ privacy views and practices without explicitly asking developers about them. Comparing an iOS developer forum with an Android developer forum, Greene and Shilton [116] found that developers on different platforms had different interpretations of “privacy”. In their analysis of Stack Overflow questions with the word “privacy” in the title or tags, Tahaei et al. found that developers face challenges when writing and modifying privacy policies, implementing or designing access control systems, updating platforms and APIs, and deciding about the privacy aspects of their projects [117]. The developers on Stack Overflow, for instance, expressed concerns about privacy issues and sought solutions to reduce data collection, mask personal information, remove unnecessary data, minimise tracking, and apply other approaches aligned with the principles of privacy by design to protect privacy [117]. Tahaei et al., in another more recent study, found that developers frequently consult Stack Overflow for privacy advice and that the privacy advice mostly focuses on compliance with regulations and ensuring confidentiality, partly because platforms like Google Play impose requirements that shape the types of questions and answers [118]. There was also a bias towards a limited set of privacy design strategies, including inform, hide, control, and minimise, whereas other
strategies such as demonstrate and separate were rarely considered [118]. To empower developers, it is important to improve the usability of less common privacy framework strategies and of approaches like differential privacy, which is promising but currently difficult to use [118]. Li et al. [105] followed a similar objective to Tahaei et al. [117, 118]. However, they studied how developers discuss the use of personal data and privacy in the r/androiddev subreddit, where developers express a variety of reactions to news and discuss how new OS releases are designed, and they focused directly on personal data practices rather than sampling posts based on the use of specific words such as “privacy”. Based on the findings of Li et al., developers rarely discuss privacy issues regarding personal data use unless they are prompted by external events such as new APIs designed for privacy, app store policy updates for privacy, and (mostly negative) user reviews regarding personal data use [105]. Developers often view platform or legal requirements regarding privacy as burdensome and not beneficial to them [105].

3.5.4 Gaps and Future Directions

Research on privacy issues concerning developers has been thriving recently, but there are still several areas that should be explored in the future. Besides their environment, such as organisational culture, developers’ perceptions of and practices around privacy may be influenced by their personal characteristics and by cultural differences. For example, there are fewer resources and experts available to deal with privacy concerns at smaller companies. Bouma-Sims and Acar recently examined how developers view gender in software and discovered that while some developers understand inclusive gender options, they rarely consider whether gender data is actually required or how they ask users to disclose their gender [119], which impacts technology inclusivity as well as privacy. While cultural differences, environments, and individual characteristics may affect app development at different stages, and personalised and automated developer tools and interfaces may be helpful, these areas are rarely investigated. In addition, recent changes in privacy laws and regulations as well as advances in AI and IoT have resulted in more sophisticated data flows and data practices, so it is imperative to research developers’ privacy attitudes, practices, and challenges related to emerging technologies and possible solutions, as well as to conduct longitudinal studies examining changes in developers’ privacy practices and attitudes due to new advancements. For example, Kühtreiber and colleagues [120] recently surveyed privacy guidelines and data flow tools for developers in the context of IoT. According to them, existing solutions are cumbersome, only work in certain scenarios, and are not sufficient to address the privacy issues inherent to IoT [120]. Moreover, we need to focus on effective methods of educating and training developers on privacy issues and on how they can balance different goals, such as privacy and usability, as well as identifying how these methods may
affect their attitudes and practices. Following a human-centred design approach, in which stakeholders from different disciplines, such as legal or technical privacy experts, are also involved in requirement elicitation and evaluation, is another direction to pursue, as we will also discuss in Chap. 5.

3.6 Adoption, Usability, and Users’ Perceptions of PETs

There are several technologies and tools available to help users protect their data and enhance their privacy (see Sect. 2.3 for an overview of different types of PETs). PETs will only benefit users if they are adopted and used correctly against the specific threats that the selected PETs are capable of defeating. However, researchers have shown that various obstacles and factors play a role in preventing and hindering users from taking privacy-protective actions, including a lack of privacy concern and unawareness of risks [156, 165]; usability issues, perceived usefulness, and the costs or difficulties of taking action [156, 157, 160, 162–165]; perceptions that (certain) protection techniques are not needed [157, 161, 162, 164]; lack of awareness of protection options (their existence and functionality) [156, 157]; and social factors including the behaviours of friends [156]. In sum, research on users’ reasoning regarding how and why they use (and do not use) privacy protection techniques, including simple and advanced PETs, indicates that users are unaware of online privacy risks, of the consequences of data disclosure, and of the protection techniques available to counter them. However, even if they were aware, the usability of the available tools and techniques would still impede the adoption and sustainable use of protection methods. In particular, users lack awareness of the heightened levels of privacy offered by advanced PETs [157]. As a result, a collective effort within the usable privacy community is urgently needed to explore suitable approaches to educate and inform users about the features and limitations of various PETs, when and how to use them, and the privacy-preserving methods used to protect their data. Additionally, we need to examine whether and how usable transparency on PETs can help users make informed decisions about their data, which is claimed to be protected by privacy-preserving measures.

The development of and attention towards tools and techniques for data minimisation at different levels have been increasing over the past few years, including tools and techniques for anonymity (for example, Tor) and tracking protection, crypto-based tools and techniques (such as end-to-end encrypted messaging), and differential privacy-based tools. However, informing users and providing usable explanations of PETs remains a challenge. We face challenges in providing usable transparency on PETs and explaining privacy-preserving mechanisms, including the lack of real-life analogies for complex mechanisms such as differentially private mechanisms and crypto-based protections, the need to cater for analogies with the digital world, and special issues relating to the effects of different factors, such as the right time, framing, and mode of delivery of information, on
transparency, all of which we discuss in Sect. 4.3. This section, meanwhile, discusses prominent work on human aspects of encryption, anonymity, and differential privacy, focusing on the usability of related PETs, their use, and users’ mental models, and, when applicable, approaches to explaining the underlying concepts to users.

3.6.1 Encryption

Throughout recent years, encryption has become a standard feature in popular software, including instant messaging apps like WhatsApp and Telegram and operating systems like Google’s Android and Apple’s iOS. In previous studies, however, it was found that users have difficulty coping with encryption when, for example, actively involved in managing secure emails [15, 23, 281–286]. Those who misuse or misapply encryption technologies can suffer devastating consequences as the technology they were promised may no longer protect them due to their mistakes. One effective and practical way to minimise potentially disastrous user errors in encryption is to integrate encryption into software and bypass the user entirely. Using this approach appears to be effective in scenarios where encryption is already widely deployed, such as smartphone encryption, HTTPS (HTTP with data encryption using SSL/TLS), and secure messaging. Despite this, bypassing the user and integrating encryption seamlessly can come with limitations. As a result of high levels of automation, users may lack the context necessary to respond correctly when systems require interaction. where encryption has been seamlessly applied, researchers have shown that users were confused and unsure about what actions to take when confronted with warnings [10, 287] and mistakenly sent out unencrypted messages and were unsure whether they should trust the secure email system [16]. Illustrating encryption even through a simple text disclosure improves perceptions versus not including encryption at all [298]. When Distler and colleagues [294] compared an e-voting application with visible encryption to one without, similar results were reported. Despite performing worse in terms of usability, visible encryption had a more positive impact on overall User eXperience (UX) and perceived security [294]. Also, for e-voting and online banking, high-level textual/not visual descriptions of the encryption mechanism applied to provide confidentiality during data transmission were found to have a considerable effect on perceived security and understanding [295]. In addition to the challenges that automation and seamless integration may present, communicating about the underlying privacy protection as part of increasing transparency about what happens to users’ data is crucial to increasing users’ trust and enabling them to make informed data sharing decisions. Further, usable transparency on the underlying protection can lead to the formation of comprehensive mental models of a system such as secure emailing which in turn counter the barriers to adopting such systems. In a study by Renauld et al. [162], usability issues, incomplete threat models, and a lack of knowledge of email architecture were reported as the primary barriers to end-to-end encrypted email adoption. So in the context of encryption, the question is what users’ mental models of


So, in the context of encryption, the questions are what users' mental models of encryption are and how encryption can be effectively explained to users so that they form correct mental models, develop trust, adopt effective privacy and security measures, and make informed decisions as a result. In this section, we discuss research on users' mental models of encryption, how to explain encryption effectively to users, and the behavioural changes that result from such explanations. We do not discuss the details of studies exploring the use of encryption-enabled applications and their usability.

Users' mental models of encryption: Several studies have examined users' mental models of secure communications and encryption, either in general [184] or in the context of different communication applications or services such as email [162], messaging services [163, 289–291], HTTPS [183], or VPNs [181], and all confirmed, to varying degrees, that users lack accurate and confident mental models of encryption. The models of encryption in users' minds follow a model of symmetric encryption and are essentially simplified into an access control abstraction [184]. Users confuse encryption with authentication, consider encryption to be just some sort of encoding or scrambling of data, and do not distinguish between end-to-end encryption and client-server encryption [163]. Several misconceptions and knowledge gaps about encryption and secure communication have been identified in different studies on users' mental models of secure communications or encryption, which we summarise below:

• Encryption is pointless and ineffective since it does not protect against resourceful attackers such as hackers or governmental actors [163, 183, 184, 289].
• Landline phone calls and SMS are more secure than (or as secure as) E2E encrypted communications [288, 290, 291, 298].
• Service providers (for example, WhatsApp as a messaging app with encryption enabled) can still access E2E encrypted contents, e.g. read messages [163, 289, 290, 292].
• The concept of endpoint authentication (verifying the identity of the communicating partner) in a cryptographic sense is lacking [287, 289, 290].

Improving mental models of encryption may lead users to adopt encryption-enabled services, such as secure communication apps, more readily and help them protect their privacy and security by eliminating erroneous beliefs and behaviours. Researchers have therefore attempted to improve users' mental models of encryption through different kinds of explanations, which we discuss next. We focus on works that directly improve understanding of encryption in general, and E2EE in particular, through a variety of explanations that convey different aspects and elements of the underlying system in order to correct users' mental models. Our discussion does not include works investigating how to avoid user mistakes, overcome usability issues, and improve security perceptions through visualisation (i.e. security indicators such as security icons) in the context of secure emailing and SSL/TLS-enabled connections (e.g. [17, 283, 296, 297]).
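To make the distinction that users reportedly blur between client-server encryption and end-to-end encryption [163] concrete, the following toy Python sketch (our own illustration, not taken from the cited studies) contrasts the two. It uses symmetric Fernet keys from the cryptography package purely as stand-ins for the key material a real messenger would negotiate; it is not a real messaging protocol.

# Toy contrast between client-server (transport) encryption and end-to-end
# encryption; Fernet keys stand in for whatever keys a real messenger negotiates.
from cryptography.fernet import Fernet

# Client-server encryption: the key is shared between Alice and the provider.
provider_key = Fernet(Fernet.generate_key())
in_transit = provider_key.encrypt(b"hi Bob")
# The provider holds the key, so it can read the message before relaying it:
assert provider_key.decrypt(in_transit) == b"hi Bob"

# End-to-end encryption: the key is known only to Alice and Bob.
e2e_key = Fernet(Fernet.generate_key())
relayed = e2e_key.encrypt(b"hi Bob")
# The provider only stores and relays the ciphertext; without e2e_key it cannot
# decrypt it, and only Bob recovers the plaintext:
assert e2e_key.decrypt(relayed) == b"hi Bob"

In the first case the provider can read messages, which matches what many users wrongly assume about E2E encrypted apps; in the second it cannot.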


Explaining encryption to users and correcting mental models: Bai et al. [288] conducted a lab-based study to evaluate the effectiveness of a tutorial in improving non-experts' E2EE mental models. The tutorial consisted of four main sections: a basic overview, information on what E2EE can and cannot protect against, clarifications of previously reported misconceptions, and a simplified explanation of how E2EE works cryptographically [288]. The first three modules focus on cueing functional mental models [293] (i.e. explaining the implications for users), and the fourth module examines whether simple structural information [293] on how E2EE works can support functional models. Overall, the study showed that educational interventions on E2EE risks, threats, and protections can improve users' mental models of E2EE [288]. Nonetheless, some misconceptions remained and some previously unreported ones emerged: participants appeared confused by terms like integrity and authenticity, and they assumed that E2EE could also protect against malware and endpoint access [288]. Also, several participants did not believe that encryption is impervious to hacking, or that E2E encrypted messages are more secure than voice calls [288].

Another study demonstrated that brief educational messages embedded within the messaging workflows of secure messaging apps could facilitate understanding of some key E2EE concepts (e.g. who could or could not read one's messages) when studied in isolation [186]. The educational messages in [186] consisted of one or more of five principles the authors extracted and formulated based on existing messages from academia and industry: (1) information on confidentiality, (2) risk communication, (3) a simplified structural model of how the technology works, (4) the metadata weakness, and (5) the endpoint weakness of E2EE systems. However, Akgul et al. did not see an effect of their interventions in a more realistic setting (integrated into a secure messaging app participants actually used), which suggests that more salient in-app educational messages may be more effective [186]. Additionally, researchers have investigated appropriate terminology (based on perceived privacy and security) for describing the encryption of a messaging app and of data transmission in multiple contexts [299, 300]. In the context of secure messaging apps, Stransky et al. also compared different text disclosures, icons, and animations of the encryption process regarding their effects on perceived trust, security, and privacy [298]. The "encrypted" text disclosure condition was perceived as the most secure and private, but different text disclosures had no significant impact on usability and app satisfaction. A surprising negative effect of security icons on users' perception of trust, security, and privacy was observed, while no significant effects were observed for animations. Stransky et al. concluded that users' perceptions of security mechanisms were influenced more by preconceived expectations and an app's reputation than by visualisations [298].

The use of metaphors is another approach to explaining encryption to users. When users are faced with a new, abstract, or complicated domain, metaphors can help them build a mental model by comparing it to a familiar and often simpler domain [304, 305]. By providing an appropriate metaphor, people can be encouraged to use their existing knowledge of the source domain to structure their thinking about the target of the explanation [301].


A study by Demjaha et al. [185] used metaphors to transform existing knowledge of more familiar domains and their features into functional understandings of E2EE, providing metaphorical descriptions of E2EE and testing their influence on users' understanding of the security properties E2EE provides, such as confidentiality and authenticity, as well as of what it does not provide, such as metadata protection. The E2EE metaphors were created based on explanations provided by users [185]. While the metaphors helped some participants identify security properties correctly, others incorrectly assumed security properties that E2EE does not provide [185]. All of the metaphors tested failed to evoke a correct mental model in participants [185]. Yet, compared with existing industry descriptions, metaphors developed from users' language were less detrimental to their understanding [185]. While Demjaha et al.'s study suggested that metaphors might help users better understand encryption, it did not explain why the metaphors were partly successful and partly unsuccessful. A later study by Schaewitz et al. [306] examined how users construct their functional understanding using the metaphors re-used from [185], conducting qualitative interviews to examine users' understanding of the metaphors as well as of the security properties of E2EE. A combination of the metaphors and existing beliefs, such as the trustworthiness of providers, contributed to participants' inferences about E2EE's security properties [306]. Though the metaphors improved confidentiality assessments, misconceptions about authenticity remained: according to most participants, authenticity was determined only by the protection of the end device, the credentials, and the password, not by security holes in the transmission [306].

Studies that relied on metaphorical explanations of PETs [185, 218, 306], while showing the plausible suitability of metaphors (see also Sect. 3.6.3, the paragraph on DP communication based on metaphors), confirm and reveal the problems associated with such descriptions. According to Thibodeau et al. [302], the power of metaphors can be moderated by cognitive, affective, and social-pragmatic factors. Also, metaphorical mappings, for example, emphasise some features of a target domain and ignore others [302], or imply features that do not exist [303]. Consequently, metaphors need to be carefully developed and evaluated according to the context in which they will be used, and they should counter the misconceptions (at least the ones already reported in the literature) to effectively convey complex ideas.

3.6.2 Anonymity

Modern societies rely heavily on anonymity. People seek anonymity for various online activities, including social activities such as participating in special interest groups, social networking, and receiving help and support, as well as instrumental activities such as searching for information and file sharing [179]. The core values users achieve through anonymity tools are freedom, personal privacy, economic prosperity, professional development, and fearless living [187]. Under the shield of anonymity, for example, a journalist can contact sources and conduct investigations, and vulnerable populations, such as refugees and victims of sexual harassment, can seek assistance and information. At the same time, data collection and analysis measures that utilise sophisticated AI technologies, complex processing practices in IoT systems, and tracking mechanisms are making it increasingly difficult to maintain anonymity.


To ensure anonymity, advanced tools and techniques are consequently needed, including those presented in Chap. 2. In contrast to research on mental models of encryption, users' mental models of anonymity and anonymisation in different contexts have rarely been studied. Researchers have, however, examined users' mental models of PETs that provide anonymity, as well as their use and their challenges. One PET used for anonymity, Tor [167] (read more about Tor in Sect. 2.3.1, under the paragraph on data minimisation at the communication level), has more than 2,000,000 active users [349], making it unique in its acceptance and adoption. Tor's anonymity set, which is the number of indistinguishable users, determines the strength of its anonymity [166]. The Tor browser, provided by the Tor Project (https://www.torproject.org/), makes users less distinguishable by blocking trackers, defending against surveillance, and resisting fingerprinting [350]. The adoption and use of the Tor browser by many people around the world are therefore very important for Tor's anonymity. In addition, the Tor browser's usability and UX play a vital role in the anonymity of the Tor network, as poor UX and usability discourage users from using the service. Consequently, after the formation of tools beyond just the Tor proxy in 2005 [351], researchers gradually started to work on the human aspects of the Tor network.
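As a rough, back-of-the-envelope illustration of why the size of the anonymity set matters (our own sketch, not a metric taken from the cited works), the following Python snippet computes a blind observer's chance of guessing the right user among k indistinguishable users and the corresponding uncertainty in bits.

# If k users are indistinguishable, a blind guess identifies the right user
# with probability 1/k, i.e. the observer faces log2(k) bits of uncertainty;
# every additional user strengthens everyone's anonymity a little.
import math

for k in (10, 1_000, 2_000_000):   # 2,000,000 is roughly the active user base cited above
    print(f"anonymity set of {k:>9,}: guess probability {1 / k:.2e}, {math.log2(k):.1f} bits of uncertainty")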


Usability and UX of the Tor network and browser: The results of different studies measuring latency on anonymity networks show that high latency is an important reason why people abandon anonymity networks [168, 169], and these findings led to efforts to reduce latency [170–172]. In addition to such network-based approaches, researchers have also examined the experience from the user's viewpoint. In 2007, Clark et al. [173] evaluated four competing methods of deploying Tor clients (before the deployment of the Tor browser) through cognitive walkthroughs. A lack of visual feedback, confusing menus, and jargon-filled documentation were among the hurdles identified in the study [173]. However, since the Tor browser, through which the Tor network is introduced to end users, first appeared in 2008, the findings of more recent studies are more pertinent. Norcie et al. identified several hurdles associated with installing and using the Tor browser bundle [174]. In a laboratory-based experiment involving 25 participants, almost two-thirds faced problems installing or using the Tor browser [174]. Several usability issues were identified by participants, including long launch times, browsing delays, and window discriminability [174]. Norcie et al. proposed several modifications to minimise these issues [174] and evaluated them in a subsequent study, which resulted in notable UX and usability improvements [174]. To configure Tor connections, the Tor browser uses Tor Launcher, a Graphical User Interface (GUI) whose usability is evaluated in [175]. Lee et al. show that 79% of their participants who attempted to connect to Tor in simulated censored environments failed [175]. Users often got frustrated and tried options at random [175]. The interface requires users to know technical terms, offers room for error, and does not provide feedback. Lee et al. [175] discuss various design improvements to solve the problems revealed, and Tor browser version 5.4.5 was updated with changes based on their work [175]. In contrast to previous research, Gallagher et al. [178] examined the use of the Tor browser over one week in a naturalistic setting. Their study revealed a variety of UX issues, including broken web pages, latency, a lack of common browsing conveniences, incorrect geolocation, and operational opacity [178]. Based on this insight, they suggested several UX improvements to reduce user frustration and mitigate the issues [178].

Attitudes, practices, and mental models of Tor users: Perceived anonymity and trust are reported as two of the most influential factors in deciding whether to use anonymity tools [188]. The right mental model of what an anonymity tool offers, and an understanding of how it works, are necessary to adopt and utilise it effectively; otherwise, deanonymisation may occur. Yet, little research has been conducted on how users understand and use anonymity tools and techniques. Non-experts' and experts' mental models of different privacy and security tools and techniques have been explored and compared in other studies [177, 180–183]. Gallagher and colleagues [177] explored the understanding and use of the Tor anonymity tool through interviews with both experts and non-experts in cybersecurity and anonymity (participants were categorised using the scale of Technical Knowledge of Privacy Tools from Kang et al. [165]). According to Gallagher et al. [177], there is a noticeable difference between the way experts and non-experts view, understand, and use Tor. Experts use Tor more frequently and for a variety of purposes, while non-experts tend to use it for a single specific purpose [177]. Among both experts and non-experts, there were behavioural traits and understanding gaps concerning Tor, to varying degrees, that could compromise anonymity [177]. Even experts exhibited gaps in understanding and behaviours that made them vulnerable to specific attacks, including DNS leaks [177]. Although experts were generally well-versed in the Tor architecture and operation, as well as its threat model, and understood that Tor is not a complete solution to all potential anonymity issues, non-experts had an insufficient understanding of Tor's threats and had incomplete and overly abstract mental models, which obscured or misrepresented important details relating to anonymity and privacy [177]. Some non-experts (5 out of 11) believed that Tor provided more security than it actually did [177]. To the experts, Tor is a complicated decentralised network that transmits packets of information [177]. When describing how Tor works, experts explained it in terms of the architectural and engineering details of the software and the network [177], showing a structural mental model. Non-experts, however, often treated Tor as a 'service' that performed specific functions, such as providing security, and viewed the components of Tor's network architecture as abstract and opaque [177]. When describing Tor, non-experts stressed its potential usage scenarios and the social values it could offer, including freedom from surveillance and personal liberty [177]. Thus, non-experts focus more on the functional rather than the structural features of a system, which shows, similar to other works [184–186], that functional descriptions could better facilitate correct mental models of privacy or security systems, at least for the general public.


The hurdles to using and adopting anonymity tools such as Tor revealed in the literature, and the number of Tor users compared with the number of Internet users and with those who might need anonymity to carry out their rightful tasks but do not use Tor or similar tools, suggest an urgent need not only to study these issues with a wider group of potential users, such as vulnerable populations, in mind, but also to explore different approaches that can counter the problems and increase the adoption rate (which, in turn, can increase the level of anonymity). Nevertheless, vulnerable populations, who may require the most care to preserve their identities, are overlooked, and very little research has explored solutions to the problems investigated and revealed in the literature regarding the human aspects of anonymity tools. Two recent studies from 2022 explored how the adoption of the Tor browser can be improved through nudging and personalisation. Story et al. [190] argue that nudging interventions (see Sect. 1.4.13 for the definition of a nudge and Sect. 4.2.5.1 for the ethics of nudging) targeting different hurdles may help users adopt the Tor browser and, accordingly, they designed and tested different nudges. To help participants form accurate perceptions of the Tor browser, they first tested an informational nudge, in which participants were shown a description of privacy threats and of the protection offered by the Tor browser. After that, they added an implementation intention for action planning, aimed at helping participants identify opportunities to use Tor. Additionally, they added an implementation intention for coping planning, which aimed to help participants cope with challenges associated with using the Tor browser, such as extreme website load times. Their longitudinal field experiment showed that the action planning nudge was ineffective and that the coping planning nudge increased the use of Tor only in the week following the intervention [190]. The informational nudge, however, increased the use of the Tor browser in both the short term and the long term [190]. In another study, Albayram et al. examined whether "personalised" content, in the form of videos tailored to decision-making style and level of IT expertise, could increase Tor browser adoption [189]. The results revealed, however, that personalised videos did not lead to a significantly higher adoption rate compared to a general video raising awareness of the Tor browser [189].

3.6.3 Differential Privacy

Differential privacy (DP) has become a cornerstone of privacy-preserving data analysis. By adding noise to data in a controlled way, it prevents individuals from being identified while still allowing valuable insights to be extracted from the data (see Sect. 2.3.1, under data minimisation at the data level, for more details on differential privacy). Differentially private mechanisms have been integrated into the systems of several large companies in recent years, including Apple [192], Google [193], Microsoft [194], and LinkedIn [195].


To prevent the linking of Census responses to specific individuals, the U.S. Census Bureau developed a disclosure avoidance system based on differential privacy that adds random noise to Census tabulations [196]; for a list of real-world differential privacy deployments and their privacy parameters, the interested reader can consult Damien Desfontaines' blog [191]. In light of differentially private mechanisms being deployed in a variety of variants and contexts, the human aspects of DP-enabled systems need to be addressed. Despite the abundance of technical literature on differentially private mechanisms (e.g. [197–200], to name just a few), little is known about their ethical, legal [209–213, 222], and HCI implications and challenges [214–220, 222] for different stakeholders, ranging from data subjects to policymakers and data analysts. Very little research has been conducted on communicating DP to different types of users. For instance, to make DP and the underlying mechanisms more accessible, researchers have proposed a few tools and interfaces [201, 203–208] and have partly evaluated them with users such as research practitioners and domain experts [202, 204, 205, 208, 222]. Research on differentially private analysis has also focused on describing it to the end users whose data is to be included in differentially private analyses and who need explanations of, or transparency about, the underlying mechanisms to be able to make informed decisions. In this part, we provide a brief discussion of existing approaches to explaining DP, their effects, and their problems.

DP Communication based on publicly available descriptions: Researchers have taken different approaches to communicating differential privacy (and the underlying DP mechanisms) to users and to evaluating how effectively DP transparency affects users' understanding and data sharing decisions. In one approach [214, 216], adapted versions of verbal descriptions of DP, such as those provided by companies and organisations, news outlets, and academic publications, are used to analyse users' privacy expectations and willingness to share data in DP-enabled systems. According to these studies [214, 216], promising users that DP will protect their privacy (e.g. that DP reduces the risk of their information being disclosed) makes them more likely to disclose their personal information (especially sensitive information). Despite this, users found the descriptions difficult to understand [216], and their expectations of how privacy would be protected based on the descriptions did not match reality [214]. Cummings et al. report that DP can address users' privacy concerns, but that its real-world descriptions may mislead users by raising unrealistic expectations about privacy features [214]. Furthermore, Xiong et al. report that, despite their participants' struggle with DP descriptions, descriptions explaining implications, such as what happens when the aggregator's database is compromised, could facilitate data sharing decisions and understanding of the different models of deploying DP, including both the local and the central model [216]. To gain further insights into differential privacy communication in a different cultural context, Kühtreiber and colleagues [220] replicated Xiong et al.'s study [216] (conducted with participants from the US and India) with German participants. In line with Xiong et al.'s study, they found that participants had difficulty understanding differential privacy and that a new way of conveying its effects is needed [220]. Nonetheless, the German participants were more willing to share information than those from the United States and India [220].


Fig. 3.2 The three spinner metaphors used in [215]. Bullek et al. refer to “bias” as the probability the participant must answer truthfully

DP Communication based on metaphors: Instead of relying on DP's current descriptions, which were found to be insufficient and unhelpful for users' understanding and decision-making [214, 216], researchers have also utilised metaphors [215, 218, 221]. Bullek et al. used a spinner metaphor to describe an alternative local DP technique, the Randomised Response Technique (RRT) [215]. In their study, each spinner showed the probability of sharing a truthful response to the related question and the probability of sharing a "Yes" or "No" response regardless of the participant's true answer (see Fig. 3.2). Participants significantly preferred the spinner with the highest probability of perturbing responses [215]. The spinner with a higher likelihood of revealing truthful responses, however, was preferred by some participants because it minimised the ethical implications of lying [215]. Karegar et al. [218, 221] also used a spinner metaphor to describe local DP; their spinner, however, illustrated the privacy-accuracy trade-off (see Fig. 3.3). Participants in [218] considered the trade-off conveyed by the metaphor, and all of them (N = 10) preferred the spinner with a lower probability of revealing true responses. Furthermore, Karegar et al. averted ethical concerns by conveying that the obfuscation would be handled by an app, not by the participants themselves [218, 221]; the feedback from individuals did not reveal any perception of data perturbation as unethical or associated with lying [218]. Besides the spinner metaphor, Karegar et al. produced and evaluated pictorial metaphors for DP in a range of data analysis scenarios (local DP, typical central DP, and central DP for federated learning), such as noisy picture metaphors, which were elaborated with short, simple text [218, 221]. In local DP, simply speaking, noise is added to individual data points, whereas in central (global) DP, noise is added by the aggregator to the output of a query on a database. Figure 3.4 shows the noisy picture metaphor for the local model, and Fig. 3.5 shows the metaphor based on picture pixelation for the central model. The privacy features of DP, such as plausible deniability and the trade-off between privacy and utility, are conveyed to lay users to varying degrees by the proposed metaphors [218, 221]. At the same time, the study highlights several challenges that require further attention if metaphors are to be used [218].


Fig. 3.3 The spinner metaphor used in [218]

Fig. 3.4 Noisy picture metaphor for local model used in [218] (panels show the original data and increasing amounts of added noise, from no added noise to very high; the accuracy of the outcome decreases from highest accuracy/no privacy to no accuracy/high privacy)

Karegar et al. recommend communicating about the reduction of identification risks (i.e. focusing on the implications for users in terms of benefits and risks rather than on "how" the technology works), addressing misconceptions about digital-world analogies (see Sect. 4.3 for a complete discussion of the challenges in providing usable transparency on PETs), and countering the inherent shortcomings of metaphors, including conceptual baggage, i.e. privacy features implied by the metaphors that the underlying system does not have, through suitable additional accompanying information [218].
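To make the randomisation behind the spinner metaphors concrete, the following minimal Python sketch implements a forced-response variant of the Randomised Response Technique (our own illustration; the exact probabilities and wording in [215, 218] differ). Each respondent answers truthfully only with probability p_truth, gaining plausible deniability, yet the analyst can still recover an unbiased estimate of the population proportion.

# Forced-response randomised response: with probability p_truth the spinner
# tells the respondent to answer truthfully; otherwise a random "yes"/"no"
# is reported regardless of the truth.
import random

def randomised_response(true_answer: bool, p_truth: float) -> bool:
    if random.random() < p_truth:
        return true_answer              # spinner landed on "answer truthfully"
    return random.random() < 0.5        # forced random answer

def estimate_true_rate(reports: list, p_truth: float) -> float:
    observed_yes = sum(reports) / len(reports)
    # E[observed_yes] = p_truth * true_rate + (1 - p_truth) * 0.5, so invert:
    return (observed_yes - (1 - p_truth) * 0.5) / p_truth

population = [random.random() < 0.30 for _ in range(100_000)]    # 30% true "yes"
reports = [randomised_response(answer, p_truth=0.75) for answer in population]
print(round(estimate_true_rate(reports, p_truth=0.75), 3))       # close to 0.30

The higher the probability of a forced random answer, the stronger the deniability of any individual report and the noisier the aggregate estimate, which is exactly the privacy-accuracy trade-off the spinners in [215, 218] try to convey.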


Fig. 3.5 Noisy picture metaphor for central model used in [218] (panels show the original data collected, a selfie of users, the original result of the data analysis, and the amount of added noise from none to very high; the accuracy of the outcome decreases from high accuracy/no privacy to no accuracy/high privacy)
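To complement the local-model examples above, the following minimal sketch (our own illustration with an arbitrarily chosen epsilon, not the mechanism evaluated in [218]) shows the central model: users hand their raw values to a trusted aggregator, which adds Laplace noise once, to the output of a counting query, rather than each user perturbing their own record.

# Central-model DP in miniature: noise is added by the aggregator to the query
# result. For a counting query the sensitivity is 1, so the noise scale is
# 1/epsilon; a smaller epsilon means more noise (more privacy, less accuracy).
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-transform sampling of a Laplace(0, scale) random variable.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(records: list, epsilon: float) -> float:
    return sum(records) + laplace_noise(scale=1.0 / epsilon)

records = [random.random() < 0.30 for _ in range(10_000)]   # true count around 3,000
print(dp_count(records, epsilon=0.5))                       # the noisy count the aggregator releases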

Cummings et al. also emphasise the importance of explicitly informing users of the risks associated with their data in DP contexts, for example, by explaining that although personal data may not leak through DP statistics, it can still leak to other entities [214] if the data aggregator's servers are hacked or accessed by untrusted parties.

DP communication based on privacy risks and DP parameters: A recent study by Franzen et al. [217] proposed using quantitative privacy risk notifications for DP privacy assurances, in response to calls to communicate privacy risks to users. Franzen et al. adapted empirically validated research on quantitative risk communication from the medical domain [217]. Six basic formats are used in medical risk communication: percentages, simple frequencies, fractions, numbers needed to treat, 1-in-x, and odds. Of these, Franzen et al. adopted percentages (e.g. 76%) and simple frequencies (e.g. 34 out of 90), arguing that the other formats can be easily misunderstood, are not flexible or accurate, or do not have sufficient research behind them [217]. In their evaluation of how well the proposed notifications conveyed privacy risk information through frequencies and percentages, Franzen et al. found that their notifications could communicate objective information similarly to existing qualitative notifications (derived from [216]), but individuals were less confident in their understanding of them [217]. Further, Franzen et al. discovered that several of their newly proposed notifications, as well as the currently used qualitative notifications, disadvantage individuals with low numeracy (measured with the Berlin Numeracy Test (BNT), which provides a flow diagram to classify participants into one of four levels of numeracy) [217]. Low-numeracy individuals appeared overconfident about their actual understanding of the privacy risks, making them less likely to seek the additional information they need before making a decision [217]. Communication of privacy risks, and of the risk reductions achieved by deploying privacy techniques such as DP, should therefore consider the fairness and inclusiveness of the proposed approaches.

As described in Sect. 2.3.1, certain algorithm parameters govern how privacy protection for individuals is balanced against data utility in differential privacy implementations. Consequently, data controllers who fail to disclose these privacy parameters may obscure the limited protection offered by their implementation.
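As a hedged illustration of why disclosing the privacy parameter matters (a standard worst-case bound, not necessarily the formula used in [217]): under epsilon-DP, the likelihood ratio of any released output under "my record was included" versus "it was not" is bounded by exp(epsilon), which caps how much an adversary's prior suspicion can grow.

# Worst-case effect of epsilon on an adversary's belief that a target's record
# was included: prior odds can grow by at most a factor of exp(epsilon).
import math

def max_posterior(prior: float, epsilon: float) -> float:
    prior_odds = prior / (1 - prior)
    worst_case_odds = prior_odds * math.exp(epsilon)
    return worst_case_odds / (1 + worst_case_odds)

for eps in (0.1, 1.0, 8.0):
    print(f"epsilon = {eps:>4}: a 1% prior suspicion can grow to at most {max_posterior(0.01, eps):.1%}")

With epsilon = 0.1 the suspicion barely moves, while with epsilon = 8 it can, in the worst case, grow to roughly 97%, which is why an undisclosed or very large parameter can render the promised protection largely symbolic.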


Researchers have therefore explored whether explanations that hide critical information can persuade users to share their data, i.e. whether such explanations can be misused to trick people into sharing more than they otherwise would. The results of Smart et al.'s study [219] revealed that neither the choice of privacy parameter nor the choice of explanation had a significant effect on willingness to share. A majority of their participants (76%) decided whether or not to share their data before reading about the privacy protections [219]. The limited influence of DP transparency on users' decisions, perhaps because people do not understand or pay attention to privacy notices, raises the issue of privacy theatre. Privacy theatre means that PETs may provide the "feeling of improved privacy while doing little or nothing to actually improve privacy" [223]. By using explanations with impaired transparency, service providers can hide important privacy features, such as DP parameters and their consequences, while still collecting users' data. In light of such problems, it is necessary to avoid placing too much of the burden of ensuring that service providers' privacy mechanisms are adequate on users, as also discussed in [219], and policymakers and other stakeholders should be involved in examining a variety of approaches that could influence how service providers support users' privacy.

3.7 Towards Usable Privacy Notice and Choice and Better Privacy Decisions

A notice is a presentation of terms, whereas a choice implies acceptance of those terms. Privacy notices can take a variety of forms, ranging from general privacy policies to consent forms and cookie consent banners. Under privacy laws and regulations, internal processes for storing, processing, retaining, and sharing personal information should be transparent to users. Transparent data processing practices enable users to take action, i.e. to make a choice and intervene in the processing of their data. Nowadays, the notice and choice mechanism is the de facto way of notifying users about a system's data practices and enabling them to act, i.e. to make informed privacy decisions. To assist designers, developers, and researchers in identifying notice and choice requirements and developing a comprehensive notice concept that addresses the needs of diverse audiences, Schaub et al. identified the relevant dimensions that define the design space for privacy notices [348]. In their design space, notices are categorised by timing (when they are provided, e.g. just-in-time, periodic, or on-demand), channel (how they are delivered, e.g. on primary, secondary, or public channels), modality (the interaction modes used, such as visual, auditory, and haptic), and control (the way choices are presented, such as decoupled ones) [348]. Based on a user-centred analysis of how people exercise privacy choices in real-world systems, Feng et al. constructed a design space for privacy choices [347], in which five dimensions are identified.


Two of these dimensions are unique to privacy choices: type (what kinds of choices are offered, e.g. binary, multiple, or contextualised) and functionality (what capabilities are offered to support the privacy choice process, e.g. presentation, feedback, or enforcement); the remaining three dimensions, namely timing, channel, and modality [347], are shared with the design space of privacy notices [348]. Unfortunately, current methods of providing notice and choice, such as privacy policies, are ineffective because they are neither usable nor useful to users. Throughout the years, both the HCI and privacy research communities have called for more usable privacy notices and choices. Privacy notices can be provided to increase the transparency of data practices. While transparency has been emphasised as an important practice for decades, existing privacy notices often fail to help users make informed choices. In Sect. 4.3.3, we discuss the challenges of providing usable transparency, the factors influencing usable transparency, and the challenge of preventing transparency from becoming "a sleight of privacy" [128]. Even with effective privacy notices, privacy protection cannot be guaranteed without informed privacy choices; in the end, whether people's data privacy is protected is determined by their privacy choices. Section 4.4.2 discusses the current problems concerning notice and choice. It also discusses the general research approaches that propose solutions to these problems, including those that advocate reducing the active role of users in consenting and relaxing the legal requirements, those that propose consent delegation, and those that hold that consent and transparency requirements are not intrinsically flawed, but rather that the way notices and choices are currently implemented is. As part of the latter approach, researchers have presented and tested solutions addressing what information notices should provide to users, how to communicate such information, and when and how to engage users in making privacy choices. Section 5.6 describes current approaches to designing usable privacy notices and choices based on exploring what information needs to be provided to users and how it should be presented, i.e. how the content of privacy notices should be designed. Further, Sects. 5.7 and 5.5.2 discuss approaches based on analysing how, when, how often, and to what extent users should be engaged with the content or interact with notices so that they can make active decisions. Instead of altering content, such solutions either emphasise users' active engagement with the content in order to catch their attention, or automate privacy management to a certain extent by using appropriate privacy defaults and dynamic support for requesting decisions based on context.

References

1. Dupree, J., Devries, R., Berry, D. & Lank, E. Privacy personas: Clustering users via attitudes and behaviors toward security practices. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 5228–5239 (2016) 2. Phelan, C., Lampe, C. & Resnick, P. It's creepy, but it doesn't bother me. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 5240–5251 (2016) 3. Paine, C., Reips, U., Stieger, S., Joinson, A. & Buchanan, T. Internet users' perceptions of 'privacy concerns' and 'privacy actions'. International Journal Of Human-Computer Studies. 65, 526–536 (2007) 4. Ooi, K., Hew, J. & Lin, B. Unfolding the privacy paradox among mobile social commerce users: a multi-mediation approach. Behaviour & Information Technology. 37, 575–595 (2018)


5. Woodruff, A., Pihur, V., Consolvo, S., Brandimarte, L. & Acquisti, A. Would a Privacy Fundamentalist Sell Their DNA for $1000... If Nothing Bad Happened as a Result? The Westin Categories, Behavioral Intentions, and Consequences. 10th Symposium On Usable Privacy And Security (SOUPS 2014). pp. 1–18 (2014) 6. Raja, F., Hawkey, K., Hsu, S., Wang, K. & Beznosov, K. A brick wall, a locked door, and a bandit: a physical security metaphor for firewall warnings. Proceedings of The Seventh Symposium On Usable Privacy And Security. pp. 1–20 (2011) 7. Voronkov, A., Martucci, L. & Lindskog, S. System Administrators Prefer Command Line Interfaces, Don’t They?: An Exploratory Study of Firewall Interfaces. 15th Symposium On Usable Privacy And Security. pp. 259–271 (2019) 8. Krombholz, K., Mayer, W., Schmiedecker, M. & Weippl, E. “I Have No Idea What I’m Doing” - On the Usability of Deploying HTTPS. 26th USENIX Security Symposium (USENIX Security 17). pp. 1339–1356 (2017,8) 9. Sunshine, J., Egelman, S., Almuhimedi, H., Atri, N. & Cranor, L. Crying wolf: An empirical study of ssl warning effectiveness.. USENIX Security Symposium. pp. 399–416 (2009) 10. Akhawe, D. & Felt, A. Alice in Warningland: A Large-Scale Field Study of Browser Security Warning Effectiveness.. USENIX Security Symposium. 13 pp. 257–272 (2013) 11. Nicholson, J., Coventry, L. & Briggs, P. Can we fight social engineering attacks by social means? Assessing social salience as a means to improve phish detection.. SOUPS. pp. 285–298 (2017) 12. Tuncay, G., Qian, J. & Gunter, C. See no evil: phishing for permissions with false transparency. Proceedings Of The 29th USENIX Conference On Security Symposium. pp. 415–432 (2020) 13. Greene, K., Steves, M., Theofanos, M. & Kostick, J. User context: an explanatory variable in phishing susceptibility. In Proc. 2018 Workshop Usable Security. (2018) 14. Reinheimer, B., Aldag, L., Mayer, P., Mossano, M., Duezguen, R., Lofthouse, B., Von Landesberger, T. & Volkamer, M. An investigation of phishing awareness and education over time: When and how to best remind users. Proceedings Of The Sixteenth USENIX Conference On Usable Privacy And Security. pp. 259–284 (2020) 15. Ruoti, S., Andersen, J., Dickinson, L., Heidbrink, S., Monson, T., O’neill, M., Reese, K., Spendlove, B., Vaziripour, E., Wu, J. & Others A usability study of four secure email tools using paired participants. ACM Transactions On Privacy And Security (TOPS). 22, 1–33 (2019) 16. Ruoti, S., Kim, N., Burgon, B., Van Der Horst, T. & Seamons, K. Confused Johnny: when automatic encryption leads to confusion and mistakes. Proceedings Of The Ninth Symposium On Usable Privacy And Security. pp. 1–12 (2013) 17. Atwater, E., Bocovich, C., Hengartner, U., Lank, E. & Goldberg, I. Leading Johnny to water: Designing for usability and trust. Proceedings Of The Eleventh USENIX Conference On Usable Privacy And Security. pp. 69–88 (2015) 18. Lee, K., Sjöberg, S. & Narayanan, A. Password policies of most top websites fail to follow best practices. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 561–580 (2022) 19. Pearman, S., Thomas, J., Naeini, P., Habib, H., Bauer, L., Christin, N., Cranor, L., Egelman, S. & Forget, A. Let’s go in for a closer look: Observing passwords in their natural habitat. Proceedings Of The 2017 ACM SIGSAC Conference On Computer And Communications Security. pp. 295– 310 (2017) 20. Ur, B., Bees, J., Segreti, S., Bauer, L., Christin, N. & Cranor, L. Do users’ perceptions of password security match reality?. 
Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 3748–3760 (2016) 21. Zimmermann, V. & Gerber, N. The password is dead, long live the password–A laboratory study on user perceptions of authentication schemes. International Journal Of Human-Computer Studies. 133 pp. 26–44 (2020)


22. Hayashi, E., Hong, J. & Christin, N. Security through a different kind of obscurity: evaluating distortion in graphical authentication schemes. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2055–2064 (2011) 23. Whitten, A. & Tygar, J. Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.. USENIX Security Symposium. 348 pp. 169–184 (1999) 24. Garfinkel, S. & Lipford, H. Usable security: History, themes, and challenges. Synthesis Lectures On Information Security, Privacy, And Trust. 5, 1–124 (2014) 25. Edu, J., Such, J. & Suarez-Tangil, G. Smart home personal assistants: a security and privacy review. ACM Computing Surveys (CSUR). 53, 1–36 (2020) 26. Yao, Y., Basdeo, J., Mcdonough, O. & Wang, Y. Privacy Perceptions and Designs of Bystanders in Smart Homes. Proc. ACM Hum.-Comput. Interact.. 3 (2019,11) 27. Yao, Y., Basdeo, J., Kaushik, S. & Wang, Y. Defending my castle: A co-design study of privacy mechanisms for smart homes. Proceedings Of The 2019 Chi Conference On Human Factors In Computing Systems. pp. 1–12 (2019) 28. Zeng, E., Mare, S. & Roesner, F. End user security and privacy concerns with smart homes. Symposium On Usable Privacy And Security (SOUPS). 220 (2017) 29. Rodden, T., Fischer, J., Pantidi, N., Bachour, K. & Moran, S. At home with agents: exploring attitudes towards future smart energy infrastructures. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 1173–1182 (2013) 30. Tabassum, M., Kosinski, T. & Lipford, H. I don’t own the data”: End User Perceptions of Smart Home Device Data Practices and Risks.. SOUPS@ USENIX Security Symposium. (2019) 31. Zheng, S., Apthorpe, N., Chetty, M. & Feamster, N. User perceptions of smart home IoT privacy. Proceedings Of The ACM On Human-computer Interaction. 2, 1–20 (2018) 32. Naeini, P., Bhagavatula, S., Habib, H., Degeling, M., Bauer, L., Cranor, L. & Sadeh, N. Privacy expectations and preferences in an IoT world. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 399–412 (2017) 33. Barbosa, N., Park, J., Yao, Y. & Wang, Y. “What if?” Predicting Individual Users’ Smart Home Privacy Preferences and Their Changes.. Proc. Priv. Enhancing Technol.. 2019, 211–231 (2019) 34. Choe, E., Consolvo, S., Jung, J., Harrison, B. & Kientz, J. Living in a glass house: a survey of private moments in the home. Proceedings Of The 13th International Conference On Ubiquitous Computing. pp. 41–44 (2011) 35. Malkin, N., Deatrick, J., Tong, A., Wijesekera, P., Egelman, S. & Wagner, D. Privacy attitudes of smart speaker users. Proceedings On Privacy Enhancing Technologies. 2019, 250–271 (2019) 36. Lau, J., Zimmerman, B. & Schaub, F. Alexa, are you listening? Privacy perceptions, concerns and privacy-seeking behaviors with smart speakers. Proceedings Of The ACM On Humancomputer Interaction. 2, 1–31 (2018) 37. Malkin, N., Bernd, J., Johnson, M. & Egelman, S. “What can’t data be used for?” Privacy expectations about smart tvs in the us. Proceedings Of The 3rd European Workshop On Usable Security (EuroUSEC), London, UK. (2018) 38. Jin, H., Guo, B., Roychoudhury, R., Yao, Y., Kumar, S., Agarwal, Y. & Hong, J. Exploring the needs of users for supporting privacy-protective behaviors in smart homes. Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–19 (2022) 39. Kraemer, M. & Flechais, I. Disentangling privacy in smart homes. (2018) 40. Leitão, R. Digital technologies and their role in intimate partner violence. 
Extended Abstracts Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–6 (2018) 41. Chhetri, C. & Motti, V. Eliciting privacy concerns for smart home devices from a user centered perspective. Information In Contemporary Society: 14th International Conference, IConference 2019, Washington, DC, USA, March 31–April 3, 2019, Proceedings 14. pp. 91–101 (2019)


42. Chalhoub, G., Flechais, I., Nthala, N. & Abu-Salma, R. Innovation inaction or in action? the role of user experience in the security and privacy design of smart home cameras. Proceedings Of The Symposium On Usable Privacy And Security. (2020) 43. Chalhoub, G., Flechais, I., Nthala, N., Abu-Salma, R. & Tom, E. Factoring user experience into the security and privacy design of smart home devices: A case study. Extended Abstracts Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–9 (2020) 44. Bernd, J., Abu-Salma, R. & Frik, A. Bystanders’ Privacy: The Perspectives of Nannies on Smart Home Surveillance.. FOCI@ USENIX Security Symposium. (2020) 45. Bernd, J., Abu-Salma, R., Choy, J. & Frik, A. Balancing Power Dynamics in Smart Homes: Nannies’ Perspectives on How Cameras Reflect and Affect Relationships. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 687–706 (2022) 46. Ahmad, I., Farzan, R., Kapadia, A. & Lee, A. Tangible privacy: Towards user-centric sensor designs for bystander privacy. Proceedings Of The ACM On Human-Computer Interaction. 4, 1–28 (2020) 47. Koelle, M., Kranz, M. & Möller, A. Don’t look at me that way! Understanding user attitudes towards data glasses usage. Proceedings Of The 17th International Conference On Humancomputer Interaction With Mobile Devices And Services. pp. 362–372 (2015) 48. Price, B., Stuart, A., Calikli, G., Mccormick, C., Mehta, V., Hutton, L., Bandara, A., Levine, M. & Nuseibeh, B. Logging you, logging me: A replicable study of privacy and sharing behaviour in groups of visual lifeloggers. Proceedings Of The ACM On Interactive, Mobile, Wearable And Ubiquitous Technologies. 1, 1–18 (2017) 49. De Guzman, J., Thilakarathna, K. & Seneviratne, A. Security and privacy approaches in mixed reality: A literature survey. ACM Computing Surveys (CSUR). 52, 1–37 (2019) 50. Marky, K., Prange, S., Krell, F., Mühlhäuser, M. & Alt, F. “You just can’t know about everything”: Privacy Perceptions of Smart Home Visitors. Proceedings Of The 19th International Conference On Mobile And Ubiquitous Multimedia. pp. 83–95 (2020) 51. Geeng, C. & Roesner, F. Who’s in control? Interactions in multi-user smart homes. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 52. Tabassum, M., Kropczynski, J., Wisniewski, P. & Lipford, H. Smart home beyond the home: A case for community-based access control. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–12 (2020) 53. Mare, S., Roesner, F. & Kohno, T. Smart Devices in Airbnbs: Considering Privacy and Security for both Guests and Hosts.. Proc. Priv. Enhancing Technol.. 2020, 436–458 (2020) 54. Reidenberg, J., Breaux, T., Cranor, L., French, B., Grannis, A., Graves, J., Liu, F., McDonald, A., Norton, T. & Ramanath, R. Disagreeable privacy policies: Mismatches between meaning and users’ understanding. Berkeley Tech. LJ. 30 pp. 39 (2015) 55. Obar, J. & Oeldorf-Hirsch, A. The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services. Information, Communication & Society. 23, 128–147 (2020) 56. Schaub, F., Balebako, R. & Cranor, L. Designing effective privacy notices and controls. IEEE Internet Computing. 21, 70–77 (2017) 57. Thakkar, P., He, S., Xu, S., Huang, D. & Yao, Y. “It would probably turn into a social fauxpas”: Users’ and Bystanders’ Preferences of Privacy Awareness Mechanisms in Smart Homes. 
Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2022) 58. Koshy, V., Park, J., Cheng, T. & Karahalios, K. “We Just Use What They Give Us”: Understanding Passenger User Perspectives in Smart Homes. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2021)


59. Colnago, J., Feng, Y., Palanivel, T., Pearman, S., Ung, M., Acquisti, A., Cranor, L. & Sadeh, N. Informing the design of a personalized privacy assistant for the internet of things. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2020) 60. Emami-Naeini, P., Agarwal, Y., Cranor, L. & Hibshi, H. Ask the experts: What should be on an IoT privacy and security label?. 2020 IEEE Symposium On Security And Privacy (SP). pp. 447–464 (2020) 61. Lee, H. & Kobsa, A. Confident privacy decision-making in IoT environments. ACM Transactions On Computer-Human Interaction (TOCHI). 27, 1–39 (2019) 62. Prange, S., Shams, A., Piening, R., Abdelrahman, Y. & Alt, F. Priview–exploring visualisations to support users’ privacy awareness. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–18 (2021) 63. Cobb, C., Surbatovich, M., Kawakami, A., Sharif, M., Bauer, L., Das, A. & Jia, L. How risky are real users’ IFTTT applets?. Proceedings Of The Sixteenth USENIX Conference On Usable Privacy And Security. pp. 505–529 (2020) 64. Abdi, N., Ramokapane, K. & Such, J. More than Smart Speakers: Security and Privacy Perceptions of Smart Home Personal Assistants.. SOUPS@ USENIX Security Symposium. (2019) 65. Huang, Y., Obada-Obieh, B. & Beznosov, K. Amazon vs. my brother: How users of shared smart speakers perceive and cope with privacy risks. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2020) 66. Cho, E., Sundar, S., Abdullah, S. & Motalebi, N. Will deleting history make alexa more trustworthy? effects of privacy and content customization on user experience of smart speakers. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2020) 67. Abdi, N., Zhan, X., Ramokapane, K. & Such, J. Privacy norms for smart home personal assistants. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2021) 68. Albayaydh, W. & Flechais, I. Exploring bystanders’ privacy concerns with smart homes in Jordan. Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–24 (2022) 69. Ghorayeb, A., Comber, R. & Gooberman-Hill, R. Older adults’ perspectives of smart home technology: Are we developing the technology that older people want?. International Journal Of Human-computer Studies. 147 pp. 102571 (2021) 70. Loideain, N. & Adams, R. From Alexa to Siri and the GDPR: The gendering of Virtual Personal Assistants and the role of Data Protection Impact Assessments. Computer Law & Security Review. 36 pp. 105366 (2020). 71. Nissenbaum, H. Privacy in Context: Technology, Policy, and the Integrity of Social Life. (Stanford University Press,2009) 72. Nissenbaum, H. Privacy as contextual integrity. Wash. L. Rev.. 79 pp. 119 (2004) 73. Sannon, S. & Forte, A. Privacy Research with Marginalized Groups: What We Know, What’s Needed, and What’s Next. Proceedings Of The ACM On Human-Computer Interaction. 6, 1–33 (2022) 74. Dhondt, K., Le Pochat, V., Voulimeneas, A., Joosen, W. & Volckaert, S. A Run a Day Won’t Keep the Hacker Away: Inference Attacks on Endpoint Privacy Zones in Fitness Tracking Social Networks. Proceedings Of The 2022 ACM SIGSAC Conference On Computer And Communications Security. pp. 801–814 (2022) 75. Gabriele, S. & Chiasson, S. Understanding fitness tracker users’ security and privacy knowledge, attitudes and behaviours. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–12 (2020)


76. Velykoivanenko, L., Niksirat, K., Zufferey, N., Humbert, M., Huguenin, K. & Cherubini, M. Are those steps worth your privacy? Fitness-tracker users’ perceptions of privacy and utility. Proceedings Of The ACM On Interactive, Mobile, Wearable And Ubiquitous Technologies. 5, 1–41 (2021) 77. Alqhatani, A. & Lipford, H. “There is Nothing That i Need to Keep Secret”: Sharing Practices and Concerns of Wearable Fitness Data. Proceedings Of The Fifteenth USENIX Conference On Usable Privacy And Security. pp. 421–434 (2019) 78. Raij, A., Ghosh, A., Kumar, S. & Srivastava, M. Privacy risks emerging from the adoption of innocuous wearable sensors in the mobile environment. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 11–20 (2011) 79. Mink, J., Yuile, A., Pal, U., Aviv, A. & Bates, A. Users Can Deduce Sensitive Locations Protected by Privacy Zones on Fitness Tracking Apps. Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–21 (2022) 80. Gorm, N. & Shklovski, I. Sharing steps in the workplace: Changing privacy concerns over time. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 4315–4319 (2016) 81. Gui, X., Chen, Y., Caldeira, C., Xiao, D. & Chen, Y. When fitness meets social networks: Investigating fitness tracking and social practices on werun. Proceedings Of The 2017 CHI Conference On Human Factors In Computing Systems. pp. 1647–1659 (2017) 82. Zhou, X., Krishnan, A. & Dincelli, E. Examining user engagement and use of fitness tracking technology through the lens of technology affordances. Behaviour & Information Technology. 41, 2018–2033 (2022) 83. Park, K., Weber, I., Cha, M. & Lee, C. Persistent sharing of fitness app status on Twitter. Proceedings Of The 19th ACM Conference On Computer-Supported Cooperative Work & Social Computing. pp. 184–194 (2016) 84. Zimmer, M., Kumar, P., Vitak, J., Liao, Y. & Chamberlain Kritikos, K. ‘There’s nothing really they can do with this information’: unpacking how users manage privacy boundaries for personal fitness information. Information, Communication & Society. 23, 1020–1037 (2020) 85. Motti, V. & Caine, K. Users’ privacy concerns about wearables: impact of form factor, sensors and type of data collected. Financial Cryptography And Data Security: FC 2015 International Workshops, BITCOIN, WAHC, And Wearable, San Juan, Puerto Rico, January 30, 2015, Revised Selected Papers. pp. 231–244 (2015) 86. Prasad, A., Sorber, J., Stablein, T., Anthony, D. & Kotz, D. Understanding sharing preferences and behavior for mHealth devices. Proceedings Of The 2012 ACM Workshop On Privacy In The Electronic Society. pp. 117–128 (2012) 87. Leibenger, D., Möllers, F., Petrlic, A., Petrlic, R. & Sorge, C. Privacy challenges in the quantified self movement-an EU perspective. Proceedings On Privacy Enhancing Technologies. 2016 (2016) 88. Lowens, B., Motti, V. & Caine, K. Wearable privacy: Skeletons in the data closet. 2017 IEEE International Conference On Healthcare Informatics (ICHI). pp. 295–304 (2017) 89. Vitak, J., Liao, Y., Kumar, P., Zimmer, M. & Kritikos, K. Privacy attitudes and data valuation among fitness tracker users. Transforming Digital Worlds: 13th International Conference, IConference 2018, Sheffield, UK, March 25–28, 2018, Proceedings 13. pp. 229–239 (2018) 90. Weiss, G., Timko, J., Gallagher, C., Yoneda, K. & Schreiber, A. Smartwatch-based activity recognition: A machine learning approach. 
2016 IEEE-EMBS International Conference On Biomedical And Health Informatics (BHI). pp. 426–429 (2016) 91. Arnold, Z., Larose, D. & Agu, E. Smartphone inference of alcohol consumption levels from gait. 2015 International Conference On Healthcare Informatics. pp. 417–426 (2015) 92. Maiti, A., Jadliwala, M., He, J. & Bilogrevic, I. Side-channel inference attacks on mobile keypads using smartwatches. IEEE Transactions On Mobile Computing. 17, 2180–2194 (2018)


93. Maiti, A., Armbruster, O., Jadliwala, M. & He, J. Smartwatch-based keystroke inference attacks and context-aware protection mechanisms. Proceedings Of The 11th ACM On Asia Conference On Computer And Communications Security. pp. 795–806 (2016) 94. Zufferey, N., Niksirat, K., Humbert, M. & Huguenin, K. “Revoked just now!” Users’ Behaviors toward Fitness-Data Sharing with Third-Party Applications. Proceedings On Privacy Enhancing Technologies. 1 pp. 1–21 (2023) 95. Bloom, C., Tan, J., Ramjohn, J. & Bauer, L. Self-driving cars and data collection: Privacy perceptions of networked autonomous vehicles. Symposium On Usable Privacy And Security (SOUPS). (2017) 96. Islami, L., Fischer-Hübner, S. & Papadimitratos, P. Capturing drivers’ privacy preferences for intelligent transportation systems: An intercultural perspective. Computers & Security. 123 pp. 102913 (2022) 97. Golbabaei, F., Yigitcanlar, T., Paz, A. & Bunker, J. Individual predictors of autonomous vehicle public acceptance and intention to use: A systematic review of the literature. Journal Of Open Innovation: Technology, Market, And Complexity. 6, 106 (2020) 98. Jakobi, T., Patil, S., Randall, D., Stevens, G. & Wulf, V. It is about what they could do with the data: A user perspective on privacy in smart metering. ACM Transactions On Computer-Human Interaction (TOCHI). 26, 1–44 (2019) 99. Fahl, S., Harbach, M., Perl, H., Koetter, M. & Smith, M. Rethinking SSL development in an appified world. Proceedings Of The 2013 ACM SIGSAC Conference On Computer & Communications Security. pp. 49–60 (2013) 100. Green, M. & Smith, M. Developers are not the enemy!: The need for usable security APIs. IEEE Security & Privacy. 14, 40–46 (2016) 101. Utz, C., Degeling, M., Fahl, S., Schaub, F. & Holz, T. (Un) informed consent: Studying GDPR consent notices in the field. Proceedings Of The 2019 Acm Sigsac Conference On Computer And Communications Security. pp. 973–990 (2019) 102. Tahaei, M., Frik, A. & Vaniea, K. Privacy champions in software teams: Understanding their motivations, strategies, and challenges. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–15 (2021) 103. Bednar, K., Spiekermann, S. & Langheinrich, M. Engineering Privacy by Design: Are engineers ready to live up to the challenge?. The Information Society. 35, 122–142 (2019) 104. Balebako, R. & Cranor, L. Improving app privacy: Nudging app developers to protect user privacy. IEEE Security & Privacy. 12, 55–58 (2014) 105. Li, T., Louie, E., Dabbish, L. & Hong, J. How developers talk about personal data and what it means for user privacy: A case study of a developer forum on reddit. Proceedings Of The ACM On Human-Computer Interaction. 4, 1–28 (2021) 106. Senarath, A. & Arachchilage, N. Why developers cannot embed privacy into software systems? An empirical investigation. Proceedings Of The 22Nd International Conference On Evaluation And Assessment In Software Engineering 2018. pp. 211–216 (2018) 107. Li, T., Agarwal, Y. & Hong, J. Coconut: An IDE plugin for developing privacy-friendly apps. Proceedings Of The ACM On Interactive, Mobile, Wearable And Ubiquitous Technologies. 2, 1–35 (2018) 108. Mhaidli, A., Zou, Y. & Schaub, F. “We Can’t Live Without Them!” App Developers’ Adoption of Ad Networks and Their Considerations of Consumer Risks.. SOUPS@ USENIX Security Symposium. (2019) 109. Tahaei, M. & Vaniea, K. “Developers Are Responsible”: What Ad Networks Tell Developers About Privacy. 
Extended Abstracts Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–11 (2021) 110. Tahaei, M., Frik, A., Vaniea, K. & Informatics, D. Deciding on Personalized Ads: Nudging Developers About User Privacy.. SOUPS@ USENIX Security Symposium. pp. 573–596 (2021)

88

3 Overview of Usable Privacy Research: Major Themes and Research Directions

111. Tahaei, M., Ramokapane, K., Li, T., Hong, J. & Rashid, A. Charting app developers’ journey through privacy regulation features in ad networks. Proceedings On Privacy Enhancing Technologies. 1 pp. 24 (2022) 112. Tahaei, M., Abu-Salma, R. & Rashid, A. Stuck in the Permissions With You: Developer & End-User Perspectives on App Permissions & Their Privacy Ramifications. ArXiv Preprint ArXiv:2301.06534. (2023) 113. Alomar, N. & Egelman, S. Developers Say the Darnedest Things: Privacy Compliance Processes Followed by Developers of Child-Directed Apps. Proceedings On Privacy Enhancing Technologies. 4, 24 (2022) 114. Reardon, J., Feal, Á., Wijesekera, P., On, A., Vallina-Rodriguez, N. & Egelman, S. 50 ways to leak your data: An exploration of apps’ circumvention of the android permissions system. 28th USENIX Security Symposium (USENIX Security 19). pp. 603–620 (2019) 115. Li, T., Reiman, K., Agarwal, Y., Cranor, L. & Hong, J. Understanding challenges for developers to create accurate privacy nutrition labels. Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–24 (2022) 116. Greene, D. & Shilton, K. Platform privacies: Governance, collaboration, and the different meanings of “privacy” in iOS and Android development. New Media & Society. 20, 1640–1657 (2018) 117. Tahaei, M., Vaniea, K. & Saphra, N. Understanding privacy-related questions on Stack Overflow. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2020) 118. Tahaei, M., Li, T. & Vaniea, K. Understanding privacy-related advice on stack overflow. Proceedings On Privacy Enhancing Technologies. 2022, 114–131 (2022) 119. Bouma-Sims, E. & Acar, Y. Beyond the Boolean: How Programmers Ask About, Use, and Discuss Gender. ArXiv Preprint ArXiv:2302.05351. (2023) 120. Kühtreiber, P., Pak, V. & Reinhardt, D. A survey on solutions to support developers in privacypreserving IoT development. Pervasive And Mobile Computing. 85 pp. 101656 (2022) 121. Gulotta, R., Faste, H. & Mankoff, J. Curation, provocation, and digital identity: risks and motivations for sharing provocative images online. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 387–390 (2012) 122. Cox, A., Clough, P. & Marlow, J. Flickr: a first look at user behaviour in the context of photography as serious leisure. Information Research. 13, 13–1 (2008) 123. Van House, N., Davis, M., Takhteyev, Y., Good, N., Wilhelm, A. & Finn, M. From “what?” to “why?”: the social uses of personal photos. Proc. Of CSCW 2004. (2004) 124. Hoyle, R., Stark, L., Ismail, Q., Crandall, D., Kapadia, A. & Anthony, D. Privacy norms and preferences for photos posted online. ACM Transactions On Computer-Human Interaction (TOCHI). 27, 1–27 (2020) 125. Tajik, K., Gunasekaran, A., Dutta, R., Ellis, B., Bobba, R., Rosulek, M., Wright, C. & Feng, W. Balancing Image Privacy and Usability with Thumbnail-Preserving Encryption.. IACR Cryptol. EPrint Arch.. 2019 pp. 295 (2019) 126. Zezschwitz, E., Ebbinghaus, S., Hussmann, H. & De Luca, A. You Can’t Watch This! PrivacyRespectful Photo Browsing on Smartphones. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 4320–4324 (2016) 127. Hasan, R., Hassan, E., Li, Y., Caine, K., Crandall, D., Hoyle, R. & Kapadia, A. Viewer experience of obscuring scene elements in photos to enhance privacy. Proceedings Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2018) 128. Li, Y., Vishwamitra, N., Knijnenburg, B., Hu, H. & Caine, K. 
Effectiveness and users’ experience of obfuscation as a privacy-enhancing technology for sharing photos. Proceedings Of The ACM On Human-Computer Interaction. 1, 1–24 (2017)

References

89

129. Bellare, M., Ristenpart, T., Rogaway, P. & Stegers, T. Format-preserving encryption. Selected Areas In Cryptography: 16th Annual International Workshop, SAC 2009, Calgary, Alberta, Canada, August 13–14, 2009, Revised Selected Papers 16. pp. 295–312 (2009) 130. Lander, K., Bruce, V. & Hill, H. Evaluating the effectiveness of pixelation and blurring on masking the identity of familiar faces. Applied Cognitive Psychology: The Official Journal Of The Society For Applied Research In Memory And Cognition. 15, 101–116 (2001) 131. Newton, E., Sweeney, L. & Malin, B. Preserving privacy by de-identifying face images. IEEE Transactions On Knowledge And Data Engineering. 17, 232–243 (2005) 132. Hill, S., Zhou, Z., Saul, L. & Shacham, H. On the (In) effectiveness of Mosaicing and Blurring as Tools for Document Redaction.. Proc. Priv. Enhancing Technol.. 2016, 403–417 (2016) 133. Tierney, M., Spiro, I., Bregler, C. & Subramanian, L. Cryptagram: Photo Privacy for Online Social Media. Proceedings Of The First ACM Conference On Online Social Networks. pp. 75–88 (2013) 134. Ra, M., Govindan, R. & Ortega, A. P3: Toward Privacy-Preserving Photo Sharing. 10th USENIX Symposium On Networked Systems Design And Implementation (NSDI 13). pp. 515–528 (2013) 135. Ra, M., Govindan, R. & Ortega, A. P3: Toward Privacy-Preserving Photo Sharing. 10th USENIX Symposium On Networked Systems Design And Implementation (NSDI 13). pp. 515–528 (2013,4) 136. Yu, J., Zhang, B., Kuang, Z., Lin, D. & Fan, J. iPrivacy: image privacy protection by identifying sensitive objects via deep multi-task learning. IEEE Transactions On Information Forensics And Security. 12, 1005–1016 (2016) 137. Klemperer, P., Liang, Y., Mazurek, M., Sleeper, M., Ur, B., Bauer, L., Cranor, L., Gupta, N. & Reiter, M. Tag, you can see it! Using tags for access control in photo sharing. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 377–386 (2012) 138. Christin, D., Sánchez López, P., Reinhardt, A., Hollick, M. & Kauer, M. Privacy bubbles: user-centered privacy control for mobile content sharing applications. Information Security Theory And Practice. Security, Privacy And Trust In Computing Systems And Ambient Intelligent Ecosystems: 6th IFIP WG 11.2 International Workshop, WISTP 2012, Egham, UK, June 20–22, 2012. Proceedings 6. pp. 71–86 (2012) 139. Tonge, A. & Caragea, C. Image Privacy Prediction Using Deep Neural Networks. ACM Trans. Web. 14 (2020,4) 140. Yan, Z., Zhang, H., Piramuthu, R., Jagadeesh, V., DeCoste, D., Di, W. & Yu, Y. HD-CNN: hierarchical deep convolutional neural networks for large scale visual recognition. Proceedings Of The IEEE International Conference On Computer Vision. pp. 2740–2748 (2015) 141. Spyromitros-Xioufis, E., Papadopoulos, S., Popescu, A. & Kompatsiaris, Y. Personalized privacy-aware image classification. Proceedings Of The 2016 ACM On International Conference On Multimedia Retrieval. pp. 71–78 (2016) 142. Ilia, P., Polakis, I., Athanasopoulos, E., Maggi, F. & Ioannidis, S. Face/Off: Preventing Privacy Leakage From Photos in Social Networks. Proceedings Of The 22nd ACM SIGSAC Conference On Computer And Communications Security. pp. 781–792 (2015) 143. Fan, L. Image pixelization with differential privacy. Data And Applications Security And Privacy XXXII: 32nd Annual IFIP WG 11.3 Conference, DBSec 2018, Bergamo, Italy, July 16–18, 2018, Proceedings 32. pp. 148–162 (2018) 144. Yuan, L. & Ebrahimi, T. Image privacy protection with secure JPEG transmorphing. IET Signal Processing. 
11, 1031–1038 (2017) 145. Zhang, L., Jung, T., Liu, C., Ding, X., Li, X. & Liu, Y. Pop: Privacy-preserving outsourced photo sharing and searching for mobile devices. 2015 IEEE 35th International Conference On Distributed Computing Systems. pp. 308–317 (2015) 146. Roundtree, A. Ethics and Facial Recognition Technology: An Integrative Review. 2021 3rd World Symposium On Artificial Intelligence (WSAI). pp. 10–19 (2021)

90

3 Overview of Usable Privacy Research: Major Themes and Research Directions

147. Bertalmio, M., Sapiro, G., Caselles, V. & Ballester, C. Image inpainting. Proceedings Of The 27th Annual Conference On Computer Graphics And Interactive Techniques. pp. 417–424 (2000) 148. Chinomi, K., Nitta, N., Ito, Y. & Babaguchi, N. PriSurv: Privacy protected video surveillance system using adaptive visual abstraction. Advances In Multimedia Modeling: 14th International Multimedia Modeling Conference, MMM 2008, Kyoto, Japan, January 9–11, 2008. Proceedings 14. pp. 144–154 (2008) 149. Ni, Z., Shi, Y., Ansari, N. & Su, W. Reversible data hiding. IEEE Transactions On Circuits And Systems For Video Technology. 16, 354–362 (2006) 150. Shi, Y., Li, X., Zhang, X., Wu, H. & Ma, B. Reversible data hiding: Advances in the past two decades. IEEE Access. 4 pp. 3210–3237 (2016) 151. Yuan, L., Korshunov, P. & Ebrahimi, T. Privacy-preserving photo sharing based on a secure JPEG. 2015 IEEE Conference On Computer Communications Workshops (INFOCOM WKSHPS). pp. 185–190 (2015) 152. Li, T. & Lin, L. Anonymousnet: Natural face de-identification with measurable privacy. Proceedings Of The IEEE/CVF Conference On Computer Vision And Pattern Recognition Workshops. pp. 0–0 (2019) 153. Padilla-López, J., Chaaraoui, A. & Flórez-Revuelta, F. Visual privacy protection methods: A survey. Expert Systems With Applications. 42, 4177–4195 (2015) 154. Croft, W., Sack, J. & Shi, W. Obfuscation of images via differential privacy: from facial images to general images. Peer-to-Peer Networking And Applications. 14 pp. 1705–1733 (2021) 155. Asghar, M., Kanwal, N., Lee, B., Fleury, M., Herbst, M. & Qiao, Y. Visual surveillance within the EU general data protection regulation: A technology perspective. IEEE Access. 7 pp. 111709– 111726 (2019) 156. Gerber, N., Zimmermann, V. & Volkamer, M. Why johnny fails to protect his privacy. 2019 IEEE European Symposium On Security And Privacy Workshops (EuroS&PW). pp. 109–118 (2019) 157. Coopamootoo, K. Usage patterns of privacy-enhancing technologies. Proceedings Of The 2020 ACM SIGSAC Conference On Computer And Communications Security. pp. 1371–1390 (2020) 158. Liang, H. & Xue, Y. Avoidance of information technology threats: A theoretical perspective. MIS Quarterly. pp. 71–90 (2009) 159. Miltgen, C. & Smith, H. Exploring information privacy regulation, risks, trust, and behavior. Information & Management. 52, 741–759 (2015) 160. Benenson, Z., Girard, A. & Krontiris, I. User Acceptance Factors for Anonymous Credentials: An Empirical Investigation.. WEIS. (2015) 161. Shirazi, F. & Volkamer, M. What deters jane from preventing identification and tracking on the web?. Proceedings Of The 13th Workshop On Privacy In The Electronic Society. pp. 107–116 (2014) 162. Renaud, K., Volkamer, M. & Renkema-Padmos, A. Why doesn’t Jane protect her privacy?. Privacy Enhancing Technologies: 14th International Symposium, PETS 2014, Amsterdam, The Netherlands, July 16–18, 2014. Proceedings 14. pp. 244–262 (2014) 163. Abu-Salma, R., Sasse, M., Bonneau, J., Danilova, A., Naiakshina, A. & Smith, M. Obstacles to the adoption of secure communication tools. 2017 IEEE Symposium On Security And Privacy (SP). pp. 137–153 (2017) 164. Zou, Y., Roundy, K., Tamersoy, A., Shintre, S., Roturier, J. & Schaub, F. Examining the adoption and abandonment of security, privacy, and identity theft protection practices. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–15 (2020) 165. Kang, R., Dabbish, L., Fruchter, N. & Kiesler, S. 
my data just goes everywhere:”user mental models of the internet and implications for privacy and security. Eleventh Symposium On Usable Privacy And Security (SOUPS 2015). pp. 39–52 (2015)

References

91

166. Dingledine, R. & Mathewson, N. Anonymity loves company: Usability and the network effect. WEIS. (2006) 167. Dingledine, R., Mathewson, N. & Syverson, P. Tor: The second-generation onion router. (Naval Research Lab Washington DC, 2004) 168. Fabian, B., Goertz, F., Kunz, S., Möller, S. & Nitzsche, M. Privately waiting–a usability analysis of the tor anonymity network. Sustainable E-Business Management: 16th Americas Conference On Information Systems, AMCIS 2010, SIGeBIZ Track, Lima, Peru, August 12–15, 2010. Selected Papers. pp. 63–75 (2010) 169. Köpsell, S. Low latency anonymous communication–how long are users willing to wait?. Emerging Trends In Information And Communication Security: International Conference, ETRICS 2006, Freiburg, Germany, June 6–9, 2006. Proceedings. pp. 221–237 (2006) 170. Hogan, K., Servan-Schreiber, S., Newman, Z., Weintraub, B., Nita-Rotaru, C. & Devadas, S. ShorTor: Improving Tor Network Latency via Multi-hop Overlay Routing. 2022 IEEE Symposium On Security And Privacy (SP). pp. 1933–1952 (2022) 171. Geddes, J., Schliep, M. & Hopper, N. Abra cadabra: Magically increasing network utilization in tor by avoiding bottlenecks. Proceedings Of The 2016 ACM On Workshop On Privacy In The Electronic Society. pp. 165–176 (2016) 172. Annessi, R. & Schmiedecker, M. Navigator: Finding faster paths to anonymity. 2016 IEEE European Symposium On Security And Privacy (EuroS&P). pp. 214–226 (2016) 173. Clark, J., Van Oorschot, P. & Adams, C. Usability of anonymous web browsing: an examination of tor interfaces and deployability. Proceedings Of The 3rd Symposium On Usable Privacy And Security. pp. 41–51 (2007) 174. Norcie, G., Blythe, J., Caine, K. & Camp, L. Why Johnny can’t blow the whistle: Identifying and reducing usability issues in anonymity systems. Workshop On Usable Security. 6 pp. 50–60 (2014) 175. Lee, L., Fifield, D., Malkin, N., Iyer, G., Egelman, S. & Wagner, D. A Usability Evaluation of Tor Launcher.. Proc. Priv. Enhancing Technol.. 2017, 90 (2017) 176. EU Commission Special Eurobarometer 487a – The General Data Protection Regulation. (2019). 177. Gallagher, K., Patil, S. & Memon, N. New me: Understanding expert and non-expert perceptions and usage of the Tor anonymity network. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 385–398 (2017) 178. Gallagher, K., Patil, S., Dolan-Gavitt, B., McCoy, D. & Memon, N. Peeling the onion’s user experience layer: Examining naturalistic use of the tor browser. Proceedings Of The 2018 ACM SIGSAC Conference On Computer And Communications Security. pp. 1290–1305 (2018) 179. Kang, R., Brown, S. & Kiesler, S. Why do people seek anonymity on the internet? Informing policy and design. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2657–2666 (2013) 180. Camp, J., Asgharpour, F., Liu, D. & Bloomington, I. Experimental evaluations of expert and non-expert computer users’ mental models of security risks. Proceedings Of WEIS 2007. pp. 1–24 (2007) 181. Binkhorst, V., Fiebig, T., Krombholz, K., Pieters, W. & Labunets, K. Security at the End of the Tunnel: The Anatomy of VPN Mental Models Among Experts and Non-Experts in a Corporate Context. 31st USENIX Security Symposium (USENIX Security 22). pp. 3433–3450 (2022) 182. De Luca, A., Das, S., Ortlieb, M., Ion, I. & Laurie, B. Expert and Non-Expert Attitudes towards (Secure) Instant Messaging. Twelfth Symposium On Usable Privacy And Security (SOUPS 2016). pp. 147–157 (2016) 183. Krombholz, K., Busse, K., Pfeffer, K., Smith, M. 
& Von Zezschwitz, E. “If HTTPS Were Secure, I Wouldn’t Need 2FA”-End User and Administrator Mental Models of HTTPS. 2019 IEEE Symposium On Security And Privacy (SP). pp. 246–263 (2019)

92

3 Overview of Usable Privacy Research: Major Themes and Research Directions

184. Wu, J. & Zappala, D. When is a Tree Really a Truck? Exploring Mental Models of Encryption. SOUPS@ USENIX Security Symposium. pp. 395–409 (2018) 185. Demjaha, A., Spring, J., Becker, I., Parkin, S. & Sasse, M. Metaphors considered harmful? An exploratory study of the effectiveness of functional metaphors for end-to-end encryption. Proc. USEC. 2018 (2018) 186. Akgul, O., Bai, W., Das, S. & Mazurek, M. Evaluating In-Workflow Messages for Improving Mental Models of End-to-End Encryption.. USENIX Security Symposium. pp. 447–464 (2021) 187. Skalkos, A., Tsohou, A., Karyda, M. & Kokolakis, S. Identifying the values associated with users’ behavior towards anonymity tools through means-end analysis. Computers In Human Behavior Reports. 2 pp. 100034 (2020) 188. Harborth, D., Pape, S. & Rannenberg, K. Explaining the technology use behavior of privacyenhancing technologies: The case of tor and JonDonym. Proceedings On Privacy Enhancing Technologies. 2020, 111–128 (2020) 189. Albayram, Y., Suess, D. & Elidrissi, Y. Investigating the Effectiveness of Personalized Content in the Form of Videos When Promoting a TOR Browser. Proceedings Of The 2022 European Symposium On Usable Security. pp. 216–232 (2022) 190. Story, P., Smullen, D., Chen, R., Acquisti, A., Cranor, L., Sadeh, N., Schaub, F. & Others Increasing Adoption of Tor Browser Using Informational and Planning Nudges. Proceedings On Privacy Enhancing Technologies. 2 pp. 152–183 (2022) 191. Desfontaines, D. A list of real-world uses of differential privacy. (https://desfontain.es/privacy/ real-world-differential-privacy.html,2021,10), Ted is writing things (personal blog) 192. Differential Privacy Team, Apple. Learning with privacy at scale. Apple Mach. Learn. J. 1, 1–25 (2017) 193. Erlingsson, Á., Pihur, V. & Korolova, A. Rappor: Randomized aggregatable privacy-preserving ordinal response. Proceedings Of The 2014 ACM SIGSAC Conference On Computer And Communications Security. pp. 1054–1067 (2014) 194. Ding, B., Kulkarni, J. & Yekhanin, S. Collecting telemetry data privately. Advances In Neural Information Processing Systems. 30 (2017) 195. Kenthapadi, K. & Tran, T. Pripearl: A framework for privacy-preserving analytics and reporting at LinkedIn. Proceedings Of The 27th ACM International Conference On Information And Knowledge Management. pp. 2183–2191 (2018) 196. Kenny, C., Kuriwaki, S., McCartan, C., Rosenman, E., Simko, T. & Imai, K. The use of differential privacy for census data and its impact on redistricting: The case of the 2020 US Census. Science Advances. 7, eabk3283 (2021) 197. Abadi, M., Chu, A., Goodfellow, I., McMahan, H., Mironov, I., Talwar, K. & Zhang, L. Deep learning with differential privacy. Proceedings Of The 2016 ACM SIGSAC Conference On Computer And Communications Security. pp. 308–318 (2016) 198. Kim, J., Edemacu, K., Kim, J., Chung, Y. & Jang, B. A survey of differential privacy-based techniques and their applicability to location-based services. Computers & Security. 111 pp. 102464 (2021) 199. Barthe, G., Gaboardi, M., Hsu, J. & Pierce, B. Programming language techniques for differential privacy. ACM SIGLOG News. 3, 34–53 (2016) 200. El Ouadrhiri, A. & Abdelhadi, A. Differential privacy for deep and federated learning: A survey. IEEE Access. 10 pp. 22359–22380 (2022) 201. Hay, M., Machanavajjhala, A., Miklau, G., Chen, Y., Zhang, D. & Bissias, G. Exploring privacyaccuracy tradeoffs using DpComp. Proceedings Of The 2016 International Conference On Management Of Data. pp. 2101–2104 (2016) 202. 
Murtagh, J., Taylor, K., Kellaris, G. & Vadhan, S. Usable differential privacy: A case study with psi. ArXiv Preprint ArXiv:1809.04103. (2018)

References

93

203. Gaboardi, M., Honaker, J., King, G., Murtagh, J., Nissim, K., Ullman, J. & Vadhan, S. Psi : a private data sharing interface. ArXiv Preprint ArXiv:1609.04340. (2016) 204. John, M., Denker, G., Laud, P., Martiny, K., Pankova, A. & Pavlovic, D. Decision support for sharing data using differential privacy. 2021 IEEE Symposium On Visualization For Cyber Security (VizSec). pp. 26–35 (2021) 205. Nanayakkara, P., Bater, J., He, X., Hullman, J. & Rogers, J. Visualizing privacy-utility trade-offs in differentially private data releases. Proceedings On Privacy Enhancing Technologies. 2022, 601–618 (2022) 206. Vadhan, S., Crosas, M., Honaker, J., King, G. OpenDP: An Open-Source Suite of Differential Privacy Tools. OpenDP community. (2019) 207. Budiu, M., Thaker, P., Gopalan, P., Wieder, U. & Zaharia, M. Overlook: Differentially Private Exploratory Visualization for Big Data. Journal Of Privacy And Confidentiality. 12 (2022) 208. Zhou, J., Wang, X., Wong, J., Wang, H., Wang, Z., Yang, X., Yan, X., Feng, H., Qu, H., Ying, H. & Others DPVisCreator: Incorporating Pattern Constraints to Privacy-preserving Visualizations via Differential Privacy. IEEE Transactions On Visualization And Computer Graphics. 29, 809– 819 (2022) 209. Sarathy, J. From algorithmic to institutional logics: the politics of differential privacy. Available At SSRN. (2022) 210. Cohen, A. & Nissim, K. Towards formalizing the GDPR’s notion of singling out. Proceedings Of The National Academy Of Sciences. 117, 8344–8352 (2020) 211. Cummings, R. & Desai, D. The role of differential privacy in GDPR compliance. FAT’18: Proceedings Of The Conference On Fairness, Accountability, And Transparency. pp. 20 (2018) 212. Prokhorenkov, D. Anonymization Level and Compliance for Differential Privacy: A Systematic Literature Review. 2022 International Wireless Communications And Mobile Computing (IWCMC). pp. 1119–1124 (2022) 213. Oberski, D. & Kreuter, F. Differential privacy and social science: An urgent puzzle. Harvard Data Science Review. 2, 1 (2020) 214. Cummings, R., Kaptchuk, G. & Redmiles, E. “I Need a Better Description”: An Investigation Into User Expectations For Differential Privacy. Proceedings Of The 2021 ACM SIGSAC Conference On Computer And Communications Security. pp. 3037–3052 (2021). 215. Bullek, B., Garboski, S., Mir, D. & Peck, E. Towards Understanding Differential Privacy: When Do People Trust Randomized Response Technique?. Proceedings Of The 2017 CHI Conference On Human Factors In Computing Systems. pp. 3833–3837 (2017) 216. Xiong, A., Wang, T., Li, N. & Jha, S. Towards Effective Differential Privacy Communication for Users’ Data Sharing Decision and Comprehension. 2020 IEEE Symposium On Security And Privacy (SP). pp. 392–410 (2020) 217. Franzen, D., Voigt, S., Sörries, P., Tschorsch, F. & Möller-Birn, C. Am I Private and If So, How Many? Communicating Privacy Guarantees of Differential Privacy with Risk Communication Formats. Proceedings Of The 2022 ACM SIGSAC Conference On Computer And Communications Security. pp. 1125–1139 (2022) 218. Karegar, F., Alaqra, A. & Fischer-Hübner, S. Exploring User-Suitable Metaphors for Differentially Private Data Analyses. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 175–193 (2022,8) 219. Smart, M., Sood, D. & Vaccaro, K. Understanding Risks of Privacy Theater with Differential Privacy. Proc. ACM Hum.-Comput. Interact.. 6 (2022,11) 220. Kühtreiber, P., Pak, V. & Reinhardt, D. 
Replication: The Effect of Differential Privacy Communication on German Users’ Comprehension and Data Sharing Attitudes. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 117–134 (2022,8)

94

3 Overview of Usable Privacy Research: Major Themes and Research Directions

221. Karegar, F. & Fischer-Hübner, S. Vision: A noisy picture or a picker wheel to spin? exploring suitable metaphors for differentially private data analyses. Proceedings Of The 2021 European Symposium On Usable Security. pp. 29–35 (2021) 222. Sarathy, J., Song, S., Haque, A., Schlatter, T. & Vadhan, S. Don’t Look at the Data! How Differential Privacy Reconfigures the Practices of Data Science. ArXiv Preprint ArXiv:2302.11775. (2023) 223. Khare, R. Privacy Theater: Why Social Networks Only Pretend To Protect You. (2022), https:// techcrunch.com/2009/12/27/privacy-theater/ 224. Warford, N., Matthews, T., Yang, K., Akgul, O., Consolvo, S., Kelley, P., Malkin, N., Mazurek, M., Sleeper, M. & Thomas, K. Sok: A framework for unifying at-risk user research. 2022 IEEE Symposium On Security And Privacy (SP). pp. 2344–2360 (2022) 225. Le Blond, S., Cuevas, A., Troncoso-Pastoriza, J., Jovanovic, P., Ford, B. & Hubaux, J. On enforcing the digital immunity of a large humanitarian organization. 2018 IEEE Symposium On Security And Privacy (SP). pp. 424–440 (2018) 226. McGregor, S., Charters, P., Holliday, T. & Roesner, F. Investigating the computer security practices and needs of journalists. 24th USENIX Security Symposium (USENIX Security 15). pp. 399–414 (2015) 227. Sanches, P., Tsaknaki, V., Rostami, A. & Brown, B. Under surveillance: Technology practices of those monitored by the state. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2020) 228. Marczak, W. & Paxson, V. Social Engineering Attacks on Government Opponents: Target Perspectives.. Proc. Priv. Enhancing Technol.. 2017, 172–185 (2017) 229. McGregor, S., Roesner, F. & Caine, K. Individual versus Organizational Computer Security and Privacy Concerns in Journalism.. Proc. Priv. Enhancing Technol.. 2016, 418–435 (2016) 230. Consolvo, S., Kelley, P., Matthews, T., Thomas, K., Dunn, L. & Bursztein, E. “Why wouldn’t someone think of democracy as a target?”: Security practices & challenges of people involved with U.S. political campaigns. 30th USENIX Security Symposium (USENIX Security 21). pp. 1181–1198 (2021,8) 231. Daffalla, A., Simko, L., Kohno, T. & Bardas, A. Defensive technology use by political activists during the Sudanese revolution. 2021 IEEE Symposium On Security And Privacy (SP). pp. 372–390 (2021) 232. Kow, Y., Kou, Y., Semaan, B. & Cheng, W. Mediating the undercurrents: Using social media to sustain a social movement. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 3883–3894 (2016) 233. Tadic, B., Rohde, M., Wulf, V. & Randall, D. ICT use by prominent activists in Republika Srpska. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 3364–3377 (2016) 234. Blackwell, L., Hardy, J., Ammari, T., Veinot, T., Lampe, C. & Schoenebeck, S. LGBT parents and social media: Advocacy, privacy, and disclosure during shifting social movements. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 610–622 (2016) 235. Lerner, A., He, H., Kawakami, A., Zeamer, S. & Hoyle, R. Privacy and activism in the transgender community. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2020) 236. Warner, M., Kitkowska, A., Gibbs, J., Maestre, J. & Blandford, A. Evaluating’Prefer not to say’Around Sensitive Disclosures. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2020) 237. Geeng, C., Harris, M., Redmiles, E. & Roesner, F. 
“Like Lesbians Walking the Perimeter”: Experiences of U.S. LGBTQ+ Folks With Online Security, Safety, and Privacy Advice. 31st USENIX Security Symposium (USENIX Security 22). pp. 305–322 (2022,8)

References

95

238. Guberek, T., McDonald, A., Simioni, S., Mhaidli, A., Toyama, K. & Schaub, F. Keeping a low profile? Technology, risk and privacy among undocumented immigrants. Proceedings Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–15 (2018) 239. To, A., Sweeney, W., Hammer, J. & Kaufman, G. “They Just Don’t Get It”: Towards Social Technologies for Coping with Interpersonal Racism. Proceedings Of The ACM On HumanComputer Interaction. 4, 1–29 (2020) 240. Kozubaev, S., Rochaix, F., DiSalvo, C. & Le Dantec, C. Spaces and traces: Implications of smart technology in public housing. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 241. Klassen, S., Kingsley, S., McCall, K., Weinberg, J. & Fiesler, C. More than a modern day Green book: Exploring the online community of Black Twitter. Proceedings Of The ACM On Human-Computer Interaction. 5, 1–29 (2021) 242. Matthews, T., O’Leary, K., Turner, A., Sleeper, M., Woelfer, J., Shelton, M., Manthorne, C., Churchill, E. & Consolvo, S. Stories from survivors: Privacy & security practices when coping with intimate partner abuse. Proceedings Of The 2017 CHI Conference On Human Factors In Computing Systems. pp. 2189–2201 (2017) 243. Freed, D., Palmer, J., Minchala, D., Levy, K., Ristenpart, T. & Dell, N. “A Stalker’s Paradise” How Intimate Partner Abusers Exploit Technology. Proceedings Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2018) 244. Chatterjee, R., Doerfler, P., Orgad, H., Havron, S., Palmer, J., Freed, D., Levy, K., Dell, N., McCoy, D. & Ristenpart, T. The spyware used in intimate partner violence. 2018 IEEE Symposium On Security And Privacy (SP). pp. 441–458 (2018) 245. Havron, S., Freed, D., Chatterjee, R., McCoy, D., Dell, N. & Ristenpart, T. Clinical Computer Security for Victims of Intimate Partner Violence.. USENIX Security Symposium. pp. 105–122 (2019) 246. Kumar, P., Naik, S., Devkar, U., Chetty, M., Clegg, T. & Vitak, J. ’No Telling Passcodes Out Because They’re Private’ Understanding Children’s Mental Models of Privacy and Security Online. Proceedings Of The ACM On Human-Computer Interaction. 1, 1–21 (2017) 247. McNally, B., Kumar, P., Hordatt, C., Mauriello, M., Naik, S., Norooz, L., Shorter, A., Golub, E. & Druin, A. Co-designing mobile online safety applications with children. Proceedings Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–9 (2018) 248. March, W. & Fleuriot, C. Girls, technology and privacy: “is my mother listening?”. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 107–110 (2006) 249. Mentis, H., Madjaroff, G. & Massey, A. Upside and downside risk in online security for older adults with mild cognitive impairment. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 250. Frik, A., Nurgalieva, L., Bernd, J., Lee, J., Schaub, F. & Egelman, S. Privacy and security threat models and mitigation strategies of older adults. Fifteenth Symposium On Usable Privacy And Security (SOUPS 2019). pp. 21–40 (2019) 251. Napoli, D., Baig, K., Maqsood, S. & Chiasson, S. “I’m Literally Just Hoping This Will Work:” Obstacles Blocking the Online Security and Privacy of Users with Visual Disabilities. Seventeenth Symposium On Usable Privacy And Security (SOUPS 2021). pp. 263–280 (2021) 252. Akter, T., Dosono, B., Ahmed, T., Kapadia, A. & Semaan, B. 
“I am uncomfortable sharing what I can’t see” privacy concerns of the visually impaired with camera-based assistive applications. Proceedings Of The 29th USENIX Conference On Security Symposium. pp. 1929–1948 (2020) 253. Simko, L., Lerner, A., Ibtasam, S., Roesner, F. & Kohno, T. Computer security and privacy for refugees in the United States. 2018 IEEE Symposium On Security And Privacy (SP). pp. 409–423 (2018)

96

3 Overview of Usable Privacy Research: Major Themes and Research Directions

254. Steinbrink, E., Reichert, L., Mende, M. & Reuter, C. Digital Privacy Perceptions of Asylum Seekers in Germany: An Empirical Study about Smartphone Usage during the Flight. Proceedings Of The ACM On Human-Computer Interaction. 5, 1–24 (2021) 255. Wisniewski, P., Ghosh, A., Xu, H., Rosson, M. & Carroll, J. Parental control vs. teen selfregulation: Is there a middle ground for mobile online safety?. Proceedings Of The 2017 ACM Conference On Computer Supported Cooperative Work And Social Computing. pp. 51–69 (2017) 256. Ghosh, A., Hughes, C. & Wisniewski, P. Circle of trust: a new approach to mobile online safety for families. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2020) 257. Wang, G., Zhao, J., Van Kleek, M. & Shadbolt, N. Protection or punishment? relating the design space of parental control apps and perceptions about them to support parenting for online safety. Proceedings Of The ACM On Human-Computer Interaction. 5, 1–26 (2021) 258. Edalatnejad, K., Lueks, W., Martin, J., Ledésert, S., L’Hôte, A., Thomas, B., Girod, L. & Troncoso, C. DatashareNetwork: A Decentralized Privacy-Preserving Search Engine for Investigative Journalists. 29th USENIX Security Symposium (USENIX Security 20). pp. 1911–1927 (2020,8) 259. Wang, R., Yu, C., Yang, X., He, W. & Shi, Y. EarTouch: facilitating smartphone use for visually impaired people in mobile and public scenarios. Proceedings Of The 2019 Chi Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 260. Pearce, K., Vitak, J. & Barta, K. Privacy at the margins| socially mediated visibility: Friendship and dissent in authoritarian Azerbaijan. International Journal Of Communication. 12 pp. 22 (2018) 261. DeVito, M., Walker, A. & Birnholtz, J. ’Too Gay for Facebook’ Presenting LGBTQ+ Identity Throughout the Personal Social Media Ecosystem. Proceedings Of The ACM On HumanComputer Interaction. 2, 1–23 (2018) 262. Sambasivan, N., Batool, A., Ahmed, N., Matthews, T., Thomas, K., Gaytán-Lugo, L., Nemer, D., Bursztein, E., Churchill, E. & Consolvo, S. “They Don’t Leave Us Alone Anywhere We Go” Gender and Digital Abuse in South Asia. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2019) 263. Sambasivan, N., Checkley, G., Batool, A., Ahmed, N., Nemer, D., Gaytán-Lugo, L., Matthews, T., Consolvo, S. & Churchill, E. “Privacy is not for me, it’s for those rich women”: Performative Privacy Practices on Mobile Phones by Women in South Asia.. SOUPS@ USENIX Security Symposium. pp. 127–142 (2018) 264. Wisniewski, P., Xu, H., Rosson, M. & Carroll, J. Parents just don’t understand: Why teens don’t talk to parents about their online risk experiences. Proceedings Of The 2017 ACM Conference On Computer Supported Cooperative Work And Social Computing. pp. 523–540 (2017) 265. Zhao, J., Wang, G., Dally, C., Slovak, P., Edbrooke-Childs, J., Van Kleek, M. & Shadbolt, N. I make up a silly name’ Understanding Children’s Perception of Privacy Risks Online. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 266. Ahmed, S., Haque, M., Chen, J. & Dell, N. Digital privacy challenges with shared mobile phone use in Bangladesh. Proceedings Of The ACM On Human-Computer Interaction. 1, 1–20 (2017) 267. Ahmed, T., Hoyle, R., Connelly, K., Crandall, D. & Kapadia, A. Privacy concerns and behaviors of people with visual impairments. Proceedings Of The 33rd Annual ACM Conference On Human Factors In Computing Systems. pp. 3523–3532 (2015) 268. 
Ahmed, T., Andersen, K., Shaffer, P., Crocker, D., Ghosh, S., Connelly, K., Gummadi, K., Crandall, D., Kate, A., Kapadia, A. & Others Addressing physical safety, security, and privacy for people with visual impairments. Twelfth Symposium On Usable Privacy And Security (SOUPS 2016). pp. 341–354 (2016)

References

97

269. Badillo-Urquiola, K., Page, X. & Wisniewski, P. Risk vs. restriction: The tension between providing a sense of normalcy and keeping foster teens safe online. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2019) 270. Leitão, R. Anticipating smart home security and privacy threats with survivors of intimate partner abuse. Proceedings Of 2019 On Designing Interactive Systems Conference. pp. 527– 539 (2019) 271. Sleeper, M., Matthews, T., O’Leary, K., Turner, A., Woelfer, J., Shelton, M., Oplinger, A., Schou, A. & Consolvo, S. Tough times at transitional homeless shelters: Considering the impact of financial insecurity on digital security and privacy. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–12 (2019) 272. Kumar, P., Chetty, M., Clegg, T. & Vitak, J. Privacy and security considerations for digital technology use in elementary schools. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 273. Marne, S., Al-Ameen, M. & Wright, M. Learning System-assigned Passwords: A Preliminary Study on the People with Learning Disabilities.. SOUPS. (2017) 274. Wan, L., Müller, C., Wulf, V. & Randall, D. Addressing the subtleties in dementia care: prestudy & evaluation of a GPS monitoring system. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 3987–3996 (2014) 275. Hamidi, F., Scheuerman, M. & Branham, S. Gender recognition or gender reductionism? The social implications of embedded gender recognition systems. Proceedings Of The 2018 Chi Conference On Human Factors In Computing Systems. pp. 1–13 (2018) 276. Cornejo, R., Brewer, R., Edasis, C. & Piper, A. Vulnerability, sharing, and privacy: Analyzing art therapy for older adults with dementia. Proceedings Of The 19th ACM Conference On Computer-Supported Cooperative Work & Social Computing. pp. 1572–1583 (2016) 277. Reichel, J., Peck, F., Inaba, M., Moges, B., Chawla, B. & Chetty, M. ‘I have too much respect for my elders’ understanding South African mobile users’ perceptions of privacy and current behaviors on Facebook and WhatsApp. Proceedings Of The 29th USENIX Conference On Security Symposium. pp. 1949–1966 (2020) 278. Rocheleau, J. & Chiasson, S. Privacy and Safety on Social Networking Sites: Autistic and Non-Autistic Teenagers’ Attitudes and Behaviors. ACM Transactions On Computer-Human Interaction (TOCHI). 29, 1–39 (2022) 279. Petelka, J., Van Kleunen, L., Albright, L., Murnane, E., Voida, S. & Snyder, J. Being (in) visible: Privacy, transparency, and disclosure in the self-management of bipolar disorder. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2020) 280. Moser, C., Chen, T. & Schoenebeck, S. Parents? And Children? s preferences about parents sharing about children on social media. Proceedings Of The 2017 CHI Conference On Human Factors In Computing Systems. pp. 5221–5225 (2017) 281. Clark, S., Goodspeed, T., Metzger, P., Wasserman, Z., Xu, K. & Blaze, M. Why (Special Agent) Johnny (Still) Can’t Encrypt: A Security Analysis of the APCO Project 25 Two-Way Radio System.. USENIX Security Symposium. 2011 pp. 8–12 (2011) 282. Ruoti, S., Andersen, J., Heidbrink, S., O’Neill, M., Vaziripour, E., Wu, J., Zappala, D. & Seamons, K. “We’re on the Same Page” A Usability Study of Secure Email Using Pairs of Novice Users. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 4298–4308 (2016) 283. Ruoti, S., Andersen, J., Zappala, D. 
& Seamons, K. Why Johnny still, still can’t encrypt: Evaluating the usability of a modern PGP client. ArXiv Preprint ArXiv:1510.08555. (2015) 284. Sheng, S., Broderick, L., Koranda, C. & Hyland, J. Why johnny still can’t encrypt: evaluating the usability of email encryption software. Symposium On Usable Privacy And Security. pp. 3–4 (2006)

98

3 Overview of Usable Privacy Research: Major Themes and Research Directions

285. Ruoti, S., Andersen, J., Monson, T., Zappala, D. & Seamons, K. A Comparative Usability Study of Key Management in Secure Email. SOUPS@ USENIX Security Symposium. pp. 375–394 (2018) 286. Monson, T., Ruoti, S., Reynolds, J., Zappala, D., Smith, T. & Seamons, K. A usability study of secure email deletion. European Workshop On Usable Security (EuroUSEC). (2018) 287. Schröder, S., Huber, M., Wind, D. & Rottermanner, C. When SIGNAL hits the fan: On the usability and security of state-of-the-art secure mobile messaging. European Workshop On Usable Security. IEEE. pp. 1–7 (2016) 288. Bai, W., Pearson, M., Kelley, P. & Mazurek, M. Improving non-experts’ understanding of endto-end encryption: an exploratory study. 2020 IEEE European Symposium On Security And Privacy Workshops (EuroS&PW). pp. 210–219 (2020) 289. Abu-Salma, R., Redmiles, E., Ur, B. & Wei, M. Exploring user mental models of end-to-end encrypted communication tools. 8th USENIX Workshop On Free And Open Communications On The Internet (FOCI 18). (2018) 290. Dechand, S., Naiakshina, A., Danilova, A. & Smith, M. In encryption we don’t trust: The effect of end-to-end encryption to the masses on user perception. 2019 IEEE European Symposium On Security And Privacy (EuroS&P). pp. 401–415 (2019) 291. Gerber, N., Zimmermann, V., Henhapl, B., Emeröz, S. & Volkamer, M. Finally johnny can encrypt: But does this make him feel more secure?. Proceedings Of The 13th International Conference On Availability, Reliability And Security. pp. 1–10 (2018) 292. De Luca, A., Das, S., Ortlieb, M., Ion, I. & Laurie, B. Expert and Non-Expert Attitudes towards (Secure) Instant Messaging. Proceedings Of The Twelfth USENIX Conference On Usable Privacy And Security. pp. 147–157 (2016) 293. DiSessa, A. Models of Computation. User Centered System Design: New Perspectives On Human-computer Interaction. (1986), Donald A. Norman, Stephen W. Draper, editors. 294. Distler, V., Zollinger, M., Lallemand, C., Roenne, P., Ryan, P. & Koenig, V. Security-Visible, Yet Unseen?. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 295. Distler, V., Gutfleisch, T., Lallemand, C., Lenzini, G. & Koenig, V. Complex, but in a good way? How to represent encryption to non-experts through text and visuals–Evidence from expert co-creation and a vignette experiment. Computers In Human Behavior Reports. 5 pp. 100161 (2022) 296. Felt, A., Reeder, R., Ainslie, A., Harris, H., Walker, M., Thompson, C., Acer, M., Morant, E. & Consolvo, S. Rethinking connection security indicators. Twelfth Symposium On Usable Privacy And Security (SOUPS 2016). pp. 1–14 (2016) 297. Whalen, T. & Inkpen, K. Gathering evidence: use of visual security cues in web browsers. Proceedings Of Graphics Interface 2005. pp. 137–144 (2005) 298. Stransky, C., Wermke, D., Schrader, J., Huaman, N., Acar, Y., Fehlhaber, A., Wei, M., Ur, B. & Fahl, S. On the limited impact of visualizing encryption: Perceptions of E2E messaging security. Seventeenth Symposium On Usable Privacy And Security. pp. 437–454 (2021) 299. Akgul, O., Abu-Salma, R., Bai, W., Redmiles, E., Mazurek, M. & Ur, B. From secure to military-grade: Exploring the effect of app descriptions on user perceptions of secure messaging. Proceedings Of The 20th Workshop On Workshop On Privacy In The Electronic Society. pp. 119–135 (2021) 300. Distler, V., Lallemand, C. & Koenig, V. Making encryption feel secure: Investigating how descriptions of encryption impact perceived security. 
2020 IEEE European Symposium On Security And Privacy Workshops (EuroS&PW). pp. 220–229 (2020) 301. Clark, H. Using Language. (Cambridge University Press,1996) 302. Thibodeau, P., Matlock, T. & Flusberg, S. The role of metaphor in communication and thought. Language And Linguistics Compass. 13, e12327 (2019)

References

99

303. Alty, J., Knott, R., Anderson, B. & Smyth, M. A framework for engineering metaphor at the user interface. Interacting With Computers. 13, 301–322 (2000) 304. Collins, A. & Gentner, D. How people construct mental models. Cultural Models In Language And Thought. 243, 243–265 (1987) 305. Staggers, N. & Norcio, A. Mental Models: Concepts for Human-Computer Interaction Research. Int. J. Man-Mach. Stud.. 38, 587–605 (1993,4) 306. Schaewitz, L., Lakotta, D., Sasse, M. & Rummel, N. Peeking Into the Black Box: Towards Understanding User Understanding of E2EE. Proceedings Of The 2021 European Symposium On Usable Security. pp. 129–140 (2021) 307. Nov, O. & Wattal, S. Social computing privacy concerns: antecedents and effects. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 333–336 (2009) 308. Stutzman, F. & Kramer-Duffield, J. Friends only: examining a privacy-enhancing behavior in facebook. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 1553–1562 (2010) 309. Wang, Y., Norcie, G., Komanduri, S., Acquisti, A., Leon, P. & Cranor, L. “I regretted the minute I pressed share” a qualitative study of regrets on Facebook. Proceedings Of The Seventh Symposium On Usable Privacy And Security. pp. 1–16 (2011) 310. Stutzman, F., Capra, R. & Thompson, J. Factors mediating disclosure in social network sites. Computers In Human Behavior. 27, 590–598 (2011) 311. Ma, X., Hancock, J. & Naaman, M. Anonymity, intimacy and self-disclosure in social media. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 3857–3869 (2016) 312. Torabi, S. & Beznosov, K. Sharing health information on facebook: practices, preferences, and risk perceptions of North American users. Symposium On Usable Privacy And Security (SOUPS). (2016) 313. Ayalon, O. & Toch, E. Retrospective privacy: Managing longitudinal privacy in online social networks. Proceedings Of The Ninth Symposium On Usable Privacy And Security. pp. 1–13 (2013) 314. Andalibi, N. Disclosure, privacy, and stigma on social media: Examining non-disclosure of distressing experiences. ACM Transactions On Computer-human Interaction (TOCHI). 27, 1– 43 (2020) 315. Li, Y., Rho, E. & Kobsa, A. Cultural differences in the effects of contextual factors and privacy concerns on users’ privacy decision on social networking sites. Behaviour & Information Technology. 41, 655–677 (2022) 316. Lee, H., Park, H. & Kim, J. Why do people share their context information on Social Network Services? A qualitative study and an experimental study on users’ behavior of balancing perceived benefit and risk. International Journal Of Human-Computer Studies. 71, 862–877 (2013) 317. Jia, H. & Xu, H. Autonomous and interdependent: Collaborative privacy management on social networking sites. Proceedings Of The 2016 CHI Conference On Human Factors In Computing Systems. pp. 4286–4297 (2016) 318. Pu, Y. & Grossklags, J. Valuating friends’ privacy: Does anonymity of sharing personal data matter. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 339–355 (2017) 319. Cho, H., Knijnenburg, B., Kobsa, A. & Li, Y. Collective privacy management in social media: A cross-cultural validation. ACM Transactions On Computer-Human Interaction (TOCHI). 25, 1–33 (2018) 320. Suh, J., Metzger, M., Reid, S. & El Abbadi, A. Distinguishing group privacy from personal privacy: The effect of group inference technologies on privacy perceptions and behaviors. Proceedings Of The ACM On Human-Computer Interaction. 2, 1–22 (2018)

100

3 Overview of Usable Privacy Research: Major Themes and Research Directions

321. Mansour, A. & Francke, H. Collective privacy management practices: A study of privacy strategies and risks in a private Facebook group. Proceedings Of The ACM On Human-Computer Interaction. 5, 1–27 (2021) 322. Page, X., Ghaiumy Anaraky, R., Knijnenburg, B. & Wisniewski, P. Pragmatic tool vs. relational hindrance: Exploring why some social media users avoid privacy features. Proceedings Of The ACM On Human-Computer Interaction. 3, 1–23 (2019) 323. Mondal, M., Yilmaz, G., Hirsch, N., Khan, M., Tang, M., Tran, C., Kanich, C., Ur, B. & Zheleva, E. Moving beyond set-it-and-forget-it privacy settings on social media. Proceedings Of The 2019 ACM SIGSAC Conference On Computer And Communications Security. pp. 991–1008 (2019) 324. Kekulluoglu, D., Vaniea, K. & Magdy, W. Understanding Privacy Switching Behaviour on Twitter. Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2022) 325. Mazzia, A., LeFevre, K. & Adar, E. The pviz comprehension tool for social network privacy settings. Proceedings Of The Eighth Symposium On Usable Privacy And Security. pp. 1–12 (2012) 326. Li, Y., Gui, X., Chen, Y., Xu, H. & Kobsa, A. When SNS privacy settings become granular: investigating users’ choices, rationales, and influences on their social experience. Proceedings Of The ACM On Human-Computer Interaction. 2, 1–21 (2018) 327. Kulyk, O., Gerber, P., Marky, K., Beckmann, C. & Volkamer, M. Does this app respect my privacy? Design and evaluation of information materials supporting privacy-related decisions of smartphone users. Workshop On Usable Security (USEC’19). San Diego, CA. pp. 1–10 (2019) 328. Van Kleek, M., Liccardi, I., Binns, R., Zhao, J., Weitzner, D. & Shadbolt, N. Better the devil you know: Exposing the data sharing practices of smartphone apps. Proceedings Of The 2017 CHI Conference On Human Factors In Computing Systems. pp. 5208–5220 (2017) 329. Balebako, R., Jung, J., Lu, W., Cranor, L. & Nguyen, C. “Little brothers watching you” raising awareness of data leaks on smartphones. Proceedings Of The Ninth Symposium On Usable Privacy And Security. pp. 1–11 (2013) 330. Liu, B., Schaarup Andersen, M., Schaub, F., Almuhimedi, H., Zhang, S., Sadeh, N., Acquisti, A. & Agarwal, Y. Follow my recommendations: A personalized privacy assistant for mobile app permissions. SOUPS 2016-Proceedings Of The 12th Symposium On Usable Privacy And Security. pp. 27–41 (2016) 331. Cao, W., Xia, C., Peddinti, S., Lie, D., Taft, N. & Austin, L. A Large Scale Study of User Behavior, Expectations and Engagement with Android Permissions.. USENIX Security Symposium. pp. 803–820 (2021) 332. Bonné, B., Peddinti, S., Bilogrevic, I. & Taft, N. Exploring decision making with Android’s runtime permission dialogs using in-context surveys. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 195–210 (2017) 333. Micinski, K., Votipka, D., Stevens, R., Kofinas, N., Mazurek, M. & Foster, J. User interactions and permission use on android. Proceedings Of The 2017 CHI Conference On Human Factors In Computing Systems. pp. 362–373 (2017) 334. Bianchi, A., Corbetta, J., Invernizzi, L., Fratantonio, Y., Kruegel, C. & Vigna, G. What the app is that? deception and countermeasures in the android user interface. 2015 IEEE Symposium On Security And Privacy. pp. 931–948 (2015) 335. Elbitar, Y., Schilling, M., Nguyen, T., Backes, M. & Bugiel, S. Explanation beats context: The effect of timing & rationales on users’ runtime permission decisions. USENIX Security’21. (2021) 336. 
Harbach, M., Hettig, M., Weber, S. & Smith, M. Using personal examples to improve risk communication for security & privacy decisions. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2647–2656 (2014)

References

101

337. Seberger, J. & Patil, S. Us and them (and it): social orientation, privacy concerns, and expected use of pandemic-tracking apps in the United States. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–19 (2021) 338. Jamieson, J., Epstein, D., Chen, Y. & Yamashita, N. Unpacking intention and behavior: explaining contact tracing app adoption and hesitancy in the United States. Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2022) 339. Dash, S., Jain, A., Dey, L., Dasgupta, T. & Naskar, A. Factors affecting user experience of contact tracing app during COVID-19: an aspect-based sentiment analysis of user-generated review. Behaviour & Information Technology. pp. 1–16 (2022) 340. Zakaria, C., Foong, P., Lim, C., VS Pakianathan, P., Koh, G. & Perrault, S. Does Mode of Digital Contact Tracing Affect User Willingness to Share Information? A Quantitative Study. Proceedings Of The 2022 CHI Conference On Human Factors In Computing Systems. pp. 1–18 (2022) 341. Almuhimedi, H., Schaub, F., Sadeh, N., Adjerid, I., Acquisti, A., Gluck, J., Cranor, L. & Agarwal, Y. Your location has been shared 5,398 times! A field study on mobile app privacy nudging. Proceedings Of The 33rd Annual ACM Conference On Human Factors In Computing Systems. pp. 787–796 (2015) 342. Tang, K., Hong, J. & Siewiorek, D. The implications of offering more disclosure choices for social location sharing. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 391–394 (2012) 343. Knijnenburg, B., Kobsa, A. & Jin, H. Preference-based location sharing: are more privacy options really better?. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2667–2676 (2013) 344. Patil, S., Schlegel, R., Kapadia, A. & Lee, A. Reflection or action? how feedback and control affect location-sharing decisions. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 101–110 (2014) 345. Patil, S., Norcie, G., Kapadia, A. & Lee, A. Reasons, rewards, regrets: privacy considerations in location sharing as an interactive practice. Proceedings Of The Eighth Symposium On Usable Privacy And Security. pp. 1–15 (2012) 346. Wang, E. & Lin, R. Perceived quality factors of location-based apps on trust, perceived privacy risk, and continuous usage intention. Behaviour & Information Technology. 36, 2–10 (2017) 347. Feng, Y., Yao, Y. & Sadeh, N. A design space for privacy choices: Towards meaningful privacy control in the Internet of things. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–16 (2021) 348. Schaub, F., Balebako, R., Durity, A. & Cranor, L. A design space for effective privacy notices. Eleventh Symposium On Usable Privacy And Security (SOUPS 2015). pp. 1–17 (2015) 349. Project, Tor. Tor Metrics., https://metrics.torproject.org/userstats-relay-country.html, April 25, 2023 350. Project, Tor. Browse Privately, Explore Freely., https://www.torproject.org/, April 25, 2023 351. Project, Tor. History., https://www.torproject.org/about/history/, April 25, 2023 352. Knijnenburg, B., Page, X., Wisniewski, P., Lipford, H., Proferes, N. & Romano, J. Modern socio-technical perspectives on privacy. (Springer Nature,2022) 353. Page, X., Berrios, S., Wilkinson, D. & Wisniewski, P. Social Media and Privacy. Modern SocioTechnical Perspectives On Privacy. pp. 113–147 (2022) 354. Lipford, H., Tabassum, M., Bahirat, P., Yao, Y. & Knijnenburg, B. Privacy and the Internet of Things. 
Modern Socio-Technical Perspectives On Privacy. pp. 233–264 (2022) 355. McDonald, N. & Forte, A. Privacy and Vulnerable Populations. Modern Socio-Technical Perspectives On Privacy. pp. 337–363 (2022)

102

3 Overview of Usable Privacy Research: Major Themes and Research Directions

356. Wisniewski, P., Vitak, J. & Hartikainen, H. Privacy in Adolescence. Modern Socio-Technical Perspectives On Privacy. pp. 315–336 (2022) 357. Wang, Y. & Price, C. Accessible Privacy. Modern Socio-Technical Perspectives On Privacy. pp. 293–313 (2022)

4 Challenges of Usable Privacy

4.1 Introduction

Already in their seminal 1999 paper “Why Johnny can’t encrypt” [48], the authors reported manifold usable security and privacy challenges. These included (1) designing security (and privacy) tools so that developers’ and users’ mental models match and (2) evaluating such tools, and they concluded that security- and privacy-specific user interface design principles and evaluation techniques were needed. Researching and developing usable privacy requires an understanding of these diverse challenges, which this chapter therefore elaborates on. It starts by discussing the challenges of conducting usable privacy research, continues with challenges specific to designing and explaining usable privacy technologies, and finally briefly addresses Human-Computer Interaction (HCI) challenges regarding the effective implementation of legal privacy requirements.

4.2 Challenges of Conducting Usable Privacy Research

4.2.1 Challenge of Encompassing Different and Sometimes Specific Users

Despite the importance of humans to systems, they are also the weakest point, as well as the most challenging to reason about, because of the enormous differences between them and their unique characteristics. There is no doubt that different groups of users have different cultural backgrounds, biases, and expectations [23, 24]. Therefore, when designing usable Privacy-Enhancing Technologies (PETs) and conducting usable privacy research, we might need to think about a variety of groups based on the context, the target groups, and the study © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 S. Fischer-Hübner and F. Karegar, The Curious Case of Usable Privacy, Synthesis Lectures on Information Security, Privacy, and Trust, https://doi.org/10.1007/978-3-031-54158-2_4

103

104

4 Challenges of Usable Privacy

objectives. Among the people who may interact with privacy systems are experts, novices, people from diverse cultural backgrounds, marginalised groups, clueless or unmotivated individuals, and individuals from different age groups including elderly or children as well as individuals who have limitations, for whom the system must be usable and fulfil its privacy missions. Conducting studies with marginalised groups or with children may pose particular ethical and legal challenges. In addition, hiring professionals such as security experts, cryptography experts, and domain experts (such as medical doctors when testing eHealth privacy solutions) for user studies is more challenging than hiring laypeople because these individuals are fewer, geographically concentrated, typically earn higher salaries [21], and usually have restrictions time-wise. Therefore, if participants are compensated, the study will require more funding and it may be more challenging to engage experts in longer studies or to find common time slots, for example, to conduct focus groups involving experts. In addition, the lack of studies that have been conducted with people with specific characteristics makes reasoning about them even more challenging. Privacy is a multilevel concept. Usable privacy research should also address potential tensions between individual group members’ rights and preferences and between individuals and the group at large, ensuring that both individual and group privacy are respected. Despite the importance of group privacy, the privacy literature has been slow to recognise it. For more information on group privacy versus individual privacy, how multi-stakeholder privacy decision-making works, and potential tensions that may arise between a member and the group as a whole, as well as suggestions for promoting collaborative privacy management and group privacy protection, please refer to [138]. Developing usable privacy systems, products, and services for users from diverse cultural backgrounds requires knowledge of culture. User privacy behaviours and attitudes can be profoundly influenced by culturally distinct beliefs, norms, and values [26, 27]. Crosscultural studies (such as [31, 32, 35–37]) have, therefore, driven the development of usable privacy and security. Nevertheless, quantitative studies using cultural constructs are plagued by a methodological gap. Although invariance in measurement is an important component of scale robustness and is a prerequisite for cross-country comparisons [22], cultural measurements such as Hofstede’s cultural dimensions including individualism versus collectivism or uncertainty avoidance [28] (measured at individual level in [22]) and Triandis and Gelfand’s instruments to measure horizontal and vertical individualism and collectivism [29] are not invariant across countries, both in terms of measurement and causal effects [22]. The effect of cultural variables on privacy-related constructs is not generalisable, according to Ghaiumy et al. [22]. It may explain why some studies have found a relationship between cultural constructs and privacy-related outcomes (e.g. [30, 33, 34, 139, 140]), while others have not (e.g. [26] in which authors did not find a relationship between individualism dimension and any of their privacy-related factors). In other words, if only individual-level cultural measurements are taken into account and the country is excluded, the results may be inaccurate. 
Rather than conducting inferential statistics immediately, Ghaiumy and coworkers recommend integrating existing cultural dimensions appropriately and addressing invariance as early in the inference process as possible [22].

Further, when studying any user population, it is critical to consider how recruitment channels and methods may affect research results [25]. According to a replication study published in 2022 [125], users of the crowdsourcing platform Prolific1 are significantly more knowledgeable about privacy and security matters than the general U.S. population. Previous research has also shown that Amazon Mechanical Turk2 respondents are younger, better educated, more likely to use social media, and more concerned about privacy than representative samples of US adults [126]. Privacy studies therefore need to account for differences between samples recruited from crowdsourcing platforms and the general population. However, as recruiting professionals with specialised skills is difficult, crowdsourcing platforms may prove useful for locating such participants. Tahaei and Vaniea [25] examined the issue of hiring people with programming skills for online studies that evaluate novel tools or investigate programming experience and attitudes. Their results show that students with CS backgrounds are a good source of participants with programming skills, and that Prolific, a crowdsourcing platform, yields a cost-effective sample with more participants who have basic programming skills [25]. Nonetheless, researchers should consider the differences in privacy and security attitudes across recruitment channels and aim to recruit from multiple sources to reduce the bias introduced by any single channel. Crowdsourcing participants' privacy and security attitudes vary; for example, Clickworker participants are more aware of the privacy practices of online companies than Amazon MTurk or Prolific participants, according to Tahaei and Vaniea [25].

This section concludes with a call to consider marginalised people's needs when designing privacy systems. Researchers, developers, and designers must be aware of the unique privacy-related needs and behaviours of marginalised individuals to design and implement more inclusive PETs. Please refer to Sect. 3.4, where we discuss the importance of inclusive privacy and the efforts made towards more equitable privacy mechanisms and technologies. Research on marginalised groups, however, can be resource-intensive, for example, in terms of time, researchers' professional skills, or budget. In addition to being hard to reach or scarce (given the recruitment channels available to researchers), marginalised participants may fear being identified and may therefore be unwilling to participate in a study. Compensation should also be carefully considered for marginalised groups, who may be economically disenfranchised or vulnerable [38] (read more about the dilemma of compensating participants in Sect. 4.2.5.3). Sannon and Forte further discuss that research involving marginalised people can pose challenges around situational ethics [38], referring to ethical concerns that arise beyond formal procedural ethics requirements. They also emphasise the importance of recognising researchers' positionality [38], i.e. their positions in society based on identity factors, since it affects the research process and how it is conducted. In marginalised contexts, researchers should be careful not to perpetuate marginalisation, and they should take care of themselves when they belong to the same group as those being studied [38].

1 https://www.prolific.co/.
2 https://www.mturk.com/.

4.2.2 Prioritised and Conflicting Goals

While most users desire privacy-friendly systems, privacy is not their foremost concern [47] or their primary need [45], but only a secondary goal [48]; it is separate from users' day-to-day tasks in the different systems they use [47]. During interactions whose focus is on achieving other primary goals, users may not be concerned about privacy [46], may not be willing to be interrupted in their primary tasks by security and privacy matters, for example by a security warning or a privacy notice, and may not be willing to invest much time and energy in privacy protection. Fulfilling their primary tasks can, however, lead to situations that threaten users' privacy and make them regret their choices. If users ignore privacy notices while performing their primary tasks, they may suffer irreversible privacy harm. As a result, data controllers and processors may also be subject to legal repercussions. Therefore, designers and developers of privacy systems must take into account the needs and requirements of users, and provide them with an unobtrusive and usable means of protecting their privacy. Further, when designing user studies, researchers should be aware that designing tasks with the main focus on privacy affects the validity of the results they achieve, depending on their research objectives.

Different domains of computing systems must place a higher level of emphasis on specific quality factors, such as privacy, integrity, safety, and usability. However, certain quality factors complement each other while others conflict. In particular, while usability is an important prerequisite for achieving the privacy goal of transparency, usability goals may conflict with other goals, such as legal requirements, standards, system performance, cost constraints, or security and privacy requirements. Just as fulfilling usability goals may degrade privacy and security in specific circumstances, achieving other goals may negatively affect the usability of privacy and security systems. A number of studies have been conducted on user understanding and the usability of anonymity network services like Tor (e.g. [49–52]), demonstrating a trade-off between privacy and usability. Gallagher et al. [52] report, for example, that although the Tor browser can improve users' privacy, both experts and non-experts have behaviour and understanding gaps that could compromise anonymity. Tor use has also been regarded as suspicious activity by some governments, and access to it has been restricted in some cases [53]. Despite the fact that users appreciate onion services' extra security, privacy, and Network Address Translation (NAT) punching capabilities, Winter et al. note that future generations of onion services should address a variety of usability issues [49]. As an example, users have limited ways to discover onion services, let alone navigate to them [49].

Passwords and password policies are other examples of how usability, privacy, and security are traded off in systems.

Despite individuals caring about security, password policies are too inflexible to match human capabilities [54]. Moreover, as pointed out in [33], for intelligent transportation systems there can be a trade-off between the usability of applications and services and privacy. Location-based queries can be made more precise or more frequent, which makes the responses and the overall application more useful for users; however, the service provider receives finer-grained location information at the same time.

Trade-off decisions between privacy and other goals also pose challenges for the usable configuration of PETs. For instance, Framner et al. [57] discuss usable configuration challenges for the Archistar system, which utilises secret sharing for redundantly storing data over multiple independent storage clouds in a secure and privacy-friendly manner. Selecting the optimal secret sharing parameters, the type and location of cloud storage servers, and other settings for securely storing the secret data shares on cloud servers requires complex trade-offs between different protection goals, costs, and legal privacy requirements, which in turn demand multi-disciplinary expertise.

Although there may be an inherent trade-off between usability and privacy and security requirements in specific contexts, the implications can be mitigated by including users as early as possible in the design and deployment of new security or privacy features. Thus, users should be considered throughout the development cycle of privacy and security systems, from design to development to assessment. Developed by Norman in 1986 [55], User-Centred Design (UCD) places the needs and requirements of end users at the centre of an iterative design process. As a result of UCD, standardised procedures have been created for designing interactive systems. An example of a human-centred design life cycle is ISO 9241-210 [56], which specifies an iterative, step-by-step process for developing usable interactive systems. For example, Framner et al. [57] proposed HCI guidelines for usable configuration management of Archistar as a result of a UCD approach. Their usable configuration management automatically sets configuration parameters, resolving trade-offs depending on the type of data to be stored in the cloud.
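To illustrate the kind of parameter trade-off such a configuration involves, the following is a minimal Shamir secret sharing sketch of the general kind that systems like Archistar build on; it is our own toy illustration, not the Archistar implementation, and the parameter values are invented. Choosing k and n trades off confidentiality (fewer than k shares reveal nothing), availability (any k of the n clouds suffice for recovery), and storage cost (n shares must be stored).

# Minimal Shamir secret sharing over a prime field (illustration only, not the
# Archistar implementation). k and n are the parameters whose choice trades off
# confidentiality, availability, and storage cost.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for short integer secrets


def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split(secret=1234567890, k=3, n=5)   # e.g. a 3-of-5 split across five clouds
assert reconstruct(shares[:3]) == 1234567890  # any 3 shares suffice

A 3-of-5 configuration, for example, tolerates two unavailable clouds while ensuring that no two providers alone learn the stored secret; which point on this spectrum is appropriate depends on the protection goals, costs, and legal requirements discussed above.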

4.2.3 Difficulty of Measuring the Right Thing and Privacy Paradox

Privacy is a complex, multi-faceted, cultural, and context-dependent concept, which makes it difficult to construct privacy scales that reliably measure privacy-related constructs such as attitudes, behaviours, concerns, and the cost of privacy invasion, as well as their potential associations. Colnago et al. [58] studied whether people's granular privacy perspectives, including attitudes, preferences, concerns, expectations, decisions, and behaviours, can be uniquely and reliably captured. The participants' understandings of the statements and the privacy constructs associated with them appeared to be misaligned [58]. Moreover, both existing and newly developed privacy construct statements frequently cover more than one construct at once [58]. As an example, many scales used to measure constructs like privacy concerns are perceived by survey participants to describe other constructs, such as privacy preferences.

Since the explored privacy constructs are so tightly intertwined, this may explain why current scales, even though thoroughly validated when proposed, are not always accurate in predicting, for example, how people engage in privacy behaviours based on their privacy concerns [58].

A discrepancy between privacy attitudes and behaviours creates a dilemma in the context of usable privacy research when it comes to gauging the right thing. This disconnect between attitudes and behaviour is referred to as the "privacy paradox", a term first coined in 2001 by Barry Brown regarding users' complaints about privacy despite their use of supermarket loyalty cards [61]. Since then, it has become a common term for this phenomenon. Many theoretical explanations have been offered for the privacy paradox (examples of summaries are [59, 60]), as well as empirical evidence about how individual factors affect privacy attitudes and behaviour (e.g. [63]). In their paper, Gerber et al. [63] examine the factors that significantly influence privacy attitudes, concerns, perceived risk, behavioural intention, and behaviour to explain the privacy paradox empirically. The authors also contend that drawing overall conclusions can be challenging because researchers often use different privacy constructs to conduct and report their studies [63]. The concepts of intention and willingness to disclose information may refer to the same construct, or they may be two distinct theoretical constructs. A shared definition of privacy constructs would therefore allow the privacy research community to draw conclusions beyond individual samples and study designs [63].

According to Daniel Solove [60], explanations of the privacy paradox usually fall into two opposing camps. One side frames the privacy paradox as a function of perceived benefits versus risks, arguing that people's behaviour reflects their feelings about privacy more accurately than their attitudes. The other side attempts to explain the disparity between attitudes and behaviour by arguing that people's behaviour is irrational or at odds with their preferences: behaviour is distorted by biases and heuristics, manipulation and skewing, misunderstandings and lack of knowledge, and several other factors, so it is not a reliable indicator of people's privacy values [64]. Despite these two opposing views, which both accept and explain the privacy paradox, a related debate argues that the paradox does not exist at all. For example, the discrepancy appears to be a consequence of methodological considerations, such as the inappropriate rationalisation of these constructs in specific studies dealing with the privacy paradox [66]. People make risk decisions in very specific contexts when they participate in privacy paradox studies, whereas privacy concerns, or the value people place on privacy, are far more general. Generalising from people's risk decisions involving specific personal data in specific contexts to how they value privacy overall is therefore illogical [60]. When behaviours and attitudes towards privacy are correctly understood, the privacy paradox is just an illusion created by faulty logic, inaccurate generalisations, and conflated issues, according to Daniel Solove [60].
In line with emphasising the role of context in privacy measurements and conclusions, Louise Barkhuus [65] argues that the research and practice of HCI should employ contextual measures since privacy cannot be easily measured.

Her argument is supported by Nissenbaum's "Contextual Integrity" theory [67], which emphasises the importance of appropriate information flows rather than static acts of sharing. By putting the Contextual Integrity theory to work with empirical data, Barkhuus provides indications of the shortcomings of previous scenario-based studies of sensor-enabled services, as well as recommendations for future research [65].

4.2.4 The Issue of Ecological Validity

The ecological validity of a study reflects the extent to which its methods, materials, and setting approximate the real-world context being examined [74]; an ecologically valid study generalises to the real world. The historical examination of ecological validity by Mark Schmuckler [69] indicates that ecological validity is a concern in multiple dimensions of empirical research: the experimental setting, the stimuli under investigation, and the nature of the task, behaviour, or response required of participants [69]. Despite this, there are no explicit criteria for using this multidimensionality to evaluate research [69], and most experimental situations therefore raise concerns about ecological validity. Ecological validity is especially challenging when designing experiments on usable privacy and security.

4.2.4.1 Multiple Dimensions: Environment, Stimuli, and the Nature of Tasks

An influential aspect of the ecological validity of the environment is its realism and representativeness [69]. The environment setting should therefore contain key characteristics associated with naturalistic settings. It is also critical to consider the environment's relevance: a relevant environment is one in which the actors under study regularly operate. In reality, privacy is not the primary goal of users in many scenarios, and simulating actual privacy and security risks in the face of ethical and legal challenges is nearly impossible. Even though lab studies enable researchers to create a controlled environment and isolate certain variables, participants may face different threats and motivations than they would in the field. Laboratory settings, including instructions and the framing of research goals, can lead participants to claim they care more about privacy and security than they would in a real-world setting. For example, if participants know that a study is about information privacy or that the researchers work in the privacy field, they may pay more attention to privacy than they would in actual situations. This poses particular challenges regarding the degree to which the purpose of the study should be made transparent when obtaining consent from study participants. Ecological validity requires that, in addition to the environment, the stimuli under study are realistic and representative [69]. Stimuli must be analysed for their actual, stable properties even when removed from their natural context. Finally, the same issues exist for the responses, tasks, and behaviours required of participants [69].

We need to ask whether the response is natural and representative, because when designing our studies the response should reflect behaviour relevant to the issue being investigated. We should also consider whether the task assigned to participants is a natural and representative task that users may need to complete in the real world. At the same time, it is equally crucial that the observed behaviour is truly relevant to the stimuli being investigated. Nevertheless, measuring privacy and its constructs is challenging due to its complexity, the way it is conceptualised in different contexts and by different people, and the privacy paradox. For example, although self-reported data about security and privacy behaviours can be useful, their reliability can sometimes be poor due to a wide range of factors, such as participants not remembering past behaviours or feeling uncomfortable sharing accurate information.

4.2.4.2 Improving Realistic Experiences of Risks and the Use of Deception

The extent to which participants believe that the risks in a study are real plays an important role in its ecological validity. As approaches to creating realistic experiences of risk in user studies must also consider legal and ethical issues, ecological validity remains a challenge in the usable privacy and security field [68]. For example, by using fictitious data instead of actual participant data, privacy risks can be reduced and ethical and legal challenges addressed, but ecological validity may be compromised. Distler et al. [68] identified different approaches researchers take to create more realistic perceptions of risk in the context of usable privacy and security. Simulating a real-life situation through hypothetical scenarios and role-playing is one such approach [68]. However, a key finding of Schechter et al. [70] is that role-playing lowers security vigilance when it is used to generate a perception of risk. Long-term in situ studies are another option [68]. Forget et al. [71] developed the Security Behaviour Observatory (SBO), which recruited and observed a panel of consenting home computer users so that security-related behaviour could be studied in the real world. Another approach is to employ deception, which can provide a more realistic setting [68]. Some types of studies would be impossible or difficult to conduct without deception, and timely debriefing can minimise harm to participants, so deceptive studies can be justified (see Sect. 4.2.5.4 for a brief discussion of ethical and legal issues concerning deception). In usable privacy and security studies, deception is typically used to mislead participants about the objective of the study, for example, in its simplest form, to make them believe there is no connection between the study and security or privacy, or to give the impression that the risks are real, i.e. that their personal data would be revealed, when this is not the case. For example, in their study, Anderson et al. [72] spoofed Google search results so that participants thought they were evaluating browser extensions, when they actually installed experimental extensions. The manipulated search results presented unreasonable permission warnings (polymorphic or conventional warnings, depending on the participant group) at random.

The goal of the study was to determine whether polymorphic warnings could encourage more secure behaviour [72]. In another study, Samat and Acquisti [73] told participants that their information would be shared with a specified audience, although it was not shared outside the researchers involved in the study.

4.2.4.3 Negative Effects of Biases on Ecological Validity

To achieve higher ecological validity, we should be aware of potential biases that may occur in any phase of research, including study design, data collection, or data analysis, and that could affect the generalisability of results obtained in user studies to naturalistic situations. Our goal here is not to go deep into definitions and categorisations of different research biases, as they abound and are beyond the scope of this chapter, but to raise awareness of the negative influence of biases on the ecological validity of user studies. For example, a participant may answer a question based on what they think is the right answer for the study objective, or what is socially acceptable, rather than what they really feel. In other words, the social desirability of privacy and security behaviours may play a role in how people respond to certain questions [75]. Researchers might also ask questions in an order that affects the participant's response to the next question, or ask leading questions that prompt a certain response, all of which affect the realistic nature of responses as one dimension of ecological validity. Mneimneh et al. [76] observed that a social conformity bias may arise when sensitive information is reported in interviews with a third party present, which means that the privacy of the interview setting needs to be considered in such cases to avoid misreporting.

4.2.5 Specific Ethical and Legal Challenges

Ethics forms one of the seven HCI grand challenges reported in the literature [3], and ethical and legal considerations play a fundamental role in conducting usable privacy research. First, performing usable privacy studies in an ethical and legally compliant manner requires extra time, resources, and special attention. In particular, obtaining informed consent in the right way, especially when the study involves deception about its objective or the presence of risks, and handling sensitive research data in a privacy-preserving manner require experience, time, and resources, including guidance from ethics advisors. Second, ethical considerations likewise need to be addressed when designing usable privacy solutions. For example, debates about the ethics of nudging are common in the part of usable privacy research that focuses on privacy nudges. The ethical and legal challenges of usable privacy research vary widely and should be considered case by case according to the research, its characteristics, and its underlying assumptions.

Here, we briefly discuss nudge ethics, anonymisation practices for research data and data accessibility, the dilemma of compensating research participants, and deception ethics as a subset of the important ethical and legal issues in usable privacy research.

4.2.5.1 Ethics of Nudges

Nudging interventions (see Sect. 1.4.13) may address problems associated with cognitive and behavioural biases that lead to decisions with unwanted outcomes [15]. Several studies have explored whether nudges motivate people to maintain their privacy or behave more securely (e.g. [16–20]). Despite its potential to make Internet users more privacy conscious, nudging is criticised for violating several ethical values [13]. As long as nudges are designed ethically and benefit users, they may facilitate complex decision-making. Since nudging may have ethical implications, researchers should consider these implications and seek ethical guidance. To this end, Renaud and Zimmermann [14] synthesised the arguments for and against nudging. They also mapped the results to five main ethical principles (i.e. Respect, Beneficence, Justice, Social Responsibility, and Scientific Integrity), producing ethical nudge-specific guidelines for researchers in the context of information security and privacy, which we briefly report below based on their work [14]. Note that researchers who employ nudges should attend to the ethical issues concerning both the specific design of their nudge and the design and justification of the study that involves nudging. For example, Respect through Retention, Beneficence, Justice, and Social Responsibility are ethical considerations for the specific design of a nudge, whereas Respect through Transparency and Scientific Integrity relate to the design and justification of the studies or research involving nudges.

• Respect: All people are valued and their differences are respected. Participant autonomy should not be impaired over time.
  – Retention: It must still be possible for users to ignore the options that the nudges suggest. In other words, options should not be restricted or banned, although in specific cases, for example based on specific rules and regulations, a restriction of options might be mandated.
  – Transparency: To prevent abuse by choice architects, people should be aware of the nudge and its influence. It might be appropriate to debrief participants at the end of the study in cases where transparency about the nudging is expected to have the opposite of the intended effect.
• Beneficence: A nudging intervention should only be deployed when it is justified and of clear benefit. A nudge should be designed to benefit the targeted group, and measures should be implemented to confirm this assumption. Further, users should be able to contact the nudge deployer if they have any questions or concerns.

• Justice: As many people as possible should be able to benefit from research involving nudging and from nudging interventions. The intervention itself, as well as access to it, should neither be unjust nor difficult.
• Scientific integrity: The nudge designer must provide scientific justification of the nudge's alleged effect and demonstrate that it is beneficial to the nudgee, society, or a vulnerable group. In addition to following sound scientific practice, nudge designs should match the research aim.
• Social responsibility: It is imperative to consider both the expected and unanticipated consequences of deploying nudges for the individual and society at large. For example, in the case of passwords, nudging people to create strong passwords may unintentionally lead to more insider attacks, as more people may write down and store passwords insecurely.

4.2.5.2 Accessibility of Research Data and Anonymisation Practices

The availability of openly accessible research data is essential for replicability and FAIR data practices (findability, accessibility, interoperability, and reusability). Separating research materials (such as collected data) from publications is common and hinders reproducibility and comparisons [5, 12]. Wacharamanotham et al. [6] have shown that sharing artefacts is uncommon in HCI research, with shares ranging from 14% for raw selective data (e.g. ethnographic notes) to 47% for hardware (e.g. circuit diagrams). Mathias et al. [4] interviewed researchers from both academia and industry who prototype security- and privacy-protecting systems and have published work in top-tier venues to learn about their experiences of conducting USEC research. The experts highlighted the dearth of open-source materials as a challenge that negatively affects the success of their research [4]: the lack of open-source implementations of usable privacy and security systems makes building certain features time-consuming and challenging [4].

Even though making research data accessible can be a goal within the usable privacy and security community, for example by including accessibility as a criterion for acceptance or adding badges to papers with open data or materials [7, 8], accessible research data presents ethical and legal challenges, particularly if the data is sensitive. The top two reasons for not sharing research data are (a) data sensitivity and (b) lack of permission, according to Wacharamanotham et al. [6]. They found that Institutional Review Boards (IRBs) or ethics boards sometimes forbid researchers from sharing their research artefacts [6]. However, researchers should not generally be prevented from publishing their research as a result of institutional restrictions or industry partnerships. One way to counter this problem is to anonymise the dataset or the results of the analysis conducted on it, for example by using the DPCreator tool3, which helps researchers publish differentially private results of statistics computed on a dataset before releasing them to other researchers or the general public.

3 DPCreator is produced by the OpenDP community and is available at: https://demo.dpcreator.org/.
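As a rough illustration of the kind of differentially private release that such tools support, the following sketch applies the standard Laplace mechanism to a counting query; it is our own minimal example and does not reflect the DPCreator API, and the data and parameter values are invented.

# Minimal sketch of the Laplace mechanism underlying differentially private
# releases (illustration only; not the DPCreator API). A counting query has
# sensitivity 1, so noise scaled to 1/epsilon yields an epsilon-DP result.
import numpy as np

rng = np.random.default_rng()


def dp_count(values, predicate, epsilon: float) -> float:
    """Release a noisy count of the records satisfying `predicate`."""
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # sensitivity / epsilon
    return true_count + noise


ages = [34, 29, 41, 52, 38, 27, 45]
# Smaller epsilon -> more noise -> stronger privacy, lower utility.
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))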

Despite its benefits, anonymisation has its challenges. Data anonymisation is constantly debated from legal and technical perspectives, while the effectiveness of various anonymisation techniques keeps changing as resources and technologies (particularly the power of machine learning-based attacks) evolve. Even after effective anonymisation, residual privacy risks typically remain, which in practice means that the data cannot be considered anonymous but rather pseudonymous. Consequently, in such cases, the rules of the GDPR (or of other applicable privacy laws) still need to be applied. Rocher et al. [77] show that even heavily sampled anonymised datasets will still not be classified as anonymous data as defined by the GDPR, which will therefore "seriously challenge the technical and legal adequacy of the de-identification release-and-forget model". Hence, privacy risks cannot be completely eliminated, but they can be reduced in line with the privacy principle of data minimisation. This also means that it is imperative to take into account the expectations of the individuals concerned, e.g. by providing transparency and guaranteeing voluntary participation by obtaining their informed consent.4

Risks of re-identifying participants despite anonymisation when reporting and sharing data can be especially problematic in sensitive contexts such as health. Several examples are available [9, 10] of health data that has been re-identified by linking "quasi-identifiers"5 with other public databases. Abbott et al. [12] examined 509 CHI papers related to health, wellness, accessibility, and ageing to study data reporting and sharing practices. Despite the sensitive nature of the research context, 47.7% of the studies (N = 378) in [12] did not specify whether any steps were taken to ensure the privacy of their participants. Participant privacy was often protected using codes (e.g. P1, P2, etc.), pseudonyms, and blurred images, while the use of other anonymisation techniques was rare [12]. Although there is no similar systematic review of the usable privacy literature within the HCI community, the shortcomings in sensitive HCI fields regarding data reporting and sharing practices suggest that usable privacy research may suffer from the same issues. Additionally, quotations of study participants, which are common in HCI papers, may raise further challenges for properly anonymising study participants due to the difficulty of anonymising natural language in text and speech (see, for example, [78]).

Anonymising data can also make it hard for readers and reviewers to evaluate the validity of the work because it compromises its utility and usefulness [11]. Moreover, not all researchers are privacy experts or familiar with specific tools and techniques for anonymising their data, and a lack of time, resources, and expertise hinders researchers from taking this route. Consequently, researchers need more guidance and support to strike a balance between the commitment to protecting participant privacy and the benefits of more open data sharing.

4 Thus, even if the research institute chooses "public interest" (under Art. 6 (e) GDPR) as the legitimisation for processing personal research data, informed consent may still be required for ethical reasons. Consent by participants is also needed for citing them in publications, as the interview participants have the copyright for their spoken words.
5 Features that are not directly identifying, but that can be combined with other features to yield a unique identifier.
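One simple pre-release check researchers can run on study data is to count how often each combination of quasi-identifiers occurs, since records in very small groups are the easiest to single out. The sketch below is a hypothetical illustration: the column names and the threshold are invented, and real anonymisation requires far more than this check.

# Quick re-identification risk check on quasi-identifiers before sharing study
# data (illustrative sketch; columns and threshold are hypothetical). Records
# whose quasi-identifier combination occurs fewer than k times are easy to
# single out by linking with other data sources.
import pandas as pd

df = pd.DataFrame({
    "age_band":   ["20-29", "20-29", "30-39", "30-39", "60-69"],
    "gender":     ["f", "f", "m", "m", "f"],
    "occupation": ["student", "student", "nurse", "nurse", "professor"],
    "answer":     [1, 3, 2, 5, 4],
})

QUASI_IDENTIFIERS = ["age_band", "gender", "occupation"]
K = 2  # minimum acceptable group size (k-anonymity threshold)

group_sizes = df.groupby(QUASI_IDENTIFIERS)["answer"].transform("size")
risky = df[group_sizes < K]
print(f"{len(risky)} of {len(df)} records fall below k={K}")  # the lone professor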

4.2.5.3 The Dilemma of Compensating Participants

User studies are one of the fundamental methods of the usable privacy field, and compensation strategies are usually used to incentivise recruitment. Nevertheless, compensation may cause ethical issues, such as coercion, and hinder voluntary consent, in addition to biasing a study's results if it is not handled professionally. Researchers can engender positive feelings in participants by paying them to participate in a study; results could thus be misleading or overly optimistic, for example regarding willingness to share data or attitudes towards using a specific PET. Additionally, paying compensation below minimum salary standards and exploiting economically vulnerable subjects raise ethical concerns and may also impact data quality. Even though IRBs and other research governing bodies may guide what is incorporated into compensation strategies, deciding whether and how to compensate participants can be complex. Research design, participant characteristics, regulations, funding availability, and cultural norms all contribute to this decision [2]. Thus, ethical compensation rules cannot be outlined in a general statement other than that each case must be carefully considered. A researcher's proposal for a compensation structure may be reviewed and approved by the IRB, which facilitates the compensation process.

The interdisciplinary nature of HCI research and the usable privacy field contributes to the wide range of compensation strategies that have been implemented and how they have been described in the literature, as reported by Pater et al. [1]. Inconsistent participant compensation and compensation reporting can further complicate ethical and replicability issues. Pater et al. assessed 1662 papers reporting about 2250 user studies in four top venues of HCI research with regard to how they compensated research participants [1]. According to their analysis, 84.2% (n = 1894) of the studies did not provide sufficient detail about their compensation structure, as they failed to report at least one of the following factors: whether compensation was provided, the amount paid, the mode of payment, or the duration of participation. Moreover, only 286 (11.2%) papers provided a rationale or justification for the compensation level. Only a little over a quarter of the 1662 papers reviewed mentioned that they had been subjected to an IRB or other ethics review, and far fewer mentioned it in connection with compensation [1]. As compensation methods and ethical research practices are strongly related, Pater et al. argue that improving transparency and standards for reporting participant compensation will make it easier for researchers to understand appropriate and ethical compensation, as well as increase replicability and clarity across general standards and practices [1]. They therefore recommend that authors report at least the amount and form of payment, how long participants engaged in the research, and where it took place.

4.2.5.4 Deception Ethics

Deception may cause different harms, which may not be severe but are questionable from an ethical and legal perspective. For example, participants who are misled into believing that simulated risks are real may worry about actual harm as a result. Some authors report avoiding deception, even when it appeared useful, primarily due to ethical concerns and the lack of need for it [96–98]. According to the "Ethical Principles of Psychologists and Code of Conduct" of the American Psychological Association [79], the use of deceptive techniques in research needs to be justified by the study's significant prospective value and by a lack of effective non-deceptive alternative procedures. In addition, participants should be debriefed as early as possible, with the option of withdrawing their data if necessary, and the research should not reasonably be expected to cause them physical or emotional distress. There is a long history of ethical issues associated with psychological research using deception, including loss of faith in the investigator, lack of informed consent, and the insufficient impact of debriefings [94]. In the debate over deception, both critics and proponents have been vocal. Deception is not always morally objectionable; however, ethics committees should consider whether it is reasonable to withhold information from a participant based on the context of the study [95].

Researchers in the usable privacy and security community often use deception to mislead participants into believing that simulated risks are actual risks or that the study is not related to privacy or security. Nevertheless, the community has not agreed on a definition of deception, which underscores the importance of reporting how participants responded and were debriefed [68]. Research that misleads participants does not always report using deception, and not all efforts to avoid priming constitute deception, according to Distler et al. [68]. For instance, in one study the authors intentionally omitted information about identity theft to prevent priming and limit self-selection bias [99]. Likewise, researchers recruited Android users without mentioning that permissions would be the focus of their study [100]. Although the authors of both studies state that they used forms of deception, Distler et al. consider them instances of partial disclosure, since the researchers did not mislead participants or withhold information essential to their understanding of the study [68]. Consequently, there is a need for a clearer definition of what constitutes deception. It is also necessary to discuss what types of deception are ethically acceptable in usable privacy and security studies. To this end, Distler et al. [68] offer a list of questions that authors of deception studies should consider when reporting their studies and suggest that discussions of ethically acceptable deception should also include the measures taken by researchers to minimise harm, including trained experimenters and requirements for strict debriefings.

4.3 HCI Challenges Related to Privacy Technologies

4.3.1 Challenges of Explaining "Crypto Magic" and the Lack of Real-World Analogies

Privacy-enhancing techniques, such as Attribute-Based Credentials (ABCs) or homomorphic encryption, are often based on "crypto magic" that is usually counter-intuitive to users. Often, no real-world analogies exist that can accurately describe the privacy-enhancing functionality of such techniques. This also makes it difficult to construct suitable metaphors for illustrating the privacy functionality and for helping users create correct mental models. Janic et al. [81] conclude that the use of PETs by service providers could promote users' trust in those providers, but this may only hold if the privacy-enhancing mechanisms are understood by users.

HCI challenges for evoking comprehensive mental models and establishing end-user trust in cryptographic selective disclosure technologies have been researched for attribute-based credentials [82, 83] and for the German national identity card [84]. Both technologies implement credentials that enable user-controlled data minimisation, allowing the credential holder to selectively disclose attributes, or characteristics of those attributes, stored on the credential (e.g. for authorising a user to access a library in a specific region, they allow the holder to reveal only the fact that they live in a certain municipality, without revealing the exact address or any other information stored on the credential). User studies by Wästlund et al. [82] found, for instance, that using the metaphor of a card, from which attributes that should be withheld could be selectively blacked out, made test users believe that ABCs would work in the same fashion as commonly used non-digital plastic credentials. Further, the concept of an "adapted" card metaphor was tested, which showed only the selected information inside a newly created adapted card, to convey the notion that only the selected information in this card was sent to a service provider and nothing else. The user study showed that while the "adapted" card metaphor helped more than half of the test users understand that attributes could be selectively disclosed or hidden, better design paradigms for understanding the selective disclosure of attribute characteristics were still needed.

Other selective disclosure techniques related to ABCs are so-called malleable or redactable signatures. In contrast to ABCs, malleable signatures are restricted to traditional redactions or deletions ("blacking out") of specified text elements in digitally signed documents as a means of selectively disclosing the remaining (non-redacted) information. Blacking out text on paper documents (including text with hand-written signatures) has long been practised in the offline world. Hence, in an eHealth use case based on malleable signatures in the PRISMACLOUD EU project, which we outlined in Sect. 2.3.1, Alaqra et al. [86] used the stencil metaphor in UI mockups for blacking out information, illustrated by greying out text in the mockups (keeping it visible which information is redacted).

The user study conducted in [86], with walk-throughs in five focus group sessions with prospective patients (N = 32) of different technical knowledge, showed that the stencil metaphor worked well for most of the study participants in understanding the selective disclosure property of redactable signatures. The proposed UI mock-ups by Alaqra et al. [86] also seemed to work well for evoking correct mental models of the validity of the medical doctor's signature after the redaction had taken place, at least for lay users with no practical experience of using crypto tools.
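To give a rough feel for the selective-disclosure property that these metaphors try to convey, the following toy sketch commits to salted hashes of all attributes and lets the holder reveal only a chosen subset. It is our own illustration: unlike real ABCs or redactable signature schemes, it offers none of their stronger cryptographic guarantees (such as unlinkability), and it uses an HMAC merely as a stand-in for the issuer's signature.

# Toy selective-disclosure sketch (illustration only): the issuer "signs" salted
# hashes of all attributes; the holder later reveals only chosen attributes, and
# a verifier checks them against the signed digests. Real ABCs and redactable
# signatures rely on dedicated cryptography with stronger guarantees.
import hashlib, hmac, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer's signing key


def commit(attributes: dict):
    salted = {k: (secrets.token_hex(16), v) for k, v in attributes.items()}
    digests = {k: hashlib.sha256(f"{salt}:{v}".encode()).hexdigest()
               for k, (salt, v) in salted.items()}
    signature = hmac.new(ISSUER_KEY, json.dumps(digests, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()  # issuer "signs" the digest list
    return salted, digests, signature


def present(salted, reveal):
    """Holder discloses only the selected attributes (with their salts)."""
    return {k: salted[k] for k in reveal}


def verify(disclosed, digests, signature):
    expected = hmac.new(ISSUER_KEY, json.dumps(digests, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    return all(hashlib.sha256(f"{salt}:{v}".encode()).hexdigest() == digests[k]
               for k, (salt, v) in disclosed.items())


salted, digests, sig = commit({"name": "Alice", "municipality": "Karlstad",
                               "street": "Hidden Way 1"})
proof = present(salted, reveal=["municipality"])   # name and street stay hidden
print(verify(proof, digests, sig))                 # True: the disclosed attribute checks out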

4.3.2 Challenges and Need to Cater for "Digital-World" Analogies

In contrast to a text signed with a malleable signature, any modification (including redaction) of a text signed with a conventional digital signature invalidates that signature. In the usability study by Alaqra et al. [86] mentioned above, which involved malleably signed eHealth documents (EHR records), focus group participants with some technical background who were familiar with using crypto tools such as PGP for creating traditional digital signatures therefore doubted the validity of the doctor's (malleable) signature after the redaction. The valid malleable signature of the doctor and a valid signature of the patient (which is required for the redaction operation to make the patient accountable for the redaction) are represented in the mockups by green check icons on the right side of the screen (see Fig. 4.1). These participants assumed that a malleable signature would have similar properties to a conventional signature and thus did not trust the statement made by the UI mockups that the doctor's document signature would still be valid after the redaction, together with the signature of the patient who conducted the redaction.

These findings are similar to the results of a user study conducted by Lerner et al. [85], which reports that users with technical security knowledge lacked trust in novel email encryption tools that seemed to behave differently from the traditional email encryption tool, GNU Privacy Guard (GPG), that they were familiar with. Similar findings were also obtained in a user study [87] comprising 15 interviews with health professionals of different degrees of technical expertise for a privacy-preserving eHealth use case outlined in Chap. 2, in which medical data (electrocardiogram signals) of a patient is first homomorphically encrypted and then sent to a non-trusted cloud-based data analysis platform, which performs the analysis on the encrypted data and returns the result in encrypted form. Interviewees with research and technical expertise expressed concerns about trusting the feasibility of conducting data analysis on encrypted data and rather assumed that data would only be encrypted in transit [87]. Hence, users with some technical expertise may relate PETs to other, similar security or privacy technologies that they are familiar with and assume that the PETs have similar security and privacy characteristics.


Fig. 4.1 Redacted documents with valid malleable signature by the doctor represented by a green check icon [86]

This was also confirmed by a user study of commonly used metaphors for differential privacy, including the metaphor of pixelating a picture, conducted by Karegar et al. [88] with N = 30 interviewees. The study revealed that interviewees compared differential privacy with security techniques that they were aware of and made corresponding assumptions about its privacy functionality. For instance, test participants with some basic knowledge of encryption thought that differential privacy was (like encryption) reversible; participants who were familiar with VPNs thought of differential privacy in terms of selective disclosure (as VPNs hide the user's IP address); and participants familiar with firewalls compared differential privacy with access control. Therefore, the development of usable privacy metaphors for explaining PETs must take into account not only real-world analogies but also the digital-world knowledge that users with some technical security background may have about other technologies, and the analogies they might draw with them.
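The counter-intuitive idea of computing on encrypted data, which the interviewees in [87] found hard to trust, can be conveyed with a deliberately insecure toy: textbook (unpadded) RSA is multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product. This is our own illustration with tiny invented parameters; it is not the homomorphic scheme used in the eHealth case and must never be used in practice.

# Toy illustration of computing on ciphertexts: textbook (unpadded) RSA is
# multiplicatively homomorphic. Insecure and NOT the scheme used in the eHealth
# case above; it only conveys why analysis on encrypted data is possible at all.
p, q, e = 1009, 1013, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 6, 7
c = (enc(a) * enc(b)) % n                  # "analysis" performed on encrypted values
print(dec(c))                              # 42, without ever decrypting a or b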

4.3.3 Challenges of Usable Transparency-Enhancing Tools

As discussed in Chap. 2, both ex-ante and ex-post transparency are important legal principles that should enable data subjects to control their personal data and maintain agency.

Providing transparency about the processing of personal data, and making it possible for users to intervene in the processing, have also been identified as factors contributing to user trust. As earlier research has shown, trust in a system can be enhanced if its procedures are clear, transparent, and reversible, so that users feel in control [89, 90]. However, based on a survey of TETs, Janic et al. [81] conclude that the impact of increased transparency on users' trust has been insufficiently studied. Even though transparency promotes trust, it may also be detrimental to trust, especially if the provided information reveals, or gives the impression, that privacy risks or misuse may occur. This can make service providers reluctant to expand the concept of transparency to include information about privacy risks that is visible and accessible to consumers right when they need it.

Several aspects affect the effectiveness of usable transparency of online privacy provided to users. Researchers have studied, for example, the effects of personalising privacy notices (e.g. [100, 133]), their framing (e.g. [127–129]), shortened notices (e.g. [127, 130]), and the delivery methods of notices, including their timing, channel, and modality (e.g. [131, 132, 134]). For example, Gluck and colleagues [127] confirmed in their studies the utility of short-form policy information. Nonetheless, simplifying and shortening the information required for data-sharing decision-making is not as straightforward as it may seem. Although condensing lengthy legalistic privacy information into concise privacy notices raises awareness, taking this a step further by reducing privacy notices to include only practices users are not aware of had the opposite effect [127]. More specifically, Gluck et al. showed that participants with shorter notices performed similarly on practices that remained in the short notice, but significantly worse on those that were removed from it [127]. In a similar vein, Adjerid and colleagues demonstrate that privacy information is contextually sensitive [128]: increased disclosures are elicited by notices that emphasise protection, whereas notices framed as decreasing protection result in decreased disclosure, even when the disclosure risks remain the same [128]. Nonetheless, Adjerid et al. also show that introducing a slight misdirection (such as a 15 s delay between notices and disclosure decisions) that does not alter the risk of disclosure can mute or substantially reduce the ability of privacy-related information to impact disclosure, whether negatively or positively [128]. Hence, providing the right level of information, at the right time, with proper framing and mode of delivery, that is of interest and useful for the user without overloading the user, making the user unnecessarily suspicious or convinced, or, in general, preventing transparency from becoming "a sleight of privacy" [128] remains a challenging task.

It is also challenging to provide usable transparency about the entities processing users' information. Limitations of users' mental models regarding the Internet and personal data flows on the net have, for instance, been observed by others (e.g. [91]). Several user studies of different versions of a transparency-enhancing tool, the Data Track developed at Karlstad University, revealed the challenge of creating correct mental models of personal data flows in networked and cloud-based infrastructures [92, 93].
In particular, it was hard for users to understand whether data was stored or processed locally under their control, on a remote server, or somewhere in the cloud.

Understanding whether data is processed locally (and securely) under the users' control is, however, also important for establishing trust in a TET as a privacy-enhancing tool. For instance, different versions of the Data Track tool store (secured through encryption) and manage personal data exports from service providers, or other history logs of what data has been disclosed to which service provider, in order to make this information transparent to users. Therefore, it may be important for users to understand that the personal information handled to provide this transparency is secured and processed locally under their control. A discussion of ex-ante transparency issues, with a focus on providing effective consent forms that conform to requirements, is provided in Sect. 4.4.2.
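The following sketch illustrates the general idea of keeping a disclosure log encrypted locally on the user's device, in the spirit of, but not the implementation of, the Data Track; the file name, log fields, and key handling are invented for illustration.

# Minimal sketch of a locally encrypted disclosure log (illustration only, not
# the Data Track implementation): the history of what was disclosed to which
# service provider stays on the user's device, encrypted at rest with a key
# held locally by the user.
import json
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()       # in practice derived from a local user secret
box = Fernet(key)

log_entry = {"service": "example-shop.test", "data": ["email", "postal code"],
             "timestamp": "2024-01-15T10:03:00Z"}

token = box.encrypt(json.dumps(log_entry).encode())   # stored locally, encrypted
with open("disclosure_log.bin", "wb") as fh:
    fh.write(token)

with open("disclosure_log.bin", "rb") as fh:
    restored = json.loads(box.decrypt(fh.read()))
print(restored["service"])        # readable only with the locally held key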

4.4 HCI Challenges Related to Privacy Laws

4.4.1 The Discrepancy Between Privacy Laws and What People Need

Service providers are constantly challenged to adhere to privacy laws and regulations, such as the GDPR, which aim to protect users' privacy. The legal privacy requirements, however, have HCI implications, as they describe the "mental processes and behaviour of the end user that must be supported to adhere to the principle" [121]. Unless the public actively participates in this emerging data-driven society through data stewardship at all levels, the GDPR has little chance of transforming the way data is created and monetised. Considering the HCI implications of privacy laws can therefore reduce the gap between legal and user-centric transparency, intervenability, and consent. Although the European regulator may have wanted to adopt a human-centred approach, empirical studies such as [122] have revealed that a substantial portion of (Dutch) citizens do not feel very central to this regulatory process and rather feel that the GDPR was imposed on them. The question therefore arises as to whether and how these regulations enhance user privacy in practice, whether users are aware of privacy laws and their rights, whether data subject rights meet users' needs, and whether privacy rules and regulations make it more difficult for users to manage their data.

As explained in [123], successfully enforcing the GDPR will depend in several ways on the public's awareness, attitude, and literacy regarding privacy. A Eurobarometer survey on the GDPR from 2019 [80] showed that a majority of respondents from the different member states have heard of most of their rights guaranteed by the GDPR, and some have already exercised these rights. Nonetheless, the perceived efficacy and actual use of data subject rights can differ starkly from awareness and understanding of those rights. For example, as far as knowledge about the GDPR is concerned, the Dutch appear to be relatively well-versed [122]. However, in the Dutch people's opinion, as reported in [122], some data subject rights, including the right to access information or to request the removal of information, provide better protection for individual privacy and data protection than the right to data portability and the right to intervene in automated decision-making.

Similarly, researchers examined Canadians' awareness of their privacy rights and reported that around one in three respondents stated that they knew their privacy rights and how to protect them [124]. Overall, participants thought that the law offered weak, ineffective, and unregulated protection [124]. Individuals experience asymmetrical benefits and costs with respect to the GDPR, as the benefits are often indirect and more difficult to perceive, because fewer people exercise their GDPR rights or notice and benefit directly from GDPR-compliant data-sharing policies [123]. Consequently, more awareness of the benefits of the GDPR and a broader exercise of the data subject rights it provides could contribute to improving its enforcement, as well as enhancing digital privacy and personal data protection.

In many scenarios, privacy is considered a secondary goal by users, which contributes to the discrepancy between privacy laws as enforced in practice and what users desire [111]. Legislators and designers may also make assumptions about users' expectations and needs that differ from their actual ones. For example, under current practices, users are required to read long, jargon-filled notices detailing how their data will be or is being processed, comprehend the information, learn about their options, and finally decide how to react, all of which place a heavy burden on users, considering how focused they are on their primary goals when using the digital world. We discuss the specific HCI implications concerning notice and choice in Sect. 4.4.2.

4.4.2 Problems with Notice and Choice

Consent forms and privacy notices have proven ineffective in a variety of contexts, since users simply agree without reviewing and understanding the policies in order to achieve their primary goals. Even if they wished to do so, users would have a hard time spending the hundreds of hours, as estimated by McDonald and Cranor in 2008 [103], needed to read all of the privacy policies of the websites they visit every year. As privacy policies have become more detailed and delineated since the GDPR [101, 102], the time required may now be considerably greater. Although the GDPR stipulates that policy information should be provided in a "concise, transparent, intelligible, and easily accessible form, using clear and plain language" (Article 12 (1)) [104], it does not clarify what constitutes effective consent that enables users to make informed decisions. Hence, the GDPR only sets forth the legal requirements for consent, which, if obtained properly, allow users to retain control over their data; these requirements have been further explained and elaborated by the European Data Protection Board (EDPB) in its guidelines on consent under the GDPR [105]. Privacy notice problems affect service providers that rely on informed consent as the legal basis for processing personal data [110]. Consent forms and privacy notices have several issues that make it difficult to attract users' attention, inform them, and enable them to keep effective control over their data. In particular, privacy notices that include policy information aim to satisfy the needs of different stakeholders [108], resulting in long, jargon-filled, and difficult-to-understand privacy notices [106, 107].


In spite of the fact that privacy policies should demonstrate service providers’ compliance with legal requirements, users expect simple information about data practices and privacy controls. There are other reasons why users do not pay attention to privacy notices, do not read and comprehend them fully, but still consent to use different services [116, 117], such as notice fatigue and habituation, a lack of choice, decoupled notices [108, 109], and the existence of dark patterns. The ineffectiveness of the privacy notice and choice paradigm is also partly related to a lack of usability for choices. According to Bannihatti Kumar et al., even when websites offer opt-outs as options, hyperlinks may not always work and using them may take more time than users are willing to spend [135]. Further, Habib et al. report that users have difficulty finding opt-out choices because the headings under which they appear are inconsistent across websites [136]. In addition, Korff and Böhme [137] have shown that participants who were confronted with a larger number of privacy options reported more negative feelings, experienced more regret, and were less satisfied with the choices they made. All of these studies show the importance of exploring solutions towards more usable choices accompanying notices.

Concerning dark patterns, Utz et al. [114] report that dark pattern strategies such as highlighting the “Accept” button in a binary choice with a “Decline” button, and pre-selected choices of cookies, have a significant impact on third-party cookie acceptance. Similarly, a study by Machuletz and Böhme [115] found that the default “Select all” button significantly increases consent, with participants often regretting their choices after the experiment. Furthermore, researchers who oppose current consent practices, such as Carolan [112], argue that legal consent requirements not only require users to choose actively but also assume they are capable of choosing on their own. In the same vein, Schermer et al. [110] suggest that reducing the legal requirements for consent can be achieved if users in different societies agree about what constitutes consent and what constitutes fair use of data. Nonetheless, obtaining a common understanding of consent and fair use of data in order to relax consent requirements is impractical, since privacy is a very complex concept that depends on many factors such as context, social norms, and individual preferences, among others. In an alternative approach, Nissen et al. [113] propose that users delegate consent decisions to friends, experts, groups, and artificially intelligent entities through an ecosystem of third parties. However, besides still relying on users’ ability to decide if and how to consent, the proposal by Nissen et al. [113] poses a new decision challenge by adding further choices about when to consent and when to delegate.

On the other hand, researchers who are looking for solutions for usable informed consent argue that the problem is not fundamentally inherent in consent and transparency requirements, but rather in how notices and choices are currently designed [109, 118, 119]. It has been found that minor design decisions and the inclusion of dark patterns can have a significant impact on how people respond to consent forms and how they make choices [114, 115, 120]. However, scholars have not yet agreed on which design patterns in consent forms are appropriate for meeting valid consent requirements, nor have they specified which dark patterns violate which legal requirements [119].
To facilitate usable informed consent, novel design solutions need to be explored by mapping legal requirements to HCI solutions that optimise consent in different contexts, and it is necessary to identify how specific dark patterns relate to legal requirements and user experience. According to Gray et al. [119], a transdisciplinary dialogue needs to combine the perspectives of various disciplines (e.g. HCI, design, UX, psychology, law) to ensure that the choice of the user meets all the consent requirements to qualify as valid: free, informed, specific, unambiguous, readable, and accessible.

References 1. Pater, J., Coupe, A., Pfafman, R., Phelan, C., Toscos, T. & Jacobs, M. Standardizing reporting of participant compensation in HCI: A systematic literature review and recommendations for the field. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–16 (2021) 2. Beck, K. Academic researcher decision-making processes for research participant compensation. (University of Iowa, 2019) 3. Stephanidis, C., Salvendy, G., Antona, M., Chen, J., Dong, J., Duffy, V., Fang, X., Fidopiastis, C., Fragomeni, G., Fu, L. & Others Seven HCI grand challenges. International Journal Of Human-Computer Interaction. 35, 1229–1269 (2019) 4. Mathis, F., Vaniea, K. & Khamis, M. Prototyping usable privacy and security systems: Insights from experts. International Journal Of Human-Computer Interaction. 38, 468–490 (2022) 5. Vines, T., Albert, A., Andrew, R., Débarre, F., Bock, D., Franklin, M., Gilbert, K., Moore, J., Renaut, S. & Rennison, D. The Availability of Research Data Declines Rapidly with Article Age. Current Biology. 24, 94–97 (2014) 6. Wacharamanotham, C., Eisenring, L., Haroz, S. & Echtler, F. Transparency of CHI Research Artifacts: Results of a Self-Reported Survey. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2020) 7. Kay, M., Haroz, S., Guha, S., Dragicevic, P. & Wacharamanotham, C. Moving Transparent Statistics Forward at CHI. Proceedings Of The 2017 CHI Conference Extended Abstracts On Human Factors In Computing Systems. pp. 534–541 (2017) 8. Kidwell, M., Lazarevi´c, L., Baranski, E., Hardwicke, T., Piechowski, S., Falkenberg, L., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C. & Others Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology. 14, e1002456 (2016) 9. El Emam, K., Jonker, E., Arbuckle, L. & Malin, B. A systematic review of re-identification attacks on health data. PloS One. 6, e28071 (2011) 10. Lee, Y. & Lee, K. What are the optimum quasi-identifiers to re-identify medical records?. 2018 20th International Conference On Advanced Communication Technology (ICACT). pp. 1025–1033 (2018) 11. Wiles, R., Charles, V., Crow, G. & Heath, S. Researching researchers: lessons for research ethics. Qualitative Research. 6, 283–299 (2006) 12. Abbott, J., MacLeod, H., Nurain, N., Ekobe, G. & Patil, S. Local standards for anonymization practices in health, wellness, accessibility, and aging research at CHI. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2019) 13. Veretilnykova, M. & Dogruel, L. Nudging Children and Adolescents toward Online Privacy: An Ethical Perspective. Journal Of Media Ethics. 36, 128–140 (2021) 14. Renaud, K. & Zimmermann, V. Ethical guidelines for nudging in information security & privacy. International Journal Of Human-Computer Studies. 120 pp. 22–35 (2018)


15. Acquisti, A., Adjerid, I., Balebako, R., Brandimarte, L., Cranor, L., Komanduri, S., Leon, P., Sadeh, N., Schaub, F., Sleeper, M. & Others Nudges for privacy and security: Understanding and assisting users’ choices online. ACM Computing Surveys (CSUR). 50, 1–41 (2017) 16. Choe, E., Jung, J., Lee, B. & Fisher, K. Nudging People Away from Privacy-Invasive Mobile Apps through Visual Framing. Human-Computer Interaction – INTERACT 2013. pp. 74–91 (2013) 17. Egelman, S., Sotirakopoulos, A., Muslukhov, I., Beznosov, K. & Herley, C. Does my password go up to eleven? The impact of password meters on password selection. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2379–2388 (2013) 18. Tahaei, M., Frik, A. & Vaniea, K. Deciding on Personalized Ads: Nudging Developers About User Privacy. Seventeenth Symposium On Usable Privacy And Security (SOUPS 2021). pp. 573–596 (2021) 19. Masaki, H., Shibata, K., Hoshino, S., Ishihama, T., Saito, N. & Yatani, K. Exploring Nudge Designs to Help Adolescent SNS Users Avoid Privacy and Safety Threats. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–11 (2020) 20. Zibaei, S., Malapaya, D., Mercier, B., Salehi-Abari, A. & Thorpe, J. Do Password Managers Nudge Secure (Random) Passwords?. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 581–597 (2022) 21. Acar, Y., Stransky, C., Wermke, D., Mazurek, M. & Fahl, S. Security developer studies with github users: Exploring a convenience sample. Thirteenth Symposium On Usable Privacy And Security. pp. 81–95 (2017) 22. Ghaiumy Anaraky, R., Li, Y. & Knijnenburg, B. Difficulties of measuring culture in privacy studies. Proceedings Of The ACM On Human-Computer Interaction. 5, 1–26 (2021) 23. Sawaya, Y., Sharif, M., Christin, N., Kubota, A., Nakarai, A. & Yamada, A. Self-confidence trumps knowledge: A cross-cultural study of security behavior. Proceedings Of The 2017 CHI Conference On Human Factors In Computing Systems. pp. 2202–2214 (2017) 24. Bellman, S., Johnson, E., Kobrin, S. & Lohse, G. International differences in information privacy concerns: A global survey of consumers. The Information Society. 20, 313–324 (2004) 25. Tahaei, M. & Vaniea, K. Recruiting Participants With Programming Skills: A Comparison of Four Crowdsourcing Platforms and a CS Student Mailing List. CHI Conference On Human Factors In Computing Systems. pp. 1–15 (2022) 26. Cao, J. & Everard, A. User attitude towards instant messaging: The effect of espoused national cultural values on awareness and privacy. Journal Of Global Information Technology Management. 11, 30–57 (2008) 27. Lee, S., Trimi, S. & Kim, C. The impact of cultural differences on technology adoption. Journal Of World Business. 48, 20–29 (2013) 28. Hofstede, G. & Hofstede, G. Culture’s consequences: Comparing values, behaviors, institutions and organizations across nations. (sage,2001) 29. Triandis, H. & Gelfand, M. Converging measurement of horizontal and vertical individualism and collectivism.. Journal Of Personality And Social Psychology. 74, 118 (1998) 30. Li, Y., Rho, E. & Kobsa, A. Cultural differences in the effects of contextual factors and privacy concerns on users’ privacy decision on social networking sites. Behaviour & Information Technology. 41, 655–677 (2022) 31. Constantinides, A., Belk, M., Fidas, C. & Samaras, G. On cultural-centered graphical passwords: leveraging on users’ cultural experiences for improving password memorability. 
Proceedings Of The 26th Conference On User Modeling, Adaptation And Personalization. pp. 245–249 (2018) 32. Cho, H., Knijnenburg, B., Kobsa, A. & Li, Y. Collective Privacy Management in Social Media: A Cross-Cultural Validation. ACM Trans. Comput.-Hum. Interact.. 25 (2018,6)


33. Islami, L., Fischer-Hübner, S. & Papadimitratos, P. Capturing drivers’ privacy preferences for intelligent transportation systems: An intercultural perspective. Computers & Security. 123 pp. 102913 (2022) 34. Murmann, P., Beckerle, M., Fischer-Hübner, S. & Reinhardt, D. Reconciling the what, when and how of privacy notifications in fitness tracking scenarios. Pervasive And Mobile Computing. 77 pp. 101480 (2021) 35. Redmiles, E. “Should I Worry?” A Cross-Cultural Examination of Account Security Incident Response. 2019 IEEE Symposium On Security And Privacy (SP). pp. 920–934 (2019) 36. Wang, Y., Xia, H. & Huang, Y. Examining American and Chinese internet users’ contextual privacy preferences of behavioral advertising. Proceedings Of The 19th ACM Conference On Computer-Supported Cooperative Work & Social Computing. pp. 539–552 (2016) 37. Zhao, C., Hinds, P. & Gao, G. How and to whom people share: the role of culture in selfdisclosure in online communities. Proceedings Of The ACM 2012 Conference On Computer Supported Cooperative Work. pp. 67–76 (2012) 38. Sannon, S. & Forte, A. Privacy Research with Marginalized Groups: What We Know, What’s Needed, and What’s Next. Proceedings Of The ACM On Human-Computer Interaction. 6, 1–33 (2022) 39. Cook, K. Marginalized populations. The SAGE Encyclopedia Of Qualitative Research Methods. pp. 495–496 (2008) 40. Hall, J., Stevens, P. & Meleis, A. Marginalization: A guiding concept for valuing diversity in nursing knowledge development. Advances In Nursing Science. 16, 23–41 (1994) 41. DeVito, M., Birnholtz, J., Hancock, J., French, M. & Liu, S. How people form folk theories of social media feeds and what it means for how we study self-presentation. Proceedings Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–12 (2018) 42. Nova, F., DeVito, M., Saha, P., Rashid, K., Roy Turzo, S., Afrin, S. & Guha, S. “Facebook Promotes More Harassment” Social Media Ecosystem, Skill and Marginalized Hijra Identity in Bangladesh. Proceedings Of The ACM On Human-Computer Interaction. 5, 1–35 (2021) 43. Guberek, T., McDonald, A., Simioni, S., Mhaidli, A., Toyama, K. & Schaub, F. Keeping a low profile? Technology, risk and privacy among undocumented immigrants. Proceedings Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–15 (2018) 44. Seo, H., Britton, H., Ramaswamy, M., Altschwager, D., Blomberg, M., Aromona, S., Schuster, B., Booton, E., Ault, M. & Wickliffe, J. Returning to the digital world: Digital technology use and privacy management of women transitioning from incarceration. New Media & Society. 24, 641–666 (2022) 45. Trepte, S. & Masur, P. Need for privacy. Encyclopedia Of Personality And Individual Differences. pp. 3132–3135 (2020) 46. Lutz, C. & Ranzini, G. Where dating meets data: Investigating social and institutional privacy concerns on Tinder. Social Media+ Society. 3, 2056305117697735 (2017) 47. Das, S., Edwards, W., Kennedy-Mayo, D., Swire, P. & Wu, Y. Privacy for the People? Exploring Collective Action as a Mechanism to Shift Power to Consumers in End-User Privacy. IEEE Security & Privacy. 19, 66–70 (2021) 48. Whitten, A. & Tygar, J. Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0.. USENIX Security Symposium. 348 pp. 169–184 (1999) ˙ 49. Winter, P., Edmundson, A., Roberts, L., Dutkowska-Zuk, A., Chetty, M. & Feamster, N. How do tor users interact with onion services?. 27th USENIX Security Symposium (USENIX Security 18). pp. 411–428 (2018) 50. Clark, J., Oorschot, P. & Adams, C. 
Usability of Anonymous Web Browsing: An Examination of Tor Interfaces and Deployability. Proceedings Of The 3rd Symposium On Usable Privacy And Security. pp. 41–51 (2007)


51. Norcie, G., Blythe, J., Caine, K. & Camp, L. Why Johnny can’t blow the whistle: Identifying and reducing usability issues in anonymity systems. Workshop On Usable Security. 6 pp. 50–60 (2014) 52. Gallagher, K., Patil, S. & Memon, N. New Me: Understanding Expert and Non-Expert Perceptions and Usage of the Tor Anonymity Network. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 385–398 (2017) 53. Harborth, D., Pape, S. & Rannenberg, K. Explaining the Technology Use Behavior of PrivacyEnhancing Technologies: The Case of Tor and JonDonym.. Proc. Priv. Enhancing Technol.. 2020, 111–128 (2020) 54. Inglesant, P. & Sasse, M. The true cost of unusable password policies: password use in the wild. Proceedings Of The Sigchi Conference On Human Factors In Computing Systems. pp. 383–392 (2010) 55. Norman, D. User-Centered System Design: New Perspectives on Human-Computer Interaction. (CRC Press, 1986) 56. International Organization for Standardization ISO 9241-210:2010(E): Ergonomics of humansystem interaction – Part 210: Human-centered design for interactive systems. (ISO,2010) 57. Framner, E., Fischer-Hübner, S., Lorünser, T., Alaqra, A. & Pettersson, J. Making secret sharing based cloud storage usable. Information & Computer Security. (2019) 58. Colnago, J., Cranor, L., Acquisti, A. & Stanton, K. Is it a concern or a preference? An investigation into the ability of privacy scales to capture and distinguish granular privacy constructs. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 331–346 (2022) 59. Kokolakis, S. Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Computers & Security. 64 pp. 122–134 (2017) 60. Solove, D. The myth of the privacy paradox. Geo. Wash. L. Rev.. 89 pp. 1 (2021) 61. Brown, B. Studying the internet experience. HP Laboratories Technical Report HPL. 49 (2001) 62. Norberg, P., Horne, D. & Horne, D. The privacy paradox: Personal information disclosure intentions versus behaviors. Journal Of Consumer Affairs. 41, 100–126 (2007) 63. Gerber, N., Gerber, P. & Volkamer, M. Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior. Computers & Security. 77 pp. 226–261 (2018) 64. Acquisti, A. & Grossklags, J. Privacy and rationality in individual decision making. IEEE Security & Privacy. 3, 26–33 (2005) 65. Barkhuus, L. The mismeasurement of privacy: using contextual integrity to reconsider privacy in HCI. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 367–376 (2012) 66. Dienlin, T. & Trepte, S. Is the privacy paradox a relic of the past? An in-depth analysis of privacy attitudes and privacy behaviors. European Journal Of Social Psychology. 45, 285–297 (2015) 67. Nissenbaum, H. Privacy as contextual integrity. Wash. L. Rev.. 79 pp. 119 (2004) 68. Distler, V., Fassl, M., Habib, H., Krombholz, K., Lenzini, G., Lallemand, C., Cranor, L. & Koenig, V. A Systematic Literature Review of Empirical Methods and Risk Representation in Usable Privacy and Security Research. ACM Transactions On Computer-Human Interaction (TOCHI). 28, 1–50 (2021) 69. Schmuckler, M. What is ecological validity? A dimensional analysis. Infancy. 2, 419–436 (2001) 70. Schechter, S., Dhamija, R., Ozment, A. & Fischer, I. The emperor’s new security indicators. 2007 IEEE Symposium On Security And Privacy (SP’07). pp. 51–65 (2007) 71. Forget, A., Komanduri, S., Acquisti, A., Christin, N., Cranor, L. & Telang, R. 
Security Behavior Observatory: Infrastructure for Long-term Monitoring of Client Machines (CMU-CyLab-14009). (Carnegie Mellon University,2014)


72. Anderson, B., Kirwan, C., Jenkins, J., Eargle, D., Howard, S. & Vance, A. How polymorphic warnings reduce habituation in the brain: Insights from an fMRI study. Proceedings Of The 33rd Annual ACM Conference On Human Factors In Computing Systems. pp. 2883–2892 (2015) 73. Samat, S. & Acquisti, A. Format vs. content: the impact of risk and presentation on disclosure decisions. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 377–384 (2017) 74. Garfinkel, S. & Lipford, H. Usable security: History, themes, and challenges. Synthesis Lectures On Information Security, Privacy, And Trust. 5, 1–124 (2014) 75. Egelman, S. & Peer, E. Scaling the security wall: Developing a security behavior intentions scale (seBIS). Proceedings Of The 33rd Annual ACM Conference On Human Factors In Computing Systems. pp. 2873–2882 (2015) 76. Mneimneh, Z., Tourangeau, R., Pennell, B., Heeringa, S. & Elliott, M. Cultural variations in the effect of interview privacy and the need for social conformity on reporting sensitive information. Journal Of Official Statistics. 31, 673–697 (2015) 77. Rocher, L., Hendrickx, J. & De Montjoye, Y. Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications. 10, 1–9 (2019) 78. Nautsch, A., Jiménez, A., Treiber, A., Kolberg, J., Jasserand, C., Kindt, E., Delgado, H., Todisco, M., Hmani, M., Mtibaa, A. & Others Preserving privacy in speaker and speech characterisation. Computer Speech & Language. 58 pp. 441–480 (2019) 79. Association, A. & Others Ethical principles of psychologists and code of conduct. American Psychologist. 57, 1060–1073 (2002) 80. EU Commission Special Eurobarometer 487a – The General Data Protection Regulation. (2019) 81. Janic, M., Wijbenga, J. & Veugen, T. Transparency Enhancing Tools (TETs): An Overview. 2013 Third Workshop On Socio-Technical Aspects In Security And Trust. pp. 18–25 (2013) 82. Wästlund, E., Angulo, J. & Fischer-Hübner, S. Evoking comprehensive mental models of anonymous credentials. Open Problems In Network Security: IFIP WG 11.4 International Workshop, INetSec 2011, Lucerne, Switzerland, June 9, 2011, Revised Selected Papers. pp. 1–14 (2012) 83. Benenson, Z., Girard, A., Krontiris, I., Liagkou, V., Rannenberg, K. & Stamatiou, Y. User acceptance of privacy-abcs: An exploratory study. Human Aspects Of Information Security, Privacy, And Trust: Second International Conference, HAS 2014, Held As Part Of HCI International 2014, Heraklion, Crete, Greece, June 22–27, 2014. Proceedings 2. pp. 375–386 (2014) 84. Harbach, M., Fahl, S., Rieger, M. & Smith, M. On the acceptance of privacy-preserving authentication technology: the curious case of national identity cards. Privacy Enhancing Technologies: 13th International Symposium, PETS 2013, Bloomington, IN, USA, July 10–12, 2013. Proceedings 13. pp. 245–264 (2013) 85. Lerner, A., Zeng, E. & Roesner, F. Confidante: Usable Encrypted Email: A Case Study with Lawyers and Journalists. 2017 IEEE European Symposium On Security And Privacy (EuroS&P). pp. 385–400 (2017) 86. Alaqra, A., Fischer-Hübner, S. & Framner, E. Enhancing Privacy Controls for Patients via a Selective Authentic Electronic Health Record Exchange Service: Qualitative Study of Perspectives by Medical Professionals and Patients. J Med Internet Res. 20, e10954 (2018,12), https:// www.jmir.org/2018/12/e10954/ 87. Alaqra, A., Kane, B. & Fischer-Hübner, S. 
Machine Learning-Based Analysis of Encrypted Medical Data in the Cloud: Qualitative Study of Expert Stakeholders’ Perspectives. JMIR Hum Factors. 8, e21810 (2021,9), https://humanfactors.jmir.org/2021/3/e21810/ 88. Karegar, F., Alaqra, A. & Fischer-Hübner, S. Exploring User-Suitable Metaphors for Differentially Private Data Analyses. 18th Symposium On Usable Privacy And Security (SOUPS), Boston, United States, August 7–9, 2022.. pp. 175–193 (2022) 89. Fischer-Hubner, S. Trust in PRIME. Proceedings Of The Fifth IEEE International Symposium On Signal Processing And Information Technology, 2005.. pp. 552–559 (2005)


90. Crane, S., Lacohée, H. & Zaba, S. Trustguide-trust in ICT. BT Technology Journal. 24, 69–80 (2006) 91. Kang, R., Dabbish, L., Fruchter, N. & Kiesler, S. my data just goes everywhere:” user mental models of the internet and implications for privacy and security. Eleventh Symposium On Usable Privacy And Security (SOUPS 2015). pp. 39–52 (2015) 92. Karegar, F., Pulls, T. & Fischer-Hübner, S. Visualizing exports of personal data by exercising the right of data portability in the data track-are people ready for this?. Privacy And Identity Management. Facing Up To Next Steps: 11th IFIP WG 9.2, 9.5, 9.6/11.7, 11.4, 11.6/SIG 9.2. 2 International Summer School, Karlstad, Sweden, August 21–26, 2016, Revised Selected Papers 11. pp. 164–181 (2016) 93. Fischer-Hübner, S., Angulo, J., Karegar, F. & Pulls, T. Transparency, privacy and trustTechnology for tracking and controlling my data disclosures: Does this work?. Trust Management X: 10th IFIP WG 11.11 International Conference, IFIPTM 2016, Darmstadt, Germany, July 18–22, 2016, Proceedings 10. pp. 3–14 (2016) 94. Baumrind, D. Research using intentional deception: Ethical issues revisited.. American Psychologist. 40, 165 (1985) 95. Athanassoulis, N. & Wilson, J. When is deception in research ethical?. Clinical Ethics. 4, 44–49 (2009) 96. Dechand, S., Schürmann, D., Busse, K., Acar, Y., Fahl, S. & Smith, M. An Empirical Study of Textual Key-Fingerprint Representations. 25th USENIX Security Symposium (USENIX Security 16). pp. 193–208 (2016) 97. Haque, S., Scielzo, S. & Wright, M. Applying psychometrics to measure user comfort when constructing a strong password. 10th Symposium On Usable Privacy And Security (SOUPS 2014). pp. 231–242 (2014) 98. Volkamer, M., Gutmann, A., Renaud, K., Gerber, P. & Mayer, P. Replication Study: A CrossCountry Field Observation Study of Real World PIN Usage at ATMs and in Various Electronic Payment Scenarios. Fourteenth Symposium On Usable Privacy And Security (SOUPS 2018). pp. 1–11 (2018) 99. Zou, Y., Mhaidli, A., McCall, A. & Schaub, F. “I’ve Got Nothing to Lose”: Consumers’ Risk Perceptions and Protective Actions after the Equifax Data Breach. Fourteenth Symposium On Usable Privacy And Security (SOUPS 2018). pp. 197–216 (2018) 100. Harbach, M., Hettig, M., Weber, S. & Smith, M. Using personal examples to improve risk communication for security & privacy decisions. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2647–2656 (2014) 101. Degeling, M., Utz, C., Lentzsch, C., Hosseini, H., Schaub, F. & Holz, T. We value your privacy... now take some cookies: Measuring the GDPR’s impact on web privacy. ArXiv Preprint ArXiv:1808.05096. (2018) 102. Linden, T., Khandelwal, R., Harkous, H. & Fawaz, K. The privacy policy landscape after the GDPR. Proceedings On Privacy Enhancing Technologies. 1 pp. 47–64 (2020) 103. McDonald, A. & Cranor, L. The cost of reading privacy policies. Isjlp. 4 pp. 543 (2008) 104. The European Parliament and the Council of the European Union Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal Of The European Union L. 119, 4.5.2016. pp. 1–88 (2016) 105. European Data Protection Board Guidelines 05/2020 on consent under Regulation 2016/679, Version 1.1, Adopted on 4 May 2020. 
(2020), https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf 106. Cate, F. The limits of notice and choice. IEEE Security & Privacy. 8, 59–62 (2010)


107. Luger, E., Moran, S. & Rodden, T. Consent for all: revealing the hidden complexity of terms and conditions. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2687–2696 (2013) 108. Schaub, F., Balebako, R. & Cranor, L. Designing effective privacy notices and controls. IEEE Internet Computing. 21, 70–77 (2017) 109. Schaub, F., Balebako, R., Durity, A. & Cranor, L. A design space for effective privacy notices. Eleventh Symposium On Usable Privacy And Security (SOUPS 2015). pp. 1–17 (2015) 110. Schermer, B., Custers, B. & Hof, S. The crisis of consent: How stronger legal protection may lead to weaker consent in data protection. Ethics And Information Technology. 16, 171–182 (2014) 111. Custers, B., Der Hof, S., Schermer, B., Appleby-Arnold, S. & Brockdorff, N. Informed consent in social media use-the gap between user expectations and EU personal data protection law. SCRIPTed. 10 pp. 435 (2013) 112. Carolan, E. The continuing problems with online consent under the EU’s emerging data protection principles. Computer Law & Security Review. 32, 462–473 (2016) 113. Nissen, B., Neumann, V., Mikusz, M., Gianni, R., Clinch, S., Speed, C. & Davies, N. Should I agree? Delegating consent decisions beyond the individual. Proceedings Of The 2019 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2019) 114. Utz, C., Degeling, M., Fahl, S., Schaub, F. & Holz, T. (Un) informed consent: Studying GDPR consent notices in the field. Proceedings Of The 2019 Acm Sigsac Conference On Computer And Communications Security. pp. 973–990 (2019) 115. Machuletz, D. & Böhme., R. Multiple Purposes, Multiple Problems: A User Study of Consent Dialogs after GDPR. Proceedings On Privacy Enhancing Technologies. 2 pp. 481–498 (2020) 116. Böhme, R. & Köpsell, S. Trained to accept? A field experiment on consent dialogs. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 2403–2406 (2010) 117. Pollach, I. What’s wrong with online privacy policies?. Communications Of The ACM. 50, 103–108 (2007) 118. Karegar, F., Pettersson, J. & Fischer-Hübner, S. The Dilemma of User Engagement in Privacy Notices: Effects of Interaction Modes and Habituation on User Attention. ACM Trans. Priv. Secur.. 23 (2020,2) 119. Gray, C., Santos, C., Bielova, N., Toth, M. & Clifford, D. Dark patterns and the legal requirements of consent banners: An interaction criticism perspective. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. pp. 1–18 (2021) 120. Nouwens, M., Liccardi, I., Veale, M., Karger, D. & Kagal, L. Dark patterns after the GDPR: Scraping consent pop-ups and demonstrating their influence. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–13 (2020) 121. Patrick, A. & Kenny, S. From privacy legislation to interface design: Implementing information privacy in human-computer interactions. International Workshop On Privacy Enhancing Technologies. pp. 107–124 (2003) 122. Strycharz, J., Ausloos, J. & Helberger, N. Data protection or data frustration? Individual perceptions and attitudes towards the GDPR. Eur. Data Prot. L. Rev.. 6 pp. 407 (2020) 123. Rughinis, R., Rughinis, C., Vulpe, S. & Rosner, D. From social netizens to data citizens: Variations of GDPR awareness in 28 European countries. Computer Law & Security Review. 42 pp. 105585 (2021) 124. Zhang-Kennedy, L. & Chiasson, S. “Whether it’s moral is a whole other story”: Consumer perspectives on privacy regulations and corporate data practices. 
Seventeenth Symposium On Usable Privacy And Security (SOUPS 2021). pp. 197–216 (2021) 125. Tang, J., Birrell, E. & Lerner, A. Replication: How Well Do My Results Generalize Now? The External Validity of Online Privacy and Security Surveys. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 367–385 (2022)


126. Kang, R., Brown, S., Dabbish, L. & Kiesler, S. Privacy Attitudes of Mechanical Turk Workers and the US. Public. 10th Symposium On Usable Privacy And Security (SOUPS 2014). pp. 37–49 (2014) 127. Gluck, J., Schaub, F., Friedman, A., Habib, H., Sadeh, N., Cranor, L. & Agarwal, Y. How short is too short? implications of length and framing on the effectiveness of privacy notices. Twelfth Symposium On Usable Privacy And Security (SOUPS 2016). pp. 321–340 (2016) 128. Adjerid, I., Acquisti, A., Brandimarte, L. & Loewenstein, G. Sleights of privacy: Framing, disclosures, and the limits of transparency. Proceedings Of The Ninth Symposium On Usable Privacy And Security. pp. 1–11 (2013) 129. Acquisti, A., Adjerid, I. & Brandimarte, L. Gone in 15 seconds: The limits of privacy transparency and control. IEEE Security & Privacy. 11, 72–74 (2013) 130. Kelley, P., Cesca, L., Bresee, J. & Cranor, L. Standardizing privacy notices: an online study of the nutrition label approach. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 1573–1582 (2010) 131. Patil, S., Hoyle, R., Schlegel, R., Kapadia, A. & Lee, A. Interrupt now or inform later? Comparing immediate and delayed privacy feedback. Proceedings Of The 33rd Annual ACM Conference On Human Factors In Computing Systems. pp. 1415–1418 (2015) 132. Balebako, R., Schaub, F., Adjerid, I., Acquisti, A. & Cranor, L. The impact of timing on the salience of smartphone app privacy notices. Proceedings Of The 5th Annual ACM CCS Workshop On Security And Privacy In Smartphones And Mobile Devices. pp. 63–74 (2015) 133. Kobsa, A. & Teltzrow, M. Contextualized communication of privacy practices and personalization benefits: Impacts on users’ data sharing and purchase behavior. Privacy Enhancing Technologies: 4th International Workshop, PET 2004, Toronto, Canada, May 26–28, 2004. Revised Selected Papers 4. pp. 329–343 (2005) 134. Balebako, R., Jung, J., Lu, W., Cranor, L. & Nguyen, C. “Little brothers watching you” raising awareness of data leaks on smartphones. Proceedings Of The Ninth Symposium On Usable Privacy And Security. pp. 1–11 (2013) 135. Bannihatti Kumar, V., Iyengar, R., Nisal, N., Feng, Y., Habib, H., Story, P., Cherivirala, S., Hagan, M., Cranor, L., Wilson, S. & Others Finding a choice in a haystack: Automatic extraction of opt-out statements from privacy policy text. Proceedings Of The Web Conference 2020. pp. 1943–1954 (2020) 136. Habib, H., Pearman, S., Wang, J., Zou, Y., Acquisti, A., Cranor, L., Sadeh, N. & Schaub, F. “It’s a Scavenger Hunt”: Usability of Websites’ Opt-Out and Data Deletion Choices. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–12 (2020) 137. Korff, S. & Böhme, R. Too Much Choice: End-User Privacy Decisions in the Context of Choice Proliferation. 10th Symposium On Usable Privacy And Security (SOUPS 2014). pp. 69–87 (2014,7) 138. Suh, J. & Metzger, M. Privacy Beyond the Individual Level. Modern Socio-Technical Perspectives On Privacy. pp. 91–109 (2022) 139. Li, Y., Kobsa, A., Knijnenburg, B., Nguyen, M. & Others Cross-Cultural Privacy Prediction.. Proc. Priv. Enhancing Technol.. 2017, 113–132 (2017) 140. Li, Y. Cross-cultural privacy differences. Modern Socio-technical Perspectives On Privacy. pp. 267–292 (2022)

5 Addressing Challenges: A Way Forward

5.1 Introduction

While the usability of privacy and related human factors constitute a major challenge for research and practice, promising approaches and useful solutions for addressing these challenges exist as well. Following up on the previous chapter, in which we discussed the challenges of usable privacy, we investigate and discuss here a selection of approaches and solutions that have been proposed, implemented, tested, and/or successfully applied for advancing usable privacy. We will also demonstrate how fundamental privacy principles of the EU General Data Protection Regulation (GDPR) can be mapped to Human-Computer Interaction (HCI) requirements and, ultimately, to HCI solutions.

5.2 Human-Centred and Privacy by Design Approaches Combined

Prior to delving into potential solutions for specific challenges, it is crucial to underscore the significance of adopting a human-centred design approach when crafting usable privacy by design solutions. To elicit end-user requirements and evaluate and improve solutions, end users must be involved from the beginning and throughout the entire development cycle. The importance of involving end users, who should ultimately benefit from privacy by design, as stakeholders in the privacy by design process, as well as involving experts from multiple disciplines including usability design was also highlighted earlier by Tsormpatzoudi et al. [2]. Also, Ann Cavoukian emphasised that the privacy by design principle “Respect for Privacy” comprises the need for User Interfaces (UIs) to be “human-centred, user-centric and user-friendly, so that informed privacy decision may be reliably exercised” [3]. Hence, particularly in the realm of UI development, merging privacy by design principles with human-centred approaches is imperative. This fusion requires the active involvement of both end users and data protection experts as stakeholders. Their comprehensive involvement is essential for effectively tackling both user-specific needs and legal privacy requirements from the beginning and consistently throughout the UI development process.

5.3 Encompassing Different Types of Users

5.3.1 Inclusive Design

Usable privacy solutions should not only be human-centred but also inclusive by accommodating the needs of different types of users. An “inclusive design” addresses some of the challenges that we outlined in Sect. 4.2.1. Approaches to inclusive design or “universal design” have emerged since the mid-1980s to make products and environments usable for all kinds of users, as far as possible, without the need for adaptation or a specialised design for a particular target group. Rather than modelling an average or typical user, inclusive design approaches acknowledge user diversity as important, and hence knowledge and awareness of different users’ needs, preferences, and abilities are central to them [4]. Users with special needs are not the only ones who benefit from accessible and inclusive systems. When, for instance, the content of a website’s standard privacy policy is also made available in a “simple language” version, the combination of these two versions likely fulfils the GDPR requirements of Art. 12 that information should be “intelligible” and “easy to read” to a higher degree than the site’s standard policy version alone does. Directly involving a broad range of users with different needs for eliciting requirements or for evaluations in user studies is usually not possible for practical and cost-related reasons. However, existing and well-acknowledged requirements for inclusive design and accessibility, which are provided by EU laws and standards, should drive the development and evaluation of usable privacy solutions. Important accessibility requirements for products and services are particularly provided by the Web Accessibility Directive 2016/2102 and the complementing European Accessibility Act (EU Directive 2019/882 on the accessibility requirements for products and services). For instance, the European Accessibility Act requires, in its Annex 1, that the information on the use of a product, which also comprises, e.g. privacy policies or warnings, should be available via more than one sensory channel; presented in an understandable way and in ways users can perceive; and presented with adequate fonts and formats, taking into account foreseeable conditions for the expected context of use. Additionally, the ETSI standard EN 301 549 “Accessibility Requirements for ICT Products and Services” [5], which supports the EU Directive 2016/2102, defines requirements that products and services based on information and communication technologies should meet to enable inclusiveness.

5.3.2 Culture-Dependent Privacy Management Strategies and Privacy Profiles

Even when following a universal design approach, users’ demographics, such as cultural background, gender, and age, should still be considered for offering a selection of suitable privacy profiles with configuration or preference settings that are likely to suit their needs. User segmentation studies for deriving privacy personas that represent users with specific characteristics can be conducted to subsequently derive selectable privacy profiles, which can be offered to users after they have started from a “universal” privacy-by-default profile. Previous intercultural studies revealed, for instance, that the privacy preferences of users from one specific culture (or region) may differ from those of users from another culture (or region), suggesting that culture-specific (or region-specific) privacy management strategies, privacy personas, and profiles can be defined [6, 11, 66]. The different profiles offered to users should reflect the different privacy preferences that users from different cultures or regions typically have. Offering or recommending a selection of choices, including options that likely match their characteristics, should help users to more easily adapt their privacy configurations and preferences to fit their needs. Design considerations for culture-dependent privacy sharing preferences and management strategies are, for instance, discussed by Li [66]. Culture-specific data-sharing preferences that are offered should, for instance, consider that users from collectivist cultures¹ communicate and share data more freely with close friends and users in their social groups, while users from individualistic countries are more ready to trust and communicate also with strangers outside their social groups. Moreover, since collectivist users are more concerned about the negative impact of their information sharing on others’ privacy and group privacy, collective and group-level privacy management strategies should be supported for them, including options for collaborating with others on privacy protection and negotiating privacy boundaries with each other. For users from individualist countries, on the other hand, individual-level privacy protections should be supported, e.g. by highlighting and enhancing corrective privacy management together with increased (ex-post) transparency about potential risks and information controls [66].

¹ According to Hofstede [67], the individualism (versus collectivism) cultural dimension addresses the degree of interdependence that a society maintains between its members. In individualistic societies, people are supposed to care about themselves and their immediate family. In contrast, in collectivist societies, people belong to “groups” that take care of them in return for their loyalty.
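To make the idea of selectable, culture-informed privacy profiles more concrete, the following Python sketch shows one possible way of mapping user segments to default profiles that users could switch to after starting from a universal default. The persona names, the profile settings, and their concrete values are illustrative assumptions for this sketch and are not prescribed by the cited studies.

```python
from dataclasses import dataclass

@dataclass
class PrivacyProfile:
    """A selectable bundle of default privacy settings offered to a user segment."""
    name: str
    share_beyond_own_groups: bool         # default audience includes people outside the user's social groups
    group_consent_required: bool          # co-owned content requires negotiation with the group
    emphasise_ex_post_transparency: bool  # highlight activity logs and corrective controls

# Illustrative profiles reflecting tendencies reported in intercultural studies
# (collectivist users: group-level management; individualist users: corrective,
# individual-level controls). The concrete values are assumptions, not study results.
PROFILES = {
    "universal_default": PrivacyProfile("Universal default", False, False, True),
    "group_oriented": PrivacyProfile("Group-oriented", False, True, False),
    "individual_oriented": PrivacyProfile("Individual-oriented", True, False, True),
}

def recommend_profiles(persona: str) -> list[PrivacyProfile]:
    """Return an ordered list of profiles to offer; users always start from the
    universal default and may then switch to one of the recommended profiles."""
    ranking = {
        "collectivist": ["group_oriented", "universal_default", "individual_oriented"],
        "individualist": ["individual_oriented", "universal_default", "group_oriented"],
    }.get(persona, ["universal_default"])
    return [PROFILES[key] for key in ranking]

if __name__ == "__main__":
    for profile in recommend_profiles("collectivist"):
        print(profile.name)
```

In a real system, the persona would of course not be hard-coded but derived from user segmentation studies, and users would always remain free to inspect and override every individual setting.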

5.4 Configuring PETs and Addressing Conflicting Goals

As discussed in Chap. 4, when configuring PETs, complex decisions on suitable parameters and trade-offs between conflicting goals may need to be made, which may require multidisciplinary expertise. Therefore, for designing tools or guidelines for the usable configuration of PETs, stakeholders with different backgrounds and expertise, e.g. in technology, law, or management, or with domain expertise, need to be involved when taking a human-centred design approach. This will be further motivated by the examples given below.

For instance, for the configuration of differentially private analytics, trade-off decisions need to be made between data minimisation and utility, which require domain expertise regarding acceptable losses of accuracy as well as legal expertise for judging the legal implications of the remaining privacy risks for a specific domain. Sarathy et al. [41] investigated perceptions, challenges, and opportunities surrounding the use of Differential Privacy (DP) in an interview-based study with practitioners who were non-experts in DP and were using a DP data analysis prototype to release privacy-preserving statistics. For choosing and interpreting privacy-loss parameters, as well as for setting metadata parameters, all interviewed practitioners needed consulting and guidance from DP experts. Moreover, participants suggested that trained privacy officers or Institutional Review Boards could be responsible for making or reviewing decisions about privacy-loss parameters and budget allocations, to avoid problems arising from the unknown implications of privacy parameters for individual and institutional liability. However, the implications of DP concerning privacy laws are still unclear; despite researchers’ explorations of the relationship between DP and privacy regulations in the U.S. and Europe [42–44], it will take time to understand how the law might affect privacy-loss parameters and personal information leakage.

An approach for the usable configuration of PETs through the automatic activation of suitable default configuration settings is provided by Framner et al. [16]. They discuss the process of deriving usable configuration guidelines and interfaces for the Archistar system, outlined in Chap. 2, which utilises secret sharing for redundantly storing data over multiple independent storage clouds in a secure and privacy-friendly manner. The guidelines were derived through interviews and two iterations of mockup-based user studies with system admins and stakeholders with backgrounds in technology, business administration, and/or data protection. For selecting the optimal secret sharing parameters (K out of N), the type and location of cloud storage servers, and other settings for the secure cloud storage of data shares, complex trade-off decisions need to be made between different protection goals, costs, and legal privacy requirements, depending on the data storage project. Hence, as we further detail below, the configuration and trade-off decisions require different types of backgrounds and expertise from multiple disciplines that go beyond the expertise that system admins usually have. To comply with the GDPR, legal expertise is required to determine under what conditions secret shares, which still fall under the definition of personal data,² could be stored on non-European servers. Based on context-specific attacker models, technical expertise contributes to requirements regarding the choice of the secret sharing parameters N and K.

² Despite the fact that secret sharing of personal data is information-theoretically secure when the Shamir scheme [15] is used, it remains pseudonymous data, and therefore personal data under the GDPR, since it can be reconstructed with the help of other data shares.


In addition, business and data protection expertise is required to understand which cloud storage servers could be selected, taking into account cost constraints and knowledge of the existing procurement and data processing agreements an organisation has with cloud service providers. The user study showed that even for system admins and security experts, it was a challenge to make a reasonable selection of configuration parameters for specific storage projects or to specify the security and data protection requirements accurately. To simplify the configuration for users, including system admins, suitable default configurations were defined depending on the type of data to be stored in the cloud. The specified data type can then be used to automatically determine the security and data protection requirements for the data storage project, from which suitable standard configurations, including suitable trade-offs between protection goals and other requirements, can be derived. Hence, instead of requiring users to make all fine-grained configuration settings and choices themselves, the idea is that users simply specify the data type to be stored, which in turn automatically activates suitable configuration settings. The users can then either keep these settings or review and further adapt them. Figure 5.1 shows the default configurations that are applied according to the type of data that users choose to securely store. For instance, for data with high privacy and confidentiality requirements, data encryption is used by default in addition to secret sharing as an extra level of protection, and a higher threshold K is used (so that more parties need to maliciously collude to break the scheme). In addition, the Cloud Storage Providers (CSPs) will be located in the European Economic Area (EEA), and (with 2 private and 3 external cloud servers and a 3 out of 5 scheme) at least one share that is needed to reconstruct the data is placed in the private cloud of the user’s organisation.

Fig. 5.1 Archistar configuration defaults (based on [16])


For time-critical data that need to be quickly recovered in case of an incident, the number of servers (N) and the distances to the chosen servers will be kept low, and servers with high up-time guarantees will be selected. Finally, for organisationally critical data, which are essential for the survival of an organisation in the case of an incident, a high redundancy (N-K) is picked and servers with high ratings in terms of contingency plans and disaster recovery guarantees are chosen. For cases in which users choose more than one of the checkboxes in Fig. 5.1, because the data to be stored fulfil several characteristics, suitable trade-off default configurations are defined as well.
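As a minimal sketch of how such automatically activated defaults could be implemented, the following Python code maps data-type checkboxes to configuration settings and combines several selections by taking, for each setting, the most protective or most demanding value. The concrete parameter values, field names, and the combination rule are illustrative assumptions inspired by Fig. 5.1, not the exact defaults specified by Framner et al. [16].

```python
from dataclasses import dataclass

@dataclass
class StorageConfig:
    n_shares: int                   # total number of shares (N)
    threshold_k: int                # shares needed to reconstruct the data (K)
    extra_encryption: bool          # encrypt data before secret sharing
    csps_in_eea_only: bool          # restrict cloud storage providers to the EEA
    min_private_shares: int         # shares kept in the organisation's private cloud
    prefer_high_uptime: bool        # favour servers with high up-time guarantees
    prefer_disaster_recovery: bool  # favour servers with strong contingency plans

# Illustrative defaults per data-type checkbox (values are assumptions for this sketch).
DEFAULTS = {
    "high_confidentiality":      StorageConfig(5, 3, True,  True,  2, False, False),
    "time_critical":             StorageConfig(4, 2, False, False, 1, True,  False),
    "organisationally_critical": StorageConfig(7, 3, False, False, 1, False, True),
}

def derive_config(selected_types: set[str]) -> StorageConfig:
    """Derive a default configuration from the data-type checkboxes a user ticked."""
    if not selected_types:
        raise ValueError("at least one data type must be selected")
    configs = [DEFAULTS[t] for t in selected_types]
    return StorageConfig(
        n_shares=max(c.n_shares for c in configs),
        threshold_k=max(c.threshold_k for c in configs),
        extra_encryption=any(c.extra_encryption for c in configs),
        csps_in_eea_only=any(c.csps_in_eea_only for c in configs),
        min_private_shares=max(c.min_private_shares for c in configs),
        prefer_high_uptime=any(c.prefer_high_uptime for c in configs),
        prefer_disaster_recovery=any(c.prefer_disaster_recovery for c in configs),
    )

if __name__ == "__main__":
    # A user ticks both "high confidentiality" and "time critical":
    print(derive_config({"high_confidentiality", "time_critical"}))
```

The derived configuration would then be presented to the user, who can keep it or review and further adapt it, mirroring the review-and-adapt workflow described above.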

5.5 Privacy as a Secondary Goal—Attracting Users’ Attention

As we discuss in Chap. 4, privacy is usually only a secondary goal for users. This means that privacy policies, privacy notifications, and privacy warnings may be perceived as interfering with users’ primary tasks, and are therefore ignored, disregarded, or simply “clicked away”. If such notices are frequently repeated, the effect of dismissing this information may be intensified due to habituation.

5.5.1 Content, Form, Timing, and Channel of Privacy Notices

Research has been conducted on the content, presentation, timing, and channel of privacy notices to attract users’ attention, which in turn may help them become well informed and make better decisions. In other words, one popular approach to the problem of users’ lack of attention and disregard is to improve the User Interfaces (UIs) of policy notices and consent forms, or to adjust when and how they are presented so that users can pay more attention. In Sect. 5.6, we particularly focus on solutions that improve the content and structure of policy notices and consent forms. Further, in Sect. 5.7 we focus on approaches that are related to the timing and context of presenting notices and that allow a partial automation of privacy management. An example solution for this approach is just-in-time privacy notices (and notices for dynamic consent as their special cases—see Sect. 5.7), which display and emphasise essential information at a time or in a context in which users perceive the information as relevant. Moreover, specific channels for communicating privacy and policy information may be utilised for enhancing users’ attention in a specific context. For instance, if the user’s primary task requires or takes their full attention, essential privacy information could also be conveyed via a secondary channel. For example, any disclosure or use of a car driver’s location information that may be unexpected to them while driving could be communicated via audio as a secondary channel [1].


Another approach, which uses a background channel to provide transparency and raise awareness of personal data flows via unobtrusive but pervasive, glanceable visualisations added to users’ screensavers or lock screens, is presented and discussed in [10] and displayed in Fig. 5.8. As the authors discuss, a background channel like a screensaver may help improve the salience of privacy visualisations for users who would otherwise not bother to actively use a transparency tool to look up such information.
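A minimal sketch of such context-dependent channel selection is given below; it assumes a hypothetical notification component that decides between an interrupting just-in-time dialog, an audio announcement, and an ambient lock-screen visualisation. The event attributes, context flags, and channel names are illustrative assumptions rather than an API proposed in the cited work.

```python
def choose_notice_channel(event: dict, context: dict) -> str:
    """Pick a presentation channel for a privacy notice based on the user's context.

    event:   e.g. {"data_type": "location", "unexpected": True, "high_risk": False}
    context: e.g. {"driving": True, "screen_in_use": False}
    """
    if context.get("driving"):
        # Visual attention is taken by the primary task, so use audio as a secondary channel.
        return "audio_announcement"
    if event.get("unexpected") or event.get("high_risk"):
        # Surface essential information at the moment the data flow becomes relevant.
        return "just_in_time_dialog"
    # Otherwise keep the information glanceable but unobtrusive.
    return "ambient_lock_screen_visualisation"

# Example: an unexpected disclosure of location data while the user is driving.
print(choose_notice_channel({"data_type": "location", "unexpected": True},
                            {"driving": True}))
```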

5.5.2 Engaging Users with Privacy Notices

The challenge that privacy is only a secondary goal for users, with the consequence that they often do not pay attention to privacy notices in consent forms, was also previously studied by our research group from another perspective: improving the ways users interact with the content of notices. The idea is that users can be actively engaged with policy content as an alternative route to raising their attention. Pursuant to Art. 4 (11) GDPR, valid consent requires a statement or a clear affirmative action to confirm a user’s decision. Pettersson et al. earlier suggested the novel concept of Drag-and-Drop Agreements (DADA) [7], which uses Drag and Drop (DAD) as an affirmative action for privacy choices to address the problem of habituation to which Click-Through Agreements are vulnerable. With the proposed DADA approach, users have to pick up and drag a set of data icons on a graphical user interface and drop them on the right receiver icons, standing for the data controllers or processors to which the user intends to disclose the data. Pettersson et al. assumed that such DAD operations require more attention from the user to the policy content and the choices to be made (by correctly dragging and dropping data items to the data recipients to which they should be disclosed) than is needed for simply checking boxes or clicking on “I agree” in consent forms. Hence, they are expected to lead to more conscious user decisions. Later, Karegar et al. investigated how different types of interactions that engage users with consent forms differ in terms of their effectiveness, efficiency, and user satisfaction, and especially which interactions are best suited to raise user attention to essential policy information in consent forms [8, 9]. Moreover, they analysed if and how habituation, through repeated exposure to consent forms with different interaction types, affects user attention, efficiency, and satisfaction. To this end, Karegar et al. [9] conducted a controlled experiment with 80 participants in four different groups, where people either were engaged actively with the policy content via DAD, swipe, or checkboxes or were not actively engaged with the content (as the control condition), in a first-exposure phase and a habituation phase. They measured user attention to consent forms along multiple dimensions, including direct, objective measurements and indirect, self-reported measures. Figure 5.2 shows an example user interface, where users need to confirm what data items they want to disclose for what purposes via DAD. The results of the studies conducted in [9] confirmed that the different types of interactions may affect user attention to certain parts of the policy information.


Fig. 5.2 Consent user interface using drag-and-drop (DAD) for selecting policy content [9]

In particular, the DAD action results in significantly more user attention to the (dragged) data items compared to the other tested forms of interaction with consent forms, and the drag operation on data items receives more attention than the drop operation. However, with repeated exposure to consent forms, the difference between DAD, swipe, and checkboxes disappears. Karegar et al., therefore, conclude that user engagement with policy content needs to be designed with care so that attention to substantial policy information is increased and not negatively affected. Consequently, the following design guidelines for engaging users with policy content in consent forms are given [9]:

• Use the same type of user engagement with all substantial policy information, as the different types of interactions may create some biases in user attention to different parts of the policy.

• For choosing the right interaction type for engaging users with policy notices, the context of use needs to be considered. If you are designing for frequently appearing notices such as cookie consent notices, then checkboxes suffice for both fulfilling the legal requirements and catching user attention (since the effect of DAD disappears with habituation). However, if the consent forms are shown seldom (e.g. only at setup) or once in a while, using DAD actions is recommended.


• Since users pay more attention to the drag than to the drop operations, if DAD is used to engage users with the content, it is recommended to design the DAD interaction in a way that all substantial policy information gets dragged rather than being the target of the drop action. Users could, for example, drag the combination of both the data item and the data processing purpose and drop them onto a single dedicated drop area, e.g. called “My Policy”, as sketched below. This also means that if users are actively engaged with policy content, no essential policy information should be included in parts of the user interface with which users are not interacting.
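The sketch below illustrates this last guideline with a small Python model of a DAD consent form in which each combined (data item, purpose) chip must be explicitly dropped onto a single “My Policy” area before it counts as consented to. The class and method names are illustrative assumptions; a real implementation would attach this logic to the user interface’s actual drag-and-drop events.

```python
class DragAndDropConsentForm:
    """Only (data item, purpose) pairs explicitly dragged onto 'My Policy' count as consented."""

    def __init__(self, offered_pairs):
        # offered_pairs: iterable of (data_item, purpose) tuples shown as draggable chips
        self.offered = set(offered_pairs)
        self.my_policy = set()  # the single dedicated drop area

    def drop_on_my_policy(self, data_item, purpose):
        """Called by the UI when the user drops a chip onto the 'My Policy' area."""
        pair = (data_item, purpose)
        if pair not in self.offered:
            raise ValueError(f"{pair} is not part of this consent request")
        self.my_policy.add(pair)

    def consented_pairs(self):
        """Pairs confirmed by an affirmative action; nothing is pre-selected by default."""
        return frozenset(self.my_policy)

    def can_submit(self):
        # Submission requires at least one explicit drag-and-drop action.
        return bool(self.my_policy)


form = DragAndDropConsentForm([("email address", "newsletter"),
                               ("location", "personalised ads")])
form.drop_on_my_policy("email address", "newsletter")
assert form.can_submit()
assert ("location", "personalised ads") not in form.consented_pairs()
```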

5.6 Designing Usable Privacy Notices

This section discusses approaches for designing usable privacy notices (and choices) through improvements in the content, structure, and presentation of notices, which can help to effectively inform users and thus support them when making privacy-related choices, addressing challenges that we earlier outlined in Sect. 4.4.2. A detailed overview of approaches for designing effective privacy notices, based on a literature review, can also be found in [1]. Moreover, as we mentioned in Sect. 5.5.1, policy notices can be more useful for users if provided dynamically in a context where this information is relevant. This objective can be achieved by so-called just-in-time privacy notices and agreements and dynamic consent requests, which are dynamically triggered in a context where privacy decisions matter for users. We discuss this in Sect. 5.7.

5.6.1 Multi-layered Privacy Notices

While it is important to provide ex-ante transparency about many data processing practices for enabling well-informed decision-making, showing all information in one single and long privacy policy notice does not constitute a usable solution. The GDPR provides in its Art. 13 a long list of policy information to be made transparent to data subjects, but at the same time requires in its Art. 12 that privacy policy information should be provided in “a concise, transparent, intelligible and easily accessible form”. One commonly used approach for making complex privacy policies more accessible and easier to read is to structure and present the policy content in multiple layers. This approach of multi-layered structured policy notices, where “each layer should offer individuals the information needed to understand their position and make decisions”, was initially suggested in 2004 by the Art. 29 Working Party in its opinion on “More Harmonised Information Provisions” [19] and was later reiterated by the European Data Protection Board (EDPB) [18].


providing precise and complete as well as understandable policy information [18]. Layered privacy notices as part of consent requests may also be particularly suitable for small screens. Layered privacy notices allow users to go directly to relevant sections, avoiding the need to search through lengthy policy text [20]. Multi-layered policy notices are also recommended as a good privacy by design practice by other regulators and organisations [21, 22] and have been deployed by several large companies [23].

To follow this recommended approach, complex policies should be structured into different layers which together provide all needed policy information, including the policy details required by Art. 13 of the GDPR. The top layer should only provide a short and simple privacy notice with the most essential policy information. Nevertheless, providing information in a compact form should not impair the desired transparency, especially in the first layer, which has the greatest chance of being seen by users. For the top layer, the information required by Art. 13 (1) GDPR should specifically be considered, which includes information about the identity of the controller and the contact of the organisation’s DPO (if applicable), data processing purposes, the legal basis for data processing, data recipients, and intended transfers of data to third countries outside Europe. Further complementing policy information, or simply the full privacy policy, should be provided in a second policy layer (or in further lower policy layers), to which the top (or next higher) layer is linked. For instance, a second layer should typically include the information required by Art. 13 (2) GDPR, such as information about data retention periods, data subject rights and how they can be exercised, or any further information needed by data subjects to understand their position and make informed decisions that is not presented on the top layer.

It is important to adjust the granularity of information provided at each layer according to the context, while also taking into account the screen size and the types of user interfaces available on the device. Related to the latter aspect, Schaub et al. also discuss layered policy notices utilising various device modalities for informing users. For instance, Xbox Kinect uses two LED lights to communicate the “top-layer” policy information that motion detection, video, or audio are activated and that related information is potentially sent to a server, while users can still access the full privacy policy via the screen connected to the Xbox or the Xbox website [1]. Figure 5.3 provides an example of a short top-layer policy notice as part of a multi-layered policy notice, including complementary illustrative policy icons and a link to a “Full Privacy Policy” in a second policy layer.

Policy information should also be relevant for the user and actionable. Therefore, in addition to information that needs to be displayed to fulfil regulatory requirements, policy notices should also inform users, preferably at the top layer, about unexpected data practices and should communicate risks, for instance illustrated with examples, to help users assess the privacy implications and offer choices on how to react [1].

The approach of multi-layered privacy policy presentations has in recent years also been suggested for IoT privacy labels. A multi-layered “privacy facts label” that can be affixed to packages of IoT devices was first suggested by Railean et al. [62, 63] to provide potential


Fig. 5.3 Examples of a short privacy notice to appear at the top layer of a multi-layered policy notice

customers of such devices with a quick overview of how an IoT device collects and handles data, for promoting better-informed purchase decisions. This privacy facts Label for IoT Transparency Enhancement (LITE), displayed in Fig. 5.4a, presents a short privacy notice together with a link via a QR code to a full privacy notice. It was designed to comply with Art. 29 WP and EDPB recommendations and GDPR requirements. A very similar design for a two-layer IoT Security & Privacy Label was later presented by Emami-Naeini et al. [61] and is promoted by the IoT Security & Privacy Label group at Carnegie Mellon University. As Fig. 5.4b shows, the top layer of their label contains, in a tabular presentation, the policy information that is most important to consumers, complemented with a QR code leading to further security and privacy policy information in a secondary layer.


Fig. 5.4 Designs for multi-layered privacy notices for labels that can be affixed to IoT device packages
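As an illustration of how the multi-layered approach described above could be represented in a structured form, the sketch below (Python, with hypothetical field names; not a specification from the GDPR, the cited guidelines, or the labels in Fig. 5.4) restricts the top layer roughly to the Art. 13 (1) items mentioned earlier and links to the full policy as a second layer:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TopLayerNotice:
    """Short top-layer notice restricted to the most essential policy
    information (roughly the Art. 13 (1) GDPR items discussed above)."""
    controller_identity: str
    dpo_contact: Optional[str]        # only if a DPO has been appointed
    purposes: list[str]
    legal_basis: str
    recipients: list[str]
    third_country_transfers: list[str]
    full_policy_url: str              # link or QR code to the second layer

    def render_summary(self) -> str:
        lines = [f"Controller: {self.controller_identity}"]
        if self.dpo_contact:
            lines.append(f"DPO contact: {self.dpo_contact}")
        lines += [
            f"Purposes: {', '.join(self.purposes)}",
            f"Legal basis: {self.legal_basis}",
            f"Recipients: {', '.join(self.recipients) or 'none'}",
            f"Transfers outside the EU: {', '.join(self.third_country_transfers) or 'none'}",
            f"Full privacy policy (second layer): {self.full_policy_url}",
        ]
        return "\n".join(lines)
```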

5.6.2 Providing Usable Choices

Providing usable choices is an important element of usable privacy notices. As we discussed in Sect. 4.4.2, a lack of (usable) choices is among the problems contributing to the ineffectiveness of the notice and choice paradigm and to the lack of user attention. Research on usable choices, however, is scarce compared to studies on usable privacy notices. Users encounter conceptual and technical problems when attempting to intervene with the processing of their personal data. Researchers have provided a number of coarse-grained recommendations, such as improving transparency by providing accessible and understandable information [46, 49], to mitigate those problems. Habib et al. [46] provided design recommendations for how websites should present privacy choices for email marketing, advertising, and deletion. They recommend providing unified settings in a standard location, providing additional paths and in-place controls (e.g. by providing links to privacy choices in account settings, privacy policy forms, and website help pages), reducing the effort required to understand and use choices, and bolstering confidence that choices will be respected [46]. For TETs that operate on the basis of the privacy notice and choice paradigm, Murmann and Karegar proposed and evaluated a set of design guidelines [48]. Their intervention requirements specify how notices should support users’ responses or reactions to a privacy notification: notices should inform users whether they are supposed to act, order the options appropriately (based on their suitability for the related scenario of the notice), provide visual cues (to show whether


acting is necessary or advisable), and indicate the consequences of acting and subsequent steps [48]. Constructing a design space for privacy choices based on a comprehensive literature review, Feng et al. provide a detailed overview of approaches for designing effective privacy choices which can help practitioners [68]. In 2022, Habib et al. [47] also developed a framework to guide the design of future usability evaluations of privacy choice interactions based on the usability factors explored in prior work as well as the methods used.
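A minimal sketch (hypothetical names, not the notification format proposed in [48]) of how a notification object could encode these intervention requirements, i.e. whether action is needed, options ordered by suitability, and the consequence of each option:

```python
from dataclasses import dataclass

@dataclass
class ResponseOption:
    label: str        # e.g. "Delete shared location history"
    consequence: str  # plainly states what happens if this option is chosen
    suitability: int  # higher = more suitable for this notification scenario

@dataclass
class PrivacyNotification:
    message: str
    action_required: bool          # tells users whether they are supposed to act
    options: list[ResponseOption]

    def ordered_options(self) -> list[ResponseOption]:
        # Present the most suitable reaction first.
        return sorted(self.options, key=lambda o: o.suitability, reverse=True)

    def visual_cue(self) -> str:
        # A simple cue showing whether acting is necessary or merely advisable.
        return "action required" if self.action_required else "advisory only"
```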

5.6.3 Personalised Presentations

Using the Android app installation process as an example, Harbach et al. [52] revealed the value of leveraging personalised examples to enhance risk communication for security and privacy decisions. Additionally, Harbach et al. found that when decision dialogues became personal, users’ attention increased [52]. Providing personalised examples may not be possible in all situations (because the service provider does not yet have consent to collect data), but they can be particularly helpful for ex-post privacy notices or, for example, in ex-ante permission dialogues of identity providers where users need to make a privacy decision regarding the personal data to be shared with a service provider.

5.6.4 Visual Presentations

Textual policy presentations can precisely present complex legal aspects, while visual privacy policy presentations, as discussed below, can help users to easily recognise, understand, and compare policies or policy elements, especially if they are standardised or based on suitable metaphors, in line with Nielsen’s usability heuristics of “consistency and standards” and of providing a “match between system and the real world” [26].

Polymorphic design: Polymorphic designs [55, 56] and visual indicators [54] are another way to catch and maintain user attention. Using polymorphic notices, users are continually exposed to different visual materials, thus sustaining their attention and reducing habituation [55]. In any case, if polymorphic privacy notices are chosen, guidelines regarding usable options, such as prominence for actions and how to take action, i.e. making a choice, should be strictly followed to reduce the cognitive load placed on users by the progressively changing design.

Comic-based presentation: In addition, researchers explored the effects of comic-based policy interfaces and found that participants spent more time on comics than on textual and plain text notices [53]. However, more research is needed to determine whether spending more time on comic-based interfaces results in greater comprehension and understanding of the content. There is also greater potential for comic-based formats to counter habituation than for text-based formats, but further research is needed here as well.
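A minimal sketch of the polymorphic idea: cycling through visually distinct presentation variants of the same notice content on repeated exposures, so that the notice does not look identical every time (the variant names are placeholders, not taken from the cited studies):

```python
import itertools

class PolymorphicNotice:
    """Rotates through visually distinct presentation variants of the same
    notice content to counter habituation on repeated exposures."""

    def __init__(self, content: str, variants: list[str]):
        self.content = content
        self._variant_cycle = itertools.cycle(variants)

    def next_presentation(self) -> tuple[str, str]:
        # Same substantive content, progressively changing visual form.
        return next(self._variant_cycle), self.content

notice = PolymorphicNotice(
    content="This app shares your location with advertising partners.",
    variants=["banner_blue", "card_with_icon", "animated_border", "inverted_colours"],
)
for _ in range(3):
    variant, text = notice.next_presentation()
    print(variant, "->", text)
```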


Tabular presentations: Among the examples of user interface designs for short privacy notices that can be used at the top layer of multi-layered policies are design proposals based on a privacy nutrition label metaphor. This approach, initially suggested in [24], uses a tabular presentation to show the key components of a privacy policy. Usability studies showed that, in contrast to traditional text-based policies, it allows users to find information more quickly and accurately, and provides higher user satisfaction [24]. Standardised tabular notices, in particular, provide users with familiar interfaces and make it easier to compare and understand policies [25].

Policy icons: User interfaces that are based on real-world metaphors, for instance in the form of suitable icons, are easier to learn and understand [26]. Privacy policy icons have in the past been researched and developed by industry and academia for visualising policy elements in privacy policies to make the content of legal policy notices easier to access and comprehend for users (see Appendix B in [28] for an overview). The importance of developing policy icons based on semiotic studies, testing and standardising policy icons, and making them usable across different cultures has been emphasised as well [27, 28]. The GDPR also supports standardised icons as a means of providing transparent information. More precisely, Art. 12 (7) GDPR states that “the information to be provided to data subjects pursuant to Articles 13 and 14 may be provided in combination with standardised icons in order to give in an easily visible, intelligible and clearly legible manner a meaningful overview of the intended processing”. Moreover, it mentions that “where the icons are presented electronically they shall be machine-readable”. As part of their article on challenges to the successful implementation of icons for data protection, Rossi and Palmirani [50] provide complementary explanations of why lawmakers in Europe have overtly mentioned pictograms as one of the visual means to enhance the transparency of legal communication. As policy icons usually cannot precisely define policy content and may leave room for misinterpretation, it is important to note that they can help as a complement to textual policy representations but cannot replace policy text. In their study, Habib et al. discuss effective icons to indicate privacy choices. They found icon-link text pairings that conveyed the presence of privacy choices without creating misconceptions, with a stylised toggle icon effectively conveying the notion of choice when paired with “Privacy Options” [51] (see Fig. 5.5). The meaning of abstract indicators such as policy icons, however, needs to be learned and thus additionally requires education [1], which was also emphasised by Habib et al. [51] who, based on their study results, recommend that user-tested icons be paired with outreach to the general public and education. An overview of the main research challenges

Fig. 5.5 Combination of the stylised toggle icon and “Privacy Options” text [51]


Fig. 5.6 Examples of PrimeLife policy icons [30]

posed by the development and evaluation of a data protection icon set, enshrined in the GDPR, including the challenges of defining icon functions, icon evaluation, and the universal interpretation of icons, is provided in [50]. Examples of policy icons addressing the legal transparency requirements are the ones developed by the PrimeLife EU project. These policy icons were developed for illustrating core privacy policy statements in short privacy notices, namely statements about personal data types, data processing purposes, and processing steps [29, 30]. An intercultural comparison test of the policy icons conducted with Swedish and Chinese students provided insights into which icons seem to be well understood by students of both cultures and which icons were understood differently by persons with different cultural backgrounds [30]. Icons that were easily understood by both Swedish and Chinese students are shown in Fig. 5.6, displaying types of data (personal data, medical data, payment data), processing steps (storage, retention), and the purpose of “shipping”. An example of an icon that was well understood by Swedish test participants but not by Chinese students was an icon showing a posthorn for the data processing purpose of shipping. These usability experiments clearly demonstrated the challenges of finding privacy policy icons that are well understood by different users and across cultures.

5.6.5 Informing Users About Policy Mismatches

Another approach for making the core privacy policy content that matters to users easily accessible is to show users prominently (e.g. with suitable icons) whether the policy of a service provider matches the users’ privacy preferences or whether there are any deviations, e.g. as part of a consent form. Such an approach usually requires that users define their privacy preferences in a machine-readable format, which can then be automatically compared (“matched”) with the service provider’s policy, which is also specified in a machine-readable form. The Platform for Privacy Preferences (P3P) and the PrimeLife Policy Language (PPL) are examples of XML- or XACML-based policy languages that could be used to this end. However, these policy languages have not been successfully deployed in practice, since they require both that users specify their preferences and that a sufficient number of service sites offer machine-readable policies.
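The matching step itself can be simple once both sides are machine-readable. The sketch below is illustrative only (it does not use P3P or PPL syntax): it compares a user’s per-purpose preferences with a provider’s declared practices and reports the deviations that could then be highlighted in a consent form.

```python
def find_policy_mismatches(user_prefs: dict[str, set[str]],
                           site_policy: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per data type, the purposes declared by the site that the user
    has not allowed. An empty result means the policy matches the preferences."""
    mismatches: dict[str, set[str]] = {}
    for data_type, declared_purposes in site_policy.items():
        allowed = user_prefs.get(data_type, set())
        disallowed = declared_purposes - allowed
        if disallowed:
            mismatches[data_type] = disallowed
    return mismatches

# Example: the user allows email only for "service provision", but the site
# also declares "marketing" -- this deviation would be flagged to the user.
prefs = {"email": {"service provision"}}
policy = {"email": {"service provision", "marketing"}, "ip_address": {"security"}}
print(find_policy_mismatches(prefs, policy))
# {'email': {'marketing'}, 'ip_address': {'security'}}
```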

5.6.6 Avoiding Dark Patterns

Dark patterns, often used for cookie banners, are defined as intentionally and carefully crafted UIs to trick users into unintentionally revealing personal information (e.g. by accepting all cookies). For a detailed definition of dark patterns and an example illustrating their classification, please refer to Sect. 1.4.13. However, UI designers could also unintentionally include UI elements that are detrimental to users’ privacy if they lack sufficient legal domain knowledge. Published and well-documented dark patterns can therefore play a similar role as “anti-patterns”, which are used by software engineers to avoid design choices that seem to be appropriate but have more unforeseen bad consequences than good ones. UI designers of cookie banners and consent forms should therefore carefully check that they do not include UI elements that have been documented in published dark patterns (see [57–60] and visit https://www.deceptive.design/types for examples of published dark patterns).
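As a rough illustration of treating documented dark patterns as “anti-patterns”, the sketch below (hypothetical field names; not an exhaustive or authoritative compliance check) flags two well-documented issues in a cookie-banner configuration: pre-ticked optional purposes and a missing or de-emphasised reject option.

```python
from dataclasses import dataclass

@dataclass
class ConsentBannerConfig:
    optional_purposes_prechecked: bool   # e.g. analytics/marketing ticked by default
    has_reject_all_button: bool
    accept_button_prominence: int        # 1 (low) .. 5 (high)
    reject_button_prominence: int

def lint_consent_banner(cfg: ConsentBannerConfig) -> list[str]:
    """Return warnings for UI choices matching documented dark patterns."""
    warnings = []
    if cfg.optional_purposes_prechecked:
        warnings.append("Pre-ticked optional purposes: consent would not be a "
                        "clear affirmative action.")
    if not cfg.has_reject_all_button:
        warnings.append("No 'reject all' option on the first layer (obstruction).")
    elif cfg.reject_button_prominence < cfg.accept_button_prominence:
        warnings.append("Reject option is visually less prominent than accept "
                        "(interface interference).")
    return warnings

print(lint_consent_banner(ConsentBannerConfig(True, True, 5, 2)))
```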

5.7 (Semi-)automated Privacy Management Based on Defaults and Dynamic Support

Users are often overwhelmed by frequent privacy decisions and the actions required to protect their personal data. Simplifying privacy management could involve automating certain aspects, thereby easing the burden on users. This could be achieved through the implementation of appropriate privacy defaults and dynamic assistance that facilitates decision-making based on relevant situations or contexts. Privacy-friendly default settings are needed for complying with Art. 25 GDPR (Data Protection by Default) and can at the same time automatically enforce privacy and security by default without requiring user actions. Transparently (in the sense of unobtrusively) incorporating default encryption via the TLS/HTTPS protocol has, for instance, proven to be an effective means for automatically protecting the confidentiality of communication. (Nonetheless, according to Wu et al. [13], a notable consequence arises in situations involving TLS warnings, where users’ insufficient knowledge about TLS and a lack of context often lead to poor decision-making.) In the same way, anonymous communication, e.g. realised through onion routing, should be enabled by default, especially as a prerequisite for enabling data minimisation at the application level, e.g. for anonymous payment schemes or eVoting schemes. While pre-assigned default privacy permissions or preference settings should provide the highest privacy protection pursuant to Art. 25 GDPR, users can also receive dynamic support for adapting their settings in accordance with their privacy personalities and interests. This could, for example, be achieved by “personalised privacy assistants” (suggested by several works [32–34]) that use personalised and semi-automated recommendation approaches for changing users’ permission settings for Android or IoT systems. These assistants use machine learning to analyse the users’ privacy behaviour and privacy


personality to derive and suggest a “privacy profile” with privacy permission settings predicted to best suit users’ choices, allowing users to change to the recommended settings, thus supporting personalisation and Nielsen’s usability heuristic of “efficiency of use”. It is important, however, to note that these personalised privacy assistants should refrain from autonomously altering settings on behalf of users. Such actions could potentially breach the legal obligation requiring explicit user consent through affirmative actions, as mandated by Article 4 (11) of the GDPR [65]. For contextual integrity, recommendations for changes of settings should preferably appear dynamically in a context that is meaningful for users. To achieve this, Angulo et al. [31] suggested, for instance, a usable “on-the-fly” policy management approach for PPL: if users make a decision differing from their defined preferences, they are asked whether their preferences should be updated accordingly. In case of a mismatch between the user’s preferences and a site’s policy, the user is asked in a consent form whether or not they want to accept this mismatch (and thus disclose the data) for this transaction only, or for all future transactions. In the latter case, it is also suggested that the users’ preferences be updated accordingly and saved under a new name (see Fig. 5.7). Similarly, updates are also suggested if the user does not consent to data disclosures even if there is no mismatch between the users’ preferences and the site’s policy. Dynamic consent is another example of a privacy decision that can be dynamically requested in a context where this decision is meaningful for the user. Dynamic consent, originally coming from the biomedical domain [35], can be defined as regular consent that is requested in a specific context any time after an initial consent was collected, particularly for authorising incremental changes to the previously given consent, e.g. in case the data controller also wants to process the data for other purposes or to change its policy [36]. Moreover, if it is detected, in the current context, that the data to be collected/processed is

Fig. 5.7 User interface elements for “On-the-fly” privacy preference management [31]


classified as sensitive, i.e. as special categories of data (as specified in Art. 9 GDPR), explicit consent from the user should be obtained dynamically in that specific context. Dynamic consent can be seen as a special form of Just-in-time click-through agreements (JITCTAs), initially proposed by Patrick and Kenny [37]. JITCTAs provide a short contextualised (just-in-time) privacy notice and obtain consent with a concise dialogue specific to a certain data practice triggered when it becomes relevant for the user, e.g. if data practices are considered sensitive or unexpected [1, 38].
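A minimal sketch of the ideas described above (simplified, with hypothetical names; not the PPL or JITCTA implementation): sensitive data categories trigger a dynamic, explicit consent request in the current context, and a decision that deviates from the stored preference triggers an offer to update that preference for future transactions.

```python
from typing import Callable

SPECIAL_CATEGORIES = {"health", "religion", "ethnicity", "biometrics"}  # cf. Art. 9 GDPR

def update_saved_preference(data_type: str, allows: bool) -> None:
    # Placeholder for persisting the updated preference under a new name.
    print(f"Preference for '{data_type}' updated to allow={allows}")

def handle_disclosure_decision(data_type: str, user_allows: bool,
                               stored_preference_allows: bool,
                               ask: Callable[[str], bool]) -> bool:
    """Simplified flow combining dynamic consent for sensitive data with
    on-the-fly preference updates. `ask(question)` stands in for a UI dialogue
    returning the user's yes/no answer; the return value indicates whether
    the disclosure may proceed."""
    if data_type in SPECIAL_CATEGORIES:
        # Sensitive data: obtain explicit consent dynamically in this context.
        if not ask(f"'{data_type}' is a special category of data. "
                   f"Do you explicitly consent to its processing here?"):
            return False
    if user_allows != stored_preference_allows:
        # The concrete decision deviates from the stored preference:
        # offer to apply it to future transactions as well.
        if ask("Your choice differs from your saved privacy preferences. "
               "Apply it to future transactions too?"):
            update_saved_preference(data_type, user_allows)
    return user_allows
```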

5.8 Explaining PETs

In Sect. 3.6, we reviewed the literature on users’ mental models of PETs, including PETs based on encryption, anonymity, and Differential Privacy (DP), and the work on explaining different PETs to users, which utilised different approaches ranging from educational messages and tutorials to metaphors and explanations of privacy parameters and associated risks. Further, in Sect. 4.3, we discussed the challenges of explaining the privacy functionality that different PETs provide to end users. In particular, we also concluded that metaphors and other explanations for PETs must take into account not only the real-world analogies but also the digital-world analogies that users with some technical security background might make. In this section, we first briefly elaborate on the potential reasons for the ineffectiveness of existing explanations and then shed light on promising directions one can take for explaining PETs to users.

Commonly used metaphors (e.g. the metaphor of pixelation of pictures used by the US Census for explaining DP) rather provide a structural explanation of a PET with details of how DP works (i.e. that privacy is achieved by “adding noise”). These structural explanations may also contribute to the problem of making wrong digital-world analogies, e.g. by assuming that DP works like encryption because encryption also involves the alteration of information and makes it “fuzzy” to protect it. However, equating the two disregards the fundamental differences in their underlying mechanisms, intended purposes, and protection properties (and may, for example, also lead to the misconception that DP is reversible like encryption). Recent research has shown that functional explanations of encryption technologies, which focus on a system’s key privacy and security properties, are better understood by end users than structural explanations [12, 13]. A better understanding of functional explanations by both expert and lay users was also confirmed by the results of our work for the case of functional encryption (FE) [64] (although for the case of Tor, research showed that experts possess structural mental models [45]). Furthermore, our work showed that participants, regardless of their expertise, trusted and were more satisfied with the structural explanations for functional encryption. Also, the perceived comprehension of FE was higher with structural explanations, even though, as mentioned, the overall objective comprehension of FE was higher with functional explanations.
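To make the structural/functional distinction concrete for DP, the following sketch implements the textbook Laplace mechanism for a counting query; the comments separate the structural view (how noise is added) from the functional view (what is actually promised). This is a generic illustration, not the mechanism used by any particular deployment such as the US Census.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    Structural explanation (how it works): random noise drawn from a Laplace
    distribution with scale sensitivity/epsilon (sensitivity = 1 for a count)
    is added to the true result.

    Functional explanation (what it guarantees): the released number is almost
    equally likely whether or not any single person's record is in the data,
    so the output reveals very little about any individual. Unlike encryption,
    nothing is "decrypted" later -- the added noise is not reversible.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(dp_count(true_count=1000, epsilon=0.5))  # e.g. 1003.7 (varies per run)
```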


In particular, approaches for providing usable functional explanations of how PETs can adequately reduce privacy risks can be further investigated. Such functional explanations could, for instance, be communicated by showing the results of a conducted Privacy Impact Assessment (PIA) for a system using PETs, which can present how the PETs reduce privacy risks. Interviews that Alaqra et al. conducted with eHealth professionals showed that the core information from a PIA (for a PET using homomorphic encryption for data to be analysed on an untrusted cloud server), and also the mere presence of information about the fact that a PIA was conducted, was perceived as useful and increased trust in the PET [14]. Still, participants requested further complementing information about the PIA method and how it was conducted, the qualifications of the individuals who conducted the PIA, as well as the PET method (structural information on how homomorphic encryption works) [14]. This confirms our conclusion from [64] that while emphasis should be put on functional explanations of PETs for addressing a broad audience of users, complementing structural explanations should still be provided or made easily accessible (e.g. as a second layer of information via a link from the page providing the functional explanations) for establishing reliable trust in the PETs and for addressing the interests and information needs that especially expert users may have. In addition to explaining how PETs reduce privacy risks, users and other stakeholders should be guided regarding adequate risks per context and the practical implications of risk reductions. Managers may, for instance, be interested to know whether privacy risk reductions through data minimisation PETs will lead to anonymous data or to secure pseudonymisation. GDPR rules and restrictions no longer need to be applied if data is rendered anonymous. However, it can rarely be guaranteed that anonymisation techniques actually lead to anonymous data. But even if data cannot be assumed to be rendered anonymous, the secure pseudonymisation of data may already bring important practical benefits. For instance, securely pseudonymised data may be permitted to be processed on non-European cloud servers while still complying with the GDPR (according to the recommendations of the European Data Protection Board (EDPB) after the Schrems II court decision [17]).

5.9 Usable Transparency and Control

The difficulty users have in understanding personal data flows on the Internet is highlighted as a challenge that usable transparency-enhancing tools (TETs) need to address. To make personal data flows more transparent, different forms of visualisations have been suggested. These include a trace view that we used in the Data Track tool [39, 69], which visualises data disclosures to data processors and also provides links for exercising data subject rights online for data that have been tracked via the Data Track. Alternative data-centric or app-centric (actor-centric) views were suggested by Wilkinson et al. [10], which either show what data is shared or with whom (with what app) it


Fig. 5.8 Annotated designs for “Glanceable data exposure visualisations” [10] which vary in their level of granularity and presentation style (app-centric versus data-centric)

is shared. This information is displayed with different levels of granularity to users on a screensaver/lock screen; see the alternative views of their “Glanceable Data Exposure Visualisations” in Fig. 5.8. An interview-based comparison study of app-centric (actor-centric)


Fig. 5.9 Trace view user interfaces, which illustrate personal data flows [40]

versus data-centric information with different levels of detail revealed that the users’ preferences for the types of views depend on whether they value relational or content privacy boundaries. The study also showed that the granularity of the details impacts users’ understanding of data exposure and the glanceability of the UI. Hence, the authors conclude that providing less information with a higher granularity is often more useful. In the PAPAYA EU project, we developed a Data Disclosure Visualisation Tool that provides transparency to data subjects for an application,3 informing them about what actors can access and process what types of their personal data [40]. We present the visualisation tool briefly here, as it provides another example of a TET that offers a data subject the three different and alternative types of views mentioned above: the trace view, the data view, and the actor view, which corresponds to the app-centric view in [10]. The different views can be activated via three tabs at the top of the UI. The trace view screens (see Fig. 5.9) are divided into two parts: the “Data” part, located in the upper half of the screen, shows all types of personal data (with an icon and a name) of a data subject that will be processed for this application. The “Actor” part, in the lower half of the screen, shows all actors, including the data subject (also with an icon and name), who process or have access to the data subject’s data. Upon clicking on an actor icon, traces appear for the data that the actor can access or process. Additionally, the user can click on a data icon to see who processes or has access to the data.

3 More precisely, it was developed for a privacy-enhancing data analytics application using differential privacy for federated learning.


Fig. 5.10 The actor view provides descriptions of the actors and what types of data they process or can access [40]

The actor view (see Fig. 5.10) provides a description of all “actors” in the application who have access to and/or process the user’s personal data (i.e. the data subject and data processors and controllers). In comparison to the trace view, it provides an alternative way for the data subject to get an overview of the types of data that an actor processes or has access to. The data view (see Fig. 5.11) describes all types of personal data that can be accessed within an application. In comparison to the trace view, it also provides an alternative way for the data subject to get an overview of the actors who have access to these types of data.
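The three views can be thought of as different projections of the same underlying many-to-many relation between actors and data types. A minimal sketch (hypothetical names, not the PAPAYA implementation) of a structure answering the queries behind the actor and data views, with the trace view combining both directions:

```python
from collections import defaultdict

class DisclosureGraph:
    """Stores which actor can access/process which data type and answers the
    queries behind the trace, actor, and data views."""

    def __init__(self):
        self._by_actor = defaultdict(set)   # actor -> data types
        self._by_data = defaultdict(set)    # data type -> actors

    def add_access(self, actor: str, data_type: str) -> None:
        self._by_actor[actor].add(data_type)
        self._by_data[data_type].add(actor)

    def actor_view(self, actor: str) -> set[str]:
        # "What data does this actor process or have access to?"
        return self._by_actor[actor]

    def data_view(self, data_type: str) -> set[str]:
        # "Which actors process or have access to this data type?"
        return self._by_data[data_type]

g = DisclosureGraph()
g.add_access("Analytics provider", "usage statistics")
g.add_access("Analytics provider", "device identifier")
g.add_access("Hospital", "medical records")
print(g.actor_view("Analytics provider"))  # {'usage statistics', 'device identifier'}
print(g.data_view("medical records"))      # {'Hospital'}
```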

5.10 Guidance for Mapping (GDPR) Privacy Principles to HCI Solutions

Patrick and Kenny [37] pointed out that legal privacy requirements can be mapped to HCI requirements that specify the mental processes and behaviours of the end user that must be supported in order to adhere to the principle. As they suggest, these HCI requirements for effective privacy interface design can be grouped into four categories: (1) comprehension: to understand, or know; (2) consciousness: be aware, or informed; (3) control: to manipulate, or be empowered; (4) consent: to agree.
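Expressed as a simple data structure (illustrative only, not a scheme from [37]), the four HCI requirement categories and one example mapping in the spirit of Table 5.1 could look like this:

```python
from enum import Enum

class HCIRequirement(Enum):
    COMPREHENSION = "to understand, or know"
    CONSCIOUSNESS = "to be aware, or informed"
    CONTROL = "to manipulate, or be empowered"
    CONSENT = "to agree"

# One example row in the spirit of Table 5.1: a GDPR principle mapped to the
# HCI requirements it implies and to candidate UI solutions from this chapter.
principle_mapping = {
    "Fairness and transparency (Art. 5 (1))": {
        "hci_requirements": [HCIRequirement.CONSCIOUSNESS, HCIRequirement.COMPREHENSION],
        "example_solutions": ["no dark patterns", "multi-layered policies", "policy icons"],
    },
}
```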


Fig. 5.11 The data view provides descriptions of the data types and lists the actors that can access or process the data [40]

Following their approach, we show in Table 5.1 how a selection of fundamental privacy principles of the GDPR can be mapped to HCI requirements and to HCI solutions that can be used for fulfilling these requirements. For instance, the GDPR principle of fairness and transparency (Art. 5 (1)) can be mapped to the HCI requirement that users are aware of and informed about privacy policies and understand the consequences. Example HCI solutions for achieving this include strategies for avoiding “dark patterns” to prevent users from being unfairly “tricked” into accepting policies that do not match their preferences and that they are not fully aware of. Moreover, multi-layered policies with complementing policy icons can help make policies and their consequences better understandable to individuals, raise their awareness, and help them to be better informed. The table focuses on mapping those legal GDPR requirements (in the first table column) that have obvious HCI implications, including the requirements for fairness, transparency, informed consent, and data protection by default. In the third column, we list example HCI solutions discussed in this chapter, categorised according to the legal privacy (GDPR) principles that they address and implement.


Table 5.1 Mapping fundamental GDPR privacy principles to HCI requirements and HCI solutions for usable privacy

• GDPR requirement: Fairness and transparency (Art. 5 (1)). HCI requirement: users are aware and informed about privacy policies and understand the consequences. Example HCI solutions: no use of “dark patterns”, multi-layered policies, policy icons.

• GDPR requirement: Lawfulness and informed consent (Art. 5 (1), 6 (1) (a)). HCI requirement: users are informed about the privacy policy, understand the consequences, and consent. Example HCI solutions: multi-layered policies, policy icons, just-in-time notices or dynamic consent, drag-and-drop agreements.

• GDPR requirement: Purpose specification (Art. 5 (2)). HCI requirement: users are aware of the data processing purposes. Example HCI solutions: purposes specified in privacy notices complemented by illustrating policy icons, raising awareness via drag-and-drop interactions.

• GDPR requirement: Data minimisation, data protection by design and default (Art. 5 (1) (c), 25). HCI requirement: users are supported and empowered to disclose only the minimal amount of data. Example HCI solutions: privacy-friendly default settings and support via semi-automation, no use of “dark patterns” for setting up or overriding default settings.

• GDPR requirement: Ex-post transparency and intervenability (Art. 15–22). HCI requirement: users are informed about how their data have been processed, are informed about their rights, and understand and can exercise them to have control over their data. Example HCI solutions: data tracking tools/dashboards that visualise data flows and processing, inform about rights and provide links for exercising them online, pervasive but unobtrusive privacy notifications.

References

1. Schaub, F., Balebako, R., Durity, A. & Cranor, L. A design space for effective privacy notices. Eleventh Symposium On Usable Privacy And Security (SOUPS 2015). pp. 1–17 (2015) 2. Tsormpatzoudi, P., Berendt, B. & Coudert, F. Privacy by design: from research and policy to practice-the challenge of multi-disciplinarity. Privacy Technologies And Policy: Third Annual Privacy Forum, APF 2015, Luxembourg, Luxembourg, October 7–8, 2015, Revised Selected Papers 3. pp. 199–212 (2016) 3. Cavoukian, A. & Others Privacy by design: The 7 foundational principles. Information And Privacy Commissioner Of Ontario, Canada. 5 pp. 12 (2009) 4. Fritsch, L., Fuglerud, K. & Solheim, I. Towards inclusive identity management. Identity In The Information Society. 3 pp. 515–538 (2010) 5. ETSI standard EN 301 549. Accessibility requirements for ICT products and services. https://www.etsi.org/deliver/etsi_en/301500_301599/301549/03.02.01_60/en_301549v030201p.pdf (2021)


6. Islami, L., Fischer-Hübner, S. & Papadimitratos, P. Capturing drivers’ privacy preferences for intelligent transportation systems: An intercultural perspective. Computers & Security. 123 pp. 102913 (2022) 7. Pettersson, J., Fischer-Hübner, S., Danielsson, N., Nilsson, J., Bergmann, M., Clauss, S., Kriegelstein, T. & Krasemann, H. Making PRIME usable. Proceedings Of The 2005 Symposium On Usable Privacy And Security. pp. 53–64 (2005) 8. Karegar, F., Gerber, N., Volkamer, M. & Fischer-Hübner, S. Helping john to make informed decisions on using social login. Proceedings Of The 33rd Annual ACM Symposium On Applied Computing. pp. 1165–1174 (2018) 9. Karegar, F., Pettersson, J. & Fischer-Hübner, S. The dilemma of user engagement in privacy notices: Effects of interaction modes and habituation on user attention. ACM Transactions On Privacy And Security (TOPS). 23, 1–38 (2020) 10. Wilkinson, D., Bahirat, P., Namara, M., Lyu, J., Alsubhi, A., Qiu, J., Wisniewski, P. & Knijnenburg, B. Privacy at a glance: the user-centric design of glanceable data exposure visualizations. (Proceedings on Privacy Enhancing Technologies,2020) 11. Murmann, P., Beckerle, M., Fischer-Hübner, S. & Reinhardt, D. Reconciling the what, when and how of privacy notifications in fitness tracking scenarios. Pervasive And Mobile Computing. 77 pp. 101480 (2021) 12. Demjaha, A., Spring, J., Becker, I., Parkin, S. & Sasse, M. Metaphors considered harmful? An exploratory study of the effectiveness of functional metaphors for end-to-end encryption. Proc. USEC. 2018 (2018) 13. Wu, J. & Zappala, D. When is a Tree Really a Truck? Exploring Mental Models of Encryption. Fourteenth Symposium On Usable Privacy And Security (SOUPS 2018). pp. 395–409 (2018, 8), https://www.usenix.org/conference/soups2018/presentation/wu 14. Alaqra, A., Kane, B. & Fischer-Hübner, S. Machine Learning-Based Analysis of Encrypted Medical Data in the Cloud: Qualitative Study of Expert Stakeholders’ Perspectives. JMIR Human Factors. 8, e21810 (2021), https://humanfactors.jmir.org/2021/3/e21810/ 15. Shamir, A. How to share a secret. Communications Of The ACM. 22, 612–613 (1979) 16. Framner, E., Fischer-Hübner, S., Lorünser, T., Alaqra, A. & Pettersson, J. Making secret sharing based cloud storage usable. Information & Computer Security. (2019) 17. European Data Protection Board Recommendations 01/2020 on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data. (2020), https://edpb.europa.eu/system/files/2021-06/edpb_recommendations_202001vo. 2.0_supplementarymeasurestransferstools_en.pdf 18. European Data Protection Board Guidelines 05/2020 on consent under Regulation 2016/679. Version 1.1. Adopted on 4 May 2020. https://edpb.europa.eu/sites/default/files/files/file1/edpb_ guidelines_202005_consent_en.pdf (2020) 19. Art. 29 Data Protection Working Party. Opinion 10/2004 on More Harmonised Information Provisions. European Commission. https://ec.europa.eu/justice/article-29/documentation/opinionrecommendation/files/2004/wp100_en.pdf (2004) 20. Article 29 Working Party. Guidelines on transparency under Regulation 2016/679. European Commission. Adopted on 29 November 2017 As last Revised and Adopted on 11 April 2018. https://ec.europa.eu/newsroom/article29/items/622227/en (2018) 21. Office of the Australian Information Commissioner. Guide to developing an APP privacy policy. https://www.oaic.gov.au/privacy/guidance-and-advice/guide-to-developing-anapp-privacy-policy (2014) 22. OECD. Making Privacy Notices Simple. 
Digital Economy Papers 120. http://www.oecd-ilibrary. org/science-and-technology/making-privacy- notices-simple. (2006)


23. McDonald, A., Reeder, R., Kelley, P. & Cranor, L. A comparative study of online privacy policies and formats. Privacy Enhancing Technologies: 9th International Symposium, PETS 2009, Seattle, WA, USA, August 5–7, 2009. Proceedings 9. pp. 37–55 (2009) 24. Kelley, P., Bresee, J., Cranor, L. & Reeder, R. A “nutrition label” for privacy. Proceedings Of The 5th Symposium On Usable Privacy And Security. pp. 1–12 (2009) 25. Kelley, P., Cesca, L., Bresee, J. & Cranor, L. Standardizing privacy notices: an online study of the nutrition label approach. Proceedings Of The SIGCHI Conference On Human Factors In Computing Systems. pp. 1573–1582 (2010) 26. Nielsen, J. Usability inspection methods. Conference Companion On Human Factors In Computing Systems. pp. 413–414 (1994) 27. Fischer-Hübner, S., Angulo, J. & Pulls, T. How can cloud users be supported in deciding on, tracking and controlling how their data are used?. Privacy And Identity Management For Emerging Services And Technologies: 8th IFIP WG 9.2, 9.5, 9.6/11.7, 11.4, 11.6 International Summer School, Nijmegen, The Netherlands, June 17–21, 2013, Revised Selected Papers 8. pp. 77–92 (2014) 28. Tschofenig, H., Volkamer, M., Jentzsch, N., Fischer-Hübner, S., Schiffner, S. & Tirtea, R. On the security, privacy and usability of online seals: An overview. (ENISA,2013) 29. Holtz, L., Nocun, K. & Hansen, M. Towards displaying privacy information with icons. Privacy And Identity Management For Life: 6th IFIP WG 9.2, 9.6/11.7, 11.4, 11.6/PrimeLife International Summer School, Helsingborg, Sweden, August 2–6, 2010, Revised Selected Papers 6. pp. 338–348 (2011) 30. Fischer-Hübner, S. & Zwingelberg, H (Ed.). UI Prototypes: Policy administration and presentation-Version 2. PrimeLife Project Deliverable D. 4.3. 2 (2010). (2000) 31. Angulo, J., Fischer-Hübner, S., Wästlund, E. & Pulls, T. Towards usable privacy policy display and management. Information Management & Computer Security. 20, 4–17 (2012) 32. Smullen, D., Feng, Y., Aerin Zhang, S. & Sadeh, N. The Best of Both Worlds: Mitigating Tradeoffs Between Accuracy and User Burden in Capturing Mobile App Privacy Preferences. Proceedings On Privacy Enhancing Technologies. 2020, 195–215 (2020,1), https://petsymposium. org/popets/2020/popets-2020-0011.php 33. Bahirat, P., He, Y., Menon, A. & Knijnenburg, B. A Data-Driven Approach to Developing IoT Privacy-Setting Interfaces. 23rd International Conference On Intelligent User Interfaces. pp. 165–176 (2018, 3), https://dl.acm.org/doi/10.1145/3172944.3172982 34. Liu, B., Andersen, M., Schaub, F., Almuhimedi, H., Zhang, S., Sadeh, N., Acquisti, A. & Agarwal, Y. Follow My Recommendations: A Personalized Privacy Assistant for Mobile App Permissions. (Usenix Association, 2016) 35. Prictor, M., Lewis, M., Newson, A., Haas, M., Baba, S., Kim, H., Kokado, M., Minari, J., MolnárGábor, F., Yamamoto, B., Kaye, J. & Teare, H. Dynamic Consent: An Evaluation and Reporting Framework. Journal Of Empirical Research On Human Research Ethics. 15, 175–186 (2020, 7), http://journals.sagepub.com/doi/10.1177/1556264619887073 36. Schlehahn, E., Murmann, P., Karegar, F. & Fischer-Hübner, S. Opportunities and challenges of dynamic consent in commercial big data analytics. Privacy And Identity Management. Data For Better Living: AI And Privacy: 14th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2. 2 International Summer School, Windisch, Switzerland, August 19–23, 2019, Revised Selected Papers 14. pp. 29–44 (2020) 37. Patrick, A. & Kenny, S. 
From privacy legislation to interface design: Implementing information privacy in human-computer interactions. Privacy Enhancing Technologies: Third International Workshop, PET 2003, Dresden, Germany, March 26–28, 2003. Revised Papers 3. pp. 107–124 (2003)


38. Kobsa, A. & Teltzrow, M. Contextualized communication of privacy practices and personalization benefits: Impacts on users’ data sharing and purchase behavior. Privacy Enhancing Technologies: 4th International Workshop, PET 2004, Toronto, Canada, May 26–28, 2004. Revised Selected Papers 4. pp. 329–343 (2005) 39. Angulo, J., Fischer-Hübner, S., Pulls, T. & Wästlund, E. Usable transparency with the data track: a tool for visualizing data disclosures. Proceedings Of The 33rd Annual ACM Conference Extended Abstracts On Human Factors In Computing Systems. pp. 1803–1808 (2015) 40. Rozenberg, B., Bozdemir, B., Ermis, O., Önen, M., Canard, S., ORA, B., Perez, A., Ituarte, N., Pulls, T., Fischer-Hübner, S. & Others D5. 4-PAPAYA PLATFORM GUIDE. (2021) 41. Sarathy, J., Song, S., Haque, A., Schlatter, T. & Vadhan, S. Don’t Look at the Data! How Differential Privacy Reconfigures the Practices of Data Science. Proceedings Of The 2023 CHI Conference On Human Factors In Computing Systems. pp. 1–19 (2023) 42. Nissim, K., Bembenek, A., Wood, A., Bun, M., Gaboardi, M., Gasser, U., O’Brien, D., Steinke, T. & Vadhan, S. Bridging the gap between computer science and legal approaches to privacy. Harv. JL & Tech.. 31 pp. 687 (2017) 43. Altman, M., Cohen, A., Nissim, K. & Wood, A. What a hybrid legal-technical analysis teaches us about privacy regulation: The case of singling out. BUJ Sci. & Tech. L.. 27 pp. 1 (2021) 44. Prokhorenkov, D. Alternative methodology and framework for assessing differential privacy constraints and consequences from a gdpr perspective. 2022 IEEE 12th Annual Computing And Communication Workshop And Conference (CCWC). pp. 0359–0364 (2022) 45. Gallagher, K., Patil, S. & Memon, N. New me: Understanding expert and non-expert perceptions and usage of the Tor anonymity network. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 385–398 (2017) 46. Habib, H., Pearman, S., Wang, J., Zou, Y., Acquisti, A., Cranor, L., Sadeh, N. & Schaub, F. “It’s a Scavenger Hunt”: Usability of Websites’ Opt-Out and Data Deletion Choices. Proceedings Of The 2020 CHI Conference On Human Factors In Computing Systems. pp. 1–12 (2020) 47. Habib, H. & Cranor, L. Evaluating the usability of privacy choice mechanisms. Eighteenth Symposium On Usable Privacy And Security (SOUPS 2022). pp. 273–289 (2022) 48. Murmann, P. & Karegar, F. From design requirements to effective privacy notifications: Empowering users of online services to make informed decisions. International Journal Of HumanComputer Interaction. 37, 1823–1848 (2021) 49. Ramokapane, K., Rashid, A. & Such, J. “I feel stupid I can’t delete...”: A Study of Users’ Cloud Deletion Practices and Coping Strategies. Thirteenth Symposium On Usable Privacy And Security (SOUPS 2017). pp. 241–256 (2017, 7) 50. Rossi, A. & Palmirani, M. Can Visual Design Provide Legal Transparency? The Challenges for Successful Implementation of Icons for Data Protection. Design Issues. 36, 82–96 (2020, 6) 51. Habib, H., Zou, Y., Yao, Y., Acquisti, A., Cranor, L., Reidenberg, J., Sadeh, N. & Schaub, F. Toggles, Dollar Signs, and Triangles: How to (In)Effectively Convey Privacy Choices with Icons and Link Texts. Proceedings Of The 2021 CHI Conference On Human Factors In Computing Systems. (2021) 52. Harbach, M., Hettig, M., Weber, S. & Smith, M. Using personal examples to improve risk communication for security & privacy decisions. Proceedings Of The SIGCHI Conference on Human Factors in Computing Systems. pp. 2647–2656 (2014) 53. Tabassum, M., Alqhatani, A., Aldossari, M. 
& Richter Lipford, H. Increasing User Attention with a Comic-Based Policy. Proceedings Of The 2018 CHI Conference on Human Factors in Computing Systems. pp. 1–6 (2018) 54. Bravo-Lillo, C., Komanduri, S., Cranor, L., Reeder, R., Sleeper, M., Downs, J. & Schechter, S. Your attention please: Designing security-decision UIs to make genuine risks harder to ignore. Proceedings of the Ninth Symposium on Usable Privacy and Security. pp. 1–12 (2013)


55. Anderson, B., Jenkins, J., Vance, A., Kirwan, C. & Eargle, D. Your memory is working against you: How eye tracking and memory explain habituation to security warnings. Decision Support Systems. 92 pp. 3–13 (2016) 56. Anderson, B., Vance, A., Kirwan, C., Jenkins, J. & Eargle, D. From warning to wallpaper: Why the brain habituates to security warnings and what can be done about it. Journal Of Management Information Systems. 33, 713–743 (2016) 57. Luguri, J. & Strahilevitz, L. Shining a light on dark patterns. Journal Of Legal Analysis. 13, 43–109 (2021) 58. Gray, C., Kou, Y., Battles, B., Hoggatt, J. & Toombs, A. The dark (patterns) side of UX design. Proceedings Of The 2018 CHI Conference On Human Factors In Computing Systems. pp. 1–14 (2018) 59. Bösch, C., Erb, B., Kargl, F., Kopp, H. & Pfattheicher, S. Tales from the Dark Side: Privacy Dark Strategies and Privacy Dark Patterns.. Proc. Priv. Enhancing Technol.. 2016, 237–254 (2016) 60. Mathur, A., Acar, G., Friedman, M., Lucherini, E., Mayer, J., Chetty, M. & Narayanan, A. Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites. Proc. ACM Hum.-Comput. Interact.. 3 (2019,11) 61. Emami-Naeini, P., Dheenadhayalan, J., Agarwal, Y. & Cranor, L. “nutrition” label for internet of things devices. IEEE Security & Privacy. 20, 31–39 (2021) 62. Railean, A. & Reinhardt, D. Let there be lite: design and evaluation of a label for iot transparency enhancement. Proceedings Of The 20th International Conference On Human-Computer Interaction With Mobile Devices And Services Adjunct. pp. 103–110 (2018) 63. Railean, A. Improving IoT device transparency by means of privacy labels. (2022) 64. Alaqra, A., Karegar, F. & Fischer-Hübner, S. Structural and functional explanations for informing lay and expert users: the case of functional encryption. Proceedings On Privacy Enhancing Technologies. 4 pp. 359–380 (2023) 65. Morel, V. & Fischer-Hübner, S. Automating privacy decisions-where to draw the line?. ArXiv Preprint ArXiv:2305.08747. (2023) 66. Li, Y. Cross-cultural privacy differences. Modern Socio-technical Perspectives On Privacy. pp. 267–292 (2022) 67. Hofstede, G. & Others Organizations and cultures: Software of the mind. McGrawHill, New York. pp. 418–506 (1991) 68. Feng, Y., Yao, Y. & Sadeh, N. A design space for privacy choices: Towards meaningful privacy control in the internet of things. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. pp. 1–16 (2021) 69. Fischer-Hübner, S., Angulo, J., Karegar, F. & Pulls, T. Transparency, privacy and trust– Technology for tracking and controlling my data disclosures: Does this work?. Trust Management X: 10th IFIP WG 11.11 International Conference, IFIPTM 2016, Darmstadt, Germany, July 18– 22, 2016, Proceedings 10. pp. 3–14, Springer International Publishing (2016)

6 Lessons Learnt, Outlook, and Conclusions

6.1 Introduction

In this final chapter, we summarise some key takeaways from research regarding challenges and promising approaches for usable privacy. As an outlook, we also discuss the role that virtual assistants and AI-supported automation can play in enhancing usable privacy by providing policy information and recommendations matching the users’ interests and needs. Furthermore, we underscore the critical importance of using such AI support in a responsible, ethical, and privacy-preserving manner.

6.2 Key Takeaways

Some important lessons learnt from usable privacy research can be summarised with the following key takeaways:

• Legal privacy principles, especially those relating to consent, transparency, and data subject rights, have Human-Computer Interaction (HCI) implications and can only be met with usable privacy solutions.

• Privacy by design approaches should be combined with human-centred and inclusive design approaches.

• Privacy is only the users’ secondary goal and task, and hence usable privacy research needs to find approaches to raise users’ attention to the essential privacy information and to prevent habituation and privacy fatigue.

• Multi-layered privacy notices, which can be complemented with illustrative policy icons, have become a de facto standard for drawing users’ attention to the essential policy information that is displayed on the top layer. In addition, how users interact with policy content when providing consent can also play a role in drawing their attention.

• A privacy interface needs to be flexible and adaptable to changing privacy regulations, technology, and user needs.

• Besides privacy notices, privacy controls and choices should also be designed to be usable and accessible, with appropriate default settings, and should not require users to have technical expertise to use and set them.

• Finding suitable real-world analogies can help develop metaphors for explaining the properties of Privacy-Enhancing Technologies (PETs). It is also important to consider analogies between the PET in question and security and privacy technologies that users may already be familiar with, since users may perceive the PET to operate similarly to or to have similar characteristics to those technologies.

• The functional explanation of a PET, which explains its key privacy properties, is more intuitive for users than the structural explanation, which explains how it works. Nevertheless, complementary structural explanations can still contribute to establishing trust in PETs and may increase users’ satisfaction with the provided explanation.

• Personalisation and context are important factors in designing privacy interfaces, as users may have different privacy preferences depending on the situation or context. A privacy decision should therefore preferably be asked in a context that matters to the user and supported by Machine Learning (ML)-created recommendations that predict their preferences.

• There is no such thing as a one-size-fits-all usable privacy solution. For example, it can be challenging and tricky to provide the right amount of information at the right time, with the right framing and mode of delivery that is relevant, useful, and interesting to users with different demographic backgrounds without overloading them, or without making them unnecessarily suspicious or convinced. Adequate and sufficient user testing with representatives of target groups is needed to address this challenge.

• Multidisciplinary expertise and interdisciplinary collaboration are essential for creating effective and usable privacy solutions. They are especially needed for developing guidelines and tools for the usable configuration of PETs, which require making complex context-dependent trade-off decisions between requirements such as privacy, security, performance, and/or costs.

• In a world where users rely increasingly on different tools and underlying privacy mechanisms deployed to protect their data, developers’ privacy attitudes and practices are of paramount importance, and we need to focus on effective methods of educating and training developers on privacy issues and on how they can balance different goals, such as privacy and usability.

• The consent and notice paradigm is an important aspect of privacy protection, but it comes with its own challenges. Therefore, usable privacy research should also focus on developing other and complementing technical, legal, and social approaches to privacy protection.

6.3 Outlook

The current AI revolution affects the usable privacy domain as well, creating both opportunities and challenges. On one side, AI and automation can support users and increase usability. Especially in mobile and IoT environments involving devices with limited screen sizes or limited possibilities for user interaction, the usability of privacy management is a challenge that can be supported by ML and automation. One example of AI-supported usable privacy solutions is “personalized privacy assistants” [2–4] that make ML-based recommendations for choosing and changing users’ privacy settings. Such privacy assistants, which we also discussed in Sect. 5.7, analyse users’ privacy behaviour and privacy personality to derive and suggest a “privacy profile”, with predicted privacy settings to best suit users’ choices and to allow users to easily change to the recommended settings. ML techniques can indeed predict users’ privacy choices with more than 95% accuracy (see [5]) and may lead to semi-automated decisions that match users’ privacy interests better than manual decisions.

Further future opportunities are provided by chatbots and virtual assistants. The design and use of the conversational chatbot PriBots for delivering privacy notices and for setting privacy preferences, as well as related legal and HCI challenges, were already presented and discussed in [6]. In the future, virtual assistants and AI chatbots based on Large Language Models (LLMs) will likely be increasingly used for making privacy policy information and policy management more usable. In particular, privacy policies could be crafted by a chatbot in a way that suits the users’ preferences in terms of language, presentation style, terminology (e.g. short policies in “simple” English or presentations in English for legal or technical experts), or in terms of policy aspects of special interest that should be highlighted (e.g. information on retention policies or third-country data transfers). In this way, privacy policy information can be better adapted to the needs of users with different backgrounds and/or requirements. Additionally, virtual assistants could present further information complementing the privacy policies provided by websites, such as information about available trust ratings of service providers.

However, AI-supported enhancements of usable privacy come with privacy challenges. Especially if LLMs are used, there is no guarantee that the provided information is accurate and correct, as LLMs “do not distinguish between fact and fiction” [9]. Moreover, AI support for enhancing usable privacy may, for example, require or permit profiling of privacy preferences or personalities. Any user profiling to assist users should be implemented in a privacy-preserving manner to prevent sensitive data leakage. This can particularly be achieved if the machine learning algorithms of personalised privacy assistants run under the users’ control on their own devices, which may, however, be challenging performance-wise. Moreover, privacy-enhancing techniques such as differential privacy, federated learning, and multi-party computation should be employed if data analytics is conducted centrally. Another privacy issue that may arise from the use of AI technologies is biased AI algorithms, which result in discriminatory outcomes that impact privacy. For example, facial recognition

Further opportunities are offered by chatbots and virtual assistants. The design and use of the conversational chatbot PriBots for delivering privacy notices and setting privacy preferences, as well as the related legal and HCI challenges, were already presented and discussed in [6]. In the future, virtual assistants and AI chatbots based on Large Language Models (LLMs) will likely be used increasingly to make privacy policy information and policy management more usable. In particular, privacy policies could be crafted by a chatbot in a way that suits the user’s preferences in terms of language, presentation style, and terminology (e.g. short policies in “simple” English, or presentations for legal or technical experts), or in terms of policy aspects of special interest that should be highlighted (e.g. information on retention policies or third-country data transfers). In this way, privacy policy information can be better adapted to the needs of users with different backgrounds and/or requirements. Additionally, virtual assistants could present further information complementing the privacy policies provided by websites, such as available trust ratings of service providers.
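As a purely illustrative sketch of how such tailoring could work, the following code only assembles the prompt that an LLM-based assistant might be given. The PresentationProfile fields and the ask_llm() call are hypothetical placeholders introduced here; they are not part of PriBots [6] or of any particular product or API.

# Illustrative sketch only: assembling a prompt so that an LLM presents a privacy
# policy according to a user's preferences. The profile fields and the ask_llm()
# call are hypothetical placeholders; no specific model or API is prescribed here.
from dataclasses import dataclass, field

@dataclass
class PresentationProfile:
    audience: str = "layperson"             # e.g. "layperson", "lawyer", "developer"
    language_level: str = "simple English"  # requested style of the presentation
    highlight: list = field(default_factory=lambda: ["retention periods",
                                                     "third-country data transfers"])

def build_prompt(policy_text: str, profile: PresentationProfile) -> str:
    """Compose an instruction for an LLM to restate the policy for this user."""
    topics = ", ".join(profile.highlight)
    return (
        f"Summarise the following privacy policy for a {profile.audience} "
        f"in {profile.language_level}. Present these aspects first: {topics}. "
        f"Do not add anything that is not stated in the policy.\n\n{policy_text}"
    )

prompt = build_prompt("(full policy text here)", PresentationProfile())
# answer = ask_llm(prompt)  # hypothetical call to whatever model the assistant uses

The final instruction in the prompt already hints at the accuracy problem discussed next: the assistant must be constrained to, and checked against, the policy’s actual content.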

However, AI-supported enhancements of usable privacy come with privacy challenges of their own. Especially if LLMs are used, there is no guarantee that the provided information is accurate and correct, as LLMs “do not distinguish between fact and fiction” [9]. Moreover, AI support for enhancing usable privacy may, for example, require or permit profiling of users’ privacy preferences or personalities. Any such user profiling should be implemented in a privacy-preserving manner to prevent sensitive data leakage. This can in particular be achieved if the machine learning algorithms of personalised privacy assistants run under the users’ control on their own devices, which may, however, be challenging performance-wise. If data analytics is instead conducted centrally, privacy-enhancing techniques such as differential privacy, federated learning, and multi-party computation should be employed.

Another privacy issue that may arise from the use of AI technologies is biased AI algorithms, which can result in discriminatory outcomes that impact privacy. For example, facial recognition algorithms are less accurate for certain ethnic groups, potentially leading to incorrect identification and discrimination [8]. This exacerbates the existing privacy problems of marginalised or vulnerable populations, who already face more online privacy challenges and have fewer resources to protect their privacy. Finally, algorithmic transparency and explainability of AI should also be guaranteed, both to comply with GDPR transparency requirements and to make AI trustworthy in line with the Ethics Guidelines for Trustworthy AI promoted by the EU Commission [1]. While much work in computer science has been devoted to the technical aspects of explainable AI, especially the question of how explanations can be generated, efforts towards human-centric, and thus human-understandable, explanations of AI also play an important role, as discussed in [7]. Usable AI explanations are likewise important in the case of AI-powered data collection and analysis, as without them it will be difficult for individuals to make informed decisions about disclosing their data and allowing it to be analysed with AI technologies.

6.4 Final Conclusions

Usable privacy is still a young field, but it has already produced many interesting discoveries and insights, as well as valuable approaches for making privacy more usable. Nevertheless, many challenges remain to be overcome, and continuous technical developments create both new opportunities and new challenges. Recent decades have seen significant technological advances in PET research and development, but non-technical aspects, including human and socio-economic factors, are often the ones that pose the greatest challenge because of the diversity and complexity of human nature and society as a whole. This calls for continued interdisciplinary effort and cooperation in promoting human-centred privacy by design.

References

1. High-Level Expert Group on Artificial Intelligence (set up by the EU Commission). Ethics Guidelines for Trustworthy AI. (2019), https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
2. Liu, B., Andersen, M., Schaub, F., Almuhimedi, H., Zhang, S., Sadeh, N., Acquisti, A. & Agarwal, Y. Follow My Recommendations: A Personalized Privacy Assistant for Mobile App Permissions. (USENIX Association, 2016)
3. Bahirat, P., He, Y., Menon, A. & Knijnenburg, B. A Data-Driven Approach to Developing IoT Privacy-Setting Interfaces. 23rd International Conference on Intelligent User Interfaces. pp. 165–176 (2018), https://dl.acm.org/doi/10.1145/3172944.3172982
4. Smullen, D., Feng, Y., Aerin Zhang, S. & Sadeh, N. The Best of Both Worlds: Mitigating Trade-offs Between Accuracy and User Burden in Capturing Mobile App Privacy Preferences. Proceedings on Privacy Enhancing Technologies. 2020(1), 195–215 (2020), https://petsymposium.org/popets/2020/popets-2020-0011.php
5. Wijesekera, P., Baokar, A., Tsai, L., Reardon, J., Egelman, S., Wagner, D. & Beznosov, K. The feasibility of dynamically granted permissions: Aligning mobile privacy with user preferences. 2017 IEEE Symposium on Security and Privacy (SP). pp. 1077–1093 (2017)
6. Harkous, H., Fawaz, K., Shin, K. & Aberer, K. PriBots: Conversational privacy with chatbots. Workshop on the Future of Privacy Notices and Indicators, at the Twelfth Symposium on Usable Privacy and Security, SOUPS 2016. (2016)
7. Confalonieri, R., Coba, L., Wagner, B. & Besold, T. A historical perspective of explainable Artificial Intelligence. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery. 11, e1391 (2021)
8. Raposo, V. When facial recognition does not ‘recognise’: erroneous identifications and resulting liabilities. AI & Society. pp. 1–13 (2023)
9. Mittelstadt, B., Wachter, S. & Russell, C. To protect science, we must use LLMs as zero-shot translators. Nature Human Behaviour. pp. 1–3 (2023)