The Palgrave Handbook of Gendered Violence and Technology
Edited by Anastasia Powell · Asher Flynn · Lisa Sugiura
“This book is a landmark collection charting the rapid development of interest in, and the impact of, technology on the nature and extent of gendered violence. In putting victim-survivor voices at the front and centre, this collection will undoubtedly shape the future in this area of investigation. Scholarly, provocative, critical and committed, it will become the major reference point for anyone developing research in, or policy responses to, gender, technology and violence.”
—Sandra Walklate, Eleanor Rathbone Chair of Sociology, Liverpool, UK, conjoint Chair of Criminology, Monash, Australia
Editors Anastasia Powell Criminology and Justice Studies RMIT University Melbourne, VIC, Australia
Asher Flynn School of Social Sciences Monash University Clayton, VIC, Australia
Lisa Sugiura University of Portsmouth Southampton, Hampshire, UK
ISBN 978-3-030-83733-4    ISBN 978-3-030-83734-1 (eBook)
https://doi.org/10.1007/978-3-030-83734-1
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover credit: Westend61/Getty

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.
Acknowledgements
The editors wish to pay their respects to the Traditional Owners of the lands across Australia on which they live and work, and to recognise the ongoing impacts of colonisation on First Nations peoples around the world. They also acknowledge the lived experiences of victim-survivors affected by gendered violence. The editors, and all the contributing authors to this book, are united in our mission towards a world without violence and abuse.

The editors would like to thank all the chapter contributors for helping them to put together a handbook with such insightful reflections and wide-reaching coverage, one that further develops scholarship in this field. Thank you also to Josie Taylor at Palgrave Macmillan for her invitation to put this volume together and her support throughout the publishing process. Finally, the authors would like to thank their respective institutions, colleagues, families and loved ones for all their support over the years.
Contents

1. Gender, Violence and Technology: At a Conceptual and Empirical Crossroad (Anastasia Powell, Asher Flynn, and Lisa Sugiura) 1

Part I: Reflecting on Experiences

2. [Cum]munity Standards: Resisting Online Sexual Harassment and Abuse (Morgan Barbour) 19
3. Legal Possibilities and Criminalised Population Groups: A Personal Experience of an Indigenous Woman in the Sex Trade (Naomi Sayers) 41
4. Image-Based Sexual Abuse and Deepfakes: A Survivor Turned Activist’s Perspective (Noelle Martin) 55

Part II: Framing Gender, Technology & Violence

5. From Individual Perpetrators to Global Mobilisation Strategies: The Micro-Foundations of Digital Violence Against Women (Lilia Giugni) 75
6. Alternate Realities, Alternate Internets: African Feminist Research for a Feminist Internet (Neema Iyer) 93
7. Culturally and Linguistically Diverse (CALD) Women’s Experiences of Technology-Facilitated Violence: An Intersectional Approach (Carolina Leyton Zamora, Jennifer Boddy, Patrick O’Leary, and Jianqiang (Joe) Liang) 115
8. Abuse as Artefact: Understanding Digital Abuse of Women as Cultural Informant (Lauren Rosewarne) 135

Part III: Stalking and Partner Abuse

9. ‘Intimate Intrusions’: Technology Facilitated Dating and Intimate Partner Violence (Anastasia Powell) 157
10. Love, Hate and Sovereign Bodies: The Exigencies of Aboriginal Online Dating (Bronwyn Carlson and Madi Day) 181
11. Cyberstalking: Prevalence, Characteristics, and Impact (Jenna L. Harewell, Afroditi Pina, and Jennifer E. Storey) 203
12. Crossing a Line? Understandings of the Relative Seriousness of Online and Offline Intrusive Behaviours Among Young Adults (Victoria Coleman, Adrian J. Scott, Jeff Gavin, and Nikki Rajakaruna) 229

Part IV: Sexual and Image-Based Abuse

13. The Impact of Technology-Facilitated Sexual Violence: A Literature Review of Qualitative Research (Joanne Worsley and Grace Carter) 261
14. ‘It’s like Mental Rape I Guess’: Young New Zealanders’ Responses to Image-Based Sexual Abuse (Claire Meehan) 281
15. Image-Based Sexual Abuse: An LGBTQ+ Perspective (Ronnie Meechan-Rogers, Caroline Bradbury Jones, and Nicola Ward) 297
16. Sexual Violence and Consent in the Digital Age (Alexandra S. Marcotte and Jessica J. Hille) 319

Part V: Online Hate

17. It’s Just a Preference: Indigenous LGBTIQ+ Peoples and Technologically Facilitated Violence (Andrew Farrell) 335
18. ‘Women Get Away with the Consequences of Their Actions with a Pussy Pass’: Incels’ Justifications for Misogyny (Lisa Sugiura) 355
19. The Dirtbag Left: Bernie Bros and the Persistence of Left-Wing Misogyny (Pratiksha Menon and Julia R. DeCook) 375
20. Bystander Experiences of Online Gendered Hate (Jo Smith) 395

Part VI: Technologies for Justice

21. Police Body-Worn Cameras in Response to Domestic and Family Violence: A Study of Police Perceptions and Experiences (Mary Iliadis, Zarina Vakhitova, Bridget Harris, Danielle Tyson, and Asher Flynn) 417
22. He Said, She Said, We Watched: Video Evidence in Sexual Assault Trials (Amanda Glasbeek) 441
23. The Promises and Perils of Anti-rape Technologies (Lesley McMillan and Deborah White) 461
24. Using Machine Learning Methods to Study Technology-Facilitated Abuse: Evidence from the Analysis of UK Crimestoppers’ Text Data (Felix Soldner, Leonie Maria Tanczer, Daniel Hammocks, Isabel Lopez-Neira, and Shane D. Johnson) 481

Part VII: Legal Developments

25. Gaps in the Law on Image-Based Sexual Abuse and Its Implementation: Taking an Intersectional Approach (Akhila Kolisetty) 507
26. Gender-Based Abuse Online: An Assessment of Law, Policy and Reform in England and Wales (Kim Barker and Olga Jurasz) 529
27. Promises and Pitfalls of Legal Responses to Image-Based Sexual Abuse: Critical Insights from the Italian Case (Elena Pavan and Anita Lavorgna) 545
28. Restorative Responses to the Rhizomatic Harm of Nonconsensual Pornography (Alexa Dodge) 565
29. Disrupting and Preventing Deepfake Abuse: Exploring Criminal Law Responses to AI-Facilitated Abuse (Asher Flynn, Jonathan Clough, and Talani Cooke) 583

Part VIII: Community Responses and Activism

30. Of Commitment and Precarity: Community-Based Approaches to Addressing Gender-Based Online Harm (Rosel Kim and Cee Strauss) 607
31. Digital Defence in the Classroom: Developing Feminist School Guidance on Online Sexual Harassment for Under 18s (Tanya Horeck, Kaitlynn Mendes, and Jessica Ringrose) 631
32. “GirlsDoPorn”: Online Pornography and Politics of Responsibility for Technology Facilitated Sexual Exploitation (Ashlee Gore and Leisha Du Preez) 651
33. Image-Based Sexual Abuse, Legal Responses and Online Activism in the Aotearoa New Zealand Context (Fairleigh Evelyn Gilmour) 673
34. Public Responses to Online Resistance: Bringing Power into Confrontation (Laura Vitis and Laura Naegler) 693

Index 711
Notes on Contributors
Morgan Barbour is an American model, movement director, circus artist, writer and activist. She is a vocal advocate for victims of sexual abuse and has publicly criticised therapy restrictions for rape victims in the UK. Her writing has been recognised by Amnesty International and the United Nations. She is a former professor of movement at the University of Nebraska-Lincoln and a visiting special lecturer at Central St Martins.

Kim Barker is a Senior Lecturer in Law at the Open University (UK). Dr. Barker’s research focuses on internet and intellectual property law. Her research explores the regulation and control of online multi-user platforms, including online environments (particularly online games and social media sites), and the intersection between user responsibility, platform provider responsibility and legal regulation. Her research (with Dr. Olga Jurasz) explores issues of online misogyny, including online violence against women, and assesses the legal responses to such societal problems.
Jennifer Boddy is an Associate Professor in the School of Health Sciences and Social Work at Griffith University (Australia), and is passionate about creating healthy, sustainable environments, free from violence. Jennifer’s research is focused on environmental justice, feminism and programme evaluation. Coupled with her practice experience as a counsellor and therapeutic caseworker, she has an in-depth understanding of the complexities of people’s environments and their impact on health, wellbeing and safety.

Bronwyn Carlson is the Head of the Department of Indigenous Studies, Macquarie University (Australia). She is the recipient of three consecutive Australian Research Council grants exploring Indigenous peoples’ cultural, social, intimate and political engagements on social media. She is the author of The Politics of Identity: Who Counts as Aboriginal Today? (Aboriginal Studies Press, 2016), co-author of Indigenous Digital Life: The Practice and Politics of Being Indigenous on Social Media (Carlson & Frazer, Palgrave Macmillan, forthcoming 2021) and co-editor of, and contributor to, Indigenous Peoples Rise Up: The Global Ascendancy of Social Media Activism (Carlson & Berglund, Rutgers University Press, 2021).

Grace Carter is a Research Fellow at Coventry University (England, UK). Grace is working on the £1.3m NIHR-funded MESARCH project, examining health and service access among survivors of sexual assault and abuse, generating time-critical evidence to enhance policy at the highest level nationally and inform practice in the sexual assault and abuse sector. Grace is also working on the JiCSAV project, funded by the ESRC as part of the UKRI Rapid Response to Covid-19, exploring how the pandemic has impacted the criminal justice system in sexual offences cases. Grace was awarded a Ph.D. from the University of Liverpool; her thesis explored how to strengthen the UK evidence base of interventions for children who have experienced domestic violence and abuse.
Jonathan Clough is a Professor and Associate Dean (International) in the Faculty of Law, Monash University (Australia). He is an internationally recognised expert in the field of cybercrime, and the author of Principles of Cybercrime (2nd edn, Cambridge University Press, 2015) as well as numerous articles in national and international journals. He has specific expertise in the laws relating to the online exploitation of children, as well as the challenges of facilitating the international enforcement of cybercrimes. He has provided advice to government on cybercrime-related issues, and was a member of the Commonwealth Cybercrime Expert Working Group.

Victoria Coleman is a Ph.D. student in the Department of Psychology at the University of Bath, where she is investigating mental health and neurodevelopmental conditions. Victoria has a broad range of interests in psychology and education.

Talani Cooke is a final-year Law (Honours) student at Monash University (Australia). In 2020, Talani was selected for Monash Law School’s High Achiever Programme as one of the top 25 LLB students. She also completed her honours thesis on the need for further reforms to criminalise the non-consensual removal of condoms during intercourse. Talani hopes to further her research interests in sexual offences, cybercrimes and juries by commencing a Ph.D. in coming years.

Madi Day is a Ph.D. candidate and Lecturer in the Department of Indigenous Studies, Macquarie University (Australia). They are a co-investigator with Bronwyn Carlson on Mapping ‘what works’ in Aboriginal and Torres Strait Islander healing programs that respond to domestic and family violence, and sexual assault (2020–2022), funded by Australia’s National Research Organisation for Women’s Safety (ANROWS). They are the author of ‘Indigenist Origins: Institutionalising Indigenous queer and trans studies in Australia’ in Transgender Studies Quarterly (Vol. 7, Issue 3, 2020).

Julia R. DeCook, Ph.D., is an Assistant Professor of Advocacy and Social Change in the School of Communication at Loyola University Chicago.
Her research focuses on the ways that online extremist groups, particularly male supremacist groups, navigate the constraints and affordances of digital infrastructure to sustain their movements.
Alexa Dodge is the Hill Postdoctoral Fellow in Law, Justice and Society at Dalhousie University. Her current research analyses criminal and restorative justice responses to nonconsensual pornography, cyberbullying and digital harassment.

Leisha Du Preez is a Ph.D. student in the School of Social Sciences at Western Sydney University. She previously completed her Bachelor and Honours degrees in Social Science, majoring in Criminology, at Western. Leisha teaches across a variety of criminology subjects, including ‘Gender, Crime, and Violence’. Her Ph.D. work uses feminist visual and other methods to focus on the gendered nature of responsibility and risk in public spaces at night and women’s perceptions of safety.

Andrew Farrell is an Indigenous Early Career Academic Fellow and Ph.D. student in the Department of Indigenous Studies, Macquarie University. Andrew is a Wodi Wodi descendant from the Jerrinja Aboriginal community on the South Coast of NSW. Their research is multidisciplinary, with a focus on Aboriginal LGBTIQ gender and sexualities, community, media and online studies.

Asher Flynn is an Associate Professor of Criminology in the School of Social Sciences, Faculty of Arts at Monash University (Melbourne, Australia), and Vice President of the Australian and New Zealand Society of Criminology. She is a leading international researcher in policy and prevention concerning gendered and sexual violence, and AI and technology-facilitated abuse. Asher has published seven books and approximately 60 chapters, articles and reports, and is lead investigator on three externally funded projects on technology-facilitated abuse.

Jeff Gavin is a Senior Lecturer in the Department of Psychology at the University of Bath. As a critical social psychologist, Jeff has established a programme of research examining how identities are constructed and negotiated through online communication.
His current research explores the role of social media in young people’s relationships, identity and support, as well as the risks associated with sharing digital images and texts with peers.
Fairleigh Evelyn Gilmour is a Lecturer in Criminology and Gender Studies at the University of Otago, New Zealand. Their teaching and research interests include sex work governance, media representations of crime, incarceration and surveillance.

Lilia Giugni is a Research Associate at the Cambridge Centre for Social Innovation at the University of Cambridge (England, UK), and the co-founder and CEO of GenPol (Gender & Policy Insights), a UK-based think tank researching matters of gender and advocating for a gender-just world. A multi-disciplinary researcher, Lilia holds a Ph.D. in Politics from the University of Cambridge and is a Fellow of the Royal Society of Arts and Commerce. Her research interests and advocacy work cover gender-based violence and online abuse, intersectional feminism and the gendered side of institutions, work, technology and innovation. She sits on the board of several feminist organisations.

Amanda Glasbeek is an Associate Professor of Criminology in the interdisciplinary Department of Social Science at York University (Toronto, Canada). Her research interests span a range of topics at the intersection of feminist criminology and surveillance studies.

Ashlee Gore is a criminologist and Lecturer at Western Sydney University, specialising in gender, crime and violence. Her overarching research priority is gendered violence, with a focus on violence against women and the social, cultural and legal constructions of responsibility. Ashlee is currently completing a monograph with Routledge titled Fatal Relationships: Gender, Homicide, and the Politics of Responsibility in a Postfeminist Moment.

Daniel Hammocks is a Ph.D. student at University College London’s Department of Security and Crime Science, in the second year of a four-year MRes+M.Phil./Ph.D. programme. He has a background in mathematics and data science, with his current research focusing on the detection and prediction of emerging crime trends, as well as future methods of perpetration.
Across his research portfolio, he is interested in the application of data science, computer vision, and data visualisation to the Crime Science domain with a particular focus on policing.
Jenna L. Harewell is a Forensic Psychology Ph.D. student at the Centre of Research and Education in Forensic Psychology (CORE-FP) and a graduate teaching assistant in Psychology at the University of Kent (England, UK). Her main areas of research are victim-blaming, image-based sexual abuse and technology-facilitated abuse.

Bridget Harris is an Australian Research Council ‘Discovery Early Career Researcher Award’ Fellow and chief investigator in the Digital Media Research Centre and Centre for Justice at Queensland University of Technology. Her DECRA project explores responses to digital coercive control and how technology could be harnessed to prevent and respond to harm, and to protect and empower domestic violence victim-survivors. She works in the areas of domestic, family and sexual violence; technology-facilitated harm, advocacy and justice in the context of gender-based violence; and spatiality (place, space and spacelessness) and violence.

Jessica J. Hille is a gender and sexuality scholar and the Assistant Director for Education at the Kinsey Institute. Her research focuses on asexuality and ace spectrum identities, sexual pleasure and orgasm, and critical theory. She completed her Ph.D. in Gender Studies at Indiana University under Dr. Stephanie Sanders and Dr. Justin Garcia.

Tanya Horeck is an Associate Professor in Film, Media & Culture at Anglia Ruskin University, UK. She has published widely on crime, violence and rape culture in popular TV and film. She is the author of the books Justice on Demand: True Crime in the Digital Streaming Era and Public Rape: Representing Violation in Fiction and Film, and co-editor of two anthologies, The New Extremism in Cinema and Rape in Stieg Larsson’s Millennium Trilogy and Beyond.

Mary Iliadis is a Senior Lecturer in Criminology at Deakin University and Co-Convenor of the Deakin Research into Violence Against Women Hub.
Mary is leading a Criminology Research Council funded project titled, ‘Police body-worn cameras in response to family violence: A national study of victim-survivor perspectives and experiences’. Mary’s scholarship is comparative and international in scope, and explores the rights and protections afforded to victims in policy and practice in
criminal justice systems. She also researches prosecutorial discretion and explores access-to-justice issues for victims of gender-based violence.

Neema Iyer is a technologist based between Frankfurt, Germany and Kampala, Uganda. She has a Master’s in Public Health from Emory University (USA), and is the founder and director of Pollicy, a civic technology organisation based in Uganda. Pollicy uses data, design and technology to improve how citizens and government engage around public service delivery. Neema leads the design of projects focused on building data skills, fostering conversations on data privacy and digital security, and innovating around policy. Neema runs a podcast called Terms and Conditions together with Berhan Taye. Social media handles: @pollicyorg, @neemaiyer.

Shane D. Johnson is the Director of the Dawes Centre for Future Crime, Professor of Future Crimes and Co-Director of the Centre for Doctoral Training in Cybersecurity at University College London. He has worked within the fields of criminology and forensic psychology for three decades. His research has explored how methods from other disciplines (e.g., complexity science) can inform understanding of crime and security issues, and the extent to which theories developed to explain common crimes can clarify more extreme events such as riots, maritime piracy and cybercrime. He is currently interested in how technological and social change informs new opportunities for offending and new approaches to crime prevention.

Caroline Bradbury Jones is a Professor and the lead for the Risk, Abuse and Violence research programme at the University of Birmingham (England, UK). Caroline’s research is focused on issues of family violence and child abuse and neglect.

Olga Jurasz is a Senior Lecturer in Law at the Open University (UK). Dr.
Jurasz’s research focuses on international law, human rights and legal responses to violence against women (including online violence), specialising in feminist perspectives on law in these areas. Her research (with Dr. Kim Barker) also explores a number of aspects of online, text-based abuse, including consideration of online misogyny and online violence against women as a hate crime, as well as legal regulation of online abuse.
Rosel Kim is a staff lawyer at the Women’s Legal Education and Action Fund (LEAF) Canada, and chair of LEAF’s Technology-Facilitated Violence Project. Rosel holds an M.A. in English Literature and common and civil law degrees from McGill University. Rosel is one of the founding members of the Asian Canadian Women’s Alliance, a coalition of Asian Canadian-identifying women advocating for systemic change through a feminist and anti-oppressive lens. She has given numerous public talks on issues of privilege, equity and the #MeToo movement, and her writing on race, gender and identity has appeared in Huffington Post Canada, Precedent Magazine and GUTS Magazine.

Akhila Kolisetty is a lawyer, feminist policy advocate and writer based in the USA. She is Co-Director of End Cyber Abuse, a collective of activists, lawyers and researchers who tackle technology-facilitated gender violence. She has consulted and worked internationally on women’s rights, gender violence in online and offline spaces, and the legal empowerment of marginalised communities in Nepal, India, Bangladesh, Afghanistan and Sierra Leone. In New York, she provided legal representation to survivors of domestic violence and online harms. She holds a Bachelor’s in Economics and Political Science from Northwestern University and a J.D. from Harvard Law School.

Anita Lavorgna is an Associate Professor of Criminology at the University of Southampton (UK). Dr. Lavorgna has an international research track record on, and expertise in, interdisciplinary research drawing together criminology, socio-legal studies and web science. Among her publications are the textbook Cybercrimes: Critical Issues in a Global Context and the edited book Medical Misinformation and Social Harm in Non-Science Based Health Practices: A Multidisciplinary Perspective.

Carolina Leyton Zamora is a Ph.D. student in the School of Human Services and Social Work at Griffith University.
Carolina’s work experience in social aid programmes in her home country, Bolivia, focused on helping women who are victims of domestic and family violence. Her research and publications focus on the vulnerable situations of Indigenous and immigrant women.
Jianqiang (Joe) Liang is a Lecturer in the School of Human Services and Social Work at Griffith University (Australia). Joe’s areas of expertise include international social work teaching and research, working alongside young people, and programme evaluation (e.g., a domestic violence project and a positive youth development project). Joe’s publications cover a wide range of topics, such as social work with ethnic minorities, adolescents and families, migration and psychosocial competences.

Isabel Lopez-Neira is currently a Sustainability Policy Officer at the European Consumer Organisation (BEUC) and the European Consumer Voice in Standardisation (ANEC). During her time as a Research Assistant at University College London’s Department of Science, Technology, Engineering and Public Policy (STEaPP), she was an active member of the ‘Gender and IoT’ research project, and has since maintained close links with the project. Isabel holds a UCL Master’s degree in Science and Technology Studies. Her research interests focus on ethical and sociological questions surrounding research, technology and innovation.

Alexandra S. Marcotte is a Postdoctoral Research Fellow at the Kinsey Institute at Indiana University, Bloomington. Her research focuses on the intersections between sexuality, intimacy and technology. She is especially interested in how people use technology to facilitate their sexual and romantic lives, as well as how they negotiate consent in digital contexts. She was awarded a Ph.D. in Gender Studies from Indiana University under the guidance of Dr. Justin Garcia and Dr. Stephanie Sanders.

Noelle Martin is a lawyer in Western Australia and is currently completing her Master of Laws dissertation at The University of Western Australia with the Minderoo Tech and Policy Lab on the legal, ethical and social implications of Facebook Reality Labs’ hyper-realistic human avatars.
She worked as a judge’s associate at the Supreme Court of Western Australia in 2020, and in the Department of Justice in Western Australia prior to that.

Lesley McMillan is a Professor of Criminology and Sociology at Glasgow Caledonian University. Her research focuses primarily on sexual
violence, with a particular interest in statutory and institutional responses including criminal justice and policing, forensic medical intervention and evidence. She also researches the role of technology in relation to sexual violence perpetration, intervention and prevention. She is Associate Director of the Scottish Institute for Policing Research, where she leads the Public Protection Research Network, and Associate Director of the Centre for Research in Families and Relationships, University of Edinburgh.

Ronnie Meechan-Rogers is an Associate Dean in the School of Nursing at BPP University (England, UK). Ronnie is also a member of the Risk, Abuse and Violence research programme at the University of Birmingham (England, UK).

Claire Meehan is a Lecturer in Criminology at the University of Auckland (New Zealand). Her research interests are in the field of digital sexuality, pornography, gender and education. She is currently involved in several research projects, working with young people in Aotearoa New Zealand to gain insights into their digital sexual lives.

Kaitlynn Mendes is a Professor of Media, Gender and Sociology at the University of Leicester, UK. She has written widely on representations of feminism in the media, and on feminists’ use of social media to challenge rape culture. She is the author or editor of five books, including SlutWalk: Feminism, Activism and Media (2015), Feminism in the News (2011) and Digital Feminist Activism: Girls and Women Fight Back Against Rape Culture (2019, with Jessica Ringrose and Jessalynn Keller). More recently, she has studied online sexual harassment in school settings, and translated her research findings into impactful resources for schools and young people.

Pratiksha Menon is a doctoral candidate in the Department of Communication and Media at the University of Michigan. Her current research focuses on the ways in which humour functions as a tool in the mainstreaming of extremist ideologies.
Laura Naegler is a Lecturer in Criminology at the University of Liverpool. Her intersectional research focuses on cultural and political resistance. She is the author of several publications emerging from her ethnographic research with social movements and her study of processes and practices located on the urban margins. Her cross-disciplinary research is located within a theoretical tradition of critical and cultural criminology, advancing knowledge about practices of, and responses to, resistance. With a particular focus on the motivations and agency of social actors in urban contexts, her research further explores the dynamics of processes of control and reactions to them.

Patrick O’Leary is Professor and Director of the Violence Research and Prevention Program (VRPP) and a member of the Executive Leadership and Research Committee in the Griffith Criminology Institute (Australia). Patrick is an internationally recognised researcher with significant expertise in domestic violence/gender-based violence, child protection and the long-term impact of child sexual abuse. He has conducted a number of complex research projects in Australia, the USA, the UK, China, Indonesia, Sri Lanka, Pakistan, Albania, Sudan, Nepal and Lebanon for international clients including Terre des hommes, Islamic Relief Worldwide and UNICEF.

Elena Pavan is a Senior Assistant Professor at the Department of Sociology and Social Research, University of Trento (Italy). She holds a degree in Communication Sciences (University of Padova, Italy, 2004) and a Ph.D. in Sociology (University of Trento, 2009). She has developed her expertise in the study and use of social network analysis in various fields of research (from supranational governance political processes to human–computer interaction) and in conjunction with other analytical techniques, such as lexicon-content analysis.
Afroditi Pina is a Senior Lecturer in Forensic Psychology at the Centre of Research and Education in Forensic Psychology (CORE-FP) at the University of Kent (England, UK). Her main areas of research are online and offline sexual violence, including rape and sexual harassment, image-based sexual abuse, intimate partner violence and self and sexual objectification.
Anastasia Powell is an Associate Professor in Criminology & Justice Studies at RMIT University (Melbourne, Australia). She is Editor-in-Chief of the book series Crime and Justice in Digital Society (Springer), a member of the editorial boards of the journals Crime Media Culture and Current Issues in Criminal Justice, and a board director of Our Watch, Australia’s national organisation for the prevention of violence against women. Her research examines the intersections of gender, violence, justice and technology, and includes the books Image-Based Sexual Abuse (2020, Routledge), Digital Criminology (2018, Routledge), Sexual Violence in a Digital Age (2017, Palgrave), Rape Justice (2015, Palgrave), Preventing Sexual Violence (2014, Palgrave), Domestic Violence (2011, Australian Scholarly Publishing), and Sex, Power and Consent (2010, Cambridge University Press).

Nikki Rajakaruna is a Lecturer in the School of Arts and Humanities at Edith Cowan University, where she is a member of the Sellenger Centre for Research in Law, Justice and Social Change. Nikki has a broad interest in policing and has strong collaborations with various industry partners, including Western Australia Police.

Jessica Ringrose is a Professor of Sociology of Gender and Education at the UCL Institute of Education. She is an internationally recognised expert on gender equity in education, youth digital sexual cultures, and feminist participatory research methodologies, and has collaborated on funded research on these topics with colleagues in the UK, Canada, USA, Australia, New Zealand and Europe. She is the 2020 recipient of the Distinguished Contributions to Gender Equity in Education Research Award from the American Educational Research Association, which recognises her commitment to societal impact and making research matter beyond academic audiences.

Lauren Rosewarne is an Associate Professor at the University of Melbourne (Australia).
She has published and commented widely on issues of sex, gender, politics, new technology and the media. She is the author of eleven books, most recently Why We Remake: The Politics, Economics and Emotions of Film and TV Remakes (Routledge, 2020). More information is available at www.laurenrosewarne.com.
Naomi Sayers is an Indigenous feminist lawyer based out of Ontario (Canada), called to the bar in two provinces. She has a public law practice and works with organisations in challenging state responses to the needs of vulnerable and marginalised groups, particularly Indigenous women and girls. She has published widely on issues impacting Indigenous women and girls, and her work is often cited by others to influence change and advocate for policy or law reform. She uses her personal story along with conventional legal knowledge and training to inspire change among various kinds of institutions. She tweets over at @kwetoday.

Adrian J. Scott is a Senior Lecturer in the Department of Psychology at Goldsmiths, University of London, where he is a member of the Forensic Psychology Unit and Co-Director of an accredited M.Sc. programme in Forensic Psychology. Adrian has a broad interest in forensic psychology, specialising in the areas of stalking, non-consensual image sharing, investigative interviewing and eyewitness testimony.

Jo Smith is a Lecturer in Law at the University of Brighton. She recently completed her Ph.D. looking at feminist women's experiences of online gendered hate. She is the chair of the British Society of Criminology Hate Crime Network, and a practising solicitor who works for legal charity Rights of Women.

Felix Soldner is a Ph.D. student at University College London's Department of Security and Crime Science. He has a wide interest in many research areas, which led him to study Psychology, Biomimetics as well as Brain and Cognitive Sciences. Following this, he developed an interest in data science, machine learning and natural language processing, which he now integrates in his work. His current research focuses on how data science methods can facilitate the detection and prevention of online fraud (e.g., counterfeit goods).

Jennifer E. Storey is a Lecturer in Forensic Psychology at the Centre of Research and Education in Forensic Psychology (CORE-FP) at the University of Kent. Her main areas of research are stalking, elder abuse and intimate partner violence. She works extensively with health, criminal justice, social work, and other agencies that respond to interpersonal violence.
Cee Strauss is a staff lawyer at the Women's Legal Education and Action Fund (LEAF) Canada. Cee received their B.C.L./LL.B. from McGill University in 2016, graduating with the Elizabeth Torrance Gold Medal. They also hold an M.A. in Communication Studies from McGill. Cee was called to the Ontario Bar in 2017 and the Barreau du Québec in 2020. Throughout their studies and working life, Cee has volunteered with the Prisoner Correspondence Project, a queer and trans prisoner penpal programme.

Lisa Sugiura is a Senior Lecturer in Criminology and Cybercrime at the School of Criminology and Criminal Justice, University of Portsmouth (Portsmouth, England), and the Deputy Director of the Cybercrime Awareness Clinic. Her research focuses on online deviance, and technology-facilitated sexual abuse and violence. She has published on topics including the online pharmaceutical trade, phishing, online research ethics and rape culture online. Her research projects, which include funding from the National Cyber Security Centre (NCSC) and the UK Home Office, involve the language of cybersexism, victims of computer misuse, technology-facilitated domestic abuse and extremist and misogynistic behaviours in incel communities.

Leonie Maria Tanczer is a Lecturer in International Security and Emerging Technologies at University College London's Department of Science, Technology, Engineering and Public Policy (STEaPP) and affiliated with UCL's Academic Centre of Excellence in Cyber Security Research. Her research focuses on questions related to Internet security, and she is specifically interested in the intersection points of technology, security, and gender. Since 2018, she has been the Principal Investigator of the 'Gender and IoT' study, which examines the implications of the Internet of Things (IoT) and other digital technologies on victims/survivors of gender-based domestic violence and abuse.
Danielle Tyson is a Senior Lecturer in Criminology at Deakin University and Co-Convenor of Deakin Research on Violence Against Women Hub and Co-Director of the Monash Deakin Filicide Research Hub. Danielle’s research interests include intimate partner/domestic/family violence, legal responses in cases of intimate partner homicide, the
impacts of homicide law reform, and the nature and dynamics of filicide (the killing of a child by a parent or step-parent).

Zarina Vakhitova is a Monash University Lecturer with the School of Social Sciences, specialising in criminology and criminal justice. Her research interests focus on the application of environmental criminology principles to understanding and controlling crime, particularly in the context of cyberspace. Her work is published in PLoS One, Computers in Human Behavior, Crime Science, Crime Prevention and Community Safety, Deviant Behavior, Journal of Contemporary Criminal Justice, International Review of Criminology, and other international peer-reviewed journals.

Laura Vitis is a Lecturer at the Queensland University of Technology. Her research focuses on how technology is used to facilitate gendered, sexual and domestic violence within the Global South. Her work also examines the regulation of and resistance to technologically facilitated violence and youth sexting. In 2017, she co-edited a collection entitled Gender, Technology and Violence for Routledge. She has established an international research profile aligned with this research agenda. Her research on extra-judicial justice for online sexual harassment has been published in the leading international journal Crime, Media, Culture.

Nicola Ward is a Lecturer in the School of Social Policy at the University of Birmingham (England, UK). Her primary role, as a social work lecturer, enables her to combine her interests in social work, social care and social justice. Her research is focused on diversity, intersectionality and care ethics.

Deborah White is a Professor in the Department of Sociology at Trent University in Ontario, Canada. Her research focuses on the institutional responses to sexual violence, particularly medico-legal interventions and the role and nature of forensic evidence and experts in criminal justice systems.
As a feminist scholar of science and technology studies (STS), she also conducts critical research on technologies of rape and sexual assault, specifically anti-rape technologies.
Joanne Worsley is a Research Associate at the University of Liverpool. In collaboration with the National Centre for Cyberstalking Research in the UK, Joanne explored the psychological and interpersonal harms of being stalked via electronic means through the voices and experiences of one hundred cyberstalking victims (Worsley et al., 2017). Joanne has also conducted research on cyber victimisation in adolescent and university student samples (McIntyre et al., 2018; Worsley et al., 2018). Joanne was awarded her Ph.D. in Clinical Psychology from the University of Liverpool, and her thesis explored the relationship between mental health, wellbeing and the online environment.
List of Figures
Fig. 12.1  Mean rankings of conceptually similar online and offline intrusive behaviours (1 least serious, 20 most serious)  238
Fig. 21.1  Frequency of BWC use among the WAPOL participants  426
Fig. 21.2  Frequency of BWC use among the QPS participants  427
Fig. 21.3  The perceived likelihood that BWCs will positively impact public perceptions of the police, procedural fairness, and transparency and accountability in decision-making in DFV responses. Vertical lines indicate median scores  429
List of Tables
Table 12.1  Cue card categories and example intrusive behaviours  235
Table 12.2  Categorisations of conceptually similar online and offline intrusive behaviours  237
Table 21.1  Descriptive statistics of the sample (N = 452)  425
Table 21.2  Contexts of BWC use in DFV responses  428
Table 21.3  Spearman correlation coefficients (r) for the variables of interest  431
Table 21.4  Police impressions of BWCs by specialisation  432
Table 24.1  Averaged performance scores for the "linearSVC" classifier. SD values in parentheses  491
1 Gender, Violence and Technology: At a Conceptual and Empirical Crossroad Anastasia Powell, Asher Flynn, and Lisa Sugiura
A. Powell, Criminology and Justice Studies, RMIT University, Melbourne, VIC, Australia. e-mail: [email protected]
A. Flynn, School of Social Sciences, Monash University, Clayton, VIC, Australia. e-mail: [email protected]
L. Sugiura, School of Criminology and Criminal Justice, University of Portsmouth, Southampton, UK. e-mail: [email protected]

Introduction

Gendered violence is a global and urgent human rights concern. Worldwide, over a third of women have experienced physical and/or sexual
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_1
intimate partner violence, or non-partner sexual violence, in their lifetime (World Health Organisation, 2013). There can be little doubt that as digital, surveillance and communications technologies have advanced in capabilities and uptake, they have also become embedded in both familiar and novel forms of violence, harassment and abuse. Indeed, the 'perpetual contact' (Katz & Aakhus, 2002) that these technologies enable, as well as the sheer extent of sensitive information and opportunities for invasion of personal life, have equipped perpetrators with ready means for a range of harms including stalking, domestic and partner violence, sexual violence and sexual harassment. The differential experiences of technology-facilitated violences and harms according to a person's gender (and many other intersecting marginalisations) are increasingly evident in a burgeoning field of scholarly works. In this handbook, contributors provide an account of the rapid developments in gendered technology-facilitated violence and abuse, the diversity of scholarship seeking to better understand it, as well as the need for reforms and activism to address it. This chapter provides an overview of the organising themes examined in this state-of-the-art volume of gendered violence and technology. Though the chapters themselves examine many examples of gendered violence (including stalking, domestic and family violence, dating violence, sexual violence, sexual harassment, and image-based abuse), they also engage to varying extents and in different ways with core concerns of the gendered nature of violence, technological affordances in the perpetration of gendered violence, justice responses, and resistance in civil society. In this chapter, we seek to establish some grounding concepts and frameworks that provide an important context and shared understanding for the chapters that follow.
This chapter proceeds with a discussion of three overarching themes that run throughout the handbook: gendering violence, gendering technology, and digital justice and resistance. This discussion is followed by an outline of the structural organisation of the handbook as a whole. This handbook represents a vital gathering of knowledge on gender, technology and violence; one that will no doubt seed further investigations and analyses that will expand and enhance the terrains explored here.
Gendering Violence

What are the qualities or features that make violence 'gendered'? Arguably, all violence could be considered gendered in so far as gender is an institutionalised and organising structure of societies. In turn, all victim/survivors and perpetrators of violence could be said to be gendered subjects, who cannot help but bring the social, cultural and political meanings and their own lived realities of gender into their experiences of violence. It is not merely men's violence against women then that we might consider to be 'gendered violence'. Men's violence towards other men, women's violence towards men and women, and violence towards transgender and gender non-binary people can all be examined as gendered in various ways. Yet, this is not typically how gendered violence is operationalised within contemporary international scholarship. Perhaps among the most common terms in the field are 'violence against women' or 'men's violence against women'; though these could also readily be described as just a sub-type of gendered violence. In these terms, the defining quality appears to be prevalence-based, whereby violence is asymmetrical according to gender. In other words, where one gender predominantly perpetrates violence against another. However, as is examined throughout this handbook, gender asymmetry in prevalence is certainly not the only way in which various violences can be understood as gendered. As many scholars have argued, language matters, and the terms we use to describe violence carry with them numerous assumptions, inclusions and exclusions (e.g. Boyle, 2018; Maddocks, 2018; McGlynn & Rackley, 2017). In her 2018 essay 'What's in a name?
Theorising the inter-relationships of gender and violence', Karen Boyle asks: 'Is our focus on commonalities among victims (violence against women), perpetrators (men's violence) and meanings (gendered or gender-based violence; sexual violence), or on a theoretical and political approach (feminist)?' (p. 20, emphasis in original). As we discuss below, all of these different meanings or foci of 'gendered violence' are represented throughout this handbook. Yet additionally, one of the ways that we have sought to practice an inclusive feminist approach here is through inviting contributions from survivors, advocates and activists, as well as
traditional academic scholars. Though a majority of chapter contributions remain authored by career academics, opening up the volume has enabled equally vital forms of knowledge to be reflected within its pages. Throughout this volume, chapter contributions examine varying aspects of the gendered nature of violence and abuse that is enacted with the aid of an assemblage of digital, surveillance and communication technologies. In some examples, contributors reflect on their own lived experiences of online victimisation and how it was characterised by gender and in some cases sexualised violence (see Chapters 2, 3 and 4). In Chapter 2, model and activist Morgan Barbour provides an account of her experiences of online harassment and abuse, as well as her social media presence (under the handle 'Cummunity Standards'), in which she challenges the sexist and harmful abuse she has received. In Chapter 3, Indigenous feminist lawyer Naomi Sayers situates her own experiences of technology-facilitated abuse within structures of systemic discrimination that are experienced by Indigenous women and women of colour in particular. In Chapter 4, lawyer and activist Noelle Martin shares her experience of image-based abuse, as well as her fight to reform the law and to improve community understanding of the severe harms of such abuse. In other chapters, contributors focus squarely on the structural and systemic nature of gendered technology-facilitated abuse, identifying these harms not only as an extension of women's experiences of partner violence, sexual violence and harassment, but directly situating these within gender inequality. For instance, as Lilia Giugni discusses in Chapter 5, women's participation in both offline and online public spaces is marked by systems of gender inequality and, at times, outright misogyny (see also Chapters 8 and 9).
Some chapters more specifically examine particular forms of violence and abuse with respect to their differential extent, relational nature (such as in partner or sexual violence) and/or impacts on women relative to men. For example, in Chapter 11, Jenna L. Harewell, Afroditi Pina and Jennifer E. Storey discuss the gendered extent and impacts of cyberstalking, while Coleman and colleagues (Chapter 12) further examine gendered differences in perceptions of online versus offline stalking. Sexual and image-based forms of abuse are elaborated upon in Chapter 13, by Joanne Worsley and Grace Carter,
who provide an overview of current trends in technology-facilitated sexual violence; and in Chapter 14, where Claire Meehan discusses young women's experiences of image-based sexual abuse in the New Zealand context. Though much of the handbook details women's experiences of diverse forms of gender-based violence, chapter contributions also engage with intersecting inequalities, marginalisations, discrimination and hate-based abuse. In Chapter 18, for instance, Lisa Sugiura describes how misogyny is excused by the online incel community and broader manosphere, normalising men's violence against women. In Chapter 19, Pratiksha Menon and Julia R. DeCook explore the misogyny pervasive in left-wing politics, which has largely evaded academic analysis and attention. Additionally, in Chapter 20, Jo Smith highlights the experiences of women peripheral to online misogynistic hatred, showing how these forms of abuse do not need to be directly experienced in order to cause harm to bystanders. In some examples, contributors note the differential extent and impacts of technology-facilitated abuses for transgender, non-binary and/or sexuality diverse people. For example, in Chapter 17, Andrew Farrell suggests that dating apps enable dangerous and violent experiences that affect Aboriginal and Torres Strait Islander LGBTQI+ peoples in unique ways. In Chapter 15, Ronnie Meechan-Rogers and colleagues provide insights into the lived experiences of LGBTQ+ victim-survivors of image-based sexual abuse. Drawing on an international study involving participants across Australia, Canada, Ireland, New Zealand, Singapore, Switzerland, the UK and the US, they argue that IBSA has a pervasive and pernicious impact upon the lives of LGBTQ+ victim-survivors, specific to their sexuality. Several chapters engage explicitly with the experiences of culturally and linguistically diverse (CALD) women, as well as Indigenous peoples, women of colour, and the concept of intersectionality (Crenshaw, 1990).
In Chapter 6 for example, Neema Iyer draws on African women’s experiences of technology-facilitated abuse and advocates for a fundamental shift in power to challenge the inequalities that underpin these experiences. Meanwhile, in Chapter 7, Carolina Leyton and colleagues discuss the experiences of CALD women in Australia, highlighting the relative
inattention to their experiences of online abuse. In Chapter 10, Bronwyn Carlson and Madi Day focus on the intersecting issues of racism, colonialism, and sexism in Aboriginal peoples’ experiences navigating online dating in the Australian context. Related themes are further discussed by staff lawyers at the Women’s Legal Education and Action Fund (LEAF) Canada, Rosel Kim and Cee Strauss in Chapter 30, as they highlight how gender-based online harm almost always targets intersecting markers of oppression, such that there are specific forms of colonial gender-based online harms, which are both familiar and especially virulent. Gender is operationalised throughout this handbook in an inclusive way, and one that seeks to recognise that though gender may be a major structural factor in women’s experiences of digital harms, violence and abuse, it clearly intersects with other axes of inequality and marginalisation (see also Chapters 2 and 25).
Gendering Technology

It is by no means novel to suggest that technology itself is gendered in a host of ways. From the failure to recognise the contributions of women as early inventors and developers, and the male-dominated cultures and workplaces of contemporary technology companies, to the ways (masculine) gender is blindly designed into the very code of the machines, and the many ultimate uses of technology to abuse; the gender inequality present in our societies is likewise deeply enmeshed in our technologies (see e.g. Blair, 2019; Broad, 2018; Wajcman, 2004, 2010). Some early technologists, including some cyber- and technofeminists, were certainly optimistic about the potential democratising effect of online technologies (see e.g. Haraway, 1985, 1991; Plant, 1996, 1997; Turkle, 1995). Donna Haraway (1985) was famously so, suggesting that women needed to embrace the opportunities enabled by technoscience for the purposes of emancipation. Meanwhile, Sherry Turkle (1995) put forward a vision of women's freedom from gendered identities as digital spheres enabled new expressions of self that resisted the old norms. Sadie Plant (1996) similarly suggested that 'virtuality brings a fluidity to identities which once had to be fixed' (Plant, 1996, p. 324) and as such,
has the potential to produce 'genderquakes', fundamentally breaking up traditional gendered power relations. Yet more contemporary scholarship has conversely identified that technologies have not in fact opened up diverse possibilities for more marginalised identities to participate and be represented (see e.g. Wajcman, 2004; Wyatt, 2008); at least not to an extent that an equality now exists. Rather, there remains a multi-layered, and gendered, digital divide (Cooper & Weaver, 2003; Robinson et al., 2015). Not only have technologies failed to deliver on the promise of equality and diversity of participation and representation, but their integration into our daily lives has arguably become particularly influential in the realm of private life. As Green and Singleton (2013) note, contemporary developments in digital and communications technologies are particularly concerned with private information, personal connection and belonging; constituting a 'digital sociality'. Indeed, 'it is across these key arenas – time/space, personal relationships, sociality – that a gender lens is vital' (Green & Singleton, 2013, p. 35). In effect, technologies have become fully enmeshed in our personal relationships, and this is precisely the arena in which women are most likely to experience violence, harassment and abuse (especially as compared with men, who typically experience violence in the realms of public life, and largely from other men). A further overarching theme that is woven throughout many of the chapter contributions here is a complex understanding of the relationship between technology and violence; one that does not position technology itself as a primary or determining cause of gendered violence and abuse. Though technologies may certainly extend perpetrators' access to victims across time and space, as well as amplify the impacts of abuse (see Powell & Henry, 2017), these new capabilities are used to engage in largely all-too-familiar harms.
For example, as Alexandra S. Marcotte and Jessica J. Hille discuss in Chapter 16, violations of individual autonomy with respect to sexual consent carry many parallels between technology-assisted and conventional sexual violence, even though there are aspects to digital sexual violence which may be novel in both their reach and harms. Yet neither are technologies mere neutral tools in the hands of perpetrators of gendered violence. The proliferation of purpose-built
‘stalking apps’, as well as the gendered nature of technology design and development more broadly are suggestive of a mutual shaping of technology and gender-relations including violence. Such is a ‘technofeminist’ (see Wajcman, 2004) approach to gendered violence—one that seeks to understand and address this mutual shaping of technology and gender-relations ‘over time and across multiple sites’ (Wajcman, 2010, p. 150). Indeed throughout this handbook, chapter contributors engage with varying aspects of the nexus of gender, violence and technology, seeking to elucidate to varying extents the roles of individual, institutional and societal level factors (see Chapters 8, 9, 18). In doing so, several of the chapters in this handbook engage directly with theories of technology and society, with some taking an explicitly feminist approach to analysing the mutual shaping of gendered power relations and technology-facilitated abuse.
Resisting Gendered Violence

There remains an optimism within studies of technology and society that as communications technologies continue to expand and develop, they offer more opportunities for democratisation, participation and representation (Bennett, 2003, 2005; Eubanks, 2012). Indeed, there is similar optimism within so-called 'fourth wave' and digital feminisms that social media in particular have radically opened up knowledge and discourse challenging gendered violence (Baer, 2016; Clark, 2016; Horeck, 2014; Keller et al., 2016; Nuñez Puente, 2011; Rentschler, 2014; Thrift, 2014; Williams, 2015). The potential of online activism came to fruition when victim-survivors of sexual violence took to social media and shared their experiences using the hashtag #MeToo, though these were by no means the first publicly shared accounts of harassment and abuse, nor was this the original use of the term (attributable to Tarana Burke). Nevertheless, these digital disclosures trended worldwide on Twitter, captured public interest, and reinvigorated feminist activism against sexual violence (Powell & Sugiura, 2018). Such forms of digital feminist activism evidence the systemic problem of sexual violence, yet the
broader gender power imbalances and socio-cultural drivers enabling it also need to be tackled. Many of the chapters explore the role of law reform and policy change in relation to technology-facilitated abuse and the ways in which governments have sought to keep up with technological change, often ineffectively. In Chapter 26, Kim Barker and Olga Jurasz explore the potential for legal reform in England and Wales to address gender-based abuse online. While acknowledging some shifts towards addressing the harms of gendered violence, they argue that the reforms to date have been fragmented, disjointed and left gaps that overlook gender-based abuse and online violence against women. Similarly, in Chapter 27, Elena Pavan and Anita Lavorgna examine the promises and pitfalls of legal responses to image-based sexual abuse in Italy. Much like Barker and Jurasz, they find that the current law is unfocused and fails to address the harms and differing types of image-based sexual abuse, thereby failing to keep up with the way technologies are being used to abuse. Others argue for the introduction of alternatives to legal reform. For example, in Chapter 28, Alexa Dodge discusses the possibility of introducing restorative approaches to image-based sexual abuse in schools, as a way both to acknowledge the harms experienced by victim-survivors and to transform and address problematic beliefs within an institution that may deepen and extend the harms. In some chapters, contributors explore the potential for technologies to assist with justice. Mary Iliadis and colleagues discuss the potential for domestic and family violence victim-survivors to have improved experiences with police as a result of body-worn camera technologies, which reduce requirements on victim-survivors to testify, and which they argue can improve perceptions and experiences of procedural justice (Chapter 21).
Others, however, show how technologies aimed to assist victim-survivors in seeking justice are limited in their capacity to do so in practice. In Chapter 22, for example, Amanda Glasbeek examines the role of CCTV in sexual assault trials. While recognising the potential for this to assist in cases where identification is key, Glasbeek ultimately argues that it offers little in the way of assisting victim-survivors. Resistance is a further overarching theme across many chapter contributions. In Chapter 33, Fairleigh Gilmour examines legal and activist
responses to image-based sexual abuse in the Aotearoa New Zealand context, claiming that despite there being a shift from traditional victim-blaming narratives, anti-victim elements remain within the legislation, activism, and organisational and institutional change. In Chapter 31, Tanya Horeck, Kaitlynn Mendes and Jessica Ringrose present their development of Online Sexual Harassment guidance and recommendations for schools, arguing that school policies must move beyond the limitations of the law in order to recognise and respond to the highly gendered and sexualised nature of harms experienced by young people. There are also calls for more radical social, cultural and political change, such as in Chapter 30, where Rosel Kim and Cee Strauss discuss the potential of community-based responses to meet the justice needs of victim-survivors of gender-based online harm, and the tensions arising from the mediated nature of online communication hosted on platforms with corporate interests. Meanwhile, in Chapter 34, Laura Vitis and Laura Naegler demonstrate the significance of public responses to online testimony to provide insight into contemporary understandings of both sexual violence and online resistance. Collectively, our contributors seek to challenge and resist technology-facilitated gendered violence and the systems of inequality that underpin it. This is, of course, no small undertaking. In light of the mutual shaping of technology and society, and consequently the nexus of gender, violence and technology, these harms will not simply be addressed through better or more informed technological design (see Chapter 23). Nonetheless, there is a role for technology companies and providers to cooperate with governments and law enforcement, as well as to be more accountable to their communities of users.
For instance, as Flynn and colleagues discuss in Chapter 29, there is a clear role for Internet intermediaries to take on a higher level of corporate responsibility in detecting, preventing and responding to technology-facilitated abuse, specifically in relation to the creation and sharing of image-based sexual abuse. Similarly, in Chapter 32, Ashlee Gore and Leisha Du Preez argue that companies and platforms hosting pornography are able to evade responsibility for the gendered harms ensuing from the reproduction of non-consensual content via rationalities of 'postfeminism' and reactive formations of popular misogyny. Ultimately, as the problem of technology-facilitated gendered violence is multifaceted, so must the
solutions be. Each of the various technical, legal and social aspects need to be addressed if we are to ensure justice for victim-survivors and address the causes so as to prevent these harms in the first place.
Organisation of the Handbook

The chapters within this volume bring together interdisciplinary perspectives from across criminology, psychology, sociology, education, law and others to address various types and contexts of gendered violence including stalking, domestic and family violence, dating violence, sexual violence, sexual harassment, and image-based abuse. This handbook represents a comprehensive resource for scholars, students, teachers and those advocating for change to address gendered forms of online abuse and justice for victim-survivors. As editors, we have sought to include scholarship representing different regions globally, as well as disciplinary perspectives across the social and political sciences, media and cultural studies, psychology and law. The contributions published in this volume are testament to the rapid developments in gendered technology-facilitated abuse, the need for reforms and activism to address it, as well as the diversity of scholarship seeking to better understand it. The handbook consists of 34 chapters that are divided into eight parts. It begins with Part I: Reflecting on Experiences, in which survivor-scholars provide an account of their own experiences of technology-facilitated abuse and how they themselves frame its nature, causes and impacts. In Part II: Contextualising Gender, Technology and Violence, the chapters unpick the structural and systemic gendered nature of online abuse as a whole. The next three parts examine key topics on the nature and impacts of technology-facilitated abuse, with chapters that each provide an up-to-date examination of various forms of gendered technology-facilitated violence and the impacts of these harms on victim-survivors, comprising: Part III: Stalking and Partner Abuse, Part IV: Sexual and Image-Based Abuse, and Part V: Online Hate. Then, contributors engage with questions of response and redress.
In Part VI: Technologies for Justice, chapters address how technologies might be harnessed to assist
justice-seeking for victim-survivors. Next, in Part VII: Legal Developments, contributors examine law reform in response to multiple forms of technology-facilitated abuse. Finally, in Part VIII: Community Responses and Activism, chapter contributions examine efforts to tackle online abuse through institutional, organisational and community settings, and consider the role of technologies in advocacy and activism to address gendered violence more broadly. Each of the chapters provides contemporary theoretical, empirical, political and/or policy-relevant contributions to the field, and they do not need to be read in a particular order, though we of course invite readers to engage with the handbook in its entirety.
Conclusion
There is little doubt that digital, communications and surveillance technologies have become increasingly normalised and enmeshed in our day-to-day lives. Just as there is an inextricable bond between technologies and contemporary social and political life, so too is there a nexus of gender, violence and technology. In one sense, we might understand technologies as providing tools or strategies that are utilised by perpetrators as part of their repertoire of violence and abuse. Yet at the same time, we must not ignore that technologies themselves are not gender-blind; that in some instances their very design, whether overtly intended or not, replicates, amplifies and exacerbates gendered violence. Indeed, technologies and their integration into experiences of gendered violence reflect back to us the continuum of inequalities, violence and abuse that exist throughout our societies.

The chapters throughout this volume provide an account of the rapid developments in gendered technology-facilitated violence and abuse, the diversity of scholarship seeking to better understand it, as well as the need for reforms and activism to address it. Though we have endeavoured to be as thorough as possible, we acknowledge that there are always gaps and limitations to any anthology. Nonetheless, we see this handbook as a vital gathering of knowledge on gender, technology and violence that will seed further investigations and analyses.
Acknowledgements
The author(s) received no financial or research support for the authorship and/or publication of this chapter.
Part I Reflecting on Experiences
2 [Cum]munity Standards: Resisting Online Sexual Harassment and Abuse
Morgan Barbour
Introduction
I was twelve years old the first time a stranger on the internet asked me for a photo of my genitals. An adult man wrote to me on DeviantArt, an online art sharing community, requesting a photo of my ‘pussy’. The sheltered child that I was, and so eager to please anyone I perceived as an authority figure, I complied. That photo fortunately featured the only definition of pussy I knew at the time—the family cat. That was also the day I was called a cunt and blocked online for the first time. It was only with the hindsight of adulthood that I was able to fully comprehend how a pejorative of my own prepubescent genitalia had been hurled across the digital sphere in an attempt to shame a child too naïve to understand she was being preyed upon.

M. Barbour (B)
London, UK
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_2
It has been sixteen years since that first incident, and if an underage pussy photo request was the first battle, my collective experience as a woman on the internet has been a relentless war.

I owe the internet my career. I am a multidisciplinary artist who has been able to travel the world, largely thanks to a significant online following. The internet has opened up opportunities that would never otherwise have been presented to me, and for that I am immensely grateful. My intent in writing this chapter, much like the intent at the core of my activism, is not to demonise or vilify the internet. Instead I wish to examine a medium that has greatly impacted my life with empathetic criticism, examining an incredibly wondrous and flawed tool with the hope of improving it for future users.

Within this chapter, I will speak frankly of my own personal experiences as a woman making a career utilising the internet and social media (ranging from prepubescent explicit photo requests, to real world violence and stalking from powerful men) and the impact of my project, Cummunity Standards (a pictorial callout blog responding to social media’s cherry-picking adherence to their Terms of Service). I will also examine the real world violence and castigation of marginalised persons stemming from digital rhetoric and actualised IRL (in real life). I hope to empower readers to think critically about their own online interactions and sense of digital autonomy and anonymity. If there could be one takeaway from this chapter, it would be this: if reading about the abuse I and other women have experienced online makes you deeply uncomfortable, imagine how it must feel to be on the receiving end.
This Post Does not Violate Our [Cum]munity Standards
Amnesty International labelled Twitter a ‘toxic place for women’ in 2018, citing Twitter’s refusal to properly investigate gendered abuse on their platform as the cause of self-censorship among female users, with some users driven off the platform entirely (Amnesty International, 2018). Facemash, Facebook’s predecessor, launched by Mark Zuckerberg in 2003, featured images of fellow Harvard students posted without consent, comparing them to barnyard animals in order to rate their
attractiveness (Hoffman, 2018). YouTube, the video sharing platform responsible for 37% of all mobile network traffic, has been criticised for algorithms that propagate a ‘rabbit hole effect’, whereby recommended videos contain increasingly radical and inflammatory content to keep viewers engaged and increase advertising revenue, consequently providing a sounding board for alt-right anti-feminists such as Milo Yiannopoulos and Mike Cernovich (Bates, 2020).

The internet is a reflection of our society. The aforementioned platforms were founded exclusively by cisgender men, arguably borne of a culture of privilege and kept afloat in part by systemic misogyny. Social media now sports more female-identifying users (71%) than male-identifying (62%), but studies show that male users still log more time online, likely due to existing societal structures that support a gendered imbalance of labour (Idemudia et al., 2017). When social media platforms are founded in a misogynistic world and knowingly run algorithms that push dangerous narratives to protect their bottom line, we are left with a form of communication that is silently metastasising.

‘I have never met someone who brags about being hit on as much as you’, my ex-partner told me when I publicly posted up screenshots detailing a particularly violent sexual message that had slid into my DMs (direct messages) on Instagram one morning. I had engaged in this practice for a few years when this comment was made, and had gained online traction speaking about unregulated gendered harassment running amok on the internet. While my former partner’s brazen disregard for my emotional state after years of online abuse was disheartening, his views were not unusual. The idea that women are ‘asking’ for online abuse, or that the abuse reported by women is ‘highly exaggerated’, is widely propagated in online communities.
I have frequently been advised that engaging with online abusers in any capacity amounts to giving the abuser what they wanted: attention. This notion that one should not ‘feed the trolls’, an amusing adage that helps to downplay the severity of online abuse, has always left a bad taste in my mouth. I can understand how an outsider may think that the mental toll of trying to tackle online abuse would be greatly dissipated if the victim simply let the abuse roll off their back, but this approach puts the onus on the victim, rather than demanding the abuser
abstain. This outsider would be correct in thinking that it should not be up to the victim alone to combat this abuse; moderation implemented by social media platforms, de-platforming (or the removal of accounts from social media networks that continuously breach terms of service by inciting violence and abuse), and policy changes regarding online abuse would all help to counter the notion that online anonymity permits consequence-free action. Regurgitating the empty advice that engagement is encouragement is a prettified way of saying that the quiet suffering of digital despoliation is the price women should pay to access the internet. A decade of this unsolicited advice led to me launching Cummunity Standards, a pictorial callout blog highlighting online abuse by transposing real degrading DMs over the bodies of those on the receiving end.

In March 2020, at the start of the first coronavirus lockdown in the United Kingdom, I found myself gravitating to the internet more than usual to fill my idle time. I was disappointed but unsurprised to find a significant influx of abusive messages as the world ground to a halt. The content of these messages was nothing new, but Instagram’s response to the problem had taken an insidious turn. Every report of abuse was suddenly met with this message: ‘We couldn’t review your report. We have fewer people available to review reports because of the coronavirus (Covid-19) outbreak, so we’re only able to review content with the most potential for harm’. The first report that received this response was for a DM from a user threatening to find me, rape me and murder me. It would seem ‘most potential for harm’ is subjective.

I launched Cummunity Standards as a response to that message, exhausted from years of relentless online abuse, DMs saturated with angry purple penises and ‘baby-you’re-so-beautiful-send-me-nudes-well-fuck-you-too-you-fat-frigid-bitch’ love letters.
It was my hope that by putting those messages over my likeness, I would be able to humanise the abuse. These words may have been typed out carelessly by men I had never met and catapulted through cyberspace, but they were aimed at a living, breathing human being who, for a myriad of contrived reasons, had earned the rage of strangers.

The response was swift. I quickly became inundated with messages from other victims. Some sent me screenshots, others asked me for
advice, others still thanked me for drawing attention to a problem they had been weathering alone. I also received many messages from male followers apologising ‘on behalf of all men’ but reminding me that ‘we aren’t all like that’ and that I would be much better off ‘blocking and moving on’. Some followers asked that I find some self-respect, as images with ‘U PORN FILTHY WHIRE’ (a direct quote from one user’s furious attempt to label me a whore) emblazoned on them had made them uncomfortable. ‘I am a fan of your work’, one Instagram user wrote, ‘but I really wish you wouldn’t get so political’.

As a woman using the internet as a primary medium for both the promotion and distribution of much of my work, the idea that any part of my online presence could be considered ‘too political’ feels oxymoronic. The very act of taking up digital space and producing work that often involves the demystification of a body that society has insisted on objectifying and denigrating is subversive, with the political and the personal becoming inextricable. While Cummunity Standards relies on a certain level of shock value at first glance, advocating against online violence is not an act of political posturing, but one of survival and resistance.

Cummunity Standards’ reach quickly surpassed any of my expectations. In May 2020 I was invited to write about the project for the Association for Progressive Communications (APC); this quickly gained traction. By July, my research had found its way to the United Nations in a special report by the World Wide Web Foundation on the increase of online gendered violence during the COVID-19 pandemic, following a call for submissions by Dr. Dubravka Šimonovic, United Nations Special Rapporteur on violence against women, its causes and consequences (Brudvig et al., 2020).
Less than six months after a snap decision to create a blog in protest of the decrease in moderation in the face of a pandemic, proof of my abuse was in the hands of organisations with the potential to enforce real world change. It would seem that engagement has the power to do more than just sate the appetites of abusers.

Instagram’s automated message informing me that death and rape threats did not qualify as ‘most potential for harm’ was no accident. Big tech companies such as Google, Facebook, Microsoft and Twitter openly
prioritised the moderation of misinformation surrounding the COVID-19 pandemic on their platforms (Rivera, 2020). As a result, many social media platforms found themselves delegating much of their moderation during the pandemic to artificial intelligence (AI). Facebook’s December 2020 COVID-19 Report acknowledged that some users may find that their reports are not handled swiftly, or perhaps at all, noting that some users may ‘feel the impact’ of this decision (Jin, 2020). In a world that is more interconnected than at any other time in history, the fight against the propagation of fake news is important, especially when the real world consequences within a pandemic could be deadly, but it is disingenuous and frankly dangerous to downplay or outright ignore the abuse of marginalised internet users in the process.

For those who have not experienced online gendered violence, the jump from a model-run blog to the United Nations may seem extreme. It is this exact downplaying of online violence, and ignorance of its real world implications, that is allowing online violence to flourish, especially within the confines of a global pandemic. A 2020 U Report survey (administered by UNICEF) of young people’s experiences of online harassment found that 52% of surveyed young women had experienced direct online harassment, with 87% of responding women and girls reporting that harassment is increasing. Fifty-one percent reported that their experiences of online harassment directly impacted their physical and/or emotional wellbeing (U Report, 2020). I will expand upon the real world impacts of online gendered violence later in this chapter, but if ‘most potential for harm’ included the lives lost in direct connection to online gendered violence, perhaps I could have afforded, as my followers often advise, not to get ‘so political’.
Cancel Culture is Not a Substitute for Active Policy
Eron Gjoni published a blog post in 2014 claiming that his ex-girlfriend, independent games developer Zoë Quinn, had performed sexual acts with games reviewer Nathan Grayson in exchange for a favourable review of her game Depression Quest on the website Kotaku. These
claims were investigated by Kotaku and dismissed, with the investigation establishing that Grayson had never reviewed Quinn’s game, but by this time the damage was done. The controversy, dubbed ‘Gamergate’, went viral and Quinn found herself inundated with online abuse. As news spread, feminist media critic Anita Sarkeesian was similarly inundated with abusive messages, resulting in the cancellation of a scheduled speech at Utah State University after one internet user threatened to carry out a violent attack on campus if Sarkeesian was permitted to speak. These coordinated online attacks, known today as ‘brigading’, utilised sock puppet accounts (multiple fake accounts created by one user for the sole purpose of mass harassment), culminating in two million tweets using the hashtag #gamergate within two months (Bates, 2020).

Brigading has become a common tactic. In 2013, British feminist writer Caroline Criado-Perez’s Twitter account was inundated with threats of violence following her campaign to have a female figurehead other than the sitting Queen appear on British currency (Hess, 2014). Her abuse went viral, resulting in British police communications advisor Andy Trotter going on record saying that social media platforms should handle such abuse, as investigating online Twitter threats would be ‘too difficult for a hard-pressed police service’. Trotter further insisted that if the police force dedicated time to investigating online harassment it would divert valuable time from unspecified ‘other things’ (Dodd & Martinson, 2013). Twitter rolled out a more accessible report abuse feature in response to the controversy, but insisted that if the abuse made the receiver genuinely fear for their safety they should directly contact the police (Hess, 2014).
Tomas Chamorro-Premuzic, a professor of business psychology at University College London (UCL), says that the best method of handling online abuse is to disengage and report misconduct to the supporting platform, stating that it is much easier to track online than offline abuse, even with the veil of anonymity that many platforms afford their users (Chamorro-Premuzic, 2014). This is good advice on paper, and offers a theoretical framework in which victims could receive real-life justice in an ever-evolving abstract world. In practice, victims often find themselves in stasis. As seen at the start of the COVID-19 pandemic, tech giants Instagram and Facebook chose to shift much of their moderation
load to AI, leaving room for error, or for reports of abuse to be ignored entirely. Within the police force, attitudes range from Trotter’s insistence that there is simply too much abuse for them to tackle (and that the responsibility should be shouldered by the platforms), to officers with such limited knowledge of the internet that they ask what Twitter is, a question feminist writer Laura Bates, in her book Men Who Hate Women, recalls being asked when she tried to report her own experiences of abuse (Bates, 2020).

Revenge pornography, or images of a sexual or intimate nature that are distributed without consent, has become a pressing issue both socially and legally in the twenty-first century. Section 33 of the Criminal Justice and Courts Act 2015 made revenge pornography a criminal offence in the United Kingdom, carrying a maximum jail sentence of two years (Bond & Tyrrell, 2021). From April to December 2015, a total of 1160 revenge pornography cases were formally reported to UK police; 11% of these cases were prosecuted (Sherlock, 2016). Education of the police officers investigating and prosecuting these crimes is essential to the effective implementation of revenge pornography laws. A 2017 national survey funded by the Police Knowledge Fund through the College of Policing and HEFCE found that of the 783 law enforcement professionals surveyed, only 1.2% reported that they possessed an ‘excellent understanding’ of revenge pornography (Bond & Tyrrell, 2021). These gaps in law enforcement’s concrete education in and comprehension of revenge pornography are not the only hurdles victims and advocates must navigate when seeking justice for online abuse. One must also critically examine the culture and views espoused by the public servants elected to help establish such laws in the first place.
In 2015, Matt Gaetz (R-Florida), then a member of the Florida House of Representatives, was one of only two sitting state representatives to vote against a bill seeking to outlaw the distribution of non-consensual intimate images, allegedly stating that ‘any picture was his to use as he wanted to, as an expression of his rights’. Mr. Gaetz currently stands accused of engaging in sexual acts with, and trafficking of, a minor across state lines, and of showing non-consensual intimate imagery to his fellow lawmakers on the House floor (Shammas, 2021). The creation and implementation of the law is only as effective as those with the power to exercise it.
With #MeToo becoming a mainstream hashtag and constant media coverage of ‘cancel culture’, or the idea that social media holds the power to ostracise certain individuals in the court of public opinion, I can understand how some may think the world is changing in favour of victims. Some might say that Cummunity Standards is a by-product of cancel culture, despite only one user having had their social media removed to date (a man remaining in a position of power whom I discuss later).

There have certainly been cases of real world justice in response to gendered cybercrimes. There are records as early as 1999 of criminal charges against cyber abusers, with California resident Gary S. Dellapenta being charged with cyberstalking before the turn of the new millennium (Miller & Maharaj, 1999). Steven King was sentenced to eight weeks in prison and twelve months suspended after sending Labour MP Angela Eagle a message telling her to ‘Leave the UK…or die’ (Bates, 2020). After public outcry surrounding Criado-Perez’s case, two individuals, Isabella Sorley and John Nimmo, pleaded guilty to ‘sending by means of a public electronic communications network messages which were menacing in character’ in violation of Section 127(1)(a) of the Communications Act 2003 (BBC, 2014). They were jailed for 12 and 8 weeks respectively (Guardian, 2014). When asked about her feelings on one of her harassers being a woman, Criado-Perez said, ‘I don’t see why we should think that women who are brought up in a society steeped in misogyny should be any less affected by it and any less likely to hate women’ (BBC, 2014).

These prosecutions are few and far between, and require both objectivity and a base level of education about internet culture on the part of the prosecution. To better understand the challenges of successfully prosecuting online gendered violence, it is important to critically examine society’s response to gendered violence as a whole.
The United Kingdom saw a 173% increase in reports of sexual assault from 2014 to 2018. Despite this increase, only 19% of all reports were referred onward for prosecution, with prosecutions by the Crown Prosecution Service (CPS) dropping by 44% (Barr & Bowcott, 2019). I don’t know what ‘other things’ Trotter fears the prosecution of cybercrime will keep police from, but I would bet that gendered abuse isn’t high on the priority list.
If platforms will not properly moderate, and if police argue that it is too much for them to manage, what can victims of online abuse do? Perhaps we should take this harassment as a sign of change, as Slate editor Hanna Rosin argues in her book The End of Men: And the Rise of Women. In a follow-up op-ed on her book, Rosin argues that the rise of prominent female online influence gives those on the receiving end of online abuse a platform to ‘gleefully skewer the responsible sexist in one of many available online outlets, and get results’ (Rosin, 2013, n.p.). There have been activists who have posted up screenshots of abuse, sent proof to wives and bosses and mothers; while these actions make for juicy viral Facebook posts, the reality is that even when confronted with proof of online misconduct, many abusers still walk away unscathed.

As a woman working in entertainment, I found the #MeToo movement both empowering and exhausting. I found myself inundated with male colleagues and friends expressing shock over stories shared by the women in their lives. I vividly remember one casting director talking himself in circles: could I believe that this was happening? Had it ever happened to me? It was awful, sure, but couldn’t a lot of these women just have said no? I also remember the sense of cocksureness from both colleagues and certain sectors of the media that the arrests and professional exiles of prominent men such as Harvey Weinstein and Bill Cosby signalled an end to an era of lawless misogyny and professional predation. I spent an evening at a Netflix event downing cocktails to burn out the memories of the photographer who had exposed his genitals to me on set earlier that week, smiling and nodding in conversations with generic Hollywood strangers who all seemed very keen for their performative wokeness to drown out victims waxing rhapsodic.
I fully understand how those who are not on the constant receiving end of abuse may see some headlines of powerful men being taken down and take it as a sign of genuine change. I can understand why Chamorro-Premuzic is suggesting that reporting abuse, both to supporting platforms and to law enforcement, is a healthy and proactive way forward. I even understand how some Men’s Rights Activists (MRAs) can get swept up in the fearmongering within the #MeToo era, certain that a misinterpreted text or missed signal at work could render them destitute. In the 2012–2013
news cycle, the UK’s Daily Mail used the phrase ‘cried rape’ in 54 headlines: shocking language that directly feeds into the idea that women who speak up against sexual violence are attention-seeking liars (Plank, 2013). An English CPS review reported that nationwide, 35 prosecutions went forward for false allegations of sexual assault: 19 fewer than the Daily Mail’s flippant headline count, and a mere 0.6% of over 5600 prosecutions for rape in the same period (CPS, 2013; Starmer, 2013). It is important to note that research estimates only 2 to 8% of rape allegations are false, in the same league as false accusations of theft, yet there is a permeating narrative in certain circles, egged on by media outlets relying on shock value and fearmongering to drive up clicks, that women are nonchalantly lying about sex crimes at record levels (Plank, 2013). This blatantly ignores the fact that in any given year, approximately 85,000 women and 12,000 men in England and Wales will be sexually assaulted, a rate of 11 assaults per hour (Ministry of Justice, Home Office, & Office of National Statistics, 2013). Convictions for rape are also significantly lower than for other crimes in the United Kingdom, with approximately 5.6% of all reported cases ending in prosecution (Ministry of Justice, Home Office, & Office of National Statistics, 2013).

It is important to take these statistics of real world gendered crimes into account when discussing how best to deal with those in the digital sphere. We cannot expect online gendered violence to be appropriately addressed when institutional justice still eludes the vast majority of victims of physical sexual harms, and when the very social frameworks that gave birth to the internet remain steeped in misogyny.

In 2019, I was cyberstalked by a producer at a major commercial broadcast television and radio network. This individual approached me via my website claiming to be interested in hiring me for an upcoming show.
He invited me to the studio for lunch to discuss the project. At the last minute he claimed that he would prefer that I meet him at his private residence in the Hollywood Hills instead. It quickly became apparent that he wanted to pay me for sexual services ($500 for oral sex, to be precise). This communication was repeated over several months across numerous different platforms. I changed my phone number, but like many modern artists, I rely on social media as my main form of professional networking. Making my social media presence private or removing
30
M. Barbour
it altogether would have drastically impacted my income. This individual finally contacted me on his private Instagram account, which clearly linked him to the network. After consulting with a few trusted women in the industry, I contacted the network and the Directors Guild of America (DGA) with physical proof of his ongoing cyberstalking and sexual solicitation. I was assured that the matter would be looked into. At the time of writing, this individual is still employed on his show and is still a member of the DGA. His private Instagram has been removed but, to my knowledge, Instagram never reviewed the messages of his that I reported.

I admire Rosin’s optimism that misogynists will now be ‘skewered’ for their actions, but experience tells a different tale. I spoke to several members of the media to see if there was anything that could be done to name this individual, but these things often require multiple accusers who are willing to go on the record with their experiences, an act that puts their livelihoods and personal safety at risk. ‘I knew exactly who you were talking about before you named him’, one PA on my cyberstalker’s show wrote to me. She was incredibly empathetic, but fearful of going on record. I hold no animosity towards her. The system is broken, and sugar-coating it fixes nothing. When speaking to reporters such as Mo Ryan and Daniel Holloway, both of whom have dedicated much of their careers to trying to make the creative industries safer for women, I received sympathy and offers of help; but the reality is that, without a mass push of other victims willing to come forward on the record, the media is often left tied up navigating threats of libel. By stuffing cotton in our ears and insisting that if victims speak up perpetrators will be ‘cancelled’, without any action towards policy change, we ignore the problem in favour of a comfortable, but ultimately false, narrative.
By insinuating that online conduct is not indicative of real-world behaviour, we intentionally overlook the microcosms nurtured in digital spaces that are breeding real-world violence. Harvey Weinstein may have gone to prison (Bekiempis, 2021), but Donald Trump, arguably a pioneer among political figureheads in using social media to campaign and directly address their followers, was elected President of the United States in 2016, despite numerous sexual assault allegations, sexually demeaning tweets and recorded comments condoning sexual assault (Muller & Arneson, 2019). Men like Louis C.K. who were
2 [Cum]munity Standards: Resisting Online Sexual …
31
‘cancelled’ online have found themselves bouncing back to comfortable careers. The digital and physical worlds are becoming increasingly intertwined, and writing off online misconduct as abstractly separate from society is disingenuous. I have heard worries expressed over the power of cancel culture and the risk of innocents being cancelled by emotional mobs, but this seems to overlook the very real pattern of abusers in positions of power who have angry op-eds written about them and impassioned hashtags hurled at them, and who still manage to comfortably return to power once the storm has passed. Online abuse is like a hydra: you cut off one abuser, and two dick pics and rape threats grow back in its place.
The Transference of Shitposting to IRL Violence

Online harassment has real-world impacts. In this chapter, I have shown how harassment can cause severe mental distress and how abusers rarely face real-world consequences. In some instances, the failure to crack down on online gendered abuse has been deadly. On 23 May 2014, 22-year-old Elliot Rodger opened fire on students and faculty near the UC Santa Barbara campus, killing six and injuring 14 before ending his own life. Prior to the mass killing, Rodger had uploaded a video to YouTube titled ‘Elliot Rodger’s Retribution’ and emailed a 107,000-word manifesto entitled My Twisted World to 34 people, including his therapist Charles Sophy (Bates, 2020). In the video and the manifesto, both of which are accessible on the front page of a Google search, Rodger states that the motivation behind the attacks was misogyny, citing his inability to get a girlfriend and insisting that if women would not sleep with him, they deserved to die. Rodger has been valorised by parts of the fringe internet community of incels (‘involuntary celibates’), which ranges from users on forums such as Reddit, 4chan and Incels.net commiserating over their inability to achieve intimacy, to users encouraging rape and demanding government-mandated sex slaves as retribution for their suffering. While the majority of incels will certainly not follow in Rodger’s footsteps of real-world violence, his actions have cemented his place as a hero and martyr within the community, being dubbed ‘The
Supreme Gentleman’ (Branson-Potts & Winton, 2018) by his supporters (see also Chapter 18, this volume). Chris Harper-Mercer left ten dead in 2015, including himself and one victim who died later from injuries, after forcing students at Umpqua Community College to stand in the middle of the classroom to be executed. Harper-Mercer left behind a manifesto similar to Rodger’s, one that refers to Rodger as an ‘elite who stood amongst the gods’ (Anderson, 2017). In 2018, Alek Minassian killed 10 pedestrians and injured 16 in the deadliest mass murder attack in the history of Toronto, Canada. Prior to the attack, Minassian posted publicly on Facebook: ‘The Incel rebellion has already begun! We will overthrow the Chads and Stacys! All hail the Supreme Gentleman Elliot Rodger!’ (Ma, 2018). ‘Chad’ and ‘Stacy’ are names used in the incel community to refer to men who are able to get frequent sexual favours (Chads) and the superficially attractive women who provide them (Stacys). Minassian told authorities that he was radicalised online around the time of Rodger’s attack, and hoped that his actions would inspire others to ‘rise up’ in the future (see also Chapter 18, this volume).

On 3 March 2021, 33-year-old Sarah Everard was kidnapped and murdered while walking home in London by London Metropolitan Police officer Wayne Couzens (Evans, 2021), causing #TextMeWhenYoureHome to go viral, with women around the world documenting the lengths they take to stay safe while navigating the world as a female-presenting person (Onibada, 2021). Couzens had been accused of, and investigated for, indecent exposure on 28 February 2021, less than a week prior to Everard’s abduction and murder (Dodd & Rawlinson, 2021).
While Couzens’ motivations for the murder remain unclear at the time of writing, Everard’s death has sparked division in the United Kingdom, with some internet users and law enforcement officers stating that women should not go out alone after dark (Cruse, 2021), and some users going so far as to blame Everard for being out walking at all. This led Green peer Baroness Jenny Jones to satirically recommend that a curfew be placed on men, which was met with outrage from some prominent male talking heads, with Nigel Farage calling it ‘deranged’ and Welsh First Minister Mark Drakeford saying that Jones’
statement was ‘a sad distraction when what’s needed is a proper discussion about women’s safety and why a woman is killed every three days by a man in the UK’ (Mahdawi, 2021). I would perhaps have more time for Drakeford’s serious response to a satirised solution if it did not once again deflect from the commonplace culture of misogyny and place the onus of safety squarely back on women’s shoulders.

Six days after Everard’s body was discovered in Kent, 21-year-old Robert Aaron Long opened fire at three Atlanta massage parlours, killing eight people. Long survived the attacks and would later tell authorities that he targeted the massage parlours due to his ‘sex addiction’, claiming that he viewed them as ‘temptations that he wanted to eliminate’. The attacks have sparked online debate within the United States over the nature of Long’s attacks and whether they could be classified as hate crimes, as the majority of his victims were women of Asian descent. State Representative Bee Nguyen (D-Georgia) called the attacks an ‘intersection of gender-based violence, misogyny, and xenophobia’ (Brumback & Wang, 2021).

This violence permeates beyond the fringe of incel culture and into the general manosphere, the portion of the internet dedicated expressly to misogynistic views. Despite these ideologies being spread with ease on the internet, despite men like Minassian openly saying they were ‘radicalised’, and despite these attacks falling under the umbrella definition of terrorism, the US Extremist Crime Database (ECDB) does not categorise misogynistic attacks as acts of terrorism, including only ‘violent or financial crimes committed by one or more suspects who adhered to a far-right, Al Qaeda-inspired, or extremist animal/environmental belief system’ (Bates, 2020). It would seem that even IRL, gendered violence does not fall under the category of ‘most potential for harm’.
Violence is intersectional, and to try to segregate gendered attacks from those motivated by race or creed is not pragmatic. It is dangerous to ignore the misogynistic undercurrents in real-world violence and to demurely refer to such attacks as anything other than terrorism. The hatred of women and marginalised groups runs deep on the internet and is perpetrated by people from all walks of life, as demonstrated by the wide range of participants in the 6 January 2021 attack on the United States Capitol, an attack organised online and encouraged by the then-sitting
President. Following Donald Trump’s mass de-platforming in response to the incited sedition, supporters flocked to forums such as TheDonald.win to express their anger, with one user saying that supporters who are ‘good at pulling, need to start targeting politicians and big techs’ wives or girlfriends and fuck them’. Even in an attack that allegedly was not driven by misogyny, when it came to talk of retribution, the hatred of women was damning.
Conclusion

Resistance is not futile, but it certainly is exhausting. The internet has evolved into a pillar of modern-day communication, and we owe it to ourselves to fight for a more equitable and safe digital space. I am forever indebted to the internet. Social media is how I have been able to frequently communicate with my family during a global pandemic, despite sheltering thousands of miles away. The internet has allowed me to see more of the world than I ever could have dreamed of. It has afforded me a career, an education and a platform to speak my mind. It is how I was extended the incredible opportunity to share my story with you in the book you are currently reading, a literal representation of transference from the digital to the physical sphere. The internet is also where my mere presence has resulted in cyberstalking, threats of bodily harm and uninvited sexual imagery. It is, in every aspect, a brutally honest reflection of our society. The internet has not invited more abuse; it has simply empowered abusers to exploit existing power dynamics from the comfort of their living room couch or office break room. We owe it to ourselves, and to the coming generations, to continue to do the work to create a safer world, a world that will now forever transcend the physical. While writing this chapter, my sister gave birth to my niece. I want nothing more than to shield her from the onslaught of wilful exploitation I grew up with; and if shielding is not possible (an idealistic approach if ever there was one), then I will type my fingers raw to the bone to create a digital space that will properly moderate and prosecute violence, one that understands that the freedom of anonymity does not soften the lasting blows of abuse. It
is unconscionable to expect marginalised groups to navigate the violent stream of consciousness perpetuated by internet culture in exchange for access to social media. I would love to be able to create work that didn’t involve writing ‘DIE WHORE’ over my body in protest; to inhabit a world where the images I have made to highlight my abuse do not run a higher risk of being pulled down than the actual messages that inspired them. Cummunity Standards has taught me that by being loud and unapologetic, people will listen. The efficacy of gendered violence relies on an exploitation of power imbalance. By empowering victims to choose how best to handle their own online abuse instead of berating them, we stand a fighting chance of tipping the scales. Witnessing abuse second-hand is uncomfortable, and I recognise that some users may prefer not to have their newsfeeds of cat videos and Bernie Sanders memes disrupted by screenshots calling a woman a ‘fucking worthless disgusting nasty fucking disease pig bitch’. I ask that you look beyond your own discomfort and think about how uncomfortable, beaten down and afraid the receivers of such messages must be, and how taxing it is to be expected to smile and carry on if they wish to participate as digital citizens. When women are pushed off social media by online abuse, they are actively denied the necessary communication tools of our modern world, their agency and identity stripped away in the name of self-preservation. I implore you to join the fight. Speak up and demand better moderation. Acknowledge that things said under the guise of anonymity are often truthful and transparent portrayals of self. Recognise how online abuse facilitates real-world violence, and the lasting psychological and professional effects on victims on- and offline.
I promise you that those cat videos will be all the more enjoyable in a world where 12-year-old girls are not put in the position of being called cunts and blocked for their failure to send pussy photos to strange shadowy men; or, at the very least, in a world where those men are not able to freely send such messages without retribution or rebuke.

Acknowledgements

Many thanks to the Association for Progressive Communications for believing in my work. To the World Wide Web Foundation for including Cummunity Standards in their 2020 report Covid-19 and increasing
domestic violence against women: The pandemic of online gender-based violence. To Dr. Dubravka Šimonović, United Nations Special Rapporteur on violence against women, its causes and consequences, for listening and taking my experiences and research into account. And to my family, friends, housemates and kind internet strangers for sending me cat videos and Bernie Sanders memes when I needed a mental break, and for being active disrupters by helping spread Cummunity Standards far and wide.
References

Amnesty International. (2018). Why Twitter is a toxic place for women. https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1/
Anderson, R. (2017, September 24). ‘Here I am, 26, with no friends, no job, no girlfriend’: Shooter’s manifesto offers clues to 2015 Oregon college rampage. https://www.latimes.com/nation/la-na-school-shootings-2017-story.html
Barr, C., & Bowcott, O. (2019). Half of rape victims drop out of cases even after suspect is identified. https://www.theguardian.com/society/2019/nov/10/half-of-victims-drop-out-of-cases-even-after-suspect-is-identified
Bates, L. (2020). Men who hate women: From incels to pickup artists: The truth about extreme misogyny and how it affects us all. Simon & Schuster UK.
BBC. (2014). Two guilty over abusive tweets to Caroline Criado-Perez. https://www.bbc.co.uk/news/uk-25641941
Bekiempis, V. (2021). One year later, what is going on with Harvey Weinstein? https://www.vulture.com/2021/03/one-year-since-harvey-weinstein-rape-sentence.html
Bond, E., & Tyrrell, K. (2021). Understanding revenge pornography: A national survey of police officers and staff in England and Wales. Journal of Interpersonal Violence, 36(5–6), 2166–2181. https://doi.org/10.1177/0886260518760011
Branson-Potts, H., & Winton, R. (2018, April 26). How Elliot Rodger went from misfit mass murderer to ‘saint’ for group of misogynists—and suspected Toronto killer. https://www.latimes.com/local/lanow/la-me-ln-elliot-rodger-incel-20180426-story.html
Brudvig, I., Chair, C., & Van der Wilk, A. (2020). Covid-19 and increasing domestic violence against women: The pandemic of online gender-based violence. http://webfoundation.org/docs/2020/07/WWWF-Submission-COVID-19-and-the-increase-of-domestic-violence-against-women-1.pdf
Brumback, K., & Wang, A. (2021). Man charged with killing 8 people at Georgia massage parlours. https://apnews.com/article/georgia-massage-parlor-shootings-leave-8-dead-f3841a8e0215d3ab3d1f23d489b7af81
Chamorro-Premuzic, T. (2014). Behind the online comments: The psychology of internet trolls. https://www.theguardian.com/media-network/media-network-blog/2014/sep/18/psychology-internet-trolls-pewdiepie-youtube-mary-beard
CPS. (2013). Violence against women and girls crime report, 2012–2013. Crown Prosecution Service (CPS). https://www.cps.gov.uk/sites/default/files/documents/publications/cps_vawg_report_2013.pdf
Cruse, E. (2021). Anger after police ‘tell women not to go out alone’ in wake of Sarah Everard’s disappearance. https://www.mylondon.news/news/south-london-news/anger-after-police-tell-women-20047944
Dodd, V., & Martinson, J. (2013). Police chief challenges social media firms to tackle online abuse. https://www.theguardian.com/media/2013/jul/29/police-chief-social-media-abuse
Dodd, V., & Rawlinson, K. (2021). Sarah Everard suspect: Met faces inquiry over indecent exposure claim. https://www.theguardian.com/uk-news/2021/mar/11/sarah-everard-suspect-met-accused-failures-alleged-indecent-exposure
Evans, M. (2021). Wayne Couzens to go to trial accused of murdering Sarah Everard in October. https://www.telegraph.co.uk/news/2021/03/16/wayne-couzens-go-trial-accused-murdering-sarah-everard-october/
The Guardian. (2014). Two jailed for Twitter abuse of feminist campaigner. https://www.theguardian.com/uk-news/2014/jan/24/two-jailed-twitter-abuse-feminist-campaigner
Hess, A. (2014). The next civil rights issue: Why women aren’t welcome on the internet. https://psmag.com/social-justice/women-arent-welcome-internet-72170
Hoffman, C. (2018). The battle for Facebook. https://www.rollingstone.com/culture/culture-news/the-battle-for-facebook-242989/
Idemudia, E., Raisinghani, M., Adeola, O., & Achebo, N. (2017). The effects of gender on the adoption of social media: An empirical investigation.
Jin, K. (2020). Keeping people safe and informed about the coronavirus. https://about.fb.com/news/2020/12/coronavirus/
Ma, A. (2018). The Toronto van attack suspect warned of an ‘incel rebellion’ on Facebook hours before the attack—Here’s what that means. https://www.businessinsider.com/incel-alex-minassian-toronto-van-attack-facebook-post-2018-4?r=US&IR=T
Mahdawi, A. (2021). Angry at the idea of a curfew for men? Think of all the ways women are told to adapt. https://www.theguardian.com/commentisfree/2021/mar/13/men-curfew-sarah-everard-women-adapt-violence
Miller, G., & Maharaj, D. (1999). N. Hollywood man charged in 1st cyber-stalking case. https://www.latimes.com/archives/la-xpm-1999-jan-22-mn-523-story.html
Ministry of Justice, Home Office, & Office of National Statistics. (2013). An overview of sexual offending in England & Wales. https://webarchive.nationalarchives.gov.uk/20160106113426/http://www.ons.gov.uk/ons/rel/crime-stats/an-overview-of-sexual-offending-in-england---wales/december-2012/index.html
Muller, M. G., & Arneson, K. (2019). A timeline of Donald Trump’s inappropriate history with women. https://www.glamour.com/story/a-history-timeline-of-donald-trump-sexual-assault
Onibada, A. (2021). A viral post saying “Text me when you get home” has women sharing how they feel unsafe after Sarah Everard’s death. https://www.buzzfeednews.com/article/adeonibada/text-me-sarah-everard-womens-safety
Plank, E. (2013). The Daily Mail used the term “cried rape” in 54 headlines in the last year. https://www.mic.com/articles/32159/the-daily-mail-used-the-term-cried-rape-in-54-headlines-in-the-last-year
Rivera, J. (2020). Amid coronavirus, Google, Microsoft, Facebook, Twitter and other tech companies ask for help to fight ‘misinformation’. https://eu.usatoday.com/story/tech/2020/03/16/coronavirus-tech-google-microsoft-facebook-misinformation/5064880002/
Rosin, H. (2012). The end of men: And the rise of women. Riverhead Books.
Rosin, H. (2013). Feminists, accept it. The patriarchy is dead. https://slate.com/human-interest/2013/09/the-end-of-men-why-feminists-wont-accept-that-things-are-looking-up-for-women.html
Shammas, B. (2021). Gaetz fought revenge porn bill, saying ex-lovers can use photos as they see fit, sponsor says. https://www.washingtonpost.com/politics/2021/04/06/matt-gaetz-revenge-porn-bill/
Sherlock, P. (2016). Revenge pornography victims as young as 11, investigation finds. http://www.bbc.co.uk/news/uk-england-36054273
Starmer, K. (2013). False allegations of rape and domestic violence are few and far between. https://www.theguardian.com/commentisfree/2013/mar/13/false-allegations-rape-domestic-violence-rare
U Report. (2020). Internet safety day—Online harassment. https://ureport.in/opinion/3983/
3 Legal Possibilities and Criminalised Population Groups: A Personal Experience of an Indigenous Woman in the Sex Trade

Naomi Sayers
Introduction

On 7 July 2014, I arrived at the door of Canada’s Parliamentary Committee on Justice and Human Rights, waiting to be let in. It was the first day of the hearings that would occur that week on the introduction of legislation to re-criminalise certain aspects of prostitution in Canada. The then-bill, now law, was introduced within seven months of the seminal decision from Canada’s highest court, Canada (Attorney General) v Bedford (McLachlin, 2013).

A note on terminology: I recognise that the term sex work is often used in advocacy circles to destigmatise ideas about selling and trading sexual services; however, for Indigenous women, oftentimes, sex work is not work for them. As such, I use the term sex trade to acknowledge these nuanced experiences. I may sometimes refer to sex work, but only in relation to my own experiences; when speaking more broadly of Indigenous women’s experiences, I refer to experiences in the sex trade.

N. Sayers (B) Sault Ste. Marie, ON, Canada e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_3

When I set out to give my testimony, I was speaking about experiences often ignored and erased: those of Indigenous women selling and trading sexual services in regions outside of large city centres (Sayers, 2014). I have worked across Canada and in the United States, selling and trading sexual services, beginning at the age of 18. I was the only First Nations woman opposing the legislation throughout these proceedings. After I gave my testimony, I celebrated by ordering a martini from a nearby restaurant. On my way to the airport, I reflected on what had just happened: thrust into the national media, with little to no guidance on managing media relations, and the only First Nations woman publicly opposing this legislation nationally under my full legal name. I arrived at the airport to a Facebook notification from an anti-sex worker. I didn’t respond, and I blocked the account. By the end of summer, naked photos of my sex working identity had been linked to my legal identity. The anti-sex workers began to share these photos, suggesting that because my naked photos were public, I couldn’t really be afforded any right to privacy. I couldn’t make the sharing of the photos stop. I felt isolated and, most of all, I felt alone.

This chapter examines the tort of invasion of privacy in the Canadian context, with a comparative analysis of other jurisdictions within and outside of Canada. For clarity, in Canada, statutory laws are made at regional, provincial and national levels. Only four provinces in Canada have enacted the invasion of privacy as a statutory tort, as opposed to recognising it at common law (British Columbia, 1996; Manitoba, 2008; Newfoundland & Labrador, 1990; Saskatchewan, 1978). Disagreements similarly exist in Australia, though New Zealand offers a desirable approach (Australian Law Reform Commission, 2007; Law Commission, 2009; Whata, 2012; Gault, 2004; Moreham, 2016).
There is also little agreement about where such a tort should lie, whether in statute or in common law (Australian Law Reform Commission, 2007; Law Commission, 2011). This chapter will focus on the test as it has developed in the common law, given the legislative irregularities within Canada and elsewhere.
Legal Silences and ‘Talking Back’

While there are jurisdictional differences in the application and scope of the invasion of privacy, there are equally both limitations and possibilities in what the tort provides and prohibits, especially in the lives of vulnerable and marginalised population groups (see hooks, 1984 for more discussion on possibilities). Adopting an Indigenous autoethnographic approach (Whitinui, 2014) in challenging legal responses to the right to privacy among sex trade workers, I talk back to moments of silence in surviving technology-facilitated violence, namely image-based sexual violence. An Indigenous autoethnography is one that reconnects with others and the world they live in (Whitinui, 2014). I seek to connect with others through these stories. Yet this talking back is also staking my place in this realm of technology-facilitated violence scholarship, similar to bell hooks’ Talking Back: Thinking Feminist, Thinking Black (1989) and her approach to feminist theory. hooks’ (1989) framing in Talking Back is critical to this discussion because, for Indigenous women with experiences similar to my own of selling and trading sexual services, our experiences are often taken up by policy or legislative reforms to advocate for positions that criminalise or recriminalise our lived experiences, without interrogating how these positions are part of the problem; that problem being that legal or legislative approaches to Indigenous women’s experiences consume our experiences for the law’s own benefit. We are a means to an end. Yet we are rarely listened to when our stories diverge or differ from the mainstream approach. We are expected to be silent while we are silenced. In applying such a framework, I recognise the privileged spaces I occupy as an Indigenous woman lawyer with sex working experience.
I use this privilege to invite others to question their assumptions in theorising about technology-facilitated violence, calling attention to the moments or periods of silence in our stories, in our lives and also in the law. Sex workers are often made invisible in discussions around violence and labour, including in the digital space (Rand, 2019). While many organisations make assumptions about those in the sex trade, I invite the reader to challenge their own biases and assumptions, and to see how the law can be used to facilitate digital and social justice for those often
erased and forgotten in discussions about technology-facilitated violence and harassment.

There are no agreed-upon terms for describing technology-facilitated violence (see Henry et al., 2020 for an overview of approaches to technology-facilitated violence). The focus tends to fall on the actors inflicting the violence, as between intimate partners, and, as acknowledged by others, there is a hyper-focus on children and young people (Henry et al., 2020). This is not to say that this latter group does not matter, but this hyper-focus creates difficulties for adult women seeking remedies at law to protect their personhood (Powell & Henry, 2018). For Indigenous women who are also in the sex trade (and with similar experiences to mine), we are often described as deserving of the violence we experience (Holmes et al., 2014; National Inquiry into Missing and Murdered Indigenous Women and Girls, 2019; see also Patel & Roesch, 2020 for a discussion of who gets to report this kind of violence). To date, there is limited discussion of image-based sexual violence against sex workers as a means to coerce, control or engage in other forms of violence (Henry & Powell, 2015). I seek to fill this gap. I am not the only one talking back to these silences, however; rather, I seek to create a space for, and an acknowledgement of, the ways sex workers are often silenced, including through technology-facilitated violence.

Silence may not, in and of itself, be violence; rather, silence can play an active role in creating opportunities for violence. Gaps or uncertainties in the law operate in the same way as silence. More to the point, technology-facilitated violence operates in the same way, relying on these gaps to create opportunities for violence. Such gaps include the non-human role that helps facilitate technology-facilitated violence, which becomes algorithmically amplified (Henry et al., 2020).
Technology-facilitated violence is elevated from the human hand to a non-human component (Henry et al., 2020; see also Powell et al., 2019). However, technology-facilitated violence is neither non-human nor without human action; it is very intentional. Technology-facilitated violence can, then, involve both human and non-human action. The non-human action is how such violence manifests itself, exploiting this silence in intentional ways. For sex workers, this non-human action may reveal itself through limiting sex workers’ ability to cross colonial borders,
or policing agencies non-consensually analysing their advertisements for evidence of human trafficking (Holmes et al., 2014; Peers Victoria Resources Society, 2014; Romano, 2018). In other words, image-based sexual violence is a sub-set of technology-facilitated violence that needs to be critically examined especially as state actors are seeking to employ technology to monitor crime or crime-adjacent activities (Powell & Henry, 2018).
The Promise and Pitfalls of Privacy Torts to Respond to Image-Based Abuse

While there are many potential avenues for justice in response to image-based abuse and other forms of technology-facilitated violence (as is discussed elsewhere in this handbook, see Chapter X), there are some benefits to a strong legislative response through privacy torts. Privacy torts embedded in legislation, first, at a minimum provide guidance to the judiciary about the elements of the statutory tort; and second, such a response helps victims of image-based abuse understand that a remedy is available to them, instead of relying on the unpredictable litigation involved in torts generally. Further, some victims of image-based abuse, like myself, have minimal faith in the criminal justice system to respond to such acts of violence, given how often that system blames the victim for their predicament. Before considering these benefits, however, a brief overview of some different approaches to the tort of invasion of privacy is necessary.

First, in Canada, the tort of invasion of privacy is often called intrusion upon seclusion (Sharpe, 2012). This latter term is confusing, and I highlight it here for educational purposes, but I call upon legislative and law reform bodies to use the more common, lay term, invasion of privacy. The terms mean the same thing and, as a general principle affirming access to justice for the general public, the plainer term should be adopted. Though it is beyond the scope of this chapter to survey equivalent laws globally, it is worth noting that the position is the same in New Zealand, where the tort is also called the intrusion tort (Gault, 2004; Moreham, 2016). Meanwhile, in Australia, case law has relied on leveraging breach of confidence
N. Sayers
claims because, at the time in 2004, no privacy tort existed at common law (Australian Law Reform Commission, 2007). There are also adaptations of this tort in the United Kingdom. Finally, in the United States, the invasion of privacy tort is generally heavily outweighed by the right to freedom of expression (Gault, 2004). As such, I refer to this tort as the invasion of privacy or privacy tort. Second, while there are statutory torts (see above), this chapter does not examine them. The reason is that these torts are often interpreted in light of the common law and other principles informing statutory interpretation, which obviously vary across jurisdictions. Further, by focusing on the common law privacy tort, this discussion amplifies the possibilities and seeks to overcome the rifts between jurisdictions. Finally, statutory torts often end up in courts over disputes about the language used within those specific torts. This means that there is very little benefit to examining how a statutory tort adopts certain language, or its scope, because the common law influences its interpretation.
In Canada, the test for invasion of privacy is as follows:
1. The defendant's conduct is intentional (including recklessness);
2. The defendant must have invaded, without lawful justification, the plaintiff's private affairs or concerns; and
3. A reasonable person would regard the invasion as highly offensive, causing distress, humiliation or anguish (Sharpe, 2012).
In New Zealand, the privacy tort contains two fundamental requirements for a successful claim for interference with privacy:
1. The existence of facts in respect of which there is a reasonable expectation of privacy; and
2. Publicity given to those private facts that would be considered highly offensive to an objective reasonable person (Gault, 2004).
The New Zealand courts outlined that the facts must be of a private nature, similar to those in breach of confidence claims, but stated that
3 Legal Possibilities and Criminalised Population …
not all private facts will give rise to an invasion of privacy (Gault, 2004; Moreham, 2016). The New Zealand courts posited that living in a community requires some individuals to give up expectations of privacy and that the concern centres on the 'wide-spread publicity of personal and private matters' (Gault, 2004, n.p.). Regrettably, in Australia, there remains less hope for the privacy tort. There have been steps to introduce statutory torts (see e.g. Government of New South Wales, 2020), but Australia's development of this privacy tort remains feeble (Gault, 2004).
Technology-facilitated violence unfortunately exists in these gaps or silences in the laws: where a law does not exist, violence will seek to fill it. Technology-facilitated violence is then reduced to non-human action outside of human control or, in some instances, left to the free market, wherein companies argue that they do not control what others post, relying on other principles in law such as freedom of speech (Gault, 2004; Powell & Henry, 2018). State and policymakers can step in to fill these gaps or the law's silence around technology-facilitated violence. In particular, the law can be used to break these silences in 'purposive ways' and, more importantly, to interrogate these silences to bring about meaningful social action (Hine, 2020).
Currently, the criminal justice system in Canada requires a victim of image-based abuse first to file a complaint with their local police agency and second to find an investigating officer with the will, resources and knowledge of these types of crimes. Understanding of these crimes depends on the resources available at the local level to investigate them. For smaller regions with small policing agencies, investigations are limited by the investigative resources (including money) available to identify the accused in image-based abuse.
In my own experience, the abusers are often nameless and faceless. It becomes very difficult to determine who is engaging in this behaviour, and I had to resort to hiring my own private investigator, a resource not available to everyone. Flowing from these interactions with the police, it then takes a prosecutor to understand that a crime has taken place and to determine whether there is sufficient evidence to prosecute the individual, if one is identified, based on the evidence available. Only if charges are laid, then the victim of the
image-based abuse is left wondering whether the trier of fact (in a judge-alone or jury trial) hearing the allegations will understand the crime itself. Flowing from the trial's truth-seeking process, the victim is left feeling unheard, disbelieved and potentially discredited as to whether they were actually harmed.
The Selective Silencing of Law
Speaking back to the time I remained silent after my sex working identity, together with nude photos, was connected to my legal identity, people often question why I didn't go to the police. I did. They also question why I didn't speak about it more when it happened. Because my silence was constitutive and meaningful, we must be cautious about viewing all silence as negative (Henson, 2021; Hine, 2020; Hirschauer, 2006; Tsalach, 2013). Questions around silence must also recognise silence as a feature to be taken seriously (Henson, 2021; Hine, 2020; Hirschauer, 2006; Tsalach, 2013). Lessons ought to be learned from the silence of victims and survivors of technology-facilitated violence.
Survivors and victims of technology-facilitated violence are often left with little recourse in image-based sexual violence (Powell & Henry, 2018). Women with suggestive photos, or even private nude photos, are often blamed for their fate (Deibert, 2017). Only once did police take my experiences with image-based sexual violence seriously, and this occurred in early 2020. This was likely due, in part, to the fact that I broke down crying after being in court representing my clients. I had opened up my phone during a court recess (a break during court) to find an email from an unnamed individual who had saved photos of me he found online and sent them back to me, followed by sexually suggestive text. After all was said and done in court, I sat outside the courtroom, scrolling through my phone, trying to look busy, waiting for the other lawyers to leave, and immediately approached the guard once there appeared to be no one else around. All I could say when I approached the guard was, 'Can I talk to you for a minute?', with a lump in the back of my throat. Then, once we entered the private interview room nearby and he closed the door, I
broke down crying. He saw the email and photos, expressing disgust. He took down detailed notes. I felt validated and heard. Within a few days, another police officer called me, saying that he was going to investigate further. Although they completed the investigation, they could not find any details about who sent the email. This was the only time I felt validated in my experiences, and it occurred when I was a lawyer following court appearances. I often reflect back on this moment and ask whether it would have been different had I not appeared as counsel that day in court.
The silence of technology-facilitated violence in law and policy is ironically both harmful and purposive. It is harmful because it permits the violence inflicted on victims to exist without question and with minimal recourse (Henry & Powell, 2015; Powell & Henry, 2018). In Canada, it is even more clear that technology used to facilitate violence is not permissible (Deibert, 2017). Comparably, in Australia, the privacy tort remains up for debate, including its scope and limits (Government of New South Wales, 2020). Yet this silence is also purposive because it shows us who is capable of receiving justice in the face of injustice. While the police investigated the 2020 incident, I also employed a separate private investigator to examine the data that I had on my end. They could only locate that the person was likely a male in the United States. This was still more information than the police could provide me (the police said they had looked into it but could not find any information). This ability to hide one's identity is also a form of silencing that uses non-human action to facilitate human violence.
Reflecting back on the 2014 incident, I was not seen as a respectable victim capable of receiving justice; rather, I was seen only as a whore who had invited these harms into her own life. What did I expect? What could I expect? Certainly, I could not expect justice.
I wandered through my first year of law school, the year this incident took place, contemplating self-harm. When I sought help from doctors, I told them that I was being stalked, but the health practitioners became suspicious when I could not identify who was stalking me or how. After the invasion of privacy, the harm and its effects remained omnipresent. Innocent conversations with strangers, mainly other (white) women, similar to the majority of women who did this to me, caused me to feel suspicious about whether they recognised who
I was. Did they recognise my features from the photos lurking around on social media, or had the anti-sex worker sent out the photos to others (as she said she had saved them)? By comparison, in the 2020 incident I was arguably the 'ideal victim' (Christie, 1986; Corteen, 2018): a woman lawyer just doing her job who happened to open up her phone to these saved photos and suggestive emails. Resources were whipped up to respond to my complaint, including a full investigation.
It was important for me to take up this space in that moment as a young Indigenous lawyer in a region of my ancestral territory. I was and am acknowledging all the stories of Indigenous women who came before me and who could not speak up or be believed in their own moments of abuse, let alone image-based abuse. While I am a lawyer with some privilege, I am not afforded all the privileges that, say, a white, able-bodied woman may experience in these systems. The barriers faced by those who have come before me, and who will come after me, are far greater, and I believe that, as an Indigenous woman lawyer, I have to speak out about these types of violence and how they intersect with my advocacy as a form of silencing; I talk back to these forms of silence and do not wish to be silenced further.
Conclusion
As a lawyer, I occupy a privileged space, but this does not mean I am privileged in all the spaces that I occupy. My race, gender and class impact how I exist and am perceived by justice system participants. When I was 'out' about my sex work in 2014, the police understandably did not know how to respond to my complaint. The policing agency I complained to also likely did not have the resources to investigate the complaint, which involved an individual in another jurisdiction. By 2020, there was no excuse for a policing agency not to have the resources to investigate these complaints. Still, police complain that they need more resources and more money to invest in the technology to investigate. Yet, would the police have responded the same way if I had been a sex worker in court that day as opposed to a lawyer dressed
in a suit? I cannot tell. I would hope that they would respond appropriately. The silence in the law only invites more questions and leaves the door open for more violence. The question is not what the silence tells us about technology-facilitated violence; rather, state and policymakers must examine what this silence says about policy and legal approaches to technology-facilitated violence. Courts refusing to adopt certain arguments is a form of silence and silencing. State, law and policymakers are concerned about freedom of expression when balancing the growing body of law governing invasion of privacy, but freedom of expression for whom, and on what grounds? In 2014, I called the police, and I was not believed. In 2020, I was likely only believed because I was seen as a professional deserving of justice that day. Refusing to enact certain legislation, as in Australia, is a legislative choice; it is a policy of refusing to protect victims of technology-facilitated violence.
Acknowledgements Naomi would like to thank Saranjit Dhindsa, a law student at the time of writing, for compiling the research to support this paper. Naomi would also like to thank all the sex workers, especially Indigenous sex workers, who have come before her, who continue to fight and who have risked their lives and privacy in the name of safety.
References
Australian Law Reform Commission. (2007). Review of Australian Privacy Law [2007] ALRCDP 72. Online. http://www.austlii.edu.au/au/other/lawreform/ALRCDP/2007/72.html
British Columbia. (1996). Privacy Act, R.S.B.C. 1996, c. 373.
Christie, N. (1986). The ideal victim. In From crime policy to victim policy (pp. 17–30). Palgrave Macmillan.
Corteen, K. (2018). New victimisations: Female sex worker hate crime and the 'ideal victim'. In M. Duggan (Ed.), Revisiting the 'ideal victim': Developments in critical victimology. Policy Press.
Deibert, R. J. (2017). Submission of the Citizen Lab (Munk School of Global Affairs, University of Toronto) to the United Nations Special Rapporteur on violence against women, its causes and consequences, Ms. Dubravka Šimonović. University of Toronto. Online. https://citizenlab.ca/wp-content/uploads/2017/11/Final-UNSRVAG-CitizenLab.pdf
Gault, P. (2004). Hosking v Runting. Court of Appeal of New Zealand. Online. https://www.5rb.com/wp-content/uploads/2013/10/Hosking-v-Runting-NZCA-25-Mar-2004.pdf
Government of New South Wales, Australia. (2020). Civil Remedies for Serious Invasions of Privacy Bill 2020. New South Wales. Online. https://www.parliament.nsw.gov.au/bill/files/3723/First%20Print.pdf
Henry, N., Flynn, A., & Powell, A. (2020). Technology-facilitated domestic and sexual violence: A review. Violence Against Women. https://doi.org/10.1177/1077801219875821
Henry, N., & Powell, A. (2015). Embodied harms. Violence Against Women, 21(6), 758–779. https://doi.org/10.1177/1077801215576581
Henson, D. F. (2021). Dreaming of silence: An autoethnography of space, place, time and trauma. International Review of Qualitative Research. https://doi.org/10.1177/1940844720937812
Hine, C. (2020). Strategies for reflexive ethnography in the smart home: Autoethnography of silence and emotion. Sociology. https://doi.org/10.1177/0038038519855325
Hirschauer, S. (2006). Putting things into words: Ethnographic description and the silence of the social. Human Studies. https://doi.org/10.1007/s10746-007-9041-1
Holmes, C., Hunt, S., & Piedalue, A. (2014). Violence, colonialism and space: Towards a decolonizing dialogue. ACME: An International E-Journal for Critical Geographies, 14(2), 539–570.
hooks, b. (1984). Feminist theory: From margin to center. South End Press.
hooks, b. (1989). Talking back: Thinking feminist, thinking black. South End Press.
Law Commission. (2009). Invasion of privacy: Penalties and remedies. Review of the Law of Privacy: Stage 3. Online. https://www.lawcom.govt.nz/sites/default/files/projectAvailableFormats/NZLC%20IP14.pdf
Law Commission. (2011). Review of the Privacy Act 1993. Review of the Law of Privacy: Stage 4. Online. https://www.lawcom.govt.nz/sites/default/files/projectAvailableFormats/NZLC%20R123.pdf
Manitoba. (2008). Privacy Act, C.C.S.M. c. P125.
McLachlin, B. (2013). Canada (AG) v Bedford. The Supreme Court of Canada. Online. https://scc-csc.lexum.com/scc-csc/scc-csc/fr/item/13389/index.do
Moreham, N. A. (2016). A conceptual framework for the New Zealand tort of intrusion. Victoria University of Wellington Law Review, 47(2), 283–304.
National Inquiry into Missing and Murdered Indigenous Women and Girls. (2019). Reclaiming power and place: The final report of the national inquiry into missing and murdered Indigenous women and girls. Online. https://www.mmiwg-ffada.ca/wp-content/uploads/2019/06/Final_Report_Vol_1a-1.pdf
Newfoundland & Labrador. (1990). Privacy Act, R.S.N.L. 1990, c. P-22.
Patel, U., & Roesch, R. (2020). The prevalence of technology-facilitated sexual violence: A meta-analysis and systematic review. Trauma, Violence, & Abuse. https://doi.org/10.1177/1524838020958057
Peers Victoria Resources Society. (2014). Sex work and the legal environment. Accessed October 8, 2018. http://www.safersexwork.ca/wp-content/uploads/2014/06/PEERS-SexWorktheLaw-25June2014.pdf
Powell, A., & Henry, N. (2018). Policing technology-facilitated sexual violence against adult victims: Police and service sector perspectives. Policing and Society. https://doi.org/10.1080/10439463.2016.1154964
Powell, A., Flynn, A., & Henry, N. (2019). Sexual violence in digital society: Understanding the human technosocial factors. In The human factor of cybercrime. Routledge.
Rand, H. M. (2019). Challenging the invisibility of sex work in digital labour politics. Feminist Review, 123(1), 40–55. https://doi.org/10.1177/0141778919879749
Romano, A. (2018). A new law intended to curb sex trafficking threatens the future of the internet as we know it. Vox. Online. https://www.vox.com/culture/2018/4/13/17172762/fosta-sesta-backpage-230-internet-freedom
Saskatchewan. (1978). Privacy Act, R.S.S. 1978, c. P-24.
Sayers, N. (2014). Statement at Parliament of Canada. Justice and Human Rights Committee. Online. https://openparliament.ca/committees/justice/41-2/33/naomi-sayers-1/only/
Sharpe, R. J. (2012). Jones v Tsige. Ontario Court of Appeal. Online. https://www.canlii.org/en/on/onca/doc/2012/2012onca32/2012onca32.html
Tsalach, C. (2013). Between silence and speech: Autoethnography as an otherness-resisting practice. Qualitative Inquiry. https://doi.org/10.1177/1077800412462986
Whata, J. (2012). C v Holland. High Court of New Zealand. Online. http://www.nzlii.org/nz/cases/NZHC/2012/2155.html
Whitinui, P. (2014). Indigenous autoethnography: Exploring, engaging and experiencing "self as a native method of inquiry". Journal of Contemporary Ethnography. https://doi.org/10.1177/0891241613508148
4 Image-Based Sexual Abuse and Deepfakes: A Survivor Turned Activist's Perspective
Noelle Martin
Introduction
There is nothing stronger in the world than the human spirit. It is a powerful force that can withstand life's cruelties and injustices. Sexual predators have tried for almost a decade to break my spirit, but I have wielded a strength to fight back that they simply cannot break. This is my war story, of a battle waged on a borderless digital plane. My words are my sword. And my spirit is my sustenance. In this chapter, I share my experiences of technology-facilitated abuse in its raw, unadulterated beauty and ugliness. I share how a mediocre nobody, a young woman of colour and daughter of immigrants, forged a path from victim, to survivor, to law reform campaigner, to global activist. My efforts played a small part in changing laws across Australia.
N. Martin (B) Nedlands, WA, Australia
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_4
My voice has sought to highlight the human cost of technology-facilitated abuse, image-based sexual abuse, and deepfakes on television, radio, podcasts, magazines, and in media interviews all over the world, in countries such as Japan, the USA, the UK, Germany, and Spain, to name a few. My activism led to my being awarded Young Western Australian of the Year and recognition as a Forbes 30 Under 30 Asia list honoree. This is my story.
The Abuse
'Take me back to the beginning, to when it all began', asks a journalist. For the 100th time, I pause, mask a deep sigh, and transport back to that moment in time, the moment that altered the course of my life forever. This is where my story begins.
I remember it vividly. I was 18 years old and it was late at night. I was sitting upright on my bed with my laptop in front of me in a share house not far from my university campus in Sydney, Australia. I had moved away from home and my family in Perth, Australia, for law school. I was idealistic, starry-eyed, and full of hope that I would one day achieve my life-long dream of becoming a lawyer. That night, I remembered having learned about Google's reverse image search, a function that lets you search by image rather than text and shows you where that image, or similar images, appear on the internet. Out of pure, unexplainable curiosity, in the same way someone might Google their name, I tried this reverse image search on myself. I used an image of me wearing a black dress that I had taken at 17 years old before a night out. It was a selfie of me in a bathroom mirror. I didn't have any expectations of what I might find. I'm not even sure I expected anything to appear at all. I certainly did not expect, not in my wildest dreams, to discover that I was a target of 'image-based sexual abuse', a term I would later learn describes what was happening to me (Powell et al., 2019, 2020). Image-based sexual abuse involves three key non-consensual forms of harm: the non-consensual creation or taking of nude or sexual imagery, the non-consensual sharing of nude or sexual imagery, and
threats to share nude or sexual imagery (Henry et al., 2020). More recently, image-based abuse has moved beyond captured imagery to digitally manipulated imagery, with the assistance of technologies such as Photoshop and artificial intelligence (Flynn, 2019; Henry et al., 2020; see also Chapter 29, this volume). This is commonly referred to as 'deepfakes', and essentially it involves an image being manipulated onto pornographic content (see Chapter 29, this volume).
Within a split second of undertaking a reverse Google image search, my laptop screen was plastered with dozens of links to images of me on numerous pornographic sites, across multiple pages of search results. My stomach sank to the floor. My heart was pounding out of my chest. My eyes widened in shock and disbelief. I was completely horrified. One by one, I clicked on the links to these various pornographic sites. I saw multiple threads and galleries that contained photos of me. I learnt quickly that the images of me weren't just stolen from my social media pages, but were also lifted from my friends' social media accounts and from public, online photo galleries of the university bars I regularly attended. Identifiable information about me accompanied the images. It included details about where I lived and what I studied, and there were links added to my social media pages. Degrading and jarring sexually graphic commentary also accompanied my images. This included comments about the way I looked and what the commenters would like to do to me: 'Cover her face and we'd fuck her body', one comment read. They even juxtaposed ordinary images of me next to the naked bodies of adult actresses with bodies and skin tones that resembled my own and made false claims that the juxtaposed intimate images were me and that I had sent those images to them. Words cannot describe the violation I felt seeing myself, my identity, name, body, and agency being misappropriated and misrepresented in this way.
Little did I know that the mere distribution of ordinary images of me on pornographic sites was just the ‘gateway’ for what was to come. I soon discovered these anonymous sexual predators had been manipulating, altering, and doctoring ordinary images of me into pornographic material. My face had been doctored onto the bodies of naked adult actresses in solo positions and in imagery that depicted me having sexual intercourse with others. My face was photoshopped with semen on it,
and in imagery depicting me being ejaculated on. My face was edited onto multiple covers of adult movies. My blouses were edited to give the effect that they were wet, or transparent, so people could see my computer-generated, fake nipples. These sexual predators had even ejaculated on images of me, taken photos of their semen and penises on my image, and posted these secondary photos onto pornographic sites in what is referred to as 'tributes' or 'cumonprintedpics' (see also Henry & Flynn, 2019, for a discussion of sexual deviancy and illicit networks online). And to make it clear that they were out to misrepresent and misappropriate me, they even edited my name 'Noelle' in fancy font and superimposed the text on some of the images to falsely depict me as someone I am not.
Countless thoughts and questions flooded my mind: What was happening? Why was this happening to me? Who is responsible? Is it someone I know? What should I do? Is there anything I can do? I had (and still have) never been in a proper relationship, so I had no 'jilted ex-lover'. I don't believe I had (or have) any enemies; certainly there is nobody I can think of that I know personally who would do this. Before that moment, I had never heard about anything like this happening to another person. I didn't even know what the term was to describe what was happening to me. In fact, the term 'image-based sexual abuse' had not yet been coined. The term was later developed as a concept and publicly deployed in 2015 by Professors Clare McGlynn and Erika Rackley (McGlynn & Rackley, 2017).
The Removal Phase
In the beginning, I reached out for help. I called the police. I went to the police station with my laptop in hand, with the various tabs open to the many pornographic sites. I contacted government agencies. I made enquiries to a private investigator. But there wasn't much that anybody could do. In fact, there was nothing they could do, or did do. The websites were hosted overseas. There were no specific laws that dealt with this issue, and as I didn't know who the perpetrators were, it felt like not only would there be no justice or accountability, but there would
be no prospect of justice or accountability. Maybe if I had known who was responsible, then potentially something more could have been done, but I didn't, so it felt like there was no starting point, nothing concrete to work with to tackle the harm and abuse I suffered.
Riddled with unanswered questions and left without any support, the only practical option I had was to contact the webmasters of the sites myself to try to get everything deleted. It felt like a race against time, just hoping that nobody I knew would stumble upon them. Feeling utterly alone, scared, violated, degraded, and dehumanised, I contacted the sites one by one, sending the webmasters standardised requests to delete the material that had been created and shared without my consent. These sites had been in existence and operating for many months, likely even a year (since I was 17 years old), before I discovered the images. I didn't know then what I know now, which is that by the time I found out, the damage was done. It was already too late. Trying to get the material removed was futile. It was too late for me to ever get on top of the situation and control the proliferation of these doctored images. Even if I somehow managed to get the non-consensual material removed from the internet, the images had already proliferated so much that I would never be able to guarantee they would not resurface (or control how they would resurface) weeks, months, or even years later.
Only time would reveal the futility of the removal approach, because the more I tried getting the material removed, the more sites I would discover, the more graphic the images became, and the more my images were being shared and seen. Sometimes I would be successful in getting an image or thread removed, only to have it resurface on that same site weeks later. Some sites required ID to prove it was me making the requests. On some sites, I could not locate the contact details for the webmaster.
Some sites just did not respond at all. It was exhausting, a never-ending battle that I could not win no matter how much I tried. I remember spending my university breaks embarking on this task. There were times I would try to forget this was happening to me, but the relief was short-lived because the fear, worry, and paranoia would sink back in. I couldn't sleep at night worrying how this would impact my life, fearful that I would never get a job if employers
saw the material of me. I started smoking to relieve my stress and shame. I drank to cope, and I spread myself thin with casual sexual encounters, internalising my objectification and struggling with cripplingly poor self-esteem and self-worth. My education was significantly disrupted. I struggled to find the motivation to study and navigate law school while also silently fighting for my life, name, reputation, employability, dignity, and humanity. At times, it felt like me versus the World Wide Web. I was powerless, helpless, and alone, but I continued this process because there was nothing else I could do. This continued until one webmaster responded that he would only delete the material if I sent him nude photos of myself within 24 hours, a practice I later learned was called 'sextortion' and something many victims of image-based sexual abuse have experienced (Henry et al., 2019a, 2020; McGlynn et al., 2020; Rackley et al., 2021).
Throughout the removal phase of my journey, while I wasn't having much luck getting the images taken down, I still clung to the comfort that the abuse had not yet reached a point where friends or acquaintances were stumbling upon this material, or where a text search of my name on Google would surface it. It was only through reverse searching my images that they could be discovered. This small comfort sustained me until, one day, a stranger reached out to me through LinkedIn notifying me of the material. This was the first time somebody had independently communicated to me any knowledge of the images. My fears around the images being discovered were materialising. This stranger, who used various aliases, took it upon himself to identify and notify me of all the pornographic sites that contained my images (many of which I was not aware of) and helped to get them removed. His assistance, while unsettling, was helpful, as the sites that he identified and emailed me about were, in fact, being removed.
While I still don’t know the identity of the person involved or the motivations behind his assistance, victims of this kind of abuse have come to learn of the role of ‘white knights’ or ‘saviours’, which refers to people who scour these sites, reach out to victims to let them know what is happening and help to get material removed, possibly for nefarious reasons and/or to initiate contact with victims. In fact, some pornographic sites that host and
facilitate the exchange of manipulated images of women, which are still in existence today, are fully aware of these ‘white knights’ and actively discourage the practice.
Speaking Out
The emotional toll the abuse was taking on me was significant. The futility of the removal process, the sextortion request, the potential 'white knight' discovery, and the fact that there was no prospect of accountability or justice were compounded by the absence of any specific laws to deal with image-based sexual abuse, by the media's silence and the public's apparent unawareness of this kind of abuse, and by the disruption to my education and well-being and my fear that I would not find employment, especially in the legal profession where name, image, and reputation are vital. All of this finally led me to my breaking point. I had had enough. I wished it hadn't come to this. I wished that there was something else I could do, but it felt like there wasn't. The only option left was to face my fears and insecurities and speak out publicly to reclaim my name and fight for justice before it destroyed my life. My family, with the best of intentions, actively discouraged me from doing so, but I felt I had to. I had no other choice.
I originally wrote an article for my university magazine about my experiences in 2015. But that didn't provide any meaningful or tangible benefit to my fundamental aims of reclaiming my name and fighting for justice. At the time, because I did not know the term to describe the abuse, I coined the term 'morphed porn', because one of the pornographic threads that featured the fake, altered imagery of me was entitled 'morphs'. The term 'morph porn' was later used in the Explanatory Memorandum to the Commonwealth laws that would be enacted years later to criminalise image-based sexual abuse at the federal level (Enhancing Online Safety (Non-consensual Sharing of Intimate Images) Act 2018 [Cth]).
In 2016, I wrote an article for my blog 'contemporarywomen.org' about my experiences, a blog that I had previously created as an outlet
62
N. Martin
for my feminist rage. I sent the article to multiple news organisations to share my story. Australia's public broadcaster, the Australian Broadcasting Corporation (ABC), was the only news organisation that responded. And finally, after years, I felt some reprieve. I thought that by doing an interview with the ABC, the story would pop up on Google if you were to search my name, and people would know the truth before potentially seeing the manipulated imagery. Suddenly, I felt a sense of potential hope after years of it being stripped away. At the age of 22, I gave my first televised interview about my experiences. At the time, I was personally unaware of anybody else in the world with a lived experience of being the target of the non-consensual distribution of altered intimate imagery who was actively, consistently, and publicly speaking out about it, so there was no blueprint to guide me or help me navigate this process. I just had to figure it out as I went. My intention was that this one interview would be sufficient to reclaim my name. I didn't expect anything else would come from it. I remember being advised to get off social media for a few weeks after it was broadcast because we all know how cruel and merciless comments can be online. I was also aware and expected that I would receive some level of victim blaming and slut shaming from the public after the interview (see Henry et al., 2019b; Powell et al., 2018). On the night the ABC interview aired, a handful of my friends came around to my apartment on campus as a show of support, but I could not watch it with them. Instead, I anxiously paced back and forth at the front of my apartment the whole time it aired. And admittedly drowned my pain with alcohol after the airing. Soon my story started reverberating around the world. Media organisations globally began sharing it and reaching out to me directly for interviews.
At the same time, I began to receive a level of hate from members of the public that I did not anticipate. I was not prepared, and don't think you ever can be, for the onslaught of horrible, cruel, and merciless commentary that comes with speaking out about sexual abuse. I was ruthlessly victim blamed and slut shamed. I was told I was deserving of what happened to me. The slut was asking for it. Stupid, naive attention seeking whore. No self-respect, look at her. She should be flattered people would do that to her, it's a
4 Image-Based Sexual Abuse and Deepfakes: A Survivor …
63
compliment. She's fat, gross, and ugly, who would do that to her, were just some of the things they said. I was enraged. As if what I wear or how I look is an invitation to abuse me. The clothes I wear and the body I have do not entitle another person to misappropriate my personhood, sexuality, or agency. They'd also say that I should be careful what I post online, get off social media altogether, or change my privacy settings—placing the blame and responsibility on me to change my behaviour to avoid being abused by perpetrators, when quite frankly it should be the perpetrators changing their behaviour. And on this point, social media offers economic opportunities; it enables individuals to participate in social, economic, and political discourse; and it allows people from, for example, culturally and linguistically diverse groups to create communities, be represented and visible, and take up space in society in ways that traditional forms of media and other societal structures have not allowed. The mentality that victims of image-based sexual abuse should just get off social media or change their privacy settings to avoid being abused is inherently victim blaming and would disproportionately affect women, and particularly women of colour, as they are disproportionately affected by image-based sexual abuse, excluding us from opportunities afforded to others without such conditions (see Henry et al., 2018). Not to mention it sets a dangerously low bar for society and speaks volumes about the behaviour we accept and tolerate, and arguably enable, with such a mentality. While I refused to be swayed by the backlash, which was not only sexist and misogynistic but racialised because of the colour of my skin, I'm still human, and the commentary was agony.
At times it felt like the public hate was almost as bad as the abuse itself, because this tough exterior I sometimes like to project is really just a façade, unconvincing to those close to me who know I'm more sensitive and fragile than most. After I spoke out, I tried hanging myself with rope on the swing in my backyard. My father stopped me and almost had to call the ambulance. I put on a lot of weight, and my trauma started showing up on my body, which is now riddled with stretch marks; the beautiful scars of my survival. From my perspective, because I spoke out about an issue that wasn't specifically criminalised at the time, it meant that there weren't
entrenched, enshrined, and established community attitudes towards this issue that I could hold onto. Take, for example, child sexual abuse, which is not only criminalised, but universally condemned. What I was speaking about was not. For me, it meant that I copped what felt like a lot of the public's misapprehension or misunderstanding of the nature and harms of this abuse. And because I put a human face to a relatively novel issue that certainly was not dominating the public's consciousness at the time, it meant that I copped backlash that was directed at, and personalised towards, me. What also made speaking out about this abuse difficult was the fact that I am a brown-skinned, thicker woman of colour, who does not fit traditional notions of idealised standards of beauty in the West. Sexual predators creating and distributing altered pornographic material of me naturally drew intense judgement and scrutiny about the way I look, as people attempted to understand why this happened to me. I have no doubt my identity and the way I look shaped the public's reception of my story. 'Why?' 'Why did it happen to you?' 'Why did they target you?', or some iteration of it, has been the most commonly asked question by journalists or people I've spoken to about my experiences. While I don't hold it against anyone who has asked me this question, one that I've certainly asked myself, it has been emotionally exhausting, humiliating, and soul-crushing to have to speculate and, in some ways, justify to people why I was targeted in this way (because I don't actually know why I was targeted, as I do not know who is responsible or what their motivations are, I can only infer 'why' from my understanding of my case). Sometimes I feel like responding, 'why don't you ask the perpetrators and let me know'.
In response to this commonly asked question, I would say that I was ‘sexually fetishised’, partly because it makes some sense, but mostly in order to help other people comprehend, digest, or reconcile why I could have possibly been targeted by this conduct. And whenever I have responded in such a way, I can almost see or hear, figuratively speaking, the light bulb flash in people’s minds, like they finally get it. The intersection of race and gender and the subtle or not so subtle ways it manifests can be hard to explain, qualify, and quantify, but it leads me to wonder, if I was a conventionally attractive, skinny, white woman who epitomises
traditional and idealised standards of beauty in the West, would I still be asked the same question? There's almost this sense of what Christie (1986) referred to as the 'ideal' victim, a mould I certainly do not fit.
Law Reform

Amidst all the public backlash, I just couldn't and wasn't going to let the outpouring of hate defeat me. I knew what the perpetrators had done was wrong and I knew victims deserved justice. In retrospect, it was this unwavering belief that the conduct of the perpetrators was unjust and unacceptable and that victims deserve justice, that sustained me during this period against all the hate, criticism, family push back, and the many obstacles I faced. I decided to petition the Australian Government to criminalise this form of abuse. I launched two petitions. One was an e-petition on the Parliament of Australia website. The other was on Change.org, which I focussed my energy on. I contacted academics, activists, women's safety organisations, anybody I thought may support this fight, to sign my petition. But I only received around 300 signatures. The lack of support for my petition was one of the most disheartening aspects of my journey, and I ended up giving up on petitioning altogether, and sadly distanced myself from friends who didn't show up for me. Instead, I focussed on raising public awareness and fighting for justice in the media by intentionally speaking out to audiences from all walks of life, so as to avoid preaching to the choir, including delivering a TEDx talk that was later published as a TED talk, because I knew that my message had to reach as many people as possible if there was ever to be any shift in community attitudes towards this issue (Martin, 2017). Completely organically, and without my direct petitioning efforts, my Change.org petition ended up surpassing 44,000 signatures. In May 2017, I delivered a speech to the media, standing alongside the New South Wales Attorney General at a press conference announcing the criminalisation of image-based sexual abuse in the state (Crimes Act 1900 [NSW]). From the public gallery, I watched as the NSW Attorney General delivered the Second Reading speech for new laws introduced
to criminalise this conduct, and in his speech, to my complete surprise, I was mentioned by name. Amidst all the abuse I had endured, this was probably one of the most special moments for me throughout the whole journey. Listening to the highest law officer in the state recognise my journey made me well up inside to the point of tears. In fact, it was this gesture that marked the start of my still incomplete healing journey, because despite the introduction of these laws, I knew there was still nothing that could be done to assist me, especially because I don't know who the perpetrators of my abuse are. But it was almost as if I had received some form of justice, and it helped to create a pathway for anyone else who experienced the type of abuse I did to also have the possibility of securing justice through the law. And I am immeasurably grateful to the NSW Attorney General for that. It seemed like for the first time, things were really looking up because, somehow, I had managed to complete all my work for my law degree at the end of 2017, and a few months later, in February 2018, I received my certification of qualification for a double degree in Law and Arts. I remember choking up when I received that certification because it felt like such an enormous relief, that after everything that had happened, I was still standing, and did not let all the pain stop me from achieving my goals. In June 2018, I stood alongside the Western Australian Attorney General at a press conference announcing the criminalisation of image-based sexual abuse in Western Australia (Criminal Code Act 1913 [WA]). For me, the specific criminalisation of this abuse served as a welcome validation of an issue that I did not feel was always taken seriously. In hindsight, I wish that I had celebrated these wins more when they occurred. Now, I want to be clear, and it almost goes without saying, that I do not want, nor believe it is fair, to take credit for 'changing the laws' in this area.
That credit belongs to the tireless work of world-class academics, including Drs Asher Flynn, Nicola Henry, and Anastasia Powell, without whom we would not have image-based sexual abuse laws at all, as well as other lawmakers, policymakers, and victims and survivors who have fought the hard fight. In many ways, it could be said that I was just
a 'public face' for this issue—a 'human story'—and that is all. While I concede that this may be true, I know that I fought with my life and whole heart to help, in the only way I knew how, to bring about law reform. Image-based sexual abuse is now criminalised in every state and territory, except Tasmania, and at the federal level in Australia—something that seemed almost unimaginable even ten years ago (Flynn & Henry, 2019).
Deepfakes

While all the progress was being made on the legal front, and while I was continuing to speak out in the media, the abuse did not stop. Perhaps unsurprisingly, it also escalated in both its nature and its proliferation. Sexual predators continued to manipulate and distribute images of me. There was one that depicted me wearing a shirt with the words 'I AM A DUMB COW' on it. Others had taken images of me that were used in media articles in which I was advocating against the abuse, and used them to carry out the very thing I was fighting against. In one such case, I had received a certificate of nomination at the NSW/ACT Young Achiever Awards in recognition of my efforts in this area, and an image of me holding my certificate was doctored so that a fake adult movie cover was depicted in the area where the certificate was supposed to be, and the background, which had accurately contained the brand names of the bodies that supported the awards, was altered to read 'MyFreeCams', to depict me as attending an adult pornography event. The abuse didn't stop when I spoke out. And it certainly didn't stop after the laws were introduced in NSW, in WA, and later across Australia. It was another price I had to pay for daring to speak out about abuse. Little did I know that things were about to really step up. In 2018, while at work, I received an email from an unknown email address telling me there was a 'deepfake video of [me] on some porn sites', a computer-generated fake pornographic video. My initial reaction was mixed. While I was utterly shocked, I didn't feel the same emotional reaction I did when I discovered the altered images, which in itself is troubling and concerning, but more importantly, it was telling of how normalised this
abuse had become in my life that a fake, pornographic video of me did not evoke an intense emotional reaction. Rather, I felt anger. Anger at the audacity of the perpetrators to continue to target me and demonstrate an utter disregard not only for me, but also for the laws of Australia. I don't know how this person got my email address, but I presume it may have been from the removal phase of my journey, when I would reach out to the webmasters to get the material removed. I was later sent a link to an 11-second video depicting me engaged in sexual intercourse. The video was entitled 'Noelle Martin on BBC' (an acronym for 'Big Black Cock'). I soon discovered another video depicting me performing oral sex. The now-deleted videos (which are still accessible in other locations) amassed almost 40,000 views. The same person who emailed me about the deepfake video sent another email saying that the 'poster of the deepfake says he has 30 videos', and that the poster wanted money via PayPal. I am yet to see the other videos. I am currently only aware of two. I believe that the motivations behind the sexual predators' distribution of the fake videos were different from the motivations behind the distribution of the original manipulated images. While I do not know for sure what the precise motivations are, I believe that the videos were weaponised to taunt, intimidate, and silence me. What frightens me about deepfakes and the rise of technology-facilitated abuse is that the reporting rate for all kinds of sexual abuse is already incredibly low. Many people are understandably hesitant to report sexual abuse, or speak out, out of fear of not being believed, of victim blaming and slut shaming, of not being taken seriously (see Burgin & Flynn, 2019; Flynn, 2015; Henry et al., 2015).
And when it comes to reporting or speaking out about this kind of sexual abuse in particular, victims may feel reluctant to do so, because to speak out would increase the level of exposure of the very material that is the source of much of the harm. Compound this with deepfake videos, or media manipulated using AI technologies, and it is becoming increasingly difficult to detect what is real and what is fake; this is going to serve as another powerful deterrent to victims seeking help or speaking out (see also Chapter 29, this volume).
What also frightens me about the rise in technology-facilitated abuse, and deepfakes in particular, is not necessarily that the technology is becoming so advanced that it may become impossible to detect what is real or fake; it is that the technology, as it currently exists, even in an imperfect, detectable form, is enough to cause irreparable and life-long harm to a victim. This technology has potential effects on all women, not just those who are celebrities or public figures (Flynn, 2019). While it is currently more likely for a fake, pornographic video to be created of a celebrity woman than a non-celebrity woman, anybody could potentially be targeted (see Chapter 29, this volume). While this chapter is not the place to compare how deepfakes could emotionally impact a celebrity woman versus a non-celebrity woman, there are some unique differences that I envision would result from a non-celebrity woman being targeted by this abuse. The first is that, unlike celebrities or public figures, ordinary people may not have the platform to publicly debunk the abuse. The second is that reasonable members of the public have the capacity to take what they hear or see about well-known celebrities with a grain of salt, with a critical eye, amid a sea of tabloid gossip and rumours, an advantage that may help celebrity women in ways that may not help non-celebrity women. The third is that the general public may not know ordinary women who are not public figures at all, and so any false, manipulated media about an ordinary person could have the unfortunate effect of altering, consuming, or defining their life. It's deeply concerning how this abuse could and does affect ordinary women.
Conclusion

In 2020, I was admitted as a lawyer in WA. At last, my dreams came true despite this rollercoaster of a journey that stood in the way. For me, this was a testament to the power of the human spirit to overcome life's cruelty and injustice. However, no accolade or award can heal you from the trauma of abuse. I am still healing and trying to navigate life, relationships, studies, and employability.
Sadly, to this day, I still experience abuse. In April 2021, a sexual predator shared doctored intimate images of me with three separate friends of mine on social media. When this occurred, I had a sad realisation that despite the laws changing in Australia, and despite there being a personally observable shift in community attitudes towards this abuse, and significant strides being made around the world to tackle this form of harm, there is still a very, very long way to go to achieve accountability and justice for victims, and to create a safer online and offline world. One thing I know for sure is that we all have the power within us to overcome adversity and help shape a better world, regardless of who we are or what we look like. I take enormous comfort in the resilience, strength, and power of the human spirit to eventually lead us to that place.

Acknowledgements

I would like to acknowledge my dear, beloved family for their unconditional love and immense support throughout my life—Leon, Annalise, Naomi, Stephanie, Kimberly, and Lauren. I would also like to express my deepest gratitude to Liam Downey, the most selfless and caring human being I know, my rock, my confidante, and a rare gem of a human who has always been there for me from day one and who has this unparalleled way of calming me down whenever I'm in his presence. Tanaya Kar, the most wonderful friend I could ever hope for, whose passion, intelligence, and conviction never ceases to inspire me and who makes life worth living. Mads Duffield, whose strength is incomparable, who I am in constant awe of, and who makes the world an immeasurably better place. Kati Kraszlan, who has been my guardian angel throughout this journey, who took a chance on me and helped me when literally nobody else would. I wouldn't be anywhere without you, and I am forever in your debt. Mikaela James and Erin Browne, for being beacons of light, comfort, and warmth in my life.
Nigel Khine, whose words of kindness and love sustained me in my darkest moments. Zara Bending and Shireen Daft, for being my role models and supporting me amidst the public backlash and ever since. Their love, kindness, and encouragement have meant the world to me. I would also like to thank Drs Asher Flynn, Anastasia Powell, and Lisa Sugiura for the opportunity to share my story and insights. It is uncommon for survivors and activists to be brought to the table, and so I am immeasurably grateful that they have included our voices in this book.
References

Crimes Act 1900 (NSW).
Criminal Code Act 1913 (WA).
Enhancing Online Safety (Non-consensual Sharing of Intimate Images) Act 2018 (Cth).
Burgin, R., & Flynn, A. (2019). Women's behavior as implied consent: Male "reasonableness" in Australian rape law. Criminology & Criminal Justice. Online first. https://doi.org/10.1177/1748895819880953
Christie, N. (1986). The ideal victim. In E. A. Fattah (Ed.), From crime policy to victim policy. Palgrave Macmillan.
Flynn, A. (2015). Sexual violence and innovative responses to justice: Interrupting the recognisable narratives. In A. Powell, N. Henry, & A. Flynn (Eds.), Rape justice: Beyond the criminal law (pp. 92–111). Palgrave Macmillan.
Flynn, A. (2019, March 12). Image-based abuse: The disturbing phenomenon of the "deep fake". Monash Lens.
Flynn, A., Clough, J., & Cooke, T. (2021). Disrupting and preventing deepfake abuse: Exploring criminal law responses to AI-facilitated abuse. In A. Powell, A. Flynn, & L. Sugiura (Eds.), The Palgrave handbook of gendered violence and technology (pp. 583–602). Palgrave Macmillan.
Flynn, A., & Henry, N. (2019). Image-based sexual abuse: An Australian reflection. Women and Criminal Justice. Online first. https://doi.org/10.1080/08974554.2019.1646190
Henry, N., & Flynn, A. (2019). Image-based sexual abuse: Online distribution channels and illicit communities of support. Violence Against Women. Online first. https://doi.org/10.1177/1077801219863881
Henry, N., Flynn, A., & Powell, A. (2018). Policing image-based sexual abuse: Stakeholder perspectives. Police Practice and Research: An International Journal, 19(6), 565–581.
Henry, N., Flynn, A., & Powell, A. (2019a). Image-based abuse: Victimisation and perpetration of non-consensual sexual or nude imagery. Trends and Issues in Crime and Criminal Justice, 572.
Henry, N., Flynn, A., & Powell, A. (2019b). Responding to revenge pornography: The scope, nature and impact of Australian criminal laws. A report to the Criminology Research Council. Australian Institute of Criminology.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude and sexual imagery. Routledge.
Henry, N., Powell, A., & Flynn, A. (2015). The promise and paradox of justice: Rape justice beyond the criminal law. In A. Powell, N. Henry, & A. Flynn (Eds.), Rape justice: Beyond the criminal law (pp. 1–17). Palgrave Macmillan.
Martin, N. (2017). Online predators spread fake porn of me. Here's how I fought back [Video]. TED Talk. Accessed 2 June 2021.
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37(3), 534–561.
McGlynn, C., Rackley, E., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). It's torture for the soul: The harms of image-based sexual abuse. Social & Legal Studies. Online first. https://doi.org/10.1177/0964663920947791
Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse. In W. S. DeKeseredy & M. Dragiewicz (Eds.), Handbook of critical criminology (Chapter 25). Routledge.
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian adults. Computers in Human Behavior, 92, 393–402.
Powell, A., Scott, A. J., Flynn, A., & Henry, N. (2020). Image-based sexual abuse: An international study of victims and perpetrators—A summary report. RMIT University.
Rackley, E., McGlynn, C., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2021). Seeking justice and redress for victim-survivors of image-based sexual abuse. Feminist Legal Studies. Online first. https://doi.org/10.1007/s10691-021-09460-8
Part II Framing Gender, Technology & Violence
5 From Individual Perpetrators to Global Mobilisation Strategies: The Micro-Foundations of Digital Violence Against Women

Lilia Giugni
University of Cambridge, Cambridge, UK

Introduction

'Whenever a journalist wants to write about me, I have to warn them of what will happen in the comments section and on social media'—I was once told by Jess Phillips, a British lawmaker who is regularly inundated with rape and death threats sent via Twitter, email, and underneath news pieces that report on her work. After the murder of her close friend Jo Cox (another legislator who used to be aggressively hounded online), Phillips had to install panic rooms in both her house and constituency office. 'Their aim is to silence you'—she said to me as we chatted about her assailants—'to make you feel vulnerable and worthless, and eventually to isolate you from all your support networks'. Indian political commentator Rana Ayyub also attracts constant vitriol on the Internet. But the abuse skyrocketed in 2018, when a 'deepfaked' pornographic
video of her was circulated over WhatsApp and Facebook, and then shared over 40,000 times. The following day, Ayyub was 'doxed'—her phone number was distributed alongside a screenshot of the doctored video. She felt so ill that she had to be immediately hospitalised. Female campaigners and other public-facing women have long been the object of intense psychological, physical, and sexual harassment (Delap, 2020). Starting from the early 2000s, however, a novel, pernicious form of technology-mediated violence has emerged, which contributes to instilling new life into deep-rooted patriarchal oppression. From US Congresswoman Alexandria Ocasio-Cortez to Finnish reporter Jessikka Aro, from Argentinian activists protesting against femicide to their fellow feminists reclaiming the right to drive in Saudi Arabia, politically active women worldwide are being targeted through a variety of digital outlets and techniques. Moreover, there are clear indications that these assaults are often organised in a top-down manner, and involve both male supremacist groups (Gotell & Dutton, 2016) and shrewd political operators seeking to hegemonise disenfranchised segments of the male electorate (Boatright & Sperling, 2019). In the present chapter, I explore these dynamics drawing on two main research streams, which I propose to integrate and expand. Firstly, I discuss the political science literature that investigates violence against women in politics, arguing that the peculiarities and implications of digital outbreaks have so far been at least partially overlooked. Secondly, I briefly review the interdisciplinary scholarship on digital gender-motivated abuse, which has been mostly characterised by an individual-level focus on either perpetrators or survivors. I thus highlight the benefits of connecting these two bodies of knowledge.
In the second part of this chapter, I integrate a set of concepts and propositions from feminist and institutional theories, in order to offer a multi-level conceptualisation of digital assaults on politically engaged women. Specifically, I link micro-aspects of digital interactions to global patterns of white male radicalisation, and to the mobilisation strategies purposefully enacted by high-placed political stakeholders. In doing so, I shed light on key empirical trends, systematise existing knowledge, and advance theory-building efforts on the micro-foundations of patriarchal power and their ties to contemporary socio-political developments.
Misogyny and Gender-Based Abuse in Global Politics

In recent years, a compelling body of research has shed light on the historical and geographical ubiquity of political violence against women (see Krook, 2017 for a detailed synthesis). Mona Lena Krook and her co-authors, for instance, sought to demystify the idea that personal aggressions should be considered part of the 'cost of doing business' for female politicians, campaigners, and pundits (Krook & Sanín, 2020). Instead, they denounced the magnitude and systematicity of these incidents and assessed their repercussions on women's political participation and on democratic health more generally (Krook, 2018). Numerous country experts have contributed to this debate, surveying offensives against South Asian female candidates (Nelson et al., 1994), Middle Eastern women's rights advocates (Agarwal & Wigand, 2012), and Latin American social movement leaders and female voters more generally (Albaine, 2018). Others unveiled attempts to sabotage the activities of women office holders in Europe, Africa, and Australia, and deplored the lack of support available to them from political parties and the state (Goetz, 2003; Shepherd, 2014). A significant contribution of this body of literature has been to present political violence against women as qualitatively and quantitatively different from other violent political events (for example, from generalised hostility against parliamentary representatives in contexts shaped by weak state capacity; see Piscopo, 2016). On the one hand, it has been put forward that women tend to be targeted because of their gender attributes (i.e. their more or less feminine look, supposed sexual desirability, or presumed unsuitability to fulfil stereotyped gender roles), rather than because of their policy positions (Bardall, 2013; Lawless & Fox, 2010).
On the other hand, solid evidence has been provided that women of colour and those belonging to religious or sexual minorities find themselves at the intersection of multiple forms of violations—which qualifies the phenomenon as both a markedly gendered and an intersectional one (Da Silva & Larkins, 2019; Dhrodia, 2018). Other useful considerations can be extracted from analyses of the current surge in so-called ‘men’s rights’ activism, and of its links with
white supremacist and far-right extremism in Europe and the Americas (Kimmel, 2017; Messner, 1998). Tellingly, scholars have found that misogynistic and anti-LGBTQ+ beliefs were a key influence in the revitalisation of several radical right coteries, pointing to the online abuse of female opponents as an important tool in these groups' arsenal (Christensen & Jensen, 2014; Ferber, 1999). Examples of this include members hacking the accounts of women's rights campaigners or progressive female journalists, leaking their private images and personal details, and vilifying them on social media (Bezio, 2018; Hoffman et al., 2020). Recent investigations have also reconstructed the connections between online men's rights activism and the success of political leaders such as Trump, Putin, and Bolsonaro (Bratich & Banet-Weiser, 2019), who made machoistic rhetoric a distinctive trait of their public personas (Aro, 2016; Ott, 2017). Furthermore, pioneering findings on the effects of micro-targeted political messaging suggest that the diffusion of sexist content might be playing an increasing part in the manipulation of political consensus (Bayer, 2020; Till, 2020). Altogether, this rich body of work has provided multifaceted insights and helped document different and unsettling instances of technology-facilitated violence. However, it has also largely failed to recognise the unique traits of digital attacks and their specific impact on the wider trends under analysis. These omissions are important for a number of reasons. The first is empirical and has to do with the distinctive ways in which digital abuse can undermine both women's well-being and democratic fairness. Specifically, due to the anonymity granted by many Internet services and to the reduced sensory feedback typical of online experiences, digital offenders tend to subconsciously de-humanise their victims, which increases their propensity to engage in new violent acts (Suler, 2004).
Research has shown that such powerful psychological effects are particularly relevant to the functioning of male supremacist websites, where users can organise to strike women in a pack-like manner, anonymously and from across different platforms and even countries (Ging & Siapera, 2018; Mantilla, 2015). Importantly, actors connected to Trump’s White House, the Kremlin, and representatives of the Italian and Polish far-right have all been shown to adroitly take advantage of this dynamic, spreading inflammatory online content that instigated some of the
5 From Individual Perpetrators to Global Mobilisation …
79
aggressions and eventually gained them the support of alienated male voters (see Bracciale & Martella, 2017; Graff et al., 2019). A final key point concerns the difficulty of removing offensive materials from the web. This feature makes digital violence an especially effective weapon against female antagonists (Maddocks, 2020), particularly when it entails the non-consensual distribution of sexually explicit images. But there is a second reason why students of political violence might benefit from a more in-depth exploration of digital gender-based abuse. Indeed, specialised research on the phenomenon (which I cursorily summarise in the following section) provides precious micro-level information on the motivations of individual wrongdoers, as well as on the varying, situated responses of survivors. From a theoretical standpoint, therefore, this literature provides an ideal complement to socio-political macro-analyses, bringing to light the anthropological and psychological roots of complex patterns of radicalisation and shifting power relations. As I will discuss later in this chapter, one of the central tasks awaiting scholars working at the intersection of gender, technology, and power in years to come is precisely to shed light on these intricate processes, and to theorise their multiple connections.
Gender-Based Violence 2.0

Over the last decade, experts from a wide range of disciplines have considerably improved our understanding of the violent uses of technology from a gender perspective (see Powell & Henry, 2017). This insightful and diverse scholarship has identified numerous manifestations of digital abuse—including non-consensual and unsolicited pornography, online sexual harassment, cyberstalking, gender-motivated hate speech, ‘grooming’, and other attempts to lure women into sexual exploitation and trafficking (e.g. Citron & Norton, 2011; Henry & Powell, 2016; Latonero et al., 2012). Intriguingly, and depending on their disciplinary and theoretical stance, authors have regarded these disparate occurrences as a pervasive gendered discourse that crosscuts offline and online arenas, a violent form of communication, or a distinctive typology of violence against women (Lewis et al., 2017).
While referring to other chapters in this book for a more extensive review of this body of work, it is worth reporting here the findings of specialised criminologists and cyber-psychologists. For example, much has been written on the motivators of (mostly male) transgressors, and on how online habits altered their perceptions and socialised them into aggressive behaviours (Reed et al., 2016; Vandebosch & Van Cleemput, 2008). In parallel, an equally prolific stream of research has stressed the grave consequences of digital onslaughts on survivors’ mental and physical health, their professional development, and the economic well-being of their communities (see Harris & Woodlock, 2019; Henry & Powell, 2018). The challenges of particular categories of women (non-white women, teenagers, LBTQ+ youth) have also been consistently emphasised (Jackson & Banaszczyk, 2016; Vickery & Everbach, 2018). For their part, media and gender studies scholars have often highlighted opportunities for resistance (Mendes et al., 2018; Williams, 2015), concentrating their attention on the efforts of women wishing to reclaim the Internet for liberating purposes, such as reaching out to fellow feminists or raising awareness about abuse (Jane, 2016; Mendes et al., 2019). Specifically, a plethora of studies has illustrated the survival strategies of victimised female academics (Olson & LaPoe, 2018), journalists (Ferrier & Garud-Patkar, 2018), celebrities (Marwick, 2017), and of survivors of particular violent practices, such as ‘upskirting’, ‘cyberflashing’, and other manifestations of non-consensual pornography (Poole, 2015). Relevant to our discussion, the vast majority of these inquiries reviewed harassment directed against female public figures (Chen et al., 2020; Martellozzo & Jane, 2017), using expressions such as ‘e-bile’ and ‘gender-trolling’ to account for their peculiarities (Jane, 2014; Mantilla, 2013).
Lastly, legal scholars have helped conceptualise such acts as criminal violations, critiquing artificial and unhelpful distinctions between ‘virtual’ and ‘real world’ violations, and pinpointing potential policy responses (Citron, 2009; Stratton et al., 2017). Still, several critical elements are missing from this conversation. To begin with, studies of digital violence, while showcasing the complexities of individual experiences, have often resorted to structural, culture-centred grand theories as ultimate explanations. They have, in other
words, portrayed digital abuse as the newest incarnation of historical, inveterate gendered views, assumed to have been already present in society and later amplified by communication technologies (see Moloney & Love, 2018). This interpretation has many merits, and it is one I share to a large extent. Yet between individual behaviours and background societal beliefs there are all-important meso-level mechanisms, whereby actors at various levels maintain (and can potentially challenge) entrenched patriarchal structures. At the same time, the current literature mostly treats digital technologies as black boxes, supposing they will somehow magnify existing injustice but without explaining why and in which ways this might happen (Faraj et al., 2018). This is unfortunate, as it prevents us from bringing into the picture the gendered processes that inform the production of technology and play a pivotal role in digital gender-based assaults. Overall, my argument here complements and extends the claims formulated in the previous section. I propose that, just as political macro-analysts may find it advantageous to reflect more in-depth on the specificities of technology-mediated violence, digital abuse scholars would benefit from reviewing their findings in light of wider insights from social science grand theories. In so doing, I build on a small but growing body of work that, while focusing specifically on digital gender-based abuse and various forms of online harm, draws on techno-social frameworks and broader theories of the intertwined relationship between technology and society (Powell et al., 2018). Additionally, I argue that we can usefully expand these different bodies of research by conceptualising more carefully the interactions, sites, and mechanisms that tie individual digital practices to societal-level misogyny, and to its global political implications.
In the remainder of the chapter, I thus proceed to list some propositions meant to support such future conceptual efforts.
A Conceptual Toolkit for Future Research

Drawing on an eclectic combination of previous works, I have so far suggested that digital violence against politically active women is both a unique form of gender-based aggression and a purposeful strategy
in the hands of political stakeholders. These may include anti-feminist groups wishing to disempower women collectively, and skilful operators who build political careers out of their relationships with disaffected male constituencies, which they aim to retain and polarise by getting them involved in misogynistic online activities. To clarify the connections between these various aspects, I now introduce some theoretical devices borrowed from feminist and institutional theory. The institutional characteristics of patriarchy and technology. The concept of patriarchy has long allowed feminist movements to make sense of women’s oppression, of its origins and inner workings. While activists and writers have understood the term rather differently through time (see Beechey, 1979), most definitions of patriarchy focus on systemic and resilient power dynamics that let men dominate women. Notably, feminist scholars have also theorised the ways in which patriarchy intersects with other oppressive social structures, such as the class and race systems and various incarnations of the capitalistic economy (Arruzza, 2016; Crenshaw, 1990). For the purpose of our discussion, I find it particularly useful to consider the institutional properties of patriarchy—namely the features that make it pervasive, influential, and ever-present in time and space. As observed by feminist sociologist Patricia Yancey Martin (2004), the defining characteristics of social institutions are their endurance amidst historical conflicts and changes, their ‘taken for grantedness’, and their capacity to inform social identities and power relations; all qualities that undeniably apply to patriarchy. Furthermore, social institutions are generally thought to overlap with one another, and their reciprocal effects to be intensified in the process, as has historically been the case for patriarchy and other systems of repression.
Crucially, technology has also been understood as an influential institution that regulates the lives of people and organisations (Orlikowski & Scott, 2008). Here, the seminal work of techno-feminist scholars is of particular interest (see Wajcman, 1991). Techno-feminists, indeed, concern themselves with the way technological processes and patriarchal hierarchies reciprocally constitute each other (Wajcman, 2010). Consciously or not—their analyses show—gender and intersectional inequalities in digital education or the tech sector get gradually embedded in technological artefacts. This, in turn, increases
the vulnerability of women and other historically oppressed groups, and their exclusion from the benefits stemming from technological innovation (Noble, 2018). Cases in point are supposedly gender-neutral technologies like dating applications, where female users report experiencing various types of violence, including sexual harassment and unsolicited pornography (Thompson, 2018). At the same time, gender injustice in technological sites is also tied to global political relations, with governments and legislative assemblies acting as key decision-makers in the regulation of digital platforms and their role in democratic processes (Kreiss & McGregor, 2019). With all this in mind, I propose to look at patriarchy and technology as mutually supporting institutions. This, I argue, enables us to theorise in a nuanced manner the avenues whereby technological development influences gender relations. Patriarchy’s (digital) micro-foundations. Scholars interested in social institutions use the idea of institutional micro-foundations to explain how everyday actions inform the institutionalisation of broader processes (Powell & Rerup, 2017). This helps them rationalise in a sophisticated fashion the relationship between structural factors and various forms of individual and collective agency, as well as evaluate the often incongruent outcomes of entangled causal chains (Harmon et al., 2019). In this context, my proposition is to look at digital violence against politically active women as one of the above-mentioned intermediate-level mechanisms, through which individual misogynistic behaviours eventually help preserve patriarchal oppression. The advantage of this conceptualisation is, I think, threefold.
First, while recognising the resilience of patriarchal relations, this approach allows us to see individual perpetrators not simply as receptacles of diehard gendered norms, but also as purposeful agents who help reproduce a complex system (Martin, 2006). Second, the micro-foundations framework acknowledges the importance of meso-levels of analysis and organised strategic action. In so doing, it correctly differentiates between, say, an individual man who is radicalised online and participates in a targeted campaign against a public-facing woman, the male supremacy groups that recruited and politicised him, and the powerful actors who may gain from his actions (say, politicians who
seek his vote, and tech platforms that fail to prevent digital violence, see Cole, 2018; Keilty, 2018). Third, this perspective gives us further insights into the relationship between patriarchy and technology. To put it differently, digital violence can be seen as a mechanism that reproduces not only patriarchal domination, but also the gendered traits of technological processes. In particular, online attacks perpetuate stereotyped thinking about the Internet as a male space, rife in early digital communities as well as the tech industry. This can discourage women from engaging with new digital trends, continuing a state of segregation that, as we saw, then gets encoded into material technological artefacts. The circle of gender-based exclusion can thus go on. Gendered forms of institutional policing. Another advantageous tool in the study of digital gender-based violence is the notion of policing as a technique of institutional maintenance. Students of social institutions as varied as markets, families, and religions have found that disciplinary enforcement, both in subtle and psychological forms and of a manifest and extremely violent nature, is central to their endurance. Patriarchy, it has been argued, is of course no exception (Risman, 2004). The most obvious way in which the concept of policing applies to digital abuse concerns its use to ‘punish’ female transgressors of patriarchal norms (say, women who dare to express themselves publicly both online and offline). As we saw, there is evidence that this acts as an effective deterrent, at least in the short term: digital violence victims often temporarily stop engaging in political activism (Krook & Sanín, 2016). However, as I have already discussed at length in this chapter, while digital violence may well be directed against specific individuals, it also serves as an arena for the construction of toxic masculine identities and their recruitment into political projects.
Seen in this light, digital attacks are clearly involved in other, more complex types of policing. To begin with, they reinforce symbolic boundaries between different typologies of masculinities. White men, for example, when radicalised in white and male supremacy online communities, develop a perception of themselves as being entitled to both sexual and racial privileges, as well as close bonds with other members of this privileged category (O’Neill, 2018). Additionally, perpetrators who strike female political opponents gain from this a distorted sense of power that is
in itself an incentive-based form of policing (Jones et al., 2020). As a result, patriarchal norms are enforced both within Internet communities, and—relatedly—in connected political and social groups (Dignam & Rohlinger, 2019). Conceptualising space for resistance. Finally, ending on a brighter note, incorporating inputs from the theoretical perspectives I have reviewed may also illuminate pathways towards positive change. For one, the idea that meaningful institutional transformations can be generated by the everyday acts of individuals is central to both feminist and institutional theory. The main principle here is that actors may successfully challenge gender injustice by working to redefine the meanings, symbols, identities, and socio-material relationships that uphold the patriarchal regime (Martin, 2003). A thorough analysis of resistance strategies against digital gender-based violence falls outside the scope of this contribution. Yet I believe that these reflections can help draw attention to the connections between plans of action not always linked to one another. These include efforts to regulate online political campaigns, projects centred around disenfranchised male constituencies, and initiatives aimed at supporting gender-equal representation in both politics and tech (see Weber, 2006).
Conclusion

Throughout this chapter, my goal has been to bridge conversations around digital violence that involve social researchers from different backgrounds. I have juxtaposed the macro-inquiries of political scientists interested in attacks against women in politics with the specific findings of research on digital gender-based abuse. I have also integrated ideas from feminist and institutional scholarship, which have enabled me to clarify the relationship between the patriarchal system and technological innovation. Based on this, I have suggested looking at patriarchy and technology as two mutually reinforcing institutions, and considering their micro-foundations—namely, the processes that lead individual actors, who enjoy different degrees of agency, to act upon existing patriarchal relations and technological processes. I have then described digital gender-based violence as a mechanism functional to the preservation
of patriarchy, as it not only deters female transgressors but also strengthens dominant forms of masculinities and male bonds. Finally, I have underlined the intersections between online male radicalisation (of which digital gender-based violence is a chief manifestation) and the emergence of hypermasculine models of leadership and political discourses worldwide. I have noted the circular repercussions this has on women’s rights, and explored the potential for such vicious circles to be broken. In conclusion, I hope to have provided valuable food for thought for both digital violence experts and those who have just started to familiarise themselves with the phenomenon.

Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Agarwal, N., Lim, M., & Wigand, R. T. (2012). Online collective action and the role of social media in mobilizing opinions: A case study on women’s right-to-drive campaigns in Saudi Arabia. In C. G. Reddick & S. K. Aikins (Eds.), Web 2.0 technologies and democratic governance (pp. 99–123). Springer. Albaine, L. (2018). Estrategias legales contra la violencia política de género: Las oportunidades de acción. La ventana: Revista de Estudios de Género, 6 (48), 264–293. Aro, J. (2016). The cyberspace war: Propaganda and trolling as warfare tools. European View, 15 (1), 121–132. Arruzza, C. (2016). Functionalist, determinist, reductionist: Social reproduction feminism and its critics. Science & Society, 80 (1), 9–30. Bardall, G. (2013). Gender-specific election violence: The role of information and communication technologies. Stability: International Journal of Security and Development, 2(3). Bayer, J. (2020). Double harm to voters: Data-driven micro-targeting and democratic public discourse. Internet Policy Review, 9 (1), 1–17. Beechey, V. (1979). On patriarchy. Feminist Review, 3(1), 66–82.
Bezio, K. M. (2018). Ctrl-Alt-Del: GamerGate as a precursor to the rise of the alt-right. Leadership, 14 (5), 556–566. Boatright, R. G., & Sperling, V. (2019). Trumping politics as usual: Masculinity, misogyny, and the 2016 elections. Oxford University Press. Bracciale, R., & Martella, A. (2017). Define the populist political communication style: The case of Italian political leaders on Twitter. Information, Communication & Society, 20 (9), 1310–1329. Bratich, J., & Banet-Weiser, S. (2019). From pick-up artists to incels: Con (fidence) games, networked misogyny, and the failure of neoliberalism. International Journal of Communication, 13, 25. Chen, G. M., Pain, P., Chen, V. Y., Mekelburg, M., Springer, N., & Troger, F. (2020). ‘You really have to have a thick skin’: A cross-cultural perspective on how online harassment influences female journalists. Journalism, 21(7), 877–895. Christensen, A. D., & Jensen, S. Q. (2014). Combining hegemonic masculinity and intersectionality. NORMA: International Journal for Masculinity Studies, 9 (1), 60–75. Citron, D. K. (2009). Law’s expressive value in combating cyber gender harassment. Michigan Law Review, 108, 373. Citron, D. K., & Norton, H. (2011). Intermediaries and hate speech: Fostering digital citizenship for our information age. BUL Review, 91, 1435. Cole, M. (2018). Trump, the Alt-right and public pedagogies of hate and for fascism: What is to be done? Routledge. Crenshaw, K. (1990). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43, 1241. Da Silva, A. J. B., & Larkins, E. R. (2019). The Bolsonaro election, antiblackness, and changing race relations in Brazil. The Journal of Latin American and Caribbean Anthropology, 24 (4), 893–913. Delap, L. (2020). Feminisms: A global history. Penguin. Dhrodia, A. (2018). Unsocial media: A toxic place for women. IPPR Progressive Review, 24 (4), 380–387. Dignam, P. A., & Rohlinger, D. A. (2019). 
Misogynistic men online: How the red pill helped elect trump. Signs: Journal of Women in Culture and Society, 44 (3), 589–612. Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning algorithm. Information and Organization, 28(1), 62–70. Ferber, A. L. (1999). White man falling: Race, gender, and white supremacy. Rowman & Littlefield Publishers.
Ferreday, D. (2013). Afterword: Digital relationships and feminist hope. In Digital sociology (pp. 51–57). Palgrave Macmillan. Ferrier, M., & Garud-Patkar, N. (2018). TrollBusters: Fighting online harassment of women journalists. In J. R. Vickery & T. Everbach (Eds.), Mediating misogyny (pp. 311–332). Palgrave Macmillan. Ging, D., & Siapera, E. (2018). Special issue on online misogyny. Feminist Media Studies, 4 (18), 515–524. Goetz, A. M. (2003). Women’s political effectiveness: A conceptual framework. No shortcuts to power: African women in politics and policy making, 2003. Gotell, L., & Dutton, E. (2016). Sexual violence in the ‘manosphere’: Antifeminist men’s rights discourses on rape. International Journal for Crime, Justice and Social Democracy, 5 (2), 65. Graff, A., Kapur, R., & Walters, S. D. (2019). Introduction: Gender and the rise of the global right. Signs: Journal of Women in Culture and Society, 44 (3), 541–560. Harmon, D. J., Haack, P., & Roulet, T. J. (2019). Microfoundations of institutions: A matter of structure versus agency or level of analysis? Academy of Management Review, 44 (2), 464–467. Harris, B. A., & Woodlock, D. (2019). Digital coercive control: Insights from two landmark domestic violence studies. The British Journal of Criminology, 59 (3), 530–550. Henry, N., & Powell, A. (2016). Sexual violence in the digital age: The scope and limits of criminal law. Social & Legal Studies, 25 (4), 397–418. Henry, N., & Powell, A. (2018). Technology-facilitated sexual violence: A literature review of empirical research. Trauma, Violence, & Abuse, 19 (2), 195–208. Hoffman, B., Ware, J., & Shapiro, E. (2020). Assessing the threat of incel violence. Studies in Conflict & Terrorism, 43(7), 565–587. Jackson, S. J., & Banaszczyk, S. (2016). Digital standpoints: Debating gendered violence and racial exclusions in the feminist counterpublic. Journal of Communication Inquiry, 40 (4), 391–407. Jane, E. A. (2016). Online misogyny and feminist digilantism. 
Continuum, 30 (3), 284–297. Jane, E. A. (2014). ‘Back to the kitchen, cunt’: Speaking the unspeakable about online misogyny. Continuum, 28(4), 558–570. Jones, C., Trott, V., & Wright, S. (2020). Sluts and soyboys: MGTOW and the production of misogynistic online harassment. New Media & Society, 22(10), 1903–1921.
Keilty, P. (2018). Desire by design: Pornography as technology industry. Porn Studies, 5 (3), 338–342. Kimmel, M. (2017). Angry white men: American masculinity at the end of an era. Hachette UK. Kreiss, D., & McGregor, S. C. (2019). The “arbiters of what our voters see”: Facebook and Google’s struggle with policy, process, and enforcement around political advertising. Political Communication, 36 (4), 499–522. Krook, M. L. (2017). Violence against women in politics. Journal of Democracy, 28(1), 74–88. Krook, M. L. (2018). Violence against women in politics: A rising global trend. Politics & Gender, 14 (4), 673–675. Krook, M. L., & Restrepo Sanín, J. (2016). Violence against women in politics: A defense of the concept. Política y gobierno, 23(2), 459–490. Krook, M. L., & Sanín, J. R. (2020). The cost of doing politics? Analyzing violence and harassment against female politicians. Perspectives on Politics, 18(3), 740–755. Latonero, M., Musto, J., Boyd, Z., Boyle, E., Bissel, A., Gibson, K., & Kim, J. (2012). The rise of mobile and the diffusion of technology-facilitated trafficking (p. 43). University of Southern California: Center on Communication Leadership & Policy. Lawless, J. L., & Fox, R. L. (2010). It still takes a candidate: Why women don’t run for office. Cambridge University Press. Lewis, R., Rowe, M., & Wiper, C. (2017). Online abuse of feminists as an emerging form of violence against women and girls. British Journal of Criminology, 57 (6), 1462–1481. Maddocks, S. (2020). ‘A Deepfake Porn Plot Intended to Silence Me’: Exploring continuities between pornographic and ‘political’ deep fakes. Porn Studies, 7 (4), 415–423. Mantilla, K. (2013). Gendertrolling: Misogyny adapts to new media. Feminist Studies, 39 (2), 563–570. Mantilla, K. (2015). Gendertrolling: How misogyny went viral. ABC-CLIO. Martellozzo, E., & Jane, E. A. (Eds.). (2017). Cybercrime and its victims. Routledge. Martin, P. Y. (2003).
“Said and done” versus “saying and doing”: Gendering practices, practicing gender at work. Gender & Society, 17 (3), 342–366. Martin, P. Y. (2004). Gender as social institution. Social Forces, 82(4), 1249–1273. Martin, P. Y. (2006). Practising gender at work: Further thoughts on reflexivity. Gender, Work & Organization, 13(3), 254–276.
Marwick, A. E. (2017). Scandal or sex crime? Gendered privacy and the celebrity nude photo leaks. Ethics and Information Technology, 19 (3), 177–191. Mendes, K., Ringrose, J., & Keller, J. (2018). #MeToo and the promise and pitfalls of challenging rape culture through digital feminist activism. European Journal of Women’s Studies, 25 (2), 236–246. Mendes, K., Ringrose, J., & Keller, J. (2019). Digital feminist activism: Girls and women fight back against rape culture. Oxford University Press. Messner, M. A. (1998). The limits of “the male sex role”: An analysis of the men’s liberation and men’s rights movements’ discourse. Gender & Society, 12(3), 255–276. Moloney, M. E., & Love, T. P. (2018). Assessing online misogyny: Perspectives from sociology and feminist media studies. Sociology Compass, 12(5), 12577. Nelson, B. J., Chowdhury, N., & Caudhurī, N. (Eds.). (1994). Women and politics worldwide. Yale University Press. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press. O’Neill, R. (2018). Seduction: Men, masculinity and mediated intimacy. Wiley. Olson, C. C., & LaPoe, V. (2018). Combating the digital spiral of silence: Academic activists versus social media trolls. In J. R. Vickery & T. Everbach (Eds.), Mediating misogyny (pp. 271–291). Palgrave Macmillan. Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of technology, work and organization. Academy of Management Annals, 2(1), 433–474. Ott, B. L. (2017). The age of Twitter: Donald J. Trump and the politics of debasement. Critical Studies in Media Communication, 34 (1), 59–68. Piscopo, J. M. (2016). State capacity, criminal justice, and political rights: Rethinking violence against women in politics. Política y Gobierno, 23(2), 437–458. Poole, E. (2015). Fighting back against non-consensual pornography. USF Law Review, 49, 181. Powell, A., & Henry, N. (2017). Sexual violence in a digital age. Springer.
Powell, A., Stratton, G., & Cameron, R. (2018). Digital criminology: Crime and justice in digital society. Routledge. Powell, W. W., & Rerup, C. (2017). Opening the black box: The microfoundations of institutions. In R. Greenwood, C. Oliver, T. B. Lawrence, & R. E. Meyer (Eds.), The Sage handbook of organizational institutionalism, 2 (pp. 311–337). Sage.
Reed, L. A., Tolman, R. M., & Ward, L. M. (2016). Snooping and sexting: Digital media as a context for dating aggression and abuse among college students. Violence Against Women, 22(13), 1556–1576. Risman, B. J. (2004). Gender as a social structure: Theory wrestling with activism. Gender & Society, 18(4), 429–450. Shepherd, L. J. (ed.). (2014). Gender matters in global politics: A feminist introduction to international relations. Routledge. Stratton, G., Powell, A., & Cameron, R. (2017). Crime and justice in digital society: Towards a ‘digital criminology’? International Journal for Crime, Justice and Social Democracy, 6 (2), 17. Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7 (3), 321–326. Thompson, L. (2018). “I can be your Tinder nightmare”: Harassment and misogyny in the online sexual marketplace. Feminism & Psychology, 28(1), 69–89. Till, C. (2020). Propaganda through ‘reflexive control’ and the mediated construction of reality. New Media & Society, 1461. Vandebosch, H., & Van Cleemput, K. (2008). Defining cyberbullying: A qualitative research into the perceptions of youngsters. CyberPsychology & Behavior, 11(4), 499–503. Vickery, J. R., & Everbach, T. (2018). Mediating misogyny. Palgrave Macmillan. Wajcman, J. (1991). Patriarchy, technology, and conceptions of skill. Work and Occupations, 18(1), 29–45. Wajcman, J. (2010). Feminist theories of technology. Cambridge Journal of Economics, 34 (1), 143–152. Weber, J. (2006). From science and technology to feminist technoscience. Handbook of gender and women’s studies. Sage. Williams, S. (2015). Digital defense: Black feminists resist violence with hashtag activism. Feminist Media Studies, 15 (2), 341–344.
6 Alternate Realities, Alternate Internets: African Feminist Research for a Feminist Internet

Neema Iyer
Introduction

For the past decade, internet connectivity has been praised for its potential to close the gender gap in Africa (Wagacha, 2019). Even though the rise of digitalisation and the spread of digital technologies can provide new possibilities for being and knowing, it is important to be attentive to how power is shaped, embedded and wielded in these technologies and discourse (Chun, 2006). Online spaces can be mobilised both to resist and to reinforce hierarchies of gender and race, including through violence enacted online. Online gender-based violence is commonly defined as an action facilitated by one or more people that harms others based on their sex or gender identity, or by enforcing harmful gender norms, which is N. Iyer (B) Kampala, Uganda e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_6
carried out by using the internet or mobile technology (Hinson et al., 2018). This includes stalking, bullying, sexual harassment, defamation, hate speech and exploitation, or any other online controlling behaviour (Hinson et al., 2018). Online gender-based violence has been found to be just as damaging to women as physical violence (EWL, 2017). Women are more likely to be repeat victims of online gender-based violence and are more likely to experience it in greater severity than men (Woodlock, 2016). Experiences of online violence also differ, and can be compounded, at the intersection of gender and race as well as other oppressions. For example, a 2018 survey of 778 women by Amnesty International found that Black women journalists and politicians in the US and UK were 84% more likely to be the target of hate speech online compared to their white counterparts. Overall, women of colour were found to be 34% more likely to be targeted with abusive language on Twitter (Amnesty International, 2018a). Meanwhile, in a study of online violence against women journalists in Pakistan, eight out of ten women reported self-censoring in an attempt to counter online violence, and three out of ten experienced more severe crimes such as incitement to violence offline (Kamran, 2019). This self-censorship, in turn, reduces the diversity of information, news and stories the public receives, as women’s voices are silenced. This chapter seeks to examine the intersections of gender and race in women’s experiences of online violence and abuse, as well as the opportunities for online spaces to promote greater independence and freedom for women. It does so through a discussion of research conducted by Pollicy, a civic technology organisation based in Kampala, Uganda.
The research focused on the online lived experiences of women living across five sub-Saharan African countries to illustrate that repeated negative encounters fundamentally impact how women navigate and utilise the internet. The chapter presents data on the prevalence of, experiences of and responses to online gender-based violence against women, and examines the current legislative frameworks in place within the five countries to respond to these harms. Qualitative and quantitative data from Ethiopia, Kenya, Uganda, Senegal and South Africa inform the argument for a radical shift towards developing alternate digital networks grounded in feminist theory.
6 Alternate Realities, Alternate Internets …
First, however, the chapter provides a framework for this research by foregrounding marginalised women's experiences of online space more broadly.
Gender, Violence and Marginalisation Online

In the sub-Saharan African context, there are at least three main ways in which women's experiences of online violence are marked by minimisation, marginalisation and exclusion, though these arguably also hold relevance elsewhere in the world. First, despite the widespread and severe nature of online gender-based violence, there is a significant gap in data on the prevalence of all types of online violence against women and girls in low- and middle-income countries. Furthermore, where such evidence is available, the data is rarely gender-disaggregated, nor does it consider the intersectional impacts of class, disability, refugee status or residence in traditionally marginalised areas. For example, the South Sudanese organisation Access for All conducted research showing that urban refugees in Uganda were disproportionately targeted with online violence. Interviews with urban refugees in Uganda showed that three in four respondents had experienced some form of online violence, including abuse, stalking, unwarranted sexual advances and hacking of social media accounts (Kalemera, 2019). Other gaps include a lack of gendered analysis of online harassment, a lack of comparative analysis of online versus physical violence against women, and a lack of documented perspectives of women who have experienced online GBV (Lewis et al., 2017). Of course, data alone cannot lead to an appropriate response; the role of data must be contextualised rather than perceived as a panacea. Furthermore, it is essential to consider revictimisation and the impacts of laying the onus of documentation upon victims themselves. Nonetheless, understanding women's online experiences and preferences is a fundamental building block towards creating a feminist internet.
A second marginalisation is that, though online violence overall is poorly documented and understood, emerging international evidence highlights a blurring between violence in online and offline
spaces. In other words, violence that begins online can continue offline, and vice versa. For example, in a study of 5647 youth in the US, victims of sexual cyber dating abuse were seven times more likely to have also experienced in-person sexual coercion (Zweig et al., 2010). In a study with Lesbian, Bisexual, Queer (LBQ) Womxn and Female Sex Workers (FSW) in Uganda, online violence often occurred as a result of failing relationships in the real world (Kemigisha & Kwikiriza, 2021). In the case of FSW, this abuse was largely perpetrated by clients to manipulate them into agreeing to acts they were uncomfortable with, or by (ex-)partners to “shame and out them.” Thirdly, online violence, harassment and abuse arguably contribute to an overall culture in which such harms are normalised, tolerated and indeed deemed inevitable, both in online and offline spaces (Fraser & Martineau-Searle, 2018). Tolerance for violence against women online can be viewed both as a symptom of, and as a further contributor to, a broader cultural and social tolerance of violence against women generally. As such, to understand the experiences of African women in online spaces, we can no longer view online violence as isolated incidents removed from existing gendered power relations and structural frameworks (WLB, 2015). Just as discriminatory gendered practices in the physical world are shaped by social, economic, cultural and political structures, so too are these reproduced online across digital platforms (Kee, 2006). Though there is little doubt that African women experience multiple marginalisations in online space, it does not follow that there are no positive experiences or opportunities for empowerment online. Diverse groups of women react differently to, benefit differently from, and are fundamentally treated differently on, various technology platforms.
For some women, digital spaces are a source of entertainment, while for others they are a source of economic stability or exploitation. The way a company CEO utilises the internet differs from how their household staff might use it. For women who have been traditionally marginalised or silenced, the internet can be a new space in which to convene and collaboratively push for the betterment of their communities. Indeed, women's interactions with technology may allow new forms of self-expression and identity. Sherry Turkle (1995) proposed that technology enables
a multiplicity of self-expression, allowing people to enact identities that may not be available in offline spaces. However, we also know that such potential is curbed by conscious design practices. The internalised cultural beliefs of developers (most often male, and themselves relatively privileged) are reproduced through technologies, and in turn, identities and social relations are forcefully re-established and re-codified (Balsamo, 2011). These practices are vital to social media and similar platforms because this sorting of users by gender is essential for digital monetisation and targeted advertising. Yet this constant feedback loop, enforced through ubiquitous surveillance, ensures that we conform to our gender identities by perpetually conditioning us via suggestions or recommendations in our newsfeeds, timelines and inboxes (Cheney-Lippold, 2011). Recent examples of technological reinforcement of gender inequality by design include the withholding of financial services advertising from older and female users by Facebook, facial recognition technologies that disproportionately misidentify women of colour, and the biased screening of women on automated hiring platforms (Bryson et al., 2020).
Findings from a Five-Country Study of the Online Experiences of Sub-Saharan African Women

In order to understand the lived experiences of African women in online spaces, Pollicy, a civic technology organisation based in Kampala, conducted a broad-based research study in five countries: Ethiopia, Kenya, Senegal, South Africa and Uganda. In-depth interviews and focus groups were carried out across Addis Ababa, Nairobi, Dakar, Johannesburg and Kampala. Women were asked about their experiences online, their perceptions of online violence against women, and their digital safety practices. In the sections that follow, I outline some of our main findings.
Perceptions of Safety Online

Women are growing increasingly concerned about their safety in online spaces. More than half of the women interviewed attribute this change to either having experienced or witnessed accounts of online violence and attacks. Respondents reported how the online environment replicates patriarchal power imbalances and the acceptance of online gender-based violence.

I personally think most people are subject to this violence or harassment, but we either don't know it, or we normalise it. I haven't considered my case as online violence related to gender until you told me so. –IDI 002, Ethiopia

The misogyny. There are no consequences. Patriarchy is the structure that all other structures stem from. We get social tools. Now, they are able to access women they didn't think they could access. The whole idea of how women are seen or expected to be in society is still very much present. –IDI 002, Kenya
Among those who did take concrete steps to increase their safety, the primary action was to change their passwords. Fewer reported blocking or unfollowing suspicious persons, using VPNs or updating their applications. Women who did not take measures to increase their safety online downplayed their importance or self-worth, responding that they had “never thought about it” or that “no one would take the time to hack my account.” Despite such perceptions, there is a crucial need for appropriate information on digital safety, with the current reach of organisations working on digital rights remaining limited. A significant proportion of women interviewed did not know where to turn for information on online safety and security. For example:

Before, it was terrifying, and I would stay away from social media. But that's what they want. If I feel the need to respond, then I do. Otherwise, I ignore it. I've become a bit hardened. I would cry and get depressed about the hate. I don't bother with reporting any more. A lot of these hacks happen to us as individuals, but I ignore. I only just keep changing my password to avoid being hacked. –IDI 004, Kenya
Among those women who said that they did know where to access digital safety information, common responses were that they would research on Google or ask a friend. A similar number of women responded that they might go to local authorities, such as the police, for further information on how to stay safe online; however, most local authorities are not trained to tackle these issues, much less from a gender-sensitive perspective. There is an urgent, ongoing need for resources to be more readily available, adapted to local languages, and further mainstreamed in the educational curricula of schools and other academic institutions. Lack of knowledge of resources persisted regardless of educational background and access to resources.
Experiences of Online Gender-Based Violence

Women reported online violence as widespread and common in their communities across all five countries (see Image 1). Among the women who reported having experienced some form of online violence, these incidents most frequently took the form of sexual harassment. Other forms of online violence included unwelcome sexual advances, offensive name-calling, stalking, repeated contact and doxing.

Technology has opened up spaces for women to speak out openly where previously they were not able to. So, the more we try to enter this space further, the more violence we get from some men. –Focus Group Discussion, South Africa
These findings were echoed by other studies, including Amnesty International's 2018 study of abuse and harassment on social media platforms, which found that twenty-three per cent (23%) of women surveyed had experienced online abuse at least once (Amnesty International, 2018b).
Without question (online gender-based violence is common in Ethiopia). Even I heard a lot of cases around my area. Especially in universities and high schools, the problem becomes out of control. –IDI 001, Ethiopia
The vast majority of online gender-based violence incidents against our respondents occurred on Facebook. In Kenya, Uganda, Senegal and South Africa, this violence was primarily located on Facebook and WhatsApp (see Image 2). In Ethiopia, Facebook and, additionally, Telegram were the leading platforms where women experienced online violence.

From my stand, I would say Telegram is unmanageable communication media. It is worrying and out of regulation. You can't do anything about it. We can say the Facebook community standard is weak, but the good thing is it has a regulatory system. –FGD Participant, Ethiopia
Two exacerbating factors help explain such widespread violence on these platforms: the lack of identity verification on Facebook, which helps preserve perpetrator anonymity, and the absence of clear community standards on closed platforms such as Telegram. While respondents suffered offensive name-calling and threats on Facebook, they were often doxed on Telegram and could not remove this content. The power of the administrators and the perpetrators, and the lack of agency among platform users, are of particular importance in this case. Administrators continue to uphold the ideals of patriarchy, which places power in men's hands in cultural, social and political spaces, with little recourse to justice afforded to female victims. In such situations, abuse can be “performative” (Lewis et al., 2017) and a source of social clout that builds up the status and identity of the perpetrator. In both situations, the final outcome is the silencing of women and their dismissal from digital spaces. Looking deeper, perpetrators use three overlapping strategies, namely intimidation, shaming and discrediting, to limit the voices, power and influence of women in digital spaces (Sobieraj, 2018).
Coordinated Assaults and Violence in Online Spaces

In most cases where respondents had experienced online gender-based violence, only one specific person was responsible for the incidents. However, in a significant proportion of cases (23%), multiple people were involved in the online attack. Organised trolling has been on the rise, especially against women with public-facing careers such as journalists, media personalities, activists and politicians. A common recommendation for curbing so-called “trolling” is simply to ignore the harassment (e.g. “Don't feed the trolls” [Sanfilippo et al., 2017]). However, such views are tantamount to victim-blaming, and there is a growing view that this approach is ineffective (Golf-Papez & Veer, 2017).

Online threats are mainly organised trolling. I've received death threats. They come up with campaigns or a hashtag, so they rant at me all day. These insults are based on me as a woman, my anatomy, my family. They will use parts of the female body. The insults are so personal. There is a time I considered leaving Facebook and Twitter because the trolling became so bad. They use money and people to troll me online. –IDI 004, Kenya
Among numerous respondents, coordinated trolling was organised around physically hurting women, with some perpetrators going so far as to call for the murder of the targeted women. These calls for violence happened both on open platforms such as Twitter, where they could be reported, and on closed platforms such as WhatsApp, where moderation is not possible.

Woman journalists, when they post or write stories that some other people do not like, they are harassed, they are insulted. To make them feel so useless. They get insults below the belt, which is obviously sexual harassment. To humiliate a woman, they must go below the belt. Women journalists and politicians. It is terrible. You find in most times, women opt out of social media. At least in Kenya. Including myself, that's why I opted out of Facebook. –IDI 003, Kenya
Impact of Online Gender-Based Violence on Women

The detrimental impact of online gender-based violence on women is well documented in both the short and long term. In 2014, UNICEF reported that the risk of a suicide attempt is more than double for victims of cyber harassment compared to non-victims (UNICEF, 2014). Online violence differs from offline violence in the tendency of the content to endure online. Even years after the initial incidents, women would continue receiving phone calls and messages related to the attack. Because “the internet never forgets,” these violent incidents enter a permanent domain and can resurface repeatedly to revictimise the women who have experienced them. Furthermore, women who have experienced online violence have reported its impact on their mental health, including but not limited to suffering from depression, anxiety, fear and an overall sense of powerlessness (WLB, 2015). In our Pollicy study, more than half of the women surveyed in Senegal reported suffering from mental stress and anxiety. In Ethiopia, key issues included problems with friends and family, damage to reputation and workplace problems. For example:

I didn't have self-esteem and confidence because I felt like everybody was staring at me because of this incident. Maybe some students in the university are members of that group. Can you imagine what people are thinking about me after they saw the post? OMG, that was the hardest time I ever had in my life. –IDI 005, Ethiopia
Cases of violence were also not isolated incidents. In Kenya, for instance, harassment could last up to a month for those who had suffered online abuse. Almost half of all respondents who suffered some form of violence believed that their gender was a primary reason for these attacks. There is an immediate need for appropriate counselling tailored to the needs of women who have experienced online violence.
Responding to and Addressing Online Gender-Based Violence

The global proportion of women using the internet is twelve per cent (12%) lower than that of men. This gap widens further in global south countries, reaching almost thirty-three per cent (32.9%). Our study found that while some women respond to violence online by blocking perpetrators, others choose to leave online (and offline) spaces themselves, or advise others to avoid digital spaces altogether. Some women who would have been new users choose not to access the internet at all out of fear (Web Foundation, 2015). By far the most common response to online violence was blocking the abusers (see Image 3). A smaller subset of women deleted or deactivated their accounts, and some stopped using a digital service entirely after experiencing online violence. Our findings clearly showed that the damage did not stop there. The lack of support that women face in tackling online gender-based violence, coupled with the fear, shame and anguish experienced at the hands of perpetrators, could lead women to take drastic actions such as suicide. This response can be interpreted not only as yet another form of self-censorship and restriction on women's freedom of expression, but also as the complete erasure of women's digital identities and presence. A single negative experience, or repeated adverse interactions, in online spaces can severely impact the engagement and participation of women on digital platforms, leading in some cases to their complete absence.

After that incident, I immediately stopped using social media, especially Facebook, for around a year. –IDI 005, Ethiopia
In Uganda, male perpetrators committed the vast majority of reported online violence cases, usually acting as lone abusers. In Ethiopia, however, almost all respondents who experienced online violence either did not know anything about the identity of the perpetrator or found the perpetrator to be a stranger. Because of the anonymity, propagation and perpetuation of online gender-based violence, it is difficult to identify both the primary perpetrator, i.e. the person initiating the
violence, and the secondary perpetrators, i.e. the persons who negligently or recklessly download, share or like offending data or information. By sharing content through screenshots, likes, retweets and even comments, everyone and no one becomes the perpetrator simultaneously. These findings are particularly important, as it follows that legislative measures against online gender-based violence are ineffectual when it cannot be proven who the perpetrators are.

The anonymity of the discussion allows for certain people with the tendency to abuse you. Gender is a weakness they perceive. Especially, with the macho men. –Focus Group Discussion, South Africa
As indicated earlier in Chart 4, only a small percentage (12.4%) of the women who had suffered online violence reported the incident to the website or online platform. In Senegal, while the number of respondents who reported to online platforms was minimal, it was encouraging to note that the majority of those respondents said that the offending content was taken down. Overall, reported incidents of online gender-based violence remained largely unresolved across the five countries. The picture was most difficult in Uganda, where well over half of all cases had reached no resolution. South Africa and Kenya stood out in terms of weak or unsatisfactory responses to reported cases. These results show that few women turn to technology platforms for mediation, and even fewer of those who do get desirable results. Social media platforms like Facebook and Twitter have historically ignored African markets and devote fewer resources to contextualising their products for these diverse markets, including processing African languages and attempting to grasp nuance and convention. Despite serving a population of 1.2 billion, the number of staff within these companies dedicated to and working from Africa is negligible. In terms of legislative response, meeting the needs of African women facing online gender-based violence has been slow and inefficient. Violence against women online is often trivialised, with insufficient punitive action taken by authorities, further exacerbated by victim-blaming (Women's Legal and Human Rights Bureau, Inc, 2015). While
some countries, mostly outside Africa, have attempted to address online violence through legal and other means, the enforcement of such measures has proven difficult due to a lack of appropriate mechanisms, procedures and capacity (Aziz, 2013). In some instances, women who have had their information shared without their consent have even been punished by the law. One such victim in Uganda, Desire Luzinda, made a public apology after an incident in 2014 and was charged with exhibition of pornographic material contrary to section 13 of the Anti-Pornography Act (Sullivan, 2014). Legal frameworks are rarely fully representative of the practical realities in any country (Nwaodike & Naidoo, 2020). For example, South Africa boasts “progressive” gender-based violence laws yet maintains one of the highest gender-based violence incidence rates in the world (Saferspaces, 2020). Specific laws addressing online gender-based violence are not the ultimate solution to a systemic problem and need to be adopted in conjunction with other measures, such as socio-economic upliftment, awareness-raising programmes and counselling services. However, irrespective of such legislation, it is clear that a lack of will to address online gender-based violence dilutes any potential deterrent effect that criminal laws may have on the perpetration of online violence. The ineffectiveness of ambiguous laws is a further problem. While both commendable and questionable laws may exist, such as those placing a duty on internet service providers (ISPs) to assist courts, underreporting and trivialisation by law enforcement remain challenges. The limited portfolio of civil remedies means that governmental strategies for eliminating online gender-based violence are not survivor-centred. Instead, online gender-based violence is mostly left to a criminal justice system designed on philosophies of punishment, a system in which women are often the victims.
In many cases, legislation is simply non-existent and, where present, does not provide protection or justice for women. Most women were either unaware of the laws that exist to protect them or had tried to approach law enforcement unsuccessfully. Some respondents who approached police were insulted, laughed at, or told to return if the case escalated further.
Women are not reporting even domestic violence because of the culture and the norm. Imagine going to report online GBV. They are going to make fun of you and tell you to come when the real violence happens. –FGD Participant, Ethiopia
In Senegal, it was clear that the vast majority of women were not aware of any policies or laws in place to protect them against online gender-based violence in their country. In Uganda, the lack of information was even more acute, with almost all women reporting that they were unaware of any protective policies or laws against online gender-based violence. Countries across the continent were shown not to have specific legal language or strategies against online gender-based violence. Preventive measures, including simple data capture, that specifically target online gender-based violence are lacking (Smit, 2015). The Uganda Demographic and Household Survey 2016 (UDHS) reported that twenty-two per cent (22%) of women experience domestic violence (UBOS & ICF, 2017), but no data on online gender-based violence was collected. Current legislation fails to target online gender-based violence directly. Approaches can only be described as tangential at best, such as the Computer Misuse Act (2011) in Uganda, under which a person faces a fine, imprisonment or both if found to produce, make available, distribute, procure or unlawfully possess child (under 18 years of age) pornography. Section 25 of the Act states that any person who willfully and repeatedly uses electronic communication to disturb or attempt to disturb the peace, quiet or right of privacy of any person, with no purpose of legitimate communication, commits the offence of offensive communication. There are, however, no clear definitions of what constitutes “indecent” material, “disturbing the peace and quiet” or “legitimate communication.” Those who did know about the Computer Misuse Act in Uganda (5%) reported that it is applied selectively. This lack of clarity effectively renders the law a device to repress dissenting voices and a censorship tool rather than a mechanism to protect women.
Such legislation has been used selectively to silence and punish women activists, as the case of Dr Stella Nyanzi in Uganda illustrates. Arrested in 2017, she was formally charged
under sections 24(1), 24(2)(a) (cyber harassment) and 25 (offensive communication) of the Computer Misuse Act, 2011, concerning a Facebook post she had created regarding President Museveni, in which she referred to him as “a pair of buttocks.” The High Court of Uganda finally overturned Stella Nyanzi's convictions in early 2020 on the grounds of lack of jurisdiction and of a fair hearing. However, by then she had already spent 16 months in prison.
Envisioning the Future

Moving forward, it is essential to highlight some key measures necessary to understand and safeguard women in digital environments. While some of these entail practical measures that can be implemented now, such as facilitating women's digital self-care, working collaboratively with technology providers and reforming legal approaches, ultimately what we need is a complete feminist reshaping of the internet. Across the countries surveyed, many respondents did not know where to access information related to digital security and self-care. There is an urgent need for digital security resources to be adapted to local contexts and languages and mainstreamed in educational curricula. In Africa and globally, there is a lack of information on what actually helps prevent online gender-based violence. Online harassment and attacks on high school girls are on the rise. In countries such as Ethiopia and South Africa, these have led some young women to end their own lives (eNCA, 2019). Current solutions rely on encouraging women who have experienced online violence to share and document their experiences, or provide psychosocial support in special cases. Overall, research shows that few interventions aim to prevent primary and secondary perpetrators from acting violently in the first place. Rates of reporting perpetrators to technology platforms remain low, and responses to these reports have not been encouraging. Technology platforms have few representatives across Africa and, in general, do not recognise the needs of African users. This becomes evident considering the low level of resources and funding invested in Africa. Products are often not contextualised to African markets, including processing
African languages, appropriate content moderation or adapting to patterns and constraints in internet usage. Social media platforms need to increase the number of indigenous content moderators and improve the efficacy of reporting harassment and violence experienced on their platforms. Policy advocacy and legal approaches to strengthening online harassment laws remain viable methods of deterring perpetrators of online gender-based violence through an increased focus on law enforcement authorities. However, research on the existence and impact of online gender-based violence is outpaced by the frequency with which it occurs, raising the question of whether national-level regulatory frameworks are at all equipped to protect women's rights online. Furthermore, there is a fine line between appropriate regulation and the stifling of freedom of expression. Law enforcement personnel must be trained on a gender-sensitive digital safety curriculum to address online gender-based violence complaints and provide timely technical assistance, counselling and support to women who do choose to report. Along with this, countries need to adopt data protection and privacy laws and put committees and mechanisms in place to implement them. A year after Uganda passed its Data Protection and Privacy Act in 2019, the Data Protection office in charge of implementing the law had still not been established (Unwanted Witness, n.d.). Policymakers tasked with addressing gender-based violence at a regulatory level will be required to reconcile both the standard and exceptional characteristics of online gender-based violence. Gender-based violence is not unique to the internet. However, laws have to be designed to accommodate the unique capabilities of online spaces in furthering gender-based violence, both online and offline. Further research is necessary, particularly into understanding how online violence manifests in physical spaces.
There is also a need to focus on the vulnerability of people with disabilities in online spaces. Disabled people face hostility and harassment in many of their environments, and online spaces create a context that further encourages such hostility. Additionally, research on the institutional response to online gender-based violence from both governmental bodies and private sector agents, such as social media platforms, is lacking. There are limits to the impact
that “victim-focused” research alone can achieve, and there is a need to move the needle on responses by governmental bodies (e.g. the African Union, INGOs) and social media platforms to prevent online gender-based violence and appropriately deal with perpetrators and aggressors. Lastly, there is little to no research on these topics from Francophone and Lusophone African countries. The information available on West and Central Africa is sparse in general. It is important to understand the experiences of women from these countries.
African Women Need an Internet That Serves Their Needs

A new, better internet is needed: one in which the intersectionality of African women's lived experiences is addressed; one where we avoid outdated stereotypes of African women as a homogenous group of the voiceless and oppressed, as commonly depicted in “ICT for Good” initiatives across the continent, and instead recognise them as sources of innovation who can contribute to this new solution. This new internet would be a decentralised, community-based, ongoing technological endeavour, with improvements supported by and for all users, including African women. Feminist theory supports such progressive endeavours. Black cyberfeminism provides an intersectional approach to resisting oppressive gender structures and achieving equality in digital spaces. With the growth in access to technology and the internet, there are new spaces for women worldwide to convene and counter repressive norms propped up by patriarchy (Daniels, 2009). Afrofuturism is a transdisciplinary cultural movement and school of thought, grounded in creativity, that explores the intersection of the African diaspora, technology and science fiction (Elia, 2015). It provides a lens enabling the imagination of alternate realities and futures, not necessarily bound by the constraints of linear time progression. This creativity can manifest as literature, music, visual art, performances and more. Afrofuturism also provides a space for Black women to further explore the intersection of gender, sexuality and race. By contrast, Africanfuturism
N. Iyer
is grounded in African culture, history and mythology, without centring Western hegemonic perspectives. A notable example of Africanfuturism is the work of Nnedi Okorafor, which centres women as protagonists in technology-dominated Afrofutures (Okorafor, 2018). We can further expand the thinking around Africanfuturism (and Afrofuturism) to encompass Afrofeminist Futures, whereby African women are not simply instrumentalised as a vulnerable target group but are instead key players with a place at the table in determining what a technological future grounded in feminist and decolonial thinking could look like. African women are experts on their own lived experiences within digital spaces in their contexts and must be brought on board to envision alternatives to the technological order and totality imposed by foreign, private interests (Tamale, 2020). Therefore, in building alliances with anguished netizens worldwide, African women can provide useful insights into the affected landscape.
Conclusion

This chapter sought to describe online gender-based violence experienced by African women in online spaces. Our findings showed that around one in three women interviewed had experienced some form of online gender-based violence, and that women, firstly, do not know where to seek help in cases of violence and, secondly, are left largely unprotected by absent or ineffective legislation. These continual and regular acts of aggression manifest in different but very visceral ways. Given the lack of transparency in how social media companies handle issues related to online gender-based violence, and a vacuum of legislation to prevent or remediate such violence, this moment in time presents an opportunity to rethink the entire internet rather than trying to repair broken systems. For many women across Africa, social media is the internet, and perhaps social media has been a failed experiment altogether. With that in mind, we can continue to think critically about how we can co-create a better internet. However far away such a horizon may seem, we know we must actively and creatively work towards deconstructing the patriarchal, heteronormative, global North-run internet
6 Alternate Realities, Alternate Internets …
that we have today and initiate a movement towards an internet that celebrates, encourages and gives voice to the full spectrum of identities. Feminist values must be embedded into the ethos of technology companies regarding how they design, develop, market and create value from their products. African women must be centred rather than marginalised in leading these types of initiatives. Such an internet will thrive when safe spaces are created, allowing for the self-definition of diverse identities.

Acknowledgements

This research study was made possible with funding from Internews and the Association for Progressive Communications “Feminist Internet Research Network” project, supported by the International Development Research Centre, Ottawa, Canada. The views expressed herein do not necessarily represent those of IDRC or its Board of Governors. I would also like to thank my research team, Bonnita Nyamwire and Sandra Nabulega, as well as all the women who graciously gave us their time and shared their experiences with us.
References

African Charter on Human and Peoples’ Rights. (1981). Article 2.
Amnesty International. (2018a). #ToxicTwitter: Violence and abuse against women online.
Amnesty International. (2018b). Women abused on Twitter every 30 seconds—New study.
Association for Progressive Communications. (2015). From impunity to justice: Exploring corporate and legal remedies for technology-related violence against women.
Aziz, Z. A. (2013). Due diligence framework: State accountability framework for eliminating violence against women.
Balsamo, A. (2011). Designing culture: The technological imagination at work. https://doi.org/10.1215/9780822392149
Bryson, J., Etlinger, S., Keyes, O., Leonard, A., & Rankin, J. (2020). Gender bias in technology: How far have we come and what comes next?
Cheney-Lippold, J. (2011). A new algorithmic identity: Soft biopolitics and the modulation of control. Theory, Culture & Society, 28(6), 164–181. https://doi.org/10.1177/0263276411424420
Chun, W. (2006). Control and freedom: Power and paranoia in the age of fiber optics.
Daniels, J. (2009). Rethinking cyberfeminism(s): Race, gender, and embodiment. Women’s Studies Quarterly, 37(1–2), 101–124.
Elia, A. (2015). The languages of Afrofuturism. Lingue e Linguaggi, 12, 83–96. https://doi.org/10.1285/i22390359v12p83
eNCA. (2019). Cyberbullied teen commits suicide.
European Women’s Lobby. (2017). #HerNetHerRights: Mapping the state of online violence against women and girls in Europe.
Fascendini, F., & Fialová, K. (2011). Voices from digital spaces: Technology-related violence against women.
Fraser, E., & Martineau-Searle, L. (2018). Nature and prevalence of cyber violence against women and girls.
Golf-Papez, M., & Veer, E. (2017). Don’t feed the trolling: Rethinking how online trolling is being defined and combated. Journal of Marketing Management, 33(15–16), 1336–1354.
Hinson, L., Mueller, J., O’Brien-Milne, L., & Wandera, N. (2018). Technology-facilitated gender-based violence: What is it, and how do we measure it?
International Covenant on Civil and Political Rights. (1966). Article 3.
Kalemera, A. (2019). Building digital literacy and security capacity of women refugees in Uganda.
Kamran, H. (2019). A study of online violence against women journalists.
Kee, J. S. (2006). Cultivating violence through technology? Exploring the connections between information communication technologies (ICT) and violence against women.
Kemigisha, E., & Kwikiriza, S. (2021). The trends and impact of technology assisted violence among lesbian, bisexual, queer (LBQ) womxn and female sex workers (FSW) in Uganda.
Lewis, R., Rowe, M., & Wiper, C. (2017). Online abuse of feminists as an emerging form of violence against women and girls. The British Journal of Criminology, 57(6), 1462–1481. https://doi.org/10.1093/bjc/azw073
Nwaodike, C., & Naidoo, N. (2020). Fighting violence against women online: A comparative analysis of legal frameworks in Ethiopia, Kenya, Senegal, South Africa and Uganda.
Okorafor, N. (2018). “Mother of Invention”: A new short story by the author of Marvel’s Black Panther: Long Live the King.
Saferspaces. (2020). Gender-based violence in South Africa.
Sanfilippo, M. R., Yang, S., & Fichman, P. (2017). Managing online trolling: From deviant to social and political trolls.
Smit, D. (2015). Cyberbullying in South African and American schools: A legal comparative study. South African Journal of Education, 35(2), 1–11. https://doi.org/10.15700/saje.v35n2a1076
Sobieraj, S. (2018). Bitch, slut, skank, cunt: Patterned resistance to women’s visibility in digital publics. Information, Communication & Society, 21(11), 1700–1714. https://doi.org/10.1080/1369118X.2017.1348535
Sullivan, G. (2014). Ugandan official wants to arrest victim of revenge porn: ‘She should be locked up and isolated’.
Tamale, S. (2020). Decolonization and Afro-feminism (1st ed.). Daraja Press.
Turkle, S. (1995). Life on the screen: Identity in the age of the internet. Simon & Schuster.
The Women’s Legal and Human Rights Bureau (WLB). (2015). End violence: Women’s rights and safety online: From impunity to justice: Domestic legal remedies for cases of technology-related violence against women.
Uganda Bureau of Statistics and ICF. (2017). Uganda demographic health survey 2016: Key indicators report.
UNICEF France. (2014). Écoutons ce que les enfants ont à nous dire: Consultation nationale [Let’s listen to what children have to tell us: National consultation].
Unwanted Witness. (n.d.). One year on, what has Uganda’s data protection law changed?
Wagacha, W. (2019). Access to information as a driver towards closing of the gender equality gap: The emerging scene in Kenya.
Web Foundation. (2015). Women’s rights online.
Women’s Legal and Human Rights Bureau, Inc. (2015). From impunity to justice: Domestic legal remedies for cases of technology-related violence against women.
Woodlock, D. (2016). The abuse of technology in domestic violence and stalking. Violence Against Women, 23(5). https://doi.org/10.1177/1077801216646277
Zweig, J. M., Dank, M., Yahner, J., & Lachman, P. (2010). The rate of cyber dating abuse among teens and how it relates to other forms of teen dating violence. Journal of Youth and Adolescence, 42, 1063–1077.
7 Culturally and Linguistically Diverse (CALD) Women’s Experiences of Technology-Facilitated Violence: An Intersectional Approach Carolina Leyton Zamora, Jennifer Boddy, Patrick O’Leary, and Jianqiang (Joe) Liang
C. Leyton Zamora (B) · J. Boddy · P. O’Leary · J. (Joe) Liang School of Health Sciences and Social Work, Griffith University, Southport, QLD, Australia e-mail: [email protected] J. Boddy e-mail: [email protected] P. O’Leary e-mail: [email protected] J. (Joe) Liang e-mail: [email protected] J. Boddy Griffith Criminology Institute, Griffith University, Southport, QLD, Australia P. O’Leary Disrupting Violence Beacon, Griffith Criminology Institute, Griffith University, Southport, QLD, Australia © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_7
Introduction

Technological advancements such as smartphones, computers, cameras and tracking devices have unwittingly provided perpetrators with new tools to harm and abuse their victims in a domestic and family violence (DFV) context (Dragiewicz et al., 2018; Hand et al., 2009; Woodlock, 2017). Immigrant women, also called culturally and linguistically diverse (CALD) women, are particularly vulnerable targets for this type of violence, as they depend on technology to connect with family overseas and thereby reduce the anguish and stress produced by isolation (Douglas et al., 2019a; Nduhura et al., 2019). Perpetrators aim to further isolate the victim by using these technological tools to stalk, harass and monitor their partners, sometimes via applications installed on the victim’s mobile telephone (e.g. spyware) (Chatterjee et al., 2018). Further, perpetrators at times threaten CALD victims with sending explicit or culturally inappropriate images to their family, friends and community (Office of the eSafety Commissioner, 2019). Technological skill is also used to hack or access the victim’s mail, social media and bank accounts (Douglas et al., 2019b; Henry & Powell, 2018). In the case of economically dependent victims, as many CALD women are, the perpetrator can restrict their access to money or limit and control their access to technology and the internet (Al-Alosi, 2017; Cardoso et al., 2019). This technology-facilitated violence, in the context of DFV, occurs across countries and cultures. However, few studies have examined the experiences of CALD women and the intersectional factors affecting those experiences. The importance of studying immigrant and CALD women’s experiences of technology-facilitated violence lies in the lack of research in this field, despite its having been identified as a problem (Dimond et al., 2011; Henry et al., 2021; Office of the eSafety Commissioner, 2019).
CALD women experience similar patterns of abuse to those experienced by non-immigrant women, but with added barriers and difficulties in seeking help.
These barriers and difficulties intersect with factors such as race, discrimination, limited access to welfare, fear of deportation and others (Sullivan et al., 2020; Vaughan et al., 2015). This chapter, through an intersectional lens, explores CALD women’s experiences of technology-facilitated violence in the context of domestic and family violence (DFV). It describes what is known about the types of technology-facilitated violence that disproportionately affect CALD women and how these experiences intersect with factors such as race, level of education, migration status and others. The following section explores the impact of institutional racism and discrimination on minority groups in their help-seeking decisions and behaviour. Finally, the chapter presents some recommendations for research, policy and practice.
Technology-Facilitated Violence Towards CALD Women

Globally, women’s migration has been perceived as passive and dependent on men (particularly citizen men), and women’s difficulties and vulnerabilities have been largely ignored (Weishaar, 2008). The United States is the largest recipient country of migrants, with 40 million people living in the United States who were born in another country (Akbari & MacDonald, 2014). They account for 14.3% of the national resident population and 19.8% of the global international migrant stock. Next in line is Canada with 3.1%, followed by Australia with 2.8% and New Zealand with 0.49%. The Australian Bureau of Statistics census (2016) indicates that there are approximately 2.3 million women who were born outside Australia, and 2.5 million women speak a language other than English at home (Australian Bureau of Statistics, 2017). The migration process is often challenging, and a large portion of CALD women (70%) arrive in Australia with a sponsorship/partner visa or a prospective marriage visa, so they often experience some dependency on their partners (Australian Bureau of Statistics, 2018). Raijman and Semyonov (1997) pointed out that the literature had focused on international male migration and the hardships men experience when entering the labour market of a new country. However, in comparison, CALD
women are likely to experience significant structural hardships. Many of the hardships faced by CALD women are related to being isolated and economically dependent on their partners, as employment and education are not always available to them (Gonçalves & Matos, 2020). This dependency, and a lack of resources, may lead to a situation in which women become more vulnerable to DFV, and in particular technology-facilitated violence, making it difficult or impossible for them to leave an abusive partner (Cho et al., 2020; Douglas et al., 2019a; Office of the eSafety Commissioner, 2019; Vaughan et al., 2015). Technology-facilitated violence refers to offensive behaviours that include: following or watching the victim with an electronic tracking device (e.g. GPS tracking system, computer, spyware application); threatening to post personal images on the internet; impersonating the victim online to damage their reputation; hacking or accessing the victim’s mail or social media account; and limiting the victim’s access to technology or the internet (Douglas et al., 2019a; Dragiewicz et al., 2019; Henry et al., 2017; Office of the eSafety Commissioner, 2019). Earlier conceptualizations of technology-facilitated violence gave less attention to the culturally specific nature of these abuses, including, for example, perpetrators threatening to publish culturally sensitive information online with the intent of hurting the victim publicly (Messing et al., 2020). Yet particular cultural considerations need to take into account how CALD women’s reputations can be represented in social media in ways that damage family prestige and religious and cultural status (Maher & Segrave, 2018). Recent research has indicated that when CALD women experience technology-facilitated violence, they generally feel completely isolated, with an inability to access resources or seek help (Office of the eSafety Commissioner, 2019; Woodlock, 2015b).
Isolation also increases the women’s risk of harm, including physical harm or mental health problems, such as anxiety and depression (Cavallaro, 2010; Rai & Choi, 2018). In addition, CALD women face unique, intersecting challenges such as fear of deportation, tenuous immigration status, limited host-language skills and institutional and systemic racism from service providers, police and law enforcement (Douglas et al., 2019a).
Types of Technology-Facilitated Violence Towards CALD Women

There is sadly limited research focused on CALD women and technology-facilitated violence. Within the Australian context, a Queensland study conducted from 2014 to 2017 interviewed 65 women. The majority of participants were Australian born or had migrated to Australia with their families when they were children; however, 25 (37.5%) women were born overseas (Douglas et al., 2019b). CALD participants in this study identified that they were reliant on technology to maintain contact and connection with family and friends in their home country, especially when they were newly arrived in Australia (Douglas et al., 2019b). Another study, conducted by the Australian Office of the eSafety Commissioner (2019), is based on information provided by 29 CALD women and 20 stakeholders with experience working with CALD women. Findings show that the most common types of technology-facilitated violence towards CALD women include harassing messages and threats; surveillance, monitoring or stalking; impersonation; image-based abuse; and limited access to technology. The Australian Office of the eSafety Commissioner (2019) found that social media is a tool often used to harass and threaten CALD women and their families and friends. Douglas et al.’s (2019a) findings also show that CALD women experience deeper levels of hardship when suffering from image-based abuse on social media, because of their religious beliefs, shame for the family and cultural expectations in the country of origin. Surveillance, monitoring or stalking is a common type of abuse experienced by CALD women, as perpetrators use GPS to track victims’ locations at all times (Douglas et al., 2019b).
Additionally, the Office of the eSafety Commissioner (2019) reported that perpetrators appeared to be monitoring their partners’ mobile phones regularly. Perpetrators will also impersonate their victim to gain access to her bank account and to request financial resources from the victim’s family (Douglas et al., 2019b; Office of the eSafety Commissioner, 2019). Perpetrators are able to obtain women’s passwords by threatening them with deportation or with slowing down the visa process (Office of the eSafety Commissioner, 2019). Perpetrators also restrict access to technology as punishment or
to isolate the victim. Isolation techniques further include restricting the victim’s contact with family and with the community in the host country (Office of the eSafety Commissioner, 2019). Internationally, research is likewise limited, though some studies have examined the technology-facilitated abuse experiences of women, including CALD and migrant women as participants. For example, Dimond et al. (2011) interviewed 10 women residing at a DFV shelter in the southern United States, exploring how technology helps women access help. Participants’ race/ethnicity included African, African American, Caribbean and white; however, the results were generalized to ‘women’. Zaidi et al.’s (2015) study in Canada also explored how the use of technology influenced victims’/survivors’ ability to access appropriate services. This study was the first to focus on immigrant women only, and found that only 46% of migrant women used their cell phone to escape the violence, while the majority reported not having enough digital knowledge to use it. Another study from the United States, with participants from a range of cultural, racial and socioeconomic backgrounds but with no differentiated results, is that of Freed et al. (2018). Perpetrators would often exploit their ownership of the victims’/survivors’ devices to track them. Some participants also suspected the presence of apps such as spyware on their phones. Similarly, the Messing et al. (2020) study in the southwestern United States included white, African American and Hispanic participants; however, the results are not differentiated or specific to the experiences of migrant women in particular. Results from the report show a high prevalence of intimate partner stalking, monitoring, online harassment and cyberstalking among shelter- and service-seeking survivors.
Interestingly, this last study acknowledges that many of the victims/survivors are experiencing multiple intersecting barriers, for example job loss, health issues and migration problems (Messing et al., 2020).
An Intersectional Feminist Approach to Deal with Technology-Facilitated Violence Towards CALD Women

Intersectional feminist theory emerged to make the unique experiences and vulnerabilities of marginalized women more visible (Crenshaw, 1989). This effort has been advanced by a range of black feminist legal theorists such as Judy Scales-Trent (1989), Regina Austin (1989) and Dorothy Roberts (1997), who have argued for an explicitly black feminist jurisprudence and equal protection of black women under the Constitution, among other things. The terminology, however, was coined by critical race black feminist Kimberlé Crenshaw (1989). It recognizes that oppressed groups (specifically black women) live at the margins of society with inequitable access to resources, resulting in societal inequities and oppressive factors (Crenshaw, 1989). Crenshaw’s theory of intersecting oppressive factors yielding multiplicative, negative effects has been used by an expanding circle of researchers and applied to theories addressing both the epidemiology of DFV and the impacts of immigration status (Adams & Campbell, 2012; Bond, 2003; Carbin & Edenheim, 2013; Collins & Bilge, 2020; Curran, 2016). Over the last decade this outsider knowledge has become a mainstay of the academy, the prevailing framework for understanding how structures of domination reinforce each other. Legal scholars have developed the framework of intersectionality to great effect. Some recent works on intersectionality offer a well-established paradigm that informs research regarding women of colour and their experiences with respect to race, gender and other categories of identity (Alexander-Floyd, 2010). This approach has also been used in diverse fields such as domestic and family violence, human rights, sociology, medicine, psychology, political science and even economics (Crenshaw & Thomas, 2004).
Elsewhere, scholars and activists from different parts of the globe are finding ways to utilize the unique insights supported by this approach. Furthermore, intersectionality theory is currently used, both domestically and internationally, in the context of international human rights and DFV (Akhmedshina, 2020; Bond, 2003; O’Leary & Tsui, 2020). Domestic violence against women is one of the most widespread human
rights violations in the world (Akhmedshina, 2020). An intersectional approach recognizes how diversity in human identity and disadvantage affects the articulation, realization, violation and enforcement of human rights (Curran, 2016). Moreover, due to social exclusion and lack of social support in the host country, immigrant women are at heightened risk of multiple victimizations (Erez et al., 2009). The dynamics and interactive processes proposed by the intersectionality paradigm have motivated some researchers to study how the intersection of immigrant status with other dimensions (e.g. gender, ethnicity, poverty) increases women’s vulnerability to victimization, influences their help-seeking behaviour and affects the way they respond to abuse experiences in the host country. Difficulties faced by CALD victims coexist with the challenges they experience as immigrants, clearly and interactively illustrating the structural and political dimensions of intersectionality (Erez et al., 2009). Regardless of the class to which a woman belongs in her home country, she faces subordination not only as a woman, but also as a member of a minority race in a foreign land (Collins, 2015). Recognizing the intersections of these different aspects of marginalization, and the multiplicative, amplifying effects that they exert, is crucial in studying women’s experience of DFV and the forces working against them. As Glass et al. (2011) assert, ‘any effort to help immigrant women will be futile unless this intersectionality is well understood’ (p. 209). Immigration status can be separated out as yet another category of intersectionality: rather than being considered a variable or static category within race, it should be considered part of the multiple grounds of identity influencing CALD women’s experiences of domestic violence (Erez et al., 2009).
If technology is added to the mix, the combined factors of gender, race/ethnicity and temporary migration status compound and increase women’s disenfranchisement. However, very little is known about the ways migrant groups access and use digital technology, and about their attitudes towards, awareness of and skills in using technology (Costa-Pinto, 2014). Until recently, intersectional work has been largely absent from studies on technology-facilitated violence. Notable emerging work has begun to explore the intersectional experiences of sexuality and gender minority adults (Powell et al., 2018). Nevertheless,
there is a lack of research on minority groups with culturally and linguistically diverse backgrounds and their help-seeking behaviour in response to this abuse (Powell et al., 2018).
Institutional Discrimination, Racism and Help-Seeking Behaviour

The intersection of DFV among CALD women, institutional discrimination and diminished rights as citizens significantly detracts from achieving human rights for all. As O’Leary and Tsui (2020) highlight, ‘the road to meaningful and impactful action on human rights is a journey for all of us. Ultimately, human rights are about access to livelihoods and environments that sustain life within a context where people can live with the freedoms of beliefs, identity, relationships and justice to reach their full potential creatively’ (p. 131). Human rights in the context of DFV, and more specifically in the context of technology-facilitated violence, need to be addressed urgently. The notion that all citizens have the same rights is a key value that should underpin every aspect of society. Racism denies people from CALD backgrounds, including immigrants, access to rights such as the ability to participate equally and freely in community and public life, equitable service provision and freedom from violence (Ferdinand et al., 2015). Despite evidence that immigration can intensify DFV, including technology-facilitated violence, and increase vulnerabilities via social and cultural dislocation, research remains fairly limited in this area (Douglas et al., 2019b; Ghafournia & Easteal, 2019). Recognizing the intersection between human rights, racial or ethnic background and domestic and family violence is essential for an in-depth understanding of migrant groups’ experiences of technology-facilitated violence and of the barriers they face when seeking help (Viruell-Fuentes et al., 2012). Racism within government settings is also likely to interfere with immigrant and CALD communities’ right to civic and social participation, which then prevents adequate representation of people from CALD backgrounds in developing governmental policies and programmes (Ferdinand et al., 2015).
In the Australian context, efforts from non-profit and governmental organizations have focused on helping women to develop safe practices and improve their knowledge of technology-facilitated violence (e.g. the Office of the eSafety Commissioner). Despite these efforts, CALD women are not always able to access this information (Amanor-Boadu et al., 2012; Office of the eSafety Commissioner, 2019). According to Woodlock (2015a), practitioners in the DFV area have noticed that most information about online and social media safety in Australia is in English; consequently, not all of this information is accessible to CALD women. As Sullivan et al. (2020) stated in their literature review, the need for improvement in health service delivery for CALD populations is urgent, and a more tailored approach to delivery is important. There is some literature reporting that CALD communities are hesitant to use available health services due to perceived racism, cultural differences and misunderstandings (Henderson & Kendall, 2011; Sullivan et al., 2020). Internationally, racism and migration status are challenges most commonly experienced by CALD women (Cho et al., 2020; Ghafournia & Easteal, 2019; Vaughan et al., 2015). These challenges demonstrate that, in addition to all the existing barriers to accessing services for victims of DFV (including technology-facilitated violence), the plight of abuse victims is greatly exacerbated if they have a culturally and linguistically diverse background (Erez et al., 2009). For CALD women arriving in Australia, migration status can be a challenge as it has a direct impact on rights and access to support. Temporary migration status is a broad category in Australia and includes women who are seeking asylum (for example, on bridging visas), women on temporary visas such as student or tourist visas, and those who have a provisional partner-related visa.
Visas offered in such categories have restrictions on access to services (Maher & Segrave, 2018). Each of these visa types is associated with different provisions for DFV protection and response, which are then further affected by service access (Vaughan et al., 2015). For example, Sawrikar (2019) highlighted that most shelters require proof of citizenship or language proficiency for victims to enter. This disproportionately impacts CALD women in comparison to non-CALD women, as they might not fulfil the requirements for shelter. Another challenge identified relates to a lack of understanding
of culture among some police officers when working with survivors of DFV. Ghafournia and Easteal (2018) reported that women from African communities, in particular, felt that police did not treat their experiences with sensitivity; these women felt they had experienced racism. When victims of violence feel disrespected or mistreated by police, they tend to lose trust in law enforcement (Ghafournia & Easteal, 2018). According to Ghafournia and Easteal (2019), discriminatory behaviours, whether intentional or unintentional, can contribute to increased marginalization of CALD women who are victims of DFV. Therefore, DFV services (including police, social workers, shelter workers and others) need to better understand CALD women’s experiences and cultural needs in service delivery.
Recommendations for Research, Policy and Practice

There is a need for a deeper understanding of the complexities of the intersection of gender with race, ethnicity, digital literacy, migration status and human rights in the context of technology-facilitated violence towards CALD women. As researchers have noted, internet technologies and virtual communities operate in a manner that benefits privileged identities. These unequal power relations are accepted as legitimate and are embedded in the cultural practices of digital technology (Richard & Gray, 2018). It is necessary, therefore, to ensure that family violence prevention programmes engage with multicultural women’s services and women leaders from CALD communities. DFV prevention programmes also must be based on evidence about the intersections of factors that influence these communities’ experiences of technology-facilitated violence. Therefore, greater cultural and technological awareness is needed. Prevention programmes and DFV service providers should create multi-language written and audio resources for broad dissemination in places that isolated CALD women frequent (e.g. supermarkets, schools, health services, shopping centres) and through community radio, to provide information about technology-facilitated violence, how to report an incident, the legality of technology-facilitated
126
C. Leyton Zamora et al.
violence, where to access support and other relevant services. We recommend that DFV services develop a multilingual, technology-safe and culturally sensitive workforce that is reflective of local populations, adequately trained and institutionally supported, to respond to the needs of CALD women. It is vital that technology developers and researchers exercise caution and assess the possible dangers when designing or recommending online information for individuals seeking support regarding technology-facilitated violence, including: harassment, messages and threats; surveillance, monitoring or stalking; impersonation; image-based abuse; and limiting access to technology. Additional research is needed into how vulnerable populations, such as CALD women, use social media and online resources in their daily lives, in order to develop successful, tailored interventions that use technology to prevent technology-facilitated violence and support its victims. Interventions should be culturally sensitive as well as tech-safe for victims. Technology can itself serve as a preventative tool and a valuable information resource. Technology developers have a responsibility, as private companies, to promote and provide information to the public about technology-facilitated violence in a DFV context as well as in other contexts. Furthermore, careful study is needed of the imbalance between abusers'/perpetrators' ability to use technology to harm victims and victims' ability to report the abuse. A deeper understanding is also needed of the tools offered by social media companies and other online supports for victims. There are important design and policy implications for web designers and companies, who should be accountable for the information privacy and personal safety of women in DFV situations.
However, there is still much to learn about the patterns and preceding behaviours involved when an abuser exploits online networks to stalk, monitor, harass, humiliate or otherwise abuse in a DFV context. In addition to addressing service gaps, training and capacity, an intersectional conceptual framework calls for addressing the multiple layers of oppression experienced by women of colour, minority communities and, specifically, women from culturally and linguistically diverse backgrounds. The prevalence of DFV cannot adequately be measured or addressed without taking into account the fact that different cultures define this violence differently. As
7 Culturally and Linguistically Diverse (CALD) …
127
explained above, technology-facilitated violence that exploits cultural sensitivities can do particular harm to CALD women. For example, the publication of a picture in which the victim/survivor is not wearing her hijab can be especially damaging for an immigrant woman in a way it may not be for other women. The complex layers and intersection of 'structural inequalities' serve to amplify the experiences of immigrant and CALD women. Examples of these structural inequalities include government policies that protect women on certain types of visas but not those on temporary visas (as in Australia). Shelter requirements also vary depending on their policies: some shelters require English language proficiency, so immigrant women may face a language barrier in getting help. Another example is the discrimination an immigrant woman may suffer when calling the police about a perpetrator's threat to shame her in a religious way. Police officers, social workers and service providers in general must be culturally sensitive and culturally aware of what communities from different backgrounds will find hurtful and disrespectful. In addition, an intersectional framework suggests that DFV, and therefore technology-facilitated violence, is not a monolithic phenomenon, and that it is necessary to analyse how the violence is experienced by the self and responded to by others, how personal and social consequences are represented, and how safety can be obtained. Exploring the trajectories and intersection of in-person abuse and technology-based violence would assist in developing an understanding of the relationship between online abuse and in-person violence, and vice versa.
Conclusion In summary, there is a general lack of literature on the experiences of CALD women subjected to technology-facilitated violence in a DFV context. Immigrant and CALD women make up a significant portion of the population, yet there is limited understanding of interpersonal violence within CALD communities. There are few studies on the role of technology in the perpetration of violence within these communities, even though it is known that CALD women depend on access to technology to keep in contact with their family and friends
back home. In addition, CALD women experience layers of oppression and other culturally specific acts of harm, for example a particular fear of shaming tied to cultural and family values. Cultural biases among service providers, police and law enforcement are also considered to lead to adverse outcomes for CALD women, who are less likely to pursue criminal justice avenues because of past experiences of discrimination. Therefore, an intersectional understanding is needed of the intricacies of how particular cultural factors influence a woman's experience of migration, DFV and technology-facilitated violence. In addition, human rights in the context of DFV, and more specifically in the context of technology-facilitated violence, need to be addressed urgently. The stigma and consequences of being identified as a victim of DFV in some cultural groups affect not only the victim herself but her family and significant others. As we mentioned previously, this may result in being ostracised from the community, which might otherwise be the only social support available to CALD women. Service providers and other front-line staff hold significant power to improve the experiences of CALD women during situations of technology-facilitated violence in the context of DFV. More work may be needed to support service providers, police and policymakers in utilizing intersectional frameworks and comprehending the multidimensional levels of oppression experienced by women from diverse backgrounds. Researchers and practitioners must recognize that the multiple victimizations of CALD women sometimes overlap with human rights concerns. It is CALD communities' basic human right to have adequate access to information and services that aim to prevent technology-facilitated violence and DFV in general.
This chapter informs future research and highlights the gap in the literature on institutional racism towards minority groups and the importance of applying a more integrated theory of intersectionality to the issue of technology-facilitated violence. Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this article.
References
Adams, & Campbell, J. (2012). Being undocumented & intimate partner violence (IPV): Multiple vulnerabilities through the lens of feminist intersectionality. https://tspace.library.utoronto.ca/bitstream/1807/32411/1/11.1_Adams_&_Campbell.pdf
Akbari, A. H., & MacDonald, M. (2014). Immigration policy in Australia, Canada, New Zealand, and the United States: An overview of recent trends. International Migration Review, 48(3), 801–822. https://doi.org/10.1111/imre.12128
Akhmedshina, F. (2020). Violence against women: A form of discrimination and human rights violations. Mental Enlightenment Scientific-Methodological Journal, 2020(1), 13–23. https://uzjournals.edu.uz/cgi/viewcontent.cgi?article=1061&context=tziuj
Al-Alosi, H. (2017). Cyber-violence: Digital abuse in the context of domestic violence. University of New South Wales Law Journal, 40(4), 1573–1603.
Alexander-Floyd, N. G. (2010). Critical race Black feminism: A "jurisprudence of resistance" and the transformation of the academy. Signs: Journal of Women in Culture and Society, 35(4), 810–820. https://doi.org/10.1086/651036
Amanor-Boadu, Y., Messing, J. T., Stith, S. M., Anderson, J. R., O'Sullivan, C. S., & Campbell, J. C. (2012). Immigrant and nonimmigrant women: Factors that predict leaving an abusive relationship. Violence Against Women, 18(5), 611–633. https://doi.org/10.1177/1077801212453139
Australian Bureau of Statistics. (2017). Cultural diversity in Australia reflecting a nation: Stories from the 2011 Census. Catalogue No. 2071.0. https://www.abs.gov.au/ausstats/[email protected]/Lookup/2071.0main+features902012-2013
Australian Bureau of Statistics. (2018). Cultural diversity in Australia reflecting a nation: Stories from the 2016 Census. Catalogue No. 2071.0. https://www.abs.gov.au/ausstats/[email protected]/Lookup/2071.0main+features302016
Bond, J. E. (2003). International intersectionality: A theoretical and pragmatic exploration of women's international human rights violations.
Emory Law Journal, 52, 71.
Carbin, M., & Edenheim, S. (2013). The intersectional turn in feminist theory: A dream of a common language? European Journal of Women's Studies, 20(3), 233–248. https://doi.org/10.1177/1350506813484723
Cardoso, L. F., Sorenson, S. B., Webb, O., & Landers, S. (2019). Recent and emerging technologies: Implications for women's safety. Technology in Society, 58, 101108. https://doi.org/10.1016/j.techsoc.2019.01.001
Cavallaro, L. (2010). "I lived in fear because I knew nothing": Barriers to the justice system faced by CALD women experiencing family violence. InTouch Multicultural Centre Against Family Violence, Victoria Law Foundation.
Chatterjee, R., Doerfler, P., Orgad, H., Havron, S., Palmer, J., Freed, D., Levy, K., Dell, N., McCoy, D., & Ristenpart, T. (2018). The spyware used in intimate partner violence. Paper presented at the 2018 IEEE Symposium on Security and Privacy (SP), 441–458. https://doi.org/10.1109/SP.2018.00061
Cho, H., Shamrova, D., Han, J.-B., & Levchenko, P. (2020). Patterns of intimate partner violence victimization and survivors' help-seeking. Journal of Interpersonal Violence, 35(21–22), 4558–4582. https://doi.org/10.1177/0886260517715027
Collins, P. H. (2015). Intersectionality's definitional dilemmas. Annual Review of Sociology, 41, 1–20. https://doi.org/10.1146/annurev-soc-073014-112142
Collins, P. H., & Bilge, S. (2020). Intersectionality. Wiley.
Costa-Pinto, S. (2014). Making the most of technology: Indian women migrants in Australia. International Migration, 52(2), 198–217. https://doi.org/10.1111/j.1468-2435.2010.00640.x
Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A Black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 139.
Crenshaw, K., & Thomas, S. (2004). Intersectionality: The double bind of race and gender. Perspectives Magazine, 2.
Curran, S. (2016). Intersectionality and human rights law: An examination of the coercive sterilisations of Romani women. The Equal Rights Review, 16, 132–159. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.690.2572&rep=rep1&type=pdf
Dimond, J. P., Fiesler, C., & Bruckman, A. S.
(2011). Domestic violence and information communication technologies. Interacting with Computers, 23(5), 413–421. https://doi.org/10.1016/j.intcom.2011.04.006
Douglas, H., Harris, B., & Dragiewicz, M. (2019a, February 1). Migrant women are particularly vulnerable to technology-facilitated domestic abuse. The Conversation. https://theconversation.com/migrant-women-are-particularly-vulnerable-to-technology-facilitated-domestic-abuse-110270
Douglas, H., Harris, B. A., & Dragiewicz, M. (2019b). Technology-facilitated domestic and family violence: Women's experiences. The British Journal of Criminology, 59(3), 551–570. https://doi.org/10.1093/bjc/azy068
Dragiewicz, M., Burgess, J., Matamoros-Fernández, A., Salter, M., Suzor, N. P., Woodlock, D., & Harris, B. (2018). Technology facilitated coercive control: Domestic violence and the competing roles of digital media platforms. Feminist Media Studies, 18(4), 609–625. https://doi.org/10.1080/14680777.2018.1447341
Dragiewicz, M., Harris, B., Woodlock, D., Salter, M., Easton, H., Lynch, A., Campbell, H., Leach, J., & Milne, L. (2019). Domestic violence and communication technology: Survivor experiences of intrusion, surveillance, and identity crime. Australian Communications Consumer Action Network (ACCAN). http://accan.org.au/files/Grants/Domestic%20violence%20and%20communication%20technology%20Survivor%20experience%20of%20intrusion%20surveillance%20and%20identity%20crime.pdf
Erez, E., Adelman, M., & Gregory, C. (2009). Intersections of immigration and domestic violence: Voices of battered immigrant women. Feminist Criminology, 4(1), 32–56.
Ferdinand, A. S., Paradies, Y., & Kelaher, M. (2015). Mental health impacts of racial discrimination in Australian culturally and linguistically diverse communities: A cross-sectional survey. BMC Public Health, 15(1), 401–414. https://doi.org/10.1186/s12889-015-1661-1
Freed, D., Palmer, J., Minchala, D., Levy, K., Ristenpart, T., & Dell, N. (2018). "A stalker's paradise": How intimate partner abusers exploit technology. Paper presented at the Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/abs/10.1145/3173574.3174241
Ghafournia, N., & Easteal, P. (2018). Are immigrant women visible in Australian domestic violence reports that potentially influence policy? Laws, 7(4), 32. https://doi.org/10.3390/laws7040032
Ghafournia, N., & Easteal, P. (2019).
Help-seeking experiences of immigrant domestic violence survivors in Australia: A snapshot of Muslim survivors. Journal of Interpersonal Violence, 36(19–20), 0886260519863722. https://doi.org/10.1177/0886260519863722
Glass, N., Annan, S. L., Bhandari, S., Bloom, T., & Fishwick, N. (2011). Nursing care of immigrant and rural abused women. Family violence and nursing practice, 207–225.
Gonçalves, M., & Matos, M. (2020). Mental health of multiple victimized immigrant women in Portugal: Does resilience make a difference? Journal
of Human Behavior in the Social Environment, 30(3), 353–368. https://doi.org/10.1080/10911359.2019.1685423
Hand, T., Chung, D., & Peters, M. (2009). The use of information and communication technologies to coerce and control in domestic violence and following separation. Australian Domestic & Family Violence Clearinghouse.
Henderson, S., & Kendall, E. (2011). Culturally and linguistically diverse peoples' knowledge of accessibility and utilisation of health services: Exploring the need for improvement in health service delivery. Australian Journal of Primary Health, 17(2), 195–201. https://doi.org/10.1071/PY10065
Henry, N., & Powell, A. (2018). Technology-facilitated sexual violence: A literature review of empirical research. Trauma, Violence & Abuse, 19(2), 195–208. https://doi.org/10.1177/1524838016650189
Henry, N., Powell, A., & Flynn, A. (2017). Not just 'revenge pornography': Australians' experiences of image-based abuse. Melbourne: RMIT University. https://www.researchgate.net/profile/Asher-Flynn/publication/323078201_Not_Just_%27Revenge_Pornography%27_Australians%27_Experiences_of_Image-Based_Abuse_A_SUMMARY_REPORT/links/5a7e6f74a6fdcc0d4ba8321e/Not-Just-Revenge-Pornography-Australians-Experiences-of-Image-Based-Abuse-A-SUMMARY-REPORT.pdf
Henry, N., Vasil, S., Flynn, A., Kellard, K., & Mortreux, C. (2021). Technology-facilitated domestic violence against immigrant and refugee women: A qualitative study. Journal of Interpersonal Violence, 8862605211001465. https://doi.org/10.1177/08862605211001465
Maher, J., & Segrave, M. (2018). Family violence risk, migration status and "vulnerability": Hearing the voices of immigrant women. Journal of Gender-Based Violence, 2(3), 503–518. https://doi.org/10.1332/239868018X15375304047178
Messing, J., Bagwell-Gray, M., Brown, M. L., Kappas, A., & Durfee, A. (2020). Intersections of stalking and technology-based abuse: Emerging definitions, conceptualization, and measurement. Journal of Family Violence, 1–12.
https://doi.org/10.1007/s10896-019-00114-7
Nduhura, D., Kim, S. D., & Mumporeze, N. (2019). "When social media are your sole life jacket": A capability analysis of foreign brides' empowerment by social media in South Korea. Omnes, 9(1), 148–184. https://doi.org/10.14431/omnes.2019.01.9.1.148
Office of the eSafety Commissioner. (2019). eSafety for women from culturally and linguistically diverse backgrounds. The Office commissioned research from the Social Research Centre, Australia. https://www.esafety.gov.au/sites/
default/files/2019-07/summary-report-for-women-from-cald-backgrounds.pdf
O'Leary, P., & Tsui, M.-S. (2020). Human rights and social development: A journey we walk together. International Social Work, 63(2), 131–132. https://doi.org/10.1177/0020872820907215
Powell, A., Scott, A. J., & Henry, N. (2018). Digital harassment and abuse: Experiences of sexuality and gender minority adults. European Journal of Criminology, 17(2), 1477370818788006. https://doi.org/10.1177/1477370818788006
Rai, A., & Choi, Y. J. (2018). Socio-cultural risk factors impacting domestic violence among South Asian immigrant women: A scoping review. Aggression and Violent Behavior, 38, 76–85. https://doi.org/10.1016/j.avb.2017.12.001
Raijman, R., & Semyonov, M. (1997). Gender, ethnicity, and immigration: Double disadvantage and triple disadvantage among recent immigrant women in the Israeli labor market. Gender and Society, 11(1), 108–125. https://doi.org/10.1177/089124397011001007
Richard, G. T., & Gray, K. L. (2018). Gendered play, racialized reality: Black cyberfeminism, inclusive communities of practice, and the intersections of learning, socialization, and resilience in online gaming. Frontiers: A Journal of Women Studies, 39(1), 112–148. https://doi.org/10.5250/fronjwomestud.39.1.0112
Sawrikar, P. (2019). Child protection, domestic violence, and ethnic minorities: Narrative results from a mixed methods study in Australia. PLoS ONE, 14(12), e0226031. https://doi.org/10.1371/journal.pone.0226031
Sullivan, C., Vaughan, C., & Wright, J. (2020). Migrant and refugee women's mental health in Australia: A literature review. The University of Melbourne, Multicultural Centre for Women's Health. https://www.mcwh.com.au/wp-content/uploads/Lit-review_mental-health.pdf
Vaughan, C., Murdolo, A., Murray, L., Davis, E., Chen, J., Block, K., Quiazon, R., & Warr, D. (2015).
ASPIRE: A multi-site community-based participatory research project to increase understanding of the dynamics of violence against immigrant and refugee women in Australia. BMC Public Health, 15(1), 1–9. https://doi.org/10.1186/s12889-015-2634-0
Viruell-Fuentes, E. A., Miranda, P. Y., & Abdulrahim, S. (2012). More than culture: Structural racism, intersectionality theory, and immigrant health. Social Science and Medicine, 75(12), 2099–2106. https://doi.org/10.1016/j.socscimed.2011.12.037
Weishaar, H. B. (2008). Consequences of international migration: A qualitative study on stress among Polish migrant workers in Scotland.
Public Health (London), 122(11), 1250–1256. https://doi.org/10.1016/j.puhe.2008.03.016
Woodlock, D. (2015a). ReCharge: Women's technology safety, legal resources, research & training. Women's Legal Service NSW, Domestic Violence Resource Centre Victoria and WESNET, Collingwood. http://www.smartsafe.org.au/sites/default/files/ReCharge-Womens-Technology-Safety-Report2015.pdf
Woodlock, D. (2015b). Remote control. DVRCV Advocate (1), 37–40.
Woodlock, D. (2017). The abuse of technology in domestic violence and stalking. Violence Against Women, 23(5), 584–602. https://doi.org/10.1177/1077801216646277
Zaidi, A. U., Fernando, S., & Ammar, N. (2015). An exploratory study of the impact of information communication technology (ICT) or computer mediated communication (CMC) on the level of violence and access to service among intimate partner violence (IPV) survivors in Canada. Technology in Society, 41, 91–97. https://doi.org/10.1016/j.techsoc.2014.12.003
8 Abuse as Artefact: Understanding Digital Abuse of Women as Cultural Informant Lauren Rosewarne
Introduction From slut-shaming to "revenge porn", from harassment to the emerging concern of deepfake pornography, the internet can be a distinctly hostile place for women. The internet boasts many unique features that position it well as a tool for perpetrating abuse: anonymity, accessibility and affordability, sometimes clustered as the "triple-As" of the internet (Cooper, 1998). While much online abuse mimics acts perpetrated offline, other forms—like deepfakes—have been made possible by the technology. Just as the internet proffers new ways to abuse, it also creates new perpetrators and new victims: anonymity, for example, is considered a tool which not only facilitates abuse but which creates abusers out of people who would be unlikely to attack physically (Rosewarne, 2016a, 2016b, 2016c). Further, with nearly everyone online for much of each day, the pool of potential victims has increased enormously. While all abuse can have horrific consequences, something that distinguishes online abuse is its public nature: it can play out in front of a large audience and record(ing)s can exist in perpetuity. Whether such abuses are perpetrated for reasons of misogyny, revenge, recreation or to expel women from the digital public space, the abuse of women online is ubiquitous. In this chapter I examine online abuse as an artefact that is richly revealing about our culture. The chapter focuses on abuses including sexual harassment, image-based sexual abuse and sexual violence, examining what these practices reveal about the status of women. I begin with a discussion of abuse as artefact, then explore women's sexual objectification as a central undercurrent, and use the objectification lens to examine some specific forms of abuse disproportionately directed at women.
L. Rosewarne (B), School of Social and Political Sciences, University of Melbourne, Melbourne, Australia. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_8
Digital Abuse and Cultural Revelations Artefacts are of interest to scholars because they provide important information about the culture that produced them. Commonly we think about artefacts narrowly as objects like tools or weapons produced in eras before our own, but digital content is also an example. In some of the earliest work about the internet—even before the dawn of Web 2.0 and the rise of user-generated content like social media—online material was considered an artefact laden with meaning (Woolgar, 1996). Nowadays everything digital—from memes (Rees, 2020) to tweets (Hine, 2020) to Instagram posts (Sashittal & Jassawalla, 2020)—has been considered culturally significant. In 2010, for example, when the Library of Congress in the US received a collection of all tweets sent between 2006 and 2010, librarian James H. Billington noted, "the Twitter digital archive has extraordinary potential for research into our contemporary way of life" (Library of Congress, 2010, n.p.). Such datasets reveal much about our daily lives and, particularly relevant for this discussion, our cultural values. What people choose to tweet
about—the tweets that get traction, get liked, go viral—provides insight into cultural preoccupations. Social media datasets more broadly provide information about how groups of people are treated and, notably, expose their ongoing discrimination and abuse. I posit that online abuse also works in the narrowest definition of artefact: user-generated content such as social media can be understood as tools and also, in fact, as weapons, working to maintain patriarchy and frame public space as a hostile place for women. Not only are women's experiences shaped by technology characterised by male dominance, sexism and abuse (Citron, 2014), but some feminist theorists posit that the technology has, in fact, specifically been designed that way (Wajcman, 2013). While each form of online abuse offers its own revelations, one attribute consistent across them is women's objectification. The sexual objectification of women describes the process by which a woman becomes a thing and her value is attached not to her personhood but to her status as an object of desire. Concern around sexual objectification began with the second wave of feminism and is epitomised by the radical feminist anti-pornography movement. In this chapter I contend that sexual objectification is at the heart of the online abuse of women. Undergirding each of the abuses that I examine is the notion of women valued primarily for their sex appeal. Here, I am not interested in the accusations of self-objectification levelled at digital activity like selfie-taking or Instagram use (Bell et al., 2018), nor do I subscribe to the radical feminist view of all pornography as objectifying (Dworkin, 1989). Rather, my focus is on online activity characterised by the absence of consent.
This does not mean that every victim regards such incidents as equally egregious—scholars, for example, observe that reactions to online abuse like receiving dick pics span the spectrum from offence through to arousal (Paasonen et al., 2019)—but my focus is on the acts themselves as opposed to victims' interpretations. Further, it is worth flagging that discussing a culture of objectifying women does not mean that everyone in a culture participates, nor that such behaviour goes uncontested. While the online abuse of women
is common, and while I argue that such abuse exemplifies rampant objectification, feminist resistance is also happily widespread (Ringrose & Lawrence, 2018). In the sections that follow I examine several instances of online abuse to explore what these artefacts reveal about our cultural values and the objectification of women, beginning with an exploration of digital sexual harassment.
Digital Sexual Harassment Offline, the concept of street harassment describes the very common and very gendered occurrence whereby, just by being visible in public space, a woman can become objectified and her body viewed as a site for appraisal and unsolicited commentary. The same thing happens online. For women who are appraised as attractive, unwanted attention can ensue. For women who are deemed unattractive, the same situation can unfold. For women with additional "deviant" physical characteristics, the likelihood of being abused increases. In the sections that follow I discuss three common forms of sexual harassment experienced by women online: commentary on bodies, unwanted propositions and slut-shaming.
Commentary on Bodies In a society that disproportionately values women based on their appearance, being attacked for being unattractive—for being "ugly"—becomes a means by which the premium on beauty is maintained. The way appearance-based abuse is levelled is distinctly revealing about what is found attractive in our culture and also what is found deviant. One of the most common forms of abuse centres on weight. The "fat" slur has potency in a world where women are disproportionately valued for their beauty; "fat" is a way to frame a woman as less desirable and thus less worthy. Whether the target is actually fat or just the victim of an easy appearance-based insult, weight-centred slurs are widely documented
(Mantilla, 2015). In a culture where fatness is maligned, insulting women on this attribute functions in several ways. In line with any insult, it silences, demeans and attempts to push a woman out of public space and exclude her from public discourse. Writer and actor Lena Dunham discussed the extensive cyberbullying she was subjected to—much of which centred on her appearance and notably her weight—and which led to her leaving Twitter: "it truly wasn't a safe space for me" (in Ledbetter, 2015). Pushing women—those who are fat and those who have simply been abused as such—out of the public sphere is a digital means of doing what is often done to women offline, whereby (some) men make efforts to control space and limit women's impact. Such marginalisation is compounded for fat women in that they already experience extensive social exclusion offline (Gailey, 2014). Beyond such bullying, such behaviour also works to shame. Shaming occurs for many reasons, including to exhibit moral authority, to virtue-signal, to punish and for sport: fat-shaming, be it offline or online, is one manifestation of this. While "fat" is commonly coupled with other abusive—and notably gendered—words like bitch, whore, cunt and slut, it is worth spotlighting that it is also often linked with other physical attributes. Scholars, for example, have examined the manner in which weight intersects as a marginalising factor with other maligned characteristics like race (Daufin, 2015), being queer (Ellison, 2020), being trans (Simmons & White, 2014), being old (Napier et al., 2005) and being disabled (Henderson, 2020). While the mocking of fatness is gendered, these compounding attributes can create additional targets on the backs of women online. In recent years, with the increased visibility of transwomen, appearance-based attacks often centre specifically on a perceived failure to conform with femininity mandates.
While beauty is subjective, mainstream ideals are very narrow and involve qualities such as slenderness, whiteness, youth and, notably, femininity. Researchers document that among the instances of cyberbullying directed at trans people, slurs like "lady boy", "chicks with dicks" and "shemale" are common (Hunte, 2019). Such abuse reflects not only rampant transphobia and binary understandings of gender, but also serves as a means to gatekeep gender
presentations and, notably, to punish deviations. Being old, queer or disabled are additional examples whereby a woman "fails" to adhere to her directive to comply with the narrow aesthetic deemed beautiful in our culture and is victimised accordingly. While in the situations discussed thus far harassment follows a negative appraisal of appearance, also worth exploring is the harassment that occurs when a woman is deemed an object of desire and receives unwanted "positive" attention.
Unwanted Propositions While pestering for dates might seem like something more commonly associated with the realm of online dating and hook-ups—and indeed, such harassment is prevalent on those platforms (DelGreco & Denes, 2020; Thompson, 2018)—such behaviour has also been reported on sites completely disconnected from romance. The propositioning of women on the professional networking site LinkedIn (Fiore, 2020), for example, highlights that there is, in fact, no online space where women are free from unsolicited approaches. In such instances, just by going about normal professional activity, a woman is objectified and viewed as available for a sexual approach. In a culture where the objectification of women is common, there exists an expectation of availability: that in putting herself "out there" and being visible in public space, a woman is framed as fair game for approach. Such a dynamic also reflects some of the cultural expectations of masculinity, such as men's sexuality needing to be actively asserted (Moore, 1994), and sexual scripts dictating that it is acceptable for men to wear women down in a courtship dance (Kimmel, 2005). When rebuffed, men's expectations are thwarted, which not only leads to dissatisfaction but potentially motivates their harassing behaviour. Expectancy violations theory, for example, describes the negative consequences that follow when an interaction fails to unfold as anticipated (Burgoon, 1993). Such rejection can be a driver for some of the harassment discussed throughout this chapter.
8 Abuse as Artefact …
Later in this chapter I discuss "dick pics" as a form of image-based abuse. Relevant to this section, men sometimes claim to send such photos as part of a proposition. Mandau, for example, examines men's rationales for sending such images and quotes "Robert":

Yeah, that's pretty cool, because you also hope that something more will happen later, if you just send a picture and then maybe she'll just send a picture, and then it's like "Hey! You wanna hook up?" or something. (in Mandau, 2020, p. 81)
This belief that abusing women is a way to win them over—akin to pulling a girl’s hair in the school-yard—illustrates one of many problematic scripts in our culture as related to courtship.
Slut-Shaming

An interesting irony exists in women's value being hinged to their sex appeal while their sex appeal is also the grounds on which they are often vilified. For the purposes of this chapter, I define slut-shaming as the practice by which women are judged and condemned for their sexual behaviour, be it real or imagined. While being perceived as attractive is a "good" thing in a culture that sexually objectifies women, to be perceived as acting on this—to be thought of as promiscuous—is often grounds for abuse. While slut-shaming can be a means of insult on social media—of abusing or dismissing a woman on the grounds that she is "cheaper" and less valuable—it can also manifest in digital abuse like so-called revenge porn, whereby recordings of her sexual activity are exhibited to an unintended audience (explored later in this chapter). While there are obvious privacy concerns raised through such abuse, the fact that slut-shaming is such a potent form of attack provides insight into cultural attitudes towards female sexuality. Unquestionably, since the sexual revolution of the late 1960s, Western society has become more tolerant of sexual expression. Greater permissiveness as related to sexuality has lulled us into thinking that sex can be had without consequence. This has in fact never been true for women. The non-consensual release of a
L. Rosewarne
sex tape itself is obviously harassing, but the very fact that such exposure can destroy a reputation, lead to harassment and be the vehicle by which women are judged and dismissed is illustrative of two sexual double standards. The first is the slut/stud paradox, whereby men and women are appraised differently for their sexual behaviour: she is framed as promiscuous, he is praised as desirable. The second, as hinted at earlier, is that the same culture that dictates that women should be desirable is also the one to shame them for being sexual. While any woman can be subjected to slut-shaming, certain women experience this disproportionately. For women who have other transgressive attributes—non-whiteness, fatness, etc.—slut-shaming takes on some of the elements of appearance-based demonisation discussed earlier. Kinzel, for example, observes:

A fat slut is not merely a sexually promiscuous woman, but a sexually promiscuous woman with whom no decent man would want to have sex. A fat slut is a woman whose sexuality is itself wrong and offensive, because fatness precludes sexual attractiveness. A fat slut is a problem because if a woman is mostly good for sex, and she dares be fat, she has no purpose… (Kinzel, 2012, n.p.)
While it is likely that in many instances "fat slut" is just plucked from the grab-bag of insults routinely hurled at women, certain women who are fat and sexual have this slur directed at them as a means to condemn them specifically for their sexual presentation. Lena Dunham, mentioned earlier—who has performed extensive public nakedness in her work—has been abused this way. Singer Lizzo has experienced similar abuse, centred on the transgressive trinity of her weight, her blackness and her sexiness. Other identity markers—for example, being trans (Wade, 2015)—can compound the insult. Female politicians, both online and off, are frequently victims of slut-shaming, something enacted for reasons including to push them out of the public sphere and to punish them for transgressing the expectation that they primarily be desirable by instead seeking power (Rosewarne, 2018).
Something inextricably linked to slut-shaming, and an element of much digital abuse, is the threat of sexual violence. This is encountered by women who have been slut-shamed but, more broadly, by women who, by virtue of their gender, become prey in a hostile environment. Threats are a central aspect of image-based abuse, discussed in the next section.
Image-Based Abuse

Image-based abuse is a classification covering sexual images that have been taken or created without consent, sexual images that have been shared without consent, and the issuing of threats connected to such material (Henry & Powell, 2015). In this section I focus on two forms of image-based abuse: dick pics and so-called "revenge porn".
Dick Pics

Dick pics are self-taken photos of penises sent by men online. Such photos fall into two categories: solicited dick pics, whereby intimate photos are exchanged between consenting parties (i.e. sexting), and unsolicited dick pics, whereby they are sent without invitation and function as a kind of "cyber-flashing" (Thompson, 2016), akin to exhibitionist crimes that transpire offline. A poll undertaken in 2018 found that 41% of millennial British women reported having received a dick pic that they did not ask for (Smith, 2018), indicating that such abuse is common. While forms of dick pics existed in earlier eras—in the "Angry Women" episode of the 2006 BBC series Lefties, for example, lesbian feminist women talk about being harassed with naked self-taken Polaroids of men in the 1980s—the accessibility, affordability and anonymity (Cooper, 1998) of the internet mean that the volume of such unsolicited material has increased manifold. Such photos highlight the phallocentricity of masculinity: there is no clearer demonstration of "I am here and I am male" than a man sending a picture of his penis. While the assertion here is obviously anatomical,
other kinds are also evident. Waling and Pym, for example, observe in their work on dick pics that in our culture heterosexual masculinity necessitates that men "engage in a set of practices that are phallocentric, heteronormative and position men as active subjects" (Waling & Pym, 2018, p. 72). Akin to propositioning women online as discussed earlier, maleness is actively asserted via the act of sending the photo and penetrating a woman's space with unsolicited sexual imagery. The idea of penis imagery being used as a kind of marking of territory and a method to assert dominance is widely detected, for example, in penis-themed vandalism (Rosewarne, 2004) and penises drawn on ballot papers (Rosewarne, 2016d). Similar ideas around masculinity undergird each: it is a means for men to remind women that no matter where they are, or what they are doing, space can be sexualised as well as masculinised by men and sexual imagery (Rosewarne, 2006). Ringrose and Lawrence (2018), in fact, couple dick pics with the literal—and also distinctly gendered—space-occupying act of manspreading as ways in which men take up space, assert male dominance and push women aside. As with every abuse discussed in this chapter, certain attributes increase the chances of a woman being victimised. In 2019, for example, New Zealand Prime Minister Jacinda Ardern disclosed having been sent such images via Twitter, illustrating that, though any woman is open to such abuse, such behaviour serves as a specific attack on women who take up space and participate in public, and specifically political, discourse. Discussed earlier was the notion of entitlement as related to sexual propositions. Men's entitlement in our culture is also proposed by scholars as one of the interpretations of dick pics:

Unsolicited dick pics may be part of some men's efforts to resist women's increased power to control courtship interactions in the face of cultural shifts in expectations around dating and hooking up. (Hayes & Dragiewicz, 2018, p. 116)
Be it a response to—or punishment of—women's increased power in the realm of dating or, more broadly, the public sphere, such behaviour is indicative of a backlash against feminism, and also offers an interpretation of some of the deepfake porn and doxing experienced by feminist women.
"Revenge Porn"

Be it an aggrieved partner leaking a privately recorded sex tape or a photo archive being hacked and uploaded to a website, through the act of so-called revenge porn not only is privacy invaded, but reputations are potentially catastrophically damaged. The cultural revelations of this form of image-based abuse are manifold. First, the word "revenge" becomes relevant here in that such behaviour can be interpreted as retaliation against women and their sexuality. Be it because she is devalued by the man because she had a hook-up with him and is now considered disposable (Rosewarne, 2016c), or because she is deemed egregious because she cheated on him or left a relationship, the notion that men have rights to retaliate—that they are socially permitted to seek revenge—is illustrative of sexual politics and the threats that hover in the background of women's interactions with men; a threat that the internet amplifies. This kind of abuse functions as a form of slut-shaming and exposes a world where women are judged and condemned based on their sexuality. More broadly, however, the websites that exist to give platform to such material highlight the largely unregulated space of the internet, in conjunction with the multi-million dollar porn industry, in which little incentive exists for platforms to probe the provenance of their uploads. In fact, that such platforms even exist highlights the market for non-consensual content. Revenge porn can also be an issue for women who have never participated in intimate photography, as apparent in deepfake material, whereby digitally fabricated content is created. Discerning fact from fiction is becoming increasingly difficult with the existence of technology that makes it easier to produce ever more convincing fake video footage. Deepfake technology has been used for a variety of purposes, but most relevant for this chapter is its deployment to create fake pornography.
While fake porn is not new—celebrity faces have been crudely pasted onto the bodies of porn stars in videos for many years—in recent years it has become easier and cheaper to produce substantially more realistic content. The more realistic a video, the more difficult it is to debunk.
The existence of such videos, of course, is illustrative of the sexual objectification of women manifesting in abuse: a likeness of a woman is inserted into sexually explicit content for an audience she has no interest in arousing. Such videos are a means by which women get used for purposes of sexual gratification and are an example of the many ways they can be shamed, ridiculed and disparaged online, notably as connected to their gender and sexuality. Gieseke (2020) discusses the "debasing" of women through deepfake pornography. While the moralistic undercurrents of this word sit uncomfortably with me, the idea is nonetheless useful in the sense of understanding what such videos reveal about our culture. While some deepfake porn will primarily be about simply creating arousing material for a commercial audience—and thus "debasement" is an externality rather than the goal—in other examples the intention is to explicitly cause harm in a culture where women are judged and condemned for their sexual behaviour. Maddocks (2020) examines the deepfake porn that used a likeness of actor Bella Thorne and was thought to have been created in retaliation against her advocacy for survivors of sexual violence. Gieseke similarly discusses deepfake porn featuring the likenesses of feminist media critic Anita Sarkeesian and investigative journalist Rana Ayyub. While deepfake porn involving entertainment industry personalities can be interpreted as further catering to an audience that has already come to sexualise such figures, to present academic, activist and journalist women this way can be construed as a retaliation against their professional work—serving as a kind of overt backlash against feminism—and as a means of damaging their professional reputations. Such content can also be construed as yet another attempt to push women out of the public sphere.
It should also come as no surprise that extensive deepfake content exists involving high-profile female political figures such as Alexandria Ocasio-Cortez, Hillary Clinton and Angela Merkel, functioning—as discussed earlier—as a kind of slut-shaming of female politicians daring to be present in the traditionally male world of politics (Rosewarne, 2018).
Sexual Violence

Something that brings together notions of objectification and perceptions of women as sex objects is sexual violence, both threatened and enacted. Sexual violence is one of the many dangers that exist online for women. The most common way sexual violence enters the lives of women online is through digital sexualised abuse. This can take the form of a kind of retaliation or punishment (you deserve to be raped), violent wishful thinking (you should get raped) and, in rarer examples, an outright threat (I'm going to rape you). Rana Ayyub, mentioned earlier, was the recipient of a deluge of rape threats following her inclusion in a deepfake video (Gieseke, 2020). The use of rape in these ways illustrates several cultural factors. First, it taps into the genuine and widespread fear of rape that women harbour (Pryor & Hughes, 2013). Second, and undergirding the first, is the cultural notion that women are rapeable (Cahill, 2001). Allusions to rape function to remind women that their vulnerability lies in their gender and sexuality and that this is the vehicle by which they are punished, in thought or in deed. The concept of "rape culture" describes a society where rape is normalised and where women are viewed as complicit in the abusive things that happen to them (Buchwald et al., 2005). The online abuse of women through allusions to rape is one way that rape culture is maintained, with rape threats being so common that they become construed as just another cost of women daring to be in public space and are in turn not taken seriously. While thus far I have discussed online abuse taking the form of threats of sexual violence, it is necessary to also explore acts of sexual violence occurring online. While "revenge porn" and deepfake porn are two examples already discussed in this chapter, other kinds also exist.
In the earliest days of the internet—notably through virtual reality-type games such as Second Life—feminists became concerned about the things that happened to players and their avatars within these digital worlds, notably as related to incidents of sexual violence (Whitty et al., 2011). In recent years a range of new manifestations of online sexual violence have been observed. One of the many consequences of COVID-19, for example, was the move of many social interactions online.
As a result, "Zoom-bombing"—whereby a video conference is interrupted with, for example, displays of pornography (Dayton, 2020)—is one example; Zoom exhibitionism of the kind that led to CNN analyst Jeffrey Toobin's suspension in October 2020 is another. Harduf (2019) similarly documents cases of "virtual rape" whereby people are blackmailed into performing sexual acts via webcam. Such examples exist as reminders that every new incarnation of technology proffers new ways for women to be abused by men. More broadly, they exist as reminders of women's vulnerability online and the inescapability of being both objectified and abused.
Conclusion: Digital Abuse as Artefact of Objectification

Even with all the successes of the feminist movement in getting the personal to be considered as political, much of the abuse women suffer at the hands of men still happens behind closed doors, without ever being reported to the police. The internet, however, has changed the landscape of abuse, providing not only new ways to abuse and be abused, but also a means to showcase it: the abuse of women online is frequently conducted publicly, making such abuse starkly visible. Such abuse provides a snapshot of the treatment of women in our culture. Women's value—disproportionately hinged under patriarchy to appearance—becomes the entry point for their assault. Regardless of whether a positive or negative appraisal of appearance has transpired, by virtue of being female online—by being present, participating and visible—a woman becomes vulnerable. This vulnerability subjects her to unwanted attention, threats and potential violence. The internet is the place this happens and the tool by which the abuse is enacted. Because this abuse is unfolding online—in front of a global audience—not only are there many witnesses, but the many examples of such gendered abuse work to constitute a dataset that can be viewed as a cultural artefact of the treatment of women in digital public space. While the extent of the abuse of women and the means by which this transpires is hinged to the technology, ultimately it is humans who
have created and who use the internet. Therefore, it should come as no genuine surprise that the same issues and the same misogyny that plague the offline world play out online: arguably the internet just makes it easier to do, on a larger scale, all with the perk of anonymity. The internet might have once been envisaged as a digital utopia and as some great big wonderful marketplace of ideas—and of course, in part it is this—but the technology as a tool and as a place is also marred by a toxicity that is distinctly gendered and which reflects the very worst of patriarchy and the power disparities that exist between men and women.
References

Bell, B. T., Cassarly, J. A., & Dunbar, L. (2018). Selfie-objectification: Self-objectification and positive feedback ("likes") are associated with frequency of posting sexually objectifying self-images on social media. Body Image, 26, 83–89. https://doi.org/10.1016/j.bodyim.2018.06.005

Burgoon, J. K. (1993). Interpersonal expectations, expectancy violations, and emotional communication. Journal of Language and Social Psychology, 12, 30–48. https://doi.org/10.1177/0261927X93121003

Cahill, A. (2001). Rethinking rape. Ithaca, NY: Cornell University Press.

Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.

Cooper, A. (1998). Sexuality and the Internet: Surfing into the new millennium. CyberPsychology & Behavior, 1, 187–193. https://doi.org/10.1089/cpb.1998.1.187

Daufin, E. K. (2015). Black women in fat activism. In R. Chastain (Ed.), The politics of size: Perspectives from the fat acceptance movement (pp. 163–186). ABC-CLIO.

Dayton, J. (2020, April 16). Zoom-bombing and online sexual misconduct: What are our options? ADZ Law. Retrieved January 13, 2021 from https://www.adzlaw.com/victim-advocacy/2020/04/16/zoom-bombing-and-online-sexual-misconduct-what-are-your-options/

DelGreco, M., & Denes, A. (2020). You are not as cute as you think you are: Emotional responses to expectancy violations in heterosexual online dating interactions. Sex Roles, 82, 622–632. https://doi.org/10.1007/s11199-019-01078-0

Dworkin, A. (1989). Pornography: Men possessing women. New York: Plume.
Ellison, J. (2020). Being fat: Women, weight, and feminist activism in Canada. Toronto: University of Toronto Press.

Fiore, K. (2020, October 12). This is how women are fighting back against harassment on LinkedIn. Fast Company. Retrieved January 9, 2021 from https://www.fastcompany.com/90560407/this-is-how-women-are-fighting-back-against-harassment-on-linkedin

Gailey, J. A. (2014). The hyper(in)visible fat woman: Weight and gender discourse in contemporary society. Palgrave.

Gieseke, A. P. (2020). "The new weapon of choice": Law's current inability to properly address deepfake pornography. Vanderbilt Law Review, 73(5), 1479–1515.

Harduf, A. (2019, October 22). Rape goes cyber: Online violations of sexual autonomy. SSRN. Retrieved January 13, 2021 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3473826

Hayes, R. M., & Dragiewicz, M. (2018). Unsolicited dick pics: Erotica, exhibitionism or entitlement? Women's Studies International Forum, 71, 114–120. https://doi.org/10.1016/j.wsif.2018.07.001

Henderson, E. F. (2020). Gender, definitional politics and 'live' knowledge production: Contesting concepts at conferences. Routledge.

Henry, N., & Powell, A. (2015). Beyond the 'sext': Technology-facilitated sexual violence and harassment against adult women. Australian & New Zealand Journal of Criminology, 48(1), 104–118. https://doi.org/10.1177/0004865814524218

Hine, C. (2020). The evolution and diversification of Twitter as a cultural artefact in the British press 2007–2014. Journalism Studies, 21(5), 678–696. https://doi.org/10.1080/1461670X.2020.1719369

Hunte, B. (2019, October 25). Transgender people treated 'inhumanely' online. BBC News. Retrieved January 11, 2021 from https://www.bbc.com/news/technology-50166900

Kimmel, M. (2005). Men, masculinity, and the rape culture. In E. Buchwald, P. R. Fletcher & M. Roth (Eds.), Transforming a rape culture (pp. 140–157). Milkweed.

Kinzel, L. (2012). Two whole cakes: How to stop dieting and learn to love your body. Feminist Press. Retrieved January 11, 2021 from https://books.google.com.au/books?id=shSNAgAAQBAJ

Ledbetter, C. (2015, September 29). Lena Dunham says Twitter 'truly wasn't a safe space for me'. The Huffington Post. Retrieved January 11, 2021 from https://www.huffingtonpost.com.au/entry/lena-dunham-twitter-recode-interview_n_5609b3afe4b0af3706dd7df3?section=australia
Library of Congress. (2010, April). Twitter donates entire tweet archive to Library of Congress. News Releases. Retrieved January 7, 2021 from http://www.loc.gov/today/pr/2010/10-081.html

Maddocks, S. (2020). 'A deepfake porn plot intended to silence me': Exploring continuities between pornographic and 'political' deep fakes. Porn Studies, 7(4), 415–423. https://doi.org/10.1080/23268743.2020.1757499

Mandau, M. B. H. (2020). 'Directly in your face': A qualitative study on the sending and receiving of unsolicited 'dick pics' among young adults. Sexuality and Culture, 24, 72–93.

Mantilla, K. (2015). Gendertrolling: How misogyny went viral. ABC-CLIO.

Moore, H. (1994). The problem of explaining violence in the social sciences. In R. Harvey (Ed.), Sex and violence: Issues in representation and experience (pp. 138–155). Routledge.

Napier, E., Meyer, M. H., & Himes, C. L. (2005). Ageism in the new millennium. Generations, 29(3), 31–36.

Paasonen, S., Light, B., & Jarrett, K. (2019, April–June). The dick pic: Harassment, curation, and desire. Social Media + Society, 1–10. https://doi.org/10.1177/2056305119826126

Pryor, D. W., & Hughes, M. R. (2013). Fear of rape among college women: A social psychological analysis. Violence and Victims, 23(3), 443–465. https://doi.org/10.1891/0886-6708.vv-d-12-00029

Rees, A. (2020, January 17). Are memes worth preserving? ACMI. Retrieved January 8, 2021 from https://www.acmi.net.au/stories-and-ideas/are-memes-worth-preserving/

Ringrose, J., & Lawrence, E. (2018). Remixing misandry, manspreading, and dick pics: Networked feminist humour on Tumblr. Feminist Media Studies, 18(4), 686–704. https://doi.org/10.1080/14680777.2018.1450351

Rosewarne, L. (2004). Marking their territory: A feminist exploration of graffiti. In A. Powell & S. Holland (Eds.), Crime revisited conference proceedings (pp. 93–105). University of Melbourne.

Rosewarne, L. (2006). The men's gallery. Outdoor advertising and public space: Gender, fear, and feminism. Women's Studies International Forum, 28, 67–78. https://doi.org/10.1016/j.wsif.2005.02.005

Rosewarne, L. (2016a). Cyberbullies, cyberactivists, cyberpredators: Film, TV, and Internet stereotypes. Praeger.

Rosewarne, L. (2016b). Intimacy on the Internet: Media representations of online connections. Routledge.
Rosewarne, L. (2016c). Cinema and cyberphobia: Internet clichés in film and television. Australian Journal of Telecommunications and the Digital Economy, 4(1), 36–53. https://doi.org/10.18080/jtde.v4n1.46

Rosewarne, L. (2016d, July 4). Why dick doodles on the ballot paper are their own election statement. The Conversation. Retrieved January 11, 2021 from https://theconversation.com/why-dick-doodles-on-the-ballot-paper-are-their-own-election-statement-61977

Rosewarne, L. (2018, August 29). Slut-shaming and the double standards in Australian politics. Broad Agenda. Retrieved January 13, 2021 from https://www.broadagenda.com.au/2018/slut-shaming-and-the-double-standards-in-australian-politics/

Sashittal, H. C., & Jassawalla, A. R. (2020). The personal influence of Instagram bloggers on consumer–brand interactions: Brands as tribal artifacts. Journal of Brand Management, 27, 679–690. https://doi.org/10.1057/s41262-020-00203-9

Simmons, H., & White, F. (2014). Our many selves. In L. Erickson-Schroth (Ed.), Trans bodies, trans selves: A resource for the transgender community (pp. 3–23). Oxford University Press.

Smith, M. (2018, February 16). Four in ten female millennials have been sent an unsolicited penis photo. YouGov. Retrieved January 9, 2021 from https://yougov.co.uk/topics/relationships/articles-reports/2018/02/16/four-ten-female-millennials-been-sent-dick-pic

Thompson, L. (2016). Dickpics are no joke: Cyber-flashing, misogyny and online dating. The Conversation. Retrieved January 9, 2021 from http://theconversation.com/dickpics-are-no-joke-cyber-flashing-misogyny-and-online-dating-53843

Thompson, L. (2018). "I can be your Tinder nightmare": Harassment and misogyny in the online sexual marketplace. Feminism and Psychology, 28(1), 69–89. https://doi.org/10.1177/0959353517720226

Wade, M. (2015, November). Let's talk about sex. Star Observer, 12–15.

Wajcman, J. (2013). TechnoFeminism. Polity Press.

Waling, A., & Pym, T. (2018). 'C'mon, no one wants a dick pic': Exploring the cultural framings of the 'dick pic' in contemporary online publics. Journal of Gender Studies, 28(1), 70–85. https://doi.org/10.1080/09589236.2017.1394821

Whitty, M. T., Young, G., & Goodings, L. (2011). What I won't do in pixels: Examining the limits of taboo violation in MMORPGs. Computers in Human Behavior, 27(1), 268–275. https://doi.org/10.1016/j.chb.2010.08.004

Woolgar, S. (1996). Technologies as cultural artefacts. In W. Dutton (Ed.), Information and communication technologies—Visions and realities (pp. 87–102). Oxford University Press.
Part III

Stalking and Partner Abuse
9 'Intimate Intrusions': Technology Facilitated Dating and Intimate Partner Violence

Anastasia Powell
A. Powell, Criminology and Justice Studies, RMIT University, Melbourne, VIC, Australia. e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_9

Introduction

Worldwide, over a third of women have experienced physical and/or sexual intimate partner violence, or non-partner sexual violence, in their lifetime (World Health Organisation, 2013). Indeed, intimate partner violence is one of the most common forms of violence against women, and is typically defined as including physical, sexual, psychological and emotional abuse, as well as stalking and controlling behaviours by a current or former intimate partner (Centers for Disease Control and Prevention, 2020; World Health Organisation, 2013). Rapid developments in communications and surveillance technologies have, perhaps unsurprisingly, been increasingly utilised as tools in the hands of perpetrators of intimate partner violence. It is a trend that was first noted
by domestic violence services and victim advocates, foremost in the United States, almost 20 years ago (see e.g. Kranz & Nakumara, 2002; Southworth, 2003; Southworth et al., 2005). Yet broader scholarly investigation into the extent, nature, impacts and outcomes of diverse forms of technology facilitated partner violence has remained less developed until more recently. In particular, though there are a plethora of studies on cyberstalking and cyberbullying, fewer studies have engaged specifically with adult women's experiences of a range of technology facilitated abuses from current or former intimate partners and dates (for notable exceptions see Borrajo, Gámez-Guadix, & Calvete, 2015; Brown & Hegarty, 2021; Douglas et al., 2019; Harris, 2018; Harris & Woodlock, 2019; Henry, Flynn, & Powell, 2020; Yardley, 2020). This chapter provides an overview of the current state of the field of research into technology facilitated dating and intimate partner violence. Drawing on feminist criminologies, it further develops some key concepts in the field, seeking to advance understanding of both the familiar and unique aspects of these harms. The chapter begins by summarising the extant literature in the field, with a focus on abuse experienced by women in the context of dating and intimate partner relationships, including foremost: digital dating abuse, domestic violence and digital coercive control, as well as partner sexual violence. The next section then discusses established feminist conceptual frameworks on violence against women, in particular the works of Elizabeth Stanko and Liz Kelly, and considers how these may continue to be relevant to framing dating and intimate partner violence in the digital age. Finally, in an elaboration of 'continuum thinking' (Boyle, 2018), the chapter considers future directions in the field.
Technology Facilitated Dating and Intimate Partner Violence

Internationally, technology facilitated abuse has garnered increasing policy, program and research attention. The term is wide-ranging and inclusive of many sub-forms of abuse, including partner and family
violence, stalking, image-based abuse, sexual violence, sexual harassment, as well as gender and sexuality-based hate (many of which are addressed elsewhere in this volume). To date, policy and emerging research have typically focused on family violence in particular, and the ways that technologies are implicated in monitoring and controlling behaviours, typically by cohabiting male partners or ex-partners (see for example Douglas et al., 2019; Dragiewicz et al., 2018; Harris & Woodlock, 2019; Woodlock, McKenzie, et al., 2020), as well as image-based abuse or what has been known colloquially as 'revenge pornography' (Flynn & Henry, 2019; Henry et al., 2020; Henry, Powell, & Flynn, 2017; McGlynn & Rackley, 2017; Powell et al., 2018, 2019, 2020). For the purposes of this chapter, I have sought to narrow the focus squarely on the abuse experienced by women in the context of dating and intimate partner relationships. It is not my intention that these necessarily represent discrete categories or types of technology facilitated abuse. Indeed, there is a 'constellation of tactics' (see Reed et al., 2016) that might be used by perpetrators in a range of relational contexts and for a variety of perpetrator goals, including for sexual gratification, as well as victim coercion, humiliation, and control. Technology facilitated tactics such as image-based abuse, online threats, or monitoring and stalking behaviours, for example, might be utilised by strangers, colleagues, acquaintances or friends, as well as in the context of dating and intimate relationships. Though there may thus be parallels between the different contexts in which technology facilitated abuse occurs, arguably it remains important to shine a spotlight on women's experiences of dating and partner violence specifically.
This is not least because women's experiences of dating and partner violence are so common, but also because these forms of abuse are often repeated, with long-lasting impacts on victims, and because there is a high overlap between technology facilitated, physical and/or sexual forms of partner violence (Borrajo, Gámez-Guadix, & Calvete, 2015; Melander & Marganski, 2020). Nonetheless, for the purposes of this chapter, I find it useful to broadly group the research literature into
A. Powell
digital dating abuse, domestic violence, and sexual violence in particular, as these represent clear (though overlapping) spheres of research in the field to date.1
Digital Dating Abuse

Digital Dating Abuse (DDA) refers to aggressive, abusive and sexually harassing behaviours that are perpetrated with the aid of technologies by a current or former dating partner, though certainly ambiguity exists in the field regarding both definitions and terminology (see Brown & Hegarty, 2018). For example, studies variously refer to cyber dating abuse (Borrajo et al., 2015; Caridade et al., 2019; van Ouytsel et al., 2016), cyber aggression in relationships (Watkins et al., 2018), electronic dating violence (Cutbush et al., 2021; Hinduja & Patchin, 2011), and electronic intrusion (Doucette et al., 2021; Reed et al., 2015), as well as digital dating abuse (Brown & Hegarty, 2018; Tompson et al., 2013). Though the language, definition and scope of 'dating partner' (and consequently, dating abuse) vary considerably across research and legislative contexts, the term is commonly used in at least two ways. Firstly, the international research literature largely uses 'dating partner' and 'dating abuse' when referring to adolescents and young adults who are engaged in romantic and/or sexual intimacy, typically outside of a cohabiting, de facto, domestic partner and/or marriage relationship (see Borrajo, Gámez-Guadix, & Calvete, 2015; Brown & Hegarty, 2018; Melander & Marganski, 2020). Secondly, in many legal contexts, 'dating partner' refers to non-residing and/or non-financially co-dependent romantic or intimate partners, which extends, for instance, to dating partners at any point in the life course. Whether someone is considered a 'casual partner' or 'dating partner', as opposed to a 'de facto', 'domestic partner' or 'spouse', can carry specific legal implications.
For example, in some United States’ (US) jurisdictions, different offences exist for battery or assault of a dating partner (defined as a non-financially dependent partner, though excluding casual partners) as compared with domestic 1 For a more detailed discussion of terminology and nomenclature in the field, see Henry, Flynn, and Powell (2020).
9 ‘Intimate Intrusions’ …
161
partner battery or assault (see e.g. LA Rev Stat, §34.9, §35.3). Furthermore, penalties for criminal assault can be higher if the assault meets the requirements of specific domestic violence statutes that can apply where a couple are either married, cohabiting or have children together. The range of behaviours generally included within DDA vary across the research, but generally studies report multiple dimensions of aggressive, abusive and sexually harassing behaviours. For instance, a study by Reed and colleagues (2017) with 703 high school students found three dimensions of DDA: Digital Sexual Coercion (including pressured sexting, unsolicited sexts, and non-consensual distribution of a sexual image or ‘image-based abuse, discussed further below), Digital Direct Aggression (including sending threatening messages, posting mean or hurtful messages online, teasing or spreading hurtful rumours or put downs online), and Digital Monitoring/Control (including frequently checking who a person is with, their whereabouts and activities, and looking at private information without permission). Internationally, many studies of DDA have focused on the experiences of adolescents and young adults, often in high school and university/college settings, and varying widely in age though frequently from 12 to 24 years. Prevalence of DDA behaviours might be anticipated to be high amongst this cohort, due to both the high uptake of digital technologies amongst younger adults, as well as the life stage of adolescents and young adults with regards to increasingly active dating and intimacy. 
Indeed, in one study of 788 adults aged between 18 and 30 years who had been in a dating relationship, prevalence of cyber dating abuse was reportedly 10.6% for 'direct aggression' perpetration (such as threats, insults, image-based abuse and identity theft) and 14% for victimisation; whilst for a second factor of 'control/monitoring' behaviours (such as surveillance and invasion of privacy), rates were 75% for perpetration and 82% for victimisation (Borrajo et al., 2015). The same study found that cyber dating abuse was highly likely to be repeated, with a mean chronicity of 5.16 incidents in the preceding year for perpetration and 4.83 for victimisation; that cyber dating abuse was also associated with offline forms of dating violence; and that there was a high overlap between those who had experienced victimisation and those engaging in perpetration of cyber dating abuse (Borrajo et al., 2015). Other studies
have variously reported 13% victimisation and 12% perpetration (high school students with mean age 18.09 years, Temple et al., 2016); 26% victimisation and 12% perpetration (12–18 years, Zweig et al., 2013); and 65% victimisation (16–22 years, van Ouytsel et al., 2016), with some studies reporting as many as 77% of participants experiencing electronic victimisation in their dating relationships (college students with mean age 20 years, Bennett et al., 2011). Several studies have found that digital dating abuse victimisation is more commonly experienced by girls and young women (e.g. Barter et al., 2009; Zweig et al., 2013), with others further finding that such abuse is more likely to be perpetrated by boys and young men (e.g. Deans & Bhogal, 2019). Some studies, meanwhile, report little to no gender difference (e.g. Bennett et al., 2011; Wolford-Clevenger et al., 2016; Zapor et al., 2017), or conversely that gender differences are evident only for sub-forms of abuse, with men and boys more likely to perpetrate sexualised forms of DDA (Reed et al., 2016, 2017; Smith-Darden et al., 2017). Overall, what the prevalence data suggest is that DDA is very common amongst high school and college-age populations. Notably, however, few studies examine dating abuse behaviours among adults in their twenties and thirties, which is an important limitation of current research given increasing delays in marriage/cohabiting relationships and increases in divorce (e.g. Kreider & Ellis, 2011), which together result in many more adults remaining single and/or actively dating in their 30s and beyond (Manning, 2020). Research suggests that there may be differential gendered impacts of DDA, with girls and young women more likely to report distress and harm than boys and young men. For instance, Reed et al.
(2017) found that girls reported being significantly more distressed than boys in response to three types of DDA: digital sexual coercion, digital direct aggression, and digital monitoring/controlling behaviours. In particular, more girls than boys experienced emotional distress in response to behaviours such as receiving pressure to send a sext, receiving unsolicited sexual images, pressure to engage in sexual activities, receiving threats, and mean or hurtful messages. Girls were also reportedly more likely to take action in response to a range of DDA behaviours in order to
limit their partner's continued access to them (Reed et al., 2017). Meanwhile, in a qualitative study with 38 young adults (aged 16–24 years), Brown and colleagues (2020) found a range of gendered differences in young people's perceptions of harmful digital behaviours. In particular, young people reported that men were typically those perpetrating sexually based harms (such as sharing nude images of women), that women experience serious emotional impacts from digital harms, and that men tend to underestimate the seriousness or severity of digital harms on women (Brown et al., 2020). Moreover, men tended to describe the impacts on them in reputational terms (such as embarrassment or shame), with women reportedly feeling fearful, anxious and vulnerable. The study further found that both men and women engage in controlling or monitoring behaviours, but that these may differ in nature and context, with participants describing men as more overtly verbally controlling, while women may be more covert in monitoring a partner's communications with others (Brown et al., 2020). Additionally, some studies have reported correlations between perpetration of cyber dating abuse and adherence to hostile and benevolent sexism (Martinez-Pecino & Durán, 2019), as well as traditional gender norms and tolerance for violence generally (Reyes et al., 2016). Such research echoes findings in related fields suggesting that, in addition to overall prevalence rates, the impacts and contexts of abuse are important when considering its gendered nature (e.g. Henry et al., 2020).
Domestic Violence and Digital Coercive Control

A range of terms have been used to refer to the use of technologies in the perpetration of domestic or intimate partner violence. These include: technology facilitated domestic and family violence (Douglas et al., 2019), technology facilitated domestic abuse (Yardley, 2020), technology-facilitated stalking (Woodlock, 2017), digital coercive control (Harris & Woodlock, 2019; Woodlock, McKenzie, et al., 2020), and partner cyber abuse (Wolford-Clevenger et al., 2016). Some of the earliest research in the field identified technology specifically as a tool utilised by perpetrators of intimate partner or domestic violence in the context of stalking and monitoring behaviours (e.g.
Southworth et al., 2007). Indeed, intimate partner stalking has long been readily assisted by technologies, including through: repeated calls or messages; interception or monitoring of a victim-survivor's communications (such as e-mail, phone, SMS, and more recently social network sites and chat platforms); tracking a victim's whereabouts (via an array of location technologies); threats made or offensive material sent via digital communications; unauthorised access to and/or impersonation via computer and mobile devices and/or accounts; publishing private information ('doxxing'); and unauthorised surveillance recording of audio or imagery (Woodlock, 2017). As technologies become further embedded in our everyday lives, however, research has increasingly moved beyond a framing of cyberstalking as a sub-form of domestic violence, and instead highlights a wider range of abusive tactics employed by perpetrators of domestic violence involving technologies in various ways (e.g. Douglas et al., 2019). Of particular note in the research is the role technologies can play in enabling coercive control. The passage of the Domestic Abuse (Scotland) Act 2018, regarded internationally as leading the way in domestic violence legislation, has arguably brought coercive control further into the legal spotlight (see Stark & Hester, 2019). Originally coined by Evan Stark (1994, 2007), in part as a critique of the deficit model presented by 'battered woman syndrome', coercive control is described by Stark as:

an offense to liberty that prevents women from freely developing their personhood, utilizing their capacities, or practising citizenship, consequences they experience as entrapment. (Stark, 2007, p. 4, emphasis in the original)

The concept emphasises the need to see the holistic pattern of a perpetrator's abusive behaviours, rather than the traditional criminal justice focus on discrete physical or sexual assault—a 'violent incidents' model of domestic violence (Stark, 2012). Strategies of coercive control include: isolation, intimidation, threats, shaming, gaslighting, surveillance, stalking and degradation (Stark, 2007). Whilst each individual
act might be defined as a minor offense, or in many cases not considered criminal, the pattern of relentless control with little respite leaves a victim/survivor in a highly restrictive space in which she is constantly monitoring and adapting her behaviour in order to avoid abuse (Stark, 2007, 2012). Digital coercive control then refers to the "use of digital devices and digital media to stalk, harass, threaten and abuse partners or ex-partners (and children)" (Harris & Woodlock, 2019, p. 533). Also referred to as 'technology-facilitated coercive control' (Dragiewicz et al., 2018), abusive behaviours may include many of the hallmarks of 'cyberstalking' described above. Yet it is also clear that perpetrators who are cut off from some means of access and control over a former partner (such as if a woman leaves the relationship, and/or there is a civil order to desist) will utilise innovative methods to continue their abuse. For instance, banks and women's services have both described abusive ex-partners using the description fields on bank transfers, often of less than $1, to send threatening messages (Grieve, 2020). Children's toys have been embedded with tracking devices, listening devices and even video recording devices (Damajanovic, 2017). Software applications to monitor a person's communications and location, also known as 'spyware' or 'stalkerware', can be installed either surreptitiously or through coercion (Harkin & Molnar, 2021). Technologies are also implicated in workplace disruption, through communications utilised for pestering and harassment, as well as various means of sabotaging women's employment (Showalter, 2016). Overall, though controlling behaviour has long featured in the constellation of tactics of domestic abusers, technologies have certainly expanded the repertoire of tools for abuse.
Indeed, as some researchers have noted, 'switching off' the technology or access of abusers does not always make women safer, as this may be precisely when a perpetrator escalates their abuse (Woodlock et al., 2020). In their 2019 paper, Harris and Woodlock further describe the 'spacelessness' that characterises the impacts of digital coercive control, such that perpetrators achieve an omnipresence in the lives of victim/survivors. As such, the abuse can transcend time, space and place, invading every aspect of a victim/survivor's life, through various channels, and at any time. Though an abuser may not be monitoring a
victim/survivor 24/7 (though technological affordances certainly enable this potential), the knowledge that an abuser could be monitoring through various means and avenues at any given time results in victims feeling constantly scrutinised and adjusting their behaviours accordingly. Digital coercive control is, arguably, an ultimate form of 'panoptic surveillance' in the Foucauldian sense (Foucault, 1977; see also Elmer, 2012). In turn, the impact on victim/survivors' sense of autonomy and mental wellbeing is well documented in the international literature (Harris & Woodlock, 2019; Woodlock, 2017). Coercive control has been described as a predominantly gendered phenomenon (Anderson, 2009; Stark, 2007). Prevalence of victimisation and perpetration is one aspect of this gendered nature. In Australia, for instance, analyses of national victimisation surveys report that 1 in 6 women (and 1 in 17 men) have experienced physical and/or sexual violence by a current or former cohabiting partner since the age of 15; whilst 1 in 4 women (and 1 in 6 men) have experienced emotional partner abuse including controlling behaviours, threats and/or humiliation (ABS, 2016). Yet, as international research demonstrates, the gendered prevalence of coercive control behaviours in particular becomes more pronounced when using more comprehensive measures. For example, in an analysis of the Crime Survey for England and Wales (CSEW), Myhill (2015) found that, of victim/survivors of intimate partner violence, women (30%) were significantly more likely than men (6%) to have also experienced abuse that constituted coercive control. Indeed, the study further found that victims of coercive control were in turn more likely to have experienced greater frequency of physical violence (Myhill, 2015).
Similarly, Walby and Towers (2018), again analysing CSEW data, found that all patterns of domestic violence were gendered, whereby women were more likely to experience victimisation, but that the more frequent and injurious the violence, the greater the gender asymmetry in victimisation. Few studies to date have examined the prevalence of technology facilitated coercive control, though surveys with domestic violence support workers and interviews with victim/survivors indicate that technologies are routinely present as tools or strategies in controlling behaviours and emotional abuse (Flynn et al., 2021; Harris & Woodlock, 2019; Woodlock, Bentley, et al., 2020).
In seeking to explain the gendered nature of coercive control, Stark (2007) states that it is women's structural inequality relative to men (which itself affords women unequal resources) that makes women more vulnerable to the strategies of coercive control. As Anderson (2009) highlights, however, psychological theory and research suggest that there is a further link between individual men's use of coercive control against women and their experiences, as well as re-construction, of particular masculine identities. Applying West and Zimmerman (1987), Anderson (2009) further argues that men are more able to achieve control over women because social and cultural norms define the performance of masculinity as controlling others, whilst femininity is often defined by subservience. At an individual level, this further explains the correlations regularly found between adherence to rigid and traditional gender roles and expressions, and the endorsement, or indeed enactment, of violence against women (Flood & Pease, 2009; Herrero et al., 2017; Webster et al., 2019, 2021).
Sexual Violence and Image-Based Abuse

Sexual violence can feature both in dating abuse and in domestic violence and coercive control, highlighting once again that these are not neatly distinct categories of abuse, though they may not always co-occur. Indeed, there are various ways in which technology further enables and features in sexual violence. Technology Facilitated Sexual Violence (or TFSV, Powell, 2010; Powell & Henry, 2014, 2017, 2016/2019) encompasses a broad range of sexually harassing, coercive and/or abusive behaviours perpetrated via digital and other communications technologies. According to Powell and Henry (2016, 2019), such behaviours can include: digital sexual harassment (such as unwanted sexually offensive communications and/or requests); sexual aggression and coercion (where unwanted sexual experiences, either in person or online, such as via live video, are enabled or coerced via technologies including dating apps); image-based abuse (where nude or sexual images are taken, created or distributed, or threats are made to distribute them, without consent); as well as gender and sexuality-based harassment (where offensive and/or degrading
communications or content are directed at a person because of their gender or sexuality). It is well established in the international literature that, overall, the majority of victims of sexual violence are women, with perpetrators predominantly men who are known to them (see e.g. ABS, 2016; Brunton-Smith et al., 2020; World Health Organisation, 2013). Yet TFSV is reportedly similarly likely to be experienced by both men and women (see Powell & Henry, 2016, 2019). However, continuing research suggests that the extent, nature and impacts of different types of TFSV behaviours, as well as their perpetration, are demonstrative of gender-based violence. The US Pew Online Harassment Survey, for instance, found that women were more likely than men to experience online sexual harassment and cyberstalking in particular (Pew Research Center, 2014). Meanwhile, in a recent Canadian study with university students, Snaychuk and O'Neill (2020) report that women were significantly more likely to have experienced many of the TFSV behaviours as developed by Powell and Henry (2016/2019), including sexually harassing, stalking, monitoring, threatening and coercive sexual behaviours. Men, meanwhile, were as likely as women to have had nude or semi-nude images taken or distributed without permission, and to have experienced simulated sexual violence in a gaming environment (Snaychuk & O'Neill, 2020). This echoes similar findings in relation to image-based abuse victimisation by Powell and colleagues (2018), where men's and women's overall victimisation rates showed little difference, though women were more likely to experience such abuse in the context of an intimate relationship or former relationship. With regards to perpetration, however, Powell and colleagues (2019) have further found that men are significantly more likely than women to self-report engaging in image-based abuse perpetration behaviours.
Research further suggests that the impacts of TFSV, including image-based abuse, differ markedly by gender. In particular, several studies report that women experience significant psychological and emotional distress, with associated impacts on their mental wellbeing including anxiety and depression (see e.g. Bates, 2017; McGlynn et al., 2020; Patel & Roesch, 2020; Powell & Henry, 2016, 2019; Snaychuk & O'Neill, 2020). Furthermore, research into image-based abuse has
suggested that for women victims in particular, the abuse is more likely to co-occur with other abusive behaviours and to cause the victim/survivor to feel fearful for their safety (Henry et al., 2020; Rackley et al., 2021). Such findings suggest that in women's victimisation via technology facilitated abuses in particular, there are often overlaps in the abusive tactics experienced, which in effect exacerbate the impacts of each abusive incident.
A Continuum of 'Intimate Intrusions'

In 1985, Elizabeth 'Betsy' Stanko first published Intimate Intrusions: Women's Experience of Male Violence, a treatise on the intimidation, threats, coercion and violence that women experience from men in their everyday lives. In it, Stanko (1985) argues that women are 'continually on guard to the possibility of men's violence' (p. 1); not knowing whether a harassing comment, a slap or a grab may escalate to further violence. Too often, as Stanko notes, women themselves are blamed and responsibilised for men's violence, as it is viewed as a problem 'of women's respectability, not men's behaviour' (p. 4). Related feminist scholarship on men's violence against women has likewise identified that, rather than discrete categories of violence and non-violence, women's experiences exist along a 'continuum' of transgressions, from everyday intrusions or 'small interruptions' (as Vera-Gray, 2018 has also identified), through to pressure, coercion, and force (Kelly, 1987). These experiences are also frequently minimised, such that women are told that 'nothing really happened' (Kelly & Radford, 1990, emphasis added), or that she was 'asking for it' (e.g. Edwards et al., 2011), or, in the case of intimate partner violence, that if it was that bad, 'why doesn't she just leave?' (e.g. Anderson et al., 2003). Moreover, such intrusions have a cumulative effect on women, with each transgression heightening victims' experience of vulnerability to male violence, their (reasonable) fear that it will escalate, and their investment in the 'safety work' that will inevitably fail (Vera-Gray, 2018).
Technology facilitated dating, sexual and partner abuse take place in the context of a continuum of intimate intrusions, whether 'small interruptions' (Vera-Gray, 2018) or larger 'ruptures' (Henry et al., 2020; McGlynn et al., 2020), to women's selfhood and autonomy. Taken individually, many of the intrusions described earlier might be too readily minimised: 'it's not really a threat if it's online', or 'she was asking for it when she took those nude selfies', or, if it is that bad, 'why doesn't she just stay offline?'. But when the continuity of these abuses and their cumulative effect on women, both individually and collectively, as well as both online and offline, are taken into account, it becomes apparent just how invasive and damaging technology facilitated abuse can be. Karen Boyle (2018) posits a useful extension of Kelly's (1987) continuum of sexual violence, highlighting "the importance of continuum thinking as a means of making connections, whilst noting the importance of clarity in relation to the nature of these connections and the necessity of distinction within this" (p. 28, emphasis in original). Such 'continuum thinking', Boyle suggests, ought to take place in the plural, such that technology facilitated abuse, for instance, may connect both with women's experiences of abuse along a continuum of sexual violence, and with a continuum of online and offline harms (see also Henry, Flynn, & Powell, 2020 for a discussion). In turn, this suggests that we ought to acknowledge offline and online abuse as interconnected and, in the context of partner violence, often co-occurring; they may be better understood as different tactics of the one pattern of perpetrator behaviour rather than as distinct types of harm.
However, as colleagues and I have argued elsewhere, it is also currently necessary to highlight the specific instances and tactics of technology facilitated abuse in order to ensure that policy, legal and support responses properly take account of these rapidly developing tactics of abuse (see Henry et al., 2020; Powell & Henry, 2017). That said, we are quickly approaching a time in which the embeddedness of technology in everyday life, and indeed in everyday experiences of violence and abuse, will be so taken for granted that it may no longer be considered remarkable.
Flipping the Continuum to Men's Violences

Conceptually, of course, continuum thinking is vital not only for identifying the cumulative impact of men's violence on women's lives across various relational contexts, but also because, when we identify the continua of men's violence, we can more readily see their extent, their institutionalisation and their structural underpinnings. Indeed, Boyle (2018) makes a further extension to continuum thinking that is additionally useful for framing technology facilitated partner violence and its interconnections with other forms of both online and offline abuse. She writes that:

…it can be equally useful to feminist analysis to think of men's behaviours (rather than women's experiences) on a continuum. The continuum of men's violences allows us to think of violence as being gender based not because of who it targets but, rather, because of how that violence is understood in relation to perpetrators' gender performances. (Boyle, 2018, p. 29)
In other words, Boyle argues that the term 'gender-based violence' highlights not only that such experiences are gender-based because women experience them disproportionately and typically at the hands of men (i.e. demographically gender-based), but also that men's use of violence tactics to varying degrees, in different contexts, and even towards different types of victims, can often be understood 'as expressions of patriarchal belief systems' (p. 30). She is furthermore careful to note that hers is a constructionist, rather than a biological, argument regarding the interconnected and gendered nature of men's violences. They are connected through "the ways in which they relate to and embody particular – often normative – constructions of gender roles" (p. 31, emphasis added). Arguably, this is what Evan Stark's (1994, 2007) concept of coercive control also sought to do, in the context of domestic violence at least: to switch our focus from women's 'learned helplessness' and questions as to 'why she stays', and onto the effectiveness of a set of strategies by abusers that remove women's autonomy and entrap them in their personal lives. Increasingly, research is emphasising the need for such nuanced and complex understandings of what constitutes gender-based violence itself
and how to frame and understand these harms. It is a consideration, I suggest, that is likewise urgently needed within the rapidly developing field of technology facilitated abuse.
Conclusion

This chapter has discussed the current state of the field of research into technology facilitated dating and intimate partner violence. The overlapping spheres of research into digital dating abuse, domestic violence and digital coercive control, and sexual violence (including image-based abuse) have been summarised and examined with particular respect to their gendered nature and impacts. The discussion presented here suggests a need for research into women's experiences of technology facilitated abuses to carefully consider both the relational contexts of these harms and the gendered patterns associated with their frequency, co-occurrence and impacts. Ultimately, addressing gendered violence will require not only responding to women's experiences of it, but also addressing the gendered behaviours, expressions and attitudes that normalise not only men's violences, but indeed practices of gender inequality and restrictive gender roles more generally.

Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Anderson, K. L. (2009). Gendering coercive control. Violence Against Women, 15(12), 1444–1457.
Anderson, M. A., Gillig, P. M., Sitaker, M., McCloskey, K., Malloy, K., & Grigsby, N. (2003). "Why doesn't she just leave?": A descriptive study of victim reported impediments to her safety. Journal of Family Violence, 18(3), 151–155.
Australian Bureau of Statistics (ABS). (2016). Personal safety survey. Australian Bureau of Statistics. Retrieved from https://www.abs.gov.au/statistics/people/crime-and-justice/personal-safety-australia/latest-release
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42.
Barter, C., McCarry, M., Berridge, D., & Evans, K. (2009). Partner exploitation and violence in teenage intimate relationships. NSPCC. Retrieved from http://www.nspcc.org.uk/inform/research/findings/partner_exploitation_and_violence_summary_wdf68093.pdf
Bennett, D. C., Guran, E. L., Ramos, M. C., & Margolin, G. (2011). College students' electronic victimization in friendships and dating relationships: Anticipated distress and associations with risky behaviors. Violence and Victims, 26(4), 410–429.
Borrajo, E., Gámez-Guadix, M., & Calvete, E. (2015). Cyber dating abuse: Prevalence, context, and relationship with offline dating aggression. Psychological Reports, 116(2), 565–585.
Borrajo, E., Gámez-Guadix, M., Pereda, N., & Calvete, E. (2015). The development and validation of the cyber dating abuse questionnaire among young couples. Computers in Human Behavior, 48, 358–365.
Boyle, K. (2018). What's in a name? Theorising the inter-relationships of gender and violence. Feminist Theory, 20(1), 19–36.
Brown, C., & Hegarty, K. (2018). Digital dating abuse measures: A critical review. Aggression and Violent Behavior, 40, 44–59.
Brown, C., & Hegarty, K. (2021). Development and validation of the TAR Scale: A measure of technology-facilitated abuse in relationships. Computers in Human Behavior Reports, 3, 100059.
Brown, C., Flood, M., & Hegarty, K. (2020). Digital dating abuse perpetration and impact: The importance of gender. Journal of Youth Studies, 1–16. https://doi.org/10.1080/13676261.2020.1858041
Brunton-Smith, I., Flatley, J., & Tarling, R. (2020). Prevalence of sexual violence: A comparison of estimates from UK national surveys. European Journal of Criminology, online ahead of print.
Buiten, D., & Naidoo, K. (2020). Laying claim to a name: Towards a sociology of "gender-based violence". South African Review of Sociology, 1–8. https://doi.org/10.1080/21528586.2020.1813194
Caridade, S., Braga, T., & Borrajo, E. (2019). Cyber dating abuse (CDA): Evidence from a systematic review. Aggression and Violent Behavior, 48, 152–168.
174
A. Powell
Centers for Disease Control and Prevention. (2020). Intimate partner violence. Retrieved from www.cdc.gov/violenceprevention/intimatepartnerviolence Cutbush, S., Williams, J., Miller, S., Gibbs, D., & Clinton-Sherrod, M. (2021). Longitudinal patterns of electronic teen dating violence among middle school students. Journal of Interpersonal Violence, 36 (5–6), NP2506– NP2526. Damajanovic, D. (2017). Abusive partners stalking women with tracking devices in toys, prams. ABC News. Retrieved from https://www.abc.net. au/news/2017-06-07/domestic-violence-perpetrators-using-technology-totrack-victims/8572944 Deans, H., & Bhogal, M. S. (2019). Perpetrating cyber dating abuse: A brief report on the role of aggression, romantic jealousy and gender. Current Psychology, 38(5), 1077–1082. Doucette, H., Collibee, C., Hood, E., Gittins Stone, D. I., DeJesus, B., & Rizzo, C. J. (2021). Perpetration of electronic intrusiveness among adolescent females: Associations with in-person dating violence. Journal of Interpersonal Violence, 36 (11–12), 6581-6601. Douglas, H., Harris, B. A., & Dragiewicz, M. (2019). Technology-facilitated domestic and family violence: Women’s experiences. The British Journal of Criminology, 59 (3), 551–570. Dragiewicz, M., Burgess, J., Matamoros-Fernández, A., Salter, M., Suzor, N. P., Woodlock, D., & Harris, B. (2018). Technology facilitated coercive control: Domestic violence and the competing roles of digital media platforms. Feminist Media Studies, 18(4), 609–625. Edwards, K. M., Turchik, J. A., Dardis, C. M., Reynolds, N., & Gidycz, C. A. (2011). Rape myths: History, individual and institutional-level presence, and implications for change. Sex Roles, 65 (11–12), 761–773. Elmer, G. (2012). Panopticon-discipline-control. In K. Ball, K. Haggerty, & D. Lyon. Routledge handbook of surveillance studies. Routledge. Flood, M., & Pease, B. (2009). Factors influencing attitudes to violence against women. Trauma, Violence & Abuse, 10 (2), 125–142. 
Flynn, A., & Henry, N. (2019). Image-based sexual abuse: An Australian reflection. Women and Criminal Justice, online first. https://doi.org/10.1080/089 74554.2019 Flynn, A., Powell, A., & Hindes, S. (2021). Technology-facilitated abuse: A survey of support services stakeholders. Research report. Australia’s National Research Organisation for Women’s Safety (ANROWS). Grieve, C. (2020, June 4). ‘We can see you’: CBA to ban customers that send abusive messages. The Sydney Morning Herald . Retrieved from https://www.
9 ‘Intimate Intrusions’ …
175
smh.com.au/business/banking-and-finance/we-can-see-you-cba-to-ban-cus tomers-that-send-abusive-messages-20200604-p54zla.html Harkin, D., & Molnar, A. (2021). Operating-system design and its implications for victims of family violence: the comparative threat of smart phone spyware for Android versus iPhone users. Violence against Women, 27 (6–7), 851–875. Harris, B. A. (2018). Spacelessness, spatiality and intimate partner violence. In Intimate partner violence, risk and security: Securing women’s lives in a global world , 52–70. Routledge. Harris, B. A., & Woodlock, D. (2019). Digital coercive control: Insights from two landmark domestic violence studies. The British Journal of Criminology, 59 (3), 530–550. Henry, N., Flynn, A., & Powell, A. (2020). Technology-facilitated domestic and sexual violence: a review. Violence against Women, 26 (15–16), 1828– 1854. Henry, N., Powell, A., & Flynn, A. (2017). Not just ‘revenge pornography’: Australians’ experiences of image-based abuse. A summary report. Melbourne: RMIT University. Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. London: Routledge. Herrero, J., Torres, A., Rodríguez, F. J., & Juarros-Basterretxea, J. (2017). Intimate partner violence against women in the European Union: The influence of male partners’ traditional gender roles and general violence. Psychology of Violence, 7 (3), 385. Hinduja, S., & Patchin, J. W. (2011). Electronic dating violence. Kelly, L. (1987). The continuum of sexual violence. In J. Hanmar & M. Maynard (Eds.), Women, violence and social control (pp. 46–60). Palgrave Macmillan. Kelly, L., & Radford, J. (1990). “Nothing really happened”: The invalidation of women’s experiences of sexual violence. Critical Social Policy, 10 (30), 39–53. Kranz, A. L., & Nakumara, K. (2002). Helpful or harmful? 
How innovative communication technology affects survivors of intimate violence. Minnesota Center Against Violence and Abuse, University of Minnesota. Kreider, R. M., & Ellis, R. (2011). Number, timing, and duration of marriages and divorces: 2009. Current population reports (pp. 70–125). U.S. Census Bureau. Manning, W. D. (2020). Young adulthood relationships in an era of uncertainty: A case for cohabitation. Demography, 57 , 799–819.
176
A. Powell
Martinez-Pecino, R., & Durán, M. (2019). I love you but I cyberbully you: The role of hostile sexism. Journal of Interpersonal Violence, 34 (4), 812–825. McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37 (3), 534–561. McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). ‘It’s torture for the soul’: The harms of image-based sexual abuse. Social & Legal Studies, online-ahead-of-print: 0964663920947791. Melander, L. A., & Marganski, A. J. (2020). Cyber and in-person intimate partner violence victimization: Examining maladaptive psychosocial and behavioral correlates. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 14 (1), Article 1. Myhill, A. (2015). Measuring coercive control: What can we learn from national population surveys? Violence against Women, 21(3), 355–375. Patel, U., & Roesch, R. (2020). The prevalence of technology-facilitated sexual violence: a meta-analysis and systematic review. Trauma, Violence, & Abuse, online ahead of print. Pew Research Center. (2014). Online harassment. Pew research Center. Retrieved from http://www.pewinternet.org/2014/10/22/online-harass ment/ Powell, A. (2010, July 1–2). Technology-facilitated sexual violence: New harms or ‘new’ ways for committing ‘old’ crimes’. Paper presented at the Australian and New Zealand Critical Criminology Conference, The University of Sydney. Powell, A., & Henry, N. (2014). Blurred lines? Responding to “sexting” and gender-based violence among young people. Children Australia, 39 (2), 119– 124. Powell, A., & Henry, N. (2016/2019). Technology-facilitated sexual violence victimization: Results from an online survey of Australian adults. Journal of interpersonal violence, 34 (17), 3637–3665 (first published online 2016). Powell, A., & Henry, N. (2017). Sexual violence in a digital age. Springer. Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse. In W. S. DeKeseredy, & M. 
Dragiewicz (Eds.), Routledge handbook of critical criminology. Routledge. Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian residents. Computers in Human Behavior, 92, 393–402. Powell, A., Scott, A. J., Flynn, A. & Henry, N. (2020). Image-based sexual abuse: An international study of victims and perpetrators. Summary Report. Melbourne: RMIT University.
9 ‘Intimate Intrusions’ …
177
Rackley, E., McGlynn, C., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2021). Seeking justice and redress for victim-survivors of imagebased sexual abuse. Feminist Legal Studies, online first, 1–30. Reed, L. A., Tolman, R. M., & Safyer, P. (2015). Too close for comfort: Attachment insecurity and electronic intrusion in college students’ dating relationships. Computers in Human Behavior, 50, 431–438. Reed, L. A., Tolman, R. M., & Ward, L. M. (2016). Snooping and sexting: Digital media as a context for dating aggression and abuse among college students. Violence against Women, 22(13), 1556–1576. Reed, L. A., Tolman, R. M., & Ward, L. M. (2017). Gender matters: Experiences and consequences of digital dating abuse victimization in adolescent dating relationships. Journal of Adolescence, 59, 79–89. Reyes, H. L. M., Foshee, V. A., Niolon, P. H., Reidy, D. E., & Hall, J. E. (2016). Gender role attitudes and male adolescent dating violence perpetration: Normative beliefs as moderators. Journal of Youth and Adolescence, 45 (2), 350–360. Showalter, K. (2016). Women’s employment and domestic violence: A review of the literature. Aggression and Violent Behavior, 31, 37–47. Smith-Darden, J. P., Kernsmith, P. D., Victor, B. G., & Lathrop, R. A. (2017). Electronic displays of aggression in teen dating relationships: Does the social ecology matter? Computers in Human Behavior, 67 , 33–40. Snaychuk, L. A., & O’Neill, M. L. (2020). Technology-facilitated sexual violence: Prevalence, risk, and resiliency in undergraduate students. Journal of Aggression, Maltreatment & Trauma, 29 (8), 984–999. Southworth, C. (2003, June 8, Sunday). Technology’s dark side. The Washington Post. Retrieved from http://www.ncdsv.org/images/TechnologyDarkSide.pdf Southworth, C., Dawson, S., Fraser, C., & Tucker, S. (2005). A high-tech twist on abuse: Technology, intimate partner stalking, and advocacy. 
Report commissioned by Violence against Women Online Resources, National Resource Center on Domestic Violence. Southworth, C., Finn, J., Dawson, S., Fraser, C., & Tucker, S. (2007). Intimate partner violence, technology, and stalking. Violence against Women, 13(8), 842–856. Stanko, E. (1985). Intimate intrusions: Women’s experience of male violence. Routledge. Stark, E. (1994). Re-presenting woman battering: From battered woman syndrome to coercive control. Albany Law Review, 58, 973. Stark, E. (2007). Coercive control: How men entrap women in personal life. Oxford University Press.
178
A. Powell
Stark, E. (2012). Looking beyond domestic violence: Policing coercive control. Journal of Police Crisis Negotiations, 12(2), 199–217. Stark, E., & Hester, M. (2019). Coercive control: Update and review. Violence against Women, 25 (1), 81–104. Temple, J. R., Choi, H. J., Brem, M., Wolford-Clevenger, C., Stuart, G. L., Peskin, M. F., & Elmquist, J. (2016). The temporal association between traditional and cyber dating abuse among adolescents. Journal of Youth and Adolescence, 45 (2), 340–349. Tompson, T. Benz, J., & Agiesta, J. (2013). The digital abuse study: Experiences of teens and young adults. AP-NORC Centre for Public Affairs Research. Retrieved from https://apnorc.org/projects/the-digitalabuse-study-experiences-of-teens-and-young-adults/ Van Ouytsel, J., Ponnet, K., Walrave, M., & Temple, J. R. (2016). Adolescent cyber dating abuse victimization and its associations with substance use, and sexual behaviors. Public Health, 135, 147–151. Vera-Gray, F. (2018). The right amount of panic: How women trade freedom for safety. Bristol. Walby, S., & Towers, J. (2018). Untangling the concept of coercive control: Theorizing domestic violent crime. Criminology & Criminal Justice, 18(1), 7–28. Watkins, L. E., Maldonado, R. C., & DiLillo, D. (2018). The cyber aggression in relationships scale: A new multidimensional measure of technology-based intimate partner aggression. Assessment, 25 (5), 608–626. Webster, K., Ward, A., Diemer, K., Flood, M., Honey, N., Morgan, J., Politov, V., Powell, A., & Stubbs, J. (2021). How are gender inequality and violence against women related? Findings from a population-level community attitudes survey. Australian Journal of Social Issues, 56, 374–392. Webster, K., Ward, A., Diemer, K., Flood, M., Powell, A., Forster, K., & Honey, N. (2019). Attitudinal support for violence against women: What a population-level survey of the Australian community can and cannot tell us. Australian Journal of Social Issues, 54 (1), 52–75. 
West, C., & Zimmerman, D. H. (1987). Doing gender. Gender & Society, 1(2)‚ 125–151. Woodlock, D. (2017). The abuse of technology in domestic violence and stalking. Violence against Women, 23(5), 584–602. Woodlock, D., Bentley, K., Schulze, D., Mahoney, N., Chung, D., & Pracilio, A., (2020). Second national survey of technology abuse and domestic violence in Australia. WESNET. Retrieved from https://wesnet.org.au/wp-
9 ‘Intimate Intrusions’ …
179
content/uploads/sites/3/2020/11/Wesnet-2020-2nd-National-Survey-Rep ort-72pp-A4-FINAL.pdf Woodlock, D., McKenzie, M., Western, D., & Harris, B. (2020). Technology as a weapon in domestic violence: Responding to digital coercive control. Australian Social Work, 73(3), 368–380. Wolford-Clevenger, C., Zapor, H., Brasfield, H., Febres, J., Elmquist, J., Brem, M., & Stuart, G. L. (2016). An examination of the partner cyber abuse questionnaire in a college student sample. Psychology of Violence, 6 (1), 156. World Health Organisation (WHO). (2013). Global and regional estimates of violence against women: Prevalence and health effects of intimate partner violence and non-partner sexual violence. WHO. Yardley, E. (2020). Technology-facilitated domestic abuse in political economy: a new theoretical framework. Violence Against Women, 1077801220947172. Zweig, J. M., Dank, M., Lachman, P., & Yahner, J. (2013). Technology teen dating violence and abuse, and bullying. Urban Institute, Justice Policy Center. Retrieved from https://www.ncjrs.gov/pdffiles1/nij/grants/243296. pdf
10 Love, Hate and Sovereign Bodies: The Exigencies of Aboriginal Online Dating

Bronwyn Carlson and Madi Day
Introduction

Aboriginal women and LGBTQIA+ people are avid users of all forms of digital technologies, including those used for hook-ups and dating (Carlson, 2019; Carlson & Frazer, 2018). Platforms such as Tinder, Grindr and OKCupid, among others, are widely used for seeking friendship, fun, casual sex and romance. While such technologies bring many positive affordances, they also provide platforms for harmful behaviours, including those associated with white supremacist racism and heteropatriarchy, which often mimic settler violence perpetrated in offline spaces (Verass, 2016, 2019; Carlson & Day, 2021; Kennedy, 2020). The level of violence against women online has reached epic proportions and has resulted in the United Nations (UN) issuing a ‘wake up call’ about the high levels of violence that women are also subject to via digital platforms (cited in Perasso, 2015). Similarly, the 2020 Galop¹ report notes that online anti-LGBTQIA+ violence was not limited to ‘low-level’ incidents, with threats of physical violence, sexual assault and death also a common occurrence for many LGBTQIA+ victims (Hubbard, 2020).

While technology-facilitated violence is becoming increasingly common, there is little information and data on the extent and nature of this kind of settler violence, particularly as it targets Aboriginal women and LGBTQIA+ people. Similarly, there is generally little attention focused on the perpetrators of this violence, or on the longevity of such behaviours, which are deeply entrenched in colonial power relations. Drawing on qualitative interviews with Aboriginal people who have experienced settler violence on dating apps, along with published news articles, this chapter considers some of the exigencies of racism as it manifests online for Aboriginal women and LGBTQIA+ users of dating sites. We first outline our methodological position, informed by Indigenous standpoint theory and settler-colonial theory. We then provide historical context to demonstrate that violence perpetrated in online spaces is an extension of colonial violence that has been widely documented by Indigenous Studies scholars. We argue that Aboriginal women and LGBTQIA+ people across this continent now known as Australia are required to navigate their intimate lives and sexuality under threat of settler violence and imposition. We argue that, under the guise of ‘sexual preference’, Aboriginal women and LGBTQIA+ users of dating sites are subject to a more insidious pattern of racial hatred. In seeking love online, Aboriginal users of dating apps navigate a complex terrain, in turn utilizing a range of protective mechanisms to maintain their safety, health and bodily sovereignty.
In the final section we present the ways in which Aboriginal people are responding to gendered and racialized attacks as they continue to resist the violence of colonialism. Drawing on the scholarship of Indigenous queer scholars, we argue that there is no division between the claim of sovereignty over our bodies and that of our lands. To conclude, we present a discussion of the expressions of ‘decolonial love’ and ‘body sovereignty’ as mechanisms of survivance and resistance to colonial violence.

¹ Galop is an LGBT+ anti-violence charity in the UK.
A Word on Methodology

This chapter draws on Indigenous standpoint theory to critically analyse online violence towards Aboriginal women and LGBTQIA+ people as part of the broader context of settler-colonialism in Australia. The authors, an Aboriginal woman and Professor of Indigenous Studies, and a non-binary transgender Aboriginal Ph.D. candidate, deploy Indigenist, decolonial feminist and settler-colonial theory for the purpose of disclosing both the incidence and effects of online violence. Indigenous standpoint theory (Foley, 2003, 2006; Moreton-Robinson, 2003, 2013; Nakata, 2007) centres Aboriginal and Torres Strait Islander people and perspectives in the study of the social situations we navigate as colonized peoples. To this end, we attempt here to present the stories of Aboriginal people alongside Indigenous intellectual critiques of colonial heteropatriarchy and settler violence. This is in line with current directions in Indigenous Studies, which simultaneously deploy Indigenous and Western knowledges and practices to conduct sustained interdisciplinary critiques of whiteness, colonialism and related structures (Moreton-Robinson, 2015, pp. xv–xvi) in service of Indigenist futures. The authors join a burgeoning field of Indigenous writers, academics, artists and cultural critics from across the globe who tactically oppose colonial heteropatriarchy and interrogate its function and its effects (Arvin et al., 2013; Byrd, 2020; Nopera, 2015; Rowe, 2017; Simpson, 2016; Tallbear, 2018; Wilson, 2015). To us, Indigenist futures are possible realities, free of colonial dominance and oppression, where Indigenous people thrive in conditions of our own making. Our approach to this chapter moves toward this vision, presenting a clear demonstration of the online struggles we face daily in the reclamation of our sovereign bodies.
Sex, Gender and Settler-Colonialism

The differentiation between online and offline spaces is somewhat false in relation to the enduring effects of colonial discourses. However, it is useful to first understand the history and ongoing nature of gendered abuse in the context of colonialism before further exploring the precise nature and incidence of online abuse, so we can conceive of ways that will make these sites safer. Since early contact, colonizers have violated Aboriginal women’s bodies. The rape of Aboriginal women can be read as an attempt to eradicate women’s sovereign bodies through acts of extreme violence that continue to this day (Conor, 2013; Sullivan, 2018). We refer to the sovereign bodies of Aboriginal women throughout this chapter as the right to corporeal safety, to our absolute power over ourselves, physically and mentally, and also to our authority as original inhabitants of the land. This right, as we know, has been repeatedly violated since colonization. Memoirs of colonial officials reflect intense interest in Aboriginal people’s sexual, social and emotional lives (Povinelli, 1994, pp. 122, 123), as well as a fixation on Aboriginal peoples generally, and more specifically, on Aboriginal women’s bodies (Conor, 2013; Sullivan, 2018). Indeed, settler colonialism in Australia takes hold not only through the appropriation of Indigenous people’s lands and the decimation of Indigenous cultures, but also through the dehumanization and objectification of Indigenous bodies, and intervention in Indigenous social and sexual lives (Povinelli, 1994, p. 122). Globally, colonial sovereignty is asserted through regulation and domination (Lugones, 2007). As a result, colonization concerns itself with ownership and control of Indigenous bodies (Wilson, 2015), and Indigenous identities, relationships and sexualities (Lugones, 2007).
In Australia the domination of Indigenous women’s bodies is discursively realized through a systemic indoctrination into all societal institutions about what our bodies ‘mean’ in terms of the value ascribed to them by colonizers. The intimate lives of Aboriginal people have been, and still are, closely surveilled and regulated by colonial government agencies, policies and laws. In the late nineteenth and early twentieth century this occurred through the Aborigines Protection Act and, later, the Aborigines Welfare Board, which meticulously controlled Aboriginal people’s movements, marriages and sexual and familial relationships until it was abolished in 1969. Simultaneously, intervention in Aboriginal identities, families and relationships occurred in the name of Christianity on Aboriginal missions. In 1997, Bringing Them Home, the Australian Human Rights Commission’s report of the National Inquiry into the Separation of Aboriginal and Torres Strait Islander Children from Their Families, outlined the emotional, physical and sexual abuse inflicted on Indigenous children who were forcibly removed from their families and incarcerated in state and religious institutions. To this day, Aboriginal and Torres Strait Islander people resist settler colonial policies that enact power on their families and relationships, including those that involve child removal and placement with settler foster families (Haebich, 2015). Rigid restrictions relating to ‘traditional’ family structures, or those that most closely uphold European ideals and relations, also continue to influence Native Title and land rights claims (Povinelli, 1994, p. 123). Settler preoccupation with Aboriginal relationships and bodies sustains a complex state apparatus and, at the same time, governs intimate interactions between Aboriginal people and settlers both in person and online.

Relations between white settler men and Aboriginal women in Australia are perhaps the most visible sites of colonial sexual politics. Early settler impressions of Aboriginal women permeate Australian culture and society as a sort of national mythology. Common knowledge about and imagery of Aboriginal women has largely been produced by settlers and lacks actual identities and agencies (Conor, 2016, pp. 4–43). The derogatory settler lexicon of ‘lubra’, ‘gin’ and ‘black velvet’ (Conor, 2016, p. 3), terms used frequently in the colonial period to describe Aboriginal women, serves to ornamentalize and objectify.
These are dehumanizing terms used to fetishize and reduce our shared culture and humanity as Indigenous peoples. Such descriptors played an important role in normalizing the violation of Aboriginal women and positioning us as simultaneously sexual and undesirable (Sullivan, 2018, p. 397). Corrinne Sullivan presents a compelling analysis of early colonial texts where settlers recount instances of Aboriginal women using sexuality to negotiate trade, warfare and survival on the frontier (2018, pp. 399–400). However, colonialists documented these exchanges in such a way that omitted Aboriginal women’s full personhoods and agencies. Sullivan explains that ‘by entrenching the sexualization of Indigenous women’s identities, colonial society enforced sexist ideologies and representations of Indigenous women as prostitutes [sic], which sometimes led to our degradation and humiliation’ (2018, p. 398). Settler representations construct Aboriginal women as ‘vacant’ vessels of settler desire and fantasies (Conor, 2013, pp. 3–11). Objectification and fetishization work in service of settler sexual access to Aboriginal women; entrenching sexual violence is ‘part and parcel of colonisation’ (Deer, 2009, p. 150). Unsurprisingly, the colonial archive is also rife with accounts of sexual violence and abuse of Aboriginal women by settlers.

Settler anxiety around Aboriginal women’s sexuality is intimately tied to reproduction and narratives of racial purity pertinent to early colonies and white Australia. Framed as dangerous and indecent, Aboriginal women represent a supposed threat to white settler claims to a ‘pure’ and undifferentiated body politic due to accusations and assumptions of sexual immorality and miscegenation (Horton, 2010, p. 13), despite the latter being perpetrated by white abusers. This is the colonizer’s conundrum; on the one hand, they aspire to a Christian-bound ethos of sexual ‘morality’ while on the other, they are beset by their own lack of morality. Thus, acts of sexual violence must be displaced onto Aboriginal women as their fault. Gender and sexuality are critical to settler societies where land acquisition takes place through marriage and family. Race as well as gender organizes settler social systems, which are set apart as civilized and rational only in relation to Aboriginal peoples (Meiu, 2015).
Settlers encode the sexual and familial lives of Aboriginal peoples as already dysfunctional, recycling tropes of Aboriginal men as violent partners and fathers, and Aboriginal women as complacent and incompetent (Bond, 2016; Kean, 2019). In this way, settler men and women alike use and subjugate Indigenous women so they may understand themselves as sovereign, civilized and superior (Moreton-Robinson, 2015). Complex narratives rooted in racism and misogyny serve to objectify, suppress and, simultaneously, make Indigenous women’s bodies accessible and useful to settlers and their stolen lands. Settler women are implicated in the racialized and gendered abuse of Indigenous women through their involvement in these dehumanizing colonial discourses (Carlson & McGlade in Gregoire, 2021). Settler women’s compliance with, and participation in, sexual violence perpetrated by white men helps sustain the view of white women as moral and befitting in the creation of a new body politic.

While settler violence is continually asserted both discursively and physically on Aboriginal people, other systemic forms of interpersonal violence also prevail. The imposition of heteropatriarchy, whereby a prevailing gendered and sexualized order is dominant, has resulted in great loss, harm and suffering. This ‘norm’ replaces a system that had its own order based in ancient social structures, not in land ownership. The imposition of European Christian values and their organization of gender and sexuality, based on ideals of land acquisition and ownership, the nuclear family unit, female subordination and so on, has significantly impacted Aboriginal people’s identities, cultures and social and sexual relationships with one another (Gorrie, 2017; Simpson, 2016). These relationships were undoubtedly diverse, as are all human relationships; why would it be otherwise? Sandy O’Sullivan argues that ‘[t]o believe the theory that queer people did not exist in Aboriginal culture prior to invasion, suggests that Aboriginal queers are either not Aboriginal, or our presentation is only a presentation of the colonial mindset’ (2019, p. 108). In this sense, contending whether Aboriginal people were or were not queer prior to colonization is redundant. What is clear is that Aboriginal cultures and societies were and are organized around complex relational kinship systems (Carlson, 2020; Graham, 2008; Kwaymullina & Kwaymullina, 2010) that are not strictly confined to heterosexual monogamous models of sexuality and family.
Indigenous queer and gender diverse scholars are steadily emerging, challenging heteropatriarchy’s function as yet another colonial tool of categorization, assimilation, erasure and elimination (Clark, 2015; Coleman, 2020; Day, 2020; Farrell, 2020; Whittaker, 2015). It is also evident that colonial models of gender, sexuality and relationships continue to harm and marginalize Aboriginal people who do not neatly subscribe to these categories. Aboriginal LGBTQIA+ people report abuse and rejection from within their own communities as well as from broader Australian society (AHCSA, 2019; Bonson, 2016) and, as a result, are at extreme risk of poverty, poor health and suicidality (ATSIPEP, 2015; Uink et al., 2020). Colonial heteropatriarchy culminates in myriad forms of gendered and racialized violence towards Aboriginal women and LGBTQIA+ people. While heteropatriarchy operates to perpetuate the objectification and fetishization of Aboriginal women, and undermine Aboriginal women’s value to their own communities, colonial systems and discourses also work to erase gender and sexual diversity among Indigenous peoples (Driskill, 2004). Rates of physical and sexual violence are so significant that transnational Indigenous movements, facilitated in part by social media (Moeke-Pickering et al., 2018), call for colonial governments to answer for, or at least inquire into, the high numbers of Missing and Murdered Indigenous Women (MMIW) and LGBTQIA+ people in Australia, the United States and Canada (Murphy-Allas et al., 2018; Oates, 2018). Across these Anglo settler nations, colonial heteropatriarchy fosters hostile contexts for Indigenous women and LGBTQIA+ people where our bodies, identities and relationships are under constant threat from settlers and their systems.

Settler dominance asserts itself through continuing impositions on Aboriginal people’s bodies and lives. Discursive and systemic mechanisms work in tandem to dehumanize and harm Aboriginal women and LGBTQIA+ people in specific ways—both offline and online. Settlers weaponize our sexualities and invade our intimacies to affirm their own desires, fantasies and impressions of themselves. Fantasies, representations and impressions are increasingly pertinent with the emergence of online spaces for hook-ups and dating—contemporary hotbeds of romantic and sexual intimacy and desire. On most dating apps, the quest for true love and sexual satisfaction begins with a short, superficial glimpse of a person—a photo and a few words on preference and interests.
As we discuss in the following section, interest and impressions are rarely neutral, and preference is laced with colonial and white supremacist ideals. Aboriginal women and LGBTQIA+ people navigate dating online in the context of colonial heteropatriarchy. Power relations in the form of settler dominance and desire encroach heavily on this realm of intimacy. Dating and social media platforms comprise another domain where colonial violence and gendered and racialized abuse occur and where Indigenous people have to continually navigate and negotiate a ‘safe’ place from where to speak and express desire.
10 Love, Hate and Sovereign Bodies …
Sexual Racism and ‘Preference’

Digital technologies, including those used for hook-ups and dating, reflect the societies from which they emerge (Bedi, 2015; Carlson, 2019). Hence, sexual preference is never free from existing power relations and the resulting implications of race, sex, gender and sexuality. As we have established, Aboriginal women and LGBTQIA+ people in Australia continue to navigate their intimate lives and sexuality under threat of settler violence and imposition. Settlers reproduce this same dynamic in online spaces including online dating apps. Sexual racism functions as a tool of denigration towards Aboriginal women and LGBTQIA+ people—representing us often as least loveable and least attractive in a hierarchy of desirability that puts white settlers at the peak. Below, we engage with qualitative data from Aboriginal women and LGBTQIA+ people who use dating apps. Users shared and discussed recurrent instances of anti-Indigenous rhetoric and sexual racism on dating apps via social media, news media and in interviews. Consistently, Indigenous users of apps report being rejected and abused based on their Indigeneity. When we examine the pattern of these behaviours, frequently exhibited by white men, the guise of racial preference falls away to reveal something more insidious. We trouble the neutrality of racial preference on dating apps, and insist instead that it is racist colonial subterfuge, thinly masking racial hatred and violent settler tendencies towards Aboriginal women and LGBTQIA+ people. Theorising sexual racism as a kind of ‘racial discrimination that takes place in the sexual and intimate sphere’ (Bedi, 2015, p. 998) only goes so far in the context of colonial heteropatriarchy. As we have seen, racial and gendered abuse towards Indigenous people in Australia has significant discursive and systemic utility for settlers, their nation states, and their claims to land.
Narratives and representations of Aboriginal women as both sexually useful and indecent serve to dehumanize Aboriginal women and rationalize violent sexual behaviour from settler men. Frontier violence endures both interpersonally and at the digital interface between Aboriginal people and settlers. One interview participant, an Aboriginal woman in her 30s, was threatened with sexual assault via a dating app when her match identified her as Aboriginal and told her that
‘you black cunts are only good for fucking’ (Carlson, 2019, p. 13). Aboriginal LGBTQIA+ people also reported violent sexual threats via dating apps, with some aggressors even historicizing their own behaviour. One individual received messages directly referring to rape, genocide and mass killings of Aboriginal people by settlers. The aggressor also told them that they would like to treat them how Captain Cook treated Aboriginal people (Wilson, 2020). In these messages, the aggressor connects to a broader context of colonial sexual violence. Sexual racism, in this case, is not simply racial discrimination. It has a critical discursive function in sexually objectifying and threatening Aboriginal people in service of settler dominance. Sexual preference is troubled further in instances where an Aboriginal person may match with another user on an app only to be rejected, met with hostility and/or unmatched when the other user is made aware of their Indigeneity. Some Aboriginal people report being immediately confronted with racial hatred on dating apps. One woman was ‘superliked’ on Tinder only to receive a message that said ‘Is it true abos [sic] have two sized nostrils – one for unleaded and one for leaded?’ (Clarke, 2011). Such comments allude to stereotypes and misrepresentations of Aboriginal people prevalent in Australian media—widely propagated myths that we are ‘dole-bludging’ and ‘petrol-sniffing’ people lacking morals and employment (Banerjee & Osuri, 2000; Kean, 2019). Many participants reported receiving racist messages like these. An Aboriginal queer person objected to a racial slur while in conversation with another user on Grindr, consequently ‘outing’ themselves as Aboriginal.
A torrent of abusive messages ensued including: You’ve scabbed free handouts all your life probably don’t even know how much your elders will never admit there was people here before you monkeys because you might lose your handouts every day there’s subsidized education medical and employment programs because no one would actually hire you without some kind of subsidies bet you’ll never get a proper job without some coon victims program. No doubt you cry victim every day call people racist bigots all the rest then cry to the government because everyone hates you.
This message refers to a broader context of racial hatred towards Aboriginal people—‘everyone’ is presumably other Australians or settlers. Many participants reported bracing themselves for, or proactively mitigating, sexual racism on dating apps. Some did this by identifying themselves as Aboriginal on their profiles using emojis, photos and statements about their nation or cultural identity. In some cases, this backfired and attracted comments akin to those above. Other participants chose not to identify as Aboriginal on their profiles. This also rebounded in some circumstances where another user ‘liked’ or ‘swiped right’ on their profile and then became abusive when they learned about their Indigeneity. One Aboriginal woman was talking to a man on Tinder who initially complimented her ‘exotic’ appearance. He then asked her if she was Aboriginal and, when she confirmed that she was, he wished her luck and unmatched her immediately (Wilson, 2020). One Aboriginal queer person was enjoying pleasantries with a match on Grindr before the other user noticed that they identified as Aboriginal on their profile:

Other user: Sorry didn’t see the Aboriginal part.
Participant: Bruh what does that mean?
Other user: No longer interested, soz.
Participant: Ah so you’re racist! You’re pathetic and you deserve to rot bye xx
Other user: Nah just not interested in knowing/learning how to treat the most precious people in the world. Also y’all are not the only group in the world that has suffered. Stop being so sensitive.
These messages involve much more than expressions of sexual preference. Settlers are not simply excluding Aboriginal people from selection on dating apps, they are actively rejecting and abusing people on the grounds of their Indigeneity even after they are attracted to their profile or appearance. Clearly, Indigenous women and LGBTQIA+ people are experiencing much more than discrimination in romantic and intimate spaces online. The experiences highlighted in this research reveal active, violent tendencies among settlers on dating apps that specifically target Indigenous
peoples. Claims of sexual preference are fundamentally troubled by occasions where users are initially attracted to a person only to become uninterested, or even enraged, when they find out a person is Indigenous. This is less an expression of preference than an expression of hatred for an entire group of people. Indigenous people in Australia are diverse. We look, act and present ourselves with endless variation. To reject, abuse or threaten a person based on their Indigeneity is an explicit act of violence. Various forms of fetishization occur in comments about bodies, body parts and the suggestion of difference that either titillates the speaker or serves as a point of mockery or degradation. In either case, such comments function to reinforce a dominant power relation based on a heteropatriarchal ‘norm’. Threats of sexual violence often accompany unashamed expressions of racial hatred for Indigenous people online. The function of sexual racism under these circumstances is not simply to marginalize and suppress Aboriginal women and LGBTQIA+ people but to position us as less human in relation to settlers and, as stated, to maintain or augment a pre-existing norm initiated by and for those who seek to oppress. This kind of racialized and sexual violence is endemic to the context of colonial heteropatriarchy, where settlers continue to assert themselves violently both online and offline.
Body Sovereignty and Decolonial Love

Aboriginal women and LGBTQIA+ people are looking for love online, and, in many instances, finding hate. The experiences described above occur within a broader context where settlers impose upon Indigenous identities, relationships and sexualities. These colonial intrusions continue to impact Indigenous peoples, not only through systemic violence and interventionist policies, but also in our bodies, homes and relationships. One of the most devastating impacts of colonial heteropatriarchy on Aboriginal people is the internalization of values from settler culture, which harm our relationships with each other and ourselves (Simpson, 2020). This weighs heavily on Aboriginal people, on their sense of self, their families and their lived realities, especially those of Aboriginal women and LGBTQIA+ people. However, we have never
fully assented to the force of settlers and colonial powers. Resistance to colonized identities, relationships, and sexualities on this continent is as old as resistance to settler occupation of Indigenous territories. Globally, resistance to colonial impositions that relate to gender and sexuality may be the oldest continuous resistance to colonialism overall (Sinclair, in Nicholson, 2016). Gendered and racialized attacks on Indigenous women and LGBTQIA+ people are not without formidable response. Regardless of popular colonial narratives about Indigenous people, we are avid users and adapters of digital technologies for our own means (Carlson, 2020). Digital platforms connect Indigenous people internationally, enabling global collaborations in thinking, writing, and other work that deconstructs colonialism, settler colonialism and their assemblages (Wilson et al., 2017). Rapidly growing movements concern themselves with decolonization and the sovereignty of Indigenous identities, bodies and lands. For Cree scholar Alex Wilson (2015, p. 4), there is no division between these movements: ‘Indigenous sovereignty over our lands is inseparable from sovereignty over our bodies, sexuality and gender expression’. Qwo-Li Driskill (2004, p. 53) echoes this sentiment, arguing that to reclaim one’s sexuality from colonialism is to reclaim one’s ‘first homelands: the body’. Bodies and relationships are inherently entwined with land in many Indigenous, relational world orders (Kwaymullina & Kwaymullina, 2010). While settlers continue to harm and dominate Indigenous bodies, relationships and lands, Indigenous people work to heal, revitalize and reclaim them. Indigenous people articulate this work in many ways. Among these are ‘decolonial love’ (Moreno, 2019) and asserting ‘body sovereignty’ (Wilson, 2015). Both are expressions of survivance and resistance to colonial heteropatriarchy.
Engaging with broader discourses on ‘decolonial love’ and decolonizing preferences on social media, many Aboriginal women and LGBTQIA+ people have begun questioning their own beauty standards and attraction to white settlers. One interview participant, a queer Aboriginal person, explained that:

It was my own reflections about, like, decolonizing sexual preferences and desire…the majority of the men that I’ve slept with in my lengthy sexual
career have been majority white. There’s been other mob [Indigenous] but um, you know, mostly white men. And like, why is that?
Another interviewee, a gay Aboriginal man, speculated: ‘I think…I developed what is or isn’t attractive from what I see around me. I never saw Black people or blackfullas in desirable ways in any media’. Many participants reported grappling with a type of unconscious racial preference in their dating lives, one based on internalized colonial narratives of desirability, yet another way that settlers impose upon and gain access to Indigenous bodies. One queer Aboriginal person described coming to this realization and taking action to address it:

I took a break a few years ago from dating white women when I came out of a relationship with a white lesbian who gave me hell. It was one of the best choices I made because it really made me think through who I was attracted to and why. When I looked back I had often dated skinny cisgender white femmes. I started branching out meeting a lot of WOC [women of colour] and the funny thing was dating was so much better!
Aboriginal women and LGBTQIA+ people are on our own trajectories of critiquing preference, and decolonizing love and desire. Many Indigenous people describe this process as returning to ourselves as an extension of our lands or taking back sovereignty over our own bodies, relationships and sexuality (Moreno, 2019; Wilson, 2015). Given the constancy of settler intrusion and violence in Indigenous lives, it seems unsurprising that some Aboriginal women and LGBTQIA+ people would withdraw white settlers’ access to their bodies as an act of sovereignty. One Aboriginal queer person reported joining a trend among people of colour on OkCupid and adding ‘No YT PPL’ (no white people) to their profile. They also described cultivating their dating profile to specifically attract women of colour and deter white women or white polyamorous couples. Another interviewee, a heterosexual Aboriginal woman using Tinder, explicitly stated on her profile that she did not date white men. She articulated this as an act of body sovereignty:
I’m like, well, because white people have colonised my land and I’m not gonna let them colonize my body. It’s that last part of sovereignty of my body that I want to maintain. You know, it might sound stupid or whatever, but it’s that’s just how I see it. You know, our bodies have been used in ways by white people that even today that I don’t want to let a white man touch my body. And if that’s racist, well, call me racist. I really don’t care.
Many white men responded to her profile with abusive and negative messages, she explained, often targeting her body or her appearance. Here, we can see how white settlers become violent when they are denied access to Indigenous people’s bodies, and utilize various forms of denigration to attempt to regain power. This participant articulates this thoughtfully: ‘it’s like they are on top of the food chain and you should aspire to date a white man. And if you don’t, that just topples them’. Aboriginal women and LGBTQIA+ people respond to settler impositions and violence with assertions of decoloniality and sovereignty. Acts of decolonial love and body sovereignty may look different for everyone. They include, but are not limited to, engaging critically with sexual and racial dating preferences, and refusing romantic and sexual contact with white settlers. Ultimately, these are movements towards healing and revitalizing our relationships with each other and ourselves as extensions of our lands. They are moves to resist internalized and intruding colonial heteropatriarchy, and towards Indigenist models of relationality. Participants in this research have variously noted the effects of online violence on their health and safety. In some instances, racial hatred has been so intense that it has deterred users from further engagement with apps. One user stated that their anxiety levels exceeded their desire to ‘play’ online. Many reported continually censoring their profile and information due to fear of racism. Research suggests that racism, while appearing more on some sites than others, is not confined to any particular app but rather appears across a spectrum of sites. The effects of continual, and in some instances unexpected, abuse are extremely harmful. While Indigenous people are used to many forms of racism and have always found ways of subverting and deflecting it, its deleterious effects on women and LGBTQIA+ people are significant. We
are aware that more research is needed into this area, which constitutes yet another mode of settler abuse towards Indigenous people. We continue to monitor online dating sites to determine our own capacity for vigilance and our right to find pleasure and maintain the sovereignty of our bodies.
Conclusion

Colonial heteropatriarchy and settler violence weigh heavily on the digital and intimate lives of Indigenous women and LGBTQIA+ people. We use technology and adapt to its dictates as we have always done, knowing that online sites are not dissimilar to offline sites and that our sexual and social relationships are a constant process of negotiation. Online dating sites have their uses. We seize them for our own ends and, mostly, we know how to deflect and navigate for our own pleasure and for the reclamation of our sovereign bodies, which continue to be violated. Online spaces, and dating apps in particular, are new terrain; however, the rules and regulations are the same. But we have a long history of resistance, and our familiarity with the systemic racism inscribed in colonial discourses can in many instances be seamlessly transferred from offline to online sites; as noted, the differentiation is false when it comes to the insidious ways in which colonial discourses continue to position Indigenous peoples. We are adept users, though, conversant with all forms of imposed technology, and will continue to assert the sovereignty of our bodies, our sexual preferences and desires, and our right to reject the imposed values of heteropatriarchy online, as we have always done offline.

Acknowledgements The authors wish to acknowledge the fortitude, survivance and ingenuity of Indigenous women and LGBTQIA+ people, including the participants in this study. This paper is possible because of your generosity, insight and pithy observations in the face of settler violence. Our lives are enriched by being in community with you. Some data included in this paper are from research supported by the Australian Research Council Discovery Indigenous scheme, project IDs:
IN130100036; IN160100049; INED200100010. Each of these projects abided by and embodied Indigenous and university ethical guidelines. They were approved by university ethics committees: IN130100036, The University of Wollongong, HE13/32; IN160100049, Macquarie University, 5201700667; INED200100010, Macquarie University, 52020664615936.
References

Aboriginal and Torres Strait Islander Suicide Prevention Evaluation Project. (2015). Sexuality and gender diverse populations (Lesbian, Gay, Bisexual, Transsexual, Queer and Intersex—LGBTQI) Roundtable Report. Retrieved from: https://www.atsispep.sis.uwa.edu.au/__data/assets/pdf_file/0012/2857539/LGBTQI-Roundtable-Report-.pdf
Aboriginal Health Council of South Australia Ltd. (2019). The Aboriginal gender study: Final report. AHCSA, Adelaide. Retrieved from: https://aboriginalgenderstudy.ahcsa.org.au/app/uploads/2019/06/AHC4831_Gender_Study_online.pdf
Allas, T., Bui, M., Carlson, B., Kasat, P., McGlade, H., Perera, S., Pugliese, J., Qwaider, A., & Singh, C. (2018). Indigenous femicide and the killing state. Deathscapes: Mapping race and violence in settler states. https://www.deathscapes.org/case-studies/indigenous-femicide-and-the-killing-state-in-progress/
Arvin, M., Tuck, E., & Morrill, A. (2013). Decolonizing feminism: Challenging connections between settler colonialism and heteropatriarchy. Feminist Formations, 25, 8–34.
Banerjee, S. B., & Osuri, G. (2000). Silences of the media: Whiting out Aboriginality in making news and making history. Media, Culture and Society, 22(3), 263–284.
Bedi, S. (2015). Sexual racism: Intimacy as a matter of justice. The Journal of Politics, 77(4), 998–1011.
Bond, C. (2016, August 5). The white man’s burden: Bill Leak and telling ‘the truth’ about Aboriginal lives. The Conversation. https://theconversation.com/the-white-mans-burden-bill-leak-and-telling-the-truth-about-aboriginal-lives-63524
Bonson, D. (2016, March 15). Indigenous suicide, sexuality and gender diverse populations. IndigenousX. https://indigenousx.com.au/indigenous-suicide-sexuality-and-gender-diverse-populations/
Byrd, J. A. (2020). What’s normative got to do with it? Toward Indigenous Queer relationality. Social Text, 38(4), 105–123.
Carlson, B. (2019). Love and hate at the cultural interface: Indigenous Australians and dating apps. Journal of Sociology, 56(2), 133–150.
Carlson, B. (2020). Indigenous Killjoys negotiating the labyrinth of dis/mistrust. In T. Moeke-Pickering, A. Pegoraro, & S. Cote-Meek (Eds.), Critical reflections and politics on advancing women in the academy (pp. 105–123). IGI Global.
Carlson, B., & Day, M. (2021, December). Technology-facilitated abuse: The need for Indigenous-led research and response. In B. Harris & D. Woodlock (Eds.), Technology and domestic violence: Victimisation, perpetration and responses. Routledge.
Carlson, B., & Frazer, R. (2018). Social media mob: Being Indigenous online. Sydney: Macquarie University. https://research-management.mq.edu.au/ws/portalfiles/portal/85013179/MQU_SocialMediaMob_report_Carlson_Frazer.pdf
Clark, M. (2015). Are we queer? Reflections on “Peopling the empty mirror” twenty years on. In D. Hodge (Ed.), Colouring the rainbow: Black Queer and Trans perspectives: Life stories and essays by first nations people of Australia (pp. 238–252). Wakefield Press.
Clarke, A. Y. (2011). Inequalities of love: College-educated Black women and the barriers to romance and family. Duke University Press.
Coleman, C. (2020). Aboriginal feminism and gender. NGV. Retrieved from: https://www.ngv.vic.gov.au/essay/aboriginal-feminism-and-gender/
Conor, L. (2013). ‘A species of rough gallantry’: Bride capture and settler-colonial print on Australian Aboriginal gender relations. Settler Colonial Studies, 3(1), 6–26.
Conor, L. (2016). Skin deep: Settler impressions of Aboriginal women. University of Western Australia Publishing.
Day, M. (2020). Indigenist origins: Institutionalizing Indigenous Queer and Trans studies in Australia. TSQ: Transgender Studies Quarterly, 7(3), 367–373.
Deer, S. (2009). Decolonizing rape law: A native feminist synthesis of safety and sovereignty. Wicazo Sa Review, 24(2), 149–167.
Driskill, Q. L. (2004). Stolen from our bodies: First Nations Two-Spirits/Queers and the journey to a sovereign erotic. Studies in American Indian Literatures, 16(2), 50–64.
Farrell, A. (2020, June 13). Queer and Aboriginal in a regional setting: Identity and place. Archer. http://archermagazine.com.au/2020/06/queer-and-aboriginal-identity-and-place/
Foley, D. (2003). Indigenous epistemology and Indigenous standpoint theory. Social Alternatives, 22(1), 44–52.
Foley, D. (2006). An Indigenous standpoint theory. International Journal of Humanities, 3(8), 25–36.
Graham, M. (2008). Thoughts about the philosophical underpinnings of Aboriginal worldviews. Australian Humanities Review Issue 4, 22(8). http://australianhumanitiesreview.org/2008/11/01/some-thoughts-about-the-philosophical-underpinnings-of-aboriginal-worldviews/
Gorrie, N. (2017, August 12). Being Black and Queer in Australia right now. NITV. https://www.sbs.com.au/nitv/article/2017/08/12/being-black-and-queer-australia-right-now
Gregoire, P. (2021, March 24). State-sanctioned violence against first nations women: An interview with professors Carlson and McGlade. Sydney Criminal Lawyers. https://www.sydneycriminallawyers.com.au/blog/state-sanctioned-violence-against-first-nations-women-an-interview-with-professors-carlson-and-mcglade/
Haebich, A. (2015). Neoliberalism, settler colonialism and the history of Indigenous child removal in Australia. Australian Indigenous Law Review, 19(1), 20–31.
Horton, J. (2010). The case of Elsie Barrett: Aboriginal women, sexuality and the Victorian Board for the Protection of Aborigines. Journal of Australian Studies, 34(1), 1–18.
Hubbard, L. (2020). Online hate crime report: Challenging online homophobia, biphobia and transphobia. Galop, the LGBT+ Anti-Violence Charity. http://www.galop.org.uk/wp-content/uploads/Online-Crime-2020_0.pdf
Kean, J. (2019). Coming to terms: Race, class and intimacy in Australian public culture. Sexualities, 22(7–8), 1182–1196.
Kennedy, T. (2020). Indigenous people’s experiences of harmful content on social media. Macquarie University. https://research-management.mq.edu.au/ws/portalfiles/portal/135775224/MQU_HarmfulContentonSocialMedia_report_201202.pdf
Kwaymullina, A., & Kwaymullina, B. (2010). Learning to read the signs: Law in an Indigenous reality. Journal of Australian Studies, 34(2), 195–208.
Lugones, M. (2007). Heterosexualism and the colonial/modern gender system. Hypatia, 22(1), 186–219.
Meiu, G. P. (2015). Colonialism and sexuality. The International Encyclopedia of Human Sexuality, 1, 197–290.
Moeke-Pickering, T., Cote-Meek, S., & Pegoraro, A. (2018). Understanding the ways missing and murdered Indigenous women are framed and handled by social media users. Media International Australia Incorporating Culture & Policy, 169(1), 54–64.
Moreno, S. (2019). Love as resistance: Exploring conceptualizations of decolonial love in settler states. Girlhood Studies, 12(3), 116–133.
Moreton-Robinson, A. (2003). Researching whiteness: Some reflections from an Indigenous woman’s standpoint. Hecate, 29(2), 72–85.
Moreton-Robinson, A. (2013). Towards an Australian Indigenous women’s standpoint theory: A methodological tool. Australian Feminist Studies, 28(78), 331–347.
Moreton-Robinson, A. (2015). The white possessive: Property, power, and Indigenous sovereignty. University of Minnesota Press.
Murphy-Oates, L. (2018). Vanished: Lost voices of our sisters. Dateline, SBS. https://www.sbs.com.au/vanished/
Nakata, M. (2007). Disciplining the savages: Savaging the disciplines. Canberra: Aboriginal Studies Press.
Nicholson, H. (Ed.). (2016). Love beyond body, space, and time: An Indigenous LGBT Sci-Fi anthology. Winnipeg, Canada: Bedside Press.
Nopera, T. (2015). Tranny tricks: The blending and contouring of Raranga research. Journal of Global Indigeneity, 1(1), 6.
O’Sullivan, S. (2019). A lived experience of Aboriginal knowledges and perspectives: How cultural wisdom saved my life. In Practice wisdom (pp. 107–112). Brill Sense.
Perasso, V. (2015). 100 women 2015: Social media ‘fuels’ gendered violence. BBC News. https://www.bbc.com/news/world-34911605
Povinelli, E. A. (1994). Sexual savages/sexual sovereignty: Australian colonial texts and the postcolonial politics of nationalism. Diacritics, 24(2/3), 122–150.
Rowe, A. C. (2017). A Queer Indigenous manifesto. QED: A Journal in GLBTQ Worldmaking, 4(2), 93–99.
Simpson, A. (2016). The state is a man: Theresa Spence, Loretta Saunders and the gender of settler sovereignty. Theory & Event, 19(4). https://www.muse.jhu.edu/article/633280
Simpson, L. (2020, March 24). Not murdered, not missing: Rebelling against colonial gender violence. Verso Blogs. https://www.versobooks.com/blogs/4611-not-murdered-not-missing-rebelling-against-colonial-gender-violence
Sullivan, C. T. (2018). Indigenous Australian women’s colonial sexual intimacies: Positioning Indigenous women’s agency. Culture, Health & Sexuality, 20(4), 397–410.
Tallbear, K. (2018). Making love and relations beyond settler sex and family. In A. Clarke & D. Haraway (Eds.), Making kin not population (pp. 145–209). Prickly Paradigm Press.
Uink, B., Liddelow-Hunt, S., Daglas, K., & Ducasse, D. (2020). The time for inclusive care for Aboriginal and Torres Strait Islander LGBTQ+ young people is now. The Medical Journal of Australia, 213(5), 201–204.
Verass, S. (2016, April 14). Racism on Grindr: Indigenous gay man screenshots racial abuse online. NITV. https://www.sbs.com.au/nitv/sexuality/article/2016/04/14/man-shares-experiences-of-racism-on-grinder
Whittaker, A. (2015). The border made of mirrors: Indigenous queerness, deep colonisation and (De)fining Indigenousness in settler law. In D. Hodge (Ed.), Colouring the rainbow: Black Queer and Trans perspectives: Life stories and essays by first nations people of Australia (pp. 21–34). Wakefield Press.
Wilson, A. (2015). Our coming in stories: Cree identity, body sovereignty and gender self-determination. Journal of Global Indigeneity, 1(1), 4.
Wilson, C. (2020, July 3). Indigenous Australians face sexual racism on dating apps: The second he found out about my heritage he was gone. ABC News. https://www.abc.net.au/news/science/2020-07-03/indigenous-dating-app-racism-tinder-grindr/12406402
Wilson, A., Carlson, B., & Sciascia, A. (2017). Reterritorializing social media: Indigenous people rise up. Australasian Journal of Information Systems, 21. https://journal.acs.org.au/index.php/ajis/article/view/1591/781
11 Cyberstalking: Prevalence, Characteristics, and Impact
Jenna L. Harewell, Afroditi Pina, and Jennifer E. Storey
J. L. Harewell · A. Pina (B) · J. E. Storey
Centre of Research and Education in Forensic Psychology (CORE-FP), University of Kent, Canterbury, UK
e-mail: [email protected]

Introduction

Harassing and abusive behaviours are extending into online spaces with the advancement of technology. Digital communications, as well as tracking technologies, are increasingly ubiquitous in the facilitation of offending behaviours including stalking, commonly referred to in the research literature as ‘cyberstalking’. Conceptually similar to stalking and equally impactful on the lives of victims, cyberstalking gives perpetrators more accessible means and opportunities to harass, as well as a sense of omnipresence in victims’ lives, even without a physical presence (Yardley, 2021). Despite decades of research into cyberstalking generally, there are still aspects of the gendered nature of these harmful behaviours that continue to emerge. Indeed, as technologies themselves develop and
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_11
change over time, perpetrators of cyberstalking can access a variety of means to pursue, harass, and abuse their targets. This chapter examines the current state of knowledge regarding cyberstalking, its gendered nature, prevalence and correlates. Firstly, we focus on the difficulties in defining cyberstalking and assessing the prevalence of the behaviour. The lack of a universal definition introduces issues in making meaningful comparisons between research studies and identifying prevalence. Secondly, we assess technology-based, behaviour-based, and motive-based typologies of cyberstalking. Thirdly, the existing research on cyberstalking perpetration is reviewed, outlining not only demographics but also the psychological characteristics, childhood experiences, and online behaviour of perpetrators. Finally, the chapter focuses on the psychological, physical, and social impact that cyberstalking has on victims, as well as the help-seeking and management strategies employed. Ultimately, the chapter argues that despite the disagreement around its definition, and hence the disparities in prevalence rates, cyberstalking is a prevalent form of aggression perpetrated or facilitated by technology, characterised by repetitive behaviours that instil fear, distress, and alarm in the target. We conclude that cyberstalking is a gendered crime, due to the asymmetrical and manifold impact it has on its female victims, and argue that future research needs to examine cyberstalking, cyberdating abuse, and intimate partner violence in parallel, always taking into consideration the context of relationships and the impact on the victim.
Defining Cyberstalking

When defining cyberstalking, a prominent debate in the literature is its association with or distinction from stalking (Kaur et al., 2021; Reyns & Fissel, 2020; Sheridan & Grant, 2007). Specifically, there is contention as to whether cyberstalking is independent of stalking, whether it is related to stalking, or whether cyberstalking is simply a form of stalking (Nobles et al., 2014). Sheridan and Grant (2007), for instance, have argued that cyberstalking is an adjunct to stalking rather than a completely distinguishable behaviour and therefore needs no rigid definition. Similarly, Reyns and Fisher (2018) have suggested that
11 Cyberstalking: Prevalence, Characteristics, and Impact
cyberstalking is an additional tool for stalking rather than a wholly independent phenomenon. However, as cyberstalking can occur independently of terrestrial stalking, it is not always an extension of it. Bocij and McFarlane (2002) proposed cyberstalking as a distinct form of behaviour, albeit one related to stalking. They reasoned that the differences in geographical proximity and the potential involvement of third parties or groups in cyberstalking make the behaviour distinct. On the basis of similarities in the key definitional aspects (i.e. repeated pursuit) and the consequences experienced by victims (i.e. psychological harm), we suggest that cyberstalking be recognised as a form of stalking that is assisted by technology and thus differs only in that it is perpetrated by differing means (Kaur et al., 2021). The National Centre for Cyberstalking Research in the United Kingdom defines cyberstalking as 'a course of action that involves more than one incident perpetrated through or utilising electronic means, that causes distress, fear or alarm' (Maple et al., 2011, p. 4). This definition encompasses the recurring themes of repetition and fear that are embodied in different stalking definitions in the literature and legislation (Dreßing et al., 2014; Kaur et al., 2021; Maple et al., 2011; Nobles et al., 2014; Reyns & Fisher, 2018; Reyns et al., 2018). The key distinction, then, between definitions of cyberstalking and stalking is the required utilisation of electronic means for pursuit (Maple et al., 2011; Reyns & Fisher, 2018). Due to the increasing embeddedness of technology in day-to-day life, it makes less sense to draw a strict distinction between stalking and cyberstalking than to acknowledge that the two will likely overlap. Whilst cyberstalking will meet the legal criteria for stalking, this does not necessarily mean that offline stalking behaviours must also occur.
Prevalence of Victimisation

Though there may be some agreement as to the broad definition of cyberstalking (e.g. requiring a repeated 'course of action', via electronic means, that causes distress, fear or alarm), specific definitions in the research vary. This in turn creates difficulty in identifying the prevalence of the behaviour. Cyberstalking victimisation prevalence is estimated to
be between 1 and 40%, with the lack of a universally accepted definition identified as the reason for such disparity (Dreßing et al., 2014; Fissel & Reyns, 2020; Kalaitzaki, 2020; Reyns et al., 2012). Research conducted by Dreßing et al. (2014) demonstrated the variability in prevalence estimations based on the stringency of definition. A significant proportion of participants (43.4%) reported experiencing online harassment at least once in their lifetime. However, when accounting for a full definition of repetitive harassment that occurred for a duration of at least two weeks and fear being experienced, the prevalence was drastically reduced to 6.3%. Reyns et al. (2012) measured prevalence through the occurrence of any of the following behaviours on two or more occasions: unwanted attempts at communication or contact, harassment, unwanted sexual advances and threats of violence or physical harm. Using the above definition, several studies found around 40% of participants to report experiencing cyberstalking (Begotti et al., 2020; Maran & Begotti, 2019; Pereira & Matos, 2016; Reyns et al., 2012). These figures are likely to be an overestimation due to the broad definition and the lack of a requirement for victim fear. More recently, Fissel and Reyns (2020) used a stricter cyberstalking criterion defined by the victims experiencing repeated pursuit behaviours (two or more times) and a substantial emotional response or fear which resulted in 31.8% of respondents reporting cyberstalking victimisation in the last year. The discrepancies in prevalence demonstrate the pertinent issues with epidemiological research and discordant definitions. Differences in key elements such as victim fear and number of stalking behaviours can result in vastly divergent estimations of victimisation. A further issue with identifying the prevalence of cyberstalking is its entwinement with traditional stalking. Dreßing et al. 
(2014) outlined that only one quarter of cyberstalking cases are exclusively online cyberstalking; most commonly (42% of cases) cyberstalking is combined with stalking. It is often unclear in studies whether prevalence rates refer to cases that are exclusively cyberstalking or include overlapping behaviours. Sheridan and Grant (2007) reported a cyberstalking victimisation prevalence estimate of 7.2%, with all behaviours originating online and remaining solely online for a minimum of four weeks. However, 38.6% of
participants experienced online stalking behaviour as part of a stalking campaign, and these cases involving both online and offline methods were not included in prevalence estimations of cyberstalking. The aforementioned issues with prevalence rates and definitions, as well as measurement disparities, highlight that cyberstalking and stalking behaviours are difficult to disentangle, as there are significant conceptual and behavioural overlaps as well as co-occurrence (Maran & Begotti, 2019). Due to the online nature of cyberstalking and the different technological means perpetrators can use to monitor victims whilst remaining undetected, victims may also fail to recognise the offence and, consequently, may not experience fear. Many research participants erroneously consider online behaviours that fall into the categories of aggression or harassment to be an inevitable part of online life (Pew Research Center, 2017). As we learn more about technologies and devices, definitions of cyberstalking will expand, and perpetration and victimisation rates will continue to fluctuate. Across the literature, females are disproportionately found to have experienced cyberstalking victimisation (Dreßing et al., 2014; Kalaitzaki, 2020; Reyns & Fisher, 2018; Reyns et al., 2012). Women are three times more likely to be cyberstalked than men (Reyns & Fisher, 2018). Victimisation is experienced significantly more by females across pursuit behaviours including repeated contact, harassment, and unwanted sexual advances; no significant gender differences have been found for threats of violence (Reyns et al., 2012). The nature of the victim–cyberstalker relationship may also vary between genders. The proportion of men who are cyberstalked by someone they work with is almost twice as great as the proportion of women, whereas women are over one and a half times more likely than men to be cyberstalked by someone they have dated (Maple et al., 2011).
Cyberstalking is most commonly perpetrated by someone known to the victim, with only around 20% of cases involving a stranger (Maple et al., 2011). In almost 30% of cases the perpetrator is an ex-partner of the victim, whilst a further 28.5% of cases involve an acquaintance or friend (Dreßing et al., 2014). Further to this, the most common reason for cyberstalking as interpreted by victims is revenge for rejection or intimacy pursuit (Dreßing et al., 2014). The literature
on intimate partner violence perpetrated via technology highlights an overlap with the cyberstalking literature, in that several studies have identified high rates of intimate partner cyberstalking. For instance, over 30% of female respondents in DeKeseredy et al. (2019) reported cyberstalking victimisation by intimate partners, at higher rates than offline stalking. Conversely, in Smoker and March (2017), women were significantly more likely than men to engage in intimate partner cyberstalking (e.g. checking accounts or internet history without permission, monitoring a partner's behaviour through social media, and checking a partner's location without their knowledge).
Victim Impact

The vast majority of cyberstalking victims report being significantly impacted by their experiences, with only 2.5% of victims reporting no negative consequences (Dreßing et al., 2014). Fissel and Reyns (2020) found that 61% of victims experience health consequences, 51% experience social consequences, 48% experience work consequences, and 41% experience educational (university/college) consequences. The harm suffered by victims of cyberstalking is comparable to stalking, but victims are three times more likely to experience these consequences if cyberstalking is combined with stalking (Fissel & Reyns, 2020; Sheridan & Grant, 2007; Short et al., 2015; Stevens et al., 2020; Worsley et al., 2017).
Psychological and Emotional Impact

The most researched elements of victim impact are the manifold emotional and psychological consequences of cyberstalking, which result in significantly poorer mental wellbeing (Dreßing et al., 2014). The length of time that cyberstalking persists can exacerbate such impacts. Victims are twice as likely to experience health consequences when cyberstalking continues for more than one week, compared to victimisation that occurs for less than one week (Fissel & Reyns, 2020). A systematic review
conducted by Stevens et al. (2020) demonstrated that the most prevalent consequences experienced by victims are depression, anxiety, stress, fear, anger, low self-esteem, self-harm, shame, and isolation. Experiencing fear is a prominent consequence of cyberstalking, which further strengthens the argument for its inclusion in legal and scholarly definitions. Maple et al. (2011) found that 80% of victims feel fear and 94% feel distress beyond the minimal threshold of alarm, and that these feelings are experienced by a greater proportion of females than males. The predominant fear reported by victims is damage to reputation (34%; 46.3% of males and 28.4% of females), followed by fear of personal physical injury (23.8%; 14.7% of males and 28% of females). Victims were asked to choose one primary fear above others; whilst women were twice as likely as men to report physical injury as their biggest fear, most men reported reputational damage as theirs. This highlights the gendered nature of the impact that cyberstalking has. The types of cyberstalking behaviour perpetrated influence the extent of fear, with repeated contact, harassment, and threats causing more fear than identity theft or impersonation (Maple et al., 2011). The level of fear is also affected by the presumed motivation of the perpetrator. Victims who believe the perpetrator is motivated by revenge or rejection are significantly more fearful than those who believe the motivation is affection (Fissel, 2021). A significant amount of research on the psychological impact of cyberstalking highlights the pervasiveness of anxiety and depression amongst victims (Kalaitzaki, 2020; Short et al., 2014; Worsley et al., 2017). Using Beck's Depression Inventory, 35–42% of victims are classified as having depression and 4.1% experience severe depression; these rates are significantly higher than those of non-victims (Begotti & Maran, 2019; Begotti et al., 2020).
Nine out of ten victims report suffering from some anxiety and more than one-third suffer severe anxiety (Maple et al., 2011). However, Begotti et al. (2020) and Begotti and Maran (2019) did not find significant differences in anxiety levels between victims and non-victims. A further psychological consequence of cyberstalking is post-traumatic stress disorder (PTSD) which affects one in three victims (Maple et al., 2011; Short et al., 2015). Maple et al. (2011) highlighted that whilst only 5–10% of the general population
experience PTSD, 35% of cyberstalking victims present all of the symptoms. This is even greater when cyberstalking is combined with stalking, with almost 50% of these victims experiencing PTSD. Other negative emotional symptoms experienced by victims include irritation, anger, and aggression, which affect over half of victims (Begotti & Maran, 2019; Dreßing et al., 2014; Short et al., 2014; Worsley et al., 2017). The likelihood of experiencing aggressiveness and paranoia is significantly greater when multiple types of cyberstalking behaviours are experienced compared to just one (Begotti & Maran, 2019). Nearly 80% of victims report feelings of inner unrest and more than half feel helpless (Dreßing et al., 2014).
Physical and Social Impact

Intrinsically linked with psychological consequences, cyberstalking victims also experience physical symptoms and impacts on their social and everyday lives. Physical symptoms include weight change, appetite problems, sleeping issues, panic attacks, headaches, fatigue, nausea, and, in extreme cases, self-harm (Begotti & Maran, 2019; Short et al., 2015). Sleeping disorders are reported by 64.2% of victims (Dreßing et al., 2014). Physical symptoms may overlap with, or be secondary symptoms resulting from, psychological consequences such as stress or mood disorders. Cyberstalking has a high likelihood of social impact, which is heightened in crossover or proximal cases (Brown et al., 2017). Relationship impairments are reported by 80% of male victims and 70% of female victims (Maple et al., 2011). More specifically, male victims are more likely to give up social activities, whilst female victims are more likely to be alienated from friends or family. Some cyberstalking behaviours can result in reputational damage or turn friends and family against the victim (Short et al., 2014; Worsley et al., 2017). Relationship impairments can also occur indirectly from the strain of coping with stress, anxiety, or depressive disorders (Worsley et al., 2017). A further indirect impact that can lead to social deficits is the mistrust towards others that victims often feel (Dreßing et al., 2014). Secondary victimisation
is also likely, due to victims not being taken seriously by friends, family, and official agencies (e.g. police, medical professionals) which exacerbates feelings of isolation and embarrassment (Jansen van Rensburg, 2017; Worsley et al., 2017). More than 70% of victims report negative consequences on their work life including adverse effects on performance, changing their career, reducing hours or, in extreme cases, loss of employment (Brown et al., 2017; Maple et al., 2011; Short et al., 2014; Worsley et al., 2017). Similarly, university students report impacts to their academic life; these are three times more likely to occur in victims who also experience stalking or victims who experience cyberstalking for longer than one month (Fissel & Reyns, 2020). Cyberstalking victimisation frequently leads to significant financial impacts including moving home, employing additional security measures, job loss, and therapy or legal expenses (Brown et al., 2017; Maple et al., 2011). Financial costs are incurred by 44% of victims (Kaur et al., 2021). This is significantly higher for victims that experience both stalking and cyberstalking with almost 70% incurring financial loss.
Coping Mechanisms and Help Seeking

A key area of research in understanding and preventing cyberstalking victimisation is coping and help-seeking behaviour amongst victims. Research conducted by Tokunaga and Aune (2017) identified seven management tactics that victims use to cope with cyberstalking: ignore and avoid, active technological disassociation, help seeking, negotiation/threat, compliance/excuses, technological privacy maintenance, and derogation. Avoidance or minimisation is a commonly used tactic, employed by 35–49% of victims, and is utilised more commonly by female than male victims (Begotti et al., 2020; Kalaitzaki, 2020; Tokunaga & Aune, 2017). Other preferred coping behaviours include behaving more carefully online (66.6%), stopping online contact (65%), and limiting information shared online (62.8%) (Begotti & Maran, 2019; Kalaitzaki, 2020). Reducing internet usage is a common coping mechanism amongst cyberstalking victims, with almost one-third
reporting doing so (Begotti & Maran, 2019). As socialising and work life are increasingly conducted online, this can have negative consequences such as isolation and financial loss. Research presents mixed figures regarding the prevalence of reporting cyberstalking, which ranges from 17.5% to over 50% (Fissel, 2018; Kalaitzaki, 2020; Maple et al., 2011; Tokunaga & Aune, 2017). When it occurs, reporting is most often informal, with 43.8% of victims telling a friend or family member compared to only 14.5% reporting to law enforcement (Fissel, 2018). Meeting with police officers and substantiating a legal case are strategies that are 'not at all likely' to be adopted by over 70% of victims (Kalaitzaki, 2020). Formal reporting is more likely when the perpetrator is an intimate partner or if the cyberstalking continues over a longer period of time (Fissel, 2018). Additionally, whilst women make up 70% of cyberstalking victims, male victims are three times more likely to report cyberstalking to law enforcement (Fissel, 2018). Fissel (2018) highlighted that women are more likely to seek informal help, but expressed a need for further research to clarify and explain gender differences in reporting behaviour. Notable reasons for not reporting cyberstalking are embarrassment, failure to recognise the seriousness of the behaviour, and considering it unworthy of reporting (Kaur et al., 2021; Worsley et al., 2017). On the other hand, even when victims acknowledge that serious abuse has taken place, hesitation to report can be underpinned by fear of escalation or by feelings of guilt or sympathy towards the perpetrator (al-Khateeb et al., 2017); hence, victims show a preference for private resolution (al-Khateeb et al., 2017; Kaur et al., 2021).
Victims who report cyberstalking experiences to law enforcement often recount negative experiences arising from blame and lack of support (al-Khateeb et al., 2017; Jansen van Rensburg, 2017; Maple et al., 2011; Short et al., 2014; Worsley et al., 2017). Many victims also experience communication issues with the police, including a lack of updates or significant delays in case processing (al-Khateeb et al., 2017). Less than 30% of victims who report cyberstalking to the police feel they have benefitted, and the most common outcome is that reporting has no effect (al-Khateeb et al., 2017). Furthermore, 69% of victims report
receiving no support in coping with the consequences of victimisation (Maple et al., 2011). However, several organisations have more recently developed online services to help victims secure their devices and protect their online profiles, although these are generally focused on intimate-partner victims (Havron et al., 2019; Leitão, 2021). Victims have even fewer positive experiences when reporting to website and social networking site administrators, with less than 20% reporting an improvement (al-Khateeb et al., 2017). Whilst attempts are made to remove offending harassers, new profiles can quickly and easily be recreated. This emphasises the need for accountability and regulatory improvements by online platforms, as webmail and social networks are the most prevalent environments for cyberstalking experiences (al-Khateeb et al., 2017). A large percentage of victims advocate that service providers should take responsibility for dealing with cyberstalking (Maple et al., 2011).
Typology

Existing research has developed cyberstalking typologies in several ways: based on overlap with stalking, on underlying motivations, and on specific categories of behaviour (e.g. McFarlane & Bocij, 2003; Reyns et al., 2012; Sheridan & Grant, 2007). One of the first attempts to categorise cyberstalking was outlined by Sheridan and Grant (2007) to demonstrate cyberstalking as a degree of stalking rather than an independent phenomenon. The authors presented four categories of stalking victimisation dependent on the degree of cyber-involvement, thus outlining the conceptual and practical overlap. The first category, purely-offline, does not include cyberstalking and refers only to stalking victimisation offline. The second category, purely-online, encompasses cyberstalking victimisation that takes place solely online with no offline interaction. The remaining two categories include cases with overlap between cyberstalking and stalking: crossover, where cyberstalkers cross over into offline stalking, and proximal, where stalkers use the internet as one tactic within a wider stalking campaign. Research has utilised this classification method as a cyberstalking
typology (excluding the purely-offline category), finding differences in negative victim experiences between the types (Brown et al., 2017). Typologies of cyberstalking based on types of pursuit behaviour have also been introduced. It is acknowledged that cyberstalking can be confined to one type of behaviour or can be categorised as multi-type, with perpetrators using different types of behaviour to control, intimidate, and monitor their victim (Begotti & Maran, 2019). These types of pursuit behaviour include (1) contact, (2) harassment, (3) unwanted sexual advances, (4) threats of violence, and (5) identity fraud (Begotti & Maran, 2019; Begotti et al., 2020; Reyns et al., 2012). Originally these behaviours were outlined for definitional purposes by Bocij and McFarlane (2002) but were later introduced as categories of cyberstalking by Reyns et al. (2012). There is some contention over the inclusion of identity fraud as a type: initially it was measured separately, in the belief that it was qualitatively different and did not necessarily fit with the 'pursuit' characteristic of cyberstalking (Reyns et al., 2012). Nevertheless, Begotti and Maran (2019) and Maran and Begotti (2019) included identity fraud as a cyberstalking category in their research. Researchers who have utilised this typology consistently find contact to be the most common type, and threats and identity fraud the least common types, of cyberstalking behaviour. Using a behaviour-based typology allows research to focus on how these methods of pursuit can vary between specific perpetrators or victims. For example, unwanted sexual advances are more prevalent with female victims and male perpetrators, whilst male victims are more likely to experience threats of violence (Begotti & Maran, 2019; Maran & Begotti, 2019; Reyns et al., 2012). Finally, the principal motivation for perpetration has been used to categorise cyberstalking.
Cavezza and McEwan (2014) classified cyberstalkers using the five motivation types previously introduced in stalking typology (Mullen et al., 2000, 2009). These motives are rejected, resentful, intimacy seekers, incompetent suitors, and predatory. Whilst cyberstalkers fitted into these categories, the vast majority (75%) appeared to be motivated by rejection, which is also the most common category of motivation for stalkers (47%), albeit with more variability across the other categories (Cavezza & McEwan, 2014). McFarlane
and Bocij (2003) introduced a new motivation-based typology specifically for cyberstalking through a qualitative analysis of 24 victims. They devised a typology of four major types of cyberstalker: vindictive, composed, intimate, and collective. Vindictive cyberstalkers are the most ferocious and threatening, and the majority also perpetrate stalking. The composed cyberstalker aims to annoy, irritate, and cause distress, but without the intention of establishing a relationship. Intimate cyberstalkers aim to gain the attention and affection of their victims and comprise ex-intimates and infatuates; this third type is similar to the rejected, intimacy seeker, and incompetent suitor categories of Mullen et al. (2000, 2009). The final type in the McFarlane and Bocij (2003) cyberstalking typology, collective cyberstalkers, refers to individuals working together to target a victim, most commonly for punishment or vigilantism. The aforementioned typologies have been proposed and used in the literature to explain cyberstalking. Most have been based on pre-existing stalking typologies, further indicating a conceptual and practical overlap between the two behaviours. Further research is required to devise a clear, multi-modal framework that includes not only behaviours and motivations but also victim impact to adequately explain cyberstalking.
Perpetrator Characteristics

Prevalence

There is a much smaller body of research examining the prevalence of cyberstalking perpetration, resulting in similarly discordant rates depending on the behaviours examined. Some research suggests relatively low rates of cyberstalking perpetration, between 4.5 and 8.5% (Kalaitzaki, 2020; Reyns et al., 2012). Other research reports much higher rates (22–37%) of perpetration when including online harassment, threats, sexual advances, tracking, monitoring, or sharing confidential information as measures of cyberstalking (DeMatteo et al., 2017; Fissel et al., 2021; Kalaitzaki, 2020; Reyns et al., 2012).
Gender

There is a consensus in the literature that cyberstalking is a gendered crime; however, research into gender differences in perpetration presents inconsistent findings. Recent research identified perpetration to be positively associated with males and negatively associated with females (Fissel et al., 2021), with almost 70% of cyberstalking perpetrators being male (Dreßing et al., 2014). Males are also more likely to admit to being perpetrators (7%) than females (4%) in self-report research (Reyns et al., 2012). Begotti et al. (2020) compared the gender of perpetrators of different types of cyberstalking behaviour as reported by victims, and all behaviours were predominantly perpetrated by males. The largest gender difference is found in the perpetration of unwanted sexual advances online, which is 87% male perpetrated. Nevertheless, there is contradictory research that presents cyberstalking as perpetrated more commonly by females (Kalaitzaki, 2020). In cases of intimate partner cyberstalking, women were identified as the most likely perpetrators of monitoring and controlling a partner's mobile devices and social media profiles, using a scale measuring specific intimate partner cyberstalking behaviours such as using apps to track a partner or searching a partner's phone history (March et al., 2020; Smoker & March, 2017). Finally, there is also research demonstrating that cyberstalking does not differ between the genders (Fissel et al., 2021; Kircaburun et al., 2018). Thus, depending on the context, or the relationship between victim and perpetrator, differences emerge in cyberstalking perpetration by gender, but more research is needed to firmly establish gender patterns and differences in behaviour (Kalaitzaki, 2020).
Gender differences in the prevalence of cyberstalking perpetration may occur due to perceptions associated with female and male perpetrators. Cyberstalking is perceived as more severe and illegal when the perpetrator is male (Ahlgrim & Terrance, 2018). The requirement of victim fear to establish cyberstalking victimisation may skew prevalence data, since males are less likely to report feeling afraid for their physical wellbeing when cyberstalked (e.g. Maple et al., 2011). Nevertheless, the fact that women are more likely to report fear when experiencing cyberstalking by a male is indicative of a difference in the impact and severity of the experience that cannot be ignored. Higher levels of female victim impact are reported in several studies examining online intimate partner violence (e.g. Burke et al., 2011; Reed et al., 2016; Gracia-Leiva et al., 2020). It has been found that being young and female and having experienced cyber and offline intimate partner violence was associated with a tenfold increase in risk of suicide (Ahlgrim & Terrance, 2018). Due to the requirement of fear to establish cyberstalking victimisation, the perceived severity may mean that males will be the more prevalent perpetrators when reported by victims.
Research into cyber dating abuse and intimate partner cyberstalking shows that male behaviours tend to be more overt (publicly conducted),
direct, and severe than behaviours perpetrated by females, who are more likely to engage in controlling behaviours (e.g. monitoring a partner's social media and phone; Smoker & March, 2017). Furthermore, females are more likely than men to engage in minor cyber unwanted pursuit behaviours (e.g. sending excessive emails and texts, monitoring a partner's social media), but these are set apart from severe cyber unwanted pursuit behaviours (e.g. using spyware, webcams, and posting inappropriate pictures of partners; Dardis & Gidycz, 2019). Correspondingly, higher prevalence rates may be due to online pursuit or controlling behaviours being more readily self-reported by females because they are universally perceived as less severe or less fear-inducing for the victim. Therefore, the disparities in gender differences may be dependent on the types of questions asked and whether data on perpetrator gender is reported by victims or perpetrators. On the other hand, female perpetrators of cyberstalking are more likely to be perceived as engaging in courtship behaviours than male perpetrators (Ahlgrim & Terrance, 2018). Thus, isolating different intimate partner cyberstalking behaviours may explain higher prevalence rates for female perpetration; however, without taking context and impact into consideration, a full picture of gender differences in cyberstalking cannot be painted, although cyberstalking is undoubtedly linked with greater impact on females.
11 Cyberstalking: Prevalence, Characteristics, and Impact

Dark Tetrad

Whilst research into the psychological characteristics of perpetrators is limited, there is a small body of literature examining malevolent personality traits as predictors of cyberstalking. Smoker and March (2017) found that the dark tetrad traits of Machiavellianism, narcissism, psychopathy, and sadism were positive predictors of cyberstalking perpetration, specifically intimate partner cyberstalking. March et al. (2020) found that vulnerable narcissism (also known as covert narcissism and correlated with neuroticism) but not grandiose narcissism (also known as overt narcissism and correlated with extroversion) was significantly predictive of intimate partner cyberstalking. Verbal and physical sadism
were also predictive of intimate partner cyberstalking; however, Machiavellianism was not a significant predictor despite moderate positive correlations with perpetration. Secondary psychopathy (characterised by impulsivity and high anxiety), but not primary psychopathy (characterised by callousness and low anxiety), was a predictor of cyberstalking. When examining the sample by gender, secondary psychopathy was a predictor of cyberstalking only for males, whilst vulnerable narcissism, verbal sadism and physical sadism were significant predictors of cyberstalking only for females. The findings of Kircaburun et al. (2018) indicate that sadism, narcissism, and Machiavellianism, but not psychopathy, are predictive of cyberstalking. As with research on other types of online aggression (cyberbullying, harassment and image-based sexual abuse), there are discrepancies in the precise contribution of different dark tetrad traits to cyberstalking perpetration. Nevertheless, it is clear that malevolent personality traits have some association with cyberstalking behaviours.
Mental Health and Self-Control

There is a dearth of research on the impact of mental health problems on cyberstalking perpetration; however, Cavezza and McEwan (2014) identified that the majority of cyberstalkers (61%) were diagnosed with a personality disorder or had problematic personality traits. A mood disorder was also present in 31% of the sample. Additionally, McFarlane and Bocij (2003) theorised that vindictive cyberstalkers may have medium to severe mental health issues due to the disturbing nature of the contact made by perpetrators in the cases examined (i.e. images of corpses, bizarre comments, or unrelated ramblings). Perpetration has also been positively related to low self-control. Individuals with low self-control have been found to be twice as likely to perpetrate cyberstalking (Fissel et al., 2021; Reyns, 2019). The influence of self-control on cyberstalking perpetration has been found to differ by gender: females with low self-control are three times more likely to perpetrate cyberstalking, whilst no significant differences are found for males (Reyns, 2019). Marcum et al. (2016, 2018) found that low
self-control was predictive of cyberstalking intimate partners through tracking apps or logging into their accounts without their knowledge.
Technological Literacy

Individuals' online behaviour, technological skills, and patterns of technology use can affect their likelihood of engaging in cyberstalking. McFarlane and Bocij (2003) found that perpetrators generally have medium to very high computer literacy, with the exception of intimate cyberstalkers whose skills vary. Over 90% of cyberstalkers rated themselves as having medium, fairly high, or high computer skills and 50% rated their skills as high or fairly high. Internet addiction has been found to be predictive of cyberstalking in juvenile populations, but further research is required to examine this link in adults (Navarro et al., 2016). Participation in sexting has also been shown to be associated with cyberstalking perpetration. Individuals who have received sexual images are twice as likely to perpetrate cyberstalking (Reyns, 2019). When modelling genders separately, only sending 'sexts' was a significant predictor of cyberstalking for males, linked to four times the likelihood of perpetration. The opposite was true for females, with only receiving 'sexts' being a significant predictor of cyberstalking perpetration, also linked to a four-fold increase. It is commonly found across many behaviours belonging to the umbrella of technology-facilitated abuse (e.g. image-based sexual abuse, cyberbullying, technology-facilitated domestic abuse) that technology usage involving victimisation may also predict perpetration (Marcum et al., 2014; Powell et al., 2019). Fissel et al. (2021) found that individuals who have experienced cyberbullying or cyberstalking are significantly more likely to perpetrate cyberstalking. Villora et al. (2019) also found that over 35% of their female participants were both perpetrators and victims of cyber dating abuse (in particular, excessive communications and monitoring), indicating a social learning of abuse through intimate interactions.
Childhood Experiences

Abuse in childhood and insecure parental attachments have been found to be associated with cyberstalking. Findings from Ménard and Pincus (2012) indicate that sexual abuse in childhood, as well as preoccupied attachments, predicts cyberstalking in both males and females. Neglectful childhood abuse is predictive of perpetration in males, whilst physical childhood abuse is predictive of female perpetration. Kalaitzaki (2020) also found perpetration to be associated with mothers' affectionless control and neglectful parenting.
Conclusion

Cyberstalking research is rapidly advancing. Cyberstalking is clearly a highly prevalent issue, with up to 40% of individuals experiencing some form of victimisation. Perpetration is much less likely to be divulged, with less than 10% of people admitting to having perpetrated cyberstalking. Developing a universal definition and a single, multi-modal cyberstalking typology will allow research to more clearly outline the epidemiology of this phenomenon. Cyberstalking victimisation has serious psychological, social, and physical consequences. The majority of victims experience anxiety, depression or post-traumatic stress, demonstrating the severity and long-lasting effects of cyberstalking. Despite this, support and solutions for victims remain considerably inadequate: most victims do not seek help from law enforcement, and those who do are faced with insufficient support. Nevertheless, new developments in the field highlight emerging cybersecurity support which emphasises the protective role of technology against online risk and provides solutions for victims. Whilst perpetrator research is limited, there are some prominent characteristics that are linked with an increased likelihood of perpetrating cyberstalking. Low self-control and dark tetrad personality characteristics are the predictors most consistently highlighted in research. The literature on cyberstalking, cyber-dating abuse and intimate partner abuse has produced
results that can appear contradictory at first glance when it comes to the gendered nature of these behaviours. Whilst forms of cyberstalking can undoubtedly be perpetrated by all genders, once the severity of behaviours and the psychological and practical impact are taken into consideration, cyberstalking can be considered a gendered crime: more severe cyberstalking is perpetrated by males and linked with greater psychological impact for female victims. It is imperative that future research takes into consideration the context of the relationship between the victim and perpetrator, as well as the manifold impact on victims, and examines the potential crossover between cyber-dating abuse, online and offline intimate partner violence, and cyberstalking. The future of cyberstalking research must focus on better coordinated support and help for victims. As we increasingly conduct our lives online, technological solutions, online platforms, social networking sites, and internet providers must be held accountable for the safety and positive experience of their users and must put measures in place to monitor and restrict behaviours that facilitate cyberstalking.

Acknowledgements

The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Ahlgrim, B., & Terrance, C. (2018). Perceptions of cyberstalking: Impact of perpetrator gender and cyberstalker/victim relationship. Journal of Interpersonal Violence, 1–20. https://doi.org/10.1177/0886260518784590
al-Khateeb, H. M., Epiphaniou, G., Alhaboby, Z. A., Barnes, J., & Short, E. (2017). Cyberstalking: Investigating formal intervention and the role of corporate social responsibility. Telematics and Informatics, 34(4), 339–349. https://doi.org/10.1016/j.tele.2016.08.016
Begotti, T., Bollo, M., & Maran, D. A. (2020). Coping strategies and anxiety and depressive symptoms in young adult victims of cyberstalking: A questionnaire survey in an Italian sample. Future Internet, 12(8), 136–148. https://doi.org/10.3390/FI12080136
Begotti, T., & Maran, D. A. (2019). Characteristics of cyberstalking behavior, consequences, and coping strategies: A cross-sectional study in a sample of Italian university students. Future Internet, 11(5), 120. https://doi.org/10.3390/fi11050120
Bocij, P., & McFarlane, L. (2002). Online harassment: Towards a definition of cyberstalking. Prison Service Journal, 139, 31–38. http://proquest.umi.com/pqdweb?did=1013260141&Fmt=7&clientId=9718&RQT=309&VName=PQD
Brown, A., Gibson, M., & Short, E. (2017). Modes of cyberstalking and cyberharassment: Measuring the negative effects in the lives of victims in the UK. Annual Review of CyberTherapy and Telemedicine, 15, 57–63. http://hdl.handle.net/10547/623528
Burke, S. C., Wallen, W., Vail-Smith, K., & Knox, D. (2011). Using technology to control intimate partners: An exploratory study of college undergraduates. Computers in Human Behavior, 27(3), 1162–1167. https://doi.org/10.1016/j.chb.2010.12.010
Cavezza, C., & McEwan, T. E. (2014). Cyberstalking versus off-line stalking in a forensic sample. Psychology, Crime and Law, 20(10), 955–970. https://doi.org/10.1080/1068316X.2014.893334
Dardis, C. M., & Gidycz, C. A. (2019). Reconciliation or retaliation? An integrative model of postrelationship in-person and cyber unwanted pursuit perpetration among undergraduate men and women. Psychology of Violence, 9(3), 328–339. https://doi.org/10.1037/vio0000102
DeKeseredy, W. S., Schwartz, M. D., Nolan, J., Mastron, N., & Hall-Sanchez, A. (2019). Polyvictimization and the continuum of sexual abuse at a college campus: Does negative peer support increase the likelihood of multiple victimizations? British Journal of Criminology, 59(2), 276–295. https://doi.org/10.1093/bjc/azy036
DeMatteo, D., Wagage, S., & Fairfax-Columbo, J. (2017). Cyberstalking: Are we on the same (web)page? A comparison of statutes, case law, and public perception. Journal of Aggression, Conflict and Peace Research, 9(2), 83–94. https://doi.org/10.1108/JACPR-06-2016-0234
Dreßing, H., Bailer, J., Anders, A., Wagner, H., & Gallas, C. (2014). Cyberstalking in a large sample of social network users: Prevalence, characteristics, and impact upon victims. Cyberpsychology, Behavior, and Social Networking, 17(2), 61–67. https://doi.org/10.1089/cyber.2012.0231
Fissel, E. R. (2018). The reporting and help-seeking behaviors of cyberstalking victims. Journal of Interpersonal Violence, 1–26. https://doi.org/10.1177/0886260518801942
Fissel, E. R. (2021). Victims' perceptions of cyberstalking: An examination of perceived offender motivation. American Journal of Criminal Justice. https://doi.org/10.1007/s12103-021-09608-x
Fissel, E. R., Fisher, B. S., & Nedelec, J. L. (2021). Cyberstalking perpetration among young adults: An assessment of the effects of low self-control and moral disengagement. Crime & Delinquency. https://doi.org/10.1177/0011128721989079
Fissel, E. R., & Reyns, B. W. (2020). The aftermath of cyberstalking: School, work, social, and health costs of victimization. American Journal of Criminal Justice, 45(1), 70–87. https://doi.org/10.1007/s12103-019-09489-1
Gracia-Leiva, M., Puente-Martínez, A., Ubillos-Landa, S., González-Castro, J. L., & Páez-Rovira, D. (2020). Off- and online heterosexual dating violence, perceived attachment to parents and peers and suicide risk in young women. International Journal of Environmental Research and Public Health, 17(9), 3174. https://doi.org/10.3390/ijerph17093174
Havron, S., Freed, D., Chatterjee, R., McCoy, D., Dell, N., & Ristenpart, T. (2019). Clinical computer security for victims of intimate partner violence. Proceedings of the 28th USENIX Security Symposium (pp. 105–122).
Jansen van Rensburg, S. K. (2017). Unwanted attention: The psychological impact of cyberstalking on its survivors. Journal of Psychology in Africa, 27(3), 273–276. https://doi.org/10.1080/14330237.2017.1321858
Kalaitzaki, A. (2020). Cyberstalking victimization and perpetration among young adults. In M. F. Wright (Ed.), Recent advances in digital media impacts on identity, sexuality and relationships (pp. 22–38). IGI Global. https://doi.org/10.4018/978-1-7998-1063-6.ch002
Kaur, P., Dhir, A., Tandon, A., Alzeiby, E. A., & Abohassan, A. A. (2021). A systematic literature review on cyberstalking: An analysis of past achievements and future promises. Technological Forecasting and Social Change, 163. https://doi.org/10.1016/j.techfore.2020.120426
Kircaburun, K., Jonason, P. K., & Griffiths, M. D. (2018). The Dark Tetrad traits and problematic social media use: The mediating role of cyberbullying and cyberstalking. Personality and Individual Differences, 135, 264–269. https://doi.org/10.1016/j.paid.2018.07.034
Leitão, R. (2021). Technology-facilitated intimate partner abuse: A qualitative analysis of data from online domestic abuse forums. Human-Computer Interaction, 36(3), 203–242. https://doi.org/10.1080/07370024.2019.1685883
Maple, C., Short, E., & Brown, A. (2011). Cyberstalking in the United Kingdom: An analysis of the ECHO pilot survey. National Centre for Cyberstalking Research, University of Bedfordshire. https://paladinservice.co.uk/wp-content/uploads/2013/12/ECHO_Pilot_Final-Cyberstalking-in-the-UK-University-of-Bedfordshire.pdf
Maran, D. A., & Begotti, T. (2019). Prevalence of cyberstalking and previous offline victimization in a sample of Italian university students. Social Sciences, 8(1), 30–39. https://doi.org/10.3390/socsci8010030
March, E., Litten, V., Sullivan, D. H., & Ward, L. (2020). Somebody that I (used to) know: Gender and dimensions of dark personality traits as predictors of intimate partner cyberstalking. Personality and Individual Differences, 163. https://doi.org/10.1016/j.paid.2020.110084
Marcum, C. D., Higgins, G. E., Freiburger, T. L., & Ricketts, M. L. (2014). Exploration of the cyberbullying victim/offender overlap by sex. American Journal of Criminal Justice, 39(3), 538–548. https://doi.org/10.1007/s12103-013-9217-3
Marcum, C. D., Higgins, G. E., & Nicholson, J. (2018). Crossing boundaries online in romantic relationships: An exploratory study of the perceptions of impact on partners by cyberstalking offenders. Deviant Behavior, 39(6), 716–731. https://doi.org/10.1080/01639625.2017.1304801
Marcum, C. D., Higgins, G. E., & Poff, B. (2016). Exploratory investigation on theoretical predictors of the electronic leash. Computers in Human Behavior, 61, 213–218. https://doi.org/10.1016/j.chb.2016.03.010
McFarlane, L., & Bocij, P. (2003). An exploration of predatory behaviour in cyberspace: Towards a typology of cyberstalkers. First Monday, 8(9), 9–13. https://doi.org/10.5210/fm.v8i9.1076
Ménard, K. S., & Pincus, A. L. (2012). Predicting overt and cyber stalking perpetration by male and female college students. Journal of Interpersonal Violence, 27(11), 2183–2207. https://doi.org/10.1177/0886260511432144
Mullen, P. E., Pathé, M., & Purcell, R. (2000). Stalkers and their victims. Cambridge University Press. https://doi.org/10.1017/cbo9781139106863
Mullen, P. E., Pathé, M., & Purcell, R. (2009). Stalkers and their victims (2nd ed.). Cambridge University Press.
Navarro, J. N., Marcum, C. D., Higgins, G. E., & Ricketts, M. L. (2016). Addicted to the thrill of the virtual hunt: Examining the effects of internet addiction on the cyberstalking behaviors of juveniles. Deviant Behavior, 37(8), 893–903. https://doi.org/10.1080/01639625.2016.1153366
Nobles, M. R., Reyns, B. W., Fox, K. A., & Fisher, B. S. (2014). Protection against pursuit: A conceptual and empirical comparison of cyberstalking and stalking victimization among a national sample. Justice Quarterly, 31(6), 986–1014. https://doi.org/10.1080/07418825.2012.723030
Pereira, F., & Matos, M. (2016). Cyber-stalking victimization: What predicts fear among Portuguese adolescents? European Journal on Criminal Policy and Research, 22(2), 253–270. https://doi.org/10.1007/s10610-015-9285-7
Pew Research Center. (2017). Online harassment 2017. https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian residents. Computers in Human Behavior, 92, 393–402. https://doi.org/10.1016/j.chb.2018.11.009
Reed, L. A., Tolman, R. M., & Ward, L. M. (2016). Snooping and sexting: Digital media as a context for dating aggression and abuse among college students. Violence Against Women, 22(13), 1556–1576. https://doi.org/10.1177/1077801216630143
Reyns, B. W. (2019). Online pursuit in the twilight zone: Cyberstalking perpetration by college students. Victims and Offenders, 14(2), 183–198. https://doi.org/10.1080/15564886.2018.1557092
Reyns, B. W., & Fisher, B. S. (2018). The relationship between offline and online stalking victimization: A gender-specific analysis. Violence and Victims, 33(4), 769–786. https://doi.org/10.1891/0886-6708.VV-D-17-00121
Reyns, B. W., Fisher, B. S., & Randa, R. (2018). Explaining cyberstalking victimization against college women using a multitheoretical approach: Self-control, opportunity, and control balance. Crime and Delinquency, 64(13), 1742–1764. https://doi.org/10.1177/0011128717753116
Reyns, B. W., & Fissel, E. R. (2020). Cyberstalking. In A. M. Bossler & T. J. Holt (Eds.), The Palgrave handbook of international cybercrime and cyberdeviance (pp. 1283–1306). Palgrave Macmillan. https://doi.org/10.1007/978-3-319-78440-3_57
Reyns, B. W., Henson, B., & Fisher, B. S. (2012). Stalking in the twilight zone: Extent of cyberstalking victimization and offending among college students. Deviant Behavior, 33(1), 1–25. https://doi.org/10.1080/01639625.2010.538364
Sheridan, L. P., & Grant, T. (2007). Is cyberstalking different? Psychology, Crime and Law, 13(6), 627–640. https://doi.org/10.1080/10683160701340528
Short, E., Guppy, A., Hart, J. A., & Barnes, J. (2015). The impact of cyberstalking. Studies in Media and Communication, 3(2), 23–37. https://doi.org/10.11114/smc.v3i2.970
Short, E., Linford, S., Wheatcroft, J. M., & Maple, C. (2014). The impact of cyberstalking: The lived experience—A thematic analysis. Annual Review of CyberTherapy and Telemedicine, 12, 133–137.
Smoker, M., & March, E. (2017). Predicting perpetration of intimate partner cyberstalking: Gender and the dark tetrad. Computers in Human Behavior, 72, 390–396. https://doi.org/10.1016/j.chb.2017.03.012
Stevens, F., Nurse, J. R. C., & Arief, B. (2020). Cyber stalking, cyber harassment, and adult mental health: A systematic review. Cyberpsychology, Behavior, and Social Networking. https://doi.org/10.1089/cyber.2020.0253
Tokunaga, R. S., & Aune, K. S. (2017). Cyber-defense: A taxonomy of tactics for managing cyberstalking. Journal of Interpersonal Violence, 32(10), 1451–1475. https://doi.org/10.1177/0886260515589564
Villora, B., Yubero, S., & Navarro, R. (2019). Cyber dating abuse and masculine gender norms in a sample of male adults. Future Internet, 11(4), 84–95. https://doi.org/10.3390/FI11040084
Worsley, J. D., Wheatcroft, J. M., Short, E., & Corcoran, R. (2017). Victims' voices: Understanding the emotional impact of cyberstalking and individuals' coping responses. SAGE Open, 7(2), 1–13. https://doi.org/10.1177/2158244017710292
Yardley, E. (2021). Technology-facilitated domestic abuse in political economy: A new theoretical framework. Violence Against Women, 27(10), 1479–1498. https://doi.org/10.1177/1077801220947172
12 Crossing a Line? Understandings of the Relative Seriousness of Online and Offline Intrusive Behaviours Among Young Adults Victoria Coleman, Adrian J. Scott, Jeff Gavin, and Nikki Rajakaruna
V. Coleman · J. Gavin
University of Bath, Bath, UK

A. J. Scott (B)
Goldsmiths, University of London, London, UK
e-mail: [email protected]

N. Rajakaruna
Edith Cowan University, Joondalup, WA, Australia

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_12

Introduction

Internationally, research suggests there has been a rise in online stalking, also referred to as cyberstalking and technology-facilitated stalking, via social media platforms. Arguably, social media is increasingly used for online stalking in part because of the proliferation and ease of use of these platforms, but also because perpetrators can perform many intrusive behaviours with a degree of anonymity and relative impunity.
Furthermore, the use of online tools for stalking appears to be associated with the normalisation of a range of intrusive and monitoring behaviours, particularly among younger adult populations (Gillett, 2018; Milivojević et al., 2018). Importantly, both online and offline stalking research suggests that experiences of intrusive behaviour are gendered in a range of ways. For example, many studies report that women are more likely than men to be victims of stalking generally (Fansher & Randa, 2019; Fedina et al., 2020), and online stalking specifically (Dreßing et al., 2014; Fansher & Randa, 2019; Reyns et al., 2012). Additionally, research suggests that women experience greater levels of fear and harm as a result of stalking generally (Logan, 2020; Sheridan & Lyndon, 2012), and online stalking specifically (Pereira & Matos, 2016; Vakhitova et al., 2021), and that the relational context of stalking victimisation is gendered. For example, women are more likely than men to experience fear and harm in the context of intimate partner, sexual, and dating violence for both offline (Logan, 2020; Sheridan & Lyndon, 2012) and online stalking (Vakhitova et al., 2021). This chapter draws on findings from a study into young adults' understandings of the relative seriousness of conceptually similar online and offline intrusive behaviours. Two research questions are examined. First, how do students categorise and rank the seriousness of conceptually similar online and offline intrusive behaviours? Second, what underlying assumptions shape understandings of the relative seriousness of these behaviours? Consideration is also given to whether men's and women's understandings of intrusive behaviour differ and, if so, how. The chapter proceeds as follows. In the next section we provide some background and context regarding the extent and nature of online stalking with a particular focus on the role of social media.
We further consider prior research into perceptions of online as compared to offline stalking. We then present the method for the present study, followed by the findings, including the key assumptions that shaped participants' understandings of conceptually similar online and offline intrusive behaviours, structured according to five themes: intent, effort, physicality, choice and control, and norms and expectations. We further examine notable gender differences in participants' perceptions of the seriousness of these behaviours. Finally, we discuss the implications of the study for future research and responses to online stalking.
Online Stalking by Social Media: Background and Gendered Context

In recent years, attention has been drawn to the concerning rise in online stalking via social media platforms, including Facebook, Instagram and WhatsApp (Chandler, 2019; Nachiappan, 2021; Warburton, 2020). Early research in the United Kingdom revealed that 62% of online stalking victims were harassed on social networking sites (Maple et al., 2011), and recent research in the United States has further revealed that an increasing percentage of online harassment victims were harassed on social media platforms: 58% of most recent incidents in 2017 compared to 75% in 2020 (Duggan, 2017; Vogels, 2021). Social media platforms are arguably appealing tools for stalking perpetrators because they facilitate contact with a person, contact with members of a person's network, and virtual expressions of affection towards a person via messages and virtual gifts (Chaulk & Jones, 2011; Kaur et al., 2021). Stalking perpetrators can also perform many of these behaviours with a degree of anonymity and, as such, relative impunity. For example, Facebook users cannot track who views their profile or how often their profile is viewed (Chaulk & Jones, 2011; Facebook, 2021). Although social media platforms have developed more stringent policies on what is and what is not acceptable, as well as new and more accessible ways to report online victimisation (Jiang et al., 2020), people are often critical of these policies. For example, a nationally representative survey in the United States found only 18% of respondents believed social media platforms are doing an 'excellent' or 'good' job at addressing online victimisation (Vogels, 2021). Although limited, research examining the gendered nature of online intrusive behaviour suggests that, like offline intrusive behaviour, it is experienced differently by men and women.
For example, Smith and Duggan (2013) found that 42% of women who engaged in online dating experienced behaviour that made them feel harassed and uncomfortable compared to 17% of men. Similarly, Anderson et al. (2020) found that 48% of women who had used online dating applications or websites experienced continued contact after indicating a lack of interest compared to 27% of men, and that 11% of women had received a threat
of physical harm compared to 6% of men. It has also been argued that the risks associated with dating and intimate partner violence, which are experienced by women more than men, are heightened further by the increased access, monitoring, and control afforded by technological advances (Gillett, 2018; Mason & Magnet, 2012; Woodlock, 2017). Online stalking victims experience a range of detrimental health, social and financial impacts, that are often comparable to those experienced by offline stalking victims (Dreßing et al., 2014; Fissel & Reyns, 2020; Short et al., 2015). However, research in the United States found that 43% of online stalking victims did not seek help and that 85% did not report their experiences to the police (Fissel, 2018). Furthermore, although Nobles et al. (2014) found that online stalking victims tended to engage in more self-protective measures than offline stalking victims, they were often slower to take these measures. For example, the number of self-protective measures was positively associated with the presence of physical threats in cases of offline stalking, and fear over time, the presence of physical attacks and being a woman in cases of online stalking. The authors suggested that while many offline intrusive behaviours are immediately recognised as problematic, many online intrusive behaviours are not recognised as problematic until they ‘…escalate in seriousness, duration, or other modalities’ (Nobles et al., 2014, p. 1008). It appears, therefore, that victims may fail to recognise and/or minimise the seriousness of online intrusive behaviours relative to offline intrusive behaviours, when such behaviours can be just as problematic. Online and offline stalking are difficult to define because they are pattern-based and incorporate a range of unwanted intrusive behaviours over an extended period, many of which may appear benign when considered separately (Pathé et al., 2004; Sheridan & Davies, 2001). 
Defining online stalking is further complicated by a lack of agreement regarding whether it should be considered an extension of offline stalking, or a distinct form of stalking (Fraser et al., 2010; Maple et al., 2011; Reyns & Fisher, 2018; see also Chapter 11, this volume), as well as the continual evolution of technology and an expanding repertoire of online intrusive behaviours (Cavezza & McEwan, 2014; Wilson et al., 2020). In broad terms, stalking has been defined as persistent harassment
in which one person repeatedly attempts to impose unwanted communication and/or contact on another (Mullen et al., 1999, 2001), while online stalking '…involves using technology to repeatedly communicate with, harass, or threaten the victim' (Wilson et al., 2020, p. 2). Subjective appraisals of intrusive behaviour as unwanted and problematic are central to recognising and acknowledging stalking victimisation (Page & Scott, 2021), and have led to a body of research that examines the influence of various characteristics on perceptions of stalking (Scott, 2020), including modality (online, offline) and perceiver gender (man, woman). The limited research examining modality has found that individuals generally perceive offline stalking as more serious than online stalking. For example, Alexy et al. (2005) found that only 30% of participants labelled an online stalking scenario as such. Furthermore, Ramirez (2019) found that participants were more likely to report offline stalking behaviours (69%) than online stalking behaviours (56%). Therefore, perceptions mirror the 'reality' of the behaviour insofar as the behaviour is minimised and not reported. It is also apparent that perceptions of online and offline stalking differ by perceiver gender. In the context of online stalking, research has found that women are more likely than men to label stalking scenarios as stalking (Becker et al., 2020), as representing illegal behaviour (Ahlgrim & Terrance, 2021), and as containing intrusive behaviour that they would report to the police (Feuer, 2014). Similarly, in the context of offline stalking, research has found that women are more likely than men to label stalking scenarios as stalking (Finnegan & Timmons Fritz, 2012; Scott et al., 2015), as causing the victim to experience alarm and fear of violence, and as warranting police intervention and a criminal conviction (Scott et al., 2015).
The concept of 'safety work' is relevant when trying to understand these gender differences (Kelly, 2012). Safety work refers to the 'invisible' work girls and women undertake when participating in public spaces (including virtual public spaces such as social media) to avoid sexual harassment and other forms of sexual violence, as well as attributions of blame should such victimisation occur (Vera-Gray & Kelly, 2020). Safety work involves a heightened awareness of the environment, and an array of self-restricting (yet socially mandated) behaviours that pre-empt sexual violence. Consequently, participation
234
V. Coleman et al.
in social media calls forth such safety work, and gender differences in perceptions and responses to online stalking may be explained by the heightened awareness to potential threats and greater likelihood of pre-emptive action by women compared to men.
Method

The present study examines young adults' understandings of the seriousness of conceptually similar online and offline intrusive behaviours using focus groups and a modified Q-sort task. Forty-five students from a university in Western Australia participated in one of 10 focus groups: 30 identified as women and 15 identified as men, with an average age of 21.41 years (SD = 5.77 years). Each focus group comprised 4 to 6 participants: five comprising men and women (mixed gender), four comprising women only, and one comprising men only. Each participant received an AU$20 gift card for taking part in the study.

The modified Q-sort task (Brown, 1996; McKeown & Thomas, 1988) required participants to categorise and rank 20 cue cards that presented 10 offline and 10 conceptually similar online intrusive behaviours. Participants first categorised the 20 intrusive behaviours according to whether they were perceived as 'not serious', 'potentially serious' or 'definitely serious', and then ranked them from 1 (least serious) to 20 (most serious). The 20 intrusive behaviours were revised versions of the 24 obsessional relational intrusion 'tactics' (12 offline and 12 online) identified by Chaulk and Jones (2011). Each cue card provided a category heading and example behaviours (see Table 12.1). Importantly, all online intrusive behaviours were enacted via Facebook because it remains the leading global social networking site (Statista, 2021b), with 55% of active Facebook users aged between 18 and 34 years (Statista, 2021a). Facebook therefore provides an appropriate context for the examination of student understandings of online intrusive behaviours.

Informed consent was obtained prior to the study, and debrief statements were provided afterwards. The research was approved by an institutional ethics committee, and followed the guidelines prescribed by the Australian National Statement on Ethical Conduct in Human Research.
12 Crossing a Line? Understandings of the Relative Seriousness …
Table 12.1 Cue card categories and example intrusive behaviours

Unwanted gifts
  Offline: Leaving unwanted gifts (e.g. flowers, photographs, presents)
  Online: Leaving unwanted gifts via Facebook (e.g. flowers, photographs, presents using applications)

Unwanted messages
  Offline: Leaving unwanted messages (e.g. notes, cards, letters)
  Online: Leaving unwanted messages via Facebook (e.g. messages, posting on your wall)

Exaggerated affection
  Offline: Making exaggerated displays of affection (e.g. saying 'I love you' after limited interaction)
  Online: Making exaggerated displays of affection via Facebook (e.g. saying 'I love you' on your wall after limited interaction)

Following
  Offline: Following you (e.g. following you to or from work, school, home, gym, daily activities)
  Online: Following you via Facebook (e.g. joining the same networks and groups as you)

Intruding uninvited
  Offline: Intruding uninvited into your interactions (e.g. initiates unwanted conversations and intrudes on your conversations with other people)
  Online: Intruding uninvited into your interactions via Facebook (e.g. initiates unwanted conversations on Facebook and comments on your photos and posts)

Involving in activities
  Offline: Involving you in activities in unwanted ways (e.g. sending invitations to events and groups, enrolling you in programs, putting you on mailing lists)
  Online: Involving you in activities in unwanted ways via Facebook (e.g. sending you invitations to events and groups, tagging you in posts, adding you to group conversations)

Intruding upon friends etc.
  Offline: Intruding upon your friends, family or co-workers (e.g. trying to befriend your friends, attempting to be invited to the same events and groups as you)
  Online: Intruding upon your friends, family or co-workers via Facebook (e.g. trying to add your friends to their friend list, attempting to be invited to the same events and groups as you)

Monitoring
  Offline: Monitoring you and/or your behaviour (e.g. calling at all hours to check up on your whereabouts, checking up on you by asking mutual friends)
  Online: Monitoring you and/or your behaviour via Facebook (e.g. constantly checking your profile for updates, checking up on you via your friends' profiles)

Regulatory harassment
  Offline: Engaging in regulatory harassment (e.g. making false allegations of harassment against you, spreading false rumours about you to your boss/instructor)
  Online: Engaging in regulatory harassment via Facebook (e.g. falsely reporting you to Facebook for harassment, posting false rumours about you on the wall of your boss/instructor)

Threatening objects
  Offline: Leaving or sending you threatening objects (e.g. posting marked up images and photo-shopped photographs of you)
  Online: Leaving or sending you threatening objects via Facebook (e.g. uploading marked up images and photo-shopped photographs of you)
The focus groups were video recorded, and the participant discussions were transcribed. Frequency and descriptive analyses were conducted to examine how participants categorised and ranked each intrusive behaviour. Thematic analysis was then conducted, using the process outlined by Braun and Clarke (2013), to explore the underlying assumptions that shaped their understandings of the relative seriousness of these behaviours. Analysis involved thorough familiarisation with the transcripts of participant discussions as they completed the modified Q-sort task. Themes were then identified and refined to capture overarching topics of discussion across the focus groups.
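The frequency and descriptive analyses described above are simple to reproduce. The following sketch is purely illustrative and is not the authors' code: it computes, for hypothetical modified Q-sort responses, the percentage of focus groups assigning each seriousness category to a behaviour and the mean seriousness rank. The behaviour labels, data structure and toy responses are all assumptions introduced for demonstration.

```python
# Illustrative sketch (not the study's code): summarising modified Q-sort data.
# Each response maps a behaviour label to a (category, rank) pair, where rank
# runs from 1 (least serious) to 20 (most serious). The two toy responses
# below are invented; in the study there were 10 focus groups.
from collections import Counter
from statistics import mean

responses = [
    {"following_offline": ("definitely serious", 20),
     "following_online": ("not serious", 3)},
    {"following_offline": ("definitely serious", 19),
     "following_online": ("potentially serious", 8)},
]

def categorisation_frequencies(responses, behaviour):
    """Percentage of responses assigning each category to a behaviour."""
    counts = Counter(r[behaviour][0] for r in responses if behaviour in r)
    n = sum(counts.values())
    return {cat: 100 * c / n for cat, c in counts.items()}

def mean_rank(responses, behaviour):
    """Mean seriousness rank (1 = least serious, 20 = most serious)."""
    return mean(r[behaviour][1] for r in responses if behaviour in r)

print(categorisation_frequencies(responses, "following_offline"))
print(mean_rank(responses, "following_online"))  # → 5.5
```

In the chapter itself, the equivalent summaries are reported as categorisation percentages (Table 12.2) and mean rankings (Fig. 12.1).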
Findings and Discussion

Frequency analyses revealed that all 10 offline intrusive behaviours were more likely to be categorised as 'definitely serious' than their conceptually similar online counterparts (see Table 12.2). Although some of these differences were relatively minor (threatening objects: 90% vs. 70%; regulatory harassment: 80% vs. 70%), others were considerable (following: 100% vs. 0%; monitoring: 80% vs. 20%). Similarly, descriptive analyses revealed that all 10 offline intrusive behaviours were ranked more highly (were perceived as more serious) than their conceptually similar online counterparts (see Fig. 12.1). Furthermore, four of the offline intrusive behaviours were perceived as the most serious (threatening objects, following, regulatory harassment, monitoring) and seven of the online intrusive behaviours were perceived as the least serious (following, unwanted messages, involving in activities, unwanted gifts, intruding uninvited, monitoring, exaggerated affection). It is apparent, therefore, that participants perceived conceptually similar online and offline intrusive behaviours differently.

Table 12.2 Categorisations of conceptually similar online and offline intrusive behaviours

                              Offline (%)                        Online (%)
                              Definitely  Potentially  Not       Definitely  Potentially  Not
                              serious     serious      serious   serious     serious      serious
Following                     100         0            0         0           30           70
Threatening objects           90          10           0         70          30           0
Regulatory harassment         80          20           0         70          30           0
Monitoring                    80          20           0         20          40           40
Intruding upon friends etc.   70          30           0         30          70           0
Unwanted gifts                50          50           0         20          20           60
Involving in activities       30          60           10        10          30           60
Intruding uninvited           30          40           30        10          30           60
Exaggerated affection         20          80           0         0           90           10
Unwanted messages             10          80           10        0           60           40

[Fig. 12.1 Mean rankings of conceptually similar online and offline intrusive behaviours (1 = least serious, 20 = most serious)]

Thematic analysis generated five themes that shaped participants' understandings of the relative seriousness of these behaviours: intent, effort, physicality, choice and control, and norms and expectations. Importantly, although these themes were shared across the focus groups, there were gender differences in the ways conceptually similar online and offline intrusive behaviours were discussed. These differences cut across the five themes and are presented at the end of this section.
Theme 1: Intent

Intent came up as participants sought to explain why they perceived certain intrusive behaviours as more serious than others. Whether a behaviour was considered serious or not depended on the perceived intent of the perpetrator. If the motivation was harmful, the behaviour was serious. Enacting behaviour on Facebook, however, rendered intent ambiguous, and this ambiguity reduced the perceived seriousness of the behaviour:

A lot of the stuff on Facebook … isn't taken as seriously because you can't tell how someone's feeling when they send you something. (Participant 5, Women only group 4)
Participants also believed that on Facebook, there was a lack of intent to cause harm, again reducing the perceived seriousness of the behaviour. There was an assumption that behaviour enacted via Facebook is more likely to be motivated by boredom, or by a desire to be annoying rather than harmful: It’s different. I reckon if they are that bored on the computer they could just be really annoying and tagging you and just trying to annoy you by harassing you. But if you are actually doing it in person, that’s pretty serious. (Participant 2, Woman, Mixed gender group 1)
Interest, or curiosity, was another motivation attributed to monitoring behaviours on Facebook, highlighting the distinction between the benign and everyday practice of ‘Facebook stalking’, and the more serious practice of ‘stalking’, which involves harmful intent: You have quite a few people on Facebook that stalk … not because they are actually stalking them or anything. Like not with the intention of doing anything to them … they are just you know … checking on them or they are just like interested in what they have to say. (Participant 1, Women only group 1)
These findings are consistent with perception research in the context of offline stalking, whereby people are more likely to perceive behaviour to be stalking if the perpetrator intends to cause fear or harm (Dennison, 2007; Scott et al., 2014). However, the finding that intent was considered ambiguous and difficult to interpret in the context of online stalking is concerning because applied research has found that intent was fundamentally similar for both online and offline stalking perpetrators (Cavezza & McEwan, 2014). Importantly, rather than err on the side of caution, participants assumed that ambiguous intent equated to harmless intent or a desire to be annoying. This minimisation of risk could have important implications, especially given that it is easier to mask intent online due to the anonymity afforded by the Internet.
Theme 2: Effort

The second theme related to effort. If the behaviour was perceived as effortful, it was perceived as serious. This theme was closely associated with intent, in that participants believed that effortful behaviour was more likely to involve harmful intent. Participants viewed online intrusive behaviours as less serious because they are 'easy', non-time-consuming and involve minimal effort, while the opposite was true for offline intrusive behaviours:

Facebook's really easy, you just click on the profile and read everything that they've done. It takes two seconds as opposed to taking all of their time to stalk you. (Participant 1, Women only group 3)
Participants also discussed how the ease of online monitoring enables perpetrators to target multiple people simultaneously. The ease of online monitoring was contrasted with the comparable effort of offline surveillance, and the tendency for perpetrators to target one person at a time. Furthermore, the targeting of one, as opposed to multiple people was considered more serious: I think like the effort side as well. Because if it’s online then there’s not much effort going into that because they could be stalking like twenty people, but to actually then take that a step further and come to wherever you are like offline and like make that difference between the twenty and just the one. (Participant 4, Women only group 1)
A related aspect of effort was the idea of someone 'going out of their way' to engage in intrusive behaviour. Participants believed offline perpetrators were more likely to go out of their way than online perpetrators, thus making their behaviour more serious. For example, offline gift-giving was seen as involving greater effort in terms of time and money, and was perceived as more serious than online gift-giving:

I suppose they've gone outta the way to go down to the shop, buy the gift, pay the money, go to their house and drop it off rather than all you have to do online is like click a little button. (Participant 3, Men only group)
Parallels can be drawn between these findings and perception research in the context of offline stalking, whereby persistence (a form of effort) is important in determining whether intrusive behaviour is labelled as stalking (Dennison, 2007; Scott et al., 2014). The minimisation of the perceived seriousness of stalking behaviour based on perceived effort is concerning, given that the ease of a stalking behaviour does not make it inherently less serious. Melander (2010) highlighted that the ease and speed of technology may be attractive to perpetrators, and a recent systematic review noted that advances in social media and smart devices are linked to rising cases of online stalking (Kaur et al., 2021). Furthermore, the assumption that online perpetrators are more likely to target multiple people than offline perpetrators contrasts with the finding that online and offline perpetrators are similarly likely to target multiple people (Cavezza & McEwan, 2014).
Theme 3: Physicality

The physical nature of some intrusive behaviours was central to explanations of why they were perceived as more serious than others, and why offline intrusive behaviours were perceived as more concerning than their conceptually similar online counterparts. Participants tended to distinguish the more serious offline intrusive behaviours, which were considered 'physical' and 'real', from the less serious online intrusive behaviours, which were considered separate from the 'real world':
There's stuff that happens online and there's stuff that happens in, you know, real life or real space and time. I guess … like if someone was knocking on your door and phoning you, to me that seems more serious than leaving them messages on Facebook. (Participant 4, Man, Mixed gender group 4)
The physical proximity of offline intrusive behaviours compared to that of online intrusive behaviours was also discussed. Participants viewed online intrusive behaviours as less proximal than offline intrusive behaviours: Via the Internet, it’s not physical, it’s not, you know, within proximity. (Participant 4, Man, Mixed gender group 5)
The notion of someone knowing where the targeted person lived also resonated with participants, and was highly salient in determining the seriousness of offline intrusive behaviours; particularly in relation to gifts and messages, which implied knowledge of where the targeted person lived: I reckon Facebook’sone thing but in person they know where you live so they’re around you, so it’s really creepy. (Participant 1, Woman, Mixed gender group 2)
Furthermore, participants believed that knowledge of the targeted person’s whereabouts increased the risk of physical harm. Therefore, online intrusive behaviours were considered less serious than offline intrusive behaviours because the computer screen acts as a protective barrier that reduces the potential for harm: I think in person there is more immediate danger to yourself, as opposed to online where … you have to arrange a meeting and they can’t like exactly attack you through the computer screen, well not physically harming you. (Participant 1, Women only group 1)
The finding that participants focused on the risk of physical harm when determining the seriousness of intrusive behaviours, to the neglect of other harms, is concerning given that research has demonstrated that online and offline stalking victims experience a comparable range of detrimental health, social and financial impacts (Dreßing et al., 2014; Fissel & Reyns, 2020; Short et al., 2015). For example, Short et al. found that online stalking victims often experienced disruptions to sleeping and eating, anxiety, stress and fear. Consequently, this apparent focus on physical harm may prevent targeted people from identifying and acknowledging a situation as problematic and taking appropriate protective measures.
Theme 4: Choice and Control

Participants suggested that there was greater choice on Facebook, which diminished the perceived severity of online intrusive behaviours. Participants discussed how people choose to use Facebook, and by extension, can choose to stop using it. Extending this logic, they argued that the online intrusive behaviours were less serious than their offline counterparts because it is not possible to simply opt out offline:

It's your choice to have it, it's not like you have to, it's not like it's a compulsory thing, it's your choice to be online. (Participant 1, Woman, Mixed gender group 3)
Furthermore, participants discussed how people can choose what information to share on Facebook, and the absence of comparable choices offline: If it’s online, it’s what you want to share, but if it’s offline, it’s like whatever you do even if you don’t want other people to know about it, whatever you do, they see it. (Participant 3, Women only group 1)
Participants also believed individuals had greater control over how to respond to online intrusive behaviours compared to offline intrusive behaviours. For example, they discussed how users can block and control problematic online intrusive behaviours via the privacy options available on Facebook, and the absence of comparable controls offline. Therefore, offline intrusive behaviours were perceived as more serious than their conceptually similar online counterparts:

If they don't want to talk to the person they can block them, they have that option. The victim does have the option to block it out of their lives whereas if they're like in real … they don't really have that option. (Participant 4, Women only group)
This finding is at odds with people’s actual behaviour on Facebook, where there is often a mismatch between users’ privacy intentions and their actual privacy settings (Madejski et al., 2012; Mondal et al., 2019). Facebook is making ongoing attempts to rectify this issue by, for example, periodically reminding users who can see their posts, and highlighting appropriate privacy settings (Hutchinson, 2018).
Theme 5: Norms and Expectations

The final theme concerns the normalisation of Facebook stalking, which renders online intrusive behaviours less serious than their offline counterparts. There was a general assumption that everybody 'stalks' on Facebook:

Most Facebook users would have done something like this in the years that they've been on it. (Participant 3, Women only group 4)
The everydayness of certain online intrusive behaviours is reflected in the phrase ‘Facebook stalking’, which does not carry the same threatening connotations as ‘real stalking’: People do make a joke of Facebook though. Like they will say I’m going to Facebook stalk your profile. (Participant 3, Women only group 2)
The inevitable risk of Facebook stalking also arose, with participants viewing intrusion into other people's lives as the purpose of Facebook. Being monitored on Facebook was considered acceptable and an expected outcome of using Facebook, with users being able to choose and control what they shared, whereas offline surveillance was considered problematic and cause for concern:

You've just gotta assume that people are, that's what Facebook is for. It's for looking into other people's private lives and sharing the details of your own private life. (Participant 3, Women only group 4)
Finally, permission to stalk was perceived as inherent in signing up to Facebook. Therefore, in contrast to offline targets, online targets have, by default, already permitted these behaviours:

You're pretty much giving them permission to look into your life whereas if they're following you around you haven't really given them permission. (Participant 4, Women only group 3)
For over 10 years now, Facebook stalking has been recognised as a normalised and acceptable practice involving the routine monitoring of friends that constitutes many users’ everyday activities on Facebook (Frampton & Fox, 2021; Lewis & West, 2009; Young, 2011), and using the term in this way reduces the gravitas that many online intrusive behaviours may deserve (Frede, 2012). For example, Nobles et al. (2014) found that threats increased the likelihood of self-protective measures in the context of offline but not online stalking, where fear and/or physical assault were necessary. Consequently, the normalisation of Facebook stalking may allow online intrusive behaviours to escalate unchecked, with targeted individuals slower to take protective measures.
Gendered Understandings of Online and Offline Intrusive Behaviours

While the five themes did not differ for men and women, the ways in which they discussed the relative risks of online and offline intrusive behaviours did. Although manifest in different ways depending on the context, men tended to focus on the explicit content of these behaviours, while women tended to focus on the implicit meaning of the same behaviours. This is highlighted in the following exchange:
Participant 2, Woman: …but then you have to think
Participant 4, Man: …it's just a message
Participant 2, Woman: …about what 'I love you' can mean. (Mixed gender group 5)
On the whole, there was a tendency for men both to trivialise the intrusive behaviours and to see threat only in the explicit content of these behaviours. For example, unwanted gifts were often taken at face value by men ('Free stuff, no?', Participant 4, Man, Mixed gender group 3), a point not lost on some of their female counterparts ('The boys are like "Ooh, free flowers"', Participant 1, Woman, Mixed gender group 3). When intrusive acts were constructed as threatening by men, it was often with reference to exaggerated examples of unwanted online messages and friend requests:

Participant 2, Man: Well, they'll send you a friend request or something?
Participant 3, Man: Yeah, they would but then you'd say no, but what about if they're like some super hacker who can, you know, know what you're doing 24/7. You see that's where I'm a bit scared. (Mixed gender group 2)
Women, on the other hand, saw threats in the mundane. It was not the content of intrusive behaviours that was important, but the underlying threat that they represent; this applied equally to online and offline intrusive behaviours. For example, in the extract below, the woman refocuses discussion away from the type of gift received to the message that may be inadvertently conveyed by accepting an unwanted gift, no matter what that gift might be:

Participant 1, Man: …because it depends on the types of gifts you're getting as well
Participant 2, Woman: …cos if they're seeing that you're accepting the gifts, you know, that might just kind of make them want to do it more and more and more and more. They're invading your privacy when doing that so it kinda fits in here, it's like potentially serious but could be definitely serious. (Mixed gender group 5)
Similarly, the seemingly innocuous act of following on Facebook is seen as a potential threat by these women, but not recognised as such by their male counterparts:

Participant 3, Man: If it's just that [intrusive behaviour], it's not really serious. I mean
Participant 1, Woman: …but it has the potential, because they are monitoring you
Participant 2, Woman: …that has the potential to be really serious
Participant 1, Woman: Yeah, because they can start like stalking you properly. (Mixed gender group 3)
This potential threat is not explicit; for women it is inscribed in the deeper meanings and implications of unwanted behaviours. For example, the potential threat of an online declaration of love is threatening not because of the content of the message, but because of the reciprocal actions that such a declaration requires from the recipient. One’s response, or lack thereof, sends a message: The reason why I’m kinda like umm about this one [intrusive behaviour] is because okay, if someone does say that they do love you, okay and you just don’t, you don’t care, you don’t pay attention. Yeah, but I think sometimes people kind of force issues… and they, they could be feeling it and you’re not feeling it … and that could be an issue because you’re not feeling it but they’re feeling it. (Participant 2, Woman, Mixed gender group 5)
These differences between men's and women's understandings of intrusive behaviours may stem from their experiences of such behaviours. Throughout the focus group discussions, participants drew on their personal experiences of stalking. The vast majority of these anecdotal examples were provided by women:

I dunno … depends what they want out of you. Like because I was harassed last year, by a girl who was actually dating the guy I dated like years ago. And she was like pretending to be him and I'm like are you okay? What's going on? And it was just really weird, and then I just blocked her and it all went away, so it was like okay. (Participant 4, Women only group 2)
It is not surprising, therefore, that the language used by women when discussing theoretical scenarios often included expressions of fear and apprehension. For example, the underlying threat involved in everyday behaviours, such as posting a selfie, a food photograph or adding a friend, often invoked fear. The words 'scary' and 'dread' were used throughout many of the focus group discussions, predominantly by women:

Participant 3: Yeah, and that was the big thing for me cos I worked at night, so this person used to follow me to 'check' that I was okay, and I mean for me it's experience. I mean … when I see an unwanted gift it fills me with a feeling of dread umm yeah, so I guess that's why I put that [intrusive behaviour] in there
Participant 4: Yeah, they might think it's nice but it's scary. (Women only group 4)
These findings are consistent with perception research, which has found that judgements regarding the seriousness of both online and offline stalking differ by perceiver gender (Ahlgrim & Terrance, 2021; Becker et al., 2020; Feuer, 2014; Finnegan & Timmons Fritz, 2012; Scott et al., 2015). The finding that women are more likely to focus on the implicit meaning of intrusive behaviours may also help develop an understanding of why women are more likely than men to experience alarm and fear of violence (see Scott et al., 2015), and contribute to the reason women are more likely than men to report their experiences to the police (see Feuer, 2014; Finnegan & Timmons Fritz, 2012). Since women generally experience intimate partner violence, sexual violence and public harassment at higher rates than men, and often with severe harms (Mellgren et al., 2018; World Health Organisation, 2013), it is also possible that women have more at stake in being able to identify and avoid intrusive behaviours. This may form a further explanation as to why women, in this study at least, have developed a capacity to read implicit meanings and possible motivations for these behaviours. Indeed, it reflects the sort of heightened awareness, vigilance and pre-emptive behaviours that constitute women's 'safety work' in other public environments (Kelly, 2012; Vera-Gray & Kelly, 2020). In any public space, virtual or physical, women are effectively 'on guard' (Gardner, 1995), and there are many habitual strategies women use on a regular basis (both consciously and unconsciously) to monitor risk and alter their own behaviour accordingly (see Kelly, 2012; Vera-Gray, 2018; Vera-Gray & Kelly, 2020). Therefore, it is possible that the attentiveness to the implicit meanings of intrusive stalking behaviours, as well as how to respond to those behaviours (such as whether to return, ignore or accept an unwanted gift), is further reflective of this gendered safety work.
Conclusion

This chapter examined student understandings of the relative seriousness of conceptually similar online and offline intrusive behaviours. Participants were more likely to perceive offline intrusive behaviours as serious than their conceptually similar online counterparts, and five key assumptions shaped their understandings of the relative seriousness of these behaviours. The assumptions related to intent, effort, physicality, choice and control, and norms and expectations. Although these themes did not differ by gender, perceptions of the relative risks of online and offline intrusive behaviours did. Men tended to focus on the explicit content of the intrusive behaviours whereas women tended to focus on the implicit meaning of these behaviours. Understandings of the relative seriousness of online and offline intrusive behaviours were also influenced by participants' personal experiences, with women more likely than men to express fear and apprehension regarding these experiences. The men who participated in the research were therefore more likely than the women to trivialise both online and offline intrusive behaviours.

Collectively, these findings suggest that offline intrusive behaviours are perceived as more serious, and by extension, more worthy of help and protection than online intrusive behaviours. It is important, therefore, for future research to examine the function of online and conceptually similar offline intrusive behaviours from the perspective of stalking perpetrators. Online and offline intrusive behaviours may serve different purposes, with online intrusive behaviours less indicative of a problematic situation and, by extension, of a need for help and protection. Alternatively, online and offline intrusive behaviours may serve the same purpose, and individuals targeted by online intrusive behaviours may fail to seek help and take appropriate self-protective measures. There is increasing pressure for social media platforms to take online safety more seriously; however, the effectiveness of more stringent policies and accessible ways of reporting online victimisation will be limited if victims of online intrusive behaviour fail to recognise, acknowledge, and report these behaviours.

Finally, it is important to acknowledge that social media and technological advances often outpace research. Thus, although Facebook continues to play an integral role in young people's lives, its popularity with younger generations is starting to diminish (Children's Commissioner, 2018). Furthermore, there are additional online intrusive behaviours that are not applicable to Facebook, or do not have offline counterparts, that were not considered in this study. These include ephemeral content, such as content posted on Snapchat, Instagram stories and Facebook Live, as well as integrated geo-location capabilities, greater integration between platforms, and data collection by social media providers. Social media providers have an ethical responsibility to take proactive steps in supporting user safety (Kaur et al., 2021; Suzor et al., 2019). They need to develop and increase awareness of community behaviour standards on their platforms so that users understand that intrusive behaviours are not acceptable (Dhillon & Smith, 2019). Furthermore, policies need to be supported by safety mechanisms and procedures so that victims can report intrusive behaviours and be responded to in a timely manner, and so that perpetrators can be held accountable for their intrusive behaviour (Dhillon & Smith, 2019).
Legal authorities also have a responsibility to recognise the risks posed by online intrusive behaviours and to establish mechanisms to work with social media providers in responding, rather than dismissing online intrusive behaviour as less important than offline behaviour (Chang, 2020; Dhillon & Smith, 2019; Holt et al., 2019). Finally, young people need to be educated about what is and what is not appropriate behaviour across both online and offline contexts to promote good behaviour and mutual respect, and to ensure that individuals targeted by intrusive behaviours are able to recognise, acknowledge, and report these unwanted behaviours. As Reyns and Fisher (2018) stated, ‘…the conceptual divisions between offline and online risk that appear in many studies
12 Crossing a Line? Understandings of the Relative Seriousness …
251
may be obscuring important connections whose influences span these two domains of victimization’ (p. 781). Acknowledgements The authors gratefully acknowledge the assistance of Rebecca Burgess, Hannah Edwards and Laura Hemming during the conducting of the research. The authors would also like to thank the Anastasia Powell for her valuable comments and edits on an earlier version of this chapter.
References

Ahlgrim, B., & Terrance, C. (2021). Perceptions of cyberstalking: Impact of perpetrator gender and cyberstalker/victim relationship. Journal of Interpersonal Violence, 36, NP4074–NP4093. https://doi.org/10.1177/0886260518784590
Alexy, E. M., Burgess, A. W., Baker, T., & Smoyak, S. A. (2005). Perceptions of cyberstalking among college students. Brief Treatment and Crisis Intervention, 5, 279–289. https://doi.org/10.1093/brief-treatment/mhi020
Anderson, M., Vogels, E. A., & Turner, E. (2020, February). Online dating: The virtues and downsides. Pew Research Center. Retrieved from https://www.pewresearch.org/
Becker, A., Ford, J. V., & Valshtein, T. J. (2020). Confusing stalking for romance: Examining the labelling and acceptability of men’s (cyber)stalking of women. Sex Roles. Advance online publication. https://doi.org/10.1007/s11199-020-01205-2
Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. Sage Publications.
Brown, S. R. (1996). Q methodology and qualitative research. Qualitative Health Research, 6, 561–567. https://doi.org/10.1177/104973239600600408
Cavezza, C., & McEwan, T. E. (2014). Cyberstalking versus off-line stalking in a forensic sample. Psychology, Crime & Law, 20, 955–970. https://doi.org/10.1080/1068316X.2014.893334
Chandler, S. (2019, October 11). Social media is fostering a big rise in real-world stalking. Forbes. Retrieved from https://www.forbes.com/
Chang, W. J. (2020). Cyberstalking and law enforcement. Procedia Computer Science, 176, 1188–1194. https://doi.org/10.1016/j.procs.2020.09.115
Chaulk, K., & Jones, T. (2011). Online obsessive relational intrusion: Further concerns about Facebook. Journal of Family Violence, 26, 245–254. https://doi.org/10.1007/s10896-011-9360-x
Children’s Commissioner. (2018). Life in ‘likes’: Children’s Commissioner report into social media use among 8–12 year olds. Retrieved from https://www.childrenscommissioner.gov.uk/
Dennison, S. M. (2007). Interpersonal relationships and stalking: Identifying when to intervene. Law and Human Behavior, 31, 353–367. https://doi.org/10.1007/s10979-006-9067-3
Dhillon, G., & Smith, K. J. (2019). Defining objectives for preventing cyberstalking. Journal of Business Ethics, 157, 137–158. https://doi.org/10.1007/s10551-017-3697-x
Dreßing, H., Bailer, J., Anders, A., Wagner, H., & Gallas, C. (2014). Cyberstalking in a large sample of social network users: Prevalence, characteristics, and impact upon victims. Cyberpsychology, Behavior, and Social Networking, 17, 61–67. https://doi.org/10.1089/cyber.2012.0231
Duggan, M. (2017, July). Online harassment 2017. Pew Research Center. Retrieved from https://www.pewresearch.org/
Facebook. (2021). Can I tell who’s viewing my Facebook profile? Retrieved from https://en-gb.facebook.com/help/210896588933875
Fansher, A. K., & Randa, R. (2019). Risky social media behaviors and the potential for victimization: A descriptive look at college students victimized by someone met online. Violence and Gender, 6, 115–123. https://doi.org/10.1089/vio.2017.0073
Fedina, L., Backes, B. L., Sulley, C., Wood, L., & Busch-Armendariz, N. (2020). Prevalence and sociodemographic factors associated with stalking victimization among college students. Journal of American College Health, 68, 624–630. https://doi.org/10.1080/07448481.2019.1583664
Feuer, B. S. (2014). The effects of cyberstalking severity, gender, and the victim–perpetrator relationship on reporting to law enforcement (Unpublished doctoral dissertation). Alliant International University.
Finnegan, H. A., & Timmons Fritz, P. A. (2012). Differential effects of gender on perceptions of stalking and harassment behavior. Violence and Victims, 27, 895–910. https://doi.org/10.1891/0886-6708.27.6.895
Fissel, E. R. (2018). The reporting and help-seeking behaviors of cyberstalking victims. Journal of Interpersonal Violence. Advance online publication. https://doi.org/10.1177/0886260518801942
Fissel, E. R., & Reyns, B. W. (2020). The aftermath of cyberstalking: School, work, social, and health costs of victimization. American Journal of Criminal Justice, 45, 70–87. https://doi.org/10.1007/s12103-019-09489-1
Frampton, J. R., & Fox, J. (2021). Monitoring, creeping, or surveillance? A synthesis of online social information seeking concepts. Review of Communication Research, 9. https://doi.org/10.12840/ISSN.2255-4165.025
Fraser, C., Olsen, E., Lee, K., Southworth, C., & Tucker, S. (2010). The new age of stalking: Technological implications for stalking. Juvenile and Family Court Journal, 61, 39–55. https://doi.org/10.1111/j.1755-6988.2010.01051.x
Frede, S. (2012, November 27). Facebook stalking: It’s no longer just a joke. Highlander News. Retrieved from http://www.highlandernews.org/
Gardner, C. B. (1995). Passing by: Gender and public harassment. University of California Press.
Gillett, R. (2018). Intimate intrusions online: Studying the normalisation of abuse in dating apps. Women’s Studies International Forum, 69, 212–219. https://doi.org/10.1016/j.wsif.2018.04.005
Holt, T. J., Lee, J. R., Liggett, R., Holt, K. M., & Bossler, A. (2019). Examining perceptions of online harassment among constables in England and Wales. International Journal of Cybersecurity Intelligence & Cybercrime, 2, 24–39. Retrieved from https://vc.bridgew.edu/ijcic
Hutchinson, A. (2018, May 25). Facebook launches new privacy prompt to remind users of data controls. Social Media Today. Retrieved from https://www.socialmediatoday.com/
Jiang, J. A., Middler, S., Brubaker, J. R., & Fiesler, C. (2020, October). Characterizing community guidelines on social media platforms. Conference on Computer Supported Cooperative Work and Social Computing.
Kaur, P., Dhir, A., Tandon, A., Alzeiby, E. A., & Abohassan, A. A. (2021). A systematic literature review on cyberstalking. An analysis of past achievements and future promises. Technological Forecasting and Social Change, 163. https://doi.org/10.1016/j.techfore.2020.120426
Kelly, L. (2012). Standing the test of time? Reflections on the concept of the continuum of sexual violence. In J. Brown & S. Walklate (Eds.), Handbook on sexual violence (pp. xvii–xxvi). Routledge.
Lewis, J., & West, A. (2009). ‘Friending’: London-based undergraduates’ experience of Facebook. New Media and Society, 11, 1–21. https://doi.org/10.1177/1461444809342058
Logan, T. K. (2020). Examining stalking experiences and outcomes for men and women stalked by (ex)partners and non-partners. Journal of Family Violence, 35, 729–739. https://doi.org/10.1007/s10896-019-00111-w
Madejski, M., Johnson, M., & Bellovin, S. M. (2012). A study of privacy setting errors in an online social network. Pervasive Computing and Communications Workshops, 340–345. https://doi.org/10.1109/PerComW.2012.6197507
Maple, C., Short, E., & Brown, A. (2011). Cyberstalking in the United Kingdom: An analysis of the ECHO pilot survey. University of Bedfordshire National Centre for Cyberstalking Research. Retrieved from https://uobrep.openrepository.com/
Mason, C. L., & Magnet, S. (2012). Surveillance studies and violence against women. Surveillance & Society, 10, 105–118. https://doi.org/10.24908/ss.v10i2.4094
McKeown, B., & Thomas, D. (1988). Q methodology. Sage Publications.
Melander, L. A. (2010). College students’ perceptions of intimate partner cyber harassment. Cyberpsychology, Behavior, and Social Networking, 13, 263–268. https://doi.org/10.1089/cyber.2009.0221
Mellgren, C., Andersson, M., & Ivert, A. K. (2018). “It happens all the time”: Women’s experiences and normalization of sexual harassment in public space. Women & Criminal Justice, 28, 262–281. https://doi.org/10.1080/08974454.2017.1372328
Milivojević, S., Crofts, T., Lee, M., & McGovern, A. (2018). ‘A sneaky bit of stalking’: Young people, social network sites, and practices of online surveillance. Temida, 21, 181–205. https://doi.org/10.2298/TEM1802181M
Mondal, M., Yilmaz, G. S., Hirsch, N., Khan, M. T., Tang, M., Tran, C., Kanich, C., Ur, B., & Zheleva, E. (2019, November). Moving beyond set-it-and-forget-it privacy settings on social media. ACM SIGSAC Conference on Computer and Communications Security.
Mullen, P. E., Pathé, M., & Purcell, R. (2001). Stalking: New constructions of human behaviour. Australian and New Zealand Journal of Psychiatry, 35, 9–16. https://doi.org/10.1046/j.1440-1614.2001.00849.x
Mullen, P. E., Pathé, M., Purcell, R., & Stuart, G. W. (1999). Study of stalkers. American Journal of Psychiatry, 156, 1244–1249. https://doi.org/10.1176/ajp.156.8.1244
Nachiappan, A. (2021, January 1). Social media stalking on rise as harassers dodge identity checks. The Sunday Times. Retrieved from https://www.thetimes.co.uk/
Nobles, M. R., Reyns, B. W., Fox, K. A., & Fisher, B. S. (2014). Protection against pursuit: A conceptual and empirical comparison of cyberstalking and stalking victimization among a national sample. Justice Quarterly, 31, 986–1014. https://doi.org/10.1080/07418825.2012.723030
Page, T. E., & Scott, A. J. (2021). Current understandings of sex-based harassment and stalking perpetration. In J. M. Brown & M. A. H. Horvath (Eds.), Cambridge handbook of forensic psychology (2nd ed.). Cambridge University Press.
Pathé, M., MacKenzie, R., & Mullen, P. E. (2004). Stalking by law: Damaging victims and rewarding offenders. Journal of Law and Medicine, 12, 103–111. Retrieved from https://www.westlaw.com.au
Pereira, F., & Matos, M. (2016). Cyber-stalking victimization: What predicts fear among Portuguese adolescents? European Journal on Criminal Policy and Research, 22, 253–270. https://doi.org/10.1007/s10610-015-9285-7
Ramirez, C. (2019). College students’ perceptions, experiences, and attitudes towards stalking and cyberstalking (Unpublished master’s thesis). California State University.
Reyns, B. W., & Fisher, B. S. (2018). The relationship between offline and online stalking victimization: A gender-specific analysis. Violence and Victims, 33, 769–786. https://doi.org/10.1891/0886-6708.VV-D-17-00121
Reyns, B. W., Henson, B., & Fisher, B. S. (2012). Stalking in the twilight zone: Extent of cyberstalking victimization and offending among college students. Deviant Behavior, 33, 1–25. https://doi.org/10.1080/01639625.2010.538364
Scott, A. J. (2020). Stalking: How perceptions differ from reality and why these differences matter. In R. Bull & I. Blandon-Gitlin (Eds.), The Routledge international handbook of legal and investigative psychology (pp. 238–254). Routledge.
Scott, A. J., Rajakaruna, N., Sheridan, L., & Gavin, J. (2015). International perceptions of relational stalking: The influence of prior relationship, perpetrator sex, target sex, and participant sex. Journal of Interpersonal Violence, 30, 3308–3323. https://doi.org/10.1177/0886260514555012
Scott, A. J., Rajakaruna, N., Sheridan, L., & Sleath, E. (2014). International perceptions of stalking and responsibility: The influence of prior relationship and severity of behavior. Criminal Justice and Behavior, 41, 220–236. https://doi.org/10.1177/0093854813500956
Sheridan, L., & Davies, G. M. (2001). Violence and the prior victim-stalker relationship. Criminal Behaviour and Mental Health, 11, 102–116. https://doi.org/10.1002/cbm.375
Sheridan, L., & Lyndon, A. E. (2012). The influence of prior relationship, gender, and fear on the consequences of stalking victimization. Sex Roles, 66, 340–350. https://doi.org/10.1007/s11199-010-9889-9
Short, E., Guppy, A., Hart, J. A., & Barnes, J. (2015). The impact of cyberstalking. Studies in Media and Communication, 3, 23–37. https://doi.org/10.11114/smc.v3i2.970
Smith, A., & Duggan, M. (2013, October). Online dating & relationships. Pew Research Center. Retrieved from https://www.pewresearch.org/
Statista. (2021a). Distribution of Facebook users worldwide as of January 2021, by age and gender. Retrieved from https://www.statista.com/
Statista. (2021b). Most popular social networks worldwide as of January 2021, ranked by number of active users. Retrieved from https://www.statista.com/
Suzor, N., Dragiewicz, M., Harris, B., Gillett, R., Burgess, J., & Van Geelen, T. (2019). Human rights by design: The responsibilities of social media platforms to address gender-based violence online. Policy & Internet, 11, 84–103. https://doi.org/10.1002/poi3.185
Vakhitova, Z. I., Alston-Knox, C. L., Reeves, E., & Mawby, R. I. (2021). Explaining victim impact from cyber abuse: An exploratory mixed methods analysis. Deviant Behavior. Advance online publication. https://doi.org/10.1080/01639625.2021.1921558
Vera-Gray, F. (2018). The right amount of panic: How women trade freedom for safety. Policy Press.
Vera-Gray, F., & Kelly, L. (2020). Contested gendered space: Public sexual harassment and women’s safety work. International Journal of Comparative and Applied Criminal Justice, 44, 265–275. https://doi.org/10.1080/01924036.2020.1732435
Vogels, E. A. (2021, January). The state of online harassment. Pew Research Center. Retrieved from https://www.pewresearch.org/
Warburton, D. (2020, September 19). Stalking crimes treble as web weirdos use social media for vicious campaigns. Mirror. Retrieved from https://www.mirror.co.uk/
Wilson, C., Sheridan, L., & Garratt-Reed, D. (2020). What is cyberstalking? A review of measurements. Journal of Interpersonal Violence. Advance online publication. https://doi.org/10.1177/0886260520985489
Woodlock, D. (2017). The abuse of technology in domestic violence and stalking. Violence Against Women, 23, 584–602. https://doi.org/10.1177/1077801216646277
World Health Organization. (2013). Global and regional estimates of violence against women: Prevalence and health effects of intimate partner violence and non-partner sexual violence. World Health Organization. Retrieved from https://www.who.int/
Young, K. (2011). Social ties, social networks and the Facebook experience. International Journal of Emerging Technologies and Society, 9, 20–34. Retrieved from https://citeseerx.ist.psu.edu/
Part IV
Sexual and Image-Based Abuse
13 The Impact of Technology-Facilitated Sexual Violence: A Literature Review of Qualitative Research

Joanne Worsley and Grace Carter
J. Worsley
University of Liverpool, Liverpool, UK

G. Carter
Coventry University, Coventry, UK
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_13

Introduction

Sexual violence is a considerable global public health issue. Internationally, approximately one in every three women (35%) has experienced physical and/or sexual violence perpetrated by an intimate partner, or sexual violence perpetrated by someone else, at some point in their lives (WHO, 2013). Sexual violence is defined as:

any sexual act, attempt to obtain a sexual act, unwanted sexual comments or advances, or acts to traffic, or otherwise directed, against a person’s sexuality using coercion, by any person regardless of their relationship to the victim, in any setting, including but not limited to home and work. (WHO, 2012, p. 2)
There are devastating immediate and long-term physical and mental health consequences for people who experience sexual violence, including physical injuries, substance misuse, eating disorders, post-traumatic stress disorder (PTSD), anxiety, depression, self-harm and suicidality (WHO, 2013). There are also considerable social and economic costs associated with sexual violence and abuse, which affect family, community and economic life. Sexual violence also imposes costs in lost productivity and on police, criminal justice and social services (Brown et al., 2020).

Given the ubiquitous nature of the internet and the recent proliferation of digital devices, a new form of sexual violence has emerged. Technology-facilitated sexual violence (TFSV) refers to a broad range of behaviours in which digital technologies are used as tools to facilitate sexually based harms. Individuals can experience TFSV at any time, in any location, which creates an omnipresent sense of threat. Examples of TFSV include online sexual harassment (e.g. unwanted or unwelcome offensive, humiliating, or intimidating conduct of a sexual nature; Powell & Henry, 2017), image-based sexual abuse (IBSA; e.g. nonconsensual creation, dissemination and/or threats to disseminate private sexual images; McGlynn & Rackley, 2017), and cyberstalking. Although the line between sexual and nonsexual behaviours is not always clear, in our chapter we include cyberstalking as a form of TFSV because this crime often involves unwanted and repeated sexual pursuit that induces fear or alarm.

Quantitative research has highlighted that TFSV is far-reaching. An online survey of Australian adults (n = 2956) revealed that online sexual harassment was the most frequently experienced TFSV dimension, with young people aged 18–24 years experiencing the highest rate of victimisation (Powell & Henry, 2019).
Based on a US sample (n = 2849), the Pew Online Harassment Survey revealed that age and gender are most closely associated with how individuals experience online harassment. Women aged 18–24 years experienced more severe forms of online harassment than their male counterparts, including cyberstalking and online sexual harassment (Pew Research Center, 2014). In an online survey (n = 5798), 20% of Australian respondents and 16.6% of UK respondents reported experiencing online sexual harassment in their lifetime (Powell & Henry, 2017). Women reported higher rates than their male counterparts, and more than 32% of young women aged 18–24 years reported experiencing online sexual harassment (Powell & Henry, 2017).

A growing body of literature explores how experiencing TFSV impacts individuals. Yet in a recent review of empirical studies examining the nature and/or prevalence of various forms of TFSV against adults, Henry and Powell (2018) observed a dearth of empirical research, identifying only 11 quantitative studies in the international literature. Although this quantitative body of evidence illustrates the negative consequences of experiencing TFSV, few studies have empirically examined the experiences of victims/survivors, and a review of qualitative studies exploring the broader social and psychological implications of TFSV is therefore timely and necessary. TFSV is growing rapidly, yet little attention has been given to the lived experience of the people affected. Thus, there is a need to explore and understand more about how individuals experience sexual violence in the digital context. In this chapter, we critically review international qualitative studies in order to understand how the lives of children, young people and adults have been impacted by experiencing TFSV. Throughout this chapter, we refer to individuals as ‘victims/survivors’ and as those who have ‘experienced’ TFSV, whilst recognising that individuals may choose to use these terms differently.
We are mindful of how we refer to individuals who have experienced TFSV and want to recognise that simply referring to individuals as either a ‘victim’ or a ‘survivor’ does not adequately represent their experiences of abuse and can devalue their agency and limit how meaning can be given to their experiences (Nissim-Sabat, 2009; Schott, 2012).
Method

The focus of our review was to include any primary empirical study that was qualitative, written in English, and specifically included children’s, adolescents’ or adults’ lived experiences of TFSV. The review was based on an extensive electronic literature search using a variety of databases (e.g. Academic Search Premier, Google Scholar, PsycINFO), and the reference lists of the studies selected were also reviewed. In order to locate the most recent evidence, scholarly articles or grey literature written between 2000 and 2020 were included. Our search included qualitative study designs such as ethnography, phenomenological studies, narrative studies, action research studies, case studies, grounded theory studies, visual studies, and mixed methods designs where data were collected and analysed using qualitative methods. We excluded editorials, commentaries and opinion papers, as well as studies that collected data using qualitative methods but did not analyse these data using qualitative analysis methods (e.g. open-ended survey questions where the response data are analysed using descriptive statistics only). Given the limited number of studies that met our criteria, we did not exclude studies based on their quality.

In light of the broad range of behaviours included in our review, we searched a range of keywords and combinations. These included cyberstalking, online stalking, cyber-obsessive pursuit, image-based sexual exploitation, revenge porn, sexting, electronic sexual harassment, cyber-sexual harassment, online sexual harassment, online sexual bullying, technology-facilitated sexual abuse, digital sexual violence and sexploitation. The original literature search produced in excess of 1000 articles, in part due to the broad range of keywords used. Of these, a number were selected based on the inclusion criteria identified above. We reviewed abstracts and full texts to ensure they met our criteria.
We included 18 studies in the final review (see Appendix 1).
Results

The studies were published between 2010 and 2020. Fourteen studies were conducted in high-income countries such as England, Canada, Australia, the United States, New Zealand, Denmark, Hungary, and Belgium, whilst four were conducted in middle-income countries such as South Africa, Malaysia and Bangladesh. The majority were conducted in Western countries, predominantly within the United Kingdom. With regard to data collection methods, the majority of studies used interviews, focus groups and/or online surveys. One study analysed posts from an online Danish hotline.

The qualitative literature has examined the consequences or outcomes of TFSV from victim/survivor perspectives. Whilst the majority of studies focused on adult victims/survivors, three studies focused exclusively on children’s, adolescents’ and/or young people’s perspectives (e.g. Mandau, 2021; Project deSHAME, 2017; Ringrose et al., 2012). Our findings suggest that the impact of TFSV is multi-layered and pervasive across different dimensions of victims/survivors’ personal and social lives. As the effect on victims’ lives is multifaceted, we have summarised the findings under the following subheadings: psychological impacts, physical health outcomes, interpersonal consequences, social consequences, self-blame/victim blaming, coping strategies and support.
Psychological Impacts

Victims/survivors describe the longevity of the abuse. As perpetrators of TFSV can create a sense of omnipresence through relentless texts or phone calls, many victims/survivors described feeling trapped (Woodlock, 2017). The invasive nature of the abuse also elicits fear and helplessness. For some victims/survivors, a heightened sense of fear overshadowed every aspect of their life (Worsley et al., 2017). In a large sample of undergraduate students (n = 85) with lived experience of cyberstalking, fear of private information being released was common (Davies et al., 2016). For adult victims/survivors of IBSA, uncertainty around whether images would be uploaded online was a constant source of fear and anxiety (McGlynn et al., 2021). Those who were successful in removing the images from the internet still experienced fear that those images would resurface in the future (Huber, 2020).

Many victims/survivors described their behaviour following IBSA as obsessive, as they started spending a lot of time searching the internet to determine whether the images had been uploaded. In one study, the frequency with which victims/survivors checked online ranged from every couple of minutes to every few days (Huber, 2020). Obsessive behaviours such as checking the internet were also common following cyberstalking (Reveley, 2017). One victim/survivor of cyberstalking likened her continual checking to obsessive–compulsive disorder (OCD):

It was just a constant case of googling everybody’s name, checking their Facebook, checking their Twitter, making sure she hadn’t got into anything, and that was like I suppose it was like OCD in a way, it was just constant. (Reveley, 2017, p. 86)
The emotional impact of TFSV predominantly includes depression and anxiety. According to Reveley (2017), depression was the most prevalent form of distress experienced by victims/survivors of cyberstalking: four out of five victims/survivors reported symptoms of depression and/or taking antidepressants as a consequence of their ordeal. Similarly, in a sample of 17 women aged 19–46 years, Huber (2020) found the most common form of distress following IBSA was depression, with at least 15 of the women experiencing depressive symptoms following the trauma of image dissemination.

As members of their local community might see the images and recognise them, many victims/survivors of IBSA became agoraphobic. In one sample, the majority of victims/survivors found it very difficult to leave their homes, for periods varying from a few days to six months (Huber, 2020). It was common for victims/survivors to assume that others were looking at and talking about them. Such assumptions were heightened for those working in occupations requiring face-to-face contact with members of the general public (Huber, 2020). In light of this, one victim/survivor completely changed her appearance as a disguise, which illustrates the drastic lengths some victims go to following the trauma of image dissemination:

I’ve changed my hair colour. I intentionally put on weight, because I was quite thin and I always had quite an athletic sort of figure, and it sounds so ridiculous because I intentionally made myself put on [substantial weight] so people wouldn’t [recognise me]. (McGlynn et al., 2021, p. 10)
Victims/survivors of cyberstalking described distress in the form of paranoia (Jansen van Rensburg, 2017; Short et al., 2014; Worsley et al., 2017), and hypervigilance is common following cyberstalking (Jansen van Rensburg, 2017; Worsley et al., 2017). Others commonly experienced symptoms of post-traumatic stress disorder; for example, victims/survivors described involuntarily reliving vivid memories of the ordeal and experiencing intrusive recollections (Worsley et al., 2017). One victim described over-medicating with prescribed anti-anxiety medication when reliving the trauma (Reveley, 2017). In a sample of emerging young women aged 19–29 years with lived experience of cyber-sexual violence, self-inflicted injuries or cybercide were viewed as the only options available to end their trauma (Pashang et al., 2019).

Given the public and pervasive nature of TFSV, victims/survivors experience a host of negative psychological impacts, which place them in need of professional services. Some victims/survivors received a mental health diagnosis and were prescribed antidepressants and/or anti-anxiety medication to relieve their symptoms (Bates, 2017; Reveley, 2017). Although some victims/survivors of TFSV sought help for their distress in the form of counselling (Davies et al., 2016), others felt too embarrassed to seek professional support: ‘Embarrassment has made me reluctant to involve the police’ and ‘I probably should have entered counselling earlier although I felt too ashamed’ (Worsley et al., 2017, p. 6).
Physical Health Outcomes

For some victims/survivors of IBSA, poor mental health contributed to, or resulted in, poor physical health, such as inadequate nutrition, weight loss, and/or neglect of general hygiene (Huber, 2020). Adolescent and adult victims/survivors of TFSV experienced disrupted sleep patterns, such as lack of sleep or sleeping for multiple days (Bates, 2017; Davies et al., 2016; Haron & Yusof, 2010; Huber, 2020; Jansen van Rensburg, 2017; Mandau, 2021). Victims/survivors of cyberstalking described experiencing nausea (Davies et al., 2016; Reveley, 2017; Worsley et al., 2017), headaches (Haron & Yusof, 2010), and/or weight changes, including both losses and gains (Davies et al., 2016; Reveley, 2017). Taken together, these findings indicate that victims’/survivors’ general physical health deteriorates following TFSV.
Interpersonal Consequences

Victims/survivors of TFSV reported that their ordeal damaged their relationships with others, including family members, friends, employers and colleagues (Huber, 2020; Reveley, 2017; Short et al., 2014; Worsley et al., 2017). For some victims/survivors, not being believed was identified as the key factor underpinning the dissolution of relationships (Reveley, 2017; Worsley et al., 2017). Some victims/survivors of TFSV found new relationships difficult to maintain, whilst others felt fearful about the prospect of fostering new relationships (Huber, 2020; Jansen van Rensburg, 2017). Following TFSV, victims/survivors reported a general loss of trust in others (Bates, 2017; Haron & Yusof, 2010; Huber, 2020; Pashang et al., 2019). In particular, following betrayal by someone they loved, many victims/survivors of IBSA changed from being very trusting individuals to hardly trusting anyone, with some struggling to allow others to become emotionally or physically close to them (Huber, 2020). For example, one victim/survivor’s ex-boyfriend was a former co-worker, which subsequently led her to mistrust colleagues in her new workplace:
At my new job now I don’t really want to talk to any men or anything ’cause like you just don’t know who’s capable of, like I didn’t think [ex-boyfriend from previous job] was capable of doing something like that. (Bates, 2017, p. 31)
Victims/survivors of TFSV also reported damage to their reputation, or fears of such damage (Bates, 2017; Jansen van Rensburg, 2017; McGlynn et al., 2021; Reveley, 2017; Short et al., 2014; Worsley et al., 2017). It was common for victims/survivors’ reputations to be damaged by accusations of sexual impropriety (Reveley, 2017). The dissemination of sexually explicit images can harm women’s reputations so severely that they may be forced to find a new job.
Social Consequences

TFSV has a direct impact on victims/survivors’ employment or education, as many victims/survivors found it difficult to concentrate following their ordeal (Haron & Yusof, 2010; Huber, 2020). Those who were in education found their grades were negatively impacted (Huber, 2020), whilst those in employment suffered loss of work (Short et al., 2014; Worsley et al., 2017). TFSV can also result in significant changes to victims/survivors’ daily lives. In one study, many victims/survivors made changes to their daily lives such as avoiding areas where the perpetrator lived, worked and socialised (Huber, 2020). Some victims/survivors of cyberstalking ‘quit the internet totally’ (Reveley, 2017; Worsley et al., 2017), whereas others changed their privacy settings or closed down their social media accounts (Haron & Yusof, 2010; Woodlock, 2017; Worsley et al., 2017). Victims/survivors of online sexual harassment stopped using anonymous social networking sites in order to avoid receiving sexual messages that were humiliating and insulting (Nova et al., 2019). Although changing a phone number or closing a social media account may seem like minor inconveniences, victims/survivors may become disconnected from their support networks as a consequence (Worsley et al., 2017). Some victims/survivors were even left feeling as though they had no choice other than to relocate (Woodlock, 2017; Worsley et al., 2017).
J. Worsley and G. Carter
Although victims/survivors withdrew from social media and relocated in an attempt to re-establish a sense of power and control, this often left them feeling socially isolated (Reveley, 2017). Following cyber-sexual violence, emerging young women isolated themselves by withdrawing from friends, families, social networks, employment, and education (Pashang et al., 2019). Social isolation was also common following IBSA, as many victims/survivors did not disclose their experiences to other people, particularly family members (Henry et al., 2020; Huber, 2020).
Self-Blame/Victim Blaming

Young people and adults shared how they felt they were to blame for what happened. Some described their victimisation as their own fault, perceiving the sending of a sexual image as a ‘mistake’ that could have been prevented. Many believed that they had ‘learned from’ their mistake and would not do this again in order to prevent further victimisation. Others perceived the non-consensual sharing of their images as the result of their own personal qualities, such as being ‘naive’ and ‘stupid’. In such cases, they identified strongly with being a ‘victim’ and felt that they only had themselves to blame, reflecting a loss of self-worth. This belief affected the extent to which individuals decided to seek help and support (Burkett, 2015; Mandau, 2021). For some, sharing their experiences with family and friends led to individuals being blamed for what happened to them. The frustration of being blamed by others and not feeling supported silenced individuals about their harassment, as they felt they could not speak out (Nova et al., 2019).
Coping Strategies

Individuals shared a range of coping strategies in light of their experiences. These included developing self-management or emotional resources to believe in themselves when others were telling lies about them (Ringrose et al., 2012), and negotiating online sexual communication by using humour or threats (Project deSHAME, 2017). Young people enlisted the help of their peers to reject the requests for sending
images, and to come up with legitimate excuses such as pretending their phone contract did not have enough credit to send an image or that they had a ‘boyfriend’ (Ringrose et al., 2012). Others reported that they had engaged in self-inflicted behaviours, addictions and cyber revenge in order to cope with the mental health implications of cyber-sexual violence, especially if they felt they had a limited or non-existent support system (Pashang et al., 2019). Victims/survivors also shared how they engaged in safety behaviours such as hiding from the perpetrator, changing their routes and telephone numbers, and staying at home. Coping strategies such as excessive checking, ruminating, hiding and suppressing thoughts led to a diminished quality of life (Reveley, 2017). One individual shared how they had dealt with their anger inwardly:

Luckily erm the anger that I had I coped with by turning in on myself … slapping myself round the face, banging my head on the wall, you know calling myself all the dirty names that I thought I deserved erm rather than going and taking them out on someone else. (Reveley, 2017, p. 103)
Some coping strategies were implemented immediately after experiencing abuse. For example, as individuals were unlikely to report what had happened, they consequently tried to ignore the situation or resolve it themselves. Some victims/survivors who felt silenced decided to delete or leave the digital platform through which the harassment happened (Nova et al., 2019), or to ‘block’ the perpetrator on social media (Project deSHAME, 2017). Others shared how they were in denial and engaged in self-medicating behaviours. However, some coping strategies were implemented gradually by victims/survivors as time passed; these mechanisms involved ‘active’ and ‘systematic’ structures such as seeking social support or counselling, and reporting the harassment. These were perceived to occur when victims/survivors felt powerless in dealing with their experience on their own (Bates, 2017; Thambo et al., 2019).
Support

Some individuals shared that they did not receive adequate support from justice agencies, law enforcement and professionals who they thought would be able to support them, creating a sense that key agencies were against them (McGlynn et al., 2021; Nova et al., 2019). Often individuals were not signposted by law enforcement agencies to other organisations that could help, and therefore felt helpless. Others felt that professionals, such as law enforcement, had not taken them seriously because their experience of abuse had not been physical in form. They felt that this response directly escalated the negative impact: ‘Even health professional friends washed their hands of me and so in that sense, colluded in the abuse of me’ (Worsley et al., 2017, p. 7). Some victims/survivors felt that increasing general awareness about cyberstalking and its negative emotional consequences would have led to better support from informal networks: ‘People having a better understanding of cyberstalking. My friends, family and employers all reacted very strongly and judged me’ (Worsley et al., 2017, p. 8).
Discussion

All forms of TFSV negatively affect people’s health and wellbeing. The findings of this review demonstrate that victim/survivor experiences of online sexual harassment, IBSA, and cyberstalking are remarkably similar in terms of the impacts each may have on health and wellbeing. Victims/survivors of TFSV experienced a host of negative impacts across many different areas of their lives. Poor mental health, breakdown of relationships, loss of freedom, loss of jobs, and damage to reputation were all reported. Across the studies, we observed how victims/survivors were likely to blame themselves for their experience of abuse. Although there was a constant threat of exposure to abuse, victims/survivors felt that they could adopt strategies in order to prevent it from happening again, which could be seen as a way to regain control over the situation. Others felt unable to speak out about their experience because their support
networks blamed them for what happened. Young people and adults also engaged in a range of coping strategies, some implemented immediately and others over a longer period, with some of these strategies resulting in negative health consequences. Individuals shared how the support they received from professionals and informal networks, such as family and friends, was not adequate and further exacerbated the negative impacts of their experiences. Where individuals felt that support from others was limited, some developed maladaptive coping strategies and were silenced.
Limitations and the Need for Further Research

Whilst to our knowledge this is the first review specifically examining the qualitative literature on how individuals perceive the impact of experiencing TFSV, we are mindful that our review had limitations. First, the searches were limited to a 20-year date range (2000–2020); however, this date range was deemed appropriate as we aimed to identify the most recent data. Second, the searches were limited to the English language, which may have meant that a range of studies were excluded. However, we were able to include studies conducted in a range of countries, including lower-middle income countries, although these were limited in number. Although there was a large body of qualitative research on specific forms of TFSV, such as cyberstalking, qualitative research was limited in relation to other forms, such as online sexual harassment, indicating an area for further exploration. Furthermore, most research studies examined specific types of TFSV in isolation. We did not locate any qualitative evidence exploring the impacts experienced by victims subjected to multiple forms of TFSV simultaneously. We also observed a distinct paucity of longitudinal qualitative research in this area. Longitudinal qualitative research exploring the impact of TFSV over time would provide richer insight into people’s experiences and perspectives, and may help identify mechanisms for improved health. Although females may be more likely to experience TFSV, we observed that the views of male victims/survivors were largely absent across
the studies we included. Moreover, the large majority of the studies we identified explored the experiences of young people or young adults. We recommend that future studies explore the experiences of male victims/survivors and older adults to examine how their experiences may be similar or different. When samples lack diversity, the circumstances of people’s sexual violence may not be thoroughly understood, as factors such as age, race, gender, ethnicity, sexuality and disability could mean that individuals experience abuse in different ways. Very few studies attempted to examine the distribution of impacts across subpopulations by gender, ethnicity, sexuality or other characteristics. We recommend that future research studies consider these observations in order to develop an evidence base that is more nuanced in terms of differential impacts. One of the key findings from our review was that victims/survivors identified ways in which they could have been better supported. We suggest that the voices of a wide representation of victims/survivors, as well as of those who formally and informally support them, should inform what support looks like. Although it was beyond the scope of our review, we recommend that future research examines the perspectives of individuals who support victims/survivors formally and informally in order to identify any barriers or challenges they face in responding to disclosures and help seeking.
Conclusion

In our review, we note how qualitative studies have started to explore how victims/survivors cope in the aftermath of TFSV and how they perceive the support they do or do not receive. It is clear from the studies that knowing how best to support victims/survivors is still an area that needs further exploration. For some, formal or informal support networks perpetuating victim-blaming myths may lead to secondary victimisation, which may have long-term negative health and social consequences. Disclosure or reaching out to family/friends or professionals creates key opportunities for early intervention, essential to
preventing long-term negative health sequelae. In light of this, we recommend raising awareness of TFSV, developing interventions that support victims/survivors, and training formal and informal support networks. Critically, we believe that this should be informed by the voices of victims/survivors and shaped by their experiences of accessibility and perceptions of support received or not received. Furthermore, the voices of individuals who provide formal or informal support to victims/survivors should also inform how such networks are equipped to provide support that is empowering to victims/survivors. The voices of victims/survivors are fundamental to understanding how individuals experience TFSV. Whilst all forms of TFSV can negatively affect people’s health and wellbeing, victims/survivors have developed a range of coping strategies in response to their experiences. However, the experiences that currently shape our understanding are limited and need to be more representative of individuals who experience TFSV (see Henry et al., 2020). The findings from our review suggest that there are barriers to and challenges in accessing support. Drawing on the experiences of victims/survivors, friends and family, as well as professionals, about receiving or providing support will be critical to informing interventions and training so that victims/survivors can be supported in ways that are meaningful to them.

Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this article.
Appendix 1: Included studies (authors and date; title; publication outlet)

Bates (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn. Feminist Criminology.
Burkett (2015). Sex(t) talk: A qualitative analysis of young adults’ negotiations of the pleasures and perils of sexting. Sexuality & Culture.
Davies et al. (2016). Self-reports of adverse health effects associated with cyberstalking and cyberharassment: A thematic analysis of victims’ lived experiences. Southwestern Oklahoma State University Digital Commons, US.
Haron and Yusof (2010). Cyber stalking: The social impact of social networking technology. International Conference on Education and Management Technology.
Henry et al. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge Critical Studies in Crime, Diversity and Criminal Justice.
Huber (2020). Women, image based sexual abuse and the pursuit of justice. Liverpool John Moores University, UK.
Jansen van Rensburg (2017). Unwanted attention: The psychological impact of cyberstalking on its survivors. Journal of Psychology in Africa.
Mandau (2021). “Snaps”, “screenshots” and self-blame: A qualitative study of image-based sexual abuse victimisation among adolescent Danish girls. Journal of Children and Media.
McGlynn et al. (2021). ‘It’s torture for the soul’: The harms of image-based sexual abuse. Social & Legal Studies.
Nova et al. (2019). Online sexual harassment over anonymous social media in Bangladesh. Proceedings of the Tenth International Conference on Information and Communication Technologies and Development.
Pashang et al. (2019). The mental health impact of cyber-sexual violence. International Journal of Mental Health and Addiction.
Project deSHAME (2017). Young people’s experiences of online sexual harassment. Childnet.
Reveley (2017). “It’s been devastating”: An interpretative phenomenological analysis of the experience of being cyberstalked. University of East London, UK.
Ringrose et al. (2012). A qualitative study of children, young people and ‘sexting’: A report prepared for the NSPCC. The London School of Economics and Political Science, UK.
Short et al. (2014). The impact of cyberstalking: The lived experience – a thematic analysis. Studies in Health Technology and Informatics.
Thambo et al. (2019). Resilience in women: Strategies female students employ to deal with online sexual harassment. African Journal of Gender, Society & Development.
Woodlock (2017). The abuse of technology in domestic violence and stalking. Violence Against Women.
Worsley et al. (2017). Victims’ voices: Understanding the emotional impact of cyberstalking and individuals’ coping responses. SAGE Open.
References

Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42.
Brown, S. J., Khasteganan, N., Carter, G. J., Brown, K., Caswell, R., Howarth, E., Feder, G., & O’Doherty, L. (2020). Survivor, family and professional perspectives of psychosocial interventions for sexual abuse and violence: A qualitative evidence synthesis (Protocol). Cochrane Database of Systematic Reviews. https://doi.org/10.1002/14651858.CD013648
Burkett, M. (2015). Sex(t) talk: A qualitative analysis of young adults’ negotiations of the pleasures and perils of sexting. Sexuality & Culture, 19(4), 835–863.
Davies, E. L., Clark, J., & Roden, A. L. (2016). Self-reports of adverse health effects associated with cyberstalking and cyberharassment: A thematic analysis of victims’ lived experiences. Southwestern Oklahoma State University Digital Commons.
Haron, H., & Yusof, F. B. M. (2010). Cyber stalking: The social impact of social networking technology. In International Conference on Education and Management Technology (pp. 237–241). https://doi.org/10.1109/ICEMT.2010.5657665
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge.
Henry, N., & Powell, A. (2018). Technology-facilitated sexual violence: A literature review of empirical research. Trauma, Violence, & Abuse, 19(2), 195–208.
Huber, A. (2020). Women, image based sexual abuse and the pursuit of justice. Liverpool John Moores University.
Jansen van Rensburg, S. K. (2017). Unwanted attention: The psychological impact of cyberstalking on its survivors. Journal of Psychology in Africa, 27(3), 273–276.
Mandau, M. B. H. (2021). “Snaps”, “screenshots”, and self-blame: A qualitative study of image-based sexual abuse victimization among adolescent Danish girls. Journal of Children and Media, 15(3), 431–447.
McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2021). ‘It’s torture for the soul’: The harms of image-based sexual abuse. Social & Legal Studies, 30(4), 541–562.
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37(3), 534–561.
Nissim-Sabat, M. (2009). Neither victim nor survivor: Thinking toward a new humanity. Lexington Books.
Nova, F. F., Rifat, M. R., Saha, P., Ahmed, S. I., & Guha, S. (2019, January). Online sexual harassment over anonymous social media in Bangladesh. In Proceedings of the Tenth International Conference on Information and Communication Technologies and Development (pp. 1–12).
Pashang, S., Khanlou, N., & Clarke, J. (2019). The mental health impact of cyber sexual violence on youth identity. International Journal of Mental Health and Addiction, 17(5), 1119–1131.
Pew Research Center. (2014). Online harassment. Retrieved from: https://www.pewresearch.org/internet/2014/10/22/online-harassment/
Powell, A., & Henry, N. (2017). Sexual violence in a digital age. Palgrave Macmillan.
Powell, A., & Henry, N. (2019). Technology-facilitated sexual violence victimization: Results from an online survey of Australian adults. Journal of Interpersonal Violence, 34(17), 3637–3665.
Project deSHAME. (2017). Young people’s experiences of online sexual harassment. Retrieved from: https://www.childnet.com/ufiles/Project_deSHAME_Dec_2017_Report.pdf
Reveley, K. (2017). “It’s been devastating”: An interpretative phenomenological analysis of the experience of being cyberstalked. University of East London.
Ringrose, J., Gill, R., Livingstone, S., & Harvey, L. (2012). A qualitative study of children, young people and “sexting”: A report prepared for the NSPCC. The London School of Economics and Political Science.
Schott, R. M. (2012). Review essay on Neither victim nor survivor: Thinking toward a new humanity, by Marilyn Nissim-Sabat; Theorizing sexual violence, edited by Renée J. Heberle and Victoria Grace. Hypatia: A Journal of Feminist Philosophy, 27, 929–935. https://doi.org/10.1111/j.1527-2001.2012.01308.x
Short, E., Linford, S., Wheatcroft, J. M., & Maple, C. (2014). The impact of cyberstalking: The lived experience – a thematic analysis. Studies in Health Technology and Informatics, 199, 133–137.
Thambo, S., Tshifhumulo, R., Amaechi, K. O., & Mabale, D. (2019). Resilience in women: Strategies female students employ to deal with online sexual harassment. African Journal of Gender, Society & Development, 8(2), 91.
Woodlock, D. (2017). The abuse of technology in domestic violence and stalking. Violence Against Women, 23(5), 584–602.
World Health Organization. (2012). Understanding and addressing violence against women. World Health Organization.
World Health Organization. (2013). Global and regional estimates of violence against women: Prevalence and health effects of intimate partner violence and non-partner sexual violence. World Health Organization.
Worsley, J. D., Wheatcroft, J. M., Short, E., & Corcoran, R. (2017). Victims’ voices: Understanding the emotional impact of cyberstalking and individuals’ coping responses. SAGE Open, 7(2). https://doi.org/10.1177/2158244017710292
14 ‘It’s like Mental Rape I Guess’: Young New Zealanders’ Responses to Image-Based Sexual Abuse

Claire Meehan
C. Meehan, University of Auckland, Auckland, New Zealand. E-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. In A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_14

Introduction

While the definition of sexting varies in the literature, the term is often used broadly and inclusively to encompass sexually explicit texts and images and to explore how they are taken and shared in peer-networked activity (Lee et al., 2013; Ringrose et al., 2012). Sexting is underpinned by a set of informal norms and expectations around consent and privacy (Hasinoff & Shepherd, 2014). Young people report sexting for various reasons, including flirting, exploring their sexuality and their sexual identities, or as a joke or bonding ritual (Van Ouytsel et al., 2017). A challenge of using the term sexting is its overarching reach to include consensual, coercive and non-consensual sharing of intimate images or texts. There are concerns that young women are pressured or coerced to create and send intimate images. For some young people, sexting
is considered a gift (Lee et al., 2013) or proof of love (Van Ouytsel et al., 2017) to maintain a relationship. Equally problematic is the non-consensual distribution (or threat of distribution) of images, which is distressing and harmful (Setty, 2019). Image-based sexual abuse is the ‘non-consensual creation and/or distribution of private sexual images’ (McGlynn & Rackley, 2017, p. 534; see also Henry et al., 2020). While most intimate images are not forwarded without consent, those that are have the potential to go viral (McGlynn et al., 2017). As well as exploiting a person’s sexual identity and violating their sexual autonomy, victims often experience online abuse in addition to experiencing image-based sexual abuse (McGlynn et al., 2017). This manifests as sexual threats, offensive comments about the victim’s appearance, body, sexuality and sexual agency (McGlynn et al., 2017). Additional harms to the victim can include trauma, depression, bullying, low self-esteem, self-harm and even suicide (Angelides, 2013). Gendered assumptions underlie discourses around young people’s sexuality and sexual expression. Young women are often required to balance potential risks and harms of sexting with starting and maintaining relationships (Burkett, 2015). If a young woman’s images are forwarded without her consent, she is often shamed and experiences loss of social status. In contrast, young men who have their images shared non-consensually are more likely to be able to ‘laugh it off’ (Salter et al., 2013) and are less likely to be shamed or bullied (see also Henry et al., 2020). This is in part due to the meanings ascribed to young people’s sexting practices. Young women’s images are often understood one-dimensionally with a focus on sexualisation (Albury et al., 2013).
Young men’s images are characterised with the benefit of context as well as intent; for example, images may be sexual in nature, but they may also be understood as funny or ‘for a laugh’ (Salter et al., 2013). Understanding youth sexting culture in this way upholds and reinforces norms that frame male sexuality as normal, legitimate and desirable (Dobson & Ringrose, 2015). In doing so, young women are understood as passive ‘reactors’ to young men’s requests for sexual images for their sexual pleasure, rather than as ‘agents’ (Thomas, 2018) capable of sexual expression, with a right to sexuality, their own sexual pleasure and bodily autonomy.
Public discussions surrounding youth sexting usually conflate consensual and non-consensual sexting practices. These conversations are routinely shaped by gendered assumptions about the risks and harms that young people are likely to encounter (Hasinoff, 2015). Sexting is considered inherently perilous for young women, who are perceived as vulnerable to young men’s hormonally driven requests and at risk of having their intimate images shared further (Döring, 2014). Blaming, shaming and bullying of young women who have had their images shared is presented as inevitable (Lee & Crofts, 2015). Public discourse feeds into abstinence-focused educational responses to youth sexting, which seek to warn young people of dangers that are portrayed as certain (Albury, 2017; Hasinoff, 2015). In this chapter, I present findings from qualitative small group interviews with young people and one-to-one interviews with teachers, exploring their perceptions and practices around sexting and image-based sexual abuse. The research focused on how both young people and adults understood young women’s sexual, non-sexual and sexualised images, including their perceptions of responsibility, violations of privacy, abuse and blame. I argue that in order to address harmful sharing practices, peer and school responses must go beyond a narrow abstinence-based approach, which solely focuses on privacy violations of sexual images at the expense of addressing the nuance, sociocultural meanings, norms and contexts in which image sharing is understood. The chapter begins by giving an overview of the qualitative data collection before exploring young people’s concerns around privacy and responsibility for image-based sexual abuse. Following this, I examine the privacy-as-protection-centred rhetoric around teen sexting and its implications.
Finally, I argue that sexting is a youth cultural phenomenon which could provide opportunities for meaningful change to the current provision of sex education, but that to do so we need to go beyond a myopic focus on privacy alone.
Methodology

Small friendship group interviews were conducted with 106 self-selecting young people, aged 12–16, in three participating schools in New Zealand. Young people were recruited from one rural co-ed and two urban single-sex schools (one male, one female). In the sample schools, the study was advertised to students, who self-selected if they were interested in participating. Small friendship group interviews of established friends in the same year group offered many of the benefits of a focus group while overcoming some of the limitations, such as the participants not knowing or liking each other. It was hoped that a comfortable small group setting with established friends would be more conducive to providing deeper insights and respecting confidentiality. I provided refreshments, and the groups began with a scenario to open discussion and then focused on loose topic areas. The participants and I were the only people present at the group interviews, which were held in a classroom at each school. Six individual semi-structured face-to-face interviews were undertaken with teachers from the three sample schools (two from each school). Data were recorded and transcribed by a professional transcriber, and I verified the transcripts. Drawing on Braun and Clarke’s (2006) thematic analysis framework, data were coded and analysed using a latent approach, which allowed me to move away from the explicit and obvious content of the data to identify initial organising codes. Using an interactionist approach, I considered how shared meanings, reflexive interpretations and interpersonal experiences shaped young people’s and teachers’ understandings of image-based sexual abuse (Blumer, 1969) to generate sub-codes. Common themes were then identified, reviewed, further defined and situated within the scholarly literature. Ethical approval was obtained from the University of Auckland Human Participants Ethics Committee on 4 May 2016, reference: 017039.
For the purposes of this chapter, which examines perceptions and practices around sexting and image-based sexual abuse, I coded the data for (1) privacy, responsibility and abuse; (2) protection from sexting; and (3) beyond privacy. Excerpts below are taken from young people, who are referred to by a pseudonym such
as ‘Mary’ throughout the chapter. The teachers are referred to as ‘teacher, school A, B or C’.
Privacy, Responsibility, Abuse

The majority of young people were most concerned about the potential for privacy violations if their sexual images were forwarded beyond the intended recipient. Privacy was understood in terms of ‘something that’s yours and you can control who sees. If it gets out of control, then it’s like the extendedness of it. Like it’s gone too far, and you can never get it back. It’ll never just be yours again’ (Mary). Steeves and Regan (2014) suggest that privacy is a set of social practices which enable young people to navigate the boundaries between themselves and others in their social interactions, dependent on social meanings and different social contexts. For image sharing, most young people agreed that a certain level of privacy, that is, between the sender and original recipient, was assumed but not guaranteed. The cause of most anxiety was managing the risks associated with privacy. This management of privacy was juxtaposed with a system whereby young men achieved social status by sharing, tagging and collecting intimate images of young women: ‘you do get sent tit pics from your friends. Sometimes they’re new and others you’ve seen before’ (James). This homosocial exchange was used to verify access to particular girls’ bodies, and a system of ratings (Ringrose & Harvey, 2015) determined the value of the image: ‘you want one of a girl that no one else has, then everyone else wants it too’ (John). Numerous young men and women viewed girls in leaked images as ‘stupid’ or ‘slutty’. They were considered passive pre-victims (Dobson, 2015) who ‘should’ve said no’. There was little analysis of these young women’s agentic desire to take and share the images for sexual pleasure, expression and exploration in most of the friendship groups. All of the younger groups (aged 12, 13 and 14 years) considered these young women to be ‘sluts’ and ‘just wanting attention’. There was a little more variety in most of the older groups (aged 15 and 16 years).
Discussion ranged across having fun, hooking up, getting into or staying in relationships, being sexy and being funny. Most of the young people
C. Meehan
agreed that boys got most of the pleasure from the images themselves, whereas the girls may not get pleasure from the images per se, but they would ‘later, in real life’ (Martha) when they met up to flirt, make out or have sex. Even in these groups, the conversations organically circled back to those in long-term, long-distance relationships who might share images out of necessity, privileging both the nature of a monogamous relationship and the practicalities of not being able to have offline sexual contact as often as they might want: ‘I can totally understand how it’s risky but if you were in a relationship with someone and they had to be in a different town or something, obviously you’re going to send nudes to keep it fresh’ (Sarah). While this fieldwork was conducted pre-COVID-19, given the current lockdown restrictions that are likely to continue for some time, this may be the only way to share intimacy with partners or crushes.

Responsibility for privacy was accepted as that of the original image creator. This was the case if the image was self-taken, if the image was taken of a young woman by someone else (usually a boyfriend) or if the image was taken while she was drunk or high. As one participant explained:

My personal opinion towards nudes is that if you send it you are the one to hold to blame because you sent it or let it happen in the first place. Even if you’re drunk or high, are you really that drunk or high that you don’t know? I know my parents, one of many other things they tell us, one of the main things is you should always leave things for a guy’s imagination. (Rebecca)
In each case, the victim was blamed as they ‘should’ve known the risks’. Even if the image was taken while the victim was intoxicated, they were perceived to have ‘let’ the perpetrator ‘use’ them, and they were responsibilised for their actions (Miller, 2016). When an image was taken and/or forwarded without consent, young people understood this as a breach of someone’s privacy rather than a sexual assault, regardless of the context in which the image was taken and/or shared. Research on youth sexting usually encompasses explicit texts and images, as outlined in the definition provided above. These are often
14 ‘It’s like Mental Rape I Guess’: Young New Zealanders’ …
nude, semi-nude or suggestive images, or sometimes text messages. There is a lot of nuance in these images with regard to how they were elicited and received, that is, consensually, coercively or non-consensually, as well as what happens to the images once they are received. For example, several of the young men in this study spoke about sending ‘ball swing pictures’ (Jonny) or ‘pictures of your dick all scrunched up to your bros’ (James). This was considered a normal, homosocial bonding tool. These young men joked that ‘if your bro hasn’t seen your dick, is he even your bro?’ (James). While these images were usually a close-up of the sender’s genitals, they were interpreted as being humorous and non-sexual. While it was considered bad form to share these images outside of the friendship group, doing so would not cause the original sender any reputational damage, and there was usually an element of deniability due to the lack of identifiers in the image.

At the other end of the spectrum were images of a non-sexual nature that were taken consensually, then sexualised and shared non-consensually. For example, in this research, one young woman (aged 13) had her photo taken at the park while eating an ice cream. She had ice cream on her face. The ice cream was cropped from the image and the caption ‘[her name] sucks dick’ was added so the ice cream would look like semen. The image was then circulated around the victim’s school and the town where she lived. The victim was wearing her school uniform in the picture and lived in a small town, so she was easily identified. This led to sexual harassment, threats and bullying. Ringrose and Harvey (2015) explored how images of young women’s breasts represented a crucial example of a highly sexualised body object which was collectivised to shame girls and women. They drew on Coleman (2009), who argued that ‘images do not reflect or represent bodies but produce the ways in which it is possible for bodies to become’ (p. 207).
This is an example of an image representing not the young woman’s face but rather her potential as someone with the ability to provide oral sex. While she had not taken a sexual image, nor indeed had one taken of her, arguably she is a victim of image-based sexual abuse. As McGlynn and Rackley (2017) note, such images have the potential to go viral, exploit a person’s sexual identity and violate their sexual autonomy, exposing the victim to online abuse, as happened to this young victim. The consequences
of this happening to young women were focused on shame and being labelled a slut:

Yeah, it’s quite disgusting the amount of shame that gets thrown at people who are still virgins and they get slut-shamed; it’s really ridiculous. Like, I have a really close friend who’s just moved here from [town]. She’s like one of the nicest people I’ve ever met; she’s really lovely to everybody she meets, and she still gets slut-shamed. (Joanna)
As I have discussed elsewhere (Meehan, 2021), one 14-year-old victim had her images printed and taped to the back of chairs in assembly. While this was one of the most memorable incidents, there have been many other discussions of note. Young women commonly reported feeling ‘disgusting, dirty, used and ashamed’ by image-based sexual abuse: ‘it’s like mental rape I guess’ (Sarah). These incidents are compounded by a shame-based schooling system that seeks to ‘punish’ both the creator of the image and the person who has shared it non-consensually. As Teacher, School A observed:

Often when this happens, we are obligated to punish both the girl who took the picture in the first place and whomever distributed it. I realise this isn’t always fair but if we only punish the distributor, we face all sorts of grief from that student’s parents – why hasn’t she been punished when she sent this picture to him and so forth. It’s even harder when he is at a neighbouring school.
Young women were most adversely affected through this process. Young men tended to be more concerned with being socially embarrassed rather than shamed: ‘they’ll pass it off as a joke. They won’t be really judged harshly’ (Jonny). When young men had their images leaked, usually a torso or dick pic, these would be explained away as ‘doing it to get one [an image] back’ (Sam), ‘wasn’t thinking straight’ (Sean) or ‘thinking with my dick’ (Max). This speaks to the broader delegitimisation of young women’s sexuality. Sexting was conceptualised as a male pleasure-based practice, in which young women were required to send images to ‘please the boys’ rather than derive sexual gratification
themselves. They were understood to be passive contributors in the pleasure process. For young women, the impact was long-lasting, with some having to move schools (see also McGlynn et al., 2021 for a discussion of the social rupture of image-based sexual abuse). For young men, the teasing usually lasted ‘about a month or so’ (Jonny). It was clear that these conceptualisations of risk and consequences both reflected and reinforced established gender norms (Setty, 2020).
Protection from Sexting

Much of the rhetoric around teen sexting has an emphasis on protection. Most educational policies and responses involving sexting seek to protect young people, particularly young women, from the ‘dangers of sexting’ (Albury, 2017). The central tenet of protection responsibilises young women for ensuring their (sexual) privacy is not violated. Unpacking this further, it requires teen women to guard their bodies against the ‘impurity’ of sexuality or sexual expression. If we use this as a starting point, the results are twofold in terms of consent and thinking beyond a narrow focus on sexting, privacy and protection.

If we consider consent only in terms of protecting privacy and end up with ‘just say no’, the approach is much too simplistic. Some young women reported feeling pressured to send sexual images after a period of flirting and/or receiving an (often unsolicited) image from a boy. They spoke about consent as a permission-seeking exercise, yet they discussed concerns around the process and the dilemmas they faced over whether they wanted to, or should, reciprocate with sexual images. This is at odds with the teachers’ observations of their female students: ‘I just don’t think these girls understand what will happen with these photos and how it will affect their futures. They’re all about the immediate, about the getting the boy’ (teacher, school B). In this sense, the teachers inadvertently positioned young women as passive ‘reactors’ to young men’s desires (Tolman et al., 2015) rather than sexual agents in their own right. This speaks to the lack of resources and tools teachers may possess to equip young women to navigate the sociocultural norms that underpin youth
sexting culture (Thomas, 2018). This has an obvious flow-on effect on how schools respond to victims of image-based sexual abuse. By denying young women legitimate sexuality, anti-sexting messages create a context where young women are expected to manage the risks of sexting as well as ‘just say no’ to sexting. This does not facilitate the unravelling of consensual, coercive, non-consensual, non-sexual or sexualised images. Abstinence-focused discourses increase the risks young women face when sexting, as they have to navigate sexual agency, consent and gender-based violence (Hasinoff, 2015). Image-based sexual abuse is considered inevitable (Buiten, 2020) and up to individuals to prevent by not creating and sharing intimate images. By placing the management of well-being on individuals, this individualised sense of risk translates into the apportioning of blame, and victim-blaming (Powell et al., 2018; Ravn et al., 2019). In doing so, the complexity of young people’s relations, as well as the importance of other kinds of affective responses including shame and desire, are omitted. This limits the tools both young men and women are able to build upon in order to navigate, respect and understand informed consent in sexual relations.
Beyond Privacy

The educational context in which young people send intimate images is characterised in terms of risk and the inevitability of harm and shame (Dobson & Ringrose, 2015), particularly for young women. A common theme throughout this research and the wider literature is the focus on responsibility for protection against violation of privacy as the key means of avoiding harm. Sex education teaches young women that the best way to protect themselves sexually from young men is to remain abstinent. This approach upholds and reinforces established gendered norms in a number of ways. First, it suggests that the risk of harm or of having an image shared non-consensually is inevitable, even though research suggests that the majority of intimate images are not shared beyond the intended recipient (Salter et al., 2013). In those cases where images are forwarded, a focus on privacy violation fails to hold the perpetrators responsible,
instead placing blame on the victim by further responsibilising her for taking the image in the first place: ‘what did she think would happen to these pictures? We’ve seen it time and time again. They’ve seen it time and time again’ (teacher, school B). This approach completely disregards the complexities of the context in which consent happened (if at all) for the original image.

Both young people and teachers discussed the pressure on young women to sext, ‘even if they didn’t really want to’ (Clara), due to the sexual desires of young men being perceived as driven by their uncontrollable hormones: ‘boys can’t control when they get a boner and things like that and so in that sense maybe that gives them an excuse’ (Clara). This, alongside young women’s difficulties in being able to ‘just say no’, is naturalised and normalised (Setty, 2020). Rather than address the intricacies of consent or the harms experienced by victims of image-based sexual abuse, the current provision merely seeks to further police young women’s sexual expression. In doing so, the highly gendered consequences they face should they be a victim of image-based sexual abuse (McGlynn et al., 2017) go unchallenged. If anything, these consequences are further compounded by a disciplinary approach which punishes both the victim and the perpetrator. In these instances, outside factors such as parents and school boards make it difficult for schools to go beyond this approach. If the original image was non-sexual and was subsequently sexualised and then shared without the victim’s consent, this adds an additional layer of difficulty to an already complicated situation. A focus on sexual images to the exclusion of other images, including an understanding of how images of young women are produced, received, understood, consumed and shared, is limiting.
Young men can share an image of their scrunched-up penis with their friends and the image, even though it is a picture of their genitals, can be understood as non-sexual and ‘for a laugh’, as part of a homosocial bonding activity. Young women do not have that luxury, as is evident with the ice cream image described above. Addressing young women’s images without the nuance afforded to young men means further privileging male sexuality. All of these outcomes effectively de-legitimise young women’s right to sexual and
bodily expressions, experiences and pleasure. Instead, they fortify young women’s role as passive sexual givers. Sexting is a youth cultural phenomenon which could provide opportunities for meaningful change to the current provision of sex education. A focus on responsibility for protection from violation of privacy comes at the expense of the rights to sexual expression, enjoyment and sexuality, as well as the informed consent that goes alongside them. Young women are often denied legitimate sexual expression, being strongly encouraged to ‘just say no’. Moving forward, there is space for a conversation about sexting rights as well as responsibilities (Albury, 2017), even if this causes initial discomfort because it goes against the status quo.
Conclusion

The informal norms which govern young people’s digital sexual behaviours are strengthened by gendered double standards. These norms disadvantage young women at all stages of the process, resulting in gendered harms and victim-blaming. Most young people in this study were concerned about the potential for privacy violations if their sexual images were forwarded beyond the intended recipient. Young men and women viewed girls in leaked images as ‘stupid’ or ‘slutty’. Young women are expected to be passive pre-victims who ‘should’ve said no’. Responsibility for privacy was accepted as that of the original image creator by both young people and teachers.

Much of the rhetoric around teen sexting has an emphasis on protection of privacy. Sex education teaches young women that the best way to protect themselves from inevitable privacy violations by young men is to remain abstinent, to not send sexual images of their bodies. Advocating a privacy-focused approach based solely on young women’s abstinence limits our attention to sexual images, while at the same time conceptualising women’s images one-dimensionally, as sexual. Conversations around sexting and image-based sexual abuse need to go beyond sexual images to include non-sexual images that have been taken consensually, sexualised and shared without consent. We need to rethink how we understand, share and respond to young women’s images
more broadly and inclusively, with the same distinction and range of descriptors as young men’s, rather than continuing to privilege male sexuality through the active–passive binary. The current approach reinforces entrenched gender norms by de-legitimising young women’s right to sexual and bodily expression and pleasure and reinforcing their role as ‘reactive’ sexual beings. In doing so, it fails to hold those who perpetrate image-based sexual abuse to account, nor does it address the social, emotional and physical fall-out for the victim, including shame and blame.

Acknowledgements The author(s) received no financial support for the research, authorship and/or publication of this article.
References

Albury, K. (2017). Just because it’s public doesn’t mean it’s any of your business: Adults’ and children’s sexual rights in digitally mediated spaces. New Media and Society, 19(5), 713–725.
Albury, K., Crawford, K., Byron, P., & Mathews, B. (2013). Young people and sexting in Australia: Ethics, representation and the law. Final report. Australian Research Centre for Creative Industries and Innovation, University of New South Wales.
Angelides, S. (2013). ‘Technology, hormones, and stupidity’: The affective politics of teenage sexting. Sexualities, 16(5–6), 665–689.
Blumer, H. (1969). The methodological position of symbolic interactionism. In Symbolic interactionism: Perspective and method (pp. 1–60).
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Buiten, D. (2020). It’s ‘vile’ but is it violence? A case study analysis of news media representations of non-consensual sexual image-sharing. Feminist Media Studies, 20(8), 1177–1194.
Burkett, M. (2015). Sex(t) talk: A qualitative analysis of young adults’ negotiations of the pleasures and perils of sexting. Sexuality and Culture, 19(4), 835–863.
Coleman, R. (2009). The becoming of bodies: Girls, images, experience. Manchester University Press.
Dobson, A. S. (2015). Girls, sexting, and gender politics. In Postfeminist digital cultures (pp. 77–99). Palgrave Macmillan.
Dobson, A. S., & Ringrose, J. (2015). Sext education: Pedagogies of risk and shame in the schoolyards of Tagged and Exposed. Sex Education, 16(1), 8–21.
Döring, N. (2014). Consensual sexting among adolescents: Risk prevention through abstinence education or safer sexting? Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 8(1), article 9.
Hasinoff, A. A. (2015). Sexting panic: Rethinking criminalization, privacy, and consent. University of Illinois Press.
Hasinoff, A. A., & Shepherd, T. (2014). Sexting in context: Privacy norms and expectations. International Journal of Communication, 8, 2932–2951.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge.
Lee, M., & Crofts, T. (2015). Gender, pressure, coercion and pleasure: Untangling motivations for sexting between young people. British Journal of Criminology, 55(3), 454–473.
Lee, M., Crofts, T., Salter, M., Milivojevic, S., & McGovern, A. (2013). ‘Let’s get sexting’: Risk, power, sex and criminalisation in the moral domain. International Journal for Crime, Justice and Social Democracy, 2(1), 35–49.
McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2021). ‘It’s torture for the soul’: The harms of image-based sexual abuse. Social & Legal Studies, 30(4), 541–562.
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37, 1–17.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond ‘revenge porn’: The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46.
Meehan, C. (2021). ‘I guess girls can be more emotional’: Exploring the complexities of sextual consent with young people. Sexualities. Advance online publication, 27 March. https://doi.org/10.1177/1363460721999275
Miller, S. A. (2016). ‘How you bully a girl’: Sexual drama and the negotiation of gendered sexuality in high school. Gender & Society, 30(5), 721–774.
Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse. In W. DeKeseredy & M. Dragiewicz (Eds.), Routledge handbook of critical criminology (2nd ed., pp. 305–315). Routledge.
Ravn, S., Coffey, J., & Roberts, S. (2019). The currency of images: Risk, value and gendered power dynamics in young men’s accounts of sexting. Feminist Media Studies, 21(2), 315–331.
Ringrose, J., & Harvey, L. (2015). Boobs, back-off, six packs and bits: Mediated body parts, gendered reward, and sexual shame in teens’ sexting images. Continuum, 29(2), 205–217.
Ringrose, J., Gill, R., Livingstone, S., & Harvey, L. (2012). A qualitative study of children, young people and ‘sexting’: A report prepared for the NSPCC. NSPCC.
Salter, M., Crofts, T., & Lee, M. (2013). Beyond criminalisation and responsibilisation: Sexting, gender and young people. Current Issues in Criminal Justice, 24(3), 301–316.
Setty, E. (2019). Meanings of bodily and sexual expression in youth sexting culture: Young women’s negotiation of gendered risks and harms. Sex Roles, 80, 586–606.
Setty, E. (2020). ‘Confident’ and ‘hot’ or ‘desperate’ and ‘cowardly’? Meanings of young men’s sexting practices in youth sexting culture. Journal of Youth Studies, 23(5), 561–577.
Steeves, V., & Regan, P. (2014). Young people online and the social value of privacy. Journal of Information, Communication and Ethics in Society, 12(4), 298–313.
Thomas, S. E. (2018). ‘What should I do?’: Young women’s reported dilemmas with nude photographs. Sexuality Research & Social Policy, 15, 192–207.
Tolman, D. L., Anderson, S. M., & Belmonte, K. (2015). Mobilizing metaphor: Considering complexities, contradictions and contexts in adolescent girls’ and young women’s sexual agency. Sex Roles, 73(7), 298–310.
Van Ouytsel, J., Van Gool, E., Walrave, M., Ponnet, K., & Peeters, E. (2017). Sexting: Adolescents’ perceptions of the applications used for, motives for, and consequences of sexting. Journal of Youth Studies, 20(4), 446–470.
15 Image-Based Sexual Abuse: An LGBTQ+ Perspective

Ronnie Meechan-Rogers, Caroline Bradbury Jones, and Nicola Ward
R. Meechan-Rogers (B) · C. B. Jones · N. Ward
University of Birmingham, Birmingham, UK

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_15

Introduction

There is an established body of evidence that demonstrates how technology has facilitated the perpetration of Image-Based Sexual Abuse (IBSA), with camera-enabled smartphones, camera-enabled computers and connection to the internet now commonplace (Crimmins & Siegfried-Speller, 2017; Hayes & Dragiewicz, 2018; Henry et al., 2021; Langlois & Slane, 2017; Wood, 2011). This advancement in technology
has revolutionised the way businesses operate, how education and healthcare are delivered and, most notably, how we communicate with each other in a digital age. However, instant access to the internet, combined with social media platforms and camera-enabled devices, has enabled a pernicious range of behaviours. These behaviours include the taking, making and distribution of personal, intimate images/videos; the modification of images/videos to replace the person in the original with someone else, known as deepfakes; and threats to share intimate images of an individual without their consent: a phenomenon that is now generally conceptualised as IBSA (McGlynn et al., 2017; McGlynn & Rackley, 2017; Waldman, 2019).

Until recently, the term IBSA has been used interchangeably with ‘revenge porn’; however, we suggest that the latter term is outdated, one-dimensional and problematic. This is because the label implies that images and/or videos have been shared solely as an act of revenge, or that the victim/survivor has engaged in a pornographic act, which is not always the case (Beyens & Lievens, 2016; Henry & Powell, 2015, 2016, 2017; McGlynn et al., 2017; McGlynn & Rackley, 2017; Patella-Ray, 2018). Therefore, we adopt the term IBSA throughout this chapter. We believe the definition of IBSA is more inclusive of a wider and more diverse set of behaviours, including distribution of, or threats to distribute, intimate images, irrespective of the motivation of the perpetrator/s.

This chapter seeks to explore IBSA within LGBTQ+ populations. We share the preliminary key findings from a UK-based international study exploring IBSA and offer recommendations for the further exploration needed to understand the harmful impact that IBSA is having on LGBTQ+ communities.
Prevalence of IBSA Amongst Individuals Who Identify as LGBTQ+

Over the past decade, there has been a growing body of evidence suggesting that LGBTQ+ individuals and other minority groups report higher levels of IBSA when compared to heterosexual peers or those from non-minority groups (Powell et al., 2019, 2020). Lenhart
et al. (2016) explored non-consensual image sharing in the United States (US) and identified that one in 25 participants had been victims of IBSA. However, those who identified as LGBTQ+ reported higher levels of IBSA when compared to their heterosexual peers. With the exception of peer-reviewed research exploring sexting (which in itself is not IBSA), this was one of the first publications to indicate that LGBTQ+ individuals may be experiencing IBSA disproportionately. In Lenhart et al.’s (2016) study, LGBTQ+ individuals (5%) reported experiencing IBSA more than their heterosexual peers. When asked about someone threatening to share intimate images of them, 15% of LGBTQ+ individuals reported that someone had threatened to share an image of them, compared to just 2% of heterosexual respondents.

Other emerging evidence indicating that LGBTQ+ individuals report higher levels of IBSA can be found in the research undertaken by Ruvalcaba and Eaton (2019), also in the US. Their study identified that bisexual respondents were most likely to report being victims of IBSA, with bisexual women reporting the highest level (17.9%), followed by bisexual men (12.82%) and then gay men (10.19%). Heterosexual participants reported lower levels of IBSA, with heterosexual women at 6.74% and heterosexual men at 5.63%. However, within this study, those who identified as lesbian did not report higher incidences of IBSA when compared to heterosexual women.

In another US study, Waldman (2019) explored the use of mobile dating apps in a sample of 917 LGBTQ+ users. The findings demonstrate that gay and bisexual male users of dating apps were more than twice as likely to experience IBSA in comparison to other LGBTQ+ users. His study revealed that 14.5% of gay and bisexual men had intimate images distributed without their consent, double the rate of IBSA reported in the study undertaken by Lenhart et al. (2016). Research undertaken in Australia by Henry et al.
(2019) discovered that LGBTQ+ individuals reported higher levels of IBSA when compared to their heterosexual peers. One in three LGBTQ+ participants in their study reported that they had been a victim of IBSA, compared to one in five heterosexual respondents. They also found that 33% of LGBTQ+ individuals reported that someone had shared a sexual image of them without
their consent, in comparison to 19% of heterosexual respondents. Moreover, LGBTQ+ respondents were twice as likely to report that someone had threatened to share an image of them (18%), compared to heterosexual respondents (9%). In a similar, larger-scale study across Australia, New Zealand and the United Kingdom (UK), Powell et al. (2020) found that LGBTQ+ participants reported higher levels of IBSA, with 56.4% of LGBTQ+ respondents stating that they had experienced IBSA, compared with 35.4% of heterosexual respondents.

When considering research that has a focus on sexting, sexting behaviours, or where participants have been asked specifically about non-consensual sharing of images, there is also evidence that LGBTQ+ individuals report higher incidences of IBSA. Interestingly, there is evidence to suggest that LGBTQ+ individuals are sharing images of others without that person’s consent. In Garcia et al.’s (2016) study exploring sexting amongst singles in the US, the prevalence of sending, receiving and sharing sexual messages and images amongst their LGBTQ+ participants was highest for gay men. In Ruvalcaba and Eaton’s (2019) study, findings comparable to those of Garcia et al. emerged, with bisexual men (11.11%) most likely to report sending non-consensual images, followed by gay men (10.85%) and bisexual women (6.37%). This compared to heterosexual men (6.37%) and women (4.95%). The fact that, within both of these studies, more LGBTQ+ participants reported that they were also perpetrators of IBSA (e.g. sending images on to others) in itself indicates higher incidences of IBSA from within the LGBTQ+ community. Other findings identifying that LGBTQ+ individuals are participating in the perpetration of non-consensual intimate image sharing include those of Powell et al. (2020) across the UK, New Zealand and Australia, in which 28.6% of LGBTQ+ individuals surveyed stated that they had been involved with one or more types of IBSA, in comparison to 16.1% of those who were heterosexual.
Their earlier Australian study similarly found that LGBTQ+ participants reported that they were more likely to take, distribute or threaten to distribute intimate images or materials without a person’s consent when compared to the heterosexual respondents in their study, with gay men reporting the highest level of threats to share in all of the respondent groups within the study (Powell et al., 2019).
Methods

Within this section, we outline preliminary qualitative results from our international study exploring IBSA and its impact on the mental health and wellbeing of LGBTQ+ individuals. Ethical approval for the study was granted by the University of Birmingham (Ethical Review ERN_19-0418). Participant information sheets as well as consent forms were circulated to all participants. All participants were informed that they could withdraw from the study at any time. Within the participant information sheets, participants were signposted to organisations who could offer additional support if they required it. This information was also reiterated at the end of every interview.

A narrative, semi-structured interview approach, using online technologies such as Zoom, Skype or Microsoft Teams, was used to collect data. At the time of writing this chapter, we had 11 complete interview transcripts from participants, which had been fully thematised and analysed using NVivo. Interviews lasted between 38 and 90 minutes. Questions focused on the experience of the incident, when the individual became aware of the IBSA and where the situation occurred (e.g. within their home or someone else’s home, Pornhub, Facebook, Twitter). Contextual questions were also posed, including: what the participant was thinking and feeling just before and just after the incident, how the IBSA made them feel, how the incident affected their mental health and wellbeing, and how the incident affected issues of trust. Closing questions focused on whether participants had disclosed what had happened to them to family, friends or law enforcement agencies, and anything that had helped them to cope with what had happened. Pseudonyms were assigned to all participants and were stored securely and separately from transcripts and consent forms. All of our participants self-reported as victims/survivors of IBSA and identified as LGBTQ+.
R. Meechan-Rogers et al.
Mental Health and Wellbeing

As outlined, the behaviours associated with the perpetration of IBSA are multifaceted. Some of our participants experienced extensive dissemination of images and videos on the internet, including the live streaming of sexual encounters without their consent. For example, Geoff had a casual chem-sex hook-up with another man. Drückler et al. (2021) define chem sex as the use of a combination of drugs, which can include crystal methamphetamine and mephedrone amongst others, that individuals may consume before or during sexual encounters. During the encounter, the perpetrator received multiple telephone calls and Geoff asked if he was going to answer the phone. Geoff overheard the conversation, in which the perpetrator said, 'what you mean the guy that I've got with me now?' When Geoff asked if he was being recorded, the perpetrator responded, 'yes, I record and stream all of my sexual hookups'. Geoff told us that 'the perpetrator insinuated at one point that he was going to upload it to a porn Web site to get money'. Another participant who had his sexual encounters live streamed was Keith. He met someone online and became involved with him. During the relationship, Keith experimented with chem sex with the perpetrator. He realised that the perpetrator had been live streaming their sexual encounters when others told him things that had happened during what he thought were private encounters: 'I couldn't believe what I was hearing, how could they know this stuff about me'. Keith also had his dating and hook-up apps hacked and the personal information within them changed. For months after the relationship ended, Keith said, 'he would send messages trying to get in touch, when I refused or blocked him, I would get messages stating that I had brought this on myself'.
Mark experienced a number of IBSA occurrences where his encounters were live streamed without his consent. Mark was exploring his sexuality and became involved with the chem-sex scene. When he informed
law enforcement agencies, he was targeted and harassed by the perpetrators. Others had material shared amongst social network groups and messaging applications, as well as with the national press. Abigail shared some intimate pictures via Skype with individuals who she thought had agreed not to share the images. She later found out that the images were being shared on a platform: 'My friends were telling me, hey, these people are saying that they have your nudes'. Matthew was having an online relationship with someone via Skype. He was unaware that the perpetrator was recording their online sessions. Matthew later found out that the perpetrator was selling the recordings online for financial gain. He told us that, 'One of the people from the community that I'm a part of sent me a video one day, with a link to a website and was like, hey, is this is this you?'. Emma experienced IBSA following the breakup of her marriage. Images of her were shared with media outlets and national/international newspapers. Emma had a high-profile job and had to resign after her pictures were shared. Some participants had experienced sextortion and blackmail. Joseph received an email demanding money. Joseph told us the email said: 'If you don't send on Bitcoin twice (sic) 500 euros, we are going to send a message to all your contacts'. Joseph thought it was an email scam and ignored it. Days later, intimate photos and videos of him were shared throughout social media and messaging applications, as well as with email contacts including colleagues.
Others told us that perpetrators had sent messages to church/religious leaders as a means to ‘out’ them. Toby was exploring his sexuality with someone he thought he could trust. The perpetrator shared the photos with church leaders and put them on a gay dating site. This outed Toby to family, friends, people at the church and work colleagues. Two of our participants experienced doxing, where perpetrators circulated images of the victim/survivor alongside personal details, including their names. Jenny told her ex-partner that she wanted to take a break from the relationship. The perpetrator recorded them having sex whilst Jenny was intoxicated. The video was released online, alongside Jenny’s
name. In Abigail's case, the perpetrators had created a folder on an online platform with her name on it. Several participants also told us that they experienced IBSA as part of other unwanted behaviour from an ex-partner, including domestic abuse. Jane was in a relationship for 18 months, during which she experienced emotional and physical abuse. Her ex-partner had posted images of her on Pornhub and had also used her phone to distribute sexual images of her to the contacts within it. Julia was in an abusive relationship with an ex-partner and experienced sustained threats to share intimate images of her, alongside physical coercion and controlling behaviour. It can be seen from these accounts that participants experienced IBSA in diverse ways. As well as the dissemination of intimate images, online streaming of sexual encounters or threats to share images, each participant experienced differences that made their victimisation unique. It is not our intention to explore the impact on health and wellbeing solely through a traditional medicalised trauma model. Rackley and Gavey (2021) explore the benefits and limitations of a medicalised trauma approach in terms of legitimising victim-survivors' experiences, as well as validating the harms and suffering that individuals experience. The majority of our participants reported experiencing severe anxiety, depression, symptoms of PTSD, threatening-coercive behaviours, suicidal thoughts and, in some instances, attempted suicide. Some of our participants also reported that they turned to alcohol and drugs as a coping strategy. For some participants, the victimisation included death threats, either directly from perpetrators or more widely online. McGlynn et al. (2020) argue that trying to understand IBSA victim-survivors exclusively through a medicalised trauma model can result in a narrow interpretation of the extensive multifaceted harms related to IBSA. McGlynn et al.
(2020) offer a useful holistic framework through which to explore IBSA and its impact on individuals. We have adapted the lens of the framework from one which used a feminist phenomenological approach to one using a queer theory approach. Like ours, the McGlynn et al. (2020) study spanned geographical location, sexuality, ethnicity, age and gender. Our findings were similar to those of McGlynn et al. (2020), as participants told us that the impact
of their experiences on their mental health and wellbeing was not vastly different whether the perpetrator was well known to them, in terms of being in an established relationship, or less known to them. Our participants across eight countries discussed the impact upon their lives in multifactorial ways, and whilst this included direct impacts on mental health, many other impacts on their lives were discussed. These included a complete loss of dignity and control affecting the entirety of their daily lives, loss of trust and difficulty forming new relationships with others, feelings of guilt, anxiety and social isolation from family and friends, and constant hypervigilance that the images or videos were being shared. We therefore adopt the framework outlined by McGlynn et al. (2020) to scaffold the results of our study for this chapter, including their five interrelated themes of: social rupture; constancy; isolation; existential threat; and constrained liberty.
Social Rupture

In the research undertaken by McGlynn et al. (2020), social rupture describes a discernible and devastating violation that profoundly disrupts the entirety of a victim-survivor's being, impacting on their sense of self, their identity, their personal relationships and their body image. In our study, LGBTQ+ participants reported that the abuse they experienced impacted significantly on their lives. Geoff, for example, describes the 'fracturing' impact that IBSA had on his life: 'Everything in my life just felt as if I had been ripped open and put on public display. It's mental rape. That's precisely what it feels like. It's like somebody diving into your head and ripping it apart. If you've ever been burgled and your house has been invaded by an individual and they've ransacked your house, they've been through your personal possessions and belongings and they've touched everything, it's an invasion. That's kind of what it feels like, because your soul has been invaded'.
Similarly, Jane told us about the 'rupturing' nature that IBSA had on her life: 'It was shock and loss and grief, how am I going to heal from this? And it just felt like the world kind of fell apart. To be honest, I felt like everything was unsafe. So hard to put into words'. Participants reported far-reaching impacts of IBSA, alongside feeling violated and ashamed of what had happened to them. Jenny told us: 'It devastated every facet of my life. There's so much shame. Like I still to this day can't help but think of, like, how dirty I am, there's like there's no amount of showering, there's no amount of cleansing mentally anything that can convince me'.
The rupturing nature of IBSA was echoed by Abigail, who said, 'This has completely shattered me mentally, completely. And so, when I first saw them, I was just disgusted and ashamed of myself'. Matthew also explained the rupturing nature of his incident of IBSA, saying: 'I was just like violated. I kind of fell into like a slump'. Similar to the findings of McGlynn et al. (2020) and Henry et al. (2019), our LGBTQ+ participants reported significant impacts on the way they viewed their body image, and detailed experiencing significant harms to their wellbeing from IBSA. Given that our study focused solely on LGBTQ+ individuals, some participants also reported the pervasive impact that IBSA had on their external reputation, as well as its role in outing them to family, colleagues and friends. Julia, in her account, explained the long-term impact that experiencing IBSA had on her self-esteem. She said: It's kind of going back over this thing that what's changed me. About the clothing. The eyewear. Now, I tend to wear clothing that is not revealing in any way. Not that I was, but I would wear a bikini swimsuit and stuff like that. And I won't do that anymore. So, I don't go swimming anymore. Don't go anywhere where my body can be seen and I tend to wear jeans and sweatshirts, T-shirts and stuff now, clothing that's more covering.
Toby talked about how his perpetrator used the images to out him, and how the abusive situation resulted in him losing his job: It was an ugly time, I felt powerless…, he took advantage of me… Because of my involvement in the community with my job it had a negative effect on my reputation, I was put on administration leave, management started an investigation, I ended up leaving my job, it was just too much at the time.
Emma also described her abuse as impacting on her external reputation and her wellbeing, and forcing her to resign from her job. She said: 'I was suicidal, you know it took me it took me a long time to kind of come out from that, but I was also I felt defiant at the same time I couldn't believe that it (my life and job) had been kind of ripped away like that'. Mark likewise described an extensive impact on his wellbeing and the significant toll this has taken on his mental health: The effects of the last two years have been completely incapacitating…having flashbacks of these various assaults…It was so serious at one stage that I was sectioned, and I had to be admitted to hospital for my own safety. I was suicidal three and four times. I was put on the wrong medication, they (the healthcare staff) thought I was delusional.
The lived experiences above demonstrate the multifactorial impact that IBSA victim-survivors have suffered and how these experiences relate to the notion of social rupture. The ways in which participants' lives have been affected are varied and far-reaching. They include feelings of total violation of their social and personal being, and the traumatising undoing of their perceptions of themselves as they were before their experience of IBSA. Feelings of guilt or victim blaming and worsening underlying mental health (including suicidal thoughts or attempts) were also experienced by our participants; for some, this resulted in them being unable to work or build new relationships. For some, the rupturing effect of IBSA left them so isolated that they felt unable to be seen in public.
Isolation

As well as social rupture, almost all of our LGBTQ+ participants experienced isolation following IBSA. The levels and types of isolation varied, from isolating themselves from family, friends and work colleagues to removing themselves completely from any online presence and disengaging entirely from social media and dating/hook-up platforms. For some of our participants, the impact resulted in them giving up work altogether. Overwhelmingly, the fear of being recognised was one of the primary drivers of isolation amongst our participants. Abigail, for example, said: I don't like going out as much as I used to because I keep thinking that someone is going to like, you know, recognise me. And every little sound like, say, my apartment's really quiet. If I hear, like the smallest sound, I get really scared and jumpy. And I think someone's trying to break into my apartment to like, attack me. And it's just like this is so terrifying, you know.
Emma echoed the impact IBSA had on her becoming withdrawn and isolated: I felt like I couldn’t go out in public, I still sometimes have that feeling that I’d be recognised, especially when I’m back home. If I didn’t have masks right now [interviews undertaken during COVID 19], I wouldn’t be going out pretty much at all.
Joseph, who had his videos/pictures shared amongst his contacts in his phone, including a ‘neighbourhood watch’ group on WhatsApp, described his feelings of isolation and feeling constantly stressed: When I come inside my building or I need to go outside of my building, I always check if there is no neighbours because I don’t want to trust any neighbours. I’m ashamed to come across my neighbours in case they have seen these videos. So yes, I’m stressed, I’m very stressed doing that every time I come into or leave the building.
Mark similarly told us he is no longer able to leave the house for any longer than a short period of time: 'So, I can no longer really go out of the house, maybe once or twice a week … I have to get back within about 20 min because I start to panic. …. I have developed agoraphobia'. Jane also explained how IBSA left her so isolated that she gave up her job. She said: I didn't sleep for a solid year after this. I experienced insomnia very intensely. I was teaching yoga at the time, I stopped... I just stopped teaching. I had stopped eating. I couldn't close my eyes in public at all anymore, which is what made me stop, you know, teaching yoga. I couldn't handle crowds anymore.
Nearly all of our participants talked at some point during their interview about a level of distrust of the internet, social media platforms and of having an online/digital presence, either immediately after their experience of IBSA or in some instances permanently. Keith said: I have deleted all of my on-line dating profiles, I don’t use social media any longer and try to have as little on-line activity as possible. I find the whole notion of Facebook too triggering. I’ve removed myself from all of that, I removed myself from family and friends, I don’t date anymore and certainly do not have any random hook-ups anymore.
Abigail also said that her isolation wasn't just a matter of not leaving her house; it also involved removing herself from any online activity: 'I just sit in my bed doing nothing, just like use my phone sometimes rather than being on the Internet, I don't want to do anything like all the things I used to, I don't interact with my friends on-line as much as I used to'. Toby, Matthew and Jane also described the social isolation they experienced as a result of IBSA. Toby said, in relation to dating apps and social media platforms: 'I am very wary; I am very cautious about it now … especially with the dating sites online … because of the trauma that I went through'. Matthew reiterated similar feelings about having an online presence following his experience of IBSA: 'I'm definitely more wary of sort of like online stuff now. It just makes me more suspicious or like less likely be on the internet with someone'.
Finally, Jane told us about her relationship with online platforms and how isolated she felt following her experience of IBSA: 'The Internet has been weaponised against me, I have to use it for work, it's do or die'. Isolation was significant amongst our research participants. Distrust of individuals, partners and, in some instances, law enforcement agencies impacted significantly on participants' daily lives. The social isolation experienced by some was so significant that they were either dismissed or felt that they could no longer work for fear of being recognised. Isolation also extended to the removal of any online presence, social media and dating apps, as well as the restriction of all online interactions, even with family and friends. As Jane reported, 'they (the perpetrator) had weaponised the internet against me, I have to use it for work, it's "do" or "die"'. This demonstrates the constant anguish that victim-survivors have to endure in the ever more digital world in which we live and work.
Constancy

Almost all of our participants gave accounts of the unrelenting impact that IBSA has had on their daily lives. Many described the permanent, constant and continuing harms that IBSA has inflicted on them. Geoff identifies its ongoing and harmful impact: 'There's another element of the trauma that I've got to relive. And it's just this, this cycle of trauma'. Keith also reiterates the ongoing nature of IBSA, claiming 'it never goes away; I think about it all the time'. Joseph similarly described the constant checking of his phone and the worry he experiences when receiving multiple messages: 'I checked my phone constantly, I was constantly worried that more pictures were being shared, I was exhausted'. Jenny also outlined the permanent impact that IBSA has had on her life: This image-based abuse has been permanently life altering and has been a struggle ever since and impacted every facet of my life. The traumatisation
of like having all of your dignity and vulnerability taken away by the wild frontier of the Internet. It’s permanent and you have no defence against it.
Emma echoes this in her description of how IBSA has affected her life: 'It's, I think, just completely altered my life, and will forever be intertwined because of what happened. … And the life shattering kind of period that you will always have'. Emma also discussed the ongoing fear of being recognised by strangers: 'grocery stores were completely overwhelming. I would have like panic attacks in a grocery store probably just because there's so many different people that could be recognising you'.
Jane and Emma also talked about the ongoing nature of IBSA in their lives and how this affected them. Jane said: It’s affected my life in very, very damaging ways, financially and emotionally … but it’s something that is going to always affect me, even spiritually. I stopped believing in everything in my sphere. I won’t get into my spiritual beliefs, but it even stripped me of my spiritual beliefs. To be honest, I don’t believe in much anymore. So, it felt like it killed me in a lot of ways.
Abigail described how the abuse has continued to have an impact on her several years after the IBSA experience: 'This has completely destroyed my mental health. I've had anger, sadness, frustration, betrayal. I have nightmares over and over again, like countless nightmares of being raped and murdered… And I will just die'. The constant, relentless fear that images will reappear, be recirculated or be used as a means to cause further abuse was a significant source of anxiety for our participants. The harms to their daily lives can be seen in the accounts provided throughout this chapter: the ongoing nature of the events being relived in flashbacks and nightmares, over and over; the constant sense that their lives will never return to normal; and the day-to-day impact on their mental health.
Existential Threat

McGlynn et al. (2020) describe existential threat as overlapping with the constancy that IBSA victims experience. The nature of IBSA and the pervasiveness of the internet, as well as other digital platforms, provide real opportunities for images and videos to be shared or to resurface. This ongoing fear and anxiety can be seen in the following accounts from our participants. For example, Geoff told us: When these had been streamed online or videos have been leaked or pictures or whatever, but that cycle doesn't stop because they're still convinced that that information's out there, and that's the bit that mentally eats away at you and more.
Julia also described her ongoing existential threat in relation to the continued sharing of her images and her fear that this could impact negatively on her employment: When I tried to break the control, those images were used against me as threats to disclose them to other people, both on the internet and to be sent to people. I knew that with part of my work being in the legal world as to say … that could have caused me a great deal of damage at that time, I still do.
Similar to McGlynn et al. (2020), our participants reported being hypervigilant, not only in relation to images being shared but also in relation to building new personal relationships. Jenny said: I’m always hypervigilant, it’s exhausting, So, you’re constantly tense. Everything just terrifies me. And then even my own intimate relationships, …. it’s something that I would say crosses my mind every time I have sex or like in my day it comes up.… They threatened to ruin my life for years and now, you know, it makes you curious. … I just wanted to drink to take the pain away.
In relation to ongoing hypervigilance, some of our participants also reported that they were constantly searching for images. For others, it was
being constantly concerned that images would continue to be released. Keith said, in relation to this: I still to this day find myself googling for photos, images or videos. I think about what happened to me on a daily basis, I never think I will ever get over what happened. It’s a constant nightmare.
Joseph also talked about the ongoing threat of images being released, reiterating some of Keith's thoughts and experiences: 'If there is a lot of messages that come through at once, I'm always afraid when I open my phone in the morning. I'm like, it's happening again, I am always stressed'. In relation to existential threat, one key factor that participants repeatedly reflected upon was feeling hypervigilant about images being released, or constantly searching for them on the internet. Participants talked about this ongoing cycle and the significant impact it has on their resilience and mental health.
Constrained Liberty

Our participants reported a sense of feeling unsafe both online and offline. Distrust of people and of online platforms featured prominently in participants' responses. One finding of the McGlynn et al. (2020) study was that respondents expressed a distrust of men following their experience of IBSA. Whilst one of our participants strongly vocalised this, it was not a general feature of our findings. In terms of feeling unsafe and no longer trusting individuals as they had prior to the IBSA incident, Abigail said: It's really hard to trust people. Every time someone is, like, friendly with me, I keep thinking that they have a second motive, like there's, um, an ulterior motive. It's very, very hard to trust people now. But back before this happened to me, I was a very, very, loving and very trusting person. But now I'm trying to be very cautious of people now, I don't feel safe online anymore.
Emma also echoed the general loss of trust in people she does not know, much as other participants had described: 'I don't trust strangers, but um, I guess I'm hopeful that there's only so many people in the world that would do something like this, and I've already experienced it so hopefully that's my turn'. Toby went on to highlight how IBSA has impacted his sense of trust in people generally: 'I am still very wary of people; I've been self-conscious and this (the IBSA) was a huge step in me feeling this way, I used to feel safe online'. Geoff talked about how he felt safe within the LGBTQ+ community and how that all changed after his experience of IBSA: As a gay man, I spent so many years of my gay life thinking that I was surrounded by a supportive community, you know, you feel safe. You don't imagine that another gay individual will be the person that destroys your life, you know, you entrust. You know, you buy into the concept; the whole community supported each other. And, I learnt so much that night. It destroyed my trust in people entirely, entirely. I found it difficult to tell anybody anything about my life. And in many respects, I think I still do.
Keith told us about how he felt his liberty had been constrained: 'I feel like I am constantly being watched, even in my own home I have to keep telling myself it's in my head. When I first told friends what happened, they were like are you sure you haven't imagined it, they didn't believe me'. Jenny talked about her experience of IBSA and the distrust of men it has caused: 'I'm terrified of men, constantly on guard. That still happens to this day. Like I do not trust men as a whole. There's a deep, I would say, hatred about the fact that so many men commit violent crimes'. All of our participants described the negative impact IBSA has had on their sense of personal safety as well as on their relationships. Trust in individuals, and in spaces where participants had previously felt safe, has been eroded. For all of our participants, their view of the internet as a space where they once felt safe had been altered
immensely. As all of our participants identified as LGBTQ+, the sense of safety within LGBTQ+ environments has for some been shattered to such an extent that they no longer feel safe within environments where they had once felt a sense of solidarity and protection.
Conclusion

In this chapter, we have explored the preliminary qualitative findings from an international study examining the impact that IBSA has on the mental health of LGBTQ+ individuals. Our study has revealed the pervasive and pernicious impact that IBSA has upon the lives of LGBTQ+ victims/survivors. Every participant experienced an unspeakable invasion of privacy and trust, with a lasting and persistent impact on all aspects of their lives. Whilst we have attempted to contextualise our findings through a holistic, non-medicalised trauma-based model, it must be noted that all of our participants outlined a significant impact on their mental health, with many having contemplated suicide, experienced depression and anxiety, and demonstrated symptoms of PTSD. Whilst there is a growing body of evidence that LGBTQ+ individuals and other minority groups experience IBSA at disproportionate levels, more focused research into the impact this has on the mental health of LGBTQ+ individuals is required.

Acknowledgements We would like to acknowledge and thank all of the LGBTQ+ participants of our study who took the time to share their experiences with us. We have been truly humbled by their enthusiasm for the study; without their input it would not have been possible to develop this work. We would also like to thank the various organisations who took the time to talk to us about this study, including Victims of Image Crime, the Revenge Porn Helpline, the Legal Advice Centre at Queen Mary University London, Survivors UK, the eSafety Commissioner in Australia, the UK Law Commission, Netsafe New Zealand, Thriving Through in the US, the Rainbow Project in Northern Ireland, the Equality Network in Scotland and police officers from Greater Manchester Police. Finally, we would like to thank Clare McGlynn, Kelly Johnson, Erika Rackley, Nicola Henry,
Nicola Gavey, Asher Flynn and Anastasia Powell for allowing us to utilise their framework in this chapter. This project was not funded or supported by any external funding body/authority.
References

Beyens, J., & Lievens, E. (2016). A legal perspective on the non-consensual dissemination of sexual images: Identifying strengths and weaknesses of legislation in the US, UK and Belgium. International Journal of Law, Crime and Justice, 47, 31–43.
Crimmins, D., & Seigfried-Spellar, K. (2017). Adults who sext: Exploring differences in self-esteem, moral foundations, and personality. International Journal of Cyber Criminology, 11(2), 169–182.
Drückler, S., Speulman, J., van Rooijen, M., & De Vries, H. J. (2021). Sexual consent and chemsex: A quantitative study on sexualised drug use and non-consensual sex among men who have sex with men in Amsterdam, the Netherlands. Sexually Transmitted Infections, 97(4), 268–275. https://doi.org/10.1136/sextrans-2020-054840
Garcia, J. R., Gesselman, A. N., Siliman, S. A., Perry, B. L., Coe, K., & Fisher, H. E. (2016). Sexting among singles in the USA: Prevalence of sending, receiving, and sharing sexual messages and images. Sexual Health, 13, 428–435.
Hayes, R. M., & Dragiewicz, M. (2018). Unsolicited dick pics: Erotica, exhibitionism or entitlement? Women's Studies International Forum, 71, 212–219.
Henry, N., & Powell, A. (2015). Beyond the 'sext': Technology-facilitated sexual violence and harassment against adult women. Australian and New Zealand Journal of Criminology, 48, 104–118.
Henry, N., & Powell, A. (2016). Technology-facilitated sexual violence: A literature review. Trauma, Violence, & Abuse, 19(2), 195–208.
Henry, N., & Powell, A. (2017). Sexual violence in the digital age: The scope and limits of criminal law. Social & Legal Studies, 25, 397–418.
Henry, N., Powell, A., & Flynn, A. (2019). Image-based sexual abuse: Victims and perpetrators. Trends and Issues in Crime and Criminal Justice, 572, 1–18. Australian Institute of Criminology.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. (2021). Image-based sexual abuse and consequences of non-consensual nude or sexual imagery. Routledge.
Langlois, G., & Slane, A. (2017). Economies of reputation: The case of revenge porn. Communication and Critical/Cultural Studies, 14(2), 120–138.
Lenhart, A., Ybarra, M., Zickuhr, K., & Price-Feeney, M. (2016). Online harassment, digital abuse, and cyberstalking in America. Data & Society Research Institute/Centre for Innovative Public Health Research.
McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). 'It's torture for the soul': The harms of image-based sexual abuse. Social & Legal Studies, 1–22.
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37(3), 538–561.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond 'revenge porn': The continuum of image-based sexual abuse. Feminist Legal Studies, 25, 25–46.
Patella-Rey, P. J. (2018). Beyond privacy: Bodily integrity as an alternative framework for understanding non-consensual pornography. Information, Communication & Society, 21(5), 786–791.
Powell, A., Scott, A. J., & Henry, N. (2020). Digital harassment and abuse: Experiences of sexuality and gender minority adults. European Journal of Criminology, 17(2), 199–223. https://doi.org/10.1177/1477370818788006
Powell, A., Scott, A. J., Flynn, A., & Henry, N. (2020). Image-based sexual abuse: An international study of victims and perpetrators. Summary report. Melbourne: RMIT University.
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian adults. Computers in Human Behavior, 92, 393–402.
Rackley, E., & Gavey, N. (2021). The harms of image-based sexual abuse. In N. Henry, C. McGlynn, A. Flynn, K. Johnson, A. Powell, & A. Scott (Eds.), Image based sexual abuse and consequences of non-consensual nude or sexual imagery. Routledge.
Ruvalcaba, Y., & Eaton, A. A. (2019). Non-consensual pornography among US adults: A sexual scripts framework on victimization, perpetration, and health correlates for women and men. Psychology of Violence, 10(1), 68.
Waldman, A. E. (2019). Law, privacy, and online dating: 'Revenge porn' in gay online communities. Law & Social Inquiry, 44(4), 987–1018. https://doi.org/10.1017/lsi.2018.29
R. Meechan-Rogers et al.
Wood, H. (2011). The internet and its role in the escalation of sexually compulsive behaviour. Psychoanalytic Psychotherapy, 25(2), 127–142.
16 Sexual Violence and Consent in the Digital Age
Alexandra S. Marcotte and Jessica J. Hille
A. S. Marcotte (B) · J. J. Hille
Kinsey Institute, Indiana University, Bloomington, IN, USA
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_16

Introduction

Digital technologies provide new opportunities and means for connection and self-expression. These technologies enable connection with people around the world instantaneously, allowing us to share any part of our lives, from the relatively inconsequential to the monumental. Studies show that interpersonal connectivity online is tied to positive psychosocial outcomes, including a decrease in loneliness and an increase in self-esteem (Gonzales & Hancock, 2010; Pittman & Reich, 2016; Ryan & Xenos, 2011; Valkenburg et al., 2006). However, digital platforms are also used to facilitate, exacerbate, and extend sexual violence. Examples of digitised sexual consent violations have received widespread attention in recent years, and there are hundreds of examples around the world where issues of sexual consent and digital spaces intersect. This
attention is due in part to the ease of sharing information on the internet, particularly since the advent of social media. Examples of sexual violence and consent in digital spaces so often involve images—whether in the case of image-based sexual abuse, in which images and videos may be consensually created but later shared without the victim's consent, or images and videos of sexual assaults, among others (Henry et al., 2020). Images play a key role in conversations about and understandings of sexual violence and consent in the digital age.

The purpose of this chapter is not to argue that there is something wholly unique about sexual consent in digital spaces, but rather to acknowledge the centrality of these spaces in our everyday romantic and sexual lives, and in our contemporary social being more generally. This work serves as a recognition of the need to understand how these spaces operate in different moments in the service of sexuality and sexual expression.

There are two primary differences between analogue and digital photography that impact contemporary sexual experiences. First, digital recordings capture a particular moment and extend it over time, allowing a single event to be re-lived in perpetuity. Second, photographs and videos can be disseminated faster and further in the contemporary digital age than at any other moment in history. Once a photograph has been taken and shared, it is much harder for the subject to withdraw consent for the recording and dissemination, if consent was even obtained in the first place. The capabilities and effects of digital photography, particularly in the context of sexual consent and assault, have broad-reaching and profound implications for policy and education (see Henry et al., 2020).

The study of sexual consent is in many ways an entire academic subfield, spanning decades and disciplines, methodologies and epistemologies.
This chapter thus cannot and will not address all of the themes, tensions, and research involved. However, drawing primarily from sexological and feminist research, we offer a definition of sexual consent, and apply this definition to issues of sexual consent in digital spaces, paying particular attention to the ways in which these non-physically located spaces shift, challenge, and/or reconfigure current understandings of sexual consent.
We review three scenarios that highlight the complexity of sexual consent in the digital age, examining past research on event-based consent in the context of digital photography, the internet, and social media. First, we distinguish between willingness and consent in relation to sexual activity, including the creation of digital recordings of the activity. Second, we consider instances of non-consensual sexual image distribution, when the sexual activity itself was consensual, but the recording and dissemination were not. Finally, we explore the ways that the non-consensual recording of sexual assault can extend and exacerbate experiences of violence and violation. In each section, we suggest avenues for future research to further the understanding of sexual consent in a digital context. Consent, particularly in our current digital era, is a complicated and fraught issue. While this chapter cannot address every aspect of sexual consent, we hope that the vignettes and analyses below will be a starting point for continued interdisciplinary discourse on sexual consent in the digital age.
Unwanted, Consensual, and Recorded

Alice and her girlfriend Jane have been dating for several months. They often engage in consensual sex. Alice wants to make a sex tape with Jane, who isn't interested in the idea. After further discussion, Jane agrees, and they record themselves having sex.
In this example, the sexual activity and recording are consensual, but Alice is more interested in creating a recording than Jane. In the context of sexual consent, scholars have distinguished between consenting to sex and wanting it (e.g., Hickman & Muehlenhard, 1999; Humphreys, 2007; Muehlenhard et al., 2016; Peterson & Muehlenhard, 2007). The ability to record sex has become more widespread due to the development and proliferation of cameras and smartphones; this practice must therefore be considered in the context of sexual consent and wantedness. Though there are theoretical and practical differences between in-person sexual consent and digital sexual consent, the definition of
consent in the literature is largely consistent. Hickman and Muehlenhard (1999) define consent as the "freely-given verbal or nonverbal communication of a feeling of willingness to engage in sexual activity" (p. 259). This definition is useful in its recognition of both the internal (willingness) and external (communication) aspects of consent (Humphreys, 2007). In their review article, Muehlenhard et al. (2016) expand upon Hickman and Muehlenhard's (1999) definition by factoring in consent interpretation. Their three-pronged understanding is the most robust account of sexual consent in the literature to date. They argue that sexual consent can be understood as (1) "an internal state of willingness", (2) "an act of explicitly agreeing to something", and (3) as a "behaviour that someone else interprets as willingness" (Muehlenhard et al., 2016, p. 462). In its account of the internal state of willingness, this model acknowledges that consent must be freely given, and not the result of coercion or fear. It also highlights the communication of consent (whether verbal or non-verbal) and how it is interpreted (see also Burgin, 2019; Burgin & Flynn, 2019).

Peterson and Muehlenhard (2007) further examined the distinction between wanting and consenting by examining both unwanted consensual sex and wanted non-consensual sex. They argue that while wanting or not wanting to engage in sex can influence decisions about consenting or not consenting, these categories are analytically and practically distinct. In an earlier study, O'Sullivan and Allgeier (1998) found that both men and women engage in unwanted consensual sex with their partners: approximately 25% of men and 50% of women in their study reported consenting to unwanted sexual activity during a two-week period. The most common reasons provided by participants were to satisfy a partner's needs and to promote intimacy.
Interestingly, the data also showed that most participants experienced positive outcomes from engaging in unwanted consensual sex, including increased intimacy and conflict avoidance. Similarly, Hille et al. (2020) found that people who identified as asexual (i.e. people who do not experience sexual attraction) or on the asexual spectrum reported willingness to engage in consensual sexual activity to satisfy their partner or further an emotional connection, even if they had no personal desire to engage in sexual activity.
As mentioned above, wanting and consenting are important and distinguishable concepts in discussions of sexual consent. In the context of in-person encounters, research has shown few negative effects of engaging in unwanted but consensual sexual activity. O'Sullivan and Gaines (1998) examined unwanted sex in the context of committed relationships. Their results showed that 81% of participants (n = 194) reported at least one instance of feeling ambivalent about engaging in sex with their partners and half of these participants ultimately refused sex. Their results also supported O'Sullivan and Allgeier's (1998) finding that within committed monogamous relationships, consenting to sexual activity even without completely desiring sex produced few to no negative effects among participants. In line with O'Sullivan and Allgeier's (1998) results, Peterson and Muehlenhard (2007) found that participants chose to engage in unwanted sex primarily to please their partners and/or to promote intimacy in their relationships.

In all of these cases, however, the event in question is a particular event when consent and wantedness are experienced in real time. Consent given at one time does not constitute consent for all time (Burgin, 2019). In the context of digital recording and dissemination, willingness and consent interact with a different temporal frame than in-person interactions. Now, with digital technologies, consenting to a photograph or video recording creates a lasting risk of unwanted exposure, particularly if the relationship doesn't last. If posted online, digital files are incredibly difficult to erase completely. Individuals must now grapple with the lasting effects of a single granting of consent, particularly if the consensual activity, or its recording, was not wanted. Similarly, scholars of consent must consider how to understand consent at a particular moment in the context of a digital infinity.
Non-Consensual Sexual Image Distribution

In September 2010, 18-year-old Rutgers student Tyler Clementi killed himself after a video of him kissing a man surfaced on Twitter. His roommate, Dharun Ravi, videotaped Tyler and the other male student in his dorm room
without Tyler’s permission or knowledge and shared the video on social media. Tyler is one of dozens of LGBT+ youth and young adults who have committed or attempted suicide as a result of online harassment. (Johnson et al., 2013; Wiederhold, 2014)
Issues of sexual consent in digital spaces also include the non-consensual distribution of images. The above example illustrates potential disconnects between consensual in-person sexual experiences and non-consensual digital sexual expression. Situations like these fit onto Powell and Henry's (2016) continuum of image-based sexual abuse, which includes all forms of unauthorised image creation, distribution, and threats of distribution (see Powell et al., 2019, p. 393). Consider another (fictional) example of non-consensual sexual image distribution:

Rachel takes a nude picture of herself and sends it to her boyfriend, Daniel. Without her knowledge or permission, he shows the photo to several friends. A few months later, Rachel breaks up with Daniel. He posts the photo on his social media where it is shared repeatedly.
Colloquially, the practice highlighted in the vignette above is known as "revenge porn", though that term is problematic for three main reasons. First, it is too limited in focus, suggesting that the phenomenon is limited to angry former partners who upload images of their exes (Bates, 2017). Second, the term "porn" directs focus to the content of the image and away from the abusive nature of non-consensual distribution (Powell et al., 2018). Third, "revenge porn" plays on existing negative connotations about pornography in general. Researchers have also argued that, in labelling non-consensual image distribution as pornography, the term minimises the violent nature of the act (McGlynn et al., 2017).

In Rachel and Daniel's example, we see both consensual and non-consensual activities. Rachel created the original photo and sent it consensually to Daniel. The non-consensual image distribution occurred when Daniel showed the image to his friends and then later publicly posted the image without her consent. It is important to note that, while Daniel may have been acting maliciously, he could also have assumed
Rachel was consenting to the sharing of the nude image when she shared it with him. While Daniel's assumption of sexual consent is clearly erroneous, it illustrates the pitfalls of not explicitly requesting and receiving consent, particularly in the context of gendered sexual communication norms.

Consent scholars have long noted gendered differences in consent communication and interpretation. In a 1999 study, Hickman and Muehlenhard examined how heterosexual college-aged men and women communicate consent. Based on a questionnaire designed to elicit information about both verbal and non-verbal indicators of consent, they found that "communicating sexual consent is far more complex than simply saying yes to a sexual initiation" (Hickman & Muehlenhard, 1999, p. 268). Some participants understood smiling to be an indication of consent from their partner, whereas others noted that direct verbal statements of consent are the only true indicators of consent. The researchers also found that the use of consent signals varied slightly between men and women. Women were more likely to use indirect verbal signals, whereas men were more likely to use indirect non-verbal signals or to not respond to questions about consent (Hickman & Muehlenhard, 1999, p. 269). Further, Humphreys (2007) examined the relationship between gender, relationship history, and consent. The results showed that participants believed that those in long-term relationships had better understandings of consent than those in new or short-term relationships. When participants were presented with scenarios depicting long-term relationships, they were more likely to indicate that explicit verbal consent was not necessary (Humphreys, 2007, p. 313). In other words, explicit verbal consent, according to the participants, becomes less and less necessary as a relationship continues. This finding suggests that, over time, consent negotiations can/should become more non-verbal.
Again, it is important to note that, once digitised, a moment is preserved and extended through time. Consent given in a particular moment, then, may be assumed to be indefinite by one partner, while the other did not intend such an extension of their consent. This assumption of indefinite consent is erroneous, but may happen regardless of existing legal standards and regulations. Therefore, programs seeking to address sexual consent and curb non-consensual activity must consider not only
the particular moment/event when consent is given but also how that moment may be extended if the event is digitised.

Non-consensual sexual image distribution practices illustrate the complex, nuanced aspects of digital consent. Digital dissemination can happen easily and from virtually anywhere, meaning that image distribution can happen without the subject's knowledge. This makes sharing easier to conceal and prevents an opportunity for the subject to give or withhold consent for distribution. Though statistics about the prevalence of non-consensual image sharing are relatively limited, one Australian study found that 6.4% of participants (aged 16–49 years) distributed a nude image of someone else and 4.9% threatened to distribute a nude image (Powell et al., 2019). Crofts et al. (2015) reported similar results: 6% of participants (under 18 years of age) distributed at least one nude image without consent. They also found that 20% of adolescents in their study reported showing a nude image of another person to someone else. In a large US sample, Garcia et al. (2016) found that nearly one-fifth of people have shared a nude image without the consent of the person who was photographed. Of the participants who shared an image, they typically shared with three or more friends, suggesting that the number of people viewing an image can grow rapidly with each act of sharing.
Recorded Violence

On August 11, 2012, two high school boys, Trent Mays and Ma'lik Richmond, raped an unconscious 16-year-old girl in Steubenville, Ohio. The rape was photographed and videotaped by witnesses. The photos and videos spread quickly via social media and the event received substantial media coverage. Mays and Richmond were later convicted of rape. (Oppel, 2013; Simpson, 2013)

A year earlier, 15-year-old Rehtaeh Parsons attended a small party with her friend, during which four teenage boys allegedly raped her. The act was photographed and the image was shared quickly among her classmates through
texts and social media. In the weeks and months that followed, Rehtaeh experienced nearly constant harassment, both in person and on social media. In April 2013, Rehtaeh attempted suicide and later died in the hospital. (Chiu, 2018)
These well-known examples of digital consent violations are among hundreds of cases around the world where issues of sexual consent in digital spaces arise. These stories highlight the importance of understanding how near-ubiquitous social media and other digital platforms have been used as tools of violence, particularly against young people (e.g., Armstrong & Mahone, 2017; Hamm et al., 2015; Lee & Crofts, 2015). The violent images extend the violence and trauma of the original action through the sharing of the images without the victim's consent. In instances of recorded violence, the violence is not enacted in a singular moment; it occurs repeatedly (see McGlynn et al., 2020). Victims experience violence with each event: the assault, the witnessing, the photographing, the sharing, and the re-sharing.

Images of violence, and sexual violence in particular, are by no means new to the digital moment. The history of photography includes war photography, photographs of lynching, and images of rape, among others (Mailänder, 2017; Reinhardt et al., 2007). As Sontag writes, "Ever since the invention of the camera in 1839, photography has kept company with death" (2003, p. 24). Capturing violence on film is not unique to digital photography—its history is inextricably linked with the history of photography.

Digital photographs and videos both capture and extend experiences of violence. Not only do the images and videos capture acts of violence, but their dissemination and permanence also extend violence into the present moment (for more, see Dodge, 2016). As Butler argues in "Torture and the Ethics of Photography," photography preserves a moment in time in such a way that the moment can be said to continue, the photograph serving as "a kind of promise that the event will continue, is that very continuation of the event" (2007, p. 959).
Because the digital is so iterative, there are multiple moments when it is necessary to obtain consent and as many moments when it is possible to enact violence. Therefore, it is important to conceptualise and discuss consent as an ongoing negotiation in which partners are continuously
checking in with each other. Understanding consent in a digital context requires people to consider the multiple moments of (non)consensual action that occur in digital spaces, particularly through the repeated sharing of videos and images. Sexual consent must be understood as an ongoing, iterative process beyond the consent given (or not) during the initial event. Legal and policy interventions must recognise both the complexity and repetitive nature of consent in an age of digitised sexuality.
Conclusion

Much of the digital consent literature has grown out of legal theory, and the law is focused primarily on regulating behaviour and prosecuting violations. Because this body of literature stems from a different intellectual tradition than in-person consent literature and is often focused on legal standards, it cannot always thoroughly attend to the messiness of sexual consent. These "messy" considerations include ambivalence, consenting without wanting, and miscommunication/misinterpretation.

By the same token, understanding consent in a digital context requires scholarship on in-person consent to consider the multiple moments of (non)consensual action that occur in digital spaces, particularly through the repeated sharing of images, and how violations should be handled, both legally and in terms of advocacy, education, and policy. Consent, then, must be understood as an ongoing, iterative process beyond consent given (or not) during the initial event.

Digital consent literature could, in turn, be enhanced by considering experiences of wanting in addition to consenting, and how wantedness and consent are communicated. Communicating consent in digital spaces must also be considered, particularly given the gendered dynamics of verbal versus non-verbal consent communication. Unlike in-person consent negotiations, the digital medium often necessitates verbal (written or oral) communication. Because there are gender differences in consent communication practices, with different social outcomes based on gender, future research should consider how miscommunication and misinterpretation might occur on digital platforms.
One potentially positive aspect of communicating consent digitally stems from the fact that conversations are mediated by technology. This affords people the opportunity to disengage from uncomfortable conversations more easily than in face-to-face situations. In the context of online dating and certain social media platforms, people can "unmatch" or block users who are coercing them. Another important aspect of digital consent violations is that there is often evidence of the crime. In the cases of Jane Doe (Steubenville) and Rehtaeh Parsons, the existence of the images was crucial in prosecuting the perpetrators (Dodge, 2018).

Such safeguards are, of course, not without their limitations. The ability to block or unmatch cannot fully prevent online harassment. The existence of a digital record, particularly in the case of videos and images (see examples above), can exacerbate and extend trauma. But it would be a mistake to assume that there can be no consensual digital contact or that the digital nature of modern communication and courtship is necessarily violent.

As we have discussed in this chapter, sexual consent, particularly involving a digital component, is a complicated, nuanced issue. Any attempt to address contemporary sexual harassment and assault must consider both the positive and negative aspects of consent in the digital age. As public policy and legislation adapt to changing technologies, it is important that we recognise and contend with issues unique to digital consent. There needs to be more research on the prevalence of digital consent violations and potential solutions so that laws can reflect the realities of people's everyday experiences. Future research should also address whether the potential risk inherent in the permanency of digital sexual images constrains choices about sexual expression and further distances wanting from consenting (i.e. prevents people from engaging in image-based sexting, even if they want to).
Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Armstrong, C. L., & Mahone, J. (2017). "It's on us." The role of social media and rape culture in individual willingness to mobilize against sexual assault. Mass Communication and Society, 20(1), 92–115.
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42.
Burgin, R. (2019). Persistent narratives of force and resistance: Affirmative consent as law reform. British Journal of Criminology, 59(2), 296–314.
Burgin, R., & Flynn, A. (2019). Women's behavior as implied consent: Male "reasonableness" in Australian rape law. Criminology & Criminal Justice, 21(3).
Butler, J. (2007). Torture and the ethics of photography. Environment and Planning D: Society and Space, 25, 951–966.
Chiu, E. (2018). The legacy of Rehtaeh Parsons. Canadian Broadcasting Corporation. https://newsinteractives.cbc.ca/longform/five-years-gone
Crofts, T., Lee, M., McGovern, A., & Milivojevic, S. (2015). Sexting and young people. Palgrave Macmillan.
Dodge, A. (2016). Digitizing rape culture: Online sexual violence and the power of the digital photograph. Crime Media Culture, 12(1), 65–82.
Dodge, A. (2018). The digital witness: The role of digital evidence in criminal justice responses to sexual violence. Feminist Theory, 19(3), 303–321.
Garcia, J. R., Gesselman, A. N., Siliman, S. A., Perry, B. L., Coe, K., & Fisher, H. E. (2016). Sexting among singles in the USA: Prevalence of sending, receiving, and sharing sexual messages and images. Sexual Health, 13(5), 428–435.
Gonzales, A. L., & Hancock, J. T. (2010). Mirror, mirror on my Facebook wall: Effects of exposure to Facebook on self-esteem. Cyberpsychology, Behavior, and Social Networking, 14(1–2), 79–83.
Hamm, M. P., Newton, A. S., Chisholm, A., Shulhan, J., Milne, A., Sundar, P., Ennis, H., Scott, S. D., & Hartling, L. (2015). Prevalence and effect of cyberbullying on children and young people: A scoping review of social media studies. JAMA Pediatrics, 169(8), 770–777.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Beyond revenge porn: Gender, justice and image-based sexual abuse. Routledge.
Hickman, S. E., & Muehlenhard, C. L. (1999). "By the semi-mystical appearance of a condom": How young women and men communicate sexual consent in heterosexual situations. Journal of Sex Research, 36(3), 258–272.
Hille, J. J., Simmons, M. K., & Sanders, S. A. (2020). "Sex" and the ace spectrum: Definitions of sex, behavioral histories, and future interest for individuals who identify as asexual, graysexual, or demisexual. The Journal of Sex Research, 57(7), 813–823.
Humphreys, T. (2007). Perceptions of sexual consent: The impact of relationship history and gender. Journal of Sex Research, 44(4), 307–315.
Johnson, R. B., Oxendine, S., Taub, D. J., & Robertson, J. (2013). Suicide prevention for LGBT students. New Directions for Student Services, 142, 55–69.
Lee, M., & Crofts, T. (2015). Gender, pressure, coercion and pleasure: Untangling motivations for sexting between young people. British Journal of Criminology, 55, 454–473.
Mailänder, E. (2017). Making sense of a rape photograph: Sexual violence as social performance on the Eastern Front, 1939–1944. Journal of the History of Sexuality, 26(3), 489–520.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond 'revenge porn': The continuum of image-based sexual abuse. Feminist Legal Studies, 25, 25–46.
McGlynn, C., Rackley, E., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). It's torture for the soul: The harms of image-based sexual abuse. Social & Legal Studies, 30(4).
Muehlenhard, C. L., Humphreys, T., Jozkowski, K. N., & Peterson, Z. D. (2016). The complexities of sexual consent among college students: A conceptual and empirical review. Journal of Sex Research, 53(4), 1–31.
Oppel, R. A. (2013). Ohio teenagers guilty in rape that social media brought to light. New York Times. https://www.nytimes.com/2013/03/18/us/teenagers-found-guilty-in-rape-in-steubenville-ohio.html
O'Sullivan, L. F., & Allgeier, E. R. (1998). Feigning sexual desire: Consenting to unwanted sexual activity in heterosexual dating relationships. Journal of Sex Research, 35(3), 234–243.
O'Sullivan, L. F., & Gaines, M. E. (1998). Decision-making in college students' heterosexual dating relationships: Ambivalence about engaging in sexual activity. Journal of Social and Personal Relationships, 15(3), 347–363.
Peterson, Z. D., & Muehlenhard, C. L. (2007). Conceptualizing the "wantedness" of women's consensual and non-consensual sexual experiences:
Implications for how women label their experiences with rape. Journal of Sex Research, 44(1), 72–88.
Pittman, M., & Reich, B. (2016). Social media and loneliness: Why an Instagram picture may be worth more than a thousand Twitter words. Computers in Human Behavior, 62, 155–167.
Powell, A., & Henry, N. (2016). Technology-facilitated sexual violence victimization: Results from an online survey of Australian adults. Journal of Interpersonal Violence, 34(17), 3637–3665.
Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse. In W. DeKeseredy & M. Dragiewicz (Eds.), Routledge handbook of critical criminology (pp. 305–315). Routledge.
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian residents. Computers in Human Behavior, 92, 393–402.
Reinhardt, M., Edwards, H., & Duganne, E. (2007). Beautiful suffering: Photography and the traffic in pain. University of Chicago Press.
Ryan, T., & Xenos, S. (2011). Who uses Facebook? An investigation into the relationship between the Big Five, shyness, narcissism, loneliness, and Facebook usage. Computers in Human Behavior, 27(5), 1658–1664.
Simpson, C. (2013). The Steubenville victim tells her story. The Atlantic. https://www.theatlantic.com/national/archive/2013/03/steubenville-victimtestimony/317302/
Sontag, S. (2003). Regarding the pain of others. Picador.
Valkenburg, P. M., Peter, J., & Schouten, A. (2006). Friend networking sites and their relationship to adolescents' well-being and social self-esteem. CyberPsychology and Behavior, 9(5), 584–590.
Wiederhold, B. K. (2014). Cyberbullying and LGBTQ youth: A deadly combination. Cyberpsychology, Behavior, and Social Networking, 17(9), 569–570.
Part V Online Hate
17 It's Just a Preference: Indigenous LGBTIQ+ Peoples and Technologically Facilitated Violence
Andrew Farrell
Introduction

Queer encounters are more accessible than ever on dating apps. Dating apps which host and facilitate connections between LGBTIQ+ (lesbian, gay, bisexual, transgender, intersex, and queer) people, and which prompt encounters with those questioning and experimenting, fill a gap in the market for online platforms that are accepting, inclusive, and safe for queer people to connect, flirt, and organise social, sexual, and romantic engagements. Grindr, a popular queer dating app launched in 2009, is one of the various LGBTIQ+ centred dating apps which 'proudly represents a modern LGBTQ [sic] lifestyle that's expanding … with a meaningful impact for our community' (Grindr.com, n.d.). Becoming synonymous with hook-up culture, dating apps face growing

A. Farrell (B)
Department for Indigenous Studies, Macquarie University, Sydney, NSW, Australia
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_17
concerns and criticisms from racially, ethnically, gender, sex, and sexually diverse consumers (Truong, 2018). Current research indicates that compounding forms of violence including racism, harassment, bullying, and sexual violence are rife within these kinds of platforms (Lim et al., 2020). Carlson (2020, p. 134) notes that 'social media platforms facilitate the continuation and augmentation of existing cultural practices [but also] allow for the expression and proliferation of racist, colonial discourse, what Matamoros-Fernández has called "platformed racism"'. Responses to dating apps from women, Indigenous peoples, people of colour, and ethnically and LGBTIQ+ diverse users draw attention to the ever shifting and intersecting nature of discrimination and violence online. With a focus on Aboriginal and Torres Strait Islander LGBTIQ+ peoples, this chapter initiates a dialogue that empowers and amplifies their voices. The chapter includes an analysis of literature that explicitly addresses the topic, as well as the converging themes of race, gender, and sexuality in current academic and mainstream discourses around the use of dating apps. It follows these themes through online spaces that provide intimate, interactive social, romantic, and sexual experiences. I will argue that, as dating apps proliferate and become normalised, the growing concerns about intersecting forms of oppression, discrimination, and violence towards groups who face multiple and compounding forms of oppression must be addressed. Discussion of these themes includes a survey of Aboriginal and Torres Strait Islander LGBTIQ+ peoples and dating apps, which forms original research contributing to a growing multidisciplinary field of Queer Indigenous Studies and research in Australia.
Literature Review There is a dearth of literature which focuses on Indigenous LGBTIQ+ peoples and dating apps. At the present time, research that is inclusive of Aboriginal and Torres Strait Islander LGBTIQ+ peoples can be found in the works of Farrell (2017) and Carlson (2020), which recognise the distinct experiences of Aboriginal and Torres Strait Islander
LGBTIQ+ peoples on social media (see also, Lumby, 2010) and popular dating apps such as Grindr and Tinder. This research suggests that dating apps 'are implicated in the perpetuation of normative ideas of gender, race and sexuality … hatred and abuse, and … concerns about the physical safety of users, particularly women and sexually diverse users' (Carlson, 2020, p. 134). Much of the evolving scholarship around race and intimate forms of violence reflects a growing recognition of non-Indigenous people of colour and ethnic minorities; the experiences of LGBTIQ+ Aboriginal and Torres Strait Islander peoples remain absent across these spaces. As Carlson and Frazer (2018) argue in the context of online bullying, most research does not account for the intersecting experiences of minority groups, favouring instead data which resembles homogenised categories of normative gender, sex, and sexual identities as well as cultural and ethnic identities. Pooley and Boxall's (2020, p. 2) study on mobile dating apps and sexual and violent offending notes that 'increased use of technology to develop and maintain social relationships has coincided with growth in the use of communication technologies to facilitate a range of sexual offences known as technology-facilitated sexual violence (TFSV)'. The examination of TFSV requires an analysis that encompasses the identities of both perpetrator and victim, in order to better understand the role of race and racism directed at Indigenous peoples on dating apps, as Carlson and Frazer (2018) also suggest. When both perpetrator and victim are from oppressed groups, there must be a closer analysis of the ways perpetrators 'fail to see how [their] thoughts and actions uphold someone else's subordination' (Collins, 1993, p. 25). Oppression does not absolve a person of their wrongdoing.
However, by considering more complex degrees of privilege and oppression, we may form a better understanding of how distinct forms of discrimination such as racism exists in shared digital spaces and between marginalised groups. Queer Indigenous scholarship broadly recognises that racist and normative sexual violence is an integral feature of historical settler colonialism and the ongoing settler colonial project. Picq (2020, p. 172) argues that ‘sexual colonisation brutally repressed Native sexualities, regulating Indigenous sexual and gender experiences and supplanting them with Western sexual codes’. Moreover, Clark (2015, p. 388) notes
that the 'settler imaginary has constituted sexual identities through an appropriative and reactive relationship with colonised peoples globally', where settler queers often align themselves with Indigenous oppression through perceived sameness. Clark (2015, p. 389) identifies, in coalition building with Indigenous LGBTIQ+ peoples, the false equivalence of white experiences of queerness with racial oppression, in attempts to 'denaturalise the settler by their being present in our corporeality'. I argue that such false equivalencies exist online, making dating sites a minefield for Indigenous people who have navigated and survived settler colonial violence from first contact and into the digital age. To understand the specificities of online violence experienced by Aboriginal and Torres Strait Islander LGBTIQ+ peoples in Australia, we must acknowledge the history of their silencing, erasure, and unique experiences of violence under settler colonialism. From early contact, colonial hierarchies established Indigenous peoples as 'primitive and inferior; far removed from the "civilised" European male who occupies the highest position in the global human racial, social, and cultural hierarchy' and 'caught in a space define[d] as undesired but also always as the vulnerable subjects of clandestine [settler] desires' (Sullivan & Day, 2019, p. 6). Through a Western Christian and anthropological lens of sexual morality and superiority, Indigenous bodies and societies were homogenised as sexually savage, exotic, libidinous, immoral, and taboo. Terms such as 'black velvet' arose to objectify Aboriginal women's genitals and have been appropriated to objectify gay Aboriginal men (Gays and Lesbians Aboriginal Alliance, 1994, p. 15). Queer Indigenous people continue to face the dichotomy of settler desires vividly as they occupy online spaces which amplify settler colonial social and sexual relationships that have been in place since early contact.
Queer critique locates racial and settler colonial influence in queer digital landscapes. White supremacy and settler colonialism enter the chat through what Riggs (in Daroya, 2018, p. 74) refers to as the 'psychic life of power', which maintains settler power through the ongoing disavowal of Indigenous sovereignty. Daroya (2018, p. 74) employs a psychoanalytic lens on the cerebral dimensions of racism to understand 'colonialism and racism as having a "psychic life" [that] bridges the gap between racist discourses and subjectivity specifically in looking at how whiteness
operates … to maintain power relations between white and non-white subjects'. This is explored through gay Asian men, who are excluded on dating sites through stereotypes around race and hierarchies of masculinity imposed by the homosexual white male gaze (Daroya, 2018, p. 77). This is consistently asserted across academic discourses tracing the experiences of people of colour and ethnic minorities beyond settler colonial contexts and into the globalising connectivity of social media landscapes, where racial, gendered, and sexual hierarchies are reconstituted. Much of the extant literature which explicitly addresses online violence and LGBTIQ+ Aboriginal and Torres Strait Islander populations is limited to a particular perspective: that of mostly cisgender gay Aboriginal and Torres Strait Islander men. While expansive research is required, current testimonies produced by gay Aboriginal and Torres Strait Islander men shed critical light on the issue of online violence and the sexual racism experienced on platforms such as Grindr. Media articles reflect a spectrum of experiences, from tense interactions of subtle racism to more explicit acts of racism. An implicit act of racism and discrimination, as described by Barada gay man Casey Conway, occurred in an interaction on Grindr. Conway was contacted by a man whose profile displayed a preference for 'white men only' (Milton, 2020). On principle, Conway rejected the man's advances based on the profile description, only to be met with further discriminatory behaviour and a backlash of racist diatribes and gaslighting, the man belligerently telling Conway to 'get over it… [and that] sexual preference isn't racial preference!' (Milton, 2020).
Yolngu gay man Dustin Mangatjay McGregor called out overt and deeply offensive racist messages received on Grindr, stating that 'gay men who were not white were more likely to be rejected in the online dating world and that he was fed up with users disclosing their racial preferences in derogatory terms … such as "wog abo cunt" and "petrol sniffer" [sic]' (Donnelly, 2016). These accounts further validate critiques of how whiteness places Indigenous peoples in a category of racial, ethnic, and cultural other within gay relationships. Drawing on Goffman (1959, 1966), Carlson (2020) argues that diverse performativities of identity and indigeneity are played out on dating apps as tactical manoeuvres to survive, subvert, or obstruct instances of online violence. Apps facilitate this through profiles which
allow us to present ourselves through images, text, emojis, categories of identification (such as 'I am' and 'looking for'), and even external social media links to other sites such as Instagram and Spotify, which curate a view into who we are (Byron et al., 2019). These descriptive fields allow for both limiting and openly representing our identities in ways that we consent to. Sullivan and Day (2019, p. 5) offer an insight into contemporary sexual transactions and Indigenous identity through Indigenous transmasculine sex workers as they present and advertise themselves and their services both online and offline. They argue that Indigenous sex workers are 'cautious about identifying [as Aboriginal] not only because they are concerned about how they will be treated if they advertise themselves as Indigenous, but also whether they will fit into stereotypes and meet racial expectations'. There is pressure to fit certain archetypes across categories of race, gender, and sexuality, which further restricts queer Indigenous social media users, whether in the context of sex work or of casual encounters and dating. Anxieties surrounding desirability are further explored by McGregor, who utilised Grindr functions to de-identify himself in order to substantiate his argument about rampant racial prejudice (Donnelly, 2016). He conducted a personal experiment in which he removed all descriptors which identified him as Indigenous and mixed race. He states, 'I wondered if people would notice such changes in my profile and they did … I was flooded with messages. There was so many more – I lost count. But when I changed it back [to Indigenous/mixed race] there wasn't as much interest' (Donnelly, 2016). As outlined by Hanckel et al. (2019, p. 1268), forms of anonymity and pseudonymity are utilised by queer people who are 'conscious of audiences who [come] into contact with their content … [and] intentionally conceal or obscure [their identities] by making them difficult to navigate.
By carefully managing what and how they reveal, they are able to manage boundaries with a precision that suits them'. For Aboriginal and Torres Strait Islander LGBTIQ+ peoples, these specifications are also used to conceal or reveal the degree to which their Indigenous identity is present or absent. In contrast, overt expressions of identity employed by Indigenous people on dating apps also serve strategic purposes, for example, to attract the right demographic of potential partners who accept them
for who they are. Monaghan (2015), drawing on Spivak, uses 'strategic essentialism' to describe the ways that Indigenous people mobilise around shared identity. Symbols such as the Aboriginal flag colours [red, black, yellow] are employed across social media sites such as Twitter to signify Aboriginal and Torres Strait Islander identity, as well as allied status. Within the context of dating apps, they may also be used as a mode of identification in profile descriptions to flag interest in other users who may seek Indigenous partners. Moreover, symbols can indicate multiple identities (see Image 1), including emojis representing the Aboriginal flag, the transgender pride flag, and the LGBTIQ+ pride flag with the inclusion of black and brown stripes in recognition of people of colour within the queer community. Such examples situate the complexities of identity experienced by queer Indigenous peoples as they curate their online presence towards what Hanckel et al. (2019) argue is a strategy of 'finding comfort in non-heterosexual identities' and normalising the self online. Identification strategies employed by LGBTIQ+ Aboriginal and Torres Strait Islander peoples have the potential to invest in a kind of queer world-making that does not segregate race, gender, and sexuality.
Methodology

The data collected for this chapter is underpinned by a methodology which is, as far as possible, suited to Aboriginal and Torres Strait Islander LGBTIQ+ populations. At present, there are limited research projects and methodological approaches which account for and explicitly apply to Aboriginal and Torres Strait Islander LGBTIQ+ populations. One study which has contributed towards analytical approaches in research about Aboriginal and Torres Strait Islander LGBTIQ+ peoples is Sullivan and Day (2019, p. 1), who argue that inclusive methodological approaches to Indigenous LGBTIQ+ research must 'ensure [that] the re-telling of Indigenous peoples stories is self-determined. Indigenous Standpoint indicates the ways Indigenous people, places, and philosophies are understood. An Indigenous trans
methodology extends this articulation to ensure conversations of sex and sexuality are included'. In this chapter, I have employed a similar approach using Nakata's Indigenous Standpoint Theory (2007) to locate the position of Aboriginal and Torres Strait Islander LGBTIQ+ peoples between the micro details of what participants say and the wider macro landscape of Aboriginal and Torres Strait Islander LGBTIQ+ identities and community discourses, informed by a survey. This allows the analysis to move back and forth from the survey, to social media texts, and to the larger discursive contexts in which they are embedded. This analysis will begin from the premise that a stated position on diverse sexualities and gender identity is discursively constituted within, and constitutive of, competing sets of social and knowledge relations, and that these relations form a crucial part of the assemblage of elements that can be investigated for the ways they come to inform how subjects can identify and express themselves online. It is important to emphasise that the analytical approach will not seek to represent participant narratives, but to analyse them for evidence of the historical and discursive contingencies of Indigenous agency and self-determination, and for evidence of unity and disunity across discourses around social media and dating apps. Together, these analytical layers will help to produce accounts that can reveal the heterogeneity of social media use among Aboriginal and Torres Strait Islander LGBTIQ+ peoples.
Survey

During January 2021, I conducted an online survey for the broader PhD thesis to which this chapter contributes. The survey received ethical approval from the Macquarie University Human Research Ethics Committee (ID: 4898) and was hosted and launched on Qualtrics Surveys. The survey contained 17 questions: seven multiple-choice and ten open-ended, all relating to online dating apps. The survey was disseminated online through targeted advertisements, predominantly in LGBTIQ+ Aboriginal and Torres Strait Islander Facebook community pages. The survey was also shared through Twitter
where it received much interest through direct responses, sharing, and retweeting; posts spread quickly and attracted prompt responses. Thirty participants completed the survey between 3 and 26 January 2021, most completing it in the first week of its launch (see Image 2). The survey required participants to be over 18 years of age and to identify as Aboriginal and/or Torres Strait Islander, as well as LGBTIQ+ or otherwise gender and sexually diverse.
Participants

The median age of participants was 33, and the most common (modal) age was 27. The largest age bracket overall was 30 to 39 years. The oldest participant was 54 and the youngest 19. The location of participants was overwhelmingly urban, in capital and major cities around Australia. Of the 30 participants, eight currently live in greater Sydney, New South Wales (NSW), and five in Brisbane, Queensland (QLD). In smaller groups, three participants were from Darwin, Northern Territory (NT), three from Melbourne, Victoria (VIC), and three from Perth, Western Australia (WA). Other cities and towns represented in the survey include Cairns and Hervey Bay (QLD); Batemans Bay, Byron Bay, and Newcastle (NSW); Geelong and Warrnambool (VIC); and Launceston, Tasmania (TAS). The only jurisdictions not represented in this survey are South Australia (SA) and the Australian Capital Territory (ACT). These figures are somewhat anticipated as, according to 2016 census data, the majority of Aboriginal and Torres Strait Islander populations are situated in urban regions in eastern states. The survey asked about the current localities in which participants may use dating apps rather than their places of origin, so reported locations may not reflect where participants are from. The identities of participants were located across categories and subcategories of Aboriginal, Torres Strait Islander, and LGBTIQ+ (lesbian, gay, bisexual, transgender, intersex, queer), with '+' opening the space to identify beyond those categories. Of the 30 participants, 27 identified as
Aboriginal, two identified as both Aboriginal and Torres Strait Islander, and one solely as Torres Strait Islander. It must be acknowledged that this does not go far enough in representing the hundreds of unique First Nations tribal groups, communities, and regions which exist within the scope of each broader category, and to which each participant has a relationship and belonging. Across the LGBTIQ+ categories, there were more overlapping arrangements. Participants were allowed to choose more than one answer to account for gender, sex, and sexual identity. Broadly, fourteen identified as gay, seven as lesbian, three as bisexual, three as queer, and three as transgender. These must not be taken as primary identifiers, as the question presented the categories in the sequence of the LGBTIQ+ acronym. All who identified as bisexual also identified as queer. Out of the gay grouping, three identified as transgender, one as queer, and one across gay, transgender, intersex, and 'other'. In the lesbian grouping, one identified as gay and one as queer. In the transgender grouping, one identified as queer and two as 'other'. In the queer grouping, one identified as 'other'. With an added open-text answer available, two identified as non-binary and one as asexual. These intersections provide a diverse cross-section of identities that are necessarily complex. As O'Sullivan (2019) puts it, 'across all of our cultures, the body individually experienced is complex and formative regardless of our outward facing behaviours and societal commitments. The impact of the colonial project… has frequently denied us the subtle complexities of sexuality and gender claimed by mainstream culture in Australia over recent decades'.
Aboriginal and Torres Strait Islander peoples have developed self-determined ways of identifying as LGBTIQ+, such as the growing use of 'Sistergirl' and 'Brotherboy' among Indigenous transgender and queer peoples (Farrell, 2017). Of the 30 participants, three identified as Brotherboy and two as Sistergirl; all who identified in this way also identified as either transgender or queer.
Findings and Discussion

Dating App Experiences

When asked about previous and current dating app use, the survey reflected the use of both LGBTIQ+-specific and non-LGBTIQ+ dating apps and sites. Apps/sites which cater to both heterosexual and LGBTIQ+ users included Tinder, Bumble, OK Cupid, Hinge, and Plenty of Fish. LGBTIQ+-specific sites identified were Grindr, Scruff, Growler, TAIMI, and HER. One user also identified KIK and Instagram, which are not dating sites, as apps they have used for online relationships. Of the 30 participants, 11 identified using only one dating app, 10 using two apps, seven using three apps, one using four apps, and one did not specify any app use. Overall, most participants use one or two apps, with variability across past, current, and sometimes discontinued usage. More detailed descriptions of app usage were elicited by two questions: the first asked whether users identify online as Aboriginal and/or Torres Strait Islander only, LGBTIQ+ only, both, or neither; the second asked how and why they identified as such in the context of social media. Fourteen participants indicated that they identified as both Aboriginal and/or Torres Strait Islander and LGBTIQ+, eleven as LGBTIQ+ only, five as neither, and none as Aboriginal and/or Torres Strait Islander only. When asked why and how they identified, participants gave detailed descriptions of identification and non-identification practices. Those who identified as both Aboriginal and/or Torres Strait Islander and LGBTIQ+ expressed that they typically, and proudly, identified across gender, sexual, and cultural categories on various social media and dating app sites.
This grouping located themselves by identifying their nation, tribe, and language affiliations. One participant, a 19-year-old Brotherboy, writes 'I put my Nation and the Country I am currently on (especially as I am living off-Country)', indicating the use of cultural protocols such as an acknowledgement of country as a part of their social
media practice. Another participant, a 33-year-old gay identified person from Brisbane, said that they identify themselves by 'wearing the Aboriginal flag on my clothing in photos'. These examples demonstrate that cultural protocols are often performed in tandem with open displays of gender and sexually diverse identities. Participants who indicated that they do not present themselves as either Aboriginal and Torres Strait Islander or LGBTIQ+ on dating apps said this was due to factors ranging from casual prerogatives to concerns around privacy. Some reported minimal to no use of functions such as the 'about me' feature on dating apps, with a 27-year-old lesbian identified person from QLD saying that they 'only included their photo and preference on their profile'. A 37-year-old gay identified participant from Brisbane answered that they do not identify as Indigenous 'due to stigma about [being] Aboriginal', while a 45-year-old Aboriginal person from NSW, who identified as both transgender and Sistergirl, pointed critically to the limited options dating apps provide for identifying as Aboriginal and Torres Strait Islander. She argued that 'sometimes there's no option. … Other times on Grindr I put in barely anything on [my] profile, just height, weight and picture, they're fucking my body they don't need to know every little thing about me'. Two participants expressed the same desire for privacy. From the group who answered that they present themselves as LGBTIQ+ only, a 37-year-old bisexual and queer identified participant from Sydney indicated that 'there is no similar way to indicate race' compared to the many ways you can specify your gender and sexual preference on many dating apps, and a 47-year-old gay identified person from the NT noted that 'there's no function to identify as Indigenous unless you put it in your profile text. But it's unnecessary in my opinion'.
In the group of participants who identified as 'LGBTIQ+' only, major themes of conflict, violence, fear, and anxiety emerged. A 43-year-old lesbian and queer identified person from Sydney said '[I] cannot be bothered with the racism', and a 19-year-old gay identified participant from Sydney writes: I don't put the fact I am Indigenous anywhere unless asked about my heritage by the person I am speaking to. This is because there is so much
rampant racism in the queer community especially towards Indigenous queers. I know I will be scrutinised if I say I am Indigenous because I have fair skin. I also do not want to have to use my heritage as a point of discussion for others to decide if they like me or not.
Speaking from experience, the above participant sheds light on the varying forms of discrimination that can, and do, occur online. A major contention around how Aboriginal identity is perceived and understood, by non-Indigenous as well as Indigenous peoples, is explored by another participant, who reported: '[I] don't say I'm mob online because I'm scared of rejection from other mob'. Another participant, a 33-year-old gay identified person from NSW, admitted that 'I'm slightly afraid of harassment or bullying: I try not to give people (who I might reject or not reply to) information that can be used for targeted abuse'. In this regard, the presentation of Indigenous identity is posited as a risk for racially targeted violence from non-Indigenous people, as well as for intracommunity violence. Furthermore, participants expressed concerns relating to experiences offline, as many of these sites use geolocation technologies which can be exploited by perpetrators of online violence (Donnelly, 2016; Pooley & Boxall, 2020).
Dating Apps and Discrimination

The survey also sought to capture instances of violence and discrimination experienced by Aboriginal and Torres Strait Islander LGBTIQ+ peoples on dating apps. Of the 30 participants, 18 (almost two-thirds) said 'yes' to experiencing forms of discrimination online in relation to their race, sexuality, and gender identity, and most reported more than one instance of violence and discrimination. Five participants did not identify any instances of violence or discrimination and seven were unsure. When asked about violence and discrimination, twelve reported experiencing overt racism, for example racial slurs and hostility from non-Indigenous users. Eleven reported experiencing racial stereotyping, and nine reported instances of sexual racism and fetishization, where being Indigenous was superficially desired in sexual ways. Non-racial forms of discrimination were also identified, for example,
being fat-shamed and being targeted for presenting as a 'butch lesbian'. One participant reported experiencing lateral discrimination from another Aboriginal person: '[an] Aboriginal person I matched with ended things when I told them I'm Aboriginal too. They said they [only] like white people'. The most graphic incident in the survey was reported by a 19-year-old non-binary identified participant, who described experiencing 'sexual racism in the form of "race players" and "slavery players" wanting me to be their "black boy slave"'. This experience exemplifies the complex intersection of race, gender, and sexuality, as the participant is objectified through their race as well as being misgendered as 'boy'. Despite having experienced varied forms of violence, 13 participants said that they did not report any offences to the dating apps, primarily on the basis that they felt 'nothing would happen'. Of the nine who did report incidents, most described receiving generic feedback and no outcome. The low rates of reporting and distrust of platforms are highlighted by research which draws attention to the excessive emotional and traumatic labour that minorities undertake on social media platforms, which continue to fail marginalised groups and place the onus of solving these complex issues on the victim (Carlson, 2020; Hanckel et al., 2019; Pooley & Boxall, 2020). To glean whether participants had potentially discriminatory preferences themselves, the survey asked whether they displayed strict preferences on their profiles. Of the 30, 17 said that they do not display detailed preferences. Those who did tended to outline their preferred sexual partner's gender and their sexual preferences, with some noting that detailed descriptors were unnecessary as they were using apps which cater to the sexual categories they were most interested in.
These findings sit in stark contrast to the discriminatory and racist use of 'preference' by non-Indigenous people reported across media discourse, as demonstrated in the experiences of McGregor and Conway (Donnelly, 2016; Lim et al., 2020; Milton, 2020). Participants were also asked whether social media had an effect on their lives offline. Key themes of relationships (short term, long term, sexual), sociality (positive, non-social, anti-social), and health and well-being arose. Several participants reported having increased body-image issues, with one 19-year-old gay participant from Sydney stating, 'I find
myself wanting to always look good in case I see a boy I have seen online or wanting to be skinnier or better dressed to look like the boys that always get boyfriends or hookups'. A 20-year-old gay and trans identified person reflected that dating apps 'ha[ve] made me feel undesirable to other people (especially to Non-Indigenous or cis/straight people)'. Some reported a sense of isolation and despair where dating apps were their only source of social, romantic, and sexual relationships, with one claiming, 'I have become so dependant on using queer dating apps… [and] I feel like [it's] my only option to find love or a community'. Other participants reported feelings of distrust and disunity with the broader non-Indigenous LGBTIQ+ community, with one stating that it gives them a 'poor view of [the] Queer community as being noninclusive', while another described being fearful of 'going to LGBT events [offline as] it makes me concerned [about] closet racists we have in the wider LGBT community, and how many white LGBT people think that they are exempt or excused from racism just because they are LGBT [sic]'. Participants variously described social media apps as intrusive, time-consuming, toxic, addictive, and isolating. These descriptions show a connection between experiences of racism and discrimination in online and offline contexts which, for many, will have a great effect on their social, cultural, and romantic lives and their mental health and well-being (Carlson & Frazer, 2018).
Positive Experiences on Dating Apps

The survey also asked participants to reflect on any positive aspects of using dating apps. Three reported finding long-term partners and good relationships as a result. Many reported an expansion of their social lives and an enriching social and cultural experience, reflecting that 'it's been a great way to expand my circle, but also because our world is so small as queer mob, it has also been a bit of an exercise in finding out who's related to you and who's dtf [down to fuck]. It's an interesting way that blak queer friendship[s] has started to include (not mandate, but expand to, if everyone's interested) sex and hookups'. Several users described dating apps as performing a
vital service in providing them with options for love, sex, and relationships, for example, 'finding someone that is into trans women', 'meeting like-minded people', and 'more sex and new friends'. Many participants felt that the geo-location features of dating apps gave them an advantage when seeking people out online: the ability to immediately locate other LGBTIQ+ and Aboriginal and Torres Strait Islander peoples wherever they went, relief from stressors such as isolation through participation in dating culture, and a boost to their self-esteem through flirting and sexual release.
Conclusion

While social media has facilitated a wealth of potential for socialisation, romance, and sex, it continues to harbour and facilitate toxic, dangerous, and violent experiences which affect Aboriginal and Torres Strait Islander LGBTIQ+ peoples in unique ways. This population overwhelmingly faces discrimination based largely on their racial and cultural identities as Indigenous peoples. They are further discriminated against through forms of sexual racism and symbolic exclusion, racist and white supremacist practices which inhabit dating apps under the guise of, for example, 'sexual preference'. What compounds the sense of hopelessness for victims of discrimination and violence is that there continues to be no meaningful recourse in the digital infrastructure to resolve these issues. Compounding this silencing and erasure, platforms do not include or facilitate the recognition of Indigenous identity in meaningful and safe ways. This is countered by users who have subverted exclusion through innovative ways of expressing their identities via visual and text descriptors such as emojis. Burgess (2020), informed by black and Indigenous advocates and scholars, suggests that users set firm boundaries online, seek community, process emotions, reclaim the narrative, and, if needed, seek professional and emergency services. Pooley and Boxall (2020, p. 12) conclude that the 'onus of preventing and protecting users from online violence is placed on the victim', while mechanisms of protection remain difficult to enforce across law enforcement, governance (policy), and between
internet service providers and dating app platforms, which do not sit within a neat jurisdiction. What this calls for is more comprehensive local and national responses which are inclusive and aware of multiple minority experiences. Such responses must also be forthright in acknowledging the platforming of settler colonialism, white supremacy, and the many forms of racial violence which persist in digital terrains. In terms of research, more needs to be done to identify patterns of discrimination with a focus on perpetrators, rather than continuing to put a magnifying glass on marginalised peoples.

Andrew Farrell is an Indigenous Early Career Academic Fellow and Ph.D. student in the Department of Indigenous Studies, Macquarie University. Andrew is a Wodi Wodi descendant from the Jerrinja Aboriginal community on the South Coast of NSW. Their research is multidisciplinary with a focus on Aboriginal LGBTIQ+ gender and sexualities, community, media, and online studies.

Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Burgess, B. (2020). Racism on social media affecting your mental health? Here are 4 things you can do to take care of yourself. https://www.abc.net.au/everyday/how-to-deal-with-racism-on-social-media/12827648

Byron, P., Robards, B., Hanckel, B., Vivienne, S., & Churchill, B. (2019). "Hey, I'm having these experiences": Tumblr use and young people's queer connections. International Journal of Communication, 13, 2239–2259.

Carlson, B. (2020). Love and hate at the cultural interface: Indigenous Australians and dating apps. Journal of Sociology, 56(2), 133–150.

Carlson, B., & Frazer, R. (2018). Cyberbullying and Indigenous Australians: A review of the literature. Macquarie University.

Clark, M. (2015). Are we queer? Reflections on 'peopling the empty mirror' twenty years on. In D. Hodge (Ed.), Colouring the rainbow: Blak queer and trans perspectives: Life stories and essays by first nations people of Australia (pp. 282–298). Wakefield Press.
Collins, P. H. (1993). Toward a new vision: Race, class, and gender as categories of analysis and connection. Race, Sex & Class, 1(1), 25–45.

Daroya, E. (2018). 'Not into chopsticks or curries': Erotic capital and the psychic life of racism on Grindr. In D. Riggs (Ed.), The psychic life of racism in gay men's communities (pp. 67–80). Lexington Books.

Donnelly, B. (2016). Gay minorities speak out against racist slurs on Grindr. http://www.smh.com.au/national/gay-aboriginal-man-publishes-racist-slurs-on-dating-app-20160418-go8zov.html

Farrell, A. (2017). Archiving the Aboriginal rainbow: Building an Aboriginal LGBTIQ portal. Australasian Journal of Information Systems, 21, 1–14.

Gays and Lesbians Aboriginal Alliance. (1994). Peopling the empty mirror. In R. Aldrich (Ed.), Gay perspectives II: More essays in Australian gay culture (pp. 1–62). Sydney.

Goffman, E. (1959). The presentation of self in everyday life. Penguin.

Goffman, E. (1966). Behavior in public places: Notes on the social organization of gatherings. Simon and Schuster.

Hanckel, B., Vivienne, S., Byron, P., Robards, B., & Churchill, B. (2019). 'That's not necessarily for them': LGBTIQ+ young people, social media platform affordances and identity curation. Media, Culture & Society, 41(8), 1261–1278.

Lim, G., Robards, B., & Carlson, B. (2020). Grindr is deleting its 'ethnicity filter' but racism is still rife in online dating. https://livewire.thewire.in/out-and-about/grindr-is-deletings-its-ethnicity-filter-but-is-racism-over-in-online-dating/

Lumby, B. (2010). Cyber-indigeneity: Urban Indigenous identity on Facebook. Australasian Journal of Indigenous Education, 39, 68–75.

Milton, J. (2020). It took just one day for this retired rugby player to be pelted with racist abuse on Grindr. https://www.pinknews.co.uk/2020/02/11/casey-conway-aboriginal-australia-rugby-grindr-racism-twitter/

Monaghan, O. (2015). Dual imperatives: Decolonising the queer and queering the decolonial. In D. Hodge (Ed.), Colouring the rainbow: Blak queer and trans perspectives: Life stories and essays by first nations people of Australia (pp. 195–207). Wakefield Press.

Nakata, M. (2007). Disciplining the savages, savaging the disciplines. Aboriginal Studies Press.

O'Sullivan, S. (2019). A lived experience of Aboriginal knowledges and perspectives: How cultural wisdom saved my life. In J. Higgs (Ed.), Practice wisdom: Values and interpretations (pp. 107–112). Brill/Sense.
Picq, M. (2020). Decolonizing Indigenous sexualities: Between erasure and resurgence. In M. Rahman, S. M. McEvoy, & M. J. Bosia (Eds.), The Oxford handbook of global LGBT and sexual diversity politics (pp. 169–184). Oxford University Press.

Pooley, B., & Boxall, H. (2020). Mobile dating applications and sexual and violent offending. Trends and Issues in Crime and Criminal Justice, 612, 1–16.

Sullivan, D., & Day, M. (2019). Indigenous transmasculine Australians & sex work. Emotion, Space and Society, 32, 1–7.

Truong, K. (2018). After 'sexual racism' accusations, gay dating app Grindr gets 'Kindr'. https://www.nbcnews.com/feature/nbc-out/after-sexual-racism-accusations-gay-day-app-grindr-gets-kindr-n912196
18 'Women Get Away with the Consequences of Their Actions with a Pussy Pass': Incels' Justifications for Misogyny Lisa Sugiura
Introduction

Incel stands for 'involuntary celibate'. The term dates back to 1993, when a female Canadian student, Alana, created a website to talk about her 'Involuntary Celibacy Project'. On the website, incel was described as 'anybody of any gender who was lonely, had never had sex or who hadn't had a relationship in a long time'. Although the original foundations of the term remain, incel communities have since developed to exclude women and to propagate misogyny, hatred and violence. Prior to the killings committed by Elliot Rodger in Isla Vista, California in 2014, and the subsequent acts of violence inspired by him (see in particular Parkland 2018, Toronto 2018, Toronto 2020), incels were relatively unknown: an inconsequential subculture confined to the online recesses

L. Sugiura (B) School of Criminology and Criminal Justice, University of Portsmouth, Portsmouth, UK e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_18
of the 'manosphere', free to operate relatively undetected. However, these (and other) attacks have led to increased media, security and scholarly attention, which has seen comparisons drawn between incels, terrorists and other ideologically motivated extremist groups (Baele et al., 2019; Beauchamp, 2019; Collins, 2019; Richter & Richter, 2019). To date, work on extremism and ideologically motivated violence has broadly focused on religious or political motivators (Lakhani, 2020; Silke, 2008), with limited attention afforded to gender as a driver (Ferber & Kimmel, 2008; O'Malley et al., 2020). Thus, this chapter explores the justifications incels present to absolve their flagrant misogyny, and aims to encourage further debate about the role of gender in extremist behaviours. Incels are not an isolated phenomenon; they are part of a larger backlash against women and feminism bolstered by the manosphere (Marwick & Caplan, 2018), which involves groups of men including Men's Rights Activists (MRAs), Pick-Up Artists (PUAs), and Men Going Their Own Way (MGTOW). Whilst the main foci of these groups differ, they are connected by the perception that women are marginalising and subjugating men in contemporary society and that men need to fight back to preserve their rights (Dragiewicz, 2008; Gotell & Dutton, 2016). Although incels are an extreme manifestation of misogyny, their problematic attitudes are not contained to the online spaces they and other manosphere groups frequent. They are symbolic of structural misogyny and patriarchal systems of socialisation.
Moreover, the ideology espoused within incel communities is interwoven with the wider Western sociopolitical climate, which has seen increasing resistance and retaliation against equality and improvements for marginalised groups.1 This type of extremist behaviour is not confined to online spaces, but it is exacerbated by digital technologies embedded into our daily lives due to the symbiotic relationship between technology and society (Powell et al., 2018). In light of this, this chapter will show how incels, their ideology, and their
1 As seen in the growth of the far right and other fascist groups (Daniels, 2018; Main, 2018; Mondon & Winter, 2020; Winter, 2019).
hatred of women are not only symptomatic of wider normalised societal misogyny, but reinforced by it. The chapter is informed by a constructivist ethnographic study involving non-participant observation and thematic analysis of publicly available incel discussions, videos and comments on social media platforms. I also conducted 10 interviews with self-identified incels, in order to understand the influence and philosophy of incels from a criminological gendered perspective. Explanations and justifications for incels' hatred of women are structured around three key interrelated themes: misandry (adopting victim status), rejection (sex and intimacy), and women as naturally inferior/corrupt, all of which are epitomised within the incel Blackpill ideology. Incels' critique is systemic, extending to the basic structure of Western society itself. In their view, there would be no incels if women were not afforded the freedom to choose whom to have sex with. The logical conclusion of the Blackpill, as one incels.co user writes, is that 'women should have never been given any rights'.
Incels and the Manosphere

Incels view themselves as unsuccessful in obtaining sex and romantic relationships with those they desire, and engage with the online communities of the manosphere as a means of support for their perceived isolation. Although there has been an emphasis on frustrated virgins (Tait, 2018), some incels have had sex but have since been rejected, been single for a long time, or slept with a sex worker (discounted as 'real' sex in the incel community). The emphasis is on heteronormative partnerships, and generally incels are young (white)2 men unable to attract the women [they want], and hence unable to meet expected hegemonic masculine ideals. A quintessential facet of hegemonic masculinity is heterosexual sexual prowess, essentially the presumption of entitlement to heterosexual sex, whereby women are viewed as sexual objects

2 Evidence demonstrates the prevalence of white, educated young men populating the incel community; however, other ethnicities are also present. In particular, there is growing participation from incels in India. Indian incels are deemed inferior to the white ideal, further adding to how they demonise themselves through the use of terms such as currycels, ricecels and blackcels.
or conquests (Connell & Messerschmidt, 2005; Whitehead, 2002). The lack of relationships and sex has led some incels to claim significant psychological distress, such as depression and loneliness (Burgess et al., 2001). Some men view society as economically, sexually and socially favouring women, which has led to a sense of 'aggrieved entitlement' and the adoption of victimhood (Kimmel, 2015). Ging (2017, p. 11) investigated how theories of masculinity operate within the manosphere and found that traditional ideas of hegemonic masculinity and power are intersected by the utilisation of 'victimhood'. She argues that social media provides the perfect platform for amplifying the expression of this 'new hybrid' of masculinity, which she calls 'aggrieved manhood'. Indeed, MRAs often utilise language from feminism in order to portray symbolic and systemic harms against men instead (Nicholas & Agius, 2018). Van Valkenburgh's (2018) content analysis of 'Red Pill' related documents showed that the ideology is not just an expression of hegemonic masculinity, but that it also incorporates scientific discourse and elements of neoliberalism. This is shown in the way it economises sexuality and treats women as commodities with a quantifiable exchange value, known as 'Sexual Market Value' (SMV). This framework removes intimacy from human relationships, meaning that women no longer threaten the emotional boundaries established by hegemonic masculinity. Van Valkenburgh (2018) discussed how women are treated as exchangeable commodities within the ideology of the manosphere. Incels place looks on a numerical scale from 1 to 10 (with 10 being the greatest value) in order to rate and compare the innate value of an individual, framing discussions such as 'any 5/10 woman can land a 6/10 or 7/10 boyfriend and/or husband'. Bratich and Banet-Weiser (2019) point to the failures of neoliberalism as the underlying cause of the increase in violence seen within the manosphere.
Neoliberalism has failed to give men the self-confidence it promises, instead relying on misogynistic ideas, resulting in reactive violence against women who do not comply with patriarchal gender roles for sexual reproduction. This violence often starts with online harassment and increases in severity. However, Lumsden (2019) discovered the denial of women and feminists as victims
of online violence in MRA forum discussions, whereby MRAs justified online abuse as retaliation for men's online victimisation. Incels claim that their lack of sexual success is predetermined by their physical appearance. Their worldview consists of a fixed three-tier social hierarchy based entirely on appearance, where a minority of alpha 'Chad' males and the most attractive 'Stacy' women reside at the top, the majority of normal, average-looking people are in the middle, and a minority of ugly 'incels' languish at the bottom. 'Lookism' is also a central feature of the Red Pill philosophy ascribed to by other manosphere groups (Papadamou et al., 2020), with the PUA community in particular exchanging tips on how to improve their appearance by 'looksmaxxing' to increase their chances of sexual success. However, where incels differ is in their nihilistic belief that this hierarchy is immutable and impermeable: it is impossible to move between categories (Baele et al., 2019). The red pill analogy draws on the film The Matrix, in which the protagonist, Neo, is presented with a choice between taking the red pill or the blue pill. If he takes the blue pill, he can continue to live blissfully unaware of the façade he is currently living in, whereas if he takes the red pill, he will know the truth about the world. In the manosphere, this analogy is used to 'awaken men to feminism's misandry and brainwashing' (Ging, 2017, p. 3). Incels have created a third pill, the Blackpill, which, once metaphorically consumed, reveals the supposedly uncontested nature of reality: the world is structured against 'low status' men in favour of women and alpha males; personal solutions to systemic oppression are impossible; men who are 'genetically inferior' have been, are and will always be socially disadvantaged; and women are biologically wired to seek out conventionally attractive and rich male partners.
Studies from evolutionary biology and psychology are uncritically presented to support the Blackpill and the stance that feminism is the source of all men's problems. For example, incels believe in hypergamy: that it is within women's inherent nature to seek high-status [male] partners, and that women are therefore solely driven by looks and wealth. They also deploy an adulteration of Pareto's 80/20 rule, whereby 80% of women desire and compete for the top 20% of men, while the bottom 80% of men compete for the bottom 20% of women. However, this ideology is not unique or novel; rather, it is influenced and
informed by recognisable and established tropes borne of the historical subjugation of women and minority groups, pseudoscience and internet culture: what Lindsay (2020, p. 41) refers to as a 'hetero patriarchal racial caste-system' that weaponises misogyny, leading to acts of gender-based hate speech and more extreme mass violence.
New Tools, Old Beliefs

Incels have been considered a misogynistic 'fringe' due to their explicit sexism and hatred of women (Tait, 2018); however, the misogyny espoused by incels, normalising violence against women, is not new. It remains endemic, with digital technologies creating unprecedented ways for misogyny to advance and disseminate. For example, online communities and virtual platforms have provided the means for the collective animosity of incels and other manosphere groups to grow on an exponential scale, enabling a 'networked misogyny' (Banet-Weiser & Miltner, 2016, p. 2) to flourish. Furthermore, the search capabilities and algorithmic politics inherent within technology reflect designers' perspectives and thus advance straight, white male interests (Ging, 2017; Massanari, 2017), whilst anonymity and the availability of polarising topics have made it easy to find others with similar interests and/or belief systems (Perry, 2001). Antiquated ideas about biology and race are given contemporary validation online via forums, social media and memes, but are also propagated offline in mainstream media, politics and academia. Though much of the content found in incel communities is extreme and blatant, the ideas are more palatable when advocated on mainstream platforms by powerful individuals, such as the former President of the United States. The fact that Trump was able to continue in post after his infamous 'grab them by the pussy' remarks were leaked was demoralising for women everywhere. The incel ideology involves a bi-directional dissemination: old misogynistic and racist notions such as pro-rape sentiment, eugenics and antisemitism have been regurgitated and found a new lease of life among incels, and are then recycled elsewhere on and offline. Often content appears ridiculous and extreme, yet some of this is intentional,
'shitposting' culture deliberately deployed to derail discussions (Jenkins, 2019). To the uninitiated, the 'normies', incels' views are shocking; yet to those who have been immersed in internet culture perpetuating more acceptable forms of misogyny, they are all too familiar and inviting. Moreover, manosphere logic and tactics of reversing the roles of victims and perpetrators have become closely aligned with conventional hyperbole around sexual violence. Laura Bates (2020, p. 248) refers to this as a 'symbiotic relationship', whereby such rhetoric in the mainstream media emboldens and encourages online extremists, and extremist communities' enthusiastic responses reward editors with clicks and shares.
Study Design

In order to understand the influence and philosophy of incels, an online ethnographic approach—netnography (Kozinets, 2019)—was utilised. The methods involved non-participant observation and thematic analysis (Braun & Clarke, 2012) of purposively selected forum discussions, comments, memes, images and videos from publicly available posts on Reddit, 4chan, Twitter, and dedicated incel websites. Threads including the term 'incel' were explored, and incel-related search terms and hashtags were used. The same search terms were employed on YouTube, along with the recommended-videos feature to further probe suggested/similar videos, also taking into account how the platform's algorithms promote such content. There was no direct interaction with users/posters; rather, time was spent immersed on the sites, and it was unnecessary to become a member of any community to access data. Relevant posts were collected via manual means—that is, copying and pasting—and via the software NodeXL, which obtains tweets from the previous seven days. The online observations equated to more than 100 hours, with 10,264 pieces of data analysed. Data was excluded from analysis if it was clear that the poster did not identify as an incel and was blatantly an outsider to the community, for example, posts from users criticising incels and their belief systems.
The research also involved 10 qualitative semi-structured interviews with self-identified incels (n = 7) and those who identify as ex-incels (n = 3). Participants were identified through convenience sampling and were all men. There are some women who identify as incel, known as femcels, and although the research was focused on incels generally, not specifically male incels, it was only men who were interviewed. It is worth noting that this might be due to the targeted spaces being less inclusive of female incels, as well as the potential reluctance of female incels to be interviewed. Participants were identified via incel Reddit threads and by searching Reddit for phrases like 'I am/ I used to be an incel', and were messaged via Reddit's messaging service to invite them to interview. As users have pseudonyms, it is often unclear what gender they are, and potential interviewees were chosen based on their profiles and contributions to the forums, rather than aiming solely for men. Nevertheless, all the participants stated that they were men. Interviews were conducted online via email or private messages on Reddit. All elements of the study received institutional ethical approval (Project Number: FHSS 2019-061). Usernames of individuals quoted from the online data, as well as from the interviews, have been removed for the protection of their privacy. Forum and social media posts have also been curated to prevent reverse searches revealing identifying information, in keeping with accepted ethical procedures for the presentation of qualitative online data (Sugiura et al., 2017), especially where informed consent has not been obtained.
Justifications for Misogyny

The analysis of the data unsurprisingly found an abundance of misogynistic ideals propagated by incels; supplementing these were intertwined justifications in the form of retaliating against misandry, countering rejection, and the 'realisation' that women are both naturally inferior and corrupt. Such justifications validate incels' hatred towards women, yet these ideas pre-date and co-exist alongside incels, on and offline, often in respected social and political spheres, further endorsing
misogyny and violence against women, and the continued growth and appeal of communities like incels.
Misandry

Due to not meeting hegemonic masculine standards and feeling that women are unfairly favoured in society, incels view themselves as the oppressed: victims of evolutionary biology (in not being born attractive) and of misandry (as a result of feminism, the world is now structured to hate [certain] men). Incels justify their misogyny by turning the tables; as these users demonstrate, rather than hating women, it is framed as fighting misandry and standing up for men's rights:

If you aren't an above average male, short, ugly, or aren't human, remember women hate you and wish for you to die. Not returning that hatred is the most soy3 thing you can do. Any man who isn't a misogynist in today's day and age is a complete retard. We are looked at as the scum of the earth, nice or not. Nothing we can do as below average men will ever be enough. May as well give them a taste of their own medicine.

Misogyny implies irrational disgust and hate of women. being [sic] disgusted by foids4 is completely rational.

That's why I hate them [women]. Because they treat me like shit. Because I have respected them and helped them all my life and they don't care. Isn't my hatred justified? Am I the asshole for hating those who hate me? Why is that so difficult to understand for the rest of the world?
Marwick and Caplan (2018), in their study exploring the use of the term misandry within the manosphere, highlight how it serves as a weapon to counter feminist language and ideas and is viewed as

3 The word 'soy' is used in reference to anything perceived as feminine, influenced by scientific studies which discovered that soy contains a form of the female sex hormone oestrogen.
4 Incels frequently refer to women as foids, short for female humanoid organism. Other popular terms include: femoid, toilets or holes, and anything which is purposefully dehumanising and derogatory, as incels perceive women to be 'subhuman'.
a false equivalence by feminists. Incels, like MRAs, use tropes of male victimhood to retaliate against feminism, which they perceive to prioritise women's rights over men's. Terms such as 'toxic masculinity' are co-opted, with myriad discussion threads and videos dedicated to 'toxic femininity', and extreme claims that feminists are actively seeking the annihilation of men (notwithstanding the Chad alpha males). Hysteria and panic are whipped up in incel spaces about false rape allegations, male victims of domestic abuse, fathers losing access to their children and (unknowingly) raising other men's children. Incels denigrate the notion that women face any structural oppression or inequality. Misandry is used as a counterargument to feminism rather than a way of calling attention to, and seeking solutions for, men's problems.

I hate women because we give them too many rights that they clearly don't fucking deserve. Like having the ability to kick a man out of his OWN FUCKING HOUSE, the ability to take his fucking kids from him, the ability to commit a crime such as murder and get a slap on the wrist, and so on. Women do not deserve the rights they have given what they do with them.
Though these narratives did not originate online, they have contributed to exaggerated, one-dimensional portrayals of these issues that have almost become accepted fact. These are not discourses espoused only by deviant fringe groups; rather, they have increasingly become the norm in mainstream media and public discourse following the backlash to feminist gains such as #MeToo, and the accusations that the movement has encouraged witch hunts of men.5 Pseudoscience or pure lies create a contrasting reality involving false equivalences or trends whereby incels, rather than women, are the victims, and their hatred is reframed as self-defence. The tendrils of misandrist beliefs are difficult to escape once internalised, with even supposedly ex-incels still defaulting to gendered allegories, one interviewee opining the following, which inspired the chapter's title: 'women get away with the consequences of their actions with a pussy pass'.

5 https://www.pewsocialtrends.org/2020/08/20/nearly-half-of-u-s-adults-say-dating-has-gotten-harder-for-most-people-in-the-last-10-years/.
Rejection

A common theme throughout incel communities is that of rejection; this is presented not only as a reason to hate women, but as what made them initially self-identify as incels. O'Malley et al. (2020) state that incel rhetoric may include the normal anxieties of young men transitioning into adulthood. Almost no one is immune to romantic failure, or to the craving, depression, fear, and rage that rejection causes (Baumeister & Dhavale, 2001). However, for incels, what is viewed by many as a (albeit painful) rite of passage is all-consuming, often leading them to seek out answers for their rejection. At the core of the Blackpill are explanations presented with a veneer of validity under the guise of social and evolutionary theory, enticing to those raw from the misery of unrequited love. Yet incel narratives are often less about finding love and a partner; rather, anger and frustration are displayed in response to being denied sex.

I didn't start being hateful with them. Why would I? I don't give a shit how foids behave or who they fuck, as long as they give me love too. But they won't. If I could find a girlfriend just by treating her nicely, do you think I would hate them? If women gave me sex the same as they give it to Chad, do you think I'd hate them? But they won't. That's why I hate them with [a] passion. I rejoice every time a foid gets killed by her Chad bf. I laugh every time one of them gets raped for being a whore. They rejected me just because I'm ugly. They deserve it.
In explaining the journey to becoming incel, interview participants spoke about being rejected by girls in their youth, but were keen to stress the significance of romantic intimacy rather than sexual relationships, which may have been influenced, in part, by speaking with a female researcher.

What made me identify as an incel was when I realised how much I struggled with dating and even forming friends with girls. I pretty much never had any female friends in my life
366
L. Sugiura
Being an incel is a lone battle. It has been a cycle of rejection from girls. Sometimes, it is because you are not smart, or you lack the social skills for courting.
Nevertheless, women’s agency in being able to choose who they date and who they have sex with is used by incels to explain their lack of sexual success. Assuming the marginalised role, they view women as the oppressors: women’s right to refuse sex leaves incels feeling powerless, because they believe that, as men, they are entitled to it (Anderson, 2005; Kimmel, 2010). Incels frequently reaffirm the perception that women owe men sex, and that refusal to provide this is in direct conflict with a man’s sense of masculinity (Anderson, 2005; Connell & Messerschmidt, 2005; Kimmel, 2010). The Sexual Market Place, over which women retain dominance, is viewed by incels as a barrier to them obtaining sex.

Romantic relationships between men and women function under a model of prostitution; all men know we need to have access to resources to gain access to sex. If you’re a woman (especially a decent looking one) you still have inherent (sexual) value.
Sex redistribution, as professed by Valizadeh (more commonly known in the online sphere as PUA Roosh V) and addressed in a paper by Jaki et al. (2019), is viewed as a means to deter incels from engaging in violence. This would absolve incel killers: because of their lack of romantic or sexual success with women, their actions were simply attempts at gaining attention, and hence they were not responsible for them. The rationalisation of sexual deprivation as grounds for mass murder is a classic example of Sykes and Matza’s (1957) techniques of neutralization, whereby responsibility is denied through the perpetrator being a victim of their circumstance, forced into a situation beyond their control. Furthermore, denial of the victim also applies, whereby victim status is appropriated by the perpetrator from those who were the actual victims. In this instance, there is an implication that the victims deserved what happened
to them, as they were mere collateral in the incel fight back against a society which (they claim) created them. The argument behind sex redistribution is that the state should provide sex workers (who are not referred to in such terms within the manosphere; rather, they are reduced to whores) to incels. Training would be provided to ensure incels receive specialist treatment and are made to feel ‘handsome’, ‘powerful’ and ‘confident’. Incels claim that the funding for this programme should come from single women paying a tax on birth control products. As sex workers are already a marginalised group vulnerable to abuse, presenting them as a solution for misogynist and lonely men further dehumanises them (Del Valle, 2018). Moreover, this ‘solution’ contradicts the incel worldview: most women are ‘subhuman’,6 sex workers do not even meet this characterisation, and sex with them is considered invalid (such that an interviewee still claimed they were a virgin after a sexual encounter with a sex worker).
Women as Naturally Inferior/Corrupt

Strengthening the perspective that misandry is pervasive in contemporary society, and underpinning the reasons behind their sexual rejection, is the incel belief that women are inherently abhorrent. On the one hand, women are viewed as pitiful subordinates ‘less than’ men: less intelligent, weaker, less capable of empathy or rational thought, hypocritical liars who fabricate sexual violence, and whose only purpose is reproduction and sex.

But seriously, I care nothing about foids, I don’t care about their intellect (or lack thereof), their so called oppression, their fake rape stories, nothing about them. The only thing I care about is sticking my johnson in their holes. If I’m not doing that, then there’s no need for us to even interact with each other.
On the other hand, women are deemed to have power through being manipulative, deceptive and evil, hence the ability to oppress men by withholding sex, exploiting their SMV advantage. Experiences with women have led incels to generalise about all women and categorise them according to the Blackpill ideology:

I got much more deep into the manosphere (including black pill) after my (now ex) girlfriend cheated on me with a hotter guy. I started believing their ideas that women monkey-branch to other men, they are never loyal, and will go on the “cock-carousel”.

6 Female family members are often excluded from this and viewed as the exception.
The Blackpill validates the notion that women are naturally predetermined to be paradoxically both inferior to and noxious to men, and is core to how incels justify misogyny.

Reading blackpill topics made me start hating women, even if they didn’t do anything to me. The more I lurked [on] those sites, the more I learnt about women’s nature by reading surveys, stats and papers, the more i hated women. The blackpill made me see woman [sic] in a completely new way. Society teaches you that women are beautiful, pure, innocent, that must be protected. It tells you that women are better than men because women don’t hate, don’t discriminate. It tells you that women aren’t as superficial as men when it comes to dating. The Blackpill destroyed all the beliefs imprinted by society in my mind. I couldn’t see women in the same way I did before. Society gives women a beautiful veil of pureness, but the blackpill tore that veil. I also hated women’s hypocrisy in negating the fact that looks matter a lot, and that often personality isn’t enough when you’re ugly. I started to think (and I still think) that women are the real Nazis.
Women are stripped of their human attributes and demonised as the evil other, transforming them into the instinctive enemy of incels and thereby permitting aggression and hatred towards them.
Conclusion

This chapter has provided insight into the influence and philosophy of incels and how they justify their flagrant misogyny. This is structured around the interrelated themes of misandry, rejection and women as naturally inferior/corrupt, which are neatly packaged into the incel Blackpill ideology. In embodying the Blackpill and becoming an incel, there is a conscious rejection of women and of sexual and romantic relationships with women, which ironically then causes bitterness and resentment, as these are the very things they crave. The Blackpill does not improve individuals’ lives. It is not a replacement for the sex and intimacy that incels desire; rather, it enables them to feel superior in having exclusive knowledge, which seemingly offers respite from solitude while actually increasing their loneliness and self-loathing.

The poignant complexity and irony of the manosphere is that there are well-meaning groups that tackle genuine problems affecting men, not just groups deliberately and systematically using these issues to promote physical and sexual violence against women. However, incel communities, which attract and appear to comprise many vulnerable men, are not providing or receiving essential support; rather, members are met with vitriol, revulsion and ridicule, with suicide actively encouraged. The misogyny prevalent within incel communities should expose the harsh reality that misogyny continues to be pervasive within contemporary society. The nostalgia yearned for by incels, expressed through the Blackpill’s desire to return to the ‘golden age’ (Baele et al., 2019, p. 13), involves repealing hard-fought feminist gains and propagating political violence against women. Though unlikely to satisfy incels, who require women to be stripped of all autonomy, the backlash against feminist advancement is unfortunately not contained to the confines of the manosphere. Debates about revoking Roe v. Wade in the United States continue,7 and political violence against women is an all-too-common occurrence, with female politicians and campaigners increasingly targeted globally (Chesney-Lind, 2006; Krook & Sanín, 2020; Sanín, 2020).

An argument propagated within incel communities is that incels did not exist in the past, when gender roles were more rigidly enforced and women had fewer rights, which (they claim) negates feminists’ explanation that incels are a product of the patriarchy or ‘toxic masculinity’. However, this argument is inherently flawed, as it only further signifies that incels are a reaction to societal progression and gender equality. Moreover, the contribution of a digital society (Powell et al., 2018), enabling communities to assemble together on- and offline in a marriage of hatred and violence, has been overlooked. Incels do not exist, and do not operate, solely within an online vacuum. If the toxic societal construction of masculinity and normalised misogyny are not addressed, how can we effectively respond to and mitigate them? Incels are a reflection of a society grounded in misogyny, and online spaces mirror a reality that enables entitlement, anger and hatred towards women. It is therefore crucial to examine and challenge misogyny in its entirety; otherwise further young men vulnerable to this insidious ideological grooming risk the unhappy existence of the Blackpill, and women will continue to be at risk of such ideologically inspired violence.

7 https://www.pewresearch.org/politics/2019/08/29/u-s-public-continues-to-favor-legal-abortion-oppose-overturning-roe-v-wade/.

Acknowledgements This research was supported by the University of Portsmouth Themes Research and Innovation Strategic Fellowships (TRIF) fund.
References

Anderson, E. (2005). Orthodox and inclusive masculinity: Competing masculinities among heterosexual men in a feminized terrain. Sociological Perspectives, 48(3), 337–355.
Baele, S. J., Brace, L., & Coan, T. G. (2019). From “incel” to “saint”: Analyzing the violent worldview behind the 2018 Toronto attack. Terrorism and Political Violence, 1–25.
Banet-Weiser, S., & Miltner, K. M. (2016). #MasculinitySoFragile: Culture, structure, and networked misogyny. Feminist Media Studies, 16(1), 171–174.
Bates, L. (2020). Men who hate women: From incels to pickup artists: The truth about extreme misogyny and how it affects us all. Simon and Schuster UK.
Baumeister, R. F., & Dhavale, D. (2001). Two sides of romantic rejection. In M. R. Leary (Ed.), Interpersonal rejection (pp. 55–71). Oxford University Press.
Beauchamp, Z. (2019). Our incel problem. Vox. https://www.vox.com/the-highlight/2019/4/16/18287446/incel-definition-reddit
Bratich, J., & Banet-Weiser, S. (2019). From pick-up artists to incels: Con(fidence) games, networked misogyny, and the failure of neoliberalism. International Journal of Communication, 13, 25.
Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper (Ed.), The handbook of research methods in psychology. American Psychological Association.
Burgess, E. O., Donnelly, D., Dillard, J., & Davis, R. (2001). Surfing for sex: Studying involuntary celibacy using the internet. Sexuality and Culture, 5(3), 5–30.
Chesney-Lind, M. (2006). Patriarchy, crime, and justice: Feminist criminology in an era of backlash. Feminist Criminology, 1(1), 6–26.
Collins, P. (2019). Incel: How an online subculture has led to violence against women. Blue Line. https://www.blueline.ca/how-an-online-subculture-has-led-to-violence-against-women/
Connell, R. W., & Messerschmidt, J. W. (2005). Hegemonic masculinity: Rethinking the concept. Gender & Society, 19(6), 829–859.
Daniels, J. (2018). The algorithmic rise of the “alt-right.” Contexts, 17(1), 60–65.
Del Valle, G. (2018). Don’t ask sex workers to solve the problem of violently angry men. The Outline. https://theoutline.com/post/4407/sex-workers-won-t-solve-the-incel-problem
Dragiewicz, M. (2008). Patriarchy reasserted: Fathers’ rights and anti-VAWA activism. Feminist Criminology, 3(2), 121–144. https://doi.org/10.1177/1557085108316731
Ferber, A. L., & Kimmel, M. S. (2008). The gendered face of terrorism. Sociology Compass, 2(3), 870–887.
Ging, D. (2017). Alphas, betas, and incels: Theorizing the masculinities of the manosphere. Men and Masculinities, 22(4), 638–657. https://doi.org/10.1177/1097184X17706401
Gotell, L., & Dutton, E. (2016). Sexual violence in the ‘manosphere’: Antifeminist men’s rights discourses on rape. International Journal for Crime, Justice and Social Democracy, 5(2), 65. https://doi.org/10.5204/ijcjsd.v5i2.310
Jaki, S., De Smedt, T., Gwóźdź, M., Panchal, R., Rossa, A., & De Pauw, G. (2019). Online hatred of women in the Incels.me forum: Linguistic analysis and automatic detection. Journal of Language Aggression and Conflict, 7(2), 240–268.
Jenkins, C. (2019). Year in a word: Shitposting. Financial Times. https://www.ft.com/content/00df4ed4-1dbc-11ea-97df-cc63de1d73f4
Kimmel, M. (2010). Misframing men: The politics of contemporary masculinities. Rutgers University Press.
Kimmel, M. (2015). Angry white men: American masculinity at the end of an era. Nation Books.
Kozinets, R. V. (2019). Netnography: The essential guide to qualitative social media research. SAGE.
Krook, M. L., & Sanín, J. R. (2020). The cost of doing politics? Analyzing violence and harassment against female politicians. Perspectives on Politics, 18(3), 740–755.
Lakhani, S. (2020). Extreme criminals: Reconstructing ideas of criminality through extremist narratives. Studies in Conflict & Terrorism, 43(3), 208–223.
Lindsay, A. (2020). Swallowing the blackpill: A qualitative exploration of incel antifeminism within digital society (Unpublished Masters thesis). Victoria University of Wellington, New Zealand.
Lumsden, K. (2019). ‘“I want to kill you in front of your children” is not a threat. It’s an expression of a desire’: Discourses of online abuse, trolling and violence on r/MensRights. In K. Lumsden & E. Harmer (Eds.), Online othering (pp. 91–115). Palgrave Macmillan.
Main, T. J. (2018). The rise of the alt-right. Brookings Institution Press.
Marwick, A. E., & Caplan, R. (2018). Drinking male tears: Language, the manosphere, and networked harassment. Feminist Media Studies, 18(4), 543–559.
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.
Mondon, A., & Winter, A. (2020). Reactionary democracy: How racism and the populist far right became mainstream. Verso Books.
Nicholas, L., & Agius, C. (2018). #Notallmen, #menenism, manospheres and unsafe spaces: Overt and subtle masculinism in anti-“PC” discourse. In L. Nicholas & C. Agius (Eds.), The persistence of global masculinism (pp. 31–59). Palgrave Macmillan.
O’Malley, R. L., Holt, K., & Holt, T. J. (2020). An exploration of the involuntary celibate (InCel) subculture online. Journal of Interpersonal Violence. https://doi.org/10.1177/0886260520959625
Papadamou, K., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G., & Sirivianos, M. (2020, January). Understanding the incel community on YouTube. ResearchGate, Preprint.
Perry, B. (2001). In the name of hate: Understanding hate crimes. Routledge.
Powell, A., Stratton, G., & Cameron, R. (2018). Digital criminology: Crime and justice in digital society. Routledge.
Richter, G., & Richter, A. (2019). The incel killer and the threat to the campus community. Security Magazine. https://www.securitymagazine.com/articles/89962-the-incel-killer-and-the-threat-to-the-campus-community
Sanín, J. R. (2020). Violence against women in politics: Latin America in an era of backlash. Signs: Journal of Women in Culture and Society, 45(2), 302–310.
Silke, A. (2008). Research on terrorism: A review of the impact of 9/11 and the global war on terrorism. In H. Chen, E. Reid, J. Sinai, A. Silke, & B. Ganor (Eds.), Terrorism informatics: Knowledge management and data mining for homeland security (pp. 27–50). Springer.
Sugiura, L., Wiles, R., & Pope, C. (2017). Ethical challenges in online research: Public/private perceptions. Research Ethics, 13(3–4), 184–199.
Sykes, G. M., & Matza, D. (1957). Techniques of neutralization: A theory of delinquency. American Sociological Review, 22(6), 664–670.
Tait, A. (2018). We must try to understand how unwanted virginity leads self-hating incels to murder. New Statesman. https://www.newstatesman.com/2018/05/we-must-try-understand-how-unwanted-virginity-leads-self-hating-incels-murder
Van Valkenburgh, S. P. (2018). Digesting the red pill: Masculinity and neoliberalism in the manosphere. Men and Masculinities. https://doi.org/10.1177/1097184X18816118
Whitehead, S. M. (2002). Men and masculinities: Key themes and new directions. Polity.
Winter, A. (2019). Online hate: From the far-right to the ‘alt-right’ and from the margins to the mainstream. In K. Lumsden & E. Harmer (Eds.), Online othering (pp. 39–63). Palgrave Macmillan.
19 The Dirtbag Left: Bernie Bros and the Persistence of Left-Wing Misogyny

Pratiksha Menon and Julia R. DeCook

P. Menon (B), Department of Communication and Media Studies, University of Michigan, Ann Arbor, MI, USA. e-mail: [email protected]
J. R. DeCook, Loyola University Chicago, Chicago, IL, USA

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology. https://doi.org/10.1007/978-3-030-83734-1_19

Introduction

As socialists, we are supposed to work to dismantle oppressive structures, yet DSA only seems to be replicating them. (Bz & DF, 2018)1

1 Post published under pseudonyms.

The most frequently highlighted lines in a publicly shared statement on the Democratic Socialists of America emphasise a problem that has historically plagued left-wing activism: mistreatment of women
and minorities. Fed up with the dismissive attitudes of their male counterparts on the DSA steering committee, Rosie Gillies and Annie D. resigned from their posts in the Boston chapter, not without pointing out that their experience was similar to that of their women counterparts around the country. Both in the present conjuncture and in the 1960s, the division of labour in left-wing organisations tended to follow culturally prescribed gender roles (Klatch, 2001). Expecting their female colleagues to attend to the emotional labour and administrative details, the men delegated those responsibilities in favour of focusing on the ‘big ideas’. Gillies and Annie D. also pointed out that the men tended to respond to them aggressively during political disagreements, and that discussions related to gender-based issues were often treated dismissively. Feminist critiques of Marxism underscore the reductionism of class-based analysis that overlooks how issues of gender and race intertwine with class to reproduce unequal power relations. This chapter focuses on how the overall dismissiveness towards women, as well as towards intersectional practices within traditional socialist spaces, manifests itself in online political discourse.

The 2016 U.S. presidential election was unprecedented in the ways in which participatory cultures online contributed to and integrated within mainstream campaign discourse. One of the outcomes was that the sexism levelled at women politicians was rendered highly visible as Hillary Clinton emerged as the Democratic party frontrunner for presidential candidate in 2016. Right-wing online communities cashed in on the rise of Republican candidate Donald Trump to move their anti-feminist messaging into the mainstream, modifying and amplifying mainstream media’s antagonistic coverage of Clinton since her time as First Lady. Left-wing populism witnessed a revival led by Democratic challenger Bernie Sanders.
Sexist attacks from the left wing have largely been overlooked in feminist scholarship on the election, which has justifiably prioritised the constant attacks on women’s and minorities’ liberties by the right (Ogan et al., 2018; Schaffner et al., 2018; Strolovitch et al., 2017). Thus, this chapter explores the misogynistic messaging of the most vocal constituency of Sanders’ base, colloquially referred to as the ‘Bernie Bros’ and, more broadly, the ‘Dirtbag Left’. The term Bernie Bros was coined by reporter Robinson Meyer of The Atlantic (2015) to describe
the mostly white, cis, self-identified progressive men2 who tended to flock to the cult of personality around Sanders. Progressive journalists, as well as a host of new media commentators who hopped on to the Bernie bandwagon, gained fame during this time, forming the influential core of the Bros. Many of them came to be known as ‘The Dirtbag Left’, a hat tip to the vulgar populism that undergirded the content they created. One of these creations is the podcast Chapo Trap House, whose profanity-laced humour has often been criticised for being tone deaf to issues of race, sexuality, and gender. Theorising gendered harassment through the lens of populism and masculinity, this chapter examines how the Bernie Bros fit in with Sanders’ own messaging and the gendered ways in which this translated into online discourse, and concludes with a case study of the Chapo Trap House podcast.

Bolstered by an ‘us’ versus ‘them’ framing which presupposes the existence of the true ‘people’ fighting the systemic ‘elites’ (Löffler et al., 2020), populism offers the promise of centring the people over a corrupt political order (Lacatus, 2019). For the purposes of this chapter, we focus on populism as a practice rather than an ideology (Mouffe, 2018). This provides the analytical framework with which to examine gendered harassment perpetuated by self-identified leftists, which often bears scant distinction from that perpetuated by the right. At a very fundamental level, misogyny across the political spectrum can be located in performances of hegemonic masculinity that work to reproduce unequal power relations among genders (Connell, 1995). Misogynistic behaviour among the left can be better understood as a form of hybrid masculinity wherein the discursive alignment with gender equality does not prevent the practical enactment of hegemonic masculinity (Bridges & Pascoe, 2014).
By incorporating analyses of left-wing misogyny into larger conversations about misogyny and gendered harassment, we aim to provide a feminist and anti-racist critique of the unbearable white masculinity that saturates our political discourse.
2 The support for Bernie Sanders was across gender and racial lines. The most prominent faces of the harassment were white men, but that does not in any way imply that these were the only aggressors.
A Socialism Without Feminism

In the 1970s, issues similar to those raised by the women on the DSA steering committee in 2018 played a substantial role in feminist organisers breaking away from male-dominated left-wing social movements. The dismissiveness of their male counterparts led to the exclusion of women from leadership positions and has been identified as a significant factor in the evolution of a feminist consciousness among women of the left (Klatch, 2001). The second wave, which proved to be the most radical in its push to dismantle patriarchal power, was in many ways born out of grievances within the parent movements of the New Left, which led to a formal organisation based on the collective experience of discrimination on the basis of gender (Buechler, 1993). In the earliest iterations of the left-wing Students for a Democratic Society organisation, the assumption was that the freedom of men would equal the freedom of women, without any recognition of women’s issues as political (Evans, 1980). The leftist underground newspaper The Rat was known to feature pornography and crude sexist jokes (Baxandall & Gordon, 2000). That the Left endorsed this while simultaneously issuing the Port Huron statement, which called for women to be freed from oppressive labour practices, reflects the left’s purposive blindness towards the misogyny in its own philosophies. Activist Marilyn Salzmann Webb faced calls for her to be raped when she expressed concern with leftist ignorance of women’s issues (Brooks & Hodgson, 2007).

The populist tendencies within contemporary left-wing political participation in the United States can be observed in the cult-like status of Bernie Sanders among his supporters.
By giving a hypermasculine spin to liberal beliefs that are otherwise considered feminine, Sanders popularised progressive thinking, even though his political positions have not been too different from those of centrist Democrats such as Hillary Clinton (Albrecht, 2017; Löffler et al., 2020). This has resulted in a leftist base that not only devalues women candidates and women’s issues but also often carries forward right-wing conspiratorial talking points, such as ‘rigged’ systems and ‘the Deep State’ (Bush, 2016; Parker, 2018). While one could lay the blame squarely on the base, it would be remiss not to mark out the general tone of the Bernie Sanders campaign against
Hillary Clinton. He continually depicted himself as being targeted by Clinton, whom he framed as the elite, spinning narratives from supposed slights that she was said to have made against him. Midway through the primaries, when the race seemed close, Sanders spun a yarn around Clinton calling him unqualified, using that opportunity to refer to her as ‘not qualified’ (Schleifer, 2016). While her political qualifications to be President were the same as his, and her political experience in various influential positions longer than his, his deployment of the label played on gendered notions of women being unfit to be President. Clinton brushed it off instead of responding in kind, because women politicians who rage in public are looked upon unfavourably (Brooks, 2011), while Sanders used every opportunity to build an angry man persona which raged against the dominant order.

The historical left-wing unwillingness to connect structural racism and gender to poverty and systemic inequality was apparent in Sanders’ glib statements around these issues, as well as in his campaign’s internal struggles with the same. When it came to women’s issues, in what was an appeal to rural, white voters, Sanders made it clear that reproductive rights were negotiable3 (Detrow, 2017), criticising the Democratic Party for engaging in what he deemed identity politics (Griffiths, 2016). Plagued with allegations of sexist and racist mistreatment, the Sanders campaign desperately tried to keep the latter under wraps (Ember & Benner, 2019; Slodysko, 2020). When he did not receive their endorsement, Sanders denounced women’s reproductive justice organisations such as Planned Parenthood and NARAL, both of which have been under constant threat from Republican lawmakers, as ‘establishment’ (Wagner, 2016).4 Maintaining the anti-elitist narrative is integral to a populist campaign’s success, even at the cost of maligning the institutions that support the very populations whose needs the populist leader claims to represent.
3 He walked back his stance on this during the 2020 primaries.
4 He walked back his stance on this after backlash.
The ‘Bernie Bros’: Toxic Masculinities on the Left

‘Cunt’, ‘bitch’ and ‘criminal’ were some of the messages left on Democratic National Convention chair Roberta Lange’s voicemail after the Nevada convention in May 2016. The subsequent public sharing of Lange’s residential address, along with the location of her grandchildren’s school, was weaponised as a threat in these voicemails. Only hours before, presidential candidate Bernie Sanders had lost the largest county in the state to candidate Hillary Clinton. Believing this decision to be unjust, Sanders supporters disrupted the convention, screaming, rushing towards the stage and throwing chairs. The aggression and threats of violence against Lange were seemingly justified by the Sanders supporters as an attack against the ‘establishment’.

Members of the Working Families Party (WFP), a multiracial party with a working-class base, who endorsed Elizabeth Warren in the 2020 presidential primaries, received a barrage of racial and gendered abuse, suggesting that this was not restricted to Clinton’s candidacy in 2016. One of WFP’s junior employees, Ember Ollom, known to be a rape survivor, received a message that read ‘We were raped by this process, so I’m happy it happened to you’. The National Director of the WFP, Maurice Mitchell, was referred to as ‘half man’ and ‘Uncle Tom’, and told to ‘go back to his slave masters’ (Alter, 2019). Anyone who did not subscribe to the Sanders cult of personality was immediately deemed an enemy of the ‘people’.

These racialised and gendered attacks, made possible by the affordances of the internet, can be classified as networked misogyny (Banet-Weiser & Miltner, 2016). Culturally entrenched attitudes towards women are mutated and amplified by the anonymity provided by digital technologies, often with zero consequence to the perpetrators. Women of colour particularly find themselves at the receiving end of such vitriol.
When filmmaker Ava DuVernay expressed on Twitter that she would possibly not support Sanders as a candidate for 2020, the death threats were fast and furious (DuVernay, 2020). The Sanders supporters’ perception of her as an elite worked to legitimise their attacks on her, erasing her own experience of racialised violence. Women journalists who posted articles critical of Sanders were targeted through emails threatening
sexual violence and subjected to racial slurs (Ross, 2016). The online accessibility which reporters are expected to maintain in the interest of credibility often leaves women reporters open to attacks of a misogynistic nature (Chen et al., 2020). While gendered political violence has often been associated with the right (Melich, 1998), the 2016 presidential primaries made visible the misogynistic tendencies that are often overlooked in popular discourses about the left. The populist left’s justification of its misogyny under the guise of a class revolution reflects the extent to which misogyny is normalised in Western culture.

The Bernie Bros did not draw the line at harassing public figures, but also targeted private individuals. Politically expressive women and people of colour were often targeted for posting anything anti-Sanders on Twitter; an ire that was also unleashed on Sanders’ own supporters who questioned racism within the community (Keckler, 2021; Lussenhop, 2016). Women of colour were systematically harassed by Sanders supporters (mostly white males) through what came across as a strategised attempt to silence any criticism of Sanders (Flegenheimer et al., 2020; Ray, 2018). One tactic employed by the Bros was to retweet something that a Clinton supporter said, almost guaranteeing that their followers would descend on that woman’s timeline, mobbing her with insults (Bragman, 2017; Calavera, 2016). Sanders supporters who have considerable followings on Twitter, such as Glenn Greenwald, David Klion and Matt Bruenig, have been especially known to participate in this form of harassment (Goldberg, 2016; Jay, 2019; Nomi4dems, 2019). Besides the textbook threats of sexual violence lobbed by their followers, baseless accusations of bullying, anti-Semitism, and misandry are commonplace (Daou, 2020; Greenwald, 2019; Luby, 2021). Derogatory comments on appearance are also par for the course (Relman, 2020).
One of the women on Twitter who continues to be targeted mentions that rampant bullying on two of the most prominent leftist online news forums has resulted in ‘so FEW black people on them that they could be REPUBLICAN sites but for the topics’ (Delarosa as cited in Majority60, 2018, emphasis original). The paternalistic discourse, suggesting that Black voters who criticised Sanders did not know what was in their best interests, echoed tropes that were set in place by proponents of scientific racism (Gandy, 2016;
382
P. Menon and J. R. DeCook
Goff, 2017; Starr, 2019). The purposeful stereotyping of Black people as ‘less than’ contributes to their dehumanisation, reflecting the persistent tendency among the populist left to reject any concerns that do not align with their interpretations of class warfare. For Black people and women who have continued to be a disproportionate target of state and personal violence, this is tantamount to a similar kind of suppression. What is also crucial to note here is that the ephemeral nature of social media content often means that there is no archived record of the harassment, leaving the harassed open to accusations of lying. The harassers often delete their tweets or have had their accounts suspended on account of repeated hateful trolling. The labour then falls on those who are being harassed to keep what has come to be known as ‘receipts’, usually in the form of screenshots. The repeated gaslighting that occurs through social media platforms, keeps women and people of colour from fully participating. In the runup to the 2016 election, there were several invite-only, private Facebook groups for Clinton supporters, an outcome of the hostile treatment meted to them on public threads (Deaderick, 2019). In many ways the populist left’s exclusionary tactics have similar outcomes to those of the Men’s Rights Activists, whose goals include silencing women online (Marwick & Caplan, 2018). Silencing of women as an outcome of networked misogyny has been theorised through the lens of geek masculinity (Massanari, 2015). Geek masculinity can be observed in the behaviours of those who flaunt their expertise in the digital vernacular by creating and contributing to cultures which thrive on male-centric, insider references that are often lost on a wider audience (eg: #GamerGate participants). The hybrid masculinity displayed by the Bernie Bros, we suggest, is a version of geek masculinity that is intellectually garbed in Marxist analysis. 
One of the most visible examples of this was observed in the abundant Bernie Sanders meme groups on Facebook. Milner (2016) explains that in memetic discourses the outgroups are constantly 'othered' through the deployment of stereotypes. Indeed, stereotypes play an integral role in the success of memes, being part of a shared cultural assumption about certain groups. The fixity that comes with an already existing cultural narrative allows for the addition of new catchphrases that facilitate successful memetic reproduction. Memes are also meant to get across pithy messages without too much context, since the elements of intertextuality are primarily meant for the pleasure of the ingroup, who are already familiar with the tropes. Thus, these meme groups work effectively in communicating the logic of populist messaging: more rhetoric, less policy, and villainising the outgroup (anyone who did not support Sanders). In addition, the deployment of these memes gave credence to the Bernie Bros' self-presentation as an anti-establishment social movement with distinct signifiers, lending it an especially desirable status among younger millennials and Gen Z. The youth appeal of Sanders was carefully constructed through memes depicting him as in touch with subjects that were considered part of popular culture, while framing Clinton as an old fogie, reiterating the ageist attacks often levelled at women politicians. For example, Fox News host Laura Ingraham suggested US House Speaker Nancy Pelosi had 'dementia' during the Trump impeachment, while David Hogg, one of the youth leaders of the Progressive movement, said she should leave her position as Speaker because she is 'old'. Hillary Clinton was also more likely than Barack Obama to be portrayed as unfavourable and ugly in political cartoons, according to a study by Zurbriggen and Sherman (2010). Meanwhile, in a widely circulating meme, Sanders was portrayed as knowledgeable of youth subculture through a reference to the Harry Potter books, while Clinton was depicted with the line 'I'm a Hofflepump', casting her as feigning authenticity. This also relies on a dominant trope among geek subcultures, where women's interest in 'masculine' subcultures or interests is perceived as less authentic than men's.
19 The Dirtbag Left: Bernie Bros …
Through overtly sexist memes, including several that referenced Monica Lewinsky,5 the Bros cultivated the same kind of toxic culture that they often railed against the right for. These memes provide a simultaneous sense of identification, through their content as well as their format. The text, by playing on sexist stereotypes via the use of the memetic format, plays to Sanders' image as a relatable everyman. All of this contributes to populism's function of intimidating through coolness, exemplified by #FeelTheBern becoming a hot catchphrase. Those who were not feeling it were relegated to being social media outcasts. 'Coolness' as a concept is itself gendered, given that male genres and masculine interests are valorised by popular culture, whereas the feminine is usually reviled (Corbanese, 2020). Even though marginalised voices are represented online, arguably they are often silenced by dominant discourses of white masculinity. This is strongly exemplified by the stream of left-wing media influencers who rose to prominence in the wake of Bernie Sanders' ascent in the primaries and who came to be known as 'The Dirtbag Left'.

5 Lewinsky came to public prominence when news broke that she, then a 22-year-old White House intern, had been involved in a sexual relationship from 1995 to 1997 with then-President of the United States, Bill Clinton. Despite Clinton clearly abusing his power and position in their relationship, Lewinsky bore the brunt of the scandal and became a decades-long punchline and target of misogynistic harassment, consequences that have persisted for nearly her entire adult life.
The Dirtbag Left

The above brings us to a case study of 'the dirtbag left', mentioned earlier as a group of journalists and other media personalities who came to be known by the moniker due to their embrace of vulgarity and profanity in lieu of civility (Kirchick, 2017), organised around podcasts like Chapo Trap House. The politics of the 'dirtbag left' tends to centre on a hatred of mainstream liberalism and left-wing politics, with its adherents asserting that many liberals and other left-wingers are 'unprepared' for the coming revolution (Kirchick, 2017). Focusing mostly on class (to the exclusion of race, gender, and sexuality), the predominantly white straight men who align themselves with the dirtbag left have come under scrutiny in recent years. In particular, women and other members of the left have called out these men for hiding behind their anti-capitalist beliefs when confronted with their misogyny, especially when women are harassed by Chapo fans (North & Stein, 2017). This is poignantly articulated by the r/BreadTube subreddit, a community dedicated to discussing topics and YouTubers who make up 'BreadTube', a group of content creators who produce videos discussing political and social topics from a leftist perspective to combat the influence of far-right content on the platform (Kuznetsov & Ismangil, 2020). The 'BreadTubers' note that the dirtbag left only adheres to the 'anti-capitalist' part of leftist politics, subtly calling them out for their lack of awareness of and care towards issues of racism, homophobia, transphobia, and misogyny. In 2017, two of the podcast's hosts drew ire when they posted a photograph of themselves on Twitter next to Bill Cosby's Hollywood Walk of Fame star with the caption 'hey libs, try taking THIS statue down', and another mocked a rape victim during an episode of Chapo Trap House (North & Stein, 2017). 'Cosbygate', as it came to be known among fans, was defended by the hosts, who claimed it was an attempt to reveal the hypocrisy of a Hollywood that criticised Harvey Weinstein (currently imprisoned for his acts of sexual assault and misconduct) while still allowing the star of Cosby (also currently imprisoned on similar charges) to remain (Kirchick, 2017). But the hosts' long history of mocking victims of sexual assault, and their claims that 'rape is funny', demonstrate that the joke not only fell flat but was just one of many attempts to deride feminist issues and to dismiss accusations of misogyny and sexism as 'liberal' distractions from their 'cause'. At the time of writing, Chapo Trap House has released nearly 550 episodes. The podcast is hosted by Will Menaker, Matt Christman, Felix Biederman, Amber A'Lee Frost, and Virgil Texas; the three founding hosts met on Twitter, where they enjoyed a minor celebrity status (Koshy, 2019). Rising to prominence in the lead-up to the 2016 election, the podcast quickly became popular with followers of Sanders and piqued further interest in democratic socialism.
However, they did so by feeding into the hate and rage that was often directed at other Democratic candidates; vitriol towards Hillary Clinton (and, in the 2020 election, towards Joe Biden) was the hallmark affect of the Dirtbag Left and their audiences (Bowles, 2020; Tolentino, 2016). Aptly described by Nellie Bowles (2020) as 'the socialist's answer to right-wing shock jock radio', they engage in a similar kind of populist rhetoric to appeal to their listeners and to drive a political base behind 'their man' Bernie Sanders. Despite regularly railing against the 'establishment' and espousing socialist politics, the hosts of the show regularly bring in roughly $168,000 each, and Menaker, one of the founders, is the son of a New York Times editor and a New Yorker editor (Bowles, 2020). Claiming that their insults and calls to kill other candidates are 'jokes', the hosts feel that the anger their show builds, and the potential violence it could bring, are necessary to bring about their vision of a Sanders-led revolution (Bowles, 2020). This has come at a cost, not just in sowing hatred towards any candidate other than Bernie Sanders, but in fostering a larger culture of misogyny and sexist harassment, all in the name of 'revolution' and under the guise of 'just joking'. Embracing irony and humour as defence mechanisms, much like the alt-right (DeCook, 2020), Chapo Trap House and its followers perpetuate a form of 'dominance politics' (Heer, 2017) wherein their strategy is to 'insult their way' into power, even going so far as to regularly use the insult 'cuck'6 on their podcast (Lokke, 2019). When vulgarity, profanity, and insult are embraced as rhetorical strategies to gain political prominence (much as Trump did), the effects are deleterious. Feminist writers and women DSA members often report being harassed by Chapo Trap House listeners online and in other spaces, impeding many women's ability and desire to become involved in leftist political organising (North & Stein, 2017; Valenti, 2020). The harassment and misogynistic, racist, and homophobic speech would eventually catch up with the podcast and its listeners: in June 2020, the subreddit r/ChapoTrapHouse was banned for repeatedly violating Reddit's rules and terms of service (Kimball, 2020). Reddit's policies, which have become much broader in recent years, were updated to remove content 'that promote hate based on identity or vulnerability' (Kimball, 2020). Despite initially 'quarantining' the subreddit to give the moderators a chance to rein in their members and to enforce rules, Reddit administrators noted that the moderators did not comply with policies and allowed content that violated rules around promoting violence.
But the ban did little to decrease the popularity of the podcast and its influence on modern leftist politics and political spaces. Indeed, the left's misogyny problem persists because misogyny persists in other spheres of life. Bernie Sanders' presidential run in 2016 did in fact spur more interest in and enthusiasm for democratic socialism, and podcasts like Chapo Trap House did contribute to this growth. However, the dirtbag left and the DSA are still reckoning with the misogyny that runs rampant within their communities. More disturbingly, the misogyny in their communities is rewarded: in 2020, a 20-year-old named Aaron Coleman successfully ran as a member of the DSA and won a seat in the Kansas State Legislature. It had been revealed early in his campaign that he had bullied, harassed, and blackmailed girls when he was in middle school, even posting 'revenge porn' (Jones, 2020; Valenti, 2020). The Dirtbag Left and other leftists defended Coleman, claiming that he was 'only a child' (a teenager) who was not aware of the severity of his actions (the old 'boys will be boys' defence) (Jones, 2020). But Coleman's behaviour has not changed since adolescence: a former girlfriend of his from 2019 revealed that he had tried to choke her after an argument and sent her verbally abusive messages when she ended the relationship (Melero, 2020). Although he ran unopposed, he never dropped out of the race despite these past behaviours coming to light. In Down Girl, Kate Manne (2017) introduces the term 'himpathy', a portmanteau of 'him' and 'sympathy', which refers to the excessive sympathy (from the media and others) shown to men who are perpetrators of sexual violence. Coleman's disturbing history of physical and emotional abuse towards girls and women is just one example of violence against women that is swept under the rug, and even defended, by leftist men; that is, he is shown himpathy. Himpathy manifests across the political spectrum and feeds off the societal mores that encourage the dehumanisation of women. Thus, the Bernie Bros and the Dirtbag Left merely reflect back the larger culture and discourse within which they are embedded.

6 Cuck, short for cuckold/cuckoldry, is typically used to emasculate men and is a common insult among the far right, but has been co-opted by leftists (i.e., "cuckservative" in place of "conservative").
Conclusion

Even though populism is about rousing the affective, the societal norm has been for politics to be understood as a rational process (Löffler et al., 2020). So, when Bernie Sanders reframes the country's first woman Presidential candidate's historic achievement as 'It is not good enough for someone to say, "I'm a woman! Vote for me!"' (as cited in Stein, 2016), this fits the discursive logic of a leftist class-reductive rationality. The significance of political representation is erased, along with any acknowledgement that the opposing candidate had substantive policy positions. This class-reductive 'rationality', combined with exaggerated affective messaging in a decontextualised social media environment, worked to arouse a (primarily) white, cis audience towards a cause. Misogyny is not entrenched in political affiliations but rather in the social and cultural fabric of our societies. Given the visible right-wing assaults on women's rights, the examination of left-wing misogynistic behaviour can seem less pertinent. But the silencing of women within leftist spaces, as well as the occasional racist and sexist threats of violence, suggests that there is a need to interrogate the perpetuation of unequal power relations among those who flaunt progressive ideologies. The goal of this chapter was to unpack the persistent scourge of misogyny on the left, and to examine the ways in which misogynistic messaging itself is an organising logic for men who espouse leftist politics. Moreover, we have attempted to demonstrate the ways that this negatively impacts women, particularly in the political and public sphere. By examining the ways that misogynistic, leftist populism has been mobilised (particularly online) against women, and against women of colour in politics, we hope to start a conversation about these issues within our own communities and beyond them.

Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this chapter.
References

Albrecht, M. M. (2017). Bernie Bros and the gender schism in the 2016 US presidential election. Feminist Media Studies, 17(3), 509–513.
Alter, C. (2019, September 20). Working Families Party staff face harassment after Warren endorsement. Time.
Banet-Weiser, S., & Miltner, K. M. (2016). #MasculinitySoFragile: Culture, structure, and networked misogyny. Feminist Media Studies, 16(1), 171–174.
Baxandall, R., & Gordon, L. (2000). Dear sisters: Dispatches from the women's liberation movement. Basic Books.
Bowles, N. (2020, February 29). The pied pipers of the dirtbag left want to lead everyone to Bernie Sanders. The New York Times.
Bragman, W. [@WalkerBragman]. (2017, December 26). You worked for JP Morgan in the wake of the financial collapse. We... [Quote Tweet]. Twitter. https://twitter.com/WalkerBragman/status/945706754595344387
Bridges, T., & Pascoe, C. J. (2014). Hybrid masculinities: New directions in the sociology of men and masculinities. Sociology Compass, 8(3), 246–258.
Brooks, D. J. (2011). Testing the double standard for candidate emotionality: Voter reactions to the tears and anger of male and female politicians. The Journal of Politics, 73(2), 597–615.
Brooks, E., & Hodgson, D. L. (2007). "An activist temperament": An interview with Charlotte Bunch. Women's Studies Quarterly, 35(3/4), 60–74.
Buechler, S. M. (1993). Beyond resource mobilization? Emerging trends in social movement theory. The Sociological Quarterly, 34(2), 217–235.
Bush, D. (2016, July 26). Sanders supporters walk off convention floor, blame 'rigged system' for his loss. PBS.
Bz, R., & DF, A. (2018, March 14). Statement on women in DSA leadership [Blog post]. https://medium.com/@breadandrosie/statement-on-women-in-dsa-leadership-f697aa83cb78
Calavera, K. [@KaraCalavera]. (2016, June 29). Targeted harassment of women online by Bernie Bros @Wade_Turnbull and @warp_factor9. [Image Attached] [Tweet]. Twitter. https://twitter.com/KaraCalavera/status/748314716250976256
Chen, G. M., Pain, P., Chen, V. Y., Mekelburg, M., Springer, N., & Troger, F. (2020). 'You really have to have a thick skin': A cross-cultural perspective on how online harassment influences female journalists. Journalism, 21(7), 877–895.
Connell, R. W. (1995). Politics of changing men. Radical Society, 25(1), 135.
Corbanese, E. (2020). The anatomy of coolness. Zonemoda Journal, 10(1S), 55–70.
Daou, P. [@peterdaou]. (2020, February 12). Just to continue to rebut the "Bernie bro" narrative, here's a thread of unadulterated hatred for me and my wife… [Quote Tweet]. Twitter. https://twitter.com/peterdaou/status/1227671830556741634
Deaderick, J. (2019, May 2). The real purpose of secret Hillary Facebook groups. Dame Magazine.
DeCook, J. R. (2020). Trust me, I'm trolling: Irony and the alt-right's political aesthetic. M/C Journal, 23(3).
Detrow, S. (2017, April 20). Bernie Sanders defends campaigning for anti-abortion rights democrat. NPR. https://www.npr.org/2017/04/20/524962482/sanders-defends-campaigning-for-anti-abortion-rights-democrat
DuVernay, A. [@ava]. (2020, February 28). Bernie supporters terrroizing my mentions while I'm off enjoying my Saturday. [Image Attached] [Tweet]. Twitter. https://twitter.com/ava/status/1231637548428120066
Ember, S., & Benner, K. (2019, January 2). Sexism claims from Bernie Sanders's 2016 run: Paid less, treated worse. The New York Times.
Evans, S. M. (1980). Personal politics: The roots of women's liberation in the civil rights movement and the new left (Vol. 228). Vintage.
Flegenheimer, M., Ruiz, R. R., & Bowles, N. (2020, January 27). Bernie Sanders and his internet army. The New York Times.
Gandy, I. [@AngryBlackLady]. (2016, June 1). yeah. dumb black me. Haven't looked at St. Bernie's policies. If I only I knew how to reeeeeead. [Quote Tweet]. Twitter. https://twitter.com/AngryBlackLady/status/738001919063392256
Goff, K. (2017, April 13). The racist side of Bernie Sanders supporters. The Daily Beast. https://www.thedailybeast.com/the-racist-side-of-bernie-sanders-supporters
Goldberg, M. (2016, May 23). Is Matt Bruenig a populist martyr? Slate Magazine. https://slate.com/news-and-politics/2016/05/is-matt-bruenig-a-populist-martyr.html
Greenwald, G. [@ggreenwald]. (2019, December 14). This is common liberal anti-semitism—Of the cheapest kind, too, invoked purely for petty partisan reasons. [Image Attached] [Tweet]. Twitter. https://twitter.com/ggreenwald/status/1205919589525311488
Griffiths, B. (2016, November 21). Sanders slams identity politics as Democrats figure out their future. POLITICO.
Heer, J. (2017). The dirtbag left and the problem of dominance politics. The New Republic.
Jay, M. [@magi_jay]. (2019, December 14). Glenn Greenwald directing targeted harassment towards women on the internet b/c they talked about sexism will surely ease any discussion… [Quote Tweet]. Twitter. https://twitter.com/magi_jay/status/1205926617861820424
Jones, S. (2020). Why were progressives defending Aaron Coleman? The Cut.
Keckler, J. [@funnyvalntine09]. (2021, January 24). I tweeted this before the primaries and was met with such a barrage of vitriol from my "fellow progressives." Mocking... [Image Attached] [Tweet]. Twitter. https://twitter.com/funnyvalntine09/status/1353352644492976130
Kimball, W. (2020). Reddit bans r/The_Donald and 2,000 other subs. Gizmodo.
Kirchick, J. (2017). The left-wing heroes who treat women like garbage. The Daily Beast.
Klatch, R. E. (2001). The formation of feminist consciousness among left- and right-wing activists of the 1960s. Gender & Society, 15(6), 791–815.
Koshy. (2019, June 3). "The voice of the dirtbag left": Socialist US comics Chapo Trap House. The Guardian.
Kuznetsov, D., & Ismangil, M. (2020). YouTube as praxis? On BreadTube and the digital propagation of socialist thought. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 18(1), 204–218.
Lacatus, C. (2019). Populism and the 2016 American election: Evidence from official press releases and Twitter. PS: Political Science & Politics, 52(2), 223–228.
Löffler, M., Luyt, R., & Starck, K. (2020). Political masculinities and populism. NORMA, 15(1), 1–9.
Lokke, G. (2019). Cuckolds, cucks, and their transgressions. Porn Studies, 6(2), 212–227. https://doi.org/10.1080/23268743.2018.1555053
Luby, D. [@Daniel_luby]. (2021, January 29). Except that's literally not what I'm doing. Stop being disingenuous. My point was that Bernie writing a short tweet on… [Tweet]. Twitter. https://twitter.com/Daniel_luby/status/1355278907344375809
Lussenhop, J. (2016, January 28). Bernie Sanders supporters get a bad reputation online. BBC News.
Manne, K. (2017). Down girl: The logic of misogyny. Oxford University Press.
Marwick, A. E., & Caplan, R. (2018). Drinking male tears: Language, the manosphere, and networked harassment. Feminist Media Studies, 18(4), 543–559.
Massanari, A. L. (2015). Participatory culture, community, and play: Learning from reddit. Peter Lang.
Melero, S. (2020). A scumbag child has now been elected to the Kansas House of Representatives. Jezebel.
Melich, T. (1998). The Republican war against women: An insider's report from behind the lines. Bantam.
Meyer, R. (2015, October 17). Here comes the Berniebro. The Atlantic.
Milner, R. M. (2016). The world made meme: Public conversations and participatory media. MIT Press.
Mouffe, C. (2018). For a left populism. Verso Books.
Naomi. [@Nomi4dems]. (2019, April 15). Reason 1001 why I feel physically ill when I hear the name David Klion... Bernie bro's like David have seriously… [Image Attached] [Quote Tweet]. Twitter. https://twitter.com/Nomi4dems/status/1117643324322611201
North, A., & Stein, J. (2017). Listen to what socialist women are saying about misogyny on the left. Vox.
Not having it: My interview with Bravenak. (2018, January 13). Majority60. Retrieved January 29, 2021, from https://majority60.com/2018/01/13/not-having-it-my-interview-with-bravenak/
Ogan, C., Pennington, R., Venger, O., & Metz, D. (2018). Who drove the discourse? News coverage and policy framing of immigrants and refugees in the 2016 US presidential election. Communications, 43(3), 357–378.
Parker, I. (2018, August 27). Glenn Greenwald, the bane of their resistance. The New Yorker.
Ray, M. [@queerBengali]. (2018, December 10). The ONLY real way to be a leftist is to name search yourself on twitter and bring all your angry… [Tweet]. Twitter. https://twitter.com/queerBengali/status/1072169568657989633
Relman, E. (2020, February 25). A Daily Beast reporter was doxxed after publishing a story about Bernie Sanders' campaign staffer's harassing tweets. Business Insider.
Ross, J. (2016, March 10). Bernie Sanders's most vitriolic supporters really test the meaning of the word 'progressive'. The Washington Post.
Schaffner, B. F., MacWilliams, M., & Nteta, T. (2018). Understanding white polarization in the 2016 vote for president: The sobering role of racism and sexism. Political Science Quarterly, 133(1), 9–34.
Schleifer, T. (2016, April 7). Bernie Sanders: Hillary Clinton is 'not' qualified to be president. CNN.
Slodysko, B. (2020, February 28). Sanders-linked group entered into racial discrimination NDA. AP News. https://apnews.com/article/c5201196cdc5897e76e49a3352a2c8dd
Starr, T. J. (2019, March 1). On Twitter, Bernie Sanders's supporters are becoming one of his biggest problems. The Washington Post.
Strolovitch, D. Z., Wong, J. S., & Proctor, A. (2017). A possessive investment in white heteropatriarchy? The 2016 election and the politics of race, gender, and sexuality. Politics, Groups, and Identities, 5(2), 353–363.
Stein, J. (2016, November 21). Bernie Sanders: "It is not good enough for someone to say, 'I'm a woman! Vote for me!'" Vox.
Tolentino, J. (2016). What will become of the dirtbag left? The New Yorker.
Valenti, J. (2020, August 24). The left's misogyny problem. Gen | Medium. https://gen.medium.com/the-lefts-misogyny-problem-3fc37eea6e0e
Wagner, J. (2016, January 20). Sanders draws criticism after calling Planned Parenthood and other groups part of 'the establishment'. The Washington Post.
Zurbriggen, E. L., & Sherman, A. M. (2010). Race and gender in the 2008 US presidential election: A content analysis of editorial cartoons. Analyses of Social Issues and Public Policy, 10(1), 223–247.
20 Bystander Experiences of Online Gendered Hate

Jo Smith
J. Smith (B)
University of Brighton, Brighton, UK
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_20

Introduction

The internet has proved to be fertile ground for public discussion, debate, the sharing of ideas, and engagement with other people. Social media sites in particular allow people to air their views, enabling thousands of others to see and engage with such opinions, and to participate in conversations about them. This content does not always remain polite, and at times it degenerates into threats of physical or sexual violence, campaigns of harassment, abusive and upsetting comments, and invasions of privacy. Women are one group1 who have experienced these behaviours online, where such behaviours often have a specifically misogynistic or gendered slant. Examples of high-profile women being targeted for abuse are abundant (see Mantilla, 2015).

1 Among others. See, for example, online Islamophobia (Awan, 2014; Awan & Zempi, 2017), homophobia (Citron, 2014), racism (Madden et al., 2018; Mantilla, 2015) and the intersections of racism and misogyny (Madden et al., 2018).

This chapter explores experiences of online misogynistic abuse targeting feminist women, approaching this from the perspective of those women who encounter the abuse indirectly: that is, they have not been directly targeted themselves but have seen others being subjected to such behaviours. Unlike many offline forms of gender-based and misogynistic abuse, where the 'audience' for such behaviours rarely consists of more than a few individuals in the vicinity of the act, abuse occurring in online public spaces can have a vast audience. Combined with the possible longevity of online abuse (through screenshots, and the retweeting or reposting of abuse and comments), this means a large number of people can experience these toxic discourses indirectly and can, it is argued here, be affected by them. The analysis presented here comes from a wider research project, and will focus on two facets of the experiences of indirect recipients of online misogynistic abuse: that their encountering of misogyny online (i) affected these women and their engagement with online spaces, and (ii) led some to respond to and try to resist these acts. It is argued in this chapter that online gendered hate is about sending a message, not just to the direct target, but to other women who see the hate; that message tells us how to appropriately be a woman online: particularly, not to challenge hierarchies and power structures, and not to occupy (or vocally occupy) particular spaces. It serves to control the behaviours both of the direct targets of abuse and of other women seeing and fearing abuse. It is also argued, more optimistically, that indirect recipients of abuse can play a role in challenging abuse and supporting the direct targets.
This can be part of reshaping online spaces as ones which recall those enduring feminist activities of sharing experiences, consciousness-raising, supporting other women, and activism.

Within this chapter, the online abuse examined is termed 'online gendered hate'. This particular conceptualisation of online misogyny acknowledges both the feminist and hate crime scholarship that underpins this piece of research, while also acknowledging the more 'mundane' acts of abuse that form part of abusive behaviours online and which are not well captured by other definitions. Online gendered hate is thus defined as 'aggressive, abusive and threatening acts which are directed at feminists in public spaces online' (Smith, 2019, p. 50).2 This is not to deny that others experience online abuse, or to suggest that this is the only way in which women experience abuse on the internet,3 but to focus on one aspect of violence against women within the digital world.
Background

While research and academic study looking at the online abuse of women is still in its infancy, much of that which exists focuses on how we conceptualise online abuse targeting women, how the abuse affects those receiving it, and the (in)adequacies of responses to it. Notably absent is research looking at the wider effects of online gendered hate on those encountering online misogyny targeting others, and at how they respond to this. This section will look briefly at some of the research that has been conducted into online misogyny and abuse.
Conceptualising Online Gendered Hate The feminist approach to understanding online gendered hate and online misogyny often frames this in a way similar to other forms of violence against women and girls (VAWG). Such approaches also tend to explain online misogyny using our understanding of VAWG, albeit as mediated by the affordances of the online world. This is reflected in some of the terminology used to describe online abuse targeting women, which mirrors that used in the offline world (Vera-Gray, 2017): online sexual 2 While in this study, the abuse examined was that targeting feminists, but this definition could be expanded to include other women that experience abuse online, 3 For a broad overview see the extensive work of Powell and Henry (Henry & Powell, 2015, 2016; Powell & Henry, 2017) on technology-facilitated sexual violence, as well as work on cyber-stalking (Citron, 2014), non-consensual sharing of indecent images (Citron & Franks, 2014; McGlynn & Rackley, 2017; McGlynn et al., 2017) and being subjected to receipt of unwanted sexual images (Vitis & Gilmour, 2017).
398
J. Smith
harassment (Megarry, 2014), cyber-VAWG (Jane, 2017a) or technology-facilitated sexual violence (Henry & Powell, 2016; Powell & Henry, 2017), for example. An alternative approach adopted by some authors uses the language of hate crime scholarship to conceptualise online abuse targeting women. Powell and Henry (2017), for example, include gender-based hate speech within their conceptualisation of technology-facilitated sexual violence, while Lewis et al. (2018) look specifically at whether online misogyny shares features with hate crime such that a hate crime framework is suitable for the analysis of these behaviours.
Online Abuse, Control, and Resistance

Both the feminist and hate crime approaches to examining online gendered abuse share an acknowledgement of the ways that these acts can exert power over women, and control women’s ability to occupy and participate in online spaces. For those adopting these approaches, the harms of online abuse are seen as ‘embodied, tangible and real’ (Powell & Henry, 2017, p. 50), and the acts are interpreted—much like offline VAWG and hate crime—as manifestations of power and attempts to control women. The harms of online gendered hate are varied, and the acts of hate, such as threats of physical or sexual violence, abusive comments, and the disclosure of one’s personal information (‘doxxing’), are themselves harms—an intrusion into women’s lives and spaces. Beyond this, women have reported experiencing emotional and psychological harm including fear, distress and anxiety (Adams, 2018; Barlow & Awan, 2016; Lewis, 2011; Mantilla, 2015; Penny, 2011; Veletsianos et al., 2018). Harms to mental and physical health are also apparent (Lewis et al., 2016; Mantilla, 2015), and more recently Jane (2018) and Adams (2018) have noted the economic and professional harms that can result from being subjected to online abuse. In addition to these specific forms of harm, online gendered hate and abuse has consequences in that it controls and restrains women’s engagement with online spaces, and in some cases even their use of the internet at all (Citron, 2014, pp. 36–37; Mantilla, 2015, p. 109). Even prior to
20 Bystander Experiences of Online Gendered Hate
399
the advent of widespread social media use, academics noted the use of hostility and aggression in online conversations as a technique to silence women—described as ‘power plays’ by Herring (1999, p. 159). Although this control can come from moderators of a site or community shutting down debate and discussion, more common in the literature is examination of the ways in which women self-silence in response to, or because of fear of, abuse. Jane (2017a, pp. 50–51) has examined the different ways in which women respond to experiences of online abuse and created a taxonomy of acts of ‘flight or fight’. Flight responses are those which women use to avoid abuse or minimise the potential harms of this, and involve acts such as ignoring problematic users, taking time away from particular websites, and restricting participation and engagement in online spaces (such as by not engaging in particular discussions in particular places, or making one’s online presence more private). These self-protection strategies, including self-censorship, have been noted by others (Adams, 2018; Citron, 2014; Eckert, 2017; Veletsianos et al., 2018; Vitak et al., 2017). Evidence of silencing and ‘flight’ as a response by the recipients of abuse is well documented in the existing research. However, little is known about whether indirect recipients exhibit similar behaviours. A study by Kennedy (2000) noted that some women who had heard about the online abuse of others had become more cautious about their own behaviours online, but this finding is not expanded on in sufficient detail to give a fuller picture of the effects on these women and, predating, as it does, the creation of much modern social media, does not provide a suitably current picture of the effects of online gendered hate. As well as flight responses, some women engage in fighting back against online gendered hate, with Jane (2017a, pp. 
50–51) noting different categories of action. These acts represent women being motivated to try to resist the abuse they were experiencing (Lewis et al., 2016) and reclaim space in the online world. Particularly powerful as a fight response, and most thoroughly discussed in the existing literature, are forms of traditional activism. These include campaigning and speaking out about experiences (Antunovic, 2018; Chemaly, 2013; Mantilla, 2015; Veletsianos et al., 2018), and actions which have sought to create spaces of resistance online; what Fraser (1990) describes as
‘subaltern counterpublics’. Literature has looked at women-only or feminist forums and friendships as spaces of support and help (Shaw, 2013); carefully curated and moderated websites and blogs (Wazny, 2010); and the building of community and support around hashtags (Dixon, 2014; Drüeke & Zobl, 2016; Horeck, 2014). Performance activism—the humorous and creative repurposing of cyberhate (Jane, 2017a, p. 51)—is less thoroughly explored in the literature, with the focus more on performance as a response to rape culture and general misogyny on the internet (Mendes et al., 2018) or a response to ‘dick pics’ (Vitis & Gilmour, 2017). A further way of fighting online gendered hate in Jane’s (2017a, pp. 50–51) categorisation is digital vigilantism, or ‘digilantism’. This is described as ‘do-it-yourself attempts to secure justice online’ through acts such as hacking, scamming, public shaming, and encouraging others to harass the perpetrators of abuse (Jane, 2017b). Discussion of this in the literature is limited to Jane’s (2016, 2017a, 2017b) research and the findings in Smith (2019). It is apparent that there is still work to be done in looking at how women fight back when faced with online gendered hate, and there is certainly scope for looking in more depth at some of the particular challenges and risks associated with performance activism and digital vigilantism. Again, research in this area has not focused on the role that ‘bystanders’ to online abuse can play in creating and contributing to activism, to online spaces, and to providing support to those being targeted for abuse.
Methodology

The findings in this chapter are from a larger research project which sought to explore how feminist women experienced, understood and responded to online gendered hate. Drawing on a range of approaches to feminist research (including Burgess-Proctor, 2015; Cook & Fonow, 1985; Oakley, 1981; Stanley & Wise, 1993), this study used focus groups and interviews to gather and give voice to women’s experiences. In total, 26 women were recruited, 6 of whom described themselves as having been the direct targets of online misogyny. The remaining
women defined their experiences as indirect. Participants were required to be over 18, live in England or Wales, identify as a woman (including genderqueer, non-binary, and trans women), and identify as a feminist or feminist-sympathising. Interviews and focus groups were conducted online, with those who had received abuse themselves taking part in interviews, and those who had encountered abuse indirectly participating in focus groups and follow-up interviews. The data was analysed thematically (Braun & Clarke, 2006, 2012). Throughout the research careful attention was paid to ethics and to reflexivity. Ethical guidance from the British Society of Criminology (2015) and the Association of Internet Researchers (Markham & Buchanan, 2012) was taken into account, and ethical approval was granted by the University of Surrey. Particular thought was given to the power relationships within the research, the risks to participants of being targeted for further abuse, and the risks to myself as a researcher. Reflection on my role as researcher was undertaken during all stages of the project, and I drew on the work of Campbell (2002) in recognising the value of paying attention to emotions during the research.
Findings and Discussion

The relevance of those with indirect experience of abuse was apparent in different ways throughout the research, and speaks to the importance of understanding the broader effects of online misogyny. Two key aspects of these experiences are discussed here: (i) indirect recipients experienced consequences as a result of the abuse of others; and (ii) their indirect experiences of online abuse led some to become engaged with responding to and resisting online abuse. These findings suggest that when conceptualising online gendered hate, when examining its consequences, and when thinking about responses to these behaviours, both direct and indirect experiences need consideration. The data suggests that effects on wellbeing and on engagement with online spaces may not be limited to those who receive abuse directly, indicating that online gendered hate is more harmful than has previously been understood. More positively, the findings show how indirectly
encountering online gendered hate has motivated some women to engage in providing support, assistance, or activism.
Indirect Recipients and the Effects of Abuse

Online gendered hate affected the participants in a range of different ways. Direct recipients of abuse described the consequences for their mental and physical health, for their work (in terms of having to take time off work, or not being able to take on work, with one participant losing her job), and for their professionalism. These consequences had not been a part of the harms experienced by indirect recipients, although they were aware of them as some of the harms of being targeted for online gendered hate. Indirect victims did, however, experience fear of online abuse. For indirect recipients, this fear occurred in several ways: fear of seeing online gendered hate; fear of being targeted for abuse if they were to engage in public feminist discussion in some online spaces; fear of experiencing some of the consequences of receiving online gendered hate; and fear of not knowing how they would feel, cope, or deal with being subjected to abuse online. These fears led many of the indirect recipients of abuse to take actions to avoid abuse or to minimise the risk of it. Sylvia P., for example, noted that:

There are absolutely places that I won’t go. Because I’ve seen what happens to women that do, and I’ve got so much admiration for the women that do go there, and I know what happens to them and I know how that will trigger my anxiety problems and I’m not prepared to go there.
Sylvia P. knew that there were spaces online that were risky for women who chose to occupy those spaces and engage publicly in discussion in them. She knew what had happened to some of these women, and she knew that even visiting some of these risky online spaces would have an effect on her mental wellbeing. She thus imposed restrictions on herself in not using those public online discussion forums. Heidi C. commented that:
I mean [edited name] I followed her on Twitter and I just saw some of the stuff that she was getting…it scared me enough. To put me off, which I guess is their aim.
Here, Heidi C.’s experience of seeing a woman being abused created fear, which in turn caused her to limit her use of Twitter. In noting that this ‘is their aim’ she speaks to the idea that causing fear—and the subsequent control of women’s participation in online spaces—is the aim of those perpetrating abuse. In her focus group Heidi C. further identified how online gendered hate controlled women because of the particular language used:

Heidi C.: they [women] need to shut up. Shut up now. So getting them to shut up they use their fears. What are we afraid of? Rape and murder by men.

Cleo T.: Yeah.

Heidi C.: Even if they don’t shut up immediately, no one reads “I’m going to rape you” etc. and thinks hey ho. It will always upset women.
Using threats of rape and murder as forms of online abuse was, in Heidi C.’s view, using something that women are taught to fear from an early age, something that women all knew to be afraid of. Heidi’s experience here speaks particularly to how fear can be a form of control, as much as acts of violence themselves can ensure that women remain in their place (Brownmiller, 1975; Millett, 1970). Fear of being targeted led most of the indirect recipients of online gendered hate to take actions to try to protect or defend themselves. These usually amounted to self-restriction of participation in online spaces, either through not participating in conversations in a space, or through avoidance of that place entirely; in other words, flight responses (Jane, 2017a, pp. 50–51). Their freedom to use the internet without constraint, to occupy and participate in any space they wished, was curtailed as a result of the abuse that they had seen. They were receiving the message that being a vocal and active feminist in some of these online spaces was dangerous. Cath B., talking about her own blog, said:
I’ve gone for weeks not posting anything because the only things I can think of are topics that I wouldn’t…and here’s an interesting word… I wasn’t brave enough to post…now why should I be thinking about writing something online in terms of ‘brave’ and I do think that feminists online with public profiles, I think of them as brave…
The idea that women engage in defensive and protective actions in the face of fear of violence is apparent in the feminist literature on violence against women; Stanko (1990) describes ‘routines of safety’ and Kelly (in the foreword to Vera-Gray, 2016, p. xi) talks of ‘safety work’. These are the strategies or acts undertaken to avoid or cope with men’s violence, and to try to reduce the fear of being subjected to it. Engagement in this safety work came not from needing to protect oneself from the actuality of online abuse, but from the risk that one might experience it. As Stanko notes in relation to offline violence, one way in which women learn about the risks to their own safety is through seeing what other women experience, something very apparent for these participants. Indirect recipients of abuse were therefore ‘constructing their ideas about safety and risk, their behaviours, and their emotional wellbeing around these experiences of others’ (Smith, 2019, p. 150). The significance of indirect recipients undertaking the safety work of trying to avoid or minimise risk also speaks to online gendered hate sending a message beyond the initial target of the abuse, to other women. This reflects the idea from hate crime theory that hate crimes are not merely about subduing an individual, but about controlling the community to which the individual belongs (Chakraborti & Garland, 2015, p. 5; Perry, 2001, p. 10). In committing an act of hate against one person, the perpetrator is sending a message to others (Iganski, 2001; Noelle, 2002; Perry & Alvi, 2012). Here too, online abuse is sending a message to those who are similar to the targets of abuse: behave appropriately, do not speak or act in ways which challenge the dominant power structures and hierarchies, or you too will face this abuse. This message seeks to ‘reaffirm the precarious hierarchies that characterise a given social order’ (Perry, 2001, p.
10) and control those who seek to challenge it. That the message is being heard, loud and clear, by those encountering online abuse indirectly, presents a troubling picture in which the harms and
control of online abuse are more widespread than has previously been explored.
Indirect Experiences and Fighting Back

In addition to experiencing fear and subsequent constraint of their behaviours, indirect recipients of abuse engaged in acts to resist or fight back against the online abuse they saw. Experiences of fear and control, and of fighting back, were not necessarily mutually exclusive: interestingly, these indirect recipients found ways to resist while still protecting themselves. This resulted in women engaging with those perpetrating abuse, and the use of alternative spaces where women were safer to talk—or, as Fraser (1990) describes them, subaltern counterpublics. There was also discussion of the potential and positivity of digilantism, although no engagement in it. Some participants were willing to step into abusive situations as a means of supporting the recipient of the abuse, which Gillian L. described as being intended to draw ‘their fire by turning Sauron’s eye away from the hobbits towards myself’.4 Others noted that stepping into an abusive situation, while not always necessary or appropriate, could be beneficial when ‘it feels like it might not be a total lost cause’ (Juliet K.). Indirect recipient intervention in abusive situations therefore has several possible purposes when it comes to resisting online abuse. Firstly, it can draw attention away from the recipient of the abuse to someone else or a group of individuals. This could provide relief to the recipient, but quite obviously carries the risk that the indirect recipient may be targeted. Secondly, interceding may provide the opportunity to correct or educate the perpetrator of abuse. This can be to correct any inaccuracies made by the perpetrator, or to ‘call out’ their abuse.
Correcting misconceptions or calling out the misbehaviour of the perpetrator could be beneficial in educating them and demonstrating that the missives are wrong, but can also have a wider effect in showing other onlookers that the things said are not only factually wrong but also morally (and even legally) problematic. It shows disapprobation of these acts of online gendered hate and sends a message that women will challenge this. These acts of speaking out against online abuse are acts of defiance against expectations that women will stay in their place and not challenge acts which oppress and constrain them. Other participants talked about how they used their online feminist spaces as places where it was safe for them to engage in discussions about things which might lead to abuse if talked about in public. Furthermore, they could use these sites to practise their arguments and to engage with healthy and constructive challenge to their views, without the fear of abuse. Lewis et al. (2015, pp. 7–8) note this in their discussion of safe spaces online, where safety from abuse created safety to speak out. Cath B. explored this in discussing her use of feminist spaces:

And online space, especially safer spaces that are locked down to women only, and the people that identify away from the gender binary, those are spaces where you can test out your discussions, you, you work on ideas so if someone actually says ‘not all men’ to me in a post, how can I respond to this with a good argument? …I think the internet has given spaces for people new to feminism to really…practise their feminism…

4 Gillian here refers to Tolkien’s work The Lord of the Rings (1992). The Eye of Sauron is a manifestation of the antagonist Sauron, watching over the lands of Middle Earth for the protagonists of the story—the hobbits Frodo and Sam, and their allies.
Another participant noted how these spaces helped her to work out and phrase particular things she wanted to say in public about feminism:

…there is also support in the case of a Facebook argument where someone can help you come up with something a bit more elaborate which is very important for a foreign-speaker like I am. Because I so often have issues formulating sentences to clearly explain what I mean… I use the terms I know which to my ear sounds simplistic when I actually meant something really quite elaborate. (Sophia C.)
These online sites therefore provided support for women in being a safe place for discussion, and also a site for learning and refinement of views and arguments. While on the face of it, using these spaces might seem like a flight response—retreating away from public through fear of being targeted for abuse—they can be interpreted as a fight action in which
women came together to learn from each other, to discuss feminism safely, and to feel better equipped to return to public spaces to engage there. A final response discussed by those who had indirect experience of abuse was digilantism. While these participants had not engaged in acts of digilantism, they saw them as a positive way of responding to online abuse: a form of support, of self-defence, or of defence of another. Interestingly, they also reframed acts which were forms of abuse when perpetrated against women as ‘different’. Recognition that the men perpetrating abuse would not see this distinction was apparent in a focus group discussion where the participants discussed ‘dogpiling’.5

Cath B.: I guess the men doing the dogpiling see it as the same thing.

Elle B.: I don’t think so, because the power balance isn’t the same.
[…]

Cath B.: I think, when women wade in, we’re supporting each other, showing that we’re not lone voices. That the woman isn’t alone, that we want our voices to be heard.
[…]

Elle B.: we’re always trying to get them to understand. They’re trying to get us to be quiet.
The construction of this behaviour by these participants was as an act of defiance or defence when used by women against those acting abusively, and therefore as something qualitatively different from the abuse women received when groups of men perpetrated abuse through dogpiling. This behaviour was justifiable because of the power imbalance between perpetrators and ‘victims’ of abuse. That same lack of power in the relationship between victim and perpetrator also meant that the acts of digilantism which participants supported lacked the power to make them abusive.
5 ‘A situation in which criticism or abuse is directed at a person or group from multiple sources’ (Oxford Dictionaries, 2019).
Conclusion

These findings sit within a growing body of literature that explores how online misogyny and abuse can be conceptualised and understood, and how women respond to such acts. It contributes new insights into the broader consequences of abuse for those women who encounter abuse on the internet, demanding that we recognise the wider harms that online misogyny can cause. This, in turn, speaks to the importance of addressing online misogyny in order to make the internet a safer place for all women, whether they are engaging in public conversation, or simply observing such activity. While this chapter has explored the acts of resistance that some women engage in, or see as a valuable response to online abuse, it is necessary to remember that engagement in fighting back against online misogyny carries the risk of generating further abuse, and puts the onus on women—be they direct or indirect recipients of abuse—to solve this problem themselves. In turn, this takes attention away from more formal approaches to responding to abuse, such as intervention by the law or social media sites. Despite this, there is empowerment in women fighting back against online abuse, or seeing others doing this, not least in creating community between women in resisting acts that are rooted in patriarchal power and control.

Acknowledgements The author(s) received financial support from the University of Surrey for the research, authorship, and/or publication of this article.
References

Adams, C. (2018). “They go for gender first”: The nature and effect of sexist abuse of female technology journalists. Journalism Practice, 12(7), 850–869. https://doi.org/10.1080/17512786.2017.1350115
Antunovic, D. (2018). “We wouldn’t say it to their faces”: Online harassment, women sports journalists, and feminism. Feminist Media Studies, 1–15. https://doi.org/10.1080/14680777.2018.1446454
Awan, I. (2014). Islamophobia and Twitter: A typology of online hate against Muslims on social media. Policy & Internet, 6(2), 133–150. https://doi.org/10.1002/1944-2866.POI364
Awan, I., & Zempi, I. (2017). ‘I will blow your face OFF’—Virtual and physical world anti-Muslim hate crime. The British Journal of Criminology, 57(2), 362–380. https://doi.org/10.1093/bjc/azv122
Barlow, C., & Awan, I. (2016). “You need to be sorted out with a knife”: The attempted online silencing of women and people of Muslim faith within academia. Social Media + Society, 2(4), 1–11.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology, Vol 2: Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 57–71). American Psychological Association. https://doi.org/10.1037/13620-004
Brownmiller, S. (1975). Against our will: Men, women and rape. Simon and Schuster.
BSC. (2015). British Society of Criminology: Statement of ethics. British Society of Criminology. http://www.britsoccrim.org/new/docs/BSCEthics2015.pdf
Burgess-Proctor, A. (2015). Methodological and ethical issues in feminist research with abused women: Reflections on participants’ vulnerability and empowerment. Women’s Studies International Forum, 48, 124–134.
Campbell, R. (2002). Emotionally involved: The impact of researching rape. Routledge.
Chakraborti, N., & Garland, J. (2015). Hate crime: Impact, causes and responses (2nd ed.). SAGE.
Chemaly, S. (2013, January 28). The digital safety gap and the online harassment of women. Huffpost. https://www.huffingtonpost.com/soraya-chemaly/women-online-harassment_b_2567898.html
Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.
Citron, D. K., & Franks, M. A. (2014). Criminalising revenge porn. Wake Forest Law Review, 49(2), 345–391.
Cook, J. A., & Fonow, M. M. (1985). Knowledge and women’s interests: Issues of epistemology and methodology in feminist sociological research. Sociological Inquiry, 56(1), 2–29. https://doi.org/10.1111/j.1475-682X.1986.tb00073.x
Dixon, K. (2014). Feminist online identity: Analyzing the presence of hashtag feminism. Journal of Arts and Humanities, 3(7), 34–40.
Drüeke, R., & Zobl, E. (2016). Online feminist protest against sexism: The German-language hashtag #aufschrei. Feminist Media Studies, 16(1), 35–54.
Eckert, S. (2017). Fighting for recognition: Online abuse of women bloggers in Germany, Switzerland, the United Kingdom, and the United States. New Media & Society, 20(4), 1282–1302. https://doi.org/10.1177/1461444816688457
Fraser, N. (1990). Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social Text, 25(26), 56–80.
Henry, N., & Powell, A. (2015). Beyond the ‘sext’: Technology-facilitated sexual violence and harassment against adult women. Australian & New Zealand Journal of Criminology, 48(1), 104–118. https://doi.org/10.1177/0004865814524218
Henry, N., & Powell, A. (2016). Technology-facilitated sexual violence: A literature review of empirical research. Trauma, Violence, & Abuse, 19(2), 195–208. https://doi.org/10.1177/1524838016650189
Herring, S. C. (1999). The rhetorical dynamics of online harassment. The Information Society, 15, 151–167.
Horeck, T. (2014). #AskThicke: “Blurred Lines”, rape culture, and the feminist hashtag takeover. Feminist Media Studies, 14(6), 1105–1107. https://doi.org/10.1080/14680777.2014.975450
Iganski, P. (2001). Hate crimes hurt more. American Behavioral Scientist, 45(4), 626–638.
Jane, E. A. (2016). Online misogyny and feminist digilantism. Continuum, 30(3), 284–297. https://doi.org/10.1080/10304312.2016.1166560
Jane, E. A. (2017a). Feminist flight and fight responses to gendered cyberhate. In L. Vitis & M. Segrave (Eds.), Gender, technology and violence (pp. 45–61). Routledge.
Jane, E. A. (2017b). Feminist digilante responses to a slut-shaming on Facebook. Social Media + Society, 3(2), 1–10. https://doi.org/10.1177/2056305117705996
Jane, E. A. (2018). Gendered cyberhate as workplace harassment and economic vandalism. Feminist Media Studies, 18(4), 575–591. https://doi.org/10.1080/14680777.2018.1447344
Kennedy, T. L. (2000). An exploratory study of feminist experiences in cyberspace. CyberPsychology & Behavior, 3(5), 707–719.
Lewis, H. (2011, November 3). ‘You should have your tongue ripped out’: The reality of sexist online abuse. New Statesman. http://www.newstatesman.com/blogs/helen-lewis-hasteley/2011/11/comments-rape-abuse-women
Lewis, R., Rowe, M., & Wiper, C. (2016). Online abuse of feminists as an emerging form of violence against women and girls. British Journal of Criminology, 57(6), 1462–1481. https://doi.org/10.1093/bjc/azw073
Lewis, R., Rowe, M., & Wiper, C. (2018). Misogyny online: Extending the boundaries of hate crime. Journal of Gender-Based Violence, 2(3), 519–536. https://doi.org/10.1332/239868018X15375304472635
Lewis, R., Sharp, E., Remnant, J., & Redpath, R. (2015). ‘Safe spaces’: Experiences of feminist women-only space. Sociological Research Online, 20(4), 1–14. https://doi.org/10.5153/sro.3781
Madden, S., Janoske, M., Briones Winkler, R., & Edgar, A. N. (2018). Mediated misogynoir: Intersecting race and gender in online harassment. In J. R. Vickery & T. Everbach (Eds.), Mediating misogyny: Gender, technology, and harassment (pp. 71–90). Springer International Publishing. https://doi.org/10.1007/978-3-319-72917-6_4
Mantilla, K. (2015). Gendertrolling: How misogyny went viral. Praeger.
Markham, A., & Buchanan, E. (2012). Ethical decision making and Internet research. Association of Internet Researchers. http://aoir.org/reports/ethics2.pdf
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37(3), 534–561. https://doi.org/10.1093/ojls/gqw033
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond ‘revenge porn’: The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46. https://doi.org/10.1007/s10691-017-9343-2
Megarry, J. (2014). Online incivility or sexual harassment? Conceptualising women’s experiences in the digital age. Women’s Studies International Forum, 47, 46–55. https://doi.org/10.1016/j.wsif.2014.07.012
Mendes, K., Ringrose, J., & Keller, J. (2018). #MeToo and the promise and pitfalls of challenging rape culture through digital feminist activism. European Journal of Women’s Studies, 25(2), 236–246. https://doi.org/10.1177/1350506818765318
Millett, K. (1970). Sexual politics. Doubleday.
Noelle, M. (2002). The ripple effect of the Matthew Shepard murder: Impact on the assumptive worlds of members of the targeted group. American Behavioral Scientist, 46(1), 27–50.
Oakley, A. (1981). Interviewing women: A contradiction in terms. In H. Roberts (Ed.), Doing feminist research (pp. 30–61). Routledge.
Oxford Dictionaries. (2019). Dogpile. In Oxford dictionaries. Oxford University Press. https://en.oxforddictionaries.com/definition/dogpile
Penny, L. (2011, November 4). A woman’s opinion is the mini-skirt of the internet. The Independent. https://www.independent.co.uk/voices/commentators/laurie-penny-a-womans-opinion-is-the-mini-skirt-of-the-internet-6256946.html
Perry, B. (2001). In the name of hate: Understanding hate crimes. Routledge.
Perry, B., & Alvi, S. (2012). ‘We are all vulnerable’: The in terrorem effects of hate crimes. International Review of Victimology, 18(1), 57–71. https://doi.org/10.1177/0269758011422475
Powell, A., & Henry, N. (2017). Sexual violence in a digital age. Palgrave Macmillan.
Shaw, F. (2013). Still ‘searching for safety online’: Collective strategies and discursive resistance to trolling and harassment in a feminist network. The Fibreculture Journal, 22, 93–108.
Smith, J. (2019). Feminist women’s experiences of online gendered hate [University of Surrey]. http://epubs.surrey.ac.uk/852351/
Stanko, E. (1990). Everyday violence: How women and men experience sexual and physical danger. Pandora.
Stanley, L., & Wise, S. (1993). Breaking out again: Feminist ontology and epistemology. Routledge.
Tolkien, J. R. R. (1992). The lord of the rings. Grafton.
Veletsianos, G., Houlden, S., Hodson, J., & Gosse, C. (2018). Women scholars’ experiences with online harassment and abuse: Self-protection, resistance, acceptance, and self-blame. New Media & Society, 20(12), 4689–4708. https://doi.org/10.1177/1461444818781324
Vera-Gray, F. (2016). Men’s intrusion, women’s embodiment. Routledge.
Vera-Gray, F. (2017). ‘Talk about a cunt with too much idle time’: Trolling feminist research. Feminist Review, 115(1), 61–78. https://doi.org/10.1057/s41305-017-0038-y
Vitak, J., Chadha, K., Steiner, L., & Ashktorab, Z. (2017). Identifying women’s experiences with and strategies for mitigating negative effects of online harassment. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing—CSCW ’17 (pp. 1231–1245). https://doi.org/10.1145/2998181.2998337
20 Bystander Experiences of Online Gendered Hate
413
Vitis, L., & Gilmour, F. (2017). Dick pics on blast: A woman’s resistance to online sexual harassment using humour, art and Instagram. Crime, Media, Culture, 13(3), 335–355. https://doi.org/10.1177/1741659016652445 Wazny, K. M. (2010). Feminist communities online: What it means to be a Jezebel. B Sides, 8, 1–23.
Part VI Technologies for Justice
21 Police Body-Worn Cameras in Response to Domestic and Family Violence: A Study of Police Perceptions and Experiences Mary Iliadis, Zarina Vakhitova, Bridget Harris, Danielle Tyson, and Asher Flynn
M. Iliadis · D. Tyson, Deakin University, Burwood, VIC, Australia
Z. Vakhitova · A. Flynn, Monash University, Clayton, VIC, Australia
B. Harris, Queensland University of Technology, Brisbane, QLD, Australia

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_21

Introduction

The adoption of body-worn cameras (BWCs), which are attached to a police officer's vest, cap or sunglasses, has been embraced by international
police organisations as a mechanism to aid investigative and prosecutorial processes in formal justice system responses to crime, including domestic and family violence (DFV). This has included the adoption of BWCs by frontline Australian police agencies over the last five years in an attempt to enhance responses to DFV, including to victim/survivors (Axon, 2017; Harris, 2018, 2020). Some police organisations, such as New South Wales (NSW) in 2015, have launched DFV-specific applications for these technologies. In other jurisdictions, BWCs have been rolled out for use in all generalist operations (which include DFV callouts), although many locations have trialled BWCs in DFV-specific responses. Facilitated by legislative, policy and practice shifts, and prompted by significant inquiries into DFV, BWCs have been taken up or piloted within DFV programs in the Australian Capital Territory (2015), Queensland (2016, prompted by the Special Taskforce on DFV), South Australia (2016), Victoria (2016, following the Royal Commission into Family Violence [RCFV]), the Northern Territory (2017, after 2016 pilots), and Tasmania and Western Australia (trialled in 2016 in Western Australia and 2018 in Tasmania, and expanded in 2019 in both jurisdictions; see Billings, 2018; Clare et al., 2019a; Taylor & Lee, 2019).1 Politicians, police and judicial officers have contended that BWC technology is a "game changer" in DFV responses (Cardozo, 2015).
Proponents claim it has the potential to strengthen evidential cases (Goodall, 2007; Owens et al., 2014); improve the probability of guilty pleas (and early guilty pleas); increase the likelihood of arrest and/or conviction (Morrow et al., 2016; Palmer, 2016); reduce state resources expended in investigating and prosecuting matters (Braga et al., 2017; Harris, 2018); bolster police accountability and transparency (Harvard Law Review, 2015); enhance confidence in police and perceptions of procedural justice (Gannoni et al., 2017; Stoughton, 2018); reduce incivility and violence against police (Gaub et al., 2018); and lessen victim/survivors' time and trauma associated with criminal justice processes (Harris, 2020; Neave and State of Victoria, 2016; White, 2014). More optimistically (or perhaps controversially), others claim that BWCs could reduce rates of DFV and recidivism (Aggs, 2015). BWCs have also been identified as having the potential to improve public perceptions of safety and police practice (Taylor et al., 2017), and to contribute to police training and education programs (Stoughton, 2018).

1. Western Australia trialled BWCs in 2016 for six months and then disseminated devices to all frontline officers by June 2019. By July 2020, the WAPOL had completed the full roll-out of BWCs to frontline police, including to the Tactical Response Group, making it the only police agency in Australia and New Zealand to have done so to date.

Despite the rapid rollout of BWCs, there has been little investigation and assessment of DFV-specific applications internationally, and of those that exist, few are publicly available (Cubitt et al., 2016; Harris, 2018; Morrow et al., 2016; Murphy, 2015). Only one open-access report released in Australia has included police perspectives. McCulloch et al. (2020, p. 3) conducted a qualitative evaluation in Victoria comprising interviews and focus groups with stakeholders, including police, finding "a general consensus" that BWC technology "improves frontline responses to family violence", including for victim/survivors. An internal inquiry on BWCs conducted in NSW in 2016 has not been publicly released. As police are the direct users of BWC technology, their perspectives and experiences, particularly regarding whether BWCs have the capacity to impact public views of procedural fairness in decision-making, and the associated levels of transparency and accountability in DFV responses, are key to examine. This chapter addresses this knowledge deficit, discussing results from the most extensive study to date with two Australian police organisations: the Queensland Police Service (QPS) and Western Australian Police Force (WAPOL).2 Policy guiding BWC use in the QPS permits officers to employ BWCs in "inadvertent or unexpected" circumstances or "while acting in the performance of the officer's duties" (s. 609A(2)(a) & (b) Police Powers and Responsibilities Act 2000) to capture images or sounds before and during the exercise of a police power under legislation or when applying the use of force (see s. 609A(5)(b)(ii) Police Powers and Responsibilities Act 2000).3 In the WAPOL, police can activate BWCs where it is safe and practicable to do so "when exercising a legislated or common law power and the recording is likely to assist in capturing evidence" and/or "when the recording may provide a record of activity applicable to an operational matter and will afford transparency of actions" (Western Australian Police Force, 2020, pp. 5–6). Across Australian jurisdictions, there is notable variation in which BWC policies are publicly available, as well as in the protections provided to vulnerable people being filmed without their informed consent (Harris, 2020). Some jurisdictions, including NSW, require victim/survivors' consent to record, but in other locations this is not needed. There are also discrete differences in organisational policy and practice guiding responses to DFV. BWC guidelines are necessary for understanding applications, but deployment is also shaped by DFV procedures, training and cultures at local and institutional levels of policing bodies (Harris, 2020; see Clare et al., 2019b; The Leadership Conference on Civil and Human Rights and Upturn, 2017). This chapter does not set out to examine whether the use of BWCs meets the requirements set out in operational manuals, or to investigate and contrast the factors shaping policing in different jurisdictions. Instead, we explore the varying situational contexts in which police use BWCs and their impressions of the technology. Drawing on a survey with 452 responses from the QPS and WAPOL, this chapter explores police insights into BWC use in DFV responses. In the study, we were particularly interested in whether or not police felt that BWCs had the potential to transform public perceptions of police levels of transparency and accountability in DFV responses, as well as public confidence in procedural fairness in police decision-making.

2. While the data presented is the focus of our work in this chapter, it complements our broader research agenda and forthcoming projects which feature DFV advocate voices and views and, importantly, those who have been overlooked in studies to date: victim/survivors.
Recognising that individual differences could impact experiences and impressions, we accounted for and reviewed how demographics, employment history and experience, including the number of years working for the police, age, gender and role, shaped police views of BWCs.4 This allowed us to understand how and in what contexts the technology is employed, the frequency of use, and officer determinations of its utility. While BWC guidelines dictate how the technology should be used in theory, our findings offer insights into actual BWC deployment, as well as the potential benefits of this technology that have emerged or could emerge, as noted by police. This chapter begins by outlining the methodology of the study. It then presents the study's descriptive findings in relation to police demographics and characteristics, before providing an overview of BWC deployment across the QPS and WAPOL in DFV callouts and policing responses more generally. Police perspectives on the potentials of BWCs are then presented. We contend that, in the interests of realising the promises of this technology, it is imperative that police expertise, training and experience guide future BWC initiatives in DFV responses. Reflecting on our findings, we suggest that future research should examine possible differences between generalist and specialist police views and use of BWCs in greater depth.

3. See also section 14.3.2 of the 'Situational Use of Force Model (2009)' of the Operational Procedures Manual (QPS, 2021).
Methodology

This chapter presents findings from the first Australian study to examine the use of police BWCs as a response to DFV in Queensland and Western Australia. The chapter is guided by two research questions: (1) How might BWC technology impact public perceptions of police transparency and accountability in DFV responses? and (2) How do police officers' characteristics (their age, gender, number of years working with the police and role in DFV responses), and their experiences with BWCs, influence their views about the potential for BWCs to enhance public perceptions of procedural fairness in police decision-making and promote a positive image of the police? We focus here on differences and/or similarities among the views of specialist and generalist DFV police respondents, as well as those who do not deal with DFV, in relation to the deployment of BWCs and the potential strengths of the technology.

4. The aim of this chapter is not to contrast police views of BWCs across the QPS and WAPOL, but rather to explore how police demographics and employment history, such as whether police work within a specialist DFV unit or encounter DFV as part of their generalist response, might influence their views toward the technology in DFV.
The Survey Instrument

To answer these research questions, an anonymous online survey was distributed to the QPS and WAPOL using the Qualtrics online platform between 6 July 2020 and 20 September 2020.5 The survey took respondents approximately 15 minutes to complete. It was informed by a mixed-methods research design, featuring closed and open-ended questions, and measured police perceptions about the potential for BWCs to bolster public views of the police, procedural fairness in DFV incidents, and transparency and accountability in decision-making processes. Police perceptions were measured using three seven-point Likert scales (0–6), with 0 indicating "extremely unlikely" and 6 "extremely likely". The survey also collected demographic information about participants' age, gender and professional role, including the number of years working for the police, and whether police were involved in a specialist DFV team or encountered DFV as part of their generalist response. It also asked questions regarding prior BWC use, including contexts and frequency of use. Because the QPS had employed BWC technology for longer than the WAPOL, we accounted for this variance in the survey design by presenting different options to measure the frequency of BWC use. The QPS respondents were presented with four options: (1) daily; (2) at least once a week; (3) at least once a month; and (4) several times a year, whereas the WAPOL participants were given five options: (1) 0 times; (2) 1–4 times; (3) 5–9 times; (4) 10–14 times; and (5) more than 15 times. The inclusion of these different response sets enabled meaningful data collection and accuracy in measuring the frequency of BWC use across the two police organisations.

5. This project received ethics approval from Deakin University's Human Research Ethics Committee on 5 September 2019 (approval number: 2019-297). Ethics approval was also granted by the QPS Research and Evaluation Unit (reference number: QPSRC-1219-3.01), and approval was given by the WAPOL for the undertaking of this research and publication of findings.
Data Collection

The survey was distributed with the assistance of the QPS and WAPOL. The WAPOL broadcast the survey to all of its members, while at the QPS the survey was distributed by the Family Violence and Vulnerable Persons Unit (VPU) to generalist and specialist DFV police members. Eligibility for participation required police members to be at least 18 years of age. The QPS research committee restricted data collection to 50 responses. While this may be perceived as a limitation of the study, we remain confident in the richness of the data and analysis presented. The number of participants from the WAPOL was not restricted. In total, 477 survey responses were recorded: 50 from the QPS and 427 from the WAPOL. Only surveys that were at least 50% complete (N = 452) were included in the final analysed dataset: 49 responses from the QPS and 403 from the WAPOL.
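The 50%-completion inclusion rule described above can be sketched as follows. This is an illustrative sketch only: the column names and data are hypothetical, not drawn from the study's instrument.

```python
# Sketch of the inclusion rule: keep only surveys at least 50% complete.
# All field names and values below are invented for illustration.
import pandas as pd

raw = pd.DataFrame({
    "org": ["QPS", "WAPOL", "WAPOL", "QPS"],
    "q1":  [5, 3, None, 4],
    "q2":  [4, None, None, 6],
    "q3":  [6, 2, None, None],
    "q4":  [3, 1, None, 5],
})
items = ["q1", "q2", "q3", "q4"]

# Share of survey items each respondent answered
completion = raw[items].notna().mean(axis=1)

# Apply the chapter's rule: retain responses that are >= 50% complete
analysed = raw[completion >= 0.5]
print(len(analysed))  # the all-missing respondent is excluded
```

The same filter scales directly to the full item set; partially complete responses (here, three of four items answered) are retained, mirroring the chapter's N = 452 analysed dataset.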
Analytic Strategy

The analytical approach involved both quantitative and qualitative analyses. First, exploratory quantitative analyses were conducted, including univariate statistics for each variable of interest and analysis of the bivariate relationships between variables. To better understand the nuances of participants' perceptions of and experiences with BWCs, a thematic content analysis was applied to the open-ended responses using Microsoft Excel. The quantitative analyses were conducted using R, a language and environment for statistical computing. Density plots were produced to discern participants' perceptions of the potential for BWCs to: (1) improve public views of the police; (2) enhance public perceptions of and confidence in procedural fairness in police decision-making; and (3) increase levels of transparency and accountability in DFV responses. We also conducted Wilcoxon signed-rank tests to compare the scores on the three dimensions of interest.
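The paired comparison just described can be sketched as below. This is an illustrative sketch only: the study's analyses were run in R, whereas this uses scipy, the data are randomly simulated, and the variable names are ours.

```python
# Sketch of a Wilcoxon signed-rank comparison of two paired Likert-scale
# ratings (0-6), as described above. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 400  # hypothetical number of respondents

# Each officer rates two outcomes; 'transparency' is simulated to be
# rated somewhat higher on average than 'perception_police'.
perception_police = rng.integers(2, 7, size=n)
transparency = np.clip(perception_police + rng.integers(0, 3, size=n), 0, 6)

res = stats.wilcoxon(perception_police, transparency)

# A common effect-size convention: r = z / sqrt(N), recovering |z|
# from the two-sided p-value of the normal approximation.
z = stats.norm.isf(res.pvalue / 2)
r = z / np.sqrt(n)
print(f"p = {res.pvalue:.3g}, r = {r:.2f}")
```

Because the Likert data are discrete, many paired differences are zero; scipy's default `zero_method` discards them before ranking, which is one of several conventions for handling ties.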
This was followed by a series of Mann–Whitney U tests to establish whether participants' gender was associated with any differences and/or similarities in police perceptions of BWC technology in relation to the three aforementioned variables of interest. Spearman correlation coefficients were used to examine the relationship between participants' age, the number of years they had worked for the police, and their perception scores. A one-way between-groups analysis of variance (ANOVA) was also conducted to explore whether police officers' role, that is, as a DFV specialist or generalist, or an officer who reported having no dealings with DFV, impacted their views toward the technology. As State inquiries have emphasised (see Neave and State of Victoria, 2016; Special Taskforce on Domestic and Family Violence in Queensland, 2014), DFV represents a significant proportion of generalist police work. As part of efforts to bolster policing responses to DFV, specialist units, officers and training have emerged (Segrave et al., 2016). Differences between generalist and specialist units are important to study. Australian literature has found greater victim/survivor satisfaction with specialist as opposed to generalist police, where specialist police are not only those whose role is focused on DFV (such as Domestic Violence Liaison Officers in the QPS or WAPOL), but also officers who are trained in (and often belong to) particular community cohorts, such as Cross Cultural Liaison Officers (in the QPS) or Community Liaison Officers (in the WAPOL) (see George & Harris, 2014; Harris & Woodlock, forthcoming). Therefore, we were interested in unpacking any potential variation between generalist and specialist perspectives of BWCs.
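The battery of group-comparison tests just described can be sketched as follows. Again, this is a hedged illustration: the study used R, the scores here are simulated, and the group sizes are hypothetical values echoing the sample description.

```python
# Sketch of the group-comparison tests described above, on simulated data:
# Mann-Whitney U (gender), Spearman correlation (age vs. score), and a
# one-way between-groups ANOVA across three specialisation groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical Likert scores (0-6) by gender
scores_male = rng.integers(0, 7, size=322)
scores_female = rng.integers(0, 7, size=89)
u_stat, p_u = stats.mannwhitneyu(scores_male, scores_female,
                                 alternative="two-sided")

# Spearman's rho between age and perception score
age = rng.integers(20, 60, size=411)
all_scores = np.concatenate([scores_male, scores_female])
rho, p_rho = stats.spearmanr(age, all_scores)

# One-way ANOVA across three hypothetical role groups
no_dfv = rng.normal(4.1, 1.6, size=171)
generalist = rng.normal(4.2, 1.5, size=219)
specialist = rng.normal(4.4, 1.5, size=40)
f_stat, p_f = stats.f_oneway(no_dfv, generalist, specialist)

print(f"Mann-Whitney p = {p_u:.2f}; Spearman rho = {rho:.2f}; "
      f"ANOVA F = {f_stat:.2f}, p = {p_f:.2f}")
```

The non-parametric choices (Mann–Whitney, Spearman) fit ordinal Likert data, while ANOVA treats the group mean scores as approximately continuous.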
Results

Descriptive Characteristics of the Sample

Participants from the QPS and WAPOL were similar in age, with the WAPOL respondents being on average 42 years old and the QPS respondents 40 years old (see Table 21.1). Proportionally, there were more females among the QPS participants (27%) than in the WAPOL
Table 21.1  Descriptive statistics of the sample (N = 452)

                                               QPS (N = 49)       WAPOL (N = 403)
Variable                                       N         %        N         %
Demographic characteristics
  Age (µ(SD))                                  40.2 (9.1)         41.9 (10.8)
  Gender (female)                              13        26.0     81        20.1
  Number of years with the police (µ(SD))      11.6 (9.0)         15.9 (10.2)
  Deals with vulnerable people and DFV         40        80.0     243       60.3
Experience with BWC
  Issued a BWC (yes = 1)                       49        100.0    330       81.9
  Used BWC in response to DFV                  39        79.6     298       73.9
  Used BWC in generalist policing response     49        100.0    321       79.7
sample (20%). When asked "are you involved in an area that deals specifically with vulnerable people and/or domestic and family violence?", the majority of participants from both the QPS and WAPOL indicated that they dealt with vulnerable people and/or DFV as part of their policing response (80% and 60% respectively). Of the QPS respondents, 44% indicated that they were involved with vulnerable people and/or DFV as a result of their generalist duties only, while a further 18% stated that they were involved in the DFV/VPU and therefore specialised in DFV responses. Of the WAPOL participants that responded, 37.5% indicated that they were involved with vulnerable people/DFV as part of their generalist duties only, while a further 13.6% stated that they were involved in a specialist DFV unit. An analysis of the open-ended responses revealed that our sample was fairly diverse in terms of the specialisation of participants. The QPS participants included DFV specialists (coordinators within the DFV/VPU, liaison officers and leaders of the Gold Coast DFV Taskforce, established in January 2016) and generalist members (general duty officers who reported encountering DFV as part of their daily duties). The WAPOL respondents used BWCs both in generalist roles and in specialist DFV units, but also in other specialist units and roles, including as detectives, members of the homicide squad, members of the mental health policing response team, police prosecutors and individuals working within drug operations, homeless and substance abuse operations and with at-risk juveniles.6

As illustrated in Table 21.1, the majority of the WAPOL participants (over 80.0%) and all of the QPS participants (both specialist and generalist) were issued a BWC. Just under three-quarters of the WAPOL respondents (73.9%), working in a variety of units, and 79.6% of the QPS respondents identified having used BWCs in response to DFV. Almost 80% of the WAPOL participants and all of the QPS participants indicated having used BWCs as part of their generalist policing response. The descriptive findings therefore suggest that, regardless of whether or not police are involved in a specialist DFV unit, the majority of police participants encountered DFV as part of their generalist policing response (QPS = 44.0%; WAPOL = 37.5%), and an even greater portion identified having used BWCs in DFV incidents (QPS = 79.6%; WAPOL = 73.9%) and generalist policing responses (QPS = 100%; WAPOL = 79.7%).

Fig. 21.1  Frequency of BWC use among the WAPOL participants
Frequency of Body-Worn Camera Use

In both the QPS and WAPOL survey samples, the majority of participants reported a high frequency of BWC use in accordance with the survey response options presented (see Figs. 21.1 and 21.2). Of the WAPOL participants, 40.0% identified having used BWCs more than 15 times, while 61.0% of the QPS participants stated having used BWCs daily. It is therefore clear that BWCs play a significant role in policing, and their use will likely continue to escalate in the foreseeable future. The situational contexts of their use, as reported below, thus require exploration.

Fig. 21.2  Frequency of BWC use among the QPS participants

6. Participants' roles, as listed here, reflect the descriptions provided in open-ended responses.
Contexts of Body-Worn Camera Use Within Domestic and Family Violence Incidents

To better understand the contexts of, and rationale for, BWC use in DFV responses specifically, police were asked to describe their experiences with BWCs in DFV incidents. As outlined in Table 21.2, police most commonly activated their BWC in all DFV responses and investigations (QPS = 48.0%; WAPOL = 47.4%). This was followed by the use of BWCs for interviewing purposes and/or evidence gathering (QPS = 16.7%; WAPOL = 8.7%) and to capture the DFV scene/environment (QPS = 16.7%; WAPOL = 7.9%). These findings demonstrate that police practice on BWC use is broadly in line with police guidelines, and that BWC activation in DFV responses is highly common among both survey samples. Other situational contexts, each reported by a smaller number of participants (less than 1% of the total from the QPS and/or WAPOL), included the use of BWCs to capture non-physical abuse that might be overlooked in DFV policing responses; to prevent false allegations by the primary aggressor; to capture information that will benefit other services helping victim/survivors; to ensure that
Table 21.2  Contexts of BWC use in DFV responses

                                                                QPS            WAPOL
Context of BWC use                                              N      %       N      %
BWC is activated in all DFV responses and investigations        23     48.0    191    47.4
Interviewing/evidence gathering of victims/offenders/
  witnesses and other parties                                    8     16.7     35     8.7
To capture the DFV scene/environment (including
  subjects/weapons) as evidence                                  8     16.7     32     7.9
BWC is always on, including during daily interactions
  with the public                                                7     14.6     22     5.5
BWC is activated during arrests/cautions/detaining suspects      1      2.1      6     1.5
When issuing police orders                                       0      0.0      6     1.5
To capture victim/survivors' account and demeanour               0      0.0      6     1.5
To record follow-up visits/investigations                        1      2.1      4     1.0
DFV incidents involving physical violence to
  victim/survivors and/or property                               1      2.1      4     1.0
suspects comply once instructed that the BWC is activated; to negate complaints against police; to capture incidents that involve parental abuse of children; when a breach of an apprehended violence order (also known as a protection order) is reported; for police protection; and, as per policy, for note-taking. Less than 1% of participants in the QPS and WAPOL also reported de-activating BWCs if the camera appeared to cause distress to those involved in the DFV incident or if the suspected primary aggressor was not deemed threatening to police officers.
Body-Worn Cameras: Accountability, Transparency and Procedural Fairness in Domestic and Family Violence Responses?

To determine the overall consensus among police officers' views, we first examined whether participants were more or less optimistic about the likelihood that BWCs will improve public perceptions of the police, enhance confidence in procedural fairness, and increase transparency and accountability in decision-making in DFV responses (see Fig. 21.3).
Fig. 21.3 The perceived likelihood that BWCs will positively impact public perceptions of the police, procedural fairness, and transparency and accountability in decision-making in DFV responses. Vertical lines indicate median scores
Overwhelmingly, the results suggest that police officers expect BWC technology to have a positive effect in all three examined domains. This finding supports prior literature, which has found that police officers typically feel quite positive about BWCs and their potential to strengthen public views of transparency, accountability and police practice (Goodall, 2007; Jennings et al., 2014; Taylor et al., 2017). Notably, participants were optimistic about the potential for BWCs to increase transparency and accountability in police decision-making processes (µ = 4.9 out of 6; SD = 1.4)7 and slightly less optimistic about their capacity to improve public perceptions of procedural fairness in DFV incidents (µ = 4.3 out of 6; SD = 1.5) or of the police (µ = 4.1 out of 6; SD = 1.6). A series of Wilcoxon signed-rank tests revealed statistically significant differences between participants' views of how likely BWC technology is to: (a) improve public perceptions of the police; (b) enhance public perceptions of procedural fairness; and (c) increase transparency and accountability. The largest difference was detected between (a) and (c) (z = 10.95, p < 0.001, r = 0.38),8 followed by the difference between (b) and (c) (z = 10.36, p < 0.001, r = 0.36), with the smallest, yet still statistically significant, difference being between (a) and (b) (z = 4.08, p < 0.001, r = 0.14). Participants, therefore, believe that BWCs are most likely to impact positively on public perceptions of police transparency and accountability in DFV responses, and least likely to improve public views of the police.

7. µ = average; SD = standard deviation.

Next, we examined whether any notable differences existed between the perceptions of male and female participants regarding the use of BWCs within the context of DFV. While male participants were marginally more supportive of the notion that BWCs will improve public perceptions of the police (Md = 4.35, n = 322)9 compared to female participants (Md = 4.00, n = 89), a Mann–Whitney U test revealed that the difference was not statistically significant (U = 13,793.5, z = −0.55, p = 0.59). In contrast, female participants were slightly more supportive of the potential for BWCs to improve public confidence in procedural fairness (Md = 4.60, n = 89) and enhance transparency and accountability in DFV incidents (Md = 5.5, n = 89) than male participants (Md = 4.55 and Md = 5.1, n = 322). However, the differences between female and male participants' views in relation to the capacity for BWCs to improve public confidence in procedural fairness (U = 1341.2, z = −0.93, p = 0.35) and enhance transparency and accountability (U = 12,866, z = −1.51, p = 0.13) were not statistically significant. Furthermore, correlation analyses suggest that neither age nor the number of years working for the police affected participants' perceptions (see Table 21.3). The results therefore suggest that participants' age, gender and years in service did not lead to any remarkable differences in how police in the QPS and WAPOL viewed BWCs.
There was a commonly held view among participants that BWCs were most likely to increase transparency and accountability in DFV responses, followed by public perceptions of procedural fairness, but were least likely to improve public views of the police.

8. z value for the Wilcoxon signed-rank test of significance.
9. Md = median value.
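The effect sizes reported for the Wilcoxon comparisons follow the common convention r = z/√N. Back-solving from the reported (z, r) pairs suggests N ≈ 830, consistent with counting observations across both members of each pair (roughly 2 × 415 respondents); this value of N is our inference for illustration, not a figure stated in the chapter.

```python
# Reproduce the reported effect sizes via r = z / sqrt(N).
# N = 830 is an assumed value, inferred from the reported (z, r) pairs.
import math

N = 830
for z, r_reported in [(10.95, 0.38), (10.36, 0.36), (4.08, 0.14)]:
    r = z / math.sqrt(N)
    print(f"z = {z:5.2f} -> r = {r:.2f} (reported: {r_reported})")
```

All three computed values round to the reported effect sizes, supporting the r = z/√N reading of the results.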
Table 21.3  Spearman correlation coefficients (r) for the variables of interest

                     Public perception   Perception of          Perception of transparency
Variable             of police           procedural fairness    and accountability
Age*                 0.00                0.01                   0.03
Years in service*    0.01                0.00                   −0.02

* standardised
Finally, we examined whether participants' roles within the police (for instance, as a DFV specialist, an officer who encounters DFV as part of their generalist policing response, or an officer who identified as not dealing with DFV) impacted their views of BWCs. As Table 21.4 shows, police officers who reported having no dealings with DFV were least optimistic about the potential for BWCs to enhance transparency and accountability in DFV responses and to improve public perceptions of procedural fairness and the police. Comparatively, officers who identified encountering DFV as part of their generalist response were more optimistic about the potential for BWCs to positively impact public views of transparency and accountability, procedural fairness and the police image, while DFV specialists expressed the strongest positive sentiments about the ability of BWCs to fulfil these potentials. However, a one-way between-groups analysis of variance (ANOVA) conducted to explore the impact of specialisation on police perceptions of BWCs revealed no statistically significant differences between DFV specialists and non-specialists in any of the perception scores: improved public perceptions of the police (F(2, 397) = 0.89, p = 0.40); procedural fairness (F(2, 397) = 1.37, p = 0.26); and transparency and accountability (F(2, 397) = 1.86, p = 0.15). Therefore, while we cannot be certain that this difference is not an artifact unique to our study, it still provides insight into the possible effect that specialisation might have on police perceptions of BWCs in relation to the three examined domains, as outlined in Table 21.4.
Table 21.4  Police impressions of BWCs by specialisation

                                         Public perceptions   Procedural      Transparency and
                                         of the police        fairness        accountability
Specialisation                  N    %    µ      SD            µ      SD       µ      SD
1. Police who do not            171  39.7 4.1    1.6           4.2    1.6      4.8    1.3
   encounter DFV
2. Police who encounter DFV     219  50.9 4.2    1.5           4.4    1.5      4.9    1.4
   as part of their general
   policing response
3. DFV specialists               40   9.3 4.4    1.5           4.6    1.5      5.3    0.9
Discussion

To date, most published Australian evaluations have not assessed whether or not BWCs have actually strengthened public perceptions of police operations (see Gannoni et al.'s (2017) work on detainee perceptions as an exception), nor has there been examination of police BWC compliance with operational guidelines, policy or legislation, especially in DFV responses. Typically, reviews of BWCs have indicated that police officers are likely to be more proactive when wearing the technology (Wasserman, 2015). Former QPS Minister Jo-Ann Miller also commented that BWCs would give victim/survivors "extra reassurance that police will follow up their complaints" (QPS Media, 2015, n.p.). This warrants attention considering that some victim/survivors consulted in Harris' (2020) study expressed support for BWCs on the basis that they could enhance police accountability. Harris (2020) suggests that BWCs might ensure that both police actions (misconduct) and failures to act (such as declining to respond to callouts, or to pursue protection orders or breaches of orders) are captured. It follows, then, that if BWC deployment is frequent, it might prompt or ensure that police respond to DFV and comply with relevant protocols and legislation. Our research offered insights into how, and the rate at which, police use BWCs. Ultimately, we found that police officers do frequently engage BWCs in both daily operations and DFV incidents. This may provide victim/survivors with assurances, and bolster confidence in police and perceptions of procedural fairness. Police participants also reported BWC use in a variety of DFV settings, but most commonly in response to DFV callouts and investigations; for the purposes of interviewing and gathering evidence from victim/survivors, suspects and other witnesses; and to capture the DFV scene. Other situational contexts for BWC activation and de-activation were also cited by a smaller portion (less than 1%) of participants.
While these situational contexts comprised less than 1% of survey responses, they still provide further insight into whether and how BWCs are being used and the rationale underpinning their use. Notably, one of the reasons cited for BWC deactivation was that the primary aggressor was not deemed “threatening” to police officers. Prior research has shown that police are more likely to
M. Iliadis et al.
respond to DFV incidents with a greater degree of zeal in instances where the primary aggressor has used or threatened to use a weapon, strangulation and/or physical assault resulting in injury (see, for example, Robinson et al., 2018). Respondents suggested that BWCs could potentially capture police responses to non-physical abuse that might otherwise be “overlooked” and thus not documented in policing responses (see Douglas, 2019). This is important, as State inquiries have noted that, traditionally, police were less likely to address non-physical harms in response to DFV (see, for example, Neave and State of Victoria, 2016). Such harms may involve coercive and controlling behaviours and manipulative techniques used by the abuser to deny or downplay the severity of their actions and/or falsely accuse the victim/survivor of being the primary aggressor. Even so, police understandings and determinations of “threatening” behaviour require a comprehensive understanding of DFV, the dynamics and patterns of abuse and its manifestations, and the effects of DFV and presentation of neurological and psychological trauma (Harris, 2020).

Overwhelmingly, our findings indicate that police participants, like victim/survivors consulted in Harris’ (2020) study, believed that BWCs would enhance public perceptions of transparency, accountability and procedural fairness in DFV responses, and to a lesser extent, public views of the police. While police officers’ demographic characteristics did not significantly affect their impressions of BWCs, we observed a difference between specialist and generalist police views towards BWCs. Although the difference was not statistically significant, our findings show that specialist police were more optimistic than generalist police about the potential for BWCs to enhance public views of transparency, accountability, procedural fairness and the police image.
This may be attributed to the “differences in mission, responsibilities, and daily work tasks” between specialist and generalist police officers (Gaub et al., 2018, p. 137). Differences in perceptions among specialist and generalist police warrant attention because specialised officers can have “innovative ways they use BWCs to achieve their specific mission” (Gaub et al., 2018, p. 141), and therefore BWC adoption does not comprise a “one-size-fits-all” approach (Gaub et al., 2018, p. 149). While there is a lack of review
or examination of generalist versus specialist unit perceptions and use of BWCs, our research begins to address this knowledge deficit.
Conclusion

This chapter has contributed insights into the settings and frequency of BWC use among two police agencies in Australia—the QPS and WAPOL—and accounted for the influence of police demographics, employment history and experience on participants’ perceptions. Despite the relatively short history of BWC use among the QPS and WAPOL, both of our samples reported a high frequency of BWC deployment by generalist and specialist police. This suggests that BWCs have played, and will continue to play, a large role in policing practice. The contexts of BWC use will therefore remain important considerations, as will the jurisdictional differences guiding BWC deployment and DFV responses, including the training and culture inherent within local and institutional levels of policing among specialist and generalist police cohorts (Harris, 2020).

To ensure the promises of BWCs are realised, especially in relation to police accountability and transparency concerns in DFV responses, and to address the lack of guidance “from professional organisations or academic literature” as to how BWCs could or should be integrated into specialist units and generalist responses (Gaub et al., 2018, p. 137), further research is needed to monitor the ongoing impacts of BWC use in DFV-specific applications in the QPS and WAPOL, and in Australian jurisdictions more generally. Our future research agenda will address this knowledge deficit by examining the views and experiences of police, DFV stakeholders (including those working within government and non-government sectors) and those who have been overlooked in scholarship to date: victim/survivors. We maintain that a holistic investigation of BWCs is essential, including further examination of possible differences between generalist and specialist views and experiences, to review and realise the ongoing potential merits and risks of BWCs in DFV responses.
In light of the apparent frequency of BWC deployment among specialist and generalist police, and to maintain (and bolster)
public perceptions of transparency, accountability and procedural fairness in DFV responses, the ways and settings in which BWCs are used (or not used) are—and will continue to be—key considerations.

Acknowledgements The researchers wish to acknowledge Dr. Shannon Stuart, Dr. Sally Kennedy and Yu Qi for offering research assistance to this project.
References

Aggs, R. (2015, September 10). Victims of violence video statements ‘great tool to save lives’. Milton Ulladulla Times.
Axon. (2017). Fighting domestic violence with body-worn cameras: Queensland Police Service case study. Scottsdale: Axon.
Billings, P. (2018, January 10). Tassie police set for body-worn cameras. The Mercury.
Braga, A., Coldren, J. R., Sousa, W., Rodriguez, D., & Alper, O. (2017). The benefits of body-worn cameras: New findings from a randomized controlled trial at the Las Vegas Metropolitan Police Department. Washington, DC: CNA Corporation. https://www.cna.org/cna_files/pdf/IRM-2017-U016112-Final.pdf. Accessed 1 March 2021.
Cardozo, G. (2015, June 5). Video evidence to help victims. Express Advocate.
Clare, J., Henstock, D., McComb, C., Newland, R., & Barnes, G. C. (2019a). The results of a randomized controlled trial of police body-worn video in Australia. Journal of Experimental Criminology. https://doi.org/10.1007/s11292-019-09387-w.
Clare, J., Henstock, D., McComb, C., Newland, R., Barnes, G. C., Lee, M., & Taylor, E. (2019b). Police, public, and arrestee perceptions of body-worn video: A single jurisdictional multiple perspective analysis. Criminal Justice Review, 44(3), 304–321.
Cubitt, T., Lesic, R., & Myers, G. (2016). Body-worn video: A systematic review of literature. Australian & New Zealand Journal of Criminology, 50(3), 379–396.
Douglas, H. (2019). Policing domestic and family violence. International Journal for Crime, Justice and Social Democracy, 8(2), 31–49.
Gannoni, A., Willis, M., Taylor, E., & Lee, M. (2017). Surveillance technologies and crime control: Understanding police detainees’ perspectives on police body-worn video (BWV) and CCTV cameras. Canberra: Australian Institute of Criminology.
Gaub, J. E., Todak, N., & White, M. D. (2018). One size doesn’t fit all: The deployment of police body-worn cameras to specialty units. International Criminal Justice Review, 30(2), 136–155.
George, A., & Harris, B. (2014). Landscapes of violence: Women surviving family violence in regional and rural Victoria. Deakin University.
Goodall, M. (2007). Guidance for the police use of body-worn video devices. United Kingdom: Home Office.
Harris, B. (2018). Spacelessness, spatiality and intimate partner violence: Technology-facilitated abuse, stalking and justice. In K. Fitz-Gibbon, S. Walklate, J. McCulloch, & J. M. Maher (Eds.), Intimate partner violence, risk and security (pp. 52–70). Routledge.
Harris, B. (2020). Visualising violence? Capturing and critiquing body-worn video camera evidence. Current Issues in Criminal Justice, 32(4), 382–402. https://doi.org/10.1080/10345329.2020.1831730.
Harris, B., & Woodlock, D. (forthcoming). Spaceless violence: Women’s experiences of technology facilitated domestic violence in regional, rural and remote areas. Canberra: Australian Institute of Criminology.
Harvard Law Review. (2015). Considering police body cameras. Harvard Law Review, 128(6), 1794–1817.
Jennings, W. G., Fridell, L. A., & Lynch, M. D. (2014). Cops and cameras: Officer perceptions of the use of body-worn cameras in law enforcement. Journal of Criminal Justice, 42(6), 549–556. https://doi.org/10.1016/j.jcrimjus.2014.09.008.
McCulloch, J., Pfitzner, N., Maher, J. M., Fitz-Gibbon, K., & Segrave, M. (2020). Victoria police trial of digitally recorded evidence in chief—Family violence. Clayton: Monash Gender and Family Violence Prevention Centre, Monash University.
Morrow, J. W., Katz, M. C., & Choate, E. D. (2016). Assessing the impact of police body-worn cameras on arresting, prosecuting, and convicting suspects of intimate partner violence. Police Quarterly, 19(3), 303–325.
Murphy, S. T. (2015). Police body cameras in domestic and sexual assault investigations: Considerations and unanswered questions. United States of America: Battered Women’s Justice Project.
Neave, M., & State of Victoria. (2016). Royal Commission into Family Violence: Summary and recommendations. Melbourne: Victorian Government Printer.
Owens, C., Mann, D., & Mckenna, R. (2014). The Essex body worn video trial: The impact of body worn video on criminal justice outcomes of domestic abuse incidents. Essex, England: College of Policing.
Palmer, D. (2016). The mythical properties of police body-worn cameras: A solution in the search of a problem. Surveillance and Society, 14(1), 138–144.
Police Powers and Responsibilities Act 2000 (Qld).
Queensland Police Service [QPS]. (2021). Operational procedural manual. https://www.police.qld.gov.au/sites/default/files/2020-12/OPM%20-%20Chapter%2014%20-%20Operational%20Skills%20and%20Practices.pdf. Accessed 5 Feb 2021.
Queensland Police Service [QPS] Media. (2015). Body worn camera roll out for Gold Coast. Queensland Police Service: Queensland Government. https://mypolice.qld.gov.au/news/2015/11/11/body-worn-camera-roll-out-for-gold-coast/. Accessed 5 Feb 2021.
Robinson, A. L., Pinchevsky, G. M., & Guthrie, J. A. (2018). A small constellation: Risk factors informing police perceptions of domestic abuse. Policing and Society, 28(2), 199–205.
Segrave, M., Wilson, D., & Fitz-Gibbon, K. (2016). Policing intimate partner violence in Victoria (Australia): Examining police attitudes and the potential of specialisation. Australian & New Zealand Journal of Criminology, 51(1), 99–116.
Special Taskforce on Domestic and Family Violence in Queensland. (2014). Not now, not ever. Queensland: Special Taskforce on Domestic and Family Violence in Queensland.
Stoughton, S. W. (2018). Police body-worn cameras. North Carolina Law Review, 96(5), 1363–1424.
Taylor, E., & Lee, M. (2019). Points of view: Arrestees’ perspectives on police body-worn cameras and their perceived impact on police-citizen interactions. The British Journal of Criminology, 59(4), 958–978.
Taylor, E., Lee, M., Willis, M., & Gannoni, A. (2017). Police detainee perspectives on police body-worn cameras. Trends and Issues in Crime and Criminal Justice, 537, 1–14. https://aic.gov.au/publications/tandi/tandi537. Accessed 5 Feb 2021.
The Leadership Conference on Civil and Human Rights & Upturn. (2017). Police body-worn cameras: A policy scorecard. https://www.bwcscorecard.org/. Accessed 5 Feb 2021.
Wasserman, H. M. (2015). Moral panics and body cameras. Washington University Law Review, 92(3), 831–845.
Western Australian Police Force. (2020). Standard operating procedures for body worn cameras. Digital Policing Division: Western Australian Police Force.
White, M. (2014). Police officer body-worn cameras: Assessing the evidence. Office of Community Oriented Policing Services.
22 He Said, She Said, We Watched: Video Evidence in Sexual Assault Trials

Amanda Glasbeek
Introduction

In November 2019, Torontonians were transfixed by the sensational trial of the owner and manager of the College Street Bar, a popular city nightspot. The two men were charged with multiple counts of sexual assault, including gang sexual assault, as well as forcible confinement and administering a stupefying drug. All charges arose from events that occurred on one night in December 2016 when, between 7:30 p.m. and 6:30 a.m., they gave alcohol and cocaine to, and then engaged in a series of violent sexual acts with, a woman at the bar (R. v. MacMillan, 2019 ONSC 5769). Significantly, the woman’s night-long ordeal, most of which she was unable to remember, was recorded on the bar’s eight surveillance cameras.

A. Glasbeek
Department of Social Science, York University, Toronto, ON, Canada
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_22
The video documentation is remarkable. More typically, CCTV footage captures events related to a sexual assault, but not the assault itself. Even so, the nine hours of video from inside the bar were not self-explanatory. While the complainant alleged that the videos documented her sexual assault, the men maintained that what the jurors saw was consensual kink play, which the woman later regretted. As one local news outlet described: “For weeks, jurors in a Toronto courtroom have been pouring over grainy security camera video in a contentious sexual assault trial – but with the jury about to start deliberations, just what that video proves is still very much in question” (Seputis, 2019). After five days of deliberations, the jury found both men guilty of gang sexual assault and administering a stupefying substance. In his sentencing decision, the trial judge acknowledged that “without the videos, the Crown could not have proved its case” (para. 9).

In many ways, this case seems to confirm that CCTV can be a technological aid to contested accounts of sexual assault. In 2016, in what was considered a landmark case, another Ontario provincial court found Moazzem Tariq guilty of sexual assault after reviewing CCTV footage from a bar and a hotel elevator that demonstrated to the trial judge the complainant was so severely intoxicated that she was unable to consent to sexual activity (Craig, 2020; Dodge, 2018; R. v. Tariq, 2016 ONCJ 614). At the time, Tariq was widely heralded as an “encouraging” (Bellehumeur, 2016) development that would ensure women who had been sexually assaulted while severely intoxicated had found a “powerful ally” (Hasham, 2016) in the “black-and-white proof” (Fraser, 2016) that CCTV could provide.

Yet, there are a number of elements to these cases that unsettle such optimistic conclusions. Even though MacMillan turned on direct video documentation of an assault in progress, the surveillance footage was not seen by the jury as self-evident.
Ultimately, the jurors sidestepped the contested issue of consent, returning a guilty verdict for knowingly administering a stupefying substance in order to render the complainant incapable of consenting to what then amounted to gang sexual assault. They also deadlocked on whether one of the men who had brutalized her for eleven hours at the bar had sexually assaulted her in the privacy of his apartment, for which no corroborating evidence was produced, and they
acquitted him of an additional sexual assault charge (of digitally penetrating her vagina), because it was not clearly illustrated on the video. Importantly, this case is exceptional. More commonly, as in Tariq, the trier of fact does not have a video that “sees” the sexual assault at all, and CCTV footage acts as corroborating evidence, mobilized in the “he said/she said” context of many such trials in order to verify or dismiss the complainant’s testimony. Because a video “has no story of its own to tell” (Morrison, 2017, p. 824) and requires an “accompanying narrative” (Lippert & Wilkinson, 2010, p. 139) to make it functional in criminal justice processes, the interpretative processes through which CCTV acts as a witness to sexual violence deserve closer scrutiny.

The analysis that follows is drawn from 58 reported sexual assault cases from Ontario, Canada,1 heard between 2010 and 2020, in which CCTV evidence played some role in the trial. Using the Canadian legal search index CanLII, I collected the reports from 309 cases that matched “sexual assault” with “video surveillance” (134 cases) and “video evidence” (175 cases), drawn from an initial 10,019 “sexual assault” cases in the Ontario database over the time period.2 I then removed those cases that: were not criminal trials (e.g. were heard in a different tribunal, such as the Human Rights Tribunal, Labour Arbitrations, etc.); were not sexual assault cases (e.g. distributing child pornography); did not refer to CCTV evidence (e.g. smartphone cameras or taped police statements); and referred only to pre-trial motions, or were sentencing reports that did not explain the evidentiary value of video at trial. Nearly half (27) of the 58 distinct sexual assault criminal trials produced through this method were heard in, or after, 2018, which I take to indicate a growing comfort and capacity within the courts to gather and put to use a wide array of digital evidence, including CCTV (Dodge et al., 2019).
While my sample is not a complete one, the collected cases demonstrate notable patterns. First, in the vast majority (44, or 75%) of cases, the complainant and the defendant knew one another, so that the case was one of consent to sexual activity, rather than identification of a perpetrator.3 A significant number of these consent cases (17, or 44%) turned not on the broader issue of consent but on the more specific question of capacity to consent because the complainant had no or little memory of the assault. Overall, 55% (32) of the cases resulted in conviction, a remarkably high rate compared to the abysmally low Canadian conviction rate of 12% (Rotenberg, 2017). At first glance, then, it appears that CCTV is beneficial to women in making their stories of sexual violence more credible in the courtroom. If so, this would be a welcome innovation. Women’s lack of credibility as witnesses to their own assaults is a well-understood phenomenon that significantly inhibits them from reporting their experiences to criminal justice agents (Johnson, 2017; Mazza, 2014). Elsewhere, I have shown that many women actively welcome CCTV—or public space video surveillance4—as a safety mechanism precisely because they perceive it as a tool that can corroborate what they know to be their often not-believed experiences of sexual violence (Glasbeek & van der Meulen, 2014).5 This chapter, therefore, asks the deceptively simple question: are women right to believe that public surveillance cameras can support their stories of sexual violence and thereby restore credibility to survivors and legitimacy to the “inhospitable” criminal justice system (Craig, 2016)? The findings here suggest, perhaps predictably, that the answer is not as encouraging as we might hope.

1. Ontario is Canada’s most populous province.
2. The total set of matched cases included a large number of duplicates. I also eliminated the search term “CCTV” because courts were most likely to use this to refer to video testimony and police statements, rather than public surveillance video.
3. This is an unsurprising finding given that 52% of all women who report sexual violence to the police are intimately acquainted with their assailant (Government of Canada, Department of Justice, 2019).
4. “CCTV” stands for Closed Circuit Television, an analog term that has become anachronistic in our digital world. Nonetheless, I use the term “CCTV” as a recognisable shorthand for what may also be called “open street” or public surveillance cameras. For discussion of technological change, see Ferenbok and Clement (2012).
5. Sexual assault has one of the lowest “founding rates” of any Canadian crime category (Doolittle, 2017; Rotenberg, 2017).
CCTV as Silent Witness

As part of a larger forensic toolkit, CCTV shares much in common with other “technoscientific witnesses” (Quinlan, 2017) now commonly used in courtrooms. A variety of technological tools, such as digital photography (Biber, 2009; Dodge, 2016; Moore & Singh, 2018), new digital media (Bluett-Boyd et al., 2013; Dodge, 2018; Powell et al., 2018), sexual assault evidence kits (Quinlan, 2017), DNA profiles (Lynch et al., 2010) and body-worn camera footage (Fan, 2017; Morrison, 2017; see also Chapter 21, this volume), have all entered the courtroom on the assumption that they “possess an inner logic, an autonomous framework of validation and control, that operates irrespective of the law [and] are a manifestation of our society’s deep commitment to a rational, and hence reliably objective, policy process” (Jasanoff, 1995, p. 6). And yet, as critical scholars of these various sciences and media note, the “proof” that these technological interventions offer is neither rational nor objective. The “naïve realism” (Morrison, 2017, p. 796) that tends to characterise courtroom adoptions of these technologies is contradicted by their deeply rooted social qualities, and by the ways they both absorb and give rise to subjective, and often highly emotive, interpretive frameworks that undermine their claims to impartially assist in the truth-finding process. In this sense, CCTV is but one example of a larger array of tools that appear to (but do not really) lend “mechanical objectivity” to courtroom decision-making processes (Brucato, 2015, p. 459; see also, Edmond & San Roque, 2013; Newell, 2020).

At the same time, CCTV has several characteristics that make it an importantly distinct site of exploration, especially for feminist researchers of sexual violence (Glasbeek, 2016; Koskela, 2002, 2012). Unlike many new digital media forms, CCTV is a one-way and, often, top-down visual technology, not subject to user-created content.
This feature allows it to be seen as “unmediated” video (Brucato, 2015, p. 457) and therefore less susceptible to the potential for manipulation than other forms of digital media (Lippert & Wilkinson, 2010). Even more importantly, CCTV is a public space technology. Here, public space refers not just to open streets, but also to “mass private property” or “property that is privately owned but which is nevertheless publicly used [and which]
members of the general public are typically and routinely invited, even encouraged, to frequent” (Law Commission of Canada, 2006, p. 38). In this expanded understanding of public space, CCTV has proliferated as a ubiquitous, to the point of “banal” crime control technology (Goold et al., 2013), not only in fully public sites such as urban streets and public parks, but also in private retail spaces (malls, convenience stores, bars, etc.), and private residences (hotel or apartment building lobbies, elevators, hallways, etc.). Significantly, this means that, although CCTV “sees” us a lot, it does so only in our public-facing encounters.

The public space function of CCTV is intentional. The “legitimating rhetoric” (Lett et al., 2012) of CCTV is that areas monitored by video cameras are safer spaces based on two “seductive” promises: (1) criminal activity will be deterred by virtue of being seen; and (2) camera surveillance can act as a “perfect witness” should a crime occur (Norris, 2012).6 This second promise has travelled unproblematically into the Canadian courtroom, enabled by the Supreme Court of Canada’s 1996 decision in R. v. Nikolovski (3 SCR 1157). In this convenience store robbery case, the sole (human) witness, the clerk who had been robbed at knifepoint, was unable to identify the accused from the police photographic lineup, the CCTV recording of the crime or in the courtroom. At trial, therefore, Nikolovski did not mount a defence, on the argument that the Crown had failed to produce evidence that he was the assailant. The trial judge, however, declared that she was able to identify Nikolovski from the CCTV footage, and convicted him, a conviction upheld by the Supreme Court of Canada. Writing for the majority, Cory, J. explained:

Not every witness can have the fictional James Bond’s cool and unflinching ability to act and observe in the face of flying bullets and flashing knives. … The video camera on the other hand is never subject to stress. Through tumultuous events, it continues to record accurately and dispassionately all that comes before it. Although silent, it remains an unbiased witness with instant and total recall of all that it observes. … [S]o long as the videotape is of good quality and gives a clear picture of events and the perpetrator, it may provide the best evidence of the identity of the perpetrator. (paras. 20–22; emphasis added)

6. Critical scholars of CCTV note that the technology fails on both of these promises, but has remained a popular response to public concerns for “urban safety” nonetheless, mostly because of its success in transforming urban spaces from risky places of heterogeneity to predictable and homogeneous spaces of consumption. See, for example, Norris (2012), Coleman (2005), Hier (2004), Fyfe and Bannister (1996).
Unlike its forensic cousins (toxicology tests, sexual assault evidence kits, DNA profiling), with which it is often paired in the courtroom, but all of which require expert analysis that can be tested by the court, CCTV evidence can be entered into criminal proceedings as always accessible and, therefore, as self-evidently probative. CCTV footage as evidence thus carries with it a peculiar combination of presumed qualities that has made it a potent factor in criminal trials: it is a “silent witness” that offers a “bird’s eye view” (Fraser, 2016) of a wide array of spaces where the public interact, making it capable of aiding in the fact-finding process in a way that appears commonsensical.

But these qualities do not satisfactorily make the jump from armed robbery to sexual assault, for at least two reasons. First, most sexual assault cases are tried on the question of consent, not identification. Second, most sexual assaults do not occur in the kinds of public spaces that are monitored by CCTV cameras. These two elements mean that, unlike in the precedent-setting case of Nikolovski, surveillance video images in sexual assault trials tend to be entered as circumstantial, rather than direct, evidence of the crime. In this transformation, the very qualities that make public surveillance video appear uncontroversial become contestable issues at trial. The act of accepting CCTV evidence as a “silent witness” while simultaneously permitting the triers of fact—usually a judge sitting alone—to engage in their own acts of interpreting the meaning of a video transforms video footage from certain and seemingly objective evidence into something that is ambiguous and subjective. This “function creep” (Wilkinson & Lippert, 2012, p. 315) is rarely recognised in judicial reasons for decision-making.
Combined with the fact that CCTV images tend to capture only corroborating evidence, rather than the assault itself, these particular qualities of CCTV evidence allow for a wide range of discretionary factors to shape the testimony of these mute public space cameras, transforming video surveillance evidence into a part of the contestation of sexual assault, rather than its witness.
“It Is Not a Smoking Gun”: CCTV as Witness to Sexual Assault

As Tariq and MacMillan demonstrate, the most hopeful expectation for CCTV as a silent witness to sexual violence rests in its ability to recall events that the complainant may not remember, making it a most likely ally in prosecutions based on a complainant’s capacity to consent. There are good reasons to look for tools to assist in such cases. The Canadian legal framework on “incapacity to consent” has been roundly critiqued by feminist scholars, who note that, short of clear evidence of unconsciousness, there is an absence of definitive guidelines on how to determine “incapacity” to consent to sexual touching in either the Criminal Code or case law (Benedet, 2010; Craig, 2020; Richards, 2020). Instead, as Elaine Craig (2020, p. 105) argues, criminal courts reinforce “common stereotypes about women who consume alcohol includ[ing] the beliefs that they are: responsible for any consequences they suffer; sexually promiscuous or indiscriminate in their sexual choices; and more likely to lie about rape” (see also Burgin & Flynn, 2019). As a result, an intoxicated woman faces an evidentiary “paradox”, in which the memory loss due to intoxication becomes, not the marker of incapacity, but the fundamental flaw in her testimony: “Too drunk to be believed, not drunk enough to lack capacity” (Craig, 2020, p. 85). In this legislative vacuum, judges often search for “some form of evidentiary proxy” to give them proof that intoxication sufficiently impaired a complainant’s cognitive capacity to make informed choices about sexual activity (Craig, 2020, p. 90). Enter CCTV.

Several cases from my sample illustrate how CCTV can act as the “evidentiary proxy” sought by the courts. For example, in R. v.
Casilimas (2012 ONCJ 845), in which the complainant was so drunk that she passed out in the bushes in front of an apartment building and the crime was discovered by a passerby who saw the accused on top of her with his pants down, video surveillance from the exterior of the apartment building confirmed that she was unable to walk steadily prior to losing consciousness. Similarly, in R. v. D.A. (2018 ONCJ 307), CCTV footage retrieved from the public areas of an apartment complex revealed to the court the complainant’s “staggered gait” as she walked
alongside the accused, thereby corroborating her claims that the accused had sex with her while she was—and he knew her to be—incapacitated. In R. v. Carpov (2018 ONSC 6650), a woman who came to the accused’s rooming house to purchase a kitten, and who was then rendered insensible by what she suspected was a “date rape drug” in her drink and subsequently sexually assaulted, was successful at trial because CCTV cameras set up inside the common areas of the rooming house offered “direct evidence that the victim was unconscious when the sexual touching began” (para. 170).

But conviction is not a certain outcome, and nearly half (8) of the capacity to consent cases studied here resulted in acquittal. In these cases, video evidence often failed to convince judges that the complainant demonstrated sufficient levels of intoxication as to be incapable of informed consent. Thus, in R. v. Whitteker (2019 ONCJ 180), the complainant woke up the morning after a night of heavy drinking to find herself naked in bed with a man she had met in the bar; she immediately called the police, and the accused’s DNA was found on her body, but she had no specific recall of the evening. She was found neither to have been sexually assaulted, nor to have been incapable of giving consent to sexual activity. Instead, the judge deferred to the fact that, unlike in Tariq, there was “no video evidence to attest to her physical symptoms” of intoxication (para. 87), thereby allowing for alternative inferences to be drawn from her account, including that she had consented to sex only to regret it later. Similarly, in R. v. Robertson (2016 ONCJ 333), in acquitting an accused of sexually assaulting a woman with whom he had been drinking excessively at a bar, the judge noted that “[i]t is significant that the extensive video surveillance [from the bar] that I have seen … contains no obvious display of intoxication by either of them”.
Although the judge agreed that “it was certainly possible, perhaps even probable” that non-consensual sex occurred, the absence of independent proof of incapacity from the video footage meant that he was not satisfied beyond a reasonable doubt and was, therefore, obligated to acquit. These cases raise the troubling concern that rather than providing support to women’s experiences, the mere possibility of CCTV as a “silent witness” may raise the bar on the already “onerous evidentiary
A. Glasbeek
burden” (Craig, 2020, p. 85) carried by women with little recall of their assaults. More generally, using the “informational benefits” (Morrison, 2017, p. 795) of video evidence to draw a “direct” line to sexual assault can prove elusive. Indeed, it would be more accurate to say that the widespread network of CCTV across a vast number of public and semipublic spaces offers indirect, albeit sometimes crucial, verification of the events in question. This was particularly clear in R. v. Niyongabo (2020 ONSC 308), where an extensive urban array of public space cameras silently monitored the complainant and defendant over an entire night, from the bar where they met, to their walking route back to his condominium building, to the parking garage, elevators and lobby of that building. There was a two-hour absence of video that was taken to account for the time inside his unit where the assault took place, and then footage captured the complainant and defendant back in the streets, garage and elevators again. Given that the complainant had little memory of the evening, this diverse set of public camera views was allowed to “witness” on her behalf, testifying to her levels of intoxication (staggered walk, falling down, weaving into the road) and supporting what few memories she was able to recall for the court. Given her unreliability as a witness, the judge declared the video footage to be the “most compelling” piece of evidence that led to her decision to convict (para. 140). In R. v. Thakoordeen (2019 ONSC 25), public surveillance video of encounters between Thakoordeen and the woman he sexually assaulted (but typically, not of the assault itself) was also sufficient to produce a conviction. As the judge noted, “though corroboration is not legally required in this case, there was nonetheless extensive support for many important aspects of [the complainant’s] case.
The most significant piece of supportive evidence is that the chronology of the event was recorded on the surveillance videos” (para. 36). If, however, women’s accounts of their movements were not verifiable in the chronologies provided by video footage, their entire credibility as witnesses to their own experiences of sexual violence was undermined. Thus, in R. v. Byford (2016 ONSC 797), the accused was acquitted of sexual assault because, although the complainant testified that he followed her from their church to an alleyway where he assaulted her, the
surveillance footage from outside a nearby big box retailer showed that he was, in fact, ahead of her, and that she walked toward him. This inconsistency rendered her entire testimony incredible. In R. v. Wood (2013 ONSC 1309), “an extensive surveillance camera network” (para. 4) from the multi-unit building in which the alleged assault took place undermined the testimony of both the accused and the complainant. While the accused’s testimony about his comings and goings from his unit was “clearly contrary to the video evidence” (para. 49), the judge also found it difficult to reconcile the complainant’s story with the camera footage from the building’s common hallway:

In my view, there are some issues with [the complainant’s] testimony regarding the events. … She is shown going into the apartment at around 1:17 a.m. She is then shown in the hallway about 2 hours later. However, listening to her testimony, in chief, one got the impression that once she and Mr. Wood walked into the apartment, the sequence leading to her being assaulted began immediately. Clearly, that could not have occurred immediately given the passage of time. (para. 63)
Relying more heavily on the video’s testimony than the complainant’s, the judge acquitted Mr. Wood. As these latter examples demonstrate, the seemingly uncontroversial use of time- and date-stamped videos of people’s movements through public space can slip seamlessly into more subjective readings of videos. Drawing on the leeway granted by Nikolovski to triers of fact to treat videos as an “accurate and dispassionate” record of “all that comes before it”, judges often watched the videos, including through repeated views, slow motion, isolating stills in the courtroom and multiple reviews in chambers, to try to discern the meaning and weight that should be accorded to the footage. Through this process, courts searched videos for clues to demeanour and, more specifically, to see if complainants looked like somebody who had been assaulted. For example, in R. v. DiMichele (2017 ONSC 2250), in arriving at a conviction, the judge found it significant that a hotel employee who alleged sexual assault by her employer was seen on the hotel cameras running out of a room naked and crying, all of which sufficiently “established” that she was
“upset to the point of being hysterical” (para. 4). Similarly, video footage from an apartment hallway that showed the complainant to be “distraught”, as evidenced by images of her barefoot and running, offered “objective evidence” of her state of mind (R. v. Vlaski, 2019 ONCA 927, para. 40). In another case, a judge convicted the accused of sexual assault in large part because his story that the sexual encounter was consensual was contradicted by apartment building footage from the elevator that demonstrated to the judge that, prior to the sexual activity, the complainant “exhibited no apparent sexual or flirtatious behaviour toward the accused” (R. v. Mugabo, 2019 ONSC 4308, para. 75). In contrast, in R. v. Bah (2016 ONCJ 495), the accused was acquitted of touching the complainant’s bottom without consent in part because on reviewing the video of the encounter, the judge noticed that “about 24 seconds after the touching, the complainant’s face is clearly visible. She does not appear upset”. There are no clues in the ruling as to what kinds of facial expressions might have indicated the appropriate level of distress. The interpretive role of videos was especially relevant in those few cases where surveillance footage captured the alleged assault in progress. For example, in R. v. Walters (2020 ONCJ 618), a cleaning woman at a homeless men’s shelter alleged she was assaulted by one of the residents as she was mopping a hallway monitored by CCTV. At trial, the Crown relied almost entirely on the surveillance footage as evidence, claiming that “it shows that Mr. Walters grabbed her by the waist and made a thrusting motion toward her with his hips” (para. 58). The video sequence was played repeatedly in the courtroom, dissected by lawyers for both parties as either confirmation of the assault or, as argued by the defence, showing that the accused accidentally “bumped into her” while intoxicated. 
After multiple replays, including in slow motion and again in chambers, the judge concluded that “the surveillance video does not necessarily confirm her evidence. It depicts physical contact between the two, but does not show definitively that Mr. Walters sexually assaulted the complainant. I find the video is open to interpretation, including Mr. Walters’ interpretation” (para. 59, emphasis added). In acquitting the accused, the judge remarked of the video, “It is not a smoking gun
that would take the Crown directly to proof beyond a reasonable doubt” (para. 78). On an evening in May 2012, a woman living at a downtown women’s shelter, who was later described as “developmentally delayed” with a history of alcohol abuse, itinerancy, and “police interactions”, met two young, affluent men outside the shelter. She went with them for a drink, and was driven to a park on the outskirts of town where, she claimed, she was violently sexually assaulted and then abandoned, without money or a phone, to make her way home (R. v. A.G. and E.K , 2015 ONSC 181). At issue was not whether the accused had had sex with her, but whether the sex had been consensual. As part of the evidence, the Crown introduced two surveillance videos from the shelter’s exterior cameras. The videos, however, failed to corroborate some aspects of the complainant’s story. While she said she was on the porch of the shelter smoking cigarettes and people-watching when the two men walked by, one of the shelter videos, “contrary to her evidence”, showed her emerging onto the porch later, neither smoking nor people-watching (para. 21). A second video, from another angle that captured the parties after they had met, showed one of the accused with his arm around her neck, which the complainant said she had not wanted, “but the footage shows no evident effort to ask him to stop as she walked with them down the street” (para. 22). Nonetheless, the Crown asked the judge to take a very close look at the videos which she suggested had a “disturbing” quality about them, “because … they reveal a ‘targeted’ pickup of the complainant by these two accused. … [I]f we did not know otherwise, the video surveillance would appear to suggest two men picking up a prostitute” (para. 
73).7 The judge agreed that the videos were “disturbing” but, viewing them more narrowly than the Crown had suggested, determined that they did not verify the complainant’s claim that the men called to her, rather than the defendants’ claim that she called out to them. Although the judge did find that the accused knew the complainant was “someone of a diminished developmental state compared to them” and “took advantage of her vulnerability” to “see what might arise out of a brief dalliance with [her]” (para. 13), the two men were acquitted because the complainant failed the credibility test, an assessment verified by the incompatibility between her own testimony and the video evidence. With specific respect to the video evidence, the judge noted that “even the Crown’s claim of a predatory aspect to the behaviour of the two accused does not necessarily equate to an absence of consent relative to subsequent sexual interactions between them” (para. 74). This case stands in stark contradiction to the hopeful messages drawn from Tariq and MacMillan. In this case, the video acted in multiple capacities simultaneously: as a chronology (when was she on the porch?), as a barometer of her credibility (if she was wrong about smoking and people-watching, what else might she be wrong about?), as an indicator of demeanour (did she initiate the contact? Did she do enough to resist a hand on her neck?) and as interpretive evidence (did it show sufficient vulnerability to predatory behaviour to vitiate consent?). Notably, like other cases documented here, the CCTV images slid easily between these different functions, revealing the ways that surveillance footage is a multisemic and polyvalent evidentiary tool.

7 The elements of this story bear a striking resemblance to the story of Pamela George, an Indigenous woman working as a sex-worker in Regina, Saskatchewan, when she was picked up by two white affluent men, driven outside the city boundaries, forced to perform oral sex, and then beaten and left to die (Razack, 2002).
Conclusion

Although, or perhaps because, CCTV entered Canadian courtrooms as “the best evidence” for identification due to its unflinching objectivity, courts have been granted license to engage directly with this visual technology in a range of criminal cases, even as they maintain the fiction that CCTV is a neutral observer of fact. As the examples here illustrate, when the task of identifying an accused shifts to determining consent to sexual activity, CCTV becomes a “double-edged sword” (Dodge, 2018) that can just as easily be mobilised to discredit women’s accounts of violence, as it can assist them in the truth-finding process. And, because it is a public space technology, CCTV footage rarely “sees” sexual assault itself. Instead, it captures public movements, encounters, behaviours and demeanours, all of which are offered up to the courts as if these can act as “evidentiary proxies” for consent, but which, by definition, make
its images subject to the contested he said/she said strategies that characterise such trials. Perhaps most perversely, although a silent witness, video surveillance is allowed to narrate stories in sexual assault trials that often reinforce the very “common stereotypes” that have made independent corroboration of women’s credibility necessary. The black and white proof ostensibly offered by CCTV, therefore, may be less a welcome innovation signalling change, as optimistically imagined after Tariq, than yet another indicator of the failure of courts to take sexually assaulted women’s credibility seriously.

Acknowledgements I thank Mandi Gray, Emily Lockhart and Katrin Roots for their invaluable research assistance during various phases of preparing this paper. I also extend my deep appreciation to the gracious and skilled editors of this collection.
References

Bellehumeur, K. (2016, October 30). A new role for video evidence—Proving no capacity to consent. Bellehumeur Law. https://bellehumeurlaw.com/new-role-video-evidence/. Last accessed 5 Mar 2021.
Benedet, J. (2010). The sexual assault of intoxicated women. Canadian Journal of Women and the Law, 22(2), 435–461.
Biber, K. (2009). Visual jurisprudence: The dangers of photographic identification evidence. Criminal Justice Matters, 78(1), 35–37.
Bluett-Boyd, N., Fileborn, B., Quadara, A., & Moore, S. (2013). The role of emerging communication technologies in experiences of sexual violence: A new legal frontier? Report, Australian Institute of Family Studies, Melbourne.
Brucato, B. (2015). Policing made visible: Mobile technologies and the importance of point of view. Surveillance & Society, 13(3/4), 455–473.
Burgin, R., & Flynn, A. (2019). Women’s behavior as implied consent: Male “reasonableness” in Australian rape law. Criminology & Criminal Justice, online first. https://doi.org/10.1177/1748895819880953.
Coleman, R. (2005). Surveillance in the city: Primary definition and urban spatial order. Crime, Media, Culture, 1(2), 131–148.
Craig, E. (2016). The inhospitable court. University of Toronto Law Journal, 66(2), 197–243.
Craig, E. (2020). Sexual assault and intoxication: Defining (in)capacity to consent. Canadian Bar Review, 98(1), 70–108.
Dodge, A. (2016). Digitizing rape culture: Online sexual violence and the power of the digital photograph. Crime, Media, Culture, 12(1), 65–82.
Dodge, A. (2018). The digital witness: The role of digital evidence in criminal justice responses to sexual violence. Feminist Theory, 19(3), 303–321.
Dodge, A., Spencer, D., Ricciardelli, R., & Balluci, D. (2019). ‘This isn’t your father’s police force’: Digital evidence in sexual assault investigations. Australian and New Zealand Journal of Criminology, 52(4), 499–515.
Doolittle, R. (2017, February 3). Unfounded: Why police dismiss 1 in 5 sexual assaults as baseless. Globe & Mail. https://www.theglobeandmail.com/news/investigations/unfounded-sexual-assault-canada-main/article33891309/. Last accessed 5 Mar 2021.
Edmond, G., & San Roque, M. (2013). Justicia’s gaze: Surveillance, evidence and the criminal trial. Surveillance & Society, 11(3), 252–271.
Fan, M. D. (2017). Justice visualized: Courts and the body camera revolution. UC Davis Law Review, 50, 897–959.
Ferenbok, J., & Clement, A. (2012). Hidden changes: From CCTV to ‘smart’ video surveillance. In A. Doyle, R. Lippert, & D. Lyon (Eds.), Eyes everywhere: The global growth of camera surveillance (pp. 309–332). Routledge.
Fraser, L. (2016, October 6). How video like this can be ‘very powerful’ evidence in sexual assault trials. CBC News. https://www.cbc.ca/news/canada/toronto/sex-assault-video-1.3794859. Last accessed 5 Mar 2021.
Fyfe, N. R., & Bannister, J. (1996). City watching: Closed circuit television surveillance in public spaces. Area, 37–46.
Glasbeek, A. (2016). ‘They catch you doing the simple human things’: CCTV, privacy, and gendered exposure. Journal of Law and Equality, 12 (Special Issue on Gender), 63–88.
Glasbeek, A., & van der Meulen, E. (2014). The paradox of visibility: Women, CCTV, and crime. In G. Balfour & E. Comack (Eds.), Criminalizing women: Gender and (in)justice in neo-liberal times (pp. 219–235). Fernwood Publishing.
Goold, B., Loader, I., & Thumala, A. (2013). The banality of security: The curious case of surveillance cameras. British Journal of Criminology, 53(6), 977–996.
Government of Canada, Department of Justice. (2019). Just facts: Sexual assault. https://www.justice.gc.ca/eng/rp-pr/jr/jf-pf/2019/apr01.html.
Hasham, A. (2016, October 7). No capacity to consent, judge rules in sex assault case. Toronto Star. https://www.thestar.com/news/crime/2016/10/07/no-capacity-to-consent-judge-rules-in-sex-assault-case.html. Last accessed 5 Mar 2021.
Hier, S. P. (2004). Risky spaces and dangerous faces: Urban surveillance, social disorder and CCTV. Social & Legal Studies, 13(4), 541–554.
Iliadis, M., Vakhitova, Z., Harris, B., Tyson, D., & Flynn, A. (2021). Police body-worn cameras in response to domestic and family violence: A study of police perceptions and experiences. In A. Powell, A. Flynn, & L. Sugiura (Eds.), The Palgrave handbook of gendered violence and technology (pp. 417–440). Palgrave Macmillan.
Jasanoff, S. (1995). Science at the bar: Law, science, and technology in America. Harvard University Press.
Johnson, H. (2017). Why doesn’t she just report it? Apprehensions and contradictions for women who report sexual violence to the police. Canadian Journal of Women and the Law, 29(1), 36–59.
Koskela, H. (2002). Video surveillance, gender, and the safety of public urban space: ‘Peeping Tom’ goes high tech? Urban Geography, 23(3), 257–278.
Koskela, H. (2012). ‘You shouldn’t wear that body’: The problematic of surveillance and gender. In K. Ball, K. Haggerty, & D. Lyon (Eds.), Routledge handbook of surveillance studies (pp. 49–56). Routledge.
Law Commission of Canada. (2006). In search of security: The future of policing in Canada. Canadian Government Publishing.
Lett, D., Hier, S., & Walby, K. (2012). Policy legitimacy, rhetorical politics, and the evaluation of city-street video surveillance monitoring programs in Canada. Canadian Review of Sociology, 49(4), 328–349.
Lippert, R., & Wilkinson, B. (2010). Capturing crime, criminals, and the public imagination: Assembling crime stoppers and CCTV surveillance. Crime, Media, Culture, 6(2), 131–152.
Lynch, M., Cole, S. A., McNally, R., & Jordan, K. (2010). Truth machine: The contentious history of DNA fingerprinting. University of Chicago Press.
Mazza, E. (2014, October 31). #BeenRapedNeverReported trending on Twitter as women share stories of sexual violence. HuffPost. https://www.huffpost.com/entry/beenrapedneverreported_n_6080054. Last accessed 5 Mar 2021.
Moore, D., & Singh, R. (2018). Seeing crime, feeling crime: Visual evidence, emotions, and the prosecution of domestic violence. Theoretical Criminology, 22(1), 116–132.
Morrison, C. M. (2017). Body camera obscura: The semiotics of police video. American Criminal Law Review, 54(3), 791–842.
Newell, B. (2020). Introduction: Surveillance as evidence. Surveillance & Society, 18(3), 400–402.
Norris, C. (2012). There’s no success like failure and failure’s no success at all: Some critical reflections on the global growth of CCTV surveillance. In A. Doyle, R. Lippert, & D. Lyon (Eds.), Eyes everywhere: The global growth of camera surveillance (pp. 23–45). Routledge.
Powell, A., Stratton, G., & Cameron, R. (2018). Digital criminology: Crime and justice in digital society. Routledge.
Quinlan, A. (2017). The technoscientific witness of rape: Contentious histories of law, feminism, and forensic science. University of Toronto Press.
Razack, S. (2002). Gendered violence and spatialized justice: The murder of Pamela George. In S. Razack (Ed.), Race, space and the law: Unmapping a white settler society (pp. 121–156). Between the Lines.
Richards, C. (2020). Intoxication, a drunk science: Expertise in cases of sexual assault regarding capacity to consent. Canadian Journal of Law and Justice, 2(1), 137–175.
Rotenberg, C. (2017). From arrest to conviction: Court outcomes of police-reported sexual assaults in Canada, 2009–2014. Statistics Canada. https://www150.statcan.gc.ca/n1/pub/85-002-x/2017001/article/54870-eng.pdf. Last accessed 5 Mar 2021.
Seputis, J. (2019, November 26). Video key as jurors about to begin deliberations at gang sex assault trial. CBC News. https://www.cbc.ca/news/canada/toronto/sex-assault-trial-1.5363652. Last accessed 22 Feb 2021.
Wilkinson, B., & Lippert, R. (2012). Moving images through an assemblage: Police, visual information, and resistance. Critical Criminology, 20(3), 311–325.
Cases Cited

R. v. A.G. and E.K 2015 ONSC 181.
R. v. Bah 2016 ONCJ 495.
R. v. Byford 2016 ONSC 797.
R. v. Carpov 2018 ONSC 6650.
R. v. Casilimas 2012 ONCJ 845.
R. v. D.A. 2018 ONCJ 307.
R. v. DiMichele 2017 ONSC 2250.
R. v. MacMillan 2019 ONSC 5769.
R. v. Mugabo 2019 ONSC 4308.
R. v. Nikolovski 1996 3 SCR 1157.
R. v. Niyongabo 2020 ONSC 308.
R. v. Robertson 2016 ONCJ 333.
R. v. Tariq 2016 ONCJ 614.
R. v. Thakoordeen 2019 ONSC 25.
R. v. Vlaski 2019 ONCA 927.
R. v. Walters 2020 ONCJ 618.
R. v. Whitteker 2019 ONCJ 180.
R. v. Wood 2013 ONSC 1309.
23 The Promises and Perils of Anti-rape Technologies

Lesley McMillan and Deborah White
Introduction

This chapter focuses on the striking array of technologies that has emerged over the last decade claiming the capacity to prevent or mitigate the risk of sexual violence. These so-called ‘anti-rape technologies’ include apps that harness the communication functions of smartphones and mobile devices, as well as a variety of ‘wearables’ designed to prevent an assault. This proliferation of technological solutions, aimed primarily at women, is in part a result of the widespread availability of digital communications, which provide not only a platform for the functioning

L. McMillan (B) Glasgow Caledonian University, Glasgow, Scotland. e-mail: [email protected]
D. White, Department of Sociology, Trent University, Peterborough, ON, Canada. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_23
of some of these tools, but also a means to announce, promote and market them. We offer a considered critique of these devices by first drawing upon our published research (White & McMillan, 2020) as a ‘case study’, from which we outline these technologies and delineate potential unintended consequences, misunderstandings and misrepresentations often present in their marketing, and their capacity for misuse and violence perpetration. We then expand our previous arguments to consider these devices within the wider issue of ‘safety work’ (Kelly, 2012), with which women engage, and are expected to engage, in day-to-day life. While we suggest that these devices contribute to the normalisation of fear and the apparent necessity for women to take proactive steps for their own protection, we also explore some potential possibilities of these technologies. We examine whether they may facilitate a greater sense of freedom or increased autonomy and agency for women, and the extent to which the acceptance of one form of restriction may alleviate another. Nonetheless, we contend that despite hypothetical gains for both individuals and society, for the most part, these technologies represent ‘safety theatre’, offering only a sense of safety rather than the reality of it.
Background: Rape, Sexual Violence and Technological ‘Solutions’

It is the case that rape and sexual violence persist pervasively in the lives of women (Du Mont & White, 2013; Htun & Weldon, 2012; Kessler et al., 1995) and that public sexual harassment is ubiquitous (Vera-Gray, 2018). While it is not only women who are affected, with men, trans-identified and non-binary people also experiencing violence, the gendered pattern of victimisation is striking (Du Mont & White, 2013; McMillan, 2007; Taylor & Gassner, 2010). Despite significant legal and procedural reform (Corrigan, 2013; McMillan, 2007; Powell et al., 2013), in the main, reporting of rape remains very low (Spohn & Tellis, 2012) and legal outcomes are poor (Harris & Grace, 1999; McMillan, 2010, 2011). The impacts of sexual violence can be considerable, and include physical, mental, social and interpersonal harm
(Chivers-Wilson, 2006; Du Mont & White, 2013; McMillan, 2013). As a result, widespread attention has been given to sexual violence prevention, seeking to reduce incidence and prevalence, and mitigate the harms that result. Sexual violence prevention efforts have taken a number of forms, including awareness raising, educative efforts such as bystander programmes and consent training, and self-defence instruction. The use of devices to attempt to reduce the risk of sexual assault is not new; the chastity belt dates back centuries (Shelby, 2020; White & McMillan, 2020). However, over the last decade, we have witnessed a proliferation of newer tools and communications technologies marketed primarily to women (Flynn, 2015; Maxwell et al., 2020; White & McMillan, 2020; Wood et al., 2021). These take a variety of forms, and include apps for mobile phones, alarm-emitting gadgets and devices worn both on and inside the body. Most, though not all, harness some form of digital technology. These newer products are designed with the purpose of preventing rape and sexual assault, and some claim to also record evidence with the intention of supporting the prosecution of an assailant (White & McMillan, 2020). Interested in this range of devices that have increasingly been produced and brought to market, we embarked upon a systematic analysis to establish a comprehensive itemisation of technologies available at the time, examine the claims made by their inventors and proponents with respect to their capacity to reduce sexual violence, and assess both their possibilities and limitations for doing so (see White & McMillan, 2020). We briefly outline this work and our conclusions before moving to new avenues of consideration regarding whether these devices may offer any positive possibilities for women.
Anti-Rape Technologies: A Case Study

In analysing anti-rape technologies, we drew upon the internet sources where they were advertised, promoted and discussed. As such, the internet was our object and tool for analysis (Flick, 2014). Between 2015
and 2018 we used ‘google alerts’ and ‘google scholar alerts’ to systematically identify and examine existing and proposed anti-rape devices. We restricted our analysis to those that were specifically designed to prevent or reduce sexual assault.
Categorisation of Technologies

We developed a typology of these technologies, categorising them as: corporeal devices (to be worn on or in the body); communications devices (mostly connected to mobile phones); and corporeal/communications devices (those that are both worn on or in the body and that utilise digital communications functions). A full description of the technologies identified is outlined in White and McMillan (2020), but examples of each category are provided here. Those in the corporeal category included devices designed to be worn on or inside a woman’s body, primarily to harm an attacker. These included the FemDefence tampon that harboured a spike on the end to pierce the penis of an attacker and an anti-molestation jacket that could discharge an electrical shock to stun a potential perpetrator. There was also AR Wear, underwear that could not be cut or pulled off; a belt buckle that could only be undone with two hands; bracelets that, when opened, would release a pungent smell to repel an attacker; and the True-love Bra that could only be unhooked when a woman felt ‘true love’, as indicated through the detection of certain secreted chemicals from her body. Those in the communications category mostly took the form of apps for mobile devices. Of those we classified in our analysis, a number of them were about alerting others if one felt in danger. For example, when activated, the Circle of 6 mobile app would notify pre-selected contacts or the emergency services, providing GPS location data. Some offered tracking functions and ‘Follow Me’ features (e.g. bSafe) that allowed owners to announce a safe arrival, with alarms deployed should they not do so. Other apps offered the capacity to document consent to sexual encounters (e.g. WeConsent, Yes to Sex, and Legalfling). One group of apps within the communications category used crowd-sourced data to produce ‘hot spots’ of sexual assault and harassment, posted in real time
to notify others of areas that might be considered dangerous or risky (e.g. HarassMap). Those within our third category, corporeal/communications devices, were wearable technologies that also harnessed digital communications through mobile phones, and included watches, jewellery and bra stickers. Some, intended to signal to others that the person was in danger, activated when the wearer pressed a certain combination of buttons that connected to a mobile phone and sent an alert message (e.g. Athena Safety Wearable, Revolar Instinct). Others were designed to detect unusual movements of the wearer that might suggest she was under attack (e.g. Intrepid bra sticker). Some also functioned as alarm-emitting devices with sound and strobes sending alerts and GPS coordinates to others, for example, The Smart Jewelry Bracelet, which claimed to detect changes in the wearer’s regular movements or vital signs. In addition, some, like The Personal Guardian, which attached to a bra strap or other area of clothing, featured simultaneous audio and/or video recording through the connected mobile phone to document an assault in progress to be used as evidence in a future legal case.
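The alert-and-locate pattern that recurs across the communications and corporeal/communications categories (a trigger, a GPS fix, and notification of a pre-selected circle of contacts) can be sketched in a few lines of code. This is a hypothetical illustration only: the names (`Contact`, `trigger_alert`) and the message format are invented for the sketch and do not reproduce the code or API of any product named above.

```python
# Illustrative sketch of the "alert a circle of trusted contacts" pattern.
# All names and the message format are invented; no real product is shown.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Contact:
    """A pre-selected member of the user's safety circle."""
    name: str
    phone: str


def build_alert_message(coords: Tuple[float, float]) -> str:
    """Compose the alert text with the last known GPS fix."""
    lat, lon = coords
    return f"EMERGENCY: I need help. Last known location: {lat:.5f}, {lon:.5f}"


def trigger_alert(
    contacts: List[Contact],
    get_location: Callable[[], Tuple[float, float]],
    send_sms: Callable[[str, str], None],
) -> int:
    """Notify every contact in the circle; return the number of messages sent."""
    message = build_alert_message(get_location())
    for contact in contacts:
        send_sms(contact.phone, message)
    return len(contacts)
```

In a real deployment, `send_sms` would wrap a messaging gateway and `get_location` the device's positioning service. The chapter's later point about failure modes applies precisely because each of these calls can fail: a dead battery, no signal, or an unreachable button means no message is ever composed, let alone delivered.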
Discursive Claims

In the same study, we examined the discursive claims made by the inventors and proponents of these technologies regarding the possibilities they posited for sexual assault prevention. While some made more realistic and pragmatic claims, recognising that these devices had limitations, that rape and sexual violence are complex problems without easy solutions and that these tools may play only one small part in violence prevention, for the most part, they were presented as empowering and protective for women, suggesting the ability to prevent, reduce or end sexual violence. A number asserted that the evidence gathered with their devices might assist in jailing an attacker, and some went so far as to claim that lives would be saved as a result of these technologies. We established that central to the marketing of all items examined was the contention that
the world is unsafe for women, and that carrying or wearing prevention technologies, as one would any other ‘everyday’ item, should be common sense and practice.
Analysis and Critique

In analysing these technologies, we identified a number of issues, the first of which was the capacity for unintended consequences through device failure or misuse. For instance, mobile phones could lose their charge or, during an assault, a woman could be too traumatised or otherwise unable to reach the required device or buttons to initiate an alarm (Möller et al., 2017). There might be times also when third parties or bystanders contacted by an alert notice would be unable to assist or not know how to. Moreover, tools intended for use on or in the body with sharp points or electrical shock capabilities could unintentionally wound the wearer or, if the perpetrator became aware of the presence of a device or was injured by it, escalate violence on the part of the attacker. We also suggested that there is the very real possibility not only of cyber-stalking (Henry & Powell, 2016; Lemieux & Felson, 2012), but of such apps being used as a tool in the repertoire of ‘coercive control’: a sustained pattern of behaviours contributing to the micromanagement of everyday life (Stark, 2007). We noted the possibility for misuse of tracking and surveillance technologies that allow women’s locations to be known at all times, particularly by coercively controlling partners or family members. The expectation that a woman would consent to such surveillance raised issues of privacy and safety. More specifically, we noted an assumption, often present in bystander discourses, that those with access to an app’s location data or alerted to a woman’s whereabouts would necessarily have only her best interests in mind. We further contended that these devices and their marketing could contribute to fearmongering and to the normalisation of sexual violence as an inevitable and ever-present risk in women’s lives (Spencer et al., 2017). A great deal of the discourse surrounding these devices was about creating, and then harnessing, a state of fear to encourage the use of
23 The Promises and Perils of Anti-rape Technologies
such products for presumed protection. We posited that these technologies, rather than emancipate women, might in fact impose greater social control by normalising women’s avoidance of certain areas and situations and the policing of their own movements and behaviour. Further, while some of the devices were not for profit, and some apps were free to download, many were paid for. We argued that tying the daily routinisation of women’s safety to the market is also problematic. It commodifies women’s safety, makes it accessible only to those who can afford it and intertwines the threat of sexual violence with gendered lifestyle consumerism. It was our contention that as technologies are socially shaped (Wajcman, 1991), these anti-rape devices embodied and reproduced assumptions and representations of rape and sexual violence that were largely based on myth, which in turn could impact their efficacy at both the individual and societal levels through the reinforcement of widespread misunderstandings of sexual violence. For example, most technologies we studied were premised on the notion that women face the greatest threat from strangers in public places: the ‘real rape’ (Estrich, 1987) stereotype long since discounted by evidence. It has been well established that most assaults are carried out by known men (Du Mont & White, 2007; Fisher et al., 2000; McMillan, 2013). Additionally, several of the corporeal tools to be worn on or inside the body were premised on the assumption that vaginal penetration was the sole or primary threat, discounting the myriad other ways that women are sexually assaulted, and failing to address that emotional manipulation, coercion and force are likely to be used instead (Du Mont & White, 2007). A striking commonality across the technologies we examined was the responsibilisation of women and of their friends and family (bystanders) to prevent sexual violence.
Embodying the neoliberal shift from state to individual responsibility, it was places and activities that were framed as risky, rather than perpetrators (see also Bedera & Nordmeyer, 2015). We suggested that there could be an expectation that women would harness these tools in order to enhance their safety, and if they failed to do so, they risked being blamed for their own victimisation, as they routinely are within society and by criminal justice agents (McMillan & White, 2015; White & McMillan, 2018). Similarly, we argued that if the use
L. McMillan and D. White
of devices that record both women’s whereabouts and their assaults in progress were to become normalised and expected, it would not be unreasonable to assume that not using these tools might be viewed as a failure both to ensure safety and to document evidence to prove the interaction was unwanted. We also held that there was capacity for evidentiary ambiguity with a number of these technologies that could lead to unanticipated uses and antithetical outcomes. For those designed with the ability to gather evidence for future judicial processes, we suggested that ultimately, all they would be likely to ‘prove’ was that the accused was in proximity to the woman at the time, rather than a non-consensual encounter—a similar ‘rub’ to most DNA evidence, which often only proves identity and presence, but not the facts of what unfolded (Du Mont & White, 2007). As well, they could reinforce stereotypical expectations of victim responses of high emotion, loud screaming and visible distress, which might not occur or be recorded (Möller et al., 2017). Moreover, those apps designed to record consent for sexual encounters, or document women’s routine activities, could concomitantly record data that might be used to undermine their credibility, as Dodge (2018) has suggested regarding other forms of digital evidence. Finally, we argued that digital evidence gathered through these tools, if it did not exactly concur with a woman’s initial statement, could undermine her narrative account of the assault. As such, it might then be used to discredit her, given that any inconsistency in the story of an event is often looked upon negatively (McMillan & Thomas, 2009; McMillan & White, 2015).
Do They Offer Any Benefits or Possibilities? Having chronicled a substantial list of potential limitations and problems (perils) of anti-rape technologies, it was our intent in this chapter to further consider whether there may be other possibilities for their effectiveness and value. In the absence of research that establishes whether these devices do reduce victimisation (Kalms, 2017; Maxwell et al., 2020; Wood et al., 2021), we can only hypothesise the potential benefits. While we have been critical of these tools in the ways outlined above,
it is not our intention to malign those who create them in the hope of reducing sexual violence (White & McMillan, 2020). Thus, while we stand by what we believe to be notable limitations, we also wish to further consider possible capabilities with respect to: ‘fear, restriction and participation’; ‘agency and resistance’; and ‘collective action and evidence capture’.
Fear, Restriction and Participation In our analysis of these devices, we argue that their marketing plays on women’s widespread fear of rape and sexual assault. Women consistently report greater fear of all types of crime than men, despite being at less risk of victimisation (Hale, 1996; Pain, 1994). This is known as the fear of crime paradox, and is one of the most persistent findings in criminological and victimological research (Vera-Gray & Kelly, 2020). This paradox has been explained by the ‘shadow of sexual assault hypothesis’ (Ferraro, 1995, 1996), whereby the fear of sexual violence shadows other crimes, as women believe rape may become a part of them. The ever-present fear of rape (Gordon & Riger, 1989; Stanko, 1995) results in women engaging in behaviours to protect themselves (Stanko, 1997) that Kelly (2012) has termed ‘safety work’. These are routine, well-practised strategies that women may engage in on a daily basis, such as avoiding certain places, not going out alone at night, policing how they dress, with whom they interact and so on. Avoidance strategies are common in the face of fear of crime, and routinely evidenced in crime and victimisation surveys. For example, Runyan et al. (2007) reported that 47% of US women (n = 1800) avoided doing things they needed to and 71% avoided doing things they wanted to, due to fear of violent victimisation, and that they often engaged in protective strategies such as attending self-defence training. They also found that more than 20% carried pepper spray and almost 20% carried a noise-emitting device such as a personal alarm. In considering possibilities, we questioned whether anti-rape technologies might be empowering for women if they reduce feelings of fear and increase perceptions of safety. We asked whether there is a capacity for
them to increase women’s use of public space, and their confidence when in it. If this were the case, we can see potential benefits at the individual level, as women may feel better able to go where they want, when they want, opening up opportunities for work, leisure and social participation. At the societal level, it may increase women’s access to education and economic participation. This might be the case especially in patriarchal societies or controlling relationships, where some form of monitoring of a woman’s location might, in turn, grant her greater freedom, and become one of the ways by which she might ‘manage’ these contexts and relationships (Dobash & Dobash, 1979). These are valid considerations, and empirical research is needed to establish whether, and to what extent, these might be positive effects. But there remain significant questions about whether this ‘trade-off’ is worth it (Vera-Gray, 2018). First, there is some evidence to suggest that the measures people engage in to protect themselves from harm may in fact add to their sense of fear (Ferraro, 1995), and that tools such as tracking apps might increase anxiety and hypervigilance (Hasinoff, 2017), meaning that using these devices may in fact be counterproductive. Second, the capacities for misuse by coercively controlling partners or family members (Harris & Woodlock, 2019) may outweigh any perceived benefit of greater freedom, as that freedom necessarily comes with the cost of ‘consent’ to surveillance. Third, if these devices only offer the illusion of safety, and not actual safety, is it appropriate for women to be sold a ‘false promise’? Drawing from Maxwell et al. (2020, p. 10), it becomes a question of whether they ‘represent effective crime prevention as opposed to merely affective crime prevention that generates feelings of increased safety among users’, but not actual safety.
Agency and Resistance Following our consideration of the potential of these technologies to increase feelings of safety among women, and the associated potential for their greater use of public space, we want to consider whether women might experience the use of these devices as a form of agency. In this section we explore whether they might allow women to take control of
a persistent problem, the reality and the threat of which blight their lives, and whether they might perceive that these devices could help them resist should an attack occur. These are questions we are currently exploring empirically in new qualitative research, but even if these products do provide a benefit at the individual level, allowing a woman to foil a sexual assault, we have concerns about the potential for product failure, as we have outlined above and in our original critique (White & McMillan, 2020). Most notably, the majority of these devices rely on mobile phone technology to operate, meaning that a failure to charge a battery, or being somewhere with a poor signal, would render them ineffective. So for women living in rural areas, where signals might be poor, and bystanders further away, we question what these products might offer by way of the capability to exercise agency and resistance. It is also worth noting that Maxwell et al. (2020) found that 46 of the apps they identified had features that did not work as intended when tested, suggesting further capacity for product failure. These products are also premised on the notion that a third party will actually respond in a timeous way. In this regard, these tools may not be preventive in the true sense (Maxwell et al., 2020), because an attack is already likely to be underway. Any alert also only advises on the location of the device at the time it was deployed (Maxwell et al., 2020), and the device may no longer be with the person, may never have been with them at all, or the person (with or without the assailant) may have moved location. A critique we level in our research is that these apps and devices are predicated on a fundamental misrepresentation of the reality of sexual violence in women’s lives: that attacks are perpetrated by strangers in public places (White & McMillan, 2020). These products reify the public–private divide, seeking to intervene in assaults that form the minority of violations.
While we know that women tend to fear stranger rape more than acquaintance rape (Barbaret et al., 2003; Wilcox et al., 2006), they face the greatest risk from men they know (Fisher et al., 2000; Koss et al., 1987; McMillan, 2013) with research showing almost 90% of women are assaulted by known men, including current and former partners, friends, relatives and colleagues (McMillan, 2013). We contend that it is in interactions with men with whom women are close,
that women would be least likely to be equipped with a prevention device or strategy for resistance. Even if these products worked 100% of the time (about which we are doubtful) and prevented every stranger rape in public, these data suggest they would, at best, only prevent roughly 10% of rapes. It may be agentic to make a ‘choice’ to use such devices, but as Vera-Gray (2018) highlights, freedom and safety are in tension. She demonstrates that women and girls routinely take steps that limit the former, which are presented as ‘choices’, albeit choices made in a particular context. As C. Wright Mills (1959) says, we make choices but not in the circumstances of our own choosing.
Collective Action and Evidence Capture It is possible too, we hypothesise, that women may perceive anti-rape technologies as a form of collective agency, most notably those that have the capacity to prevent the victimisation of others. Kalms (2017) provides a useful comparison between apps that individuals use to record their location and allow others to track them, and those that use crowd-sourced data to identify areas with higher incidence of sexual assault and harassment. She concludes that while the former is passive, offering limited agency or protection to women, the latter has the capacity to offer both agency and power to women. Kalms (2017) suggests that crowd-sourced data on geolocations of assaults in public spaces clearly identifies areas of a city that might be publicly avoided and may identify areas with potential for intervention and preventive approaches. We would agree to some extent, in that recording the location of public sexual harassment or assault on a geolocative data app may draw greater public attention to a particular location and to the issue as a whole (Eisenhut et al., 2020). And, as it does not require a woman to ‘sacrifice’ anything in terms of consent to ongoing surveillance, or potential physical harm from a technology, it is therefore potentially a lower ‘cost’ to women. We would, however, caution that this still only applies to the minority of assaults in public
places. The identification of so-called ‘hot spots’ of crime and victimisation cannot take account of assaults in the private sphere or in the context of intimate relationships; even where such assaults did occur in public places, we assume that women would be unlikely to record them on a geolocation device. Further, as we flagged earlier, with any crime prevention intervention based on location, there is potential for displacement in terms of ‘hot spot’ targets, times or places (Clarke, 1995), pushing the problem elsewhere rather than eradicating it. Such apps may also lead others to assume falsely that areas not identified in an app are safe when in fact they may not be (Maxwell et al., 2020), therefore creating a false sense of security. Hasinoff (2017) points out that such apps also rely on the accuracy of the data input, which may reflect perceptions of vulnerability rather than actual risk. Data input to crime mapping apps can also reflect racial and class biases (Wood et al., 2021). We have also considered the possibility that the capacity for some anti-rape technologies to synchronously record assaults to provide evidence for criminal justice may be perceived by some as an act of individual and collective resistance to sexual violence, in the belief that doing so may increase the chance of identification and prosecution of an attacker. This may be considered an opportunity to receive justice themselves and may also be akin to some of the motivations documented in research on reporting sexual violence, wherein women are concerned to stop it happening to others (McMillan, 2013), and may possess what Brooks-Hay (2020) has called a sense of social and moral responsibility. Empirical research is needed to establish the probative value of any evidence collected through these technologies, but we would question whether the hope women might place in potential evidence capture is realistic.
The use of such data is still to be legally tested, but if we draw a comparison with data derived from police officers’ body-worn cameras, the findings are mixed. In her analysis of police use of body-worn cameras in domestic and family violence cases, Harris (2020) highlights that those who support the use of such technology argue it can strengthen cases and improve the likelihood of guilty pleas or convictions. She questions, however, whether such cameras provide neutral footage, and the extent to which
the recorded evidence may in fact be used to illustrate that a victim is ‘not ideal’ (not presenting in the stereotypical ways expected of those who are victimised; see also Iliadis et al., Chapter 21). We would suggest a similar argument for video or audio evidence collected by anti-rape apps, as it has the capacity to present those assaulted in ways that may be interpreted through the lens of familiar rape myths about stereotypical emotional responses (Burgin & Flynn, 2019; Ellison & Munro, 2009; Möller et al., 2017), and for judgement to be placed upon individual actions such as levels of resistance, and the extent to which women may or may not verbally resist, call out or alert others.
Conclusion: Safety Tools or ‘Safety Theatre’? Above, we asked whether, if these anti-rape technologies offer only the illusion of safety rather than actual safety, it is appropriate for women to be sold a ‘false promise’. We would argue the answer is no, and that in the absence of research that confirms a reduction in victimisation as a result of their use, many of these devices may represent only ‘safety theatre’. We draw here upon Schneier’s (2003) concept of ‘security theatre’, based on routine security features such as tamper-resistant packaging and airport scanners that many believe provide security but actually do not. We extend this to notions of sexual violence prevention and suggest that any measures taken to increase safety from sexual assault that only make us feel safer are palliative at best, and harmful at worst. We have previously argued that the widespread problems with many of these technologies likely make them antithetical to their stated aims of reducing or eradicating sexual violence, as well as potentially providing platforms for further abuse (White & McMillan, 2020). We acknowledge there may be some instances where these tools might thwart an individual assault, which of course has a benefit. Nonetheless, we are concerned that this neoliberal individualised approach to sexual violence neither seeks wider social change nor challenges the actions of potential perpetrators, and that it makes protection (or perceived protection) available only to those who can afford it (Citron & Franks, 2014).
Our analysis has been focused primarily on those technologies that seek to prevent or avoid sexual violence and make claims to do so. Other technological developments in the area of violence against women, however, include apps with rather different functional intent, including education, reporting and evidence building, and support (Eisenhut et al., 2020). We would suggest that there are a number of possibilities in these technologies that are worthy of closer examination, such as challenging attitudes and behaviours, fostering the identification and naming of violence, mitigating barriers to in-person reporting, facilitating anonymous reporting and providing greater support to those who are sexually violated to reduce the associated physical and mental health sequelae (Chivers-Wilson, 2006; Du Mont & White, 2013; McMillan, 2013). The technologies we examined are unlikely to provide a preventive solution to the widespread, global problem of sexual violence in the lives of women (Eisenhut et al., 2020; White & McMillan, 2020) and risk being mere ‘safety theatre’. Those with educative potential, and the capacity to support reporting and help-seeking, are likely to offer the most promise. Acknowledgements The author(s) received no financial support for the research, authorship and/or publication of this article.
References Barbaret, R., Fisher, B. S., Farrell, G., & Taylor, H. (2003). University student safety. Home Office. Bedera, N., & Nordmeyer, K. (2015). “Never go out alone”: An analysis of college rape prevention tips. Sexuality & Culture, 19 (3), 533–542. Brooks-Hay, O. (2020). Doing the “right thing”? Understanding why rape victim-survivors report to the police. Feminist Criminology, 15 (2), 174–195. Burgin, R., & Flynn, A. (2019). Women’s behavior as implied consent: Male "reasonableness" in Australian rape law. Criminology and Criminal Justice, 21(3), 334–352. Chivers-Wilson, K. (2006). Sexual assault and posttraumatic stress disorder: A review of the biological, psychological and sociological factors and treatments. McGill Journal of Medicine, 9 (2), 111.
Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 43, 345. Clarke, R. V. (1995). Situational crime prevention. Crime and Justice, 19, 91– 150. Corrigan, R. (2013). Up against a wall: Rape reform and the failure of success. New York University Press. Dobash, R. E., & Dobash, R. (1979). Violence against wives. Free Press. Dodge, A. (2018). The digital witness: The role of digital evidence in criminal justice responses to sexual violence. Feminist Theory, 19, 303–321. Du Mont, J., & White, D. (2007). The uses and impacts of medico-legal evidence in sexual assault cases: A global review. World Health Organisation. Du Mont, J., & White, D. (2013). Barriers to the effective use of medicolegal findings in sexual assault cases worldwide. Qualitative Health Research, 23(9), 1228–1239. Eisenhut, K., Sauerborn, E., Garcia-Moreno, C., & Wild, V. (2020). Mobile applications addressing violence against women: A systematic review. BMJ Global Health, 5:e001954. https://doi.org/10.1136/bmjgh-2019-001954. Ellison, L., & Munro, V. (2009). Reacting to rape: Exploring mock jurors’ assessments of complainant credibility. British Journal of Criminology, 49, 202–219. Estrich, S. (1987). Real rape. Harvard University Press. Ferraro, K. F. (1995). Fear of crime: Interpreting victimisation risk. State University of New York Press. Ferraro, K. F. (1996). Women’s fear of victimization: Shadow of sexual assault? Social Forces, 75 (2), 667–690. Fisher, B. S., Cullen, F. T., & Turner, M. G. (2000). The sexual victimization of college women (NCJ 182369). Office of Justice Programs. Flick, U. (2014). An introduction to qualitative research. Sage. Flynn, A. (2015). Sexual violence and innovative responses to justice: Interrupting the recognisable narratives. In A. Powell, N. Henry, & A. Flynn (Eds.), Rape justice: Beyond the criminal law (pp. 92–111). Palgrave Macmillan. Gordon, M. T., & Riger, S. (1989). The female fear. Free Press. Hale, C. (1996). 
Fear of crime: A review of the literature. International Review of Victimology, 4 (2), 79–150. Harris, B. (2020). Visualising violence? Capturing and critiquing body-worn video camera evidence of domestic and family violence. Current Issues in Criminal Justice, 32(4), 382–402.
Harris, B. A., & Woodlock, D. (2019). Digital coercive control: Insights from two landmark domestic violence studies. The British Journal of Criminology, 59 (3), 530–550. Harris, J., & Grace, S. (1999). A question of evidence? Investigating and prosecuting rape in the 1990s (Research Study No. 196). Home Office. Hasinoff, A. A. (2017). Where are you? Location tracking and the promise of child safety. Television & New Media, 18(6), 496–512. Henry, N., & Powell, A. (2016). Sexual violence in the digital age: The scope and limits of criminal law. Social & Legal Studies, 25 (4), 397–418. Htun, M., & Weldon, S. L. (2012). The civic origins of progressive policy change: Combating violence against women in global perspective, 1975–2005. American Political Science Review, 106 , 548–569. Kalms, N. (2017). Digital technology and the safety of women and girls in urban space: Personal safety apps or crowd-sourced activism tools? In H. Frichot, C. Gabrielsson, & H. Runting (Eds.), Architecture and feminisms: Ecologies, economies, technologies (pp. 112–121). Routledge. Kelly, L. (2012). Standing the test of time? Reflections on the concept of the continuum of sexual violence. In J. Brown, & S. Walklate (Eds.), Handbook of sexual violence (pp. xvii–xxvi). Routledge. Kessler, R. C., Sonnega, A., Bromet, E., Hughes, M., & Nelson, C. B. (1995). Posttraumatic stress disorder in the National Comorbidity Survey. Archives of General Psychiatry, 52(12), 1048–1060. Koss, M. P., Gidycz, C. A., & Wisniewski, N. (1987). The scope of rape: Incidence and prevalence of sexual aggression victimization in a national sample of higher education students. Journal of Consulting and Clinical Psychology, 55 (2), 162–170. Lemieux, A. M., & Felson, M. (2012). Risk of violent crime victimization during major daily activities. Violence and Victims, 27 (5), 635–655. Maxwell, L., Sanders, A., Skues, J., & Wise, L. (2020).
A content analysis of personal safety apps: Are they keeping us safe or making us more vulnerable? Violence Against Women, 26 (2), 233–248. McMillan, L. (2007). Feminists organizing against gendered violence. Palgrave. McMillan, L. (2010). Understanding attrition in rape cases (ESRC End of Award Report RES-061–23–0138-A). Swindon, UK: ESRC. McMillan, L. (2011). Sexual violence policy in Europe: Human rights, policies and outcomes. In M. Koch, L. McMillan, & B. Peper (Eds.), Diversity, standardization and social transformation: Gender, ethnicity and inequality in Europe (pp. 61–76). Ashgate.
McMillan, L. (2013). Sexual victimisation: Disclosure, responses and impact. In N. Lombard & L. McMillan (Eds.), Violence against women: Current theory and practice for working with domestic abuse, sexual violence and exploitation (Research Highlights in Social Work Series) (pp. 71–86). Jessica Kingsley. McMillan, L., & Thomas, M. (2009). Police interviews of rape victims: Tensions and contradictions. In M. Horvath & J. Brown (Eds.), Rape: Challenging contemporary thinking (pp. 255–280). Willan. McMillan, L., & White, D. (2015). “Silly girls” and “nice young lads”: Vilification and vindication in the perceptions of medico-legal practitioners in rape cases. Feminist Criminology, 10 (3), 279–298. Möller, A., Söndergaard, H., & Helström, L. (2017). Tonic immobility during sexual assault—A common reaction predicting post-traumatic stress disorder and severe depression. Acta Obstetricia Et Gynecologica Scandinavica, 96 (8), 932–938. Pain, R. (1994). Whither women’s fear? Perceptions of sexual violence in public and private space. International Review of Victimology, 4 (4), 297–312. Powell, A., Henry, N., Flynn, A., & Henderson, E. (2013). Meanings of “sex” and “consent” and the impact of rape law reform in Victoria, Australia. Griffith Law Review, 22(2), 456–480. Runyan, C. W., Casteel, C., Moracco, K. E., & Coyne-Beasley, T. (2007). US women’s choices of strategies to protect themselves from violence. Injury Prevention, 13(4), 270–275. Schneier, B. (2003). Beyond fear: Thinking sensibly about security in an uncertain world. Copernicus Books. Shelby, R. M. (2020). Techno-physical feminism: Anti-rape technology, gender and corporeal surveillance. Feminist Media Studies, 20 (8), 1088–1109. Spencer, C., Mallory, A., Toews, M., Stith, S., & Wood, L. (2017). Why sexual assault survivors do not report to universities: A feminist analysis. Family Relations, 66 (1), 166–179. Spohn, C., & Tellis, K. (2012). The criminal justice system’s response to sexual violence.
Violence Against Women, 18(2), 169–192. Stanko, E. A. (1995). Women, crime and fear. Annals of the American Academy of Political and Social Science, 537 , 46–58. Stanko, E. A. (1997). Safety talk: Conceptualising women’s risk assessment as a technology of the soul. Theoretical Criminology, 1(4), 479–499. Stark, E. (2007). Coercive control: How men entrap women in personal life. Oxford University Press.
Taylor, C., & Gassner, L. (2010). Stemming the flow: Challenges for policing adult sexual assault with regard to attrition rates and under-reporting of sexual offences. Police Practice and Research, 11(3), 240–255. Vera-Gray, F. (2018). The right amount of panic: How women trade freedom for safety. Policy Press. Vera-Gray, F., & Kelly, L. (2020). Contested gendered space: Public sexual harassment and women’s safety work. International Journal of Comparative and Applied Criminal Justice, 44 (4), 265–275. Wajcman, J. (1991). Feminism confronts technology. Pennsylvania State University Press. White, D., & McMillan, L. (2018). Statutory response to sexual violence: Where doubt is always considered reasonable. In N. Lombard (Ed.), The Routledge handbook of gender and violence (pp. 111–122). Routledge. White, D., & McMillan, L. (2020). Innovating the problem away? A critical study of anti-rape technologies. Violence Against Women, 26 (10), 1120–1140. Wilcox, P., Jordan, C. E., & Pritchard, A. J. (2006). Fear of acquaintance versus stranger rape as a “master status”: Towards refinement of the “shadow of sexual assault.” Violence and Victims, 21(3), 355–370. Wood, M. A., Ross, S., & Johns, D. (2021). Primary crime prevention apps: A typology and scoping review. Trauma, Violence and Abuse. https://doi.org/10.1177/1524838020985560. Wright Mills, C. (1959). The sociological imagination. Oxford University Press.
24 Using Machine Learning Methods to Study Technology-Facilitated Abuse: Evidence from the Analysis of UK Crimestoppers’ Text Data Felix Soldner, Leonie Maria Tanczer, Daniel Hammocks, Isabel Lopez-Neira, and Shane D. Johnson
Introduction Technology-facilitated abuse, so-called “tech abuse”, in intimate partner violence (IPV) contexts describes the misuse of technical systems to harass, monitor, and control victims/survivors. This digitally enabled mode of violence can take many forms (Dragiewicz et al., 2018; Harris & Woodlock, 2019; Woodlock, 2017). It may include abusive messages or
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_24
F. Soldner et al.
calls, image-based abuse cases such as “revenge porn” (Citron & Franks, 2014; Henry, McGlynn, et al., 2021; McGlynn et al., 2017), as well as means of being traced through phones, trackers, or other GPS- or Internet-enabled devices (Lopez-Neira et al., 2019; Parkin et al., 2019). Due to the breadth of systems and means to harm victims/survivors through technology, previous publications have described the concept of tech abuse as a “big bucket”, ranging from low-tech offences to more technically sophisticated crimes (Tanczer, 2021). Despite the rising uptake of digital technologies in our day-to-day lives, data on the scale, nature, and extent of this type of abuse is scarce internationally (Henry, Flynn, et al., 2020; Tanczer et al., 2018). Recorded data hint at the scope of the problem. In the UK, these include, for example, the charity Refuge’s assessment of 920 tech-abuse cases, and its later statement that 72% of its service users experienced abuse through technology in 2019 (Refuge, 2020); Women’s Aid’s (2018) survey which indicated that 85% of victims/survivors experience online and offline abuse in parallel; a study by Snook et al. (2017) which found that nearly half of their 209 surveyed victims/survivors (47%) were monitored online by their partner; and most recently the Suzy Lamplugh Trust’s evaluation of the UK’s National Stalking Helpline figures which showcase that 100% of cases presenting to the Helpline now involve a cyber element. Akin dynamics have been identified in countries such as Australia (Henry, McGlynn, et al., 2021; Powell & Henry, 2019; Powell et al., 2020) and the US (Messing et al., 2020). While all these assessments are important and form a step in the right direction, the evaluations are opaque. None of these studies collect longitudinal evidence nor discuss the different forms of offences that
Lopez-Neira Department of Science, Technology, Engineering and Public Policy (STEaPP), University College London, London, UK e-mail: [email protected] I. Lopez-Neira e-mail: [email protected]
24 Using Machine Learning Methods to Study …
fall under the broad category of tech abuse. Hence, present quantitative tech-abuse studies do not account for the nuances of tech abuse that would enable differentiation between the distinct devices or platforms that are abused, nor their respective severity levels (Tanczer et al., 2018). Instead, they speak of tech abuse as an overarching category and are often primarily concerned with "conventional" cyber risks such as abuse patterns on social media and restrictions to devices such as laptops and phones (Burton et al., 2021; Slupska & Tanczer, 2021). As the IPV tech-abuse risk landscape is steadily transforming, our focus and attention must shift towards the different shapes and shades that tech abuse can adopt (Tanczer, 2021). A range of technical innovations, including digital payment systems, Internet of Things (IoT) devices, blockchain technologies, and the ominous concept of Artificial Intelligence (AI), is here to stay (see Chapter 29). The current chapter consequently outlines our recent attempt to add quantitative figures to the evidence base on tech abuse. To facilitate this assessment, we analysed a dataset of 500,000 archival crime reports held by the UK charity Crimestoppers to look for cases of tech abuse. While such big datasets are of great value, it is unfeasible to manually inspect each entry to determine whether it can be categorised as tech abuse. By utilising Machine Learning (ML), which is often placed under the umbrella term AI, we aimed to devise a system which can facilitate the automated detection of tech abuse within a large corpus of text data. Such a computerised identification mechanism would be beneficial not only for research, but also for the practitioner community, including law enforcement and domestic abuse charities. Thus, in this chapter, we outline what an automated system can look like, what potential problems can arise in deploying such tools, and how these can be mitigated.
We also briefly touch upon the distinct forms of tech abuse we observed within the dataset.
Machine Learning (ML) and Natural Language Processing (NLP)

The term "AI" is most commonly used when referring to the concept of ML. The latter describes the automated learning from experience through iterative analysis processes on sample data (Murphy, 2012).
ML can be seen as a part of data science, which focuses on the ways we can extract information from data (Provost & Fawcett, 2013) and understand it within a given context (Dhar, 2013). Thus, data science is the integration of many disciplines, including computer science and mathematics, as well as knowledge from the (research) domain it is applied to (e.g. domestic abuse or mental health), which is crucial for understanding the extracted information. Making use of ML systems is very helpful for a broad range of issues in several areas of science (Jordan & Mitchell, 2015). ML facilitates the addressing and solving of problems both quickly and automatically, without the need to give a machine precise instructions on how to do it. In the case of supervised ML methods, the idea is to present the system with a set of labelled datapoints, which it learns to identify automatically (Murphy, 2012). For example, we can present the system with a set of texts or reports, which are labelled as "tech abuse" and "other". The system then learns to associate the text properties with the corresponding label, with little or no direct human control. The goal is then to utilise the trained system to make predictions on a set of unlabelled text data (i.e. to identify tech-abuse reports based on the text properties it has studied previously). Hence, like a child that has come to know particular patterns, the system applies the lessons it has been trained on to new information. Since ML methods can only work with numerical data, it is not possible to present the system with plain text. To transform text data into a numerical representation, we can use Natural Language Processing (NLP) methods. NLP works at the intersection of human–computer understanding and is a research domain in its own right (Jurafsky & Martin, 2019; Nadkarni et al., 2011). NLP methods for generating numerical text features include simple frequencies or proportions of words (unigrams) or word pairs (bigrams).
Other NLP methods include the generation of grammatical features of the text, in which each word is given its grammatical label (e.g. noun, verb, etc.); these are called part-of-speech tags (Jurafsky & Martin, 2019). Such part-of-speech tags can also be represented numerically as frequencies or proportions within a text. By combining feature sets, text properties can be represented numerically in a form that ML models can process. A supervised ML model can then learn to associate the patterns of these numerical text
features with the corresponding labels (e.g. “tech abuse” and “other”). The task of differentiating between the texts labelled either “tech abuse” or “other” is also called a classification task. Hence, an algorithm (i.e. the ML method) is also referred to as a classifier (Murphy, 2012).
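The workflow just described (numerical text features plus a supervised classifier) can be illustrated in a few lines of scikit-learn, the library used for the analyses reported later in this chapter. The toy reports and labels below are invented for the example; this is a sketch of the general idea, not the study's actual model:

```python
# Toy supervised text classification: invented reports and labels,
# illustrating the general workflow rather than the study's model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

reports = [
    "he tracks her location through her phone",
    "he threatens to post private pictures online",
    "loud arguments reported at the address",
    "neighbour suspects drug dealing next door",
]
labels = ["tech abuse", "tech abuse", "other", "other"]

# Turn each text into numerical features (unigram and bigram counts).
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(reports)

# The classifier learns to associate feature patterns with the labels ...
clf = LinearSVC(dual=False).fit(X, labels)

# ... and can then predict a label for a previously unseen report.
new_report = vectorizer.transform(["he reads all her messages on her phone"])
print(clf.predict(new_report))
```

With only four training texts the prediction is of course meaningless; in the study, this role is played by the labelled Crimestoppers reports.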
The Present Analysis

Given the significant opportunities that ML and NLP approaches provide, we set out to develop a tech-abuse detection algorithm for unstructured text data. Such a method creates the means to identify and monitor tech abuse, generating data upon which evidence-based policy and interventions can be built. The underpinning research questions were: (a) What is the extent of technology-facilitated abuse evident in the Crimestoppers dataset?; and (b) What is the nature of technology-facilitated abuse apparent in the Crimestoppers dataset? As part of our analysis, we encountered multiple hurdles that prevented us from explicitly answering these questions. However, our setbacks offer useful lessons on the limitations of ML and NLP tools. In this chapter, we outline the methodology, as well as our core findings, before discussing them through consideration of the challenges arising from the development of workable automated tech-abuse identification systems. We end the chapter with recommendations and an overview of future research needs.
Method

For our data science-driven project, for which we received ethical approval,1 we worked with un-anonymised data collected as part of Crimestoppers' UK case management system. Crimestoppers is an independent charity that gives people the power to report crime anonymously, either via a telephone number or an anonymous online form. The idea of Crimestoppers, which originated in the US, is that reporting parties are not required to give their name or any personal information.

1 University College London (UCL) Research Ethics Committee, Project ID Number: 10503/001.

This should
lower the barrier for individuals to speak up and to report crime they observe within their surroundings, such as their community or neighbourhood. Thus, the reporting mechanisms encourage "bystanders", meaning individuals who suspect that particular offences are taking place, as well as those directly involved in a crime.
Data

We received the Crimestoppers dataset as an Excel spreadsheet. The dataset contained in total 434,088 crime reports from across the UK (England, Wales, Scotland, and Northern Ireland), logged between 2014 and 2019. All reports were broadly categorised into different crime topics, such as fraud, domestic abuse, rape and sexual offences, drug trafficking, e-crime, and murder & other killings. Across different columns, information about the date, type, and region/location of a reported incident is offered. Most important in the context of our study, the dataset included short text summaries of the nature of the report that was logged. These contents, suitably edited to preserve privacy, were either provided via the anonymous online form or written down as a summary of the oral phone conversation by one of Crimestoppers' call handlers. These text snippets are not verbatim transcripts. Instead, they are just a few lines long and are recorded in the call handlers' words or those of the person reporting the incident. Examples found under the "domestic abuse"-labelled offences can be found below:

"[He] was physically abusing his wife and posting photos on FB of her injuries"

"He smashed her phone."

"Her ex-partner has nude pictures of her and threats to post them around unless she has oral or full sex on a specific date."

"Raping her, drugging her and allowing other men to rape her [his girlfriend] and videoed rape."
Data Storage

To receive this highly sensitive data, a data sharing agreement with Crimestoppers had been set in place, which included the arrangement that all information would be stored and analysed within the Jill Dando Institute Research Laboratory (JDIRL) at University College London. The JDIRL is a state-of-the-art, police-assured secure computer facility that allows researchers to store and analyse data classified up to OFFICIAL-SENSITIVE. The lab is an "air-gapped" facility, which means that the computers offer no connection to the outside world and users are not allowed to bring any electronic items into the lab. Data imports and exports are managed via a tightly controlled authorisation protocol, and all users who wish to access the JDIRL must undergo appropriate personal security checks.
Data Analysis

The purpose of our data analysis was to detect, examine, and compare reports of tech abuse across the Crimestoppers dataset through automated means. As no explicit offence category for tech abuse existed, qualitative descriptions such as the excerpts seen above were used to elucidate the nature and scope of tech abuse. To find an initial set of tech-abuse cases, we conducted a keyword search (see Appendix A for the keywords used) on all reports categorised as "domestic abuse". We manually inspected the filtered reports (n = 700) and annotated 133 of them as being tech abuse, while we marked 567 of them as non-tech abuse. From the annotated data, we compiled a balanced dataset (i.e. 50% tech-abuse and 50% non-tech-abuse reports), consisting of all labelled tech-abuse cases, randomly matched with non-tech-abuse cases (still related to domestic abuse). This resulted in 266 reports. Presenting supervised ML methods with a balanced dataset is common practice and ensures that the model appropriately learns to differentiate between the classes (Batista et al., 2004). We used these 266 reports to train and test the ML algorithm, using a k-fold cross-validation procedure (Kuhn & Johnson, 2013). Such a
procedure splits the data into k equally sized sets, of which k-1 sets are used for training the model and the remaining set is used for testing it (in our case, k = 10). Through an iterative process, in which each iteration is called a fold, the model is trained and tested in each fold on a subset of the data. After completing all folds, every subset has consequently been used both for training and for testing. To make this possible, each fold exchanges one of the nine subsets included in the training data with the one set used as the testing set. That way, the model always trains on nine subsets and is tested on one previously unseen set. Since the data is split into k sets, the procedure is repeated k times to complete a full round of training and testing (in our case, ten times). Testing the model in each fold translates into predicting the data labels of each datapoint in the testing set. Because we know the true labels for each datapoint, we can evaluate the model's performance by looking at the correct and incorrect predictions and generate metric scores, such as accuracy, for each fold. By averaging the performance scores across all folds, we obtain the average performance of the models on our data. A key advantage of a cross-validation procedure is that the model is not evaluated on the same data it is trained on, which avoids "overfitting". The danger of evaluating the model on the training data is that the model relies more on specific sample characteristics than on the true relationships between the features and the labels. Furthermore, with cross-validation, we can utilise all labelled data for training and testing, as well as observe the variability in the models' predictions across the folds. By averaging the performances across all folds, we obtain a more robust estimation of the models' prediction performance on unseen data.
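The procedure above can be sketched with scikit-learn's cross-validation helpers. Here, synthetic numeric data (`make_classification`) stands in for the labelled reports; the 266 samples merely mirror the size of the labelled dataset, and everything else is illustrative:

```python
# 10-fold cross-validation sketch on synthetic data; illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=266, n_features=50, random_state=0)
clf = LinearSVC(dual=False)

# Each fold trains on 9/10 of the data and tests on the held-out 1/10.
scores = cross_validate(clf, X, y, cv=10,
                        scoring=["accuracy", "precision", "recall", "f1"])

# Averaging (and taking the SD) over the 10 folds gives the kind of
# summary reported in Table 24.1.
for metric in ["accuracy", "precision", "recall", "f1"]:
    fold_scores = scores[f"test_{metric}"]
    print(metric, round(fold_scores.mean(), 3), round(fold_scores.std(), 3))
```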
In this study, we measured the models' performances using accuracy, precision, recall, and f1 scores. Although accuracy is very important and the best-known score, precision and recall give us additional insights into how well a model is performing. Both scores give us further information on the predictions of the individual classes (i.e. tech abuse, non-tech abuse). Precision assesses how many of the reports predicted as a class actually belong to it (i.e. how well false positives are avoided), while recall (sensitivity) assesses how many of a class's actual cases were found (e.g. how many reports were predicted to be tech abuse
from all tech-abuse cases). The f1 score represents the harmonic mean of precision and recall. To increase the performance of our model in detecting tech-facilitated abuse in a large corpus of text, we followed an iterative approach of (i) training the model, (ii) making predictions with it on the corpus (detecting tech abuse), and (iii) re-labelling the predicted tech-abuse cases. In this way, we fine-tuned the model by including more correctly labelled reports, covering cases the algorithm had predicted both correctly and incorrectly. Because falsely predicted tech-abuse cases from the first iteration were correctly relabelled, the model could use these cases in the next iteration of the training phase to improve its performance. We followed this procedure for two iterations, which led us to compile a final balanced dataset of 294 reports2 (i.e. 147 tech abuse and 147 non-tech abuse). Since the cross-validation procedure trains and tests a separate model in each fold to assess the method's general capability, none of the ten models is used for the final predictions on the large corpus. Instead, after determining which methods work best, the classifier is trained on the whole labelled dataset to make use of all datapoints. The final model is then used to make predictions on the corpus.
Data Cleaning and Feature Generation

To facilitate the generation of text features, the reports underwent several pre-processing steps. All texts were lowercased to make any further text analyses case-insensitive. Next, all punctuation and English stopwords3 (e.g. "by", "for", "when") were removed, leaving the more meaningful words which carry content within sentences. Lastly, all remaining words were stemmed, converting each word to its stem (e.g. "texting" and "texted" would both be stemmed to "text"), to facilitate a more accurate count of words with the same meaning. From the cleaned reports, we extracted Term Frequency-Inverse Document Frequency (TF-IDF) weighted unigrams,
2 The code and the trained model for making predictions can be found at: https://osf.io/fea5j/?view_only=35786879fdee4d21bc1da71cba3661d1.
3 For a full list of all stopwords, see Bird et al. (2009).
bigrams, and part-of-speech (POS) features.4 TF-IDF weights are used to lower the importance of high-frequency words (e.g. "she", "the", "and"), which occur frequently across all documents. Thus, more weight is assigned to words which are distinctive for each individual document, i.e. frequent within it but rare across the collection (Jurafsky & Martin, 2019). We used the Python package "nltk" (Bird et al., 2009) for text cleaning and feature generation.
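The weighting idea can be made concrete with a stripped-down TF-IDF computation in plain Python. Libraries such as scikit-learn use a smoothed variant of this formula, so exact values differ, but the effect is the same: words appearing in every document receive zero weight.

```python
# Simplified TF-IDF: tf(term, doc) * log(N / df(term)).
import math
from collections import Counter

docs = [
    "she smashed her phone",
    "she posted photos of her online",
    "she received threatening messages",
]
tokenized = [doc.split() for doc in docs]
n_docs = len(docs)

def tf_idf(term, doc_tokens):
    tf = Counter(doc_tokens)[term] / len(doc_tokens)  # term frequency
    df = sum(term in tokens for tokens in tokenized)  # document frequency
    return tf * math.log(n_docs / df)                 # idf down-weights common words

print(tf_idf("she", tokenized[0]))    # 0.0 -- "she" occurs in every document
print(tf_idf("phone", tokenized[0]))  # > 0 -- "phone" is specific to one document
```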
Results

In this section, we first discuss the results of our automated approach to detecting tech abuse, followed by a manual examination of the 147 identified tech-abuse reports, describing the general content and properties of the observed tech-abuse cases. The results showcase that the initial evaluation scores of our model are promising, but do not translate well into the practical detection of tech-abuse reports. Specifically, most reports identified as tech abuse were false positives, showing that the current model is not workable to support a quantitative assessment of text data.
Automated Detection of Tech Abuse Reports

To decide which ML method would be most suited to detect tech-abuse reports in the large corpus, we trained and tested ten different ML classifiers, utilising the tenfold cross-validation procedure explained above. Since we used a two-step approach of training, testing, and detection, we first trained our models on 266 and then on 294 Crimestoppers reports. Both datasets were always equally balanced between tech abuse and non-tech abuse. In both iterations, the "LinearSVC"5 seemed to perform best; its averaged performance scores across the ten folds of the last iteration are reported below.

4 We also extracted n-gram and POS proportions, but they resulted in lower classification performances.
5 Model parameters were set to l = 1 and dual = False, while the remaining parameters were unchanged (default settings).

All analyses were completed in Python using
Table 24.1 Averaged performance scores for the "LinearSVC" classifier. SD values in parentheses

                 Precision       Recall          F1
Tech-abuse       80.71 (10.37)   73.01 (9.55)    75.63 (3.84)
Non-tech-abuse   75.48 (5.61)    80.06 (13.24)   76.89 (5.84)
Average          78.09 (5.15)    76.54 (4.31)    76.26 (4.15)
“scikit-learn” (Pedregosa et al., 2011), a programming package for using ML methods.
Model Performances

The accuracy of the "LinearSVC" classifier was 76.59% (SD = 4.18). Scores for each class and their averages are reported in Table 24.1.
Predicting Tech Abuse

We used the "LinearSVC" classifier in both iterations of the training, testing, and detection of tech-abuse reports. In both iterations, the evaluation metrics seemed promising enough (Table 24.1) to warrant an application of the model to the whole corpus of 434,088 unannotated reports. In the first iteration, the classifier identified 61,969 reports as tech abuse (14% of the whole corpus). As the case numbers seemed very high for such a specific abuse type within this dataset (Crimestoppers commonly records many different forms of crime), we manually inspected 700 entries and found only 14 correctly attributed as tech abuse. After integrating the additionally labelled reports into our data pool in the second iteration of training, testing, and detection, the model identified 30,289 tech-abuse reports across the whole corpus (7% of the 434,088 reports). In a second manual inspection, we examined a similar number of reports and again found only seven tech-abuse cases.
Manual Analysis of the Nature of Tech Abuse

Having looked at the Crimestoppers data in more detail, tech-abuse cases were primarily located in domestic abuse entries. The latter are often merely descriptive, e.g. "commits DA", "being abusive", "is domestically violent", "carries out DA", making the extraction of information on distinct forms of abuse (e.g. financial, psychological, technical) rather difficult. The DA entries were also mostly reported for heterosexual couples, with incident logs regularly being accompanied by references to mental health issues, such as perpetrators being "mentally unstable". Across the dataset, mainly "common" tech-abuse offences were evident, echoing findings of previous research that attempted to cluster different forms of tech abuse (Brown et al., 2018; Freed et al., 2017; Henry & Flynn, 2018; Southworth et al., 2007). These incidents included excessive, malicious, and/or unwanted "messages and emails", some of which involved threats "to kill". They also comprised image-based sexual abuse cases, such as sending or threatening to send "nude pictures", people having "videoed [a] rape", or "posting photos on FB [Facebook] of her [the victim's/survivor's] injuries". Mobile applications such as "Snapchat" and "WhatsApp" were further mentioned. The monitoring and controlling of partners through technology was also prevalent in the dataset. Products such as "Find my iPhone" were highlighted, which allow perpetrators to track a victim/survivor, often without their knowledge. The analysis of the data did not reveal the existence of tech abuse through "unconventional" systems such as smart Internet of Things devices, drones, or game consoles in IPV situations. However, a particular abuse category that has been discussed to a lesser extent within the tech-abuse literature is the active withholding of access to technology (Henry et al., 2021). This element was relatively prominent in the analysed tech-abuse cases.
It included perpetrators having “smashed her [the victims/survivors] phone” or them “confiscate[d] his [the victims/survivors] keys, hide his mobile phone to prevent him contacting someone to help him leave the address”.
Discussion

The present study aimed to develop an automated detection system which could be used to find tech-abuse cases within a large corpus of unstructured reports (e.g. charity or police records, administrative data). Since the manual inspection of big text corpora is unfeasible, an automated approach is needed to quantify and uncover tech-abuse cases in such datasets. Once identified, these records may then be manually examined for a more detailed investigation. To realise this task and deliver a proof-of-concept, we utilised ML in combination with NLP methods. Previous research has shown that such tools can be effective in detecting specific text types within free text, including abuse types and victim/survivor injuries (Karystianis et al., 2019). However, in our study, the detection of tech abuse proved to be more difficult. While our classifier showed promising performances, it had difficulty classifying tech abuse accurately when deployed on a large dataset. Our findings consequently reveal limitations around the generalisability of such automated methods that researchers, as well as practitioners, should closely consider.
Detecting Technology-Facilitated Abuse

As the presented evaluation scores show (Table 24.1), the expected performance of our trained classifier seemed acceptable for detecting tech abuse. Although the performance scores are not as high as in other works with similar goals (Karystianis et al., 2019), our model would have sufficed as a filter to narrow down the reports we would have to inspect manually. Thus, the goal was not to develop a perfect detection system, but rather a sophisticated filtering method. However, after applying the trained classifier to the whole corpus, a large number of reports (over 60k) were predicted as tech abuse and, after closer inspection of these cases, only a small proportion (~2%) seemed to be of interest. In other words, 98% of the flagged reports were false positives, in strong contrast to our expected performance.
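A collapse of this kind is largely a base-rate effect. As a hedged back-of-the-envelope calculation, assume a hypothetical prevalence of 0.5% tech-abuse reports in the corpus (an assumption, not a measured figure) and class recalls roughly matching the Table 24.1 scores; this reproduces precision of the observed order:

```python
# Hypothetical base-rate calculation. The 0.5% prevalence is an assumption;
# recall and specificity approximate the Table 24.1 class recalls.
prevalence = 0.005    # assumed share of tech-abuse reports in the corpus
recall = 0.73         # tech-abuse recall (Table 24.1)
specificity = 0.80    # roughly the non-tech-abuse recall (Table 24.1)

# Share of the corpus the classifier flags: true positives + false positives.
flagged = prevalence * recall + (1 - prevalence) * (1 - specificity)
# Share of flags that are genuine tech abuse.
precision = (prevalence * recall) / flagged

print(f"share of corpus flagged: {flagged:.1%}")   # roughly 20%
print(f"precision among flags:  {precision:.1%}")  # roughly 2%
```

Even a classifier that looks decent on a balanced evaluation set can thus drown the genuine cases in false positives once the class of interest is rare.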
To tackle this problem, we re-trained the classifier, including newly labelled data from these false positive cases, which seemed to improve the predictions, lowering the number of predicted tech-abuse cases to around 30k. The idea was to train the algorithm on cases that are difficult for the model to classify, which in turn should make it easier for the algorithm to find tech abuse in the future. Nevertheless, after closer inspection, the problem persisted, and only a small fraction of the predicted tech-abuse cases was, in fact, tech abuse. This shows a strong discrepancy between the expected and the observed performance of the ML method. It seems that the evaluation scores did not translate well into the classifier's practical ability to detect tech-abuse reports. Hence, whenever we tried to generalise the model to look for the small number of tech-abuse cases in the overarching sample (i.e. like finding a needle in a haystack), the method was not as sensitive as needed. It is not immediately apparent why the classifier did not work as anticipated, but some potential reasons are as follows. First, the training and evaluation sets did not represent the corpus well, effectively invalidating the evaluation scores. In our first iteration, we only labelled reports originating from the domestic abuse category, while we applied the model to the whole corpus. Although the domestic abuse category is not an adequate representation of the whole corpus, it includes reports which are more difficult to differentiate (e.g. a report mentioning technology that is not involved in the abuse vs. a report mentioning technology that is involved in the abuse). Thus, the initial training and testing set contained such difficult cases, which should be helpful for the model to learn about nuances in the data and classify them more accurately.
Furthermore, in the second iteration of the model, the training set contained reports from other categories, namely reports that were initially falsely predicted. Both steps added more difficult-to-differentiate cases, which the model was able to learn from. However, these measures did not lead to better detections in practice. Second, the reports included in the data are very short (often ranging between five and ten short sentences). The brevity of the recorded call handler notes limits the details and semantics which can be captured. Thus, the model can only be presented with very limited information.
While this might be one of the problems in detecting tech abuse within this study, utilising short texts or "imperfectly" recorded data will have to be addressed in the future. Third, it is possible that the corpus does not contain many tech-abuse cases, which would make them very difficult to locate. This problem is also referred to as a low base rate, which makes it very difficult even for highly accurate models to achieve low false positive rates (Axelsson, 2000). Fourth, tech abuse as a concept and abuse form may not be well enough defined, highlighting the need to agree on exact features to allow for its possible application as a distinct offence category in future data collection processes. Currently, tech abuse is not an "official" concept or measurement category. For instance, Markwick et al. (2019) underlined that scholars have used different terminology to describe the perpetration of abuse and harassment via digital means. Furthermore, none of the terms used has an agreed specification; they are frequently associated with a combination of "behaviours" (Dragiewicz et al., 2018), "areas" (Henry & Flynn, 2018), or "dimensions" (Powell & Henry, 2018). Additionally, different "forms" of tech abuse commonly intersect (Brown et al., 2018; Messing et al., 2020), and the subjective nature of tech abuse makes it further challenging to define, detect, and measure technology-enabled abuse incidents (Messing et al., 2020). Throughout our iterative coding process, we encountered exactly this problem and frequently struggled to make a definite assessment of whether a Crimestoppers report should or should not fall under our evaluation of tech abuse. It is possible that this uncertainty in the labelling process is passed on to the model, making it more difficult to identify tech abuse.
Although all the previously mentioned reasons could lead to reduced prediction performance, it is important to reiterate that the evaluation metrics did not transfer well into a practical prediction model. The model seems accurate but is not practical. This discrepancy raises questions about the practicality of existing ML models with similar tasks and goals, because the evaluation metrics might not always capture the actual prediction performance. It further illustrates that we need additional control mechanisms to ensure our ML models behave as anticipated, which we mention below in the "recommendations" section.
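One simple control of this kind is to raise the classifier's decision threshold so that only high-confidence flags survive, trading recall for precision. A hypothetical sketch with scikit-learn's LinearSVC (toy reports and invented labels, not the study's model):

```python
# Raising LinearSVC's decision threshold above its default of 0 keeps only
# high-confidence positive predictions (fewer false positives, more misses).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

reports = [
    "he tracks her location through her phone",
    "he reads all her private messages online",
    "there was a loud argument about money",
    "neighbour reports shouting at the address",
]
labels = [1, 1, 0, 0]  # 1 = tech abuse (toy labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reports)
clf = LinearSVC(dual=False).fit(X, labels)

scores = clf.decision_function(X)  # signed distance from the decision boundary
threshold = 0.5                    # default decision rule is score > 0
flags = scores > threshold         # stricter rule: flag only confident cases
print(flags)
```

The right threshold depends on the relative cost of missed cases versus manual checking, which is exactly the trade-off discussed in the recommendations.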
Insights from the Manual Inspections

With regard to the manual analysis, our study sadly did not provide the anticipated in-depth details on the exact distribution and nature of the different nuances of tech abuse. However, we were able to observe elements that echo existing dynamics noted by scholars and practitioners alike, including image-based sexual abuse forms, malicious and/or unwanted messages, and stalking behaviours (Flynn & Henry, 2019; Henry & Flynn, 2019; McGlynn et al., 2017, 2019; Messing et al., 2020; Women's Aid, 2018; Yardley, 2020). Despite the uniqueness and qualitative difference of our dataset, there was nothing out of the ordinary that we were able to detect. Instead, the cases of tech abuse we reviewed replicated tech-abuse dimensions and trends which we already know or suspect to be happening. However, one element worth pointing out is the large proportion of cases that we classified as involving the active withholding of devices and technical systems such as phones. Instead of an "active" or "kinetic" misuse of technology, its suppression may possibly be as serious as its deliberate manipulation. This observation also plays into our earlier posed question of what tech abuse is and what counts as technology-mediated abuse. Indeed, the research team wondered, firstly, whether tech abuse did not flag up in the analysed dataset because callers may themselves not consider aspects such as the confiscation of a device a crime. Secondly, they may also not perceive it as severe "enough" to be worth reporting (McGlynn et al., 2021). The latter would explain why cases of physical, financial, or sexual abuse remain so prevalent. Thirdly, tech abuse may be an element of IPV that is easier to observe in datasets where victims/survivors themselves (or people very close to them or the perpetrator) give details about wrongdoings (e.g. domestic abuse charity data).
Unlike physical violence, where family, friends, and neighbours may hear or "see" something (e.g. screams, bruises), tech abuse may be much harder to detect and possibly also less likely to be reported by external third parties. Due to all these considerations, we are therefore careful to stress that the absence of an observation (i.e. of tech abuse) does not imply the absence of the act itself. We are consequently hopeful that
future research into tech abuse will help to expand our understanding of the different subtleties that are part of a pattern of perpetrators not acknowledging victim’s/survivor’s boundaries and the different shades and shapes that tech abuse may involve.
List of Recommendations and Future Research Needs

Across this chapter, we hope to have revealed the practicalities of, and challenges around, doing experimental research on real-world unstructured text data. We now want to end our article with a list of recommendations derived from the lessons we learned throughout this study. These suggestions may benefit researchers and other parties interested in further developing automated detection systems for tech-abuse cases. Firstly, other, more sophisticated methods could be employed to generate text features. Such methods could include using the Linguistic Inquiry and Word Count (LIWC) software (Pennebaker et al., 2015) or word embeddings (Jurafsky & Martin, 2019). Secondly, when training a classifier, it might be useful to tune the model's parameters to avoid false positive predictions in exchange for more false negatives. Although this is not an optimal solution, as some positive cases will be missed, it would make the model somewhat more practical, as it would reduce the manual labour of checking all positively predicted cases. However, this approach is highly dependent on the classification task, and the costs of increasing false negatives should be clearly considered. Lastly, there is a need for a more iterative process when working with ML. As our study shows, humans must be in the loop and are required to audit the output, as we did. Thus, we encourage others to always take a sample of their predicted cases to check whether the predictions are practical; otherwise, they run the risk of falsely attributing prediction performances. With that being said, we are also mindful that automated detection mechanisms may remain too shallow to understand the breadth of dynamics that come into play when studying the phenomenon of tech abuse. We therefore encourage researchers and practitioners not to disregard the lived realities of victims/survivors, who remain unheard in such quantitative evaluations. Indeed, the research team believes that our
F. Soldner et al.
study shows the necessity of continuing qualitative research on tech abuse, especially while such other methodological tools remain ineffective. We should also not forget to pursue research into the root causes, consequences, escalation trajectories, and causal pathways that precipitate tech abuse, as well as the legal and technical instruments that could help improve the situation for victims/survivors. None of these elements should ever be displaced by the "sexiness" of the latest instruments such as ML and NLP, which in many ways are an add-on to, rather than a replacement for, our existing research toolkit.

Acknowledgements The authors are indebted to Crimestoppers for sharing their data, to the Jill Dando Institute Research Laboratory (JDIRL) for hosting all records, and to Oli Hutt and Nigel Swift for providing technical support while using the JDIRL. We are also thankful to the book editors (Dr. Asher Flynn, Dr. Anastasia Powell, and Dr. Lisa Sugiura) for their flexibility and support throughout our analysis and write-up process. Parts of the insights discussed in this publication stem from findings of UCL's "Gender and IoT" research project, which has received funding from the UCL Social Science Plus+ scheme, UCL Public Policy, the PETRAS IoT Research Hub (EP/N02334X/1), the UK Home Office, and the NEXTLEAP Project (EU Horizon 2020 Framework Programme for Research and Innovation, H2020-ICT-2015, ICT-10-2015, grant agreement No. 688722). The work was also supported by the Dawes Centre for Future Crime at UCL.
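To make the threshold recommendation above concrete, here is a minimal, library-free sketch: raising the decision threshold of a probabilistic classifier trades false positives for false negatives. The scores, labels, and function name below are synthetic, invented purely for illustration; they are not the study's actual data or pipeline.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given decision threshold.

    scores: predicted probabilities that a report describes tech abuse;
    labels: 1 if the report truly describes tech abuse, 0 otherwise.
    """
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Synthetic classifier output for eight hypothetical reports.
scores = [0.95, 0.80, 0.65, 0.55, 0.45, 0.35, 0.20, 0.10]
labels = [1, 1, 0, 1, 0, 0, 1, 0]

# Default threshold vs. a stricter one: fewer false alarms to audit manually,
# but more genuine cases missed.
fp_default, fn_default = confusion_counts(scores, labels, 0.5)  # (1, 1)
fp_strict, fn_strict = confusion_counts(scores, labels, 0.7)    # (0, 2)
```

With scikit-learn (Pedregosa et al., 2011), the same effect can be obtained by thresholding `predict_proba` output instead of calling `predict`; as stressed above, the acceptable trade-off depends on the classification task and the cost of each error type.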
Appendix A: Used Keywords

Physical devices: “smart”, “device”, “computer”, “laptop”, “alexa”, “tablet”, “keytracker”, “tracker”, “(smart)-heater”, “light”, “lock”

Online platform apps: “online”, “technology”, “internet”, “digital”, “dating app”, “facebook”, “systems”, “messages”, “apps”, “service”, “account”, “platform”, “dating site”, “instagram”, “snapchat”, “tinder”, “app”, “whatsapp”, “spyware”, “find my iPhone”, “find my Friends”, “gps”, “youtube”, “caller id”, “profile”, “sniffer”, “Badoo”, “messenger”, “chat messenger”, “fake account”, “flirtfinder”, “ipad”, “snap chat”, “what’s app”
24 Using Machine Learning Methods to Study …
Verbs: “dating”, “stalking”, “control”, “victimisation”, “report”, “access”, “texting”, “calling”, “sexting”, “experience”, “bullying”, “rape”, “video”, “use”, “abuse”, “sexualise”, “harass”, “harm”, “perpetrate”, “experiment”, “sharing”, “threat”, “intimate”, “message”, “phone”, “post”, “follow”, “cyberbullying”, “doxing”, “tracking”, “monitoring”, “watching”, “blackmailing”, “humiliate”, “restrict”, “destroy”, “punish”, “force”, “impersonate”, “gaslight”, “controlling”, “distribute”, “hacking”, “attack”, “expose”, “film”, “command”, “spread”, “shout”
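As an illustration of how keyword lists like those above can serve as a first-pass filter over free-text reports, the sketch below uses a small subset of the keywords with case-insensitive, word-boundary matching. The subset, function name, and example reports are our own simplifications for illustration, not the study's actual pipeline.

```python
import re

# Illustrative subset of the Appendix A keywords.
KEYWORDS = ["spyware", "tracker", "gps", "stalking", "monitoring", "whatsapp"]

# Word-boundary matching avoids flagging substrings of unrelated words;
# case-insensitivity catches "GPS" as well as "gps".
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, KEYWORDS)) + r")\b",
                     re.IGNORECASE)

def flag_report(text):
    """Return the sorted, lower-cased keywords found in a report (empty if none)."""
    return sorted({match.lower() for match in PATTERN.findall(text)})

reports = [
    "He installed spyware on my phone and follows my GPS location.",
    "Noise complaint about a neighbour's party.",
]
flags = [flag_report(report) for report in reports]  # [["gps", "spyware"], []]
```

A keyword pass like this only shortlists candidate reports; as the chapter stresses, matched cases still require classification and human auditing, since keyword presence alone produces many false positives.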
References

Axelsson, S. (2000). The base-rate fallacy and the difficulty of intrusion detection. ACM Transactions on Information and System Security, 3(3), 186–205. https://doi.org/10.1145/357830.357849
Batista, G. E. A. P. A., Prati, R. C., & Monard, M. C. (2004). A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explorations Newsletter, 6(1), 20–29. https://doi.org/10.1145/1007730.1007735
Bird, S., Klein, E., & Loper, E. (2009). Natural language processing with Python. O’Reilly Media.
Brown, M. L., Reed, L. A., & Messing, J. T. (2018). Technology-based abuse: Intimate partner violence and the use of information communication technologies. In J. R. Vickery & T. Everbach (Eds.), Mediating misogyny: Gender, technology, and harassment (pp. 209–227). Springer International Publishing. https://doi.org/10.1007/978-3-319-72917-6_11
Burton, S., Tanczer, L. M., Vasudevan, S., & Carr, M. (2021). The UK code of practice for consumer IoT cybersecurity: Where we are and what next (pp. 1–74). Department for Digital, Culture, Media and Sport; The PETRAS National Centre of Excellence for IoT Systems Cybersecurity. https://discovery.ucl.ac.uk/id/eprint/10117734/
Citron, D., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 49, 345–391.
Dhar, V. (2013). Data science and prediction. Communications of the ACM, 56(12), 64–73. https://doi.org/10.1145/2500499
Dragiewicz, M., Burgess, J., Matamoros-Fernández, A., Salter, M., Suzor, N. P., Woodlock, D., & Harris, B. (2018). Technology facilitated coercive control: Domestic violence and the competing roles of digital media platforms. Feminist Media Studies, 18(4), 609–625. https://doi.org/10.1080/14680777.2018.1447341
Flynn, A., & Henry, N. (2019). Image-based sexual abuse: An Australian reflection. Women & Criminal Justice, 1–14. https://doi.org/10.1080/08974454.2019.1646190
Freed, D., Palmer, J., Minchala, D. E., Levy, K., Ristenpart, T., & Dell, N. (2017). Digital technologies and intimate partner violence: A qualitative analysis with multiple stakeholders. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW), 1–22. https://doi.org/10.1145/3134681
Harris, B. A., & Woodlock, D. (2019). Digital coercive control: Insights from two landmark domestic violence studies. The British Journal of Criminology, 59(3), 530–550. https://doi.org/10.1093/bjc/azy052
Henry, N., & Flynn, A. (2019). Image-based sexual abuse: Online distribution channels and illicit communities of support. Violence Against Women, 25(16), 1932–1955. https://doi.org/10.1177/1077801219863881
Henry, N., Flynn, A., & Powell, A. (2020). Technology-facilitated domestic and sexual violence: A review. Violence Against Women, 26(15–16), 1828–1854. https://doi.org/10.1177/1077801219875821
Henry, N., & Flynn, A. L. G. (2018). Technology-facilitated abuse among culturally and linguistically diverse women: A qualitative study (p. 94). Canberra: Office of the eSafety Commissioner. https://research.monash.edu/en/publications/technology-facilitated-abuse-among-culturally-and-linguistically
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge. https://doi.org/10.4324/9781351135153
Henry, N., Vasil, S., Flynn, A., Kellard, K., & Mortreux, C. (2021). Technology-facilitated domestic violence against immigrant and refugee women: A qualitative study. Journal of Interpersonal Violence. https://doi.org/10.1177/08862605211001465
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255–260. https://doi.org/10.1126/science.aaa8415
Jurafsky, D., & Martin, J. H. (2019). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition (3rd ed. draft). Stanford. https://web.stanford.edu/~jurafsky/slp3/
Karystianis, G., Adily, A., Schofield, P. W., Greenberg, D., Jorm, L., Nenadic, G., & Butler, T. (2019). Automated analysis of domestic violence police reports to explore abuse types and victim injuries: Text mining study. Journal of Medical Internet Research, 21(3). https://doi.org/10.2196/13067
Kuhn, M., & Johnson, K. (2013). Applied predictive modeling. Springer.
Lopez-Neira, I., Patel, T., Parkin, S., Danezis, G., & Tanczer, L. M. (2019). ‘Internet of Things’: How abuse is getting smarter. Safe—The Domestic Abuse Quarterly, 63, 22–26.
Markwick, K., Bickerdike, A., Wilson-Evered, E., & Zeleznikow, J. (2019). Technology and family violence in the context of post-separated parenting. Australian and New Zealand Journal of Family Therapy, 40(1), 143–162. https://doi.org/10.1002/anzf.1350
McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Powell, A., & Flynn, A. (2021). ‘It’s torture for the soul’: The harms of image-based sexual abuse. Social & Legal Studies, 30(4), 541–562. https://doi.org/10.1177/0964663920947791
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond ‘revenge porn’: The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46. https://doi.org/10.1007/s10691-017-9343-2
McGlynn, C., Rackley, E., & Johnson, K. (2019). Shattering lives and myths: A report on image-based sexual abuse. Durham University; University of Kent.
Messing, J., Bagwell-Gray, M., Brown, M. L., Kappas, A., & Durfee, A. (2020). Intersections of stalking and technology-based abuse: Emerging definitions, conceptualization, and measurement. Journal of Family Violence, 35(7), 693–704. https://doi.org/10.1007/s10896-019-00114-7
Murphy, K. P. (2012). Machine learning: A probabilistic perspective. MIT Press.
Nadkarni, P. M., Ohno-Machado, L., & Chapman, W. W. (2011). Natural language processing: An introduction. Journal of the American Medical Informatics Association, 18(5), 544–551. https://doi.org/10.1136/amiajnl-2011-000464
Parkin, S., Patel, T., Lopez-Neira, I., & Tanczer, L. M. (2019). Usability analysis of shared device ecosystem security: Informing support for survivors of IoT-facilitated tech-abuse. In Proceedings of the New Security Paradigms Workshop (pp. 1–15). San Carlos, Costa Rica: Association for Computing Machinery. https://doi.org/10.1145/3368860.3368861
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., & Duchesnay, É. (2011). Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12, 2825–2830.
Pennebaker, J. W., Boyd, R. L., Jordan, K., & Blackburn, K. (2015). The development and psychometric properties of LIWC2015. University of Texas at Austin. https://doi.org/10.15781/T29G6Z
Powell, A., & Henry, N. (2018). Policing technology-facilitated sexual violence against adult victims: Police and service sector perspectives. Policing and Society, 28(3), 291–307. https://doi.org/10.1080/10439463.2016.1154964
Powell, A., & Henry, N. (2019). Technology-facilitated sexual violence victimization: Results from an online survey of Australian adults. Journal of Interpersonal Violence, 34(17), 3637–3665. https://doi.org/10.1177/0886260516672055
Powell, A., Scott, A. J., Flynn, A. L. G., & Henry, N. (2020). Image-based sexual abuse: An international study of victims and perpetrators—A summary report (pp. 1–16). RMIT University. https://www.researchgate.net/publication/339488012_Image-based_sexual_abuse_An_international_study_of_victims_and_perpetrators
Provost, F., & Fawcett, T. (2013). Data science and its relationship to big data and data-driven decision making. Big Data, 1(1), 51–59. https://doi.org/10.1089/big.2013.1508
Refuge. (2020, January 9). 72% of Refuge service users identify experiencing tech abuse. https://www.refuge.org.uk/72-of-refuge-service-users-identify-experiencing-tech-abuse/
Riley, A. (2020). How your smart home devices can be turned against you. BBC Future. https://www.bbc.com/future/article/20200511-how-smart-home-devices-are-being-used-for-domestic-abuse
Slupska, J., & Tanczer, L. M. (2021). Threat modeling intimate partner violence: Tech abuse as a cybersecurity challenge in the Internet of Things (IoT). In J. Bailey, A. Flynn, & N. Henry (Eds.), Handbook on technology-facilitated violence and abuse: International perspectives and experiences. Emerald Publishing.
Snook, Chayn, & SafeLives. (2017). Tech vs abuse: Research findings 2017 (pp. 1–56). Comic Relief.
Southworth, C., Finn, J., Dawson, S., Fraser, C., & Tucker, S. (2007). Intimate partner violence, technology, and stalking. Violence Against Women, 13(8), 842–856. https://doi.org/10.1177/1077801207302045
Tanczer, L. M. (2021). Das Internet der Dinge: Die Auswirkung »smarter« Geräte auf häusliche Gewalt [The Internet of Things: The impact of “smart” devices on domestic violence]. In bff: Bundesverband Frauenberatungsstellen
und Frauennotruf, & N. Prasad (Eds.), Geschlechtsspezifische Gewalt in Zeiten der Digitalisierung: Formen und Interventionsstrategien [Gender-specific violence in times of digitalisation: Forms and intervention strategies] (pp. 205–225). Berlin: Transcript. https://doi.org/10.14361/9783839452813
Tanczer, L. M., Lopez-Neira, I., Parkin, S., Patel, T., & Danezis, G. (2018). Gender and IoT (G-IoT) research report: The rise of the Internet of Things and implications for technology-facilitated abuse (pp. 1–9). University College London. https://www.ucl.ac.uk/steapp/sites/steapp/files/giot-report.pdf
Women’s Aid. (2018). Online and digital abuse. https://www.womensaid.org.uk/information-support/what-is-domestic-abuse/onlinesafety/
Woodlock, D. (2017). The abuse of technology in domestic violence and stalking. Violence Against Women, 23(5), 584–602. https://doi.org/10.1177/1077801216646277
Yardley, E. (2020). Technology-facilitated domestic abuse in political economy: A new theoretical framework. Violence Against Women. https://doi.org/10.1177/1077801220947172
Part VII Legal Developments
25 Gaps in the Law on Image-Based Sexual Abuse and Its Implementation: Taking an Intersectional Approach

Akhila Kolisetty
Introduction

With the growing use of technology and the shift to digital spaces spurred by the COVID-19 pandemic, cases of image-based sexual abuse (“IBSA”) have risen rapidly worldwide (Powell & Flynn, 2020). However, countries are still coming to terms with how to effectively tackle such abuse through the law. Some countries have no specific legislation to address IBSA (see e.g. Bangladesh), while others have laws that do little to address the needs of the victim-survivors or are even actively harmful (see e.g. India) (End Cyber Abuse, 2020a, 2020b). Common across contexts, however, is the lack of an intersectional analysis in the law. The experiences of victim-survivors of IBSA clearly show that one’s identities, including gender, sexual orientation, age, race, or religion, can expose one to disproportionate or differential harm.

A. Kolisetty (B) End Cyber Abuse, Hoboken, NJ, USA. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_25
Yet, laws, policies, and justice processes related to IBSA are generally not crafted with a nuanced understanding of intersectionality, nor through informed consultations with impacted communities, but tend to apply one-size-fits-all definitions and rules (see Flynn & Henry, 2019b; Henry & Flynn, 2019). This chapter explores how laws and institutional responses addressing IBSA fail to capture the full spectrum of victim-survivor needs and experiences, and considers alternatives. The chapter begins by defining IBSA and exploring how intersecting systems of power and oppression shape individuals’ experiences. I then apply an intersectional analysis to key elements of laws that address IBSA, and lastly, explore aspects of the judicial process and sentencing, alternatives to criminalisation, and solutions beyond the law from the perspective of communities at intersecting sites of oppression. This chapter is informed by desk research and consultations I conducted as Co-Director of End Cyber Abuse, a collective of lawyers and activists working to tackle technology-facilitated gender violence (TFGV). I spoke with six lawyers, activists, and service providers who have worked with victim-survivors of IBSA, and who bring intersectional and feminist perspectives. Three were based in the United Kingdom (“UK”), two in the Philippines, and one in Pakistan. Throughout this chapter, I primarily utilise the term “victim-survivor” to refer to individuals who have experienced IBSA, recognising that each individual experiences the harm differently, and that healing from the trauma of IBSA is a very personal journey (see McGlynn et al., 2020).
What Is Image-Based Sexual Abuse?

Image-based sexual abuse (“IBSA”) describes the non-consensual taking, dissemination, or threat to disseminate intimate, private, or sexual images (photos or videos) (Powell et al., 2019; Powell, Henry, & Flynn, 2018). Consent is required both when the image is first taken and again when it is shared with any third party. In this chapter, I primarily focus on cases where images initially shared freely are later disseminated without consent.
This form of abuse is widely known as “revenge porn,” a problematic term that does not fully reflect the scope, motivation, and nature of the crime, and which relies on victim-blaming language (Powell, Henry, & Flynn, 2018; Henry et al., 2019). Additionally, it mischaracterises this abuse as “pornography”, even though the material is neither created freely nor solely for the purpose of sexual gratification. Consequently, I use the term “image-based sexual abuse,” developed by campaigners, academics, and others in the field to better emphasise its abusive nature and to capture the wider range of motivations for the non-consensual dissemination of intimate images, such as financial gain or social notoriety (Flynn & Henry, 2019a; Henry et al., 2019; McGlynn & Rackley, 2017; McGlynn et al., 2017; Powell et al., 2020).
Taking an Intersectional Approach: Impacts on Marginalised Communities

Intersectionality addresses how multiple and intersecting systems of power and oppression shape individuals’ experiences and access. Legal scholar Kimberlé Crenshaw (1989) coined the term to reveal the discrete, multidimensional ways in which layers of oppression and structural inequalities intersect and contribute to specific forms of subordination based on an individual’s identities. For example, the intersection of racism and sexism impacts African-American women in distinct ways that an analysis of race or gender alone does not reveal. An intersectional approach takes into account the “contextually specific social constructions” of categories, such as gender, race, or class (Caiola et al., 2014, p. 3). This requires exploring the historical, political, and social context of how an identity is constructed, revealing the unique and complex experiences of an individual located at those intersections. This analysis must move away from treating the dominant social group as the standard against which all other marginalised groups are compared (Caiola et al., 2014). Further, an intersectional approach is not simply an “additive” approach that begins with one identity and adds others (Hankivsky et al., 2012). Rather, it understands that socially constructed differences intersect to create a specific “social location” that often creates
distinct experiences, not only disproportionate or unitary disadvantages (Caiola et al., 2014; Hankivsky & Cormier, 2010). It also recognises that people can “simultaneously experience oppression and privilege” (AWID, 2004, p. 2). An intersectional lens contributes a far richer understanding of victim-survivors’ experiences with IBSA, including how gender intersects with other identities to shape one’s particular circumstances. This is key to crafting more nuanced, responsive, and trauma-informed laws, policies, and institutional responses that address diverse victim-survivor needs. Below, I highlight how victim-survivors are impacted by IBSA, with a focus on sexual orientation and gender identity, and cultural and religious background, to help illuminate the elements of an intersectional analysis.
IBSA, Gender Identity, and Sexual Orientation

Perhaps surprisingly, a number of quantitative studies have found that men and women experience IBSA at roughly similar rates (Henry et al., 2019; Lenhart et al., 2016; Powell, Henry, & Flynn, 2018; Powell et al., 2019, 2020; Reed et al., 2016), or that men had slightly higher victimisation rates (Gámez-Guadix et al., 2015). In contrast, a few studies find that women disproportionately experience IBSA (Eaton et al., 2017; OeSC, 2017; Uhl et al., 2018). However, although victimisation rates may be similar across genders, research reveals that this harm is gendered. First, several studies seem to confirm that it is disproportionately men who perpetrate IBSA or self-report doing so (Henry et al., 2019; Powell, Henry, & Flynn, 2018; Powell et al., 2019, 2020). Second, the gendered nature of IBSA is revealed by the consequences of the harm: women report more negative impacts of IBSA than men, such as being humiliated, depressed, angry, worried, or afraid for their safety, along with more negative health, reputational, and relational impacts (Henry et al., 2019; Lenhart et al., 2016; OeSC, 2017; Powell, Henry, & Flynn, 2018, p. 309; Powell et al., 2020). Men, however, are more likely to describe their experience of IBSA as funny, flattering,
or that they were “okay” with it (Powell, Henry, & Flynn, 2018). The gendered element also becomes apparent when considering that IBSA is often part of a broader pattern of power and control, including domestic violence (Henry et al., 2019; McGlynn et al., 2019, 2020; Powell, Henry, & Flynn, 2018). Studies have found that women victim-survivors are more likely to be stalked, threatened, or harassed by the perpetrator (OeSC, 2017; Powell et al., 2020). Further, scholars have examined how IBSA is often a form of exercising patriarchal power through the public objectification of women’s bodies (McGlynn & Rackley, 2017). Henry and Flynn’s (2019) investigation of websites that host IBSA found that images, largely of women shared by heterosexual men, were regularly accompanied by degrading comments or references to sexually violent and rape scenarios. Users traded non-consensual images “for sexual gratification or male peer bonding, of which degradation and objectification of women is a key component of gender performativity” (Henry & Flynn, 2019, p. 1949). In addition, sexual orientation may partially explain similar victimisation rates for men and women. Powell et al.’s (2019) study found that lesbian, gay, and bisexual (LGB) participants were more likely than heterosexual participants to self-report perpetrating IBSA. Several studies find that LGB individuals are more likely to experience IBSA than heterosexual individuals (Henry et al., 2019, 2020; Lenhart et al., 2016; OeSC, 2017; Powell et al., 2019, 2020; Waldman, 2019). Henry et al. (2019) found that gay and bisexual males are slightly more likely than lesbian and bisexual females to report experiencing IBSA. 
Additionally, an intersectional lens further reveals that LGB women victim-survivors are more likely to report greater impacts on their health and relationships, greater safety concerns, and more subsequent harassment compared with heterosexual women and men (Powell, Scott, & Henry, 2018; Powell et al., 2020). In one of the few papers that consider the experiences of transgender individuals, Powell, Scott, and Henry (2018) found that transgender individuals were more likely than male or female participants to experience IBSA. Yet, they note that fewer quantitative studies have examined transgender, intersex, and non-binary individuals’ experiences of digital abuse, perhaps in part because these communities are small in
number, but also because this field should be critiqued for its “heteroand gender-normativity” and for failing to focus on intersectional issues around gender and sexuality (p. 206). In places where being a gender minority can result in physical harm, transgender individuals may be sexually assaulted, with those assaults recorded and disseminated as IBSA. Lawyer A from Pakistan shared, “I also think that the probability of you being attacked depends on your gender presentation or identity…transgender folks are more likely to be attacked in terms of abusive comments and things like sexual assault…and that is a commonly distributed form of pornography.”
IBSA and Ethnic, Religious, or Cultural Identity

Some studies find that religious, cultural, or ethnic minorities, including immigrants, are more likely to experience IBSA. For example, an Australian study found that the prevalence of IBSA was higher among those who speak a language other than English at home, compared with English speakers (OeSC, 2017). Similarly, Powell et al. (2020) found that 23.7% of Black, Asian, and Minority Ethnic (BAME) respondents in the UK experienced IBSA, compared to 12.5% for non-BAME respondents. A victim-survivor’s ethnicity and/or their religious or cultural background may also result in particularised, and potentially more severe, consequences of IBSA. Activist A from the UK shared that, “[t]he levels of conservatism in their life is a factor, that is, religion, culture, background and ethnic group…” Because of the stigma and shame attached to displays of female sexuality in some communities, perpetrators can effectively wield the threat of sending intimate images to the victim-survivor’s family to deepen control and, where they are intimate partners, prevent their partner from leaving the relationship (Henry et al., in press). Abusive partners may use the victim-survivor’s citizenship status, language barriers, and financial status to create fears of deportation or loss of child custody, thus amplifying the victim-survivor’s dependency on the abuser (Henry et al., in press). Further, IBSA can lead to ostracisation from one’s family or community. Henry et al. (in press) reference a case where a victim-survivor was
disowned by her family because of the IBSA. Likewise, in my consultations, Lawyer A from Pakistan stated that “for religious minorities who are a lot more close knit…especially if you don’t have support within the community because the perpetrator is also from the community, it is much more difficult to call them out. So access to…welfare services then becomes much more important than for someone who doesn’t experience those intersectional issues.” Lawyer A further discussed violence as a consequence, noting that in Khyber Pakhtunkhwa, a video of two women was circulated that was “sexually explicit and they were both unfortunately made target of honour killings…so the consequences were much higher if you were…coming from certain parts of Pakistan. Class intersects with that, but also ethnicity.” Below, I continue to explore how an understanding of people’s experiences at the intersection of identities—specifically, sexual orientation and gender identity as well as ethnic, cultural, or religious background— should influence laws, policies, and justice systems.
Understanding IBSA Laws Through an Intersectional Perspective

Next, I examine the following aspects of IBSA laws: definitions of intimate images, and the policing of sexual and bodily autonomy. While the specific laws vary across jurisdictions, I attempt to draw out the relevant issues revealed through an intersectional analysis, and explore recommendations for reform.
Defining Intimate Images

In some countries, existing laws that criminalise IBSA define intimate images too narrowly, such that they fail to capture diverse perceptions of intimacy. For example, some jurisdictions focus only on depictions of an individual’s genitals, such as India’s Information Technology (“IT”) Act 2000 Section 66E, which defines a private area as “the naked or undergarment clad genitals, pubic area, buttocks or female breast.” This definition fails to address a whole host of situations, such as individuals engaged in sexual acts while clothed, or in a state of undress.
Similarly, in England and Wales, an intimate image is a private sexual photograph or film, with “sexual” defined as showing “all or part of an individual’s exposed genitals or pubic area” or showing something a “reasonable person would consider to be sexual because of its nature” or that “its content, taken as a whole, is such that a reasonable person would consider it to be sexual” (England and Wales, Criminal Justice and Courts Act 2015, Section 33(1), 35(3)). While this is broader than in India, it still excludes images that are non-sexual, but still harmful. For instance, Service Provider A in the UK described:

“We believe this to be a very narrow definition which does little to support many people. We receive many reports from clients from certain ethnic communities where, culturally, images that don’t reach this standard can still put the welfare of a client at risk, e.g., images with bare arms showing or uncovered hair, or just sitting with a non-family member male.”
Indeed, in some Hindu or Muslim communities, women are expected to uphold familial honour by avoiding so-called shameful behaviour, and perceived sexual transgressions can be punished, sometimes with violence (Werbner, 2005). In such communities, images that would otherwise be deemed non-sexual—like a person wearing a bikini—could be perceived as too revealing or inappropriate. For instance, a victim-survivor in Huber’s (2020) study said that an image of someone in a bra could be considered shameful, even leading to ostracisation from one’s community. As one example of severe ramifications, in 2020, two women in Pakistan were murdered by male relatives after a video leaked online depicting them kissing a man on the mouth (Mahbubani, 2020). Further, in some communities, covering one’s hair signals sexual modesty. In Huber’s (2020) study, an individual living in a country where wearing a headscarf is required reported that her ex-boyfriend posted a picture of her with uncovered hair on Facebook. Such acts should be considered IBSA, given the potential harms to victim-survivors. In Henry et al.’s (2019, p. 87) study, one participant shared that in some contexts, “no hijab is naked.” In Australia, the law has explicitly addressed this scenario. The Enhancing Online Safety Act 2018 expands
the definition of intimate images to include images depicting someone without religious or cultural attire that they consistently wear in public. This law is a step forward. However, we need broader definitions that include the types of non-sexual images discussed above, which some might construe as shameful. One example of more flexible legislation comes from England and Wales, which defines a “sexual” image as what “a reasonable person would consider to be sexual” (England and Wales, Criminal Justice and Courts Act 2015, Section 35(3)). Yet, feminist scholars like Kim Lane Scheppele (1991) and Catharine MacKinnon (1989) have denounced the “reasonable person” standard as “gendered to the ground,” with the male perspective often treated as the default. Some judges propose a “totality of circumstances” standard, which considers how one’s intersecting identities, like race or sex, influence the application of the law (Astrada & Astrada, 2020). Laws defining intimate images should move towards a more contextual “totality of circumstances” test, which can better recognise diverse understandings of modesty, intimacy, and shame. Still, they must be written narrowly to avoid being overbroad, potentially “diluting” the laws or infringing on free expression (Henry et al., 2019). Ultimately, perhaps shifting the law’s focus from the sexual character of images to the violation of consent and privacy might simplify the law and grant individuals greater agency to define acceptable representations of themselves in the digital space. Several participants in Henry et al.’s (2019, p. 84) study similarly supported “moving beyond only a sexual element to the definition, and instead focusing on the assumption of privacy.”
Policing Sexual and Bodily Autonomy

In some countries, overbroad laws criminalise free sexual expression and bodily autonomy, with devastating impacts on gender and sexual minorities. First, in several South Asian countries, the distribution of pornography is banned. For instance, India’s IT Act Section 67 criminalises the publication or transmission of “obscene material” electronically. In
516
A. Kolisetty
Bangladesh, the Pornography Control Act (2012) criminalises pornography, including its production, sale, or exhibition. In Pakistan, the Penal Code (1860), Section 292 bans the possession, sale, and distribution of "obscene" representations. However, such overbroad laws can lead to the criminalisation of victim-survivors themselves for sharing intimate images. This may particularly impact individuals who use sexting to be intimate due to cultural or social barriers that make in-person contact impossible, such as LGBTQ+ communities. Such laws criminalise free sexual expression, rather than focusing on the real harm—the violation of consent. Second, laws that police indecency and women's "modesty" are deeply patriarchal. For example, India's Indecent Representation of Women (Prohibition) Act 1986 bans depictions of a woman's figure that are indecent or "likely to deprave, corrupt, or injure the public morality" (Section 2(c)). While this law has been used to criminalise forms of IBSA, it is not gender neutral, perpetuating the assumption that only women can be victims of IBSA, and failing to give recourse to boys, LGBTQ+ people, or gender non-binary people. The law also relies on a morality-laden discourse that tends to shame sexuality, thus contributing to victim blaming in our society. This leads to online spaces being increasingly controlled by the state, with free expression by people of marginalised genders viewed as indecent, vulgar, or worthy of prosecution.
The Judicial Process and Victim-Survivors' Experiences

For victim-survivors who seek justice, a significant barrier is the retraumatisation of the court process. From lack of privacy to rigid sentencing frameworks focused on criminalisation instead of justice, victim-survivors face a range of issues compounded by their individual identities. Below, I review three parts of the judicial process and make recommendations from an intersectional lens: reporting IBSA, privacy during court proceedings, and victim-survivor-centred justice. I conclude with a look at remedies and reforms beyond the law.
25 Gaps in the Law on Image-Based Sexual …
517
Reporting IBSA

Victim-survivors of IBSA may hesitate to approach the police for fear of being shamed or dismissed. Research has shown that police have failed to take TFGV cases as seriously as physical abuse, telling victim-survivors to simply change their number, for instance (Henry et al., 2019, 2020). Further, practitioners and victim-survivors describe police lacking adequate understanding of the law and technology; lacking, or citing a lack of, financial and technical resources to investigate; blaming victim-survivors; and encountering evidentiary challenges, including identifying anonymous perpetrators (Henry et al., 2018, 2019; McGlynn et al., 2019). Bond and Tyrrell (2018) found that the majority of police in the UK lacked knowledge and confidence in investigating IBSA cases, and that few received training. My consultations revealed similar concerns. Activist C from the Philippines shared that "[t]here have been times where survivors were blamed by police for sharing their own intimate images online." Indeed, Service Provider A in the UK reported that, in "2019–20, 54% of our clients who reported to the police, reported to us a negative response." These harms are amplified for victim-survivors from marginalised groups. In Australia, culturally and linguistically diverse (CALD) victim-survivors described being hesitant to report the crime to the police (Henry et al., 2018). Those from certain cultural backgrounds may fear retribution from the perpetrator, such as sending images to family (Henry et al., 2019). Language barriers, insecure migration status, social isolation, and complex legal processes may make immigrant victim-survivors more dependent on their abusive partners, and less comfortable approaching the police (Henry et al., in press).
Likewise, Activist A from the UK shared that "there is a policy which says migrants don't have recourse to public funds, so if you are facing IBSA within a domestic violence situation, why will you go to the police?" Further, in Australia, where non-consensually sharing an image of a woman with her head uncovered can constitute a criminal offence, one victim-survivor in Henry et al.'s (in press, p. 16) study disclosed that police did not consider this to be IBSA and lacked the "requisite cultural understanding" of the consequences of such acts.
LGBTQ+ individuals also report negative experiences with law enforcement and a reluctance to report IBSA to the police (Henry et al., 2018, 2019). In a US-based survey, about 58% of transgender individuals who interacted with law enforcement experienced mistreatment, including harassment, misgendering, or even physical or sexual assault, and 57% said they would feel uncomfortable seeking help from the police (James et al., 2016). In the UK, Activist A shared that for "people from the trans community or queer community…you have so many layers or reasons to choose to not engage with legal processes because the consequences for you could be quite real." This suggests that the limitations of reporting IBSA to police require redress, including through educational campaigns, training and resources for police and justice actors investigating IBSA, and alternatives to criminalisation (Henry et al., 2019).
Privacy for Victim-Survivors

The failure of court systems to ensure privacy and anonymity in IBSA cases is a major barrier to access to justice, preventing victim-survivors from reporting IBSA to the police (Citron & Franks, 2014; McGlynn et al., 2019, 2020). In the US, for example, there is a strong tradition in favour of litigants in civil suits using their real names, and federal courts generally require judicial consent following a balancing test before a plaintiff can proceed under a pseudonym. Furthermore, in criminal proceedings, most US states do not guarantee that victim-survivors' identifying information will be kept confidential, including on court transcripts (Brown, 2018). Lack of anonymity can impact survivors' mental health, employment prospects, and personal relationships (Law Commission, 2021). This general failure is compounded for marginalised communities. For example, LGBTQ+ victim-survivors might face additional risks if their identities and the nature of the abuse are made public, especially given the high rates of harassment and physical attack they face. In the UK, the Law Commission noted in its 2021 report that "the lack of anonymity is especially devastating to LGBTQ+ victims who may be 'outed'"
due to the proceedings (Law Commission, 2021, p. 376). Likewise, ethnic, religious, or cultural minorities may also face expulsion from their families or communities if the nature of the harm becomes widely known—especially if the perpetrator is from the same community. To alleviate these harms, uniform standards should be developed to enable victim-survivors of IBSA to proceed under a pseudonym. Additionally, courts should limit who has access to sensitive evidence and provide the option of private ("in camera") proceedings for IBSA. Statutes such as the Sexual Offences (Amendment) Act 1992 in England and Wales, which prohibit public disclosure of the identity of victim-survivors of certain sexual offences, should be extended to apply to IBSA—which has not been considered a sexual offence. There have been promising developments in the UK, where the Law Commission, drawing from consultations, recommended in 2021 that lifetime anonymity, restrictions on cross-examination, and special measures provisions at trial (e.g. giving evidence behind a screen or in private) be made available for victim-survivors of IBSA (Law Commission, 2021). These laws should also be applied to website intermediaries, which often escape liability for user-generated content (Agarwal, 2018).
Victim-Survivor-Centred Justice for IBSA

An intersectional approach to victim-survivor-centred justice for IBSA recognises that a "one-size-fits-all" approach does not work, and that justice must be individualised. This requires a wide range of options, from criminal penalties to non-criminal remedies to acknowledgements of the harm. Currently, courts often fail to acknowledge IBSA's harms. For example, in the US court case People v. Barber, while the court states that naked photographs were posted on Twitter and sent to the victim-survivor's employer, there is no description of any impact, whether loss of employment or emotional distress. American judges routinely minimise the impact of IBSA, using language like "mere," "simply," or "only," contributing to the broader trivialisation of gender-based violence (Vora, 2017, pp. 245–248). On the other hand, simply increasing criminal
penalties may not offer the types of justice victim-survivors desire. For example, McGlynn et al. (2019) found that some victim-survivors sought forms of punishment other than imprisonment, and Dodge (2020) notes that alternatives to criminal justice are particularly important for youth, who might be “misinformed regarding the moral/legal lines between consensual and non-consensual image sharing.” So, what would an intersectional approach to victim-survivor-centred justice for IBSA look like? First, judges should contribute to healing and accountability by acknowledging the traumatic effects of IBSA, including how the harm was distinctly experienced based on one’s identities. For example, in the Canadian case of R v. MR (2017), where intimate images were shared only with the victim’s family members but not posted online, the judicial opinion still stated that the IBSA had an “immeasurable” impact on the victim and her family, recognising that the victim-survivor came from a religious family and that despite the limited dissemination, significant harm was done (Dodge, 2019, pp. 133–134). Second, judges should incorporate a nuanced, intersectional understanding of the consequences of IBSA into the sentencing stage. Dodge (2019) suggests that judges can assess factors influencing the degree of harm, such as whether the sharing of intimate images occurred in the context of domestic violence. Judges can further review additional impacts based on the victim-survivors’ identities (e.g. sexual orientation, religion), and whether the IBSA caused offline consequences (e.g. loss of employment, mental health impacts). Citron (2019, p. 1946) argues that there should be “penalty enhancements for bias-motivated sexual privacy invasions,” which would recognise the stigma, humiliation, and collateral impacts on gender, ethnic, or sexual minorities. 
Canada's Criminal Code, §718.2, for instance, lays out aggravating circumstances such as evidence that "the offence was motivated by bias, prejudice or hate" based on factors including religion, sexual orientation, or gender identity or expression; evidence that the victim-survivor was underage; or that there was an abuse of trust or authority. The court may also consider whether the offence "had a significant impact on the victim, considering their age and other personal circumstances, including their health and financial situation" (Canada Criminal Code, R.S.C. 1985, c. C-46, § 718.2(a)(iii.1)).
Additionally, judicial systems should provide civil remedies, including compensatory and punitive damages, which can be sought through tort actions for the invasion of privacy and the intentional infliction of emotional distress. Tort actions can provide a more individualised determination of the harms, offering tailored damages. Victim-survivors should have access to free legal representation for tort actions, given that most cannot afford to hire private attorneys to file suit (Citron & Franks, 2014; Henry et al., 2019, pp. 103–105). Developing new legal remedies could also better address IBSA’s impact on marginalised communities. For example, Citron (2015) suggests that cyber harassment, including IBSA, be considered a civil rights violation “if it interferes with people’s crucial life opportunities—the ability to work and speak—because of their membership in a protected group” such as religion or national origin. Further, victim-survivors should be provided with support in removing their intimate images and videos from the internet. For instance, victim-survivors can use copyright laws if they have taken an image themselves, and therefore own it (Vining, 2019). In the US, individuals can inexpensively send DMCA takedown notices to websites, providing greater agency and control than the state-driven criminal justice process (Farries & Sturm, 2019). However, this also requires the victim-survivor to constantly monitor websites for intimate images, which can become retraumatising. In some jurisdictions, like the US, one can also utilise injunctions in tort actions, or restraining orders, to require perpetrators to take down images. Beyond the court system, other forms of healing and justice should be available, such as acknowledgement of the harm, apologies, or mechanisms enabling offenders to understand their wrongdoing. 
One victim-survivor in McGlynn et al.'s (2019) study spoke about the desire for a mediation where she could speak directly to the perpetrator about the abuse and its impact on her. Finally, institutions should be developed to facilitate access to non-criminal remedies. For example, in Canada, Nova Scotia's CyberScan unit can send a warning letter; use mediation, negotiation, or restorative practices; or help victim-survivors obtain a cyber-protection order prohibiting someone from sharing an intimate image, ordering them
to take one down, or securing damages (Government of Nova Scotia). In Australia, the eSafety Commissioner's online portal enables victim-survivors to report IBSA digitally. The Commissioner can facilitate image removal, issue a formal warning or infringement notice, or seek an injunction or civil penalty order against the perpetrator or the platform or service provider (Henry et al., 2019). This can more directly address victim-survivors' primary need—to have harmful images removed—while providing options for accountability and reparations for those who feel unsafe or uncomfortable pursuing a legal route.
Beyond the Law

The state should also meet victim-survivors' needs through adequate non-legal support, including online and phone information, psychosocial support, and counselling that is "accessible and relevant to the diversity of victims" (Henry et al., 2019). Activist B from the Philippines explained that some victim-survivors are not keen to go the legal route, but that they "just want someone to talk to, so they want [to] address the psychosocial aspect of it…And then majority of them…don't want to file a legal case." Moreover, educational campaigns on IBSA, including in schools and workplaces, are crucial; these should focus on consent, healthy relationships, and bystander intervention, challenge the gender norms that enable IBSA, and place the onus of prevention on perpetrators rather than victim-survivors (Henry et al., 2019). Additionally, more qualitative research is required to better understand the experiences of victim-survivors who sit at multiple layers of marginality. Finally, Activist C from the Philippines shared:

Much data has to be uncovered about sexual orientation, class, disability, or profession. To dissect these particular details is to shed light on the specific experience of such marginalized groups, and we need this in crafting inclusive policies and working towards safe online spaces and platforms.
Governments should capture and disaggregate data on IBSA and on victim-survivors' experiences of seeking justice, in order to shape more effective policy responses.
Conclusion

The failure of laws, policies, and justice systems to respond to the wide spectrum of individual experiences with IBSA has caused many victim-survivors to be retraumatised by the very institutions intended to protect them, or unable to access remedies for the harms they experience. It is essential that laws and policies be constructed after thorough consultations with victim-survivors who bring a diversity of identities and perspectives, and following the application of an intersectional analysis. This would ensure that legal definitions are written in inclusive ways, while avoiding harm to marginalised communities. Rather than a one-size-fits-all approach, governments should move towards an ecosystem of legal, social, and institutional responses that address different aspects of the victim-survivor experience and allow them to craft individualised pathways to justice. Finally, we should think beyond existing criminal and civil legal frameworks that address IBSA (see e.g. Henry et al., 2020). Exploring such alternatives might open up new ways to centre sexual expression, autonomy, privacy, and consent, while better highlighting the harms experienced by victim-survivors situated at multiple sites of marginality.

Acknowledgements I would like to thank Nishma Jethwa, Co-Director of End Cyber Abuse, for substantial research, writing, and intellectual work that significantly shaped this chapter's arguments. I would also like to thank Esha Meher for her research and writing assistance, and Ankur Asthana for doing the yeoman's work of helping me finalise this chapter.
References

Agarwal, D. (2018, June 9). Anonymity for rape victims: Law must strike delicate balance on holding internet intermediaries liable. Firstpost. https://www.firstpost.com/india/anonymity-for-rape-victims-law-must-strike-delicate-balance-on-holding-internet-intermediaries-liable-4503863.html.
Association for Women's Rights in Development (AWID). (2004). Intersectionality: A tool for gender and economic justice. Toronto, Canada: AWID. https://www.awid.org/sites/default/files/atoms/files/intersectionality_a_tool_for_gender_and_economic_justice.pdf.
Astrada, S., & Astrada, M. (2020, May 11). The enduring problem of the race-blind reasonable person. ACS Expert Forum. https://www.acslaw.org/expertforum/the-enduring-problem-of-the-race-blind-reasonable-person/.
Bond, E., & Tyrrell, K. (2018). Understanding revenge pornography: A national survey of police officers and staff in England and Wales. Journal of Interpersonal Violence, 36(5–6), 2166–2181.
Brown, E. (2018). Protecting does and outing mobsters: Recalibrating anonymity standards in revenge porn proceedings. Duke Journal of Gender Law & Policy, 25(2), 155–190.
Caiola, C., Docherty, S., Relf, M., & Barroso, J. (2014). Using an intersectional approach to study the impact of social determinants of health for African-American mothers living with HIV. Advances in Nursing Science, 37(4), 287–298.
Citron, D. K. (2015). Addressing cyber harassment: An overview of hate crimes in cyberspace. Case Western Reserve Journal of Law, Technology & the Internet, 6, 1–12.
Citron, D. K. (2019). Sexual privacy. Yale Law Journal, 128(7), 1870–1960.
Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 49(2), 345–391.
Crenshaw, K. (1989). Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. University of Chicago Legal Forum, 1(8), 139–167.
Dodge, A. (2019). Nudes are forever: Judicial interpretations of digital technology's impact on "revenge porn." Canadian Journal of Law and Society, 34(1), 121–143.
Dodge, A. (2020). Trading nudes like hockey cards: Exploring the diversity of 'revenge porn' cases responded to in law. Social and Legal Studies. https://doi.org/10.1177/0964663920935155.
Eaton, A. A., Jacobs, H., & Ruvalcaba, Y. (2017). Nationwide online study of nonconsensual porn victimization and perpetration: A summary report. FL: Cyber Civil Rights Initiative. https://www.cybercivilrights.org/wp-content/uploads/2017/06/CCRI-2017-Research-Report.pdf.
End Cyber Abuse. (2020a). Image-based sexual abuse: The law in Bangladesh. http://endcyberabuse.org/country-factsheets/.
End Cyber Abuse. (2020b). Image-based sexual abuse: The law in India. http://endcyberabuse.org/country-factsheets/.
Farries, E., & Sturm, T. (2019). Feminist legal geographies of intimate-image sexual abuse: Using copyright logic to combat the unauthorized distribution of celebrity intimate images in cyberspaces. EPA: Economy and Space, 51(5), 1145–1165.
Flynn, A., & Henry, N. (2019a). Image-based sexual abuse. In Oxford research encyclopedia of criminology and criminal justice. Oxford: Oxford University Press.
Flynn, A., & Henry, N. (2019b). Image-based sexual abuse: An Australian reflection. Women and Criminal Justice. https://doi.org/10.1080/08974454.2019.1646190.
Gámez-Guadix, M., Almendros, C., Borrajo, E., & Calvete, E. (2015). Prevalence and association of sexting and online sexual victimization among Spanish adults. Sexuality Research and Social Policy, 12(2), 145–154.
Government of Nova Scotia. CyberScan. https://novascotia.ca/cyberscan/.
Hankivsky, O., & Cormier, R. (2010). Intersectionality and public policy: Some lessons from existing models. Political Research Quarterly, 64(1), 217–229.
Hankivsky, O., Grace, D., Hunting, G., Ferlatte, O., Clark, N., Fridkin, A., Giesbrecht, M., Rudrum, S., & Laviolette, T. (2012). Intersectionality-based policy analysis. In O. Hankivsky (Ed.), An intersectionality-based policy analysis framework (pp. 33–45). Vancouver, BC: Institute for Intersectionality Research and Policy, Simon Fraser University. https://data2.unhcr.org/en/documents/download/46176.
Henry, N., & Flynn, A. (2019). Image-based sexual abuse: Online distribution channels and illicit communities of support. Violence Against Women. https://doi.org/10.1177/1077801219863881.
Henry, N., Flynn, A., & Powell, A. (2018). Policing image-based sexual abuse: Stakeholder perspectives. Police Practice and Research, 19(6), 565–581.
Henry, N., Flynn, A., & Powell, A. (2019). Responding to 'revenge pornography': Prevalence, nature and impacts. Canberra: Australian Institute of Criminology.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. (2020). Image-based sexual abuse and consequences of non-consensual nude or sexual imagery. London and New York: Routledge.
Henry, N., Vasil, S., Flynn, A., Kellard, K., & Mortreux, C. (2021). Technology-facilitated domestic violence against immigrant and refugee women: A qualitative study. Journal of Interpersonal Violence. https://doi.org/10.1177/08862605211001465.
Huber, A. R. (2020). Women, image based sexual abuse and the pursuit of justice (Publication No. 10169573). Doctoral dissertation, Liverpool John Moores University. LJMU Repository. https://researchonline.ljmu.ac.uk/id/eprint/12955.
James, S. E., Herman, J. L., Rankin, S., Keisling, M., Mottet, L., & Anafi, M. (2016). The report of the 2015 U.S. transgender survey. Washington, DC: National Center for Transgender Equality.
Law Commission. (2021). Intimate image abuse: A consultation paper. London: Law Commission. https://www.lawcom.gov.uk/project/taking-making-and-sharing-intimate-images-without-consent/.
Lenhart, A., Ybarra, M., & Price-Feeney, M. (2016). Online harassment, digital abuse and cyberstalking in America. New York, NY: Data and Society. https://www.datasociety.net/pubs/oh/Online_Harassment_2016.pdf.
MacKinnon, C. A. (1989). Toward a feminist theory of the state. Cambridge: Harvard University Press.
Mahbubani, R. (2020, May 18). Two Pakistani women were shot dead in an apparent 'honor killing' after a leaked video showed them kissing a man. Insider. https://www.insider.com/two-pakistani-women-shot-dead-honor-killing-leaked-video-kissing-2020-5.
McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Powell, A., & Flynn, A. (2020). 'It's torture for the soul': The harms of image-based sexual abuse. Social & Legal Studies. https://doi.org/10.1177/0964663920947791.
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37(3), 534–561.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond 'revenge porn': The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46.
McGlynn, C., Rackley, E., Johnson, K., Henry, N., Flynn, A., Powell, A., Gavey, N., & Scott, A. J. (2019). Shattering lives and myths: A report on image-based sexual abuse. Durham University; University of Kent. https://dro.dur.ac.uk/28683/3/28683.pdf.
Office of the eSafety Commissioner (OeSC). (2017). Image-based abuse national survey: Summary report. Australia: Office of the eSafety Commissioner. https://www.esafety.gov.au/sites/default/files/2019-07/Image-based-abuse-national-survey-summary-report-2017.pdf.
Powell, A., & Flynn, A. (2020, June 2). Reports of 'revenge porn' skyrocketed during lockdown, we must stop blaming victims for it. The Conversation. https://theconversation.com/reports-of-revenge-porn-skyrocketed-during-lockdown-we-must-stop-blaming-victims-for-it-139659.
Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse. In W. S. DeKeseredy & M. Dragiewicz (Eds.), Handbook of critical criminology (pp. 305–315). New York: Routledge.
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian adults. Computers in Human Behavior, 92, 393–402.
Powell, A., Scott, A. J., Flynn, A., & Henry, N. (2020). Image-based sexual abuse: An international study of victims and perpetrators: A summary report. Melbourne, Australia: RMIT University.
Powell, A., Scott, A. J., & Henry, N. (2018). Digital harassment and abuse: Experiences of sexuality and gender minority adults. European Journal of Criminology, 17(2), 199–223.
Reed, L. A., Tolman, R. M., & Ward, L. M. (2016). Snooping and sexting: Digital media as a context for dating aggression and abuse among college students. Violence Against Women, 22(13), 1–21.
Scheppele, K. L. (1991). The reasonable woman. The Responsive Community, Rights, and Responsibilities, 1(4), 36–47.
Uhl, C. A., Rhyner, K. J., Terrance, C. A., & Lugo, N. R. (2018). An examination of nonconsensual pornography websites. Feminism and Psychology, 28(1), 50–68.
Vining, A. (2019). No means no: An argument for the expansion of rape shield laws to cases of nonconsensual pornography. William and Mary Journal of Race, Gender, and Social Justice, 25(2), 303–325.
Vora, A. (2017). Into the shadows: Examining judicial language in revenge porn cases. Georgetown Journal of Gender and the Law, 18(1), 229–250.
Waldman, A. E. (2019). Law, privacy, and online dating: "Revenge porn" in gay online communities. Law & Social Inquiry, 44(4), 987–1018.
Werbner, P. (2005). Honor, shame and the politics of sexual embodiment among South Asian Muslims in Britain and beyond: An analysis of debates in the public sphere. International Social Science Review, 6(1), 25–47.
26 Gender-Based Abuse Online: An Assessment of Law, Policy and Reform in England and Wales

Kim Barker and Olga Jurasz
K. Barker (B) · O. Jurasz
Open University Law School, Open University, Walton Hall, Milton Keynes, UK
e-mail: [email protected]
O. Jurasz
e-mail: [email protected]

Introduction

Gender-based abuse online (GBAO) is a common problem, affecting women and girls worldwide daily. While there has been a considerable increase in public awareness of GBAO, and its harmful effects are beginning to be recognised by the general public and law and policy makers alike, efforts directed at tackling this modern issue have, to date, brought limited change in terms of combatting such behaviours and bringing effective remedies to victims. Furthermore, there is a lack of attention paid to the issue of GBAO by platform providers
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_26
(or, to a limited extent only—see, for example, Women’s Aid, 2017), and as a result, there is a lack of deterrence. The absence of specific legal protections from GBAO and limited recognition of online harms arising from it are problematic and must also be addressed. This chapter applies a gendered analysis of the law and policy reform on online harms in England and Wales. In doing so, it offers a critique of the current proposals and advocates for a more holistic and gender-inclusive approach to tackling GBAO and addressing online harms. Throughout this chapter, the term GBAO is used to describe abusive behaviours online which are gender-based in nature and which target women disproportionately because they are women or identify as women. Such abusive acts may include (although are not limited to) online misogyny, sexist abuse, text and/or image-based sexual abuse and online sexual/misogynistic harassment.
Gender-Based and Online Abuse—A Policy Challenge?

Little attention has been paid specifically to the phenomenon of GBAO. While there have been some comparatively rapid responses to specific forms of gender-based abuse (GBA) in online contexts, notably image-based sexual abuse (IBSA) (Henry et al., 2020), the sheer scale of what is now a widespread phenomenon has caught policymakers, regulators and platforms (technology companies) off-guard. Even more evident than the lack of consideration of GBA in the Online Harms White Paper (OHWP) (DCMS, 2019) is the broader failure of lawmakers to recognise, first, the harms of GBA (and of online violence against women [OVAW]) and, second, the damage to participatory rights that emanates from unchecked and unchallenged online abuse motivated by gender (Barker & Jurasz, 2019a, 2020a). The phenomenon of gender-based abuse is increasingly evident. For instance, in 2017, Amnesty International's admittedly small-scale study (n = 4009, across 8 countries) identified that nearly a quarter of women aged between 18 and 55 had experienced online abuse at least once (Amnesty International, 2017). Other studies
highlight that these findings are not isolated. For instance, in 2020, Facebook was given the dubious honour of being rated top of the list of platforms on which GBA occurs (Plan International, 2020). Other evidence suggests that women worldwide are encountering online abuse at an alarmingly frequent rate, with as many as 46% of women receiving sexist and/or misogynistic messages and comments (Barker & Jurasz, 2019c, p. 97; Statista, 2017). The prevalence of online abuse with a gender connotation increases sharply when the targets of the abuse are prominent, politically active women. Such results are perhaps most evident in the example of Diane Abbott, the first black woman elected to the UK Parliament, who received more than 8000 abusive tweets and messages in the first 6 months of 2017 alone (Amnesty International, 2018; Barker & Jurasz, 2019c, p. 97). That is, sadly, not an isolated example, with other high-profile women also reporting extreme levels of online gendered abuse (Barker & Jurasz, 2021c). Other women politicians in England have been vocal about their experiences, including Jess Phillips MP, who has recounted the 'living room' effect of online gendered abuse: abuse which is at its heart deeply personal and which can reach you when you least expect it, invading beyond your professional and work contexts (Phillips, 2017). The impact, and harm, of GBAO is significant (Barker & Jurasz, 2021c). In one of the few judicial considerations of online GBAO in England, the trial judge was given a first-hand account of the impact on victims of such abuse, and an insight into the steps victims feel compelled to take to protect themselves and their families where the law fails to do so.
In the case of R v Nimmo & Sorley (2014), the sentencing remarks focused on the impact of the abuse on campaigner Caroline Criado-Perez, who had relocated, employed private security, and all but erased her digital footprint to protect herself after gendered rape and death threats were sent to her via Twitter (R v Nimmo & Sorley, 2014). Overall, the severe impact on victims is a leading factor in the harm caused by GBAO. Given such harms, the announcement in 2018 of the online safety programme of law reform prompted cautious optimism that GBAO and OVAW would be placed at the heart of the England and Wales policy agenda.
K. Barker and O. Jurasz
Online Harms: A Policy Landmark

The evolving nature of, and growing awareness of, internet harms led, through high-profile examples, to the British Conservative Party’s announcement in May 2017, in its General Election manifesto, of an intention to introduce a ‘Digital Charter’ (Evensted, 2017). This election pledge included a promise to crack down on social media companies in an attempt to ‘make Britain the safest place in the world to be online’ (Martin, 2017). Following the election, this promise duly made its way into the new government’s programme of work, and the Department for Digital, Culture, Media and Sport (DCMS) published the Digital Charter in April 2018. At the time of its publication, the Digital Charter embodied a number of core elements; specifically, it included the principle that ‘the same rights that people have offline must be protected online’ (DCMS, 2018). In 2018, such a principle held great promise, especially for victims of GBAO. The policy initiative contained within the Digital Charter notionally put the concept of ‘online safety’ at the forefront of internet regulation in the UK. It included recognition of the need for some form of legislative intervention to address online abuses, online harms and behaviours in a digital context that were not captured by the existing legal framework. To advance the online safety agenda and tackle these pernicious issues, the Government published its Internet Safety Strategy in 2017 (DCMS, 2017), which in turn paved the way for the OHWP (DCMS, 2019). The Internet Safety Strategy (ISS) Green Paper took a broad approach to the issue of online safety and covered a number of strands aimed at developing so-called safe online environments. Despite the wide consultation that fed into the ISS, only scant attention was paid to issues of online misogyny and GBAO (Barker & Jurasz, 2019a).
In the 58-page policy document, the issue of online misogyny is addressed in less than a page, with GBAO appearing in only two short paragraphs (DCMS, 2019). While credit should in some small part be given for the policy recognition of misogyny and GBAO, this passing mention remains a significant policy disappointment given the scale, prevalence and impact of such harms (Barker & Jurasz, 2020a; McGlynn et al., 2020).
The era of online harms, and the signal from the government that internet regulation was to move front and centre of the law reform agenda, has been widely applauded, especially by the soon-to-be regulator, Ofcom (Lomas, 2020). It was similarly regarded as a watershed moment by charities (Barnardo’s, 2020) and those supporting victims of online abuse, including bullying, self-harm incitement and IBSA, and most particularly by those involved in dealing with child sexual abuse material (CSAM) (Nair, 2020). While few would disagree with the importance of preventing and tackling CSAM, the focus of the law reform agenda in the context of online harms and online ‘safety’ has not necessarily extended beyond this overwhelming focus on youth protection. Consequently, the ‘harms’ that led to the OHWP in 2019 were predominantly those affecting children in particular. In part, this agenda has been championed as a result of the fallout from the Molly Russell tragedy (BBC News, 2019), in which the teenager took her own life after exposure to extreme and persistent suicide and self-harm content on social media sites, particularly Instagram. Following her death, her father campaigned tirelessly, with support from the NSPCC, to advocate for greater protection for children and teenagers confronted with self-harm encouragement and incitement, and suicide content, online. The Russell family welcomed the OHWP upon its unveiling, highlighting the importance of tackling social media regulation (Leigh Day, 2020). Such campaigns have coincided with calls for increased regulation of, and responsibility from, platforms (Barker & Jurasz, 2019a; Gorwa, 2019; Helberger et al., 2018).
However, when it was finally unveiled, much later than promised (BBC News, 2020), the OHWP failed to represent a holistic policy attempt to address the full spectrum of online harms, a divergence from the intent signalled in the ISS in 2017. In particular, some identified internet safety issues, such as online misogyny and GBAO, are discussed in the Internet Safety Strategy but have not migrated to the OHWP. The OHWP outlines the combined approach to regulating the internet and digital interactions of both the Home Office and the Department for Digital, Culture, Media and Sport. Both government departments make it clear in the OHWP that while online safety is ‘a
shared responsibility’ (DCMS, 2019, p. 5) across users, regulators and technology companies, there is a pressing need to address illegal, hostile and harmful behaviours and content. In attempting to improve online safety and tackle online harms, the UK Government’s vision anticipates that regulation will make possible ‘[a]n online environment where companies take effective steps to keep their users safe, and where criminal, terrorist and hostile foreign state activity is not left to contaminate the space’ (DCMS, 2019, p. 6). Nevertheless, it seems that online criminal activity in the form of gender-based abuse falls outside this vision. To achieve its ambition, the Government anticipates the OHWP leading to legislation that can tackle harms, only 23 of which are specifically listed as falling within the scope of the reform plan (DCMS, 2019, p. 31). The ambition of the OHWP is curtailed by its own limitations: not only is GBAO not listed, but all forms of OVAW are also notably absent from the White Paper. This is perplexing given the proclaimed focus on online safety, since the harms regarded as falling within scope fit into only three categories: (1) terrorist or extreme content; (2) CSAM and pornography; or (3) sales of illegal goods and services. The exclusions that are somewhat more comprehensible are those for which specific legal regulation already exists, such as data protection breaches, which are covered by the Data Protection Act (2018). That said, GBAO and OVAW are not specifically catered for in other legislation in England and Wales. There is, similarly, at present no provision for capturing gender within hate crime (Barker & Jurasz, 2019b, 2021a, 2021c), so GBAO is not captured by this potential route either, leaving victims as unprotected online as they are offline, and suggesting that the core principle embodied within the ISS has failed even before draft online harms legislation has been tabled.
Above all else, the overlooking of GBAO within the OHWP reflects a policy agenda which claims to be ambitious and which seeks to foster a ‘global consensus’ (DCMS, 2019, p. 4) for tackling online safety, yet fails to be progressive by omitting gender dimensions (Barker & Jurasz, 2019a, p. 3). The omission of GBAO from this most ambitious of policy reform agendas for online safety is representative of a broader lack of policy cohesion across different policy areas in England and Wales. Cases such as Nimmo & Sorley serve as examples of why
GBAO should have been placed within the remit of the OHWP, and yet it has not been. This lack of understanding is matched in turn by the lack of attention paid to new and contemporary forms of gendered abuse, including its online forms. Omitting GBAO from the OHWP is therefore a substantial weakness, and a failure.
Failures of the Online Harms Approach

The British Government’s proposals for tackling online harms are symptomatic of policy fragmentation and, to a degree, short-sightedness when it comes to assessing the realities of online abuse and the wide spectrum of harms arising from it. In relation to text-based abuse alone, for instance, a diverse range of harms (originating online but taking effect both on- and offline) can be identified, including social, reputational, intersectional and democratic harms (Barker & Jurasz, 2021c, pp. 256–258). In contrast, the online harms captured in the OHWP are presented in a restrictive manner, with the key emphasis falling on physical harms to children and national security, as well as harms arising from IBSA. Although IBSA is regulated by existing legislation (s33 Criminal Justice & Courts Act, 2015), textual forms of online abusive communication, especially in the form of tweets, are not currently regulated or punished in an equivalent manner. Both textual and image-based forms of online abuse can take gendered forms and can lead to online harms, yet only the sharing of sexual imagery is covered in the OHWP (DCMS, 2019, p. 20). Furthermore, while harms associated with ‘cyberbullying’ (DCMS, 2019, p. 75) and ‘online anonymous abuse’ (DCMS, 2019, p. 17) are referred to in the document, the gender aspect of such abusive behaviours, as well as their consequences, is omitted from the analysis. GBAO encompasses a much wider range of abusive behaviours than the gender-based harassment or cyberbullying referred to in the document (Barker & Jurasz, 2019b, 2020b); a passing mention of these two forms of abuse therefore does not come close to encompassing the diverse typology of GBAO. As such, the OHWP creates a very selective image of online harms and of the proposed legal architecture within which these
harms are to be situated. This selectiveness is reflective of the selective visibility, and recognition, of victims perpetuated by the omission of GBAO.
Fragmentation

As we argue elsewhere (Barker & Jurasz, 2020b), fragmentation can be observed in the context of digital reforms on several levels, and the OHWP regrettably follows suit. Fragmentation in policy making and law reform always results in an incomplete picture of the problem (here, the omission of online harms arising from GBAO) and of the solutions to it. As a result, proposed avenues for reform lack conceptual cohesion and a comprehensive approach towards establishing responsibility for tackling harmful online content, something we refer to as ‘regulation by omission’ (Barker & Jurasz, 2020b). In particular, this fragmented approach towards tackling GBAO is characterised by the gender-platforms duality, whereby a gender perspective is regularly omitted from digital reforms, while the issue of the responsibility of platform providers and other key actors in the digital sphere is excluded from reform proposals concerning gender equality and the tackling of violence against women and girls. This duality is also reflected in the OHWP, a document which weighs in heavily on the matter of internet regulation and the responsibilities of companies and platform providers, but adopts a narrow and gender-exclusive approach to the societal phenomenon of online abuse which it sets out to tackle. Despite promoting a ‘shared responsibility’ narrative within the OHWP, the Government does not in fact extend this vision to tackling one of the key societal problems: violence against women and girls, which now also happens online. As such, while various actors (platform providers, companies, charities, civil society organisations) are called upon or otherwise tasked with combatting harmful online behaviours, GBAO is seen as neither fitting within that category nor worthy of regulatory efforts and attention.
In fact, throughout the document, ‘gender’ is mentioned only once, and even then only in relation to the sexual exploitation of a young person (DCMS, 2019, p. 13). Furthermore,
the impact of online abuse on women is emphasised only in relation to the online abuse of public figures, especially journalists and politicians (DCMS, 2019, pp. 24–25), and to harassment (DCMS, 2019, pp. 69–70). On the face of it, that appears to be inclusive of the broader category of GBAO. However, given the specific legal characterisation of, and threshold for, the offence of harassment under the law of England and Wales, it is unlikely to cover the range of behaviours which can be described as GBAO or OVAW (Barker & Jurasz, 2019b, pp. xiii–xiv).
Online Harms and Violence Against Women and Girls

The specific reference to the online abuse of female public figures contrasts with the glaring oversight of not capturing the ‘everyday’ GBAO that happens to ‘ordinary’ women. While the online abuse of prominent women is a serious problem, and one which, as we argue elsewhere, needs addressing (Barker & Jurasz, 2019c, 2020a), such a categorisation of online harms creates a false hierarchy whereby GBAO suffered by prominent figures is given disproportionate policy and legislative attention relative to that afforded to ‘ordinary’ women subjected to such abuse. This approach is reminiscent of the much deeper problem of perceptions of online violence against women (Barker & Jurasz, 2020b) and its impact on the legal regulation of such violence. Despite the British Government’s commitment to tackling violence against women and girls (VAWG) in all its forms, including online abuse (HM Government, 2016, p. 26) and cyber-enabled VAWG (HM Government, 2016, p. 51), the OHWP does little to bring this agenda forward. In fact, the OHWP represents a missed opportunity to meaningfully address GBAO and the online harms arising from it. The failure to prioritise GBAO also illustrates the reluctance of policy and law makers to combat everyday forms of VAWG. VAWG has become normalised within society, and so has GBAO (Barker & Jurasz, 2019b, p. 28). Consequently, it is only when such forms of GBAO reach an alarmingly high threshold, whether in scale or in tragic consequences such as suicide or death, that public and social concern is expressed, often accompanied by enthusiastic glossy campaigns that are typically
short lived (Barker & Jurasz, 2021a), and by calls for legal action against VAWG (Barker & Jurasz, 2019b, 2020a; Marlborough, 2020). To an extent, this is reflected in the narrative presented in the OHWP which, on the whole, silences the impact of GBAO on women and girls, especially where those targeted are not public figures or where sexual imagery is not involved. By extension, such a presentation of online harms paints an incomplete picture of the impact of GBAO on women. It casts GBAO as a relatively rare occurrence and as a type of violence that is not sufficiently ‘harmful’ to regulate or punish. This, in turn, reinforces the false duality of internet safety and women’s safety/freedom from violence as spheres exclusive of one another: distinct worlds which do not intersect and which do not mirror the social relations and inequalities of the offline world. Consequently, the ‘private/public divide’ plays out in law: the world of VAW (albeit online) is pushed towards the confines of the private sphere, to the margins of legal regulation, where, paradoxically, technologically-facilitated VAW can be punished when considered in the context of domestic abuse (Barker & Jurasz, 2020b; Buturuga v. Romania [2020]; Ministry of Justice, 2020), and away from the progressive, public sphere of the internet and online spaces, which prioritises regulation to ensure the rights and freedoms (albeit not freedom from GBV) of its users. Finally, even on a semantic level, the OHWP is indicative of the relegation of GBAO, and the harms arising from it, to the boundaries of legal regulation. In the document, misogynistic and sexist abuse targeting ‘ordinary’ women is categorised as ‘other behaviour online’ and as ‘beyond illegal activity’ (DCMS, 2019, p. 16). As such, women who are subjected to GBAO are positioned as ‘others’ in relation to the, presumably, mainstream victims of online abuse and online harms.
This othering of ordinary women creates further division, and even greater levels of policy fragmentation, leading, confusingly, to a situation where some GBAO is recognised and some is not, with recognition dependent not upon the behaviour but upon the category of victim.
Lack of Cohesion with Other Law and Policy Reforms

The OHWP is part of a busy policy and law reform landscape in England and Wales. In recent years, there has been a noticeable, although arguably limited, increase in the interest of law and policy makers in matters concerning online abusive behaviours and internet regulation more broadly (e.g. Law Commission, 2018). However, there is a concerning lack of cohesion between the various strands of law and policy reform on the intersecting matters of online abuse, online harms and online hate, all of which can, and frequently do, involve GBAO. In parallel to the ISS and the OHWP, the Law Commission of England and Wales has been working on two projects concerning possible reform of (i) hate crime legislation (Law Commission, 2019) and (ii) the law on abusive and offensive online communications (Law Commission, 2018). There are apparent overlaps between these two projects and the OHWP, particularly where online hate is concerned, given that GBAO features in all three areas (i.e. online hate, online communications, online harms). However, this holistic outlook towards reform is reflected neither in the Law Commission’s consultation papers nor in the OHWP. The OHWP lists hate crime as one of the harms within its scope (DCMS, 2019, pp. 31, 68), yet this brings little hope for victims of GBAO, given that gender (as well as sex) is not included as a protected characteristic under hate crime legislation (Barker & Jurasz, 2019b, 2020c). This lack of alignment among reform priorities, goals and aims further illustrates the aforementioned tendency towards fragmentation. Although the Law Commission and the DCMS are distinct entities and there is no requirement as such for the coordination of law and policy reform proposals between them, the misalignment of these agendas is concerning.
Furthermore, there is a noticeable absence of consideration of these reforms in the context of the Government’s commitment to tackling VAWG, including online and technologically-facilitated violence, as articulated in the Ending Violence Against Women and Girls Strategy 2016–2020 (HM Government, 2016). Given the largely aligned timing of the work and the interrelated subjects in scope, it is regrettable that a more holistic approach to law
reform in the areas of online hate, online communications and online harms has not been forthcoming. Arguably, it would have allowed, at the very least, a more inclusive and fuller view of the issue of online abuse to be presented, moving away from the approach of reforming the law in silos. As our day-to-day lives are neither one-dimensional nor lived in, or shaped by, a single context, this contextual diversity should be reflected in attempts to regulate and govern abuse occurring in the online sphere.
Conclusion: An Opportunity Missed?

The Digital Charter, the ISS and the OHWP have all, coincidentally, overlapped with substantial law reform projects addressing online hate and online offensive/abusive communications. This presented a unique opportunity in England and Wales for a cohesive, inclusively designed set of policies and reforms addressing a broad spectrum of potential online harms, including GBAO. Sadly, this opportunity has not been grasped; instead of forging ahead with a truly world-leading programme of reform, fragmentation, disjointedness and regulatory gaps look set to remain the norm. Tellingly, these disparate areas of reform all continue to overlook GBAO and OVAW. The underpinning vision of the OHWP claims to be one of a ‘thriving’ and safe society, based on freedom of expression online as well as ‘shared rights, responsibilities and opportunities’ (DCMS, 2019, p. 6). However, it is a flawed assumption that this ambitious goal can be achieved without a commitment to tackling gender-based and intersectional discrimination, inequalities, patriarchy and misogyny, and to ensuring that public spaces are free of violence. The emphasis on freedom of expression within the OHWP is important, especially from the perspective of a democratic society. However, the omission of GBAO from the proposed law and policy reforms invites a question: whose freedom of expression is protected, and at what cost? GBAO results in a range of harms, including participatory harms (Barker & Jurasz, 2021b), evident in the driving of women away from online spaces and the curtailing of their freedom of expression and right to
participate in public life. We have argued previously that ‘[w]e favour an approach to social media regulation that is rights based, prioritises gender equality online, and puts the rights of users at the heart of any regulatory framework’ (Barker & Jurasz, 2019a). This also extends to the way in which online harms ought to be addressed within the legal system. A modern, forward-looking and gender-inclusive response is needed to ensure that women are adequately protected from GBAO and that the responsibilities placed on platform providers and other key stakeholders reflect that.

Acknowledgements The author(s) received no financial support for the research, authorship and/or publication of this article. The assessment of law and policy presented in this chapter is correct as of March 2021.
References

Amnesty International. (2017). Amnesty reveals alarming impact of online abuse against women. Retrieved from Amnesty International: https://www.amnesty.org/en/latest/news/2017/11/amnesty-reveals-alarming-impact-of-online-abuse-against-women/.
Amnesty International. (2018, September 3). Toxic Twitter. Retrieved from Amnesty International: https://medium.com/@AmnestyInsights/unsocialmedia-tracking-twitter-abuse-against-womenmps-fc28aeca498a.
Barker, K., & Jurasz, O. (2019a, June). Online Harms White Paper consultation response. Open University.
Barker, K., & Jurasz, O. (2019b). Online misogyny as a hate crime: A challenge for legal regulation? Routledge.
Barker, K., & Jurasz, O. (2019c). Online misogyny: A challenge for digital feminism? Journal of International Affairs, 72(2), 95–113.
Barker, K., & Jurasz, O. (2020a, April). Online harms & Caroline’s law: What’s the direction for law reform? Retrieved from OU Research News: http://www.open.ac.uk/research/news/online-harms-and-carolines-law%E2%80%93whats-direction-law-reform.
Barker, K., & Jurasz, O. (2020b). Online violence against women as an obstacle to gender equality: A critical view from Europe. European Equality Law Review, 1, 47–60.
Barker, K., & Jurasz, O. (2020c, December). Reform of hate crime laws—Consultation response to the Law Commission. Open University.
Barker, K., & Jurasz, O. (2021a). Online misogyny as a hate crime: #TimesUp. In I. Zempi & J. Smith (Eds.), Misogyny as hate crime. Routledge.
Barker, K., & Jurasz, O. (2021b). Sexual violence in the digital age: A criminal law conundrum? German Law Journal, 22(5), 784–799.
Barker, K., & Jurasz, O. (2021c). Text-based (sexual) abuse and online violence against women: Towards law reform? In J. Bailey, A. Flynn, & N. Henry (Eds.), Technology-facilitated violence and abuse—International perspectives and experience. Emerald.
Barnardo’s. (2020, September 30). Barnardo’s responds to NSPCC’s six tests for online harms bill. Retrieved from Barnardo’s: https://www.barnardos.org.uk/news/barnardos-responds-nspccs-six-tests-online-harms-bill.
BBC News. (2019, January 29). Mental health: UK could ban social media over suicide images, minister warns. Retrieved from BBC News: https://www.bbc.co.uk/news/uk-47019912.
BBC News. (2020, June 29). Online Harms bill: Warning over ‘unacceptable’ delay. Retrieved from BBC News: https://www.bbc.co.uk/news/technology-53222665.
Buturuga v. Romania, No. 56867/15, Judgment (European Court of Human Rights), 11 February 2020.
Criminal Justice & Courts Act. (2015).
Data Protection Act. (2018).
DCMS. (2017). Internet safety strategy. HM Government.
DCMS. (2018). Digital charter. HM Government.
DCMS. (2019, April). Online Harms White Paper. HM Government.
Evensted, L. (2017, May 18). Conservative party manifesto promises to face up to fast changing technology. Retrieved from ComputerWeekly.com: www.computerweekly.com/news/450419091/ConservativeParty-manifesto-promises-to-face-up-to-fast-changing-technology.
Gorwa, R. (2019).
What is platform governance? Information, Communication & Society, 22(6), 854–871.
Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Beyond revenge porn: Gender, justice and image-based sexual abuse. Routledge.
HM Government. (2016, March). Ending violence against women and girls: Strategy 2016–2020. HM Government.
Law Commission. (2018). Abusive and offensive online communications: A scoping report. Law Commission.
Law Commission. (2019). Hate crime: Background to our review. Law Commission.
Leigh Day. (2020, April 8). Family of Molly Russell welcomes Online Harms White Paper. Retrieved from Leigh Day: https://www.leighday.co.uk/News/2019/April-2019/Family-of-Molly-Russell-welcomes-Online-Harms-Whit.
Lomas, N. (2020, December 15). UK online safety bill, coming next year, will propose fines of up to 10% of annual turnover for breaching duty of care rules. Retrieved from TechCrunch.com: https://techcrunch.com/2020/12/14/uk-online-harms-bill-coming-next-year-will-propose-fines-of-up-to-10-of-annual-turnover-for-breaching-duty-of-care-rules/.
Marlborough, C. (2020, February 17). What is ‘Caroline’s Law’? Campaigners mount online petition to change media regulation after death of Caroline Flack. Retrieved from The Scotsman: https://www.scotsman.com/news/scottish-news/what-carolines-law-campaigners-mount-online-petition-change-media-regulation-after-death-caroline-flack-1995359.
Martin, A. J. (2017, May 18). Conservatives pledge ‘digital charter’ crackdown on social media companies. Retrieved from Sky News: http://news.sky.com/story/conservatives-pledge-digital-charter-crackdown-n-social-media-companies-10883078.
McGlynn, C., Rackley, E., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). ‘It’s torture for the soul’: The harms of image-based sexual abuse. Social & Legal Studies, online first. https://doi.org/10.1177/0964663920947791.
Ministry of Justice. (2020, March 3).
Enhanced domestic abuse bill introduced to parliament. Retrieved from HM Government: https://www.gov.uk/government/news/enhanced-domestic-abuse-bill-introduced-to-parliament.
Nair, A. (2020). The regulation of internet pornography. Routledge.
Phillips, J. (2017). Everywoman: One woman’s truth about speaking the truth. Hutchinson.
Plan International. (2020). Free to be online? Retrieved from Plan International: https://plan-international.org/publications/freetobeonline.
R v Nimmo & Sorley (Westminster Magistrates’ Court 2014).
Statista. (2017, July). Most common types of online abuse or harassment experienced by women worldwide as of July 2017. Retrieved from Statista.com: https://www.statista.com/statistics/784833/online-harassmentwomentypes/.
Women’s Aid. (2017, June 27). Women’s Aid and Facebook join forces to help keep women safe online. Retrieved from Women’s Aid: https://www.womensaid.org.uk/keeping-women-safe-online/.
27 Promises and Pitfalls of Legal Responses to Image-Based Sexual Abuse: Critical Insights from the Italian Case Elena Pavan and Anita Lavorgna
E. Pavan (B)
Department of Sociology and Social Research, University of Trento, Trento, Italy
e-mail: [email protected]

A. Lavorgna
Department of Sociology, Social Policy and Criminology, University of Southampton, Southampton, UK
e-mail: [email protected]

Prologue

Tiziana Cantone, a 31-year-old Italian woman, was found dead on September 13, 2016, after she decided to end her life, overwhelmed by the non-consensual release of six videos in which she was seen having sexual intercourse. These videos, titled with her name, appeared on social media platforms and were shared via instant messaging apps. The case quickly made headlines in Italy, where Cantone was labelled a “web
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_27
idol”, and media commenting on the story spoke of a “woman consensually shooting hard videos”. Some even went so far as to wonder whether those materials were in circulation as an advertising strategy by an “emerging porn star” (Il Post, 2016a).1 In reality, the videos had been put into the public domain, without her consent, by a man Cantone was dating. She filed a complaint with the police in 2015 (Il Post, 2016b). Her attorney asked Facebook and Google to remove photos, posts, pages and websites mentioning the videos or portraying the woman. In the meantime, online pictures of Cantone’s face and body, and even the words she pronounced in those videos, started to be printed on t-shirts, coffee cups, smartphone covers and stickers. A myriad of new social media pages bearing her name were created to share the videos and shame her (Il Post, 2016a, 2016b). Tiziana Cantone had become a slut-meme. The trial (initially against the men who shared the videos online, then extended to the social media platforms) proceeded slowly, in large part due to the uncooperative nature of the internet companies hosting her videos, and also due to Cantone’s alleged participation “willingly and in full consciousness” (Complaint, Tribunal of Napoli Nord, 13 July 2015) in the making of the videos. In August 2016, the Court ordered the removal of content referring to the woman, but Cantone was nonetheless ordered to pay the internet companies and platforms approximately 20,000 euros in legal fees. Furthermore, she was denied any “right to oblivion” (that is, the right to be forgotten on the internet; see Il Post, 2016b). A few days later, Tiziana Cantone took her own life.
1 As reported by the online newspaper Il Post (2016a), the news speculating on Tiziana Cantone’s porn career was published by the Italian newspaper Il Fatto Quotidiano. After Tiziana’s death, Il Fatto Quotidiano replaced the original text with a letter by its editor-in-chief Peter Gomez, who admitted the “negligent conduct” of the newspaper and its “contribution” (albeit minor) to the “crime committed by the web” (Il Fatto Quotidiano, 2016).
Introduction

Tiziana Cantone’s story depicts the tragic individual experience of a woman brutalised by a group of men who shared videos of her without consent, but also by a violent “hybrid media system” (Chadwick, 2013) spanning multiple mass and digital media infrastructures and logics. At the same time, her story speaks dramatically to the ever-growing multiplicity of stories of online image-based sexual abuse (IBSA; Henry et al., 2020a), referred to by many as “revenge porn”. In response to tragic cases like this one, calls for the criminalisation of IBSA have emerged in several countries (Bates, 2017; Eaton & McGlynn, 2020; Goldsworthy et al., 2017; Hall & Hearn, 2018; Henry et al., 2020b; McGlynn et al., 2017; Salter & Crofts, 2015). In Italy, the case of Tiziana Cantone fostered collective sensitisation on this matter and kicked off a mobilisation process that culminated in 2019 with the introduction of the “Red Code” Law, a series of provisions aimed at addressing domestic and gender-based violence. For the first time in Italy, the Red Code Law sought to criminalise the non-consensual diffusion of intimate and sexually explicit materials “aimed at remaining private” (Law No. 69/2019, art 10[1]). Approved unanimously by the Chamber of Deputies (a quite unusual outcome in the Italian political system) and with an overwhelming majority in the Senate, the Red Code was largely applauded as a necessary, timely and effective provision: something that women like Tiziana Cantone could use to obtain justice.
Ultimately, the text of the Red Code and, particularly, the section relating to the non-consensual diffusion of sexual visual materials, carried with it a promise of a more just, respectful, and safe Italian society: “a first, fundamental step […] for the cultural revolution so needed in our Country”, as then Prime Minister Conte commented on his Facebook page (https://www.facebook.com/GiuseppeConte64/posts/666309147184387, last accessed 5 Mar 2021). However, we argue, this promise is likely to remain unfulfilled, as this legislative intervention is ineffective by design. In this chapter, we propose to tie this ineffectiveness to the legislator’s failure to recognise IBSA as a form of violence that
E. Pavan and A. Lavorgna
fuses social and technological factors (Henry & Powell, 2017; Henry et al., 2020a, 2020b; Maddocks, 2018). Borrowing explicitly from recent reflections that reconceptualise the idea of privacy online as built in practice through the interaction between individuals and platforms (Lupton, 2020a, 2020b; Lupton & Southerton, 2021), we propose to conceive of the non-consensual diffusion of intimate and/or sexual images as a “more than human” form of violence, perpetrated in and through “human-nonhuman assemblages” characterised by relational connections, unique affective forces, and agentic capacities. By bringing in the concept of assemblage, we seek to provide a novel starting point not only to grasp the specificity of online violent practices such as IBSA, but also to design more adequate legal responses in this domain. The reasons for this are threefold. First, reasoning through the lens of assemblages allows us to assign digital media a constitutive role within harmful courses of action. Second, assemblages allow us to better specify how already existing practices of IBSA are reconfigured within digital spaces. Third, assemblages force a shift beyond the perpetrator-target dichotomy and instead recognise IBSA as a necessarily collective and distributed form of violence. In the sections that follow, we begin by building a bridge between current debates on the non-consensual diffusion of intimate and/or sexual visual materials and the concept of assemblage. We then provide a critical reading of the text of the Red Code Law, arguing that, albeit filling a legislative void, its provisions are ineffective by design and, therefore, destined to remain so. We conclude by linking the main limitations of this provision to its complete lack of awareness of the human-nonhuman nature of this form of online abuse.
Beyond “Revenge Porn”: Online Image-Based Sexual Abuse as an Assemblage

Extant literature agrees that the non-consensual diffusion of intimate or sexual images encompasses a wide set of deviant and criminal behaviours. In common parlance, these practices are often referred to as “revenge porn” and thus reduced to the stereotypical case in which
images or videos created and/or shared in the intimacy of a personal relationship are used by a vengeful ex-partner (Powell et al., 2019). As more and more attention is paid to the pervasive tendency to upload and circulate intimate images without direct consent (Henry et al., 2020a), the general label “revenge porn” is increasingly contested as a “media generated moniker” (Maddocks, 2018, p. 1) incapable of fully conveying the scope and the severity of the harms involved (McGlynn et al., 2020), and one that possibly facilitates a victim-blaming attitude (Eikren & Ingram-Waters, 2016; Flynn & Henry, 2019; Powell, Henry, et al., 2018). In many cases, neither “porn” nor “revenge” is present (see, e.g., Powell et al., 2019). More often than not, intimate and sexual visual materials are created outside the context of pornographic work (albeit the non-consensual circulation of sexual content feeds digital public repositories and can become lucrative, see Maddocks, 2018); they can be taken illegally without the subject portrayed being aware, or individuals might be pressured into sharing them (Branch et al., 2017; Henry & Flynn, 2019; Henry & Powell, 2018; Lavorgna, 2020; Liong & Cheng, 2017; Powell & Henry, 2017). Similarly, their release might not be a way to seek revenge but, for example, a means to blackmail the target, or a form of control or harassment (Mantilla, 2015; Powell & Henry, 2017). Within cybercrime studies, the umbrella term IBSA has been employed in recent years to indicate a heterogeneous spectrum of criminal and deviant behaviours pivoting around the illegal release of sexual and intimate images, or the release of illegally obtained visual materials (Powell & Henry, 2017; Henry et al., 2020a). These behaviours constitute a specific form of “delegitimizing doxing” (Douglas, 2016), in which intimate personal information is revealed, often with the intent to damage the credibility or reputation of a certain individual.
Broadly speaking, delegitimising doxing is not always related to sex-related crimes, as happens, for instance, with the release of unflattering videos for public mockery in the context of bullying (Lavorgna, 2020). However, when the material released is of an intimate nature or consists of sexual content, such doxing has specific characteristics that make it comparable, according to an increasing number of authors, to offline forms of sexual violence and abuse (Bates, 2017; Bloom, 2014; Citron & Franks, 2014; Eaton &
McGlynn, 2020; Henry & Flynn, 2019; McGlynn et al., 2017; Powell & Henry, 2017). Research in this area specifies that digital media are not mere facilitators or amplifiers of “old” forms of violence. Designed as they are to actively intervene in the construction of social relations and in the ways in which these are experienced and lived (Marres, 2017), digital media “reinvent” already existing patterns of image-based abuse in at least two ways (Maddocks, 2018, p. 3). First, they expand the role played by single individuals, who are no longer just “spectators” of intimate images leaked, for instance, on magazine covers but are, in fact, actively producing, circulating, and consuming abusive and violent content. Second, digital media magnify the forms and the extent of the harm: in the age of the “non-optional internet” (Maddocks, 2018), the non-consensual circulation of sexual images irreversibly dismantles targeted women’s public identities and further confines them within already stiff “gendered opportunity structures” in society (McCammon et al., 2001). These substantive transformations can be better grasped through the concept of assemblage, originally derived from the work of Deleuze and Guattari (Lupton & Southerton, 2021). In a context of “deep mediatization” (Hepp, 2020), where every aspect of our societies is seamlessly adapted to digital media logics and infrastructures, courses of action performed online become “more than human”, as they are defined by the imbricated encounter between individuals, the digital platforms they use, and their unique agentic capacities (Lupton & Southerton, 2021). The concept of assemblage has recently been adopted to rethink users’ understandings and practices of online privacy, underlining how these are never predetermined but, rather, always generated in practice when “people come together with technologies to produce human-data assemblages that are constantly changing” (Lupton, 2020a, p. 3166).
From this perspective, acts of self-representation through “data personas” are simultaneously shaped by two sets of affordances (Lupton, 2020b, p. 5): human affordances, which enable individuals to understand and live in their worlds, expressing their thoughts and feelings but also exerting power over others; and technological affordances, which enhance relational capacities, solicit and direct public attention, and help constitute social reality. As human and technological affordances come together
within assemblages, “flows of relational connections and affective forces [are generated], opening or alternatively closing agential capacities [and structuring] micropolitical operations of power at the level of the assemblage [or] accumulating at the level of the macropolitical, affecting larger assemblages (social groups or organisations, for example)” (Lupton, 2020b, p. 5). As such, framing online IBSA in terms of assemblages allows us to make three important specifications. First, it allows us to complicate the very idea of IBSA by clarifying that digital media do not simply play an environmental role but bring harmful and violent courses of action into existence (see also Powell et al., 2020). Seen through the lens of assemblages, online IBSA is configured as a hybrid in which violence is always generated in practice through the imbrication of human and nonhuman agencies. As perpetrators take advantage of digital media affordances and targets experience abuse in and through digital milieus, the assemblage of IBSA becomes loaded with affects, a form of pre-emotive intensity subjectively experienced (Papacharissi, 2016, p. 5), which permeate violent courses of action and are channelled through digital media interfaces and affordances along global networks of relations. Second, reasoning in terms of assemblage allows us to further clarify how already existing patterns of IBSA are “reinvented”. More profoundly than being digitally enabled, facilitated or even experienced, IBSA becomes digitally embodied. As violent human affordances encounter digital media, violence inherits and is thus reconfigured along the structural affordances of the very technologies that bring it into being.
Thus, borrowing from boyd (2010), IBSA becomes persistent, as violent acts last as long as their traces are available, in one form or another, within digital spaces; replicable, as original violent acts can swarm easily and quickly along a chain of personalised abusive copycats; scalable, as situated and contextualised acts of violence attract mass participation; and searchable, as digital acts of violence can be queried, recovered and reiterated infinitely. Third, the idea of assemblage conveys unique attention to the “shared” and “distributed” agency (Lupton & Southerton, 2021, pp. 1–2) that characterises digital courses of action. The relational connections, affective forces, and agentic capacities that structure assemblages make IBSA a
necessarily collective and distributed course of action. This course of action involves not only targets and perpetrators, but also a wide variety of other human and nonhuman actors: people witnessing and even contributing to persistent, replicable, scalable, and searchable violent acts; people supporting targets in their resistance endeavours; platforms’ affordances and algorithms; social media companies and their agendas; media outlets and journalists, recounting stories of violence from their oftentimes biased perspective; and, of course, institutional actors. As such, and in line with Powell et al. (2020), technology cannot be considered a mere facilitator of sexual offending: indeed, digital technologies deeply contribute to both cultures and practices of abuse, affecting the intersection of human, social, and technical factors at its core. In other words, they are so embedded in the larger social fabric (including in crime, victimisation, and justice) as to redesign the relationships between perpetrators, targets, and bystanders, enabling harmful courses of action to spread by mimicry through social networks (Powell et al., 2018). Whether and how legal responses are in fact responding to the intricacies of IBSA assemblages remains a matter of empirical examination. We explore this in the context of the Italian Red Code Law in the next section.
The Red Code Law

In 2019, Italy adopted a new law on domestic and gender-based violence, the so-called Red Code Law (Law No. 69/2019), which enacted changes to both substantive criminal law and criminal procedure in the country. For the purposes of this chapter, the most relevant provision is article 10(1), which adds article 612-ter to the Criminal Code to criminalise “the unlawful dissemination, delivery, transfer, or publication of sexually explicit images or videos aimed at remaining private”. According to the new article, when sexually explicit images or videos that have been created or stolen are published or otherwise distributed against
the consent of the individuals represented therein, the behaviour is punishable with one to six years of detention and a fine ranging from EUR 5,000 to EUR 15,000. Equally punishable are those who receive and further distribute these images “with the intention of harming” the individuals represented. Higher sentences are given in those cases where the material is shared by a spouse, even when separated or divorced, or by a current or former partner. Notably, a higher sentence is also imposed if the non-consensual release “occurs through computational or telecommunication means” (art.10[1]), or if the compromising images affect individuals with mental or physical disabilities or pregnant women. Investigations can start after the legal complaint of the target (who has six months, rather than the standard three months, to file the complaint, and the action can be withdrawn only via a formal statement in court), or can be started ex officio if the target has a disability or is pregnant, or if the offence is linked to another officially prosecutable offence (such as extortion). Over its first year of application, according to the latest available report of the Italian Police (Polizia di Stato, 2020), more than 1000 investigations under art 612-ter were started, with a peak of criminal complaints in May 2020, during the national COVID-19 lockdown. 81% of the investigations involved female targets (among those, 83% were above 18 years old). Of the investigations, 121 cases were set for trial, and only six resulted in a conviction (n = 3) or plea deal (n = 3) (Polizia di Stato, 2020). From a procedural point of view, the declared intent of the legislator was to promote a prompt(er) and qualified intervention by the State to help targets of gendered abuse once they file a complaint.
Some best practices to facilitate prompt intervention (e.g., colour-coding folders with cases of abuse for prioritisation), to provide support (including psychological support), and to avoid secondary victimisation have been developed and implemented in some judicial offices (Questionegiustizia.it, 2020), but they remain exceptional and are not applied consistently throughout the country. Overall, the reception of the new legislation has been mixed. Most commentators have warmly welcomed it, recognising it as a much-needed intervention filling a legislative void and recognising important forms of gendered violence and abuse (e.g., DoppiaDifesa, 2019).
Others, however, have underlined some of its substantive and procedural limitations, stressing, for instance, that the tools provided are still insufficient for protecting minors, pointing to the limited efficacy of adding only a generic article to the Criminal Code without elaborating national guidelines (first and foremost as regards best practices in interacting with targets), and emphasising the importance of training and raising awareness among law enforcement and the judiciary on gender-based violence, discrimination and biases (Questionegiustizia.it, 2020; Romano & Marandola, 2020; Spina, 2020).
Ineffective by Design: The Disassemblage of the Red Code Law

The Red Code displays some of the limitations identified in recent literature on IBSA law. The framing of the illegal behaviour it adopts does not overcome the persistent difficulty of recognising that certain digital experiences are harmful, and thus it fails to fully acknowledge some targets as crime victims (Martellozzo & Jane, 2017). Furthermore, to become effective, the provision presupposes that targets of IBSA are already aware of (and therefore already severely victimised by) the diffusion of their images. Hence, this approach does not meet the challenges of preventive justice (Ashworth et al., 2013) and remains a merely punitive measure. Moreover, article 612-ter criminalises only the most radical harmful behaviour, the diffusion of materials, but does nothing to reduce the harms connected with the threat of such diffusion, and falls short when it comes to cases that involve intimate images and images without blatantly explicit sexual content (but that can still be sexualised, or otherwise create a range of harms), leaving it to the personal judgement of the Court to define what counts as “sexually explicit” and, therefore, liable to constitute an offence (Giovanni, 2020). More relevantly, while never referring explicitly to “revenge porn”, the Red Code seems nonetheless to endorse the victim-blaming perspective this label encapsulates (Eikren & Ingram-Waters, 2016), as it binds the element of harm to the absence of “consent” by the target. On the one hand, this flows directly from and, at the same time, contributes to
normalising and minimising the issue of sexual violence against women in cyberspace (Powell & Henry, 2017), as it leaves room to think that, when consensual (however consent is defined), the diffusion of intimate and sexual images cannot and should not be prosecuted (as no harms are recognised). On the other hand, it paves the way for a plethora of reframing exercises that perpetrators can perform to conceal their motivations behind a supposed consent (Giovanni, 2020). Reasoning along the lines of assemblages allows us to further gauge the (in)effectiveness of this specific provision. Building on our previous considerations, three main elements emerge distinctly: first, the peripheral and merely instrumental role assigned to digital media; second, the disregard of the more-than-human nature of online IBSA; and third, the lack of recognition of the collective and distributed nature of online IBSA. We explore each of these below.
Digital Media at the Periphery of IBSA

As mentioned above, article 612-ter envisages higher sanctions in cases where the non-consensual diffusion of images “occurs through computational or telecommunication means”. Digital media are thus framed by the provision as peripheral and instrumental factors within a legal framework that addresses IBSA tout court, in fact presupposing an isomorphic translation of offline mechanisms of violence to the online space. While IBSA can certainly be perpetrated without using digital media, approaching the issue by simply considering online IBSA a specification of a broader form of violence, rather than a form of violence in itself, is highly problematic. As we discuss below, this choice paves the way for response strategies that are necessarily limited and are not tailored to account for the specific flows of relations, affects, and agencies that sustain online IBSA. However, assigning digital media a peripheral and instrumental role entails a second problem: it continues to sideline social media companies and internet intermediaries more broadly. Recent contributions have underlined how intermediaries play an active and multifaceted role within online harmful and violent courses of action
(Pavan, 2017). On the one hand, they act like “hidden influentials”, able, through their terms of service and their algorithmic functioning, to shape an environment that can be more or less conducive to online IBSA. On the other, they act like “free agents”, protected internationally by the principle of non-liability and, within national legislations, by cloudy characterisations swinging between hosting and content providers (D’Angelo, 2019). In these circumstances, direct responsibility for harmful and violent courses of action always remains with users, while intermediaries have no formal obligation and are free to decide, within the framework of their corporate social responsibility, whether and how to take action, particularly in cases of online IBSA. While mechanisms for quicker content takedown are being developed by big platforms like Facebook and Google (Henry et al., 2020a), they remain insufficient to cope with the replicability and the scalability of digital harmful content, and, in any case, are carried out in a continuous tension with the need to guarantee free speech and, in fact, with the profits that may be generated by viral cases of IBSA (Massanari, 2017). Even when formally obliged by the judicial authority to intervene and remove harmful content, as occurred in the case of Tiziana Cantone, platforms can initiate a formal appeal based on alleged practical difficulties in retracing and eliminating the images, videos, and other content that constitute the formal offence, lamenting the lack of a formal request from the authorities to act in this direction (thereby implicitly suggesting the insufficiency of users’ requests) (D’Angelo, 2019). The “judicial time” required to clarify whether a direct intervention by the intermediary is legally expected, however, runs at a different pace than the time of the IBSA assemblage, and the Red Code does not manage to change this pace.
The Limited Power of Deterrence

By leaving digital media at the margins, the Red Code fails to engage with “how new technologies operate, how they are gendered, how they are used, and what their impacts are” (Henry & Powell, 2015, p. 773), and thus to cope with the more-than-human nature of IBSA. Not only
does the human-only perspective prevent the active participation of intermediaries, but it also concentrates all the agentic capacity of the provision within a legal model based on the theory of deterrence, driven by the need to fill a pivotal legislative void in a logic of “progressive punitivism” (Aviram, 2020, p. 199) that relies on the traditional toolbox of criminal justice. In doing so, the legislator has disregarded the adoption of a cultural model (Hodges, 2020) based on behavioural aspects, which aims to lead to cultural change through interventions at the level of enforcement, governance, responsibility, and accountability. The legal model, of course, has its own merits and, in itself, can be an enabler of change. As discussed by Corda (2020), through its “transformational function”, criminal law can use punishment to change (not only to reflect) social norms, attitudes, and beliefs, in conjunction with non-penal policymaking tools. However, for this norm-shaping force to work, as explained by Corda (2020, p. 589), four conditions should be satisfied: (1) the use of the law is not purely symbolic, but rather outcome-oriented; (2) the law does not constitute an unwarranted form of social dirigisme/interventionism; (3) the law does not use the defendant merely as a means to an end; and (4) the law does not impose partisan moral views going against fundamental rights and principles of democratic pluralistic societies. The first two conditions are not fully met in the Red Code, if only because the law was adopted out of collective sensitisation after the Cantone case, partially as a symbolic weapon in the midst of greater public attention on the topic.
While probably expressive in its intention (Cooter, 1998), and hence interventionist by definition in its attempt to express social values to be internalised via the law, the Red Code has fallen short in understanding the socio-technical dynamics of IBSA and consequently fails to address them. Moreover, as targets are not systematically and adequately supported, we suspect that the third condition is also not met. Failing the expressive intentions, deterrence remains. Deterrence, however, appears to be a thin response in light of the imbrication of human and technological affordances that structure IBSA. When it comes to this form of violence, the above-mentioned difficulty of recognising some digital experiences as harmful is magnified, as it entwines with key gendered norms and assumptions: first, that violence,
in order to be defined as such, must be physical; second, that speech that trivialises violence against women, or approaches it humorously, does not constitute hate speech; and third, that “common misogynistic slurs do not present a real threat of violence” (Nyst, 2013, points 1–4). Online, these assumptions are applied to acts within spaces that are structured by technical affordances enabling a “private sociality” (Papacharissi, 2014), wherein individuals can carry out conversations that are perceived to be neither completely public nor private, as happens within Facebook and Telegram groups or Reddit threads (Massanari, 2017; Semenzin & Bainotti, 2020). Combined, these two sets of affordances nurture violent courses of action that are rarely perceived as such and, therefore, rarely perceived as behaviour that could be subject to sanction.
The Persistent Dichotomy Between Perpetrator(s) and Targets

The human-only perspective endorsed by the Red Code shows some sensitivity towards the collective and distributed nature of IBSA, foreseeing sanctions also for “second distributors”, namely individuals who, albeit not directly involved in the production of image and video materials, further contribute to their diffusion. The inclusion of second distributors remains nonetheless problematic: in order to be prosecutable, their action needs to be aimed at intentionally harming the target, an element that presumably remains to be proved by targets themselves. Despite partially complicating the typical dichotomy between targets and perpetrators, what the Red Code misses is a full acknowledgement of the collective and distributed nature of online IBSA. Binding the prosecutable behaviours of third parties to an explicit intention of harming targets leaves out of the picture the myriad of users who engage with violent content in the ways allowed by the platforms (for example, through likes or comments), engagements that are by all means active contributions to courses of violence and abuse but lack the element of intentional harm.
Moreover, as seen above, while assigning media only a peripheral role, the provision establishes a framework that not only disregards the role played by intermediaries but also excludes the active role of platforms as agents of harm. As poignantly noted by Semenzin and Bainotti (2020), platform features, policies, and algorithms do not simply matter as enablers or constrainers; rather, in their encounter with dominant masculine and misogynistic cultures, they become “gendered”. Thus, secure data storage and users’ anonymity are appropriated in ways that follow, and at the same time contribute to, existing gendered power hierarchies, enabling a wide repertoire of abusive practices: from explicit IBSA to trivialised general porn, to spying and voyeurism, to the construction of cross-platform links that foster the circulation of images, and the automated bot-driven collection of non-consensual intimate materials and personal information (ibid.).
Conclusion

In this chapter, we sought to offer a novel entry point for reflecting on the adequacy and responsiveness of legal provisions relating to IBSA. We thus adopted the lens of assemblages to discuss the 2019 Red Code Law. Certainly, this law has the merit of having shed light on some of the practices that fall under the IBSA umbrella, making these issues more “mainstream” in public and legal debates. This notwithstanding, we argued that this legislative intervention is ineffective by design, as it not only presents some of the typical limits of legal provisions in this domain but, more broadly, fails to recognise and address the more-than-human nature of this form of abuse. In light of these limitations, it is not surprising that IBSA is estimated to be a growing phenomenon in Italy (PermessoNegato, 2020), a country where, it is worth noting, violence against women remains a problem (GREVIO, 2020), and where diffuse forms of gender-based discrimination (including the “whore stigma”) attest to the persistence of norms organising social relationships into power-laden hierarchies (Zambelli et al., 2018).
It is certainly unrealistic to expect that a single provision can effectively and efficiently address the intricacies of the IBSA assemblage. However, assemblages should not be seen as an impossible benchmark to meet but, rather, as an entry point for reshaping the logics and forms of interventions. In a context in which IBSA continues to spread and transform, legal responses are not merely invited but expected to be delivered and, more importantly, to be focused. While the Red Code may be the exception in its choice to make digital media “just another aggravating factor”, we argue that wearing the lens of assemblages can help embrace, rather than circumvent, the complexities of IBSA.

Acknowledgements The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Ashworth, A., Zedner, L., & Tomlin, P. (2013). Prevention and the limits of the criminal law. Oxford University Press.
Aviram, H. (2020). Progressive punitivism: Notes on the use of punitive social control to advance social justice ends. Buffalo Law Review, 68(1), 199–245.
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42.
Bloom, S. (2014). No vengeance for “revenge porn” victims: Unravelling why this latest female-centric, intimate-partner offense is still legal, and why we should criminalize it. Fordham Urban Law Journal, 42(1), 233–289.
boyd, D. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), Networked self: Identity, community, and culture on social network sites (pp. 39–58). Routledge.
Branch, K., Hilinski-Rosick, C. M., Johnson, E., & Solano, G. (2017). Revenge porn victimization of college students in the United States: An exploratory analysis. International Journal of Cyber Criminology, 11(1), 128–142.
Chadwick, A. (2013). The hybrid media system: Politics and power. Oxford University Press.
Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 49, 345–391.
Cooter, R. (1998). Expressive law and economics. The Journal of Legal Studies, 27(S2), 585–607.
Corda, A. (2020). The transformational function of the criminal law: In search of operational boundaries. New Criminal Law Review, 23(4), 584–635.
D’Angelo, S. (2019). Il caso Tiziana Cantone, i social network e la web reputation [The Tiziana Cantone case, social networks and web reputation]. Diritto.it – Diritto e Diritti dal 1996. Retrieved from https://www.diritto.it/il-caso-tiziana-cantone-i-social-network-e-la-web-reputation/. Last accessed 2 Feb 2021.
DoppiaDifesa. (2019). Codice rosso: Focus sulla violenza di genere [Red Code: Focus on gender-based violence]. Retrieved from https://www.doppiadifesa.it/codice-rosso-focus-sulla-violenza-di-genere/. Last accessed 5 Mar 2021.
Douglas, D. M. (2016). Doxing: A conceptual analysis. Ethics and Information Technology, 18(3), 199–210.
Eaton, A. A., & McGlynn, C. (2020). The psychology of nonconsensual porn: Understanding and addressing a growing form of sexual violence. Policy Insights from the Behavioral and Brain Sciences, 7(2), 190–197. https://doi.org/10.1177/2372732220941534.
Eikren, E., & Ingram-Waters, M. (2016). Dismantling “You get what you deserve”: Towards a feminist sociology of revenge porn. Ada: A Journal of Gender, New Media, and Technology, 10.
Flynn, A., & Henry, N. (2019). Image-based sexual abuse: An Australian reflection. Women and Criminal Justice, online first. https://doi.org/10.1080/08974554.2019.1646190.
Giovanni, A. (2020). L’inquadramento normativo del Revenge Porn: Un illecito plurioffensivo [The legal framing of revenge porn: A wrong harming multiple interests]. Diritto.it – Diritto e Diritti dal 1996. https://www.diritto.it/linquadramento-normativo-del-revenge-porn-un-illecito-plurioffensivo/. Last accessed 2 Feb 2021.
Goldsworthy, T., Raj, M., & Crowley, J. (2017). “Revenge porn”: An analysis of legislative and policy responses. International Journal of Technoethics, 8(2), 1–16.
GREVIO. (2020). Baseline evaluation report—Italy. Group of Experts on Action against Violence against Women and Domestic Violence, Council of Europe.
Hall, M., & Hearn, J. (2018). Revenge pornography: Gender, sexuality and motivations. Routledge.
E. Pavan and A. Lavorgna
Henry, N., & Powell, A. (2015). Embodied harms: Gender, shame, and technology-facilitated sexual violence. Violence Against Women, 21(6), 758–779.
Henry, N., & Powell, A. (2017). Sexual violence in a digital age. Palgrave Macmillan.
Henry, N., & Powell, A. (2018). Technology-facilitated sexual violence: A literature review of empirical research. Trauma, Violence & Abuse, 19(2), 195–208.
Henry, N., & Flynn, A. (2019). Image-based sexual abuse: Online distribution channels and illicit communities of support. Violence Against Women, online first. https://doi.org/10.1177/1077801219863881.
Henry, N., Flynn, A., & Powell, A. (2020). Technology-facilitated domestic and sexual violence: A review. Violence Against Women, 26(15–16), 1828–1854.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. (2020b). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge.
Hepp, A. (2020). Deep mediatization. Routledge.
Hodges, C. (2020, May 1). Science-based regulation in financial services: From deterrence to culture (Oxford Legal Studies Research Paper No. 19/2020). Retrieved from http://dx.doi.org/10.2139/ssrn.3590176. Last accessed 5 Mar 2021.
Il Fatto Quotidiano. (2016). Tiziana Cantone: il caso sul web, il suicidio e le nostre negligenze. Retrieved from https://www.ilfattoquotidiano.it/2016/09/14/tiziana-cantone-il-caso-sul-web-il-suicidio-e-le-nostre-negligenze/. Last accessed 5 Mar 2021.
Il Post. (2016a). La morte di Tiziana Cantone. Retrieved from https://www.ilpost.it/2016/09/14/tiziana-cantone-morta/. Last accessed 5 Mar 2021.
Il Post. (2016b). Storia di Tiziana Cantone. Retrieved from https://www.ilpost.it/2016/09/15/storia-tiziana-cantone/. Last accessed 5 Mar 2021.
Lavorgna, A. (2020). Cybercrimes: Critical issues in a global context. Macmillan.
Liong, M., & Cheng, G. H. L. (2017). Sext and gender: Examining gender effects on sexting based on the theory of planned behaviour. Behaviour & Information Technology, 36(7), 726–736.
Lupton, D. (2020a). “Not the real me”: Social imaginaries of personal data profiling. Cultural Sociology, online first. https://doi.org/10.1177/1749975520939779.
Lupton, D. (2020b). Thinking with care about personal data profiling: A more-than-human approach. International Journal of Communication, 14, 3165–3183.
Lupton, D., & Southerton, C. (2021). The thing-power of the Facebook assemblage: Why do users stay on the platform? Journal of Sociology, online first. https://doi.org/10.1177/1440783321989456.
Maddocks, S. (2018). From non-consensual pornography to image-based sexual abuse: Charting the course of a problem with many names. Australian Feminist Studies, 33(97), 345–361.
Mantilla, K. (2015). Gendertrolling: How misogyny went viral. Praeger.
Marres, N. (2017). Digital sociology. Polity Press.
Martellozzo, E., & Jane, E. A. (2017). Introduction: Victims of cybercrime on the small ‘i’ internet. In E. Martellozzo & E. A. Jane (Eds.), Cybercrime and its victims. Routledge.
Massanari, A. (2017). #Gamergate and the Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.
McCammon, H. J., Campbell, K. E., Granberg, E. M., & Mowery, C. (2001). How movements win: Gendered opportunity structures and US women’s suffrage movements, 1866 to 1919. American Sociological Review, 66, 49–70.
McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). “It’s torture for the soul”: The harms of image-based sexual abuse. Social & Legal Studies, online first. https://doi.org/10.1177/0964663920947791.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond “revenge porn”: The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46.
Nyst, C. (2013). How gender-based harassment falls through the digital cracks. GenderIT. https://www.genderit.org/es/node/3920. Last accessed 2 Feb 2021.
Papacharissi, Z. (2014). Affective publics: Sentiment, technology and politics. Oxford University Press.
Papacharissi, Z. (2016). Affective publics and structures of storytelling: Sentiment, events and mediality. Information, Communication & Society, 19(3), 307–324.
Pavan, E. (2017). Internet intermediaries and online gender-based violence. In M. Segrave & L. Vitis (Eds.), Gender, technology and violence (pp. 62–78). Routledge.
PermessoNegato. (2020). State of Revenge Novembre 2020: Analisi dello stato della pornografia non consensuale in Italia su Telegram. Retrieved from https://www.permessonegato.it/doc/PermessoNegato_StateofRevenge_202011.pdf. Last accessed 5 Mar 2021.
Polizia di Stato. (2020). Un anno di codice rosso: Reati spia e femminicidi. Retrieved from https://questure.poliziadistato.it/it/Siena/articolo/14835fbe47d7cf4f8323220872. Last accessed 5 Mar 2021.
Powell, A., Flynn, A., & Henry, N. (2020). Sexual violence in digital society: Human, technical and social factors. In T. Holt & R. Leukfeldt (Eds.), Understanding the human factor of cybercrime (pp. 134–155). Routledge.
Powell, A., & Henry, N. (2017). Sexual violence in a digital age. Palgrave.
Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse (Chapter 25). In W. S. DeKeseredy & M. Dragiewicz (Eds.), Handbook of critical criminology. Routledge.
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian adults. Computers in Human Behavior, 92, 393–402.
Powell, A., Stratton, G., & Cameron, R. (2018). Digital criminology: Crime and justice in digital society. Routledge.
Questionegiustizia.it. (2020). Rapporto: Un anno di “Codice Rosso”. Retrieved from https://www.questionegiustizia.it/data/doc/2723/rapportocodice-rosso.pdf. Last accessed 5 Mar 2021.
Romano, B., & Marandola, A. (Eds.). (2020). Codice Rosso. Pacini Editore.
Salter, M., & Crofts, T. (2015). Responding to revenge porn: Challenges to online legal impunity. In L. Comella & S. Tarrant (Eds.), New views on pornography: Sexuality, politics, and the law. Praeger.
Semenzin, S., & Bainotti, L. (2020). The use of Telegram for the non-consensual dissemination of intimate images: Gendered affordances and the construction of masculinities. Social Media + Society, 6(4), 2056305120984453. https://doi.org/10.31235/osf.io/v4f63.
Spina, L. (2020). Il “Codice Rosso” e la tutela della vittima minorenne. Minorigiustizia, 1, 144–158.
Zambelli, E., Mainardi, A., & Hajek, A. (2018). Sexuality and power in contemporary Italy: Subjectivities between gender norms, agency and social transformation. Modern Italy, 23(2), 129–138.
28 Restorative Responses to the Rhizomatic Harm of Nonconsensual Pornography Alexa Dodge
A. Dodge, Dalhousie University, Halifax, NS, Canada
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_28

Introduction

The nonconsensual distribution of nude or sexually explicit images (i.e. nonconsensual pornography) is a form of sexual violence that violates a victim’s right to privacy and bodily autonomy (Fairbairn, 2015; Henry et al., 2020; McGlynn et al., 2017, 2020). Concern with this act tends to be especially pronounced in regard to the victimisation of youth, with several legal and extralegal responses around the world having been inspired by particularly tragic youth cases (Dodge, 2016; Powell & Henry, 2017). While a growing collection of scholarship and a dedicated activist movement have, in many jurisdictions, successfully pushed for criminal justice responses to punish nonconsensual pornography offenders and send a message that this act is harmful and unacceptable (Bloom, 2014; Citron & Franks, 2014; Flynn & Henry, 2019; Powell &
Henry, 2017), emerging research suggests that criminal responses do not adequately address the central needs of youth victims. Rather, victims are often interested in accessing alternative options to support them in stopping the spread of images and resolving related bullying or harassment (Dodge & Lockhart, 2021; Dodge & Spencer, 2018; Segal, 2015; Shariff & DeMartini, 2015). Criminal justice approaches provide a particular set of responses focused on punishing perpetrators. Such responses are often unable to address the core needs expressed by victims of nonconsensual pornography, such as assistance in immediately deleting and stopping the spread of their intimate imagery and addressing related victim blaming, shaming, and harassment at the hands of their peers and other community members (Dodge & Spencer, 2018; Powell & Henry, 2017; Segal, 2015; Shariff & DeMartini, 2015). School-based approaches focused on expelling or suspending individual youth perpetrators are likewise “premised on an authoritarian and retributive model of discipline” (Russell & Crocker, 2016, p. 195) that does not address the multiple sources of relational harm in these cases. While certainly less aggressive and stigmatising than a formal criminal justice intervention, suspension and expulsion rely on narrowly focused punishment regimes that fail to address the broader context in which harm occurs, and may even be counterproductive in terms of developing healthy environments for youth (Russell & Crocker, 2016). To understand how criminal and expulsion/suspension-based approaches fall short, it is useful to consider the “rhizomatic” nature of the harm many youth experience in the aftermath of nonconsensual intimate image distribution. When these harms are understood in all of their complexity and relationality, the need for holistic and adaptable responses becomes clear. 
A cohort of critical criminologists have utilised Deleuze and Guattari’s (1987) concept of the rhizome to understand how digital forms of justice seeking act as a form of “rhizomatic justice” in response to sexual violence (Fairbairn & Spencer, 2018; Powell, Stratton, et al., 2018). Influenced by this work, in this chapter I argue that it is also useful to understand the harms of digital forms of sexual violence as rhizomatic and, thereby, to better understand that rigid,
retributive, and narrow perpetrator-focused responses are incongruent with the nature of this harm.
Rhizomatic Harm

The initial act of nonconsensually distributing a nude or sexually explicit image is a harmful act of sexual violence representing a violation of the victim’s privacy, trust and bodily autonomy; yet this initial act is often only a portion of the harm experienced by victims. In many of the most tragic youth cases, this harm is amplified and extended by sexist, sex-negative, homophobic, gender-norm enforcing or otherwise discriminatory beliefs that result in victim blaming and shaming responses arising from multiple sources in the victim’s school and community (Henry et al., 2017; Karaian, 2014; Naezer & Oosterhout, 2020; Ringrose & Harvey, 2015). For instance, in the internationally reported case of Canadian teenager Rehtaeh Parsons, the initial act of image capture and distribution by two teen boys was taken up by both Rehtaeh’s male and female peers to persistently bully her with slut shaming taunts (Rau, 2015; see also Chapter 34, this volume). In such cases, simply punishing those who initially distributed the images without consent does nothing to address the widespread acceptance and perpetration of sexist slut shaming and victim blaming (Powell, Henry, et al., 2018). Victim blaming and shaming responses have also been reported to arise from additional sources such as parents, school officials and criminal justice officials (Dodge & Spencer, 2018; Hasinoff, 2015; Henry et al., 2018; Naezer & Oosterhout, 2020). For instance, youth victims of nonconsensual pornography who originally shared their nude image consensually (i.e. “sexting”) are sometimes shamed and blamed by police, principals or their own parents for their consensual act of sexual expression (Albury et al., 2017; Angelides, 2013; Dodge & Spencer, 2018; Setty, 2018).
In a Canadian criminal case of nonconsensual pornography, for example, a teen girl describes one of the core sources of impact as the feeling of shame caused by her parents’ anger and disappointment with her for having consensually shared her nude image with the boy who later shared it without her consent (R v BMS, 2016). Some police
officers also express, in dealing with cases involving male perpetrators and female victims, the sexist sentiment that boys’ acts of nonconsensual pornography are a natural response to receiving nude images from girls, and that it is the female victim who really needs to be shamed for her act of sexual expression (Dodge & Spencer, 2018). Parents, school staff, criminal justice officials, and other community members may all engage in victim blaming behaviours when they label victims who consensually share their images as reckless, gullible, dumb, or dirty (Crofts & Lievens, 2018; Dodge, 2021; Karaian, 2014; Naezer & Oosterhout, 2020). Demonstrating the level of harm that can be caused in this manner, both youth and adult victims regularly report that their biggest fear in the aftermath of image exposure is that their parents will find out about their consensual image distribution and be disappointed, angry with them, or ashamed of them (Dodge & Lockhart, 2021; McGlynn et al., 2017).

The multiple sources of harm in many youth cases are well understood through Deleuze and Guattari’s (1987) image of the rhizome. The philosophical concept of the rhizome is based on the botanical rhizome, a type of plant stem that grows horizontally and sends out a multitude of nodes in various directions from which roots grow. The rhizome is useful as a way to understand phenomena that emerge from an array of sources and are non-linear and non-hierarchical in nature (Deleuze & Guattari, 1987). A cohort of critical criminologists have used this concept to understand the “non-hierarchical, asymmetrical and heterogenous” nature of digital activism in response to sexual violence (Fairbairn & Spencer, 2018; Powell, Stratton, et al., 2018, p. 154; see also Chapter 34, this volume). This concept is also generative as a way to understand the harm of nonconsensual pornography.
As described above, the harm in the most impactful youth cases is often rhizomatic as it emerges from multiple entry points (initial image distributor, police, parents, peers) and creates complex offshoots of harm (parental shame, peer bullying, victim blaming from criminal justice or school officials). Just as the digitally distributed image is capable of spreading at rapid speeds through a variety of digital and physical networks (e.g. from one youth smartphone to a group messaging thread and then shooting out in all directions to friends of friends and being reported to school officials, parents, and
criminal justice officials), the harms experienced by victims can likewise multiply and root throughout the various digital and physical networks that make up youths’ lives. The rhizomatic nature of this harm makes it particularly at odds with the rigid and narrowly perpetrator-focused responses typically offered by the punitive criminal justice system and school-based policies focused on punishing or removing the initial perpetrator. Deleuze and Guattari write that “a rhizome may be broken, shattered at a given spot, but it will start up again on one of its old lines, or on new lines” (1987, p. 9). Likewise, responses focused on punishing or removing perpetrators of this act are akin to shattering a rhizome only at a single point, without addressing the many directions in which the nodes of harm have already spread. For instance, suspending the initial image distributor does nothing to address the group of peers that have begun to relentlessly bully a young female victim with sexist taunts, or a young gay victim with homophobic slurs. Likewise, the harm caused by a victim’s parent expressing disappointment or anger towards their child for their act of consensual sexual expression may prove to be one of the most deeply rooted harms of the experience and yet is entirely unaddressed by a typical intervention focused on punishing the initial distributor. Such responses do not address the nodes that have already shot off from the initial act of distribution and are now forming roots in one’s school, family, and community. Fertilised by sex-negative, sexist and other discriminatory beliefs, these rhizomatic harms must be addressed in all of their diversity and blocked at all of their sources.

Conceptualising the harm of this act as rhizomatic is helpful for denaturalising the idea of criminal justice or suspension/expulsion as a necessarily sensible or adequate response.
The harms of this act emerge from multiple sources and plug into various pre-existing discriminatory beliefs. Thus, these harms must be addressed through nimble responses that confront the various sources from which harm emerges and extends. Punishing the perpetrator often fails to “help the survivor cope with feelings of humiliation, shame, and embarrassment” and retributive responses regularly fail to create space for the “offender—or the community—to think deeply about why this harm occurred or how to prevent it in the future” (Hamilton, 2018, p. 4). As I argue below, restorative
responses, at their best, offer holistic, relational, flexible, and educative responses that are more capable of addressing this rhizomatic harm.
Restorative Responses

Recognising the shortcomings of typical punitive responses, some nonconsensual pornography scholars and activists are beginning to discuss the potential of alternative approaches for use in at least a portion of both adult and youth cases of nonconsensual pornography (Crofts & Lievens, 2018; Dodge, 2020; Hamilton, 2018; Henry et al., 2017). Often expressing particular concern with responses to youth cases, these scholars assert that the individualised punishment of youth perpetrators does not address youths’ potential confusion regarding the legal and moral lines between consensual and nonconsensual image distribution or address the systemic issues that deepen and extend the impacts of this act (Albury et al., 2017; Shariff & DeMartini, 2015; Wodda & Panfil, 2018). Instead of narrowly focusing on investigating and punishing a particular perpetrator, restorative responses offer a space: for perpetrators to understand the harm they caused and contribute to a solution; for victims to express the harm they have experienced in all of its complexity and receive personalised supports; and for attention to be given to how bystanders, communities, and institutions may need to transform to address current and future harms of this nature. Rather than simply punishing perpetrators and seeking to deter others through this punishment, restorative processes take a relational and multimodal approach focused on understanding the harms caused and supporting perpetrators and communities to make positive changes to help heal this harm and transform the conditions that may have contributed to it (Braithwaite, 1989; Llewellyn, 2018; Morrison, 2002; Nelund, 2020).
Although practices included under the restorative umbrella vary widely, they are distinct from typical retributive responses in that they meaningfully involve victims and community members in the justice process and focus on “elevating the voice of the victim or survivor”, recognising the impact on the broader community, and reintegrating “all parties back into the community” in a healthy way (Kim,
2018, p. 226). At their best, these practices are formed in alignment with the needs of particular victims, perpetrators, and communities, are alive to the relationality of harm and are responsive to emerging needs throughout the process (Llewellyn, 2019). While restorative justice practices sometimes function as an arm of the traditional criminal justice system, and thus risk exposing communities to the discriminatory beliefs that are often baked into criminal justice systems to various extents (Hamilton, 2018), restorative processes can also take place entirely outside of the justice system through, for instance, school-based processes. Restorative approaches in schools have been shown to be particularly effective for responding to bullying behaviours and discriminatory beliefs (Ahmed & Braithwaite, 2012; Melano, 2014; Morrison, 2002), providing a promising avenue for addressing the rhizomatic harms of victim bullying, blaming, and shaming that often accompany acts of nonconsensual pornography among youth.

Restorative processes offered through schools require a reimagining of typical responses to conflict, as they treat wrongdoing not as a violation of school rules but as a violation of “relationships in the school and wider school community” (Morrison, 2002, p. 6; Russell & Crocker, 2016). Wrongdoers are helped to understand the consequences of their behaviour in a way that develops “relational thinking” (Morrison, 2002, p. 6; Russell & Crocker, 2016). Additionally, beyond a narrow focus on the perpetrator, these practices are alive to the ways institutions, communities, and bystanders can aggravate and extend the harm caused. Restorative processes are, thus, relational in a broadly understood way, as they “extend beyond interpersonal relationships to relations at the level of groups, of institutions, of systems, and of society” (Llewellyn, 2018, n.p.).
This approach allows for a consideration of the rhizomatic harms that have occurred and may implicate a school’s culture, a parental response, and the actions of bystanders. To look at the broader context in this way is not to excuse the behaviours of individual perpetrators, but to take stock of and work to address the complex and systemic factors that create or aggravate harm such as the acceptance of sexist and other discriminatory attitudes within schools and communities.
Restorative responses to youth wrongdoing can provide more fitting approaches than typical punitive-focused responses by considering what harms and what forms of healing actually matter most for the parties involved (Llewellyn, 2019). Rather than assuming, for instance, that the victim is most concerned with obtaining retribution for the acts of the original image distributor—an assumption that often seems to be incorrect in the youth context (Dodge & Lockhart, 2021; Dodge & Spencer, 2018)—a restorative process would allow room for a victim to speak to the multiple sources of harm they are experiencing and for the community of support to consider case-specific responses to these harms. For instance, many youth victims may be less concerned with seeing the initial perpetrator pay for their actions than they are with healing the relational impact to their family caused by a feeling of shame and distrust between them and their parents. The flexibility of restorative responses makes them better equipped to provide diverse services that are contextually relevant. For instance, a restorative response to a case involving sexist epithets might involve victims and perpetrators meeting separately with staff from gendered violence organisations who can help them process and contextualise their experience before sitting down together (Llewellyn, 2019). Because “community members’ views of violence can be unearthed and confronted in restorative processes” (Goodmark, 2019, p. 170), these responses can also better weed out discriminatory beliefs that might be held by those in a victim or perpetrator’s community of support. For instance, victim blaming beliefs held by a victim’s parents, which likely would have gone unacknowledged by a narrowly perpetrator-focused retributive response, could be unearthed and confronted in a restorative process.
Llewellyn describes restorative approaches as “forward-focused”, with effective processes being “educative, problem solving/preventative and proactive” (2019, p. 129). The forward-looking nature of restorative processes allows them to build the capacity to respond to varied, ongoing, and future harms related to the case at hand. Braithwaite provides the example of how a single act of sexist bullying by a male student against a female peer could be met with a restorative response that addresses not only the needs of the two individuals involved, but also looks to transform the culture of sexism between boys and girls that
exists more broadly at their school (Melano, 2014). He explains that responses might include an educative piece that helps all students at the school learn what sexist behaviour looks like, recognise how they have engaged in it in the past, and learn how to speak up against it in the future. Such forward-focused and educative responses are desperately needed in response to nonconsensual pornography, as cases continue to arise in which youth express that their greatest concern in the aftermath of image distribution is that peers and family will think badly of them rather than supporting them (Appellant v Respondent School Board, 2016; Dodge & Lockhart, 2021; R v BMS, 2016). Henry et al.’s research has found that—among all age groups—there is an “urgent need” for community education campaigns and information resources to “meet the information and support needs of victims; encourage ‘witnesses’ or ‘bystanders’ to take action to support a victim and/or challenge the perpetrator; [and to] challenge the culture of victim-blaming that both excuses perpetrator behaviour and prevents victims from seeking assistance” (2017, p. 1). Thus, this kind of multimodal and preventative community engagement would be valuable not only for addressing past harms but also for ameliorating future ones.

Llewellyn explains that restorative responses are commonly misunderstood as “consisting of a single circle or conference” (2019, p. 137). Contrary to this assumption, she explains, these processes should be designed to be “dynamic, layered, and tailored to the needs of the parties and to allow for evolution in their understanding of the issues” (Llewellyn, 2019, p. 137). This dynamic and context-specific approach provides opportunities to address the rhizomatic sources from which harm may emerge in nonconsensual pornography cases. Restorative approaches can address how various institutions (e.g.
schools, workplaces) act to support problematic beliefs that deepen and extend the roots of relational harms through, for instance, transforming a school environment that previously tolerated sexist beliefs and behaviours (Llewellyn, 2019; Melano, 2014). Examples of responses at the institutional level could include reworking the curriculum to include discussions of empathy, social justice issues, and one’s right to bodily autonomy. Institutions could also explore the possibility that their approach to education on nonconsensual pornography is contributing
to, rather than challenging, beliefs that victims of this act are dumb, dangerous, or dirty and thus deserving of exposure (Albury et al., 2017; Naezer & Oosterhout, 2020; Setty, 2018; Wodda & Panfil, 2018). As the harms of nonconsensual pornography emerge on multiple fronts, impactful responses must recognise and nimbly address the multiple sources and types of harm experienced.
Limitations to Responding Restoratively

While restorative responses are promising in that they are more attuned to the rhizomatic nature of harm in youth cases of nonconsensual pornography, there are several barriers that may limit their application in practice. First, as nonconsensual pornography is generally understood as a form of sexual violence (Fairbairn, 2015; McGlynn et al., 2017; Powell & Henry, 2017), debates over the appropriateness of using restorative approaches in response to sexual violence may act as a hurdle to implementation (Bumiller, 2008; Curtis-Fawley & Daly, 2005; Del Gobbo, 2020). Some feminist scholars worry that restorative, rather than criminal, responses to sexual, gender-based, and domestic violence can be unsafe for victims and “may trivialise and re-privatise” these issues (Cameron, 2006; Westmarland et al., 2018, p. 339). On the other hand, a cohort of feminist and restorative justice scholars are enthusiastic about restorative alternatives and believe these practices are capable of giving victims the agency that is often withheld from them in typical criminal processes, while also providing for more meaningful offender accountability and broader cultural change (Del Gobbo, 2020; Karp, 2019; Kim, 2018). Concerns for victim safety can be addressed through well-planned restorative processes with trained facilitators who are alive to the power dynamics of sexual violence and can help make careful decisions regarding how, when, and if to include victims in direct dialogue with perpetrators (Karp, 2019; Nelund, 2020). Additionally, when concerns about safety are based on assumptions about gendered power relations between male perpetrators and female victims, it is important to trouble the assumption that this dynamic is present in all cases of sexual violence.
In terms of nonconsensual pornography among youths, there is evidence that boys may be victimised at similar rates to girls—though they seem much less likely to report and appear to experience extreme harms less often—and there is growing evidence that teen girls are often both offenders and victims of this act (Dodge, 2021; Henry et al., 2020; Naezer & Oosterhout, 2020; Powell et al., 2019; Steeves, 2014). Thus, gender-based power dynamics will not be a concern in all cases of nonconsensual pornography and, even in those cases where this power dynamic is present, it is often possible to address this concern through well-planned processes (Karp, 2019; Nelund, 2020). Finally, in regard to concerns about reprivatising these issues, it can also be argued that the criminal justice response privatises these issues by treating the perpetrator as a singular “bad apple” in need of punishment, rather than addressing the broader social contexts that allow for or even encourage acts of sexual violence such as nonconsensual pornography (Del Gobbo, 2020). Restorative responses, on the other hand, can provide a space in which to name and challenge myths about sexual violence, victim blaming and shaming, and other discriminatory beliefs (Goodmark, 2019; Nelund, 2020). While debates regarding the efficacy of restorative processes in sexual violence cases represent a potential barrier to implementation, it is promising that many restorative justice scholars and practitioners continue to advocate for and find evidence of their suitability to cases of sexual violence (Del Gobbo, 2020; Karp, 2019).

A second limitation to responding restoratively in these cases is the question of victim involvement. It is a core tenet of restorative responses to listen to the needs of victims and ensure that victims are not coerced into being involved with a restorative process (Karp, 2019).
There is some reason to believe that youth victims of nonconsensual pornography will not be interested in engaging in extensive responses to this wrongdoing, as initial findings in this area point to youth wanting assistance in stopping the circulation of their nonconsensually distributed image but not wanting to engage further in responses that continue to bring attention to the violation they experienced (Dodge & Lockhart, 2021; Dodge & Spencer, 2018). As I have argued elsewhere, many youth supporters are eager to offer restorative responses to youths’ acts of nonconsensual pornography, but they are currently limited by the fact
A. Dodge
that most youth do not report their victimisation to adults (Dodge & Lockhart, 2021). Youth victims may avoid seeking support from adults due to: fears that adults will only escalate the problem through their reactions; worries that adults will victim blame and shame; and concerns that victims or perpetrators will be criminalised (Dodge & Lockhart, 2021). Most notably, youth often express that adult shaming/blaming would be more harmful than having their nude image seen by other teens, resulting in attempts to deal with this act at the peer level. While youth might be more willing to engage in a restorative response than a typical criminal justice process, as this would somewhat address their concerns about self and peer criminalisation, groundwork will be needed to create social and legal contexts in which youth feel safe to report their victimisation and feel that they will be supported in the ways that they need, rather than being pulled into a framework for punishment that they do not see as useful.

This groundwork would need to ensure, for instance, that youth victims cannot be charged with child pornography offences for their consensual acts of image sharing and that youth know there are non-judgemental supports available to them that will not add to their fears of being punished themselves or to their feelings of shame and blame (Karaian & Brady, 2020; Shariff & DeMartini, 2015). To various extents in different international jurisdictions, both youths’ consensual and nonconsensual intimate image sharing continues to be framed as child pornography material, which can limit youth victims from reporting this act and from receiving supports that do not demand the involvement of the criminal justice system. Education for youth on this topic should be careful not to imply that criminal sanctions are the only possible response, as this seems to discourage many youths from seeking support due to fears of an “overreaction” from adults (Dodge & Lockhart, 2021).
Given that these barriers may result in many youths never reporting their victimisation to an adult in their lives, several options should be made available for youth victims to gain emotional support and assistance in removing intimate images that may be posted online. For instance, anonymous support lines (e.g. Kids Help Phone in Canada, Childline in the UK) should be equipped to offer youth emotional support regarding the specific impacts of nonconsensual pornography, guidance on how to remove images posted online,
28 Restorative Responses to the Rhizomatic Harm …
and non-judgemental education on the nature of digital forms of sexual violence.

Finally, successful implementation of restorative approaches relies on a sea change in understandings of how to respond to harm and wrongdoing (Archibald & Llewellyn, 2006; Russell & Crocker, 2016). Experts warn that providing restorative approaches in schools requires not only a commitment of time and resources but also a willingness among staff to interrogate their assumption that “punishment and control” is the way to “achieve[…] behavioral compliance” (Morrison, 2002, p. 6). Russell and Crocker (2016) provide a way forward for addressing this barrier by demonstrating how restorative practices can be effectively implemented in schools through an approach that empowers stakeholders and works “from the ground up” to “generate radical change” (p. 210). While the path towards restorative transformation in schools and communities will be challenging, typical punitive responses leave much to be desired and there is reason to believe that overcoming these barriers will be well worth it (Archibald & Llewellyn, 2006; Russell & Crocker, 2016).
Conclusion

The typical punitive and narrowly perpetrator-focused response to nonconsensual pornography treats the harms of this act like the roots of a tree, shooting out from one central source. From this perspective, uprooting the initial nonconsensual distributor adequately deals with the harm caused. Restorative responses, on the other hand, are better equipped to perceive the ways that rhizomatic nodes of harm can shoot off from the original source and take root in a victim’s school, family and community, in a manner that is wholly unaddressed by the uprooting of the initial perpetrator. By listening to the specific needs of victims and engaging more deeply with implicated communities and institutions, restorative responses are able to act nimbly to address harms on several fronts. These responses might include revealing and addressing overt and subtle forms of victim blaming and shaming expressed by the victim’s peers, family, and teachers and implementing whole-school approaches to challenge the acceptance of discriminatory bullying. Forward-focused
(Llewellyn, 2019) processes can help not only to address harms experienced by a particular victim but also to help communities prevent or lessen the impacts of these harms in the future (Burford et al., 2019; Kim, 2018). While restorative responses offer a promising alternative to the typical retributive response, the implementation of these responses may encounter several barriers. Stakeholders will have to be convinced of the efficacy of using restorative practices in response to sexual violence and will need to grapple with transforming their fundamental understanding of how to address harm and wrongdoing. Additionally, even in those jurisdictions where restorative responses are implemented, it may take time to create a context of trust wherein youth will choose to seek the help of adults in their lives. Thus, it is critical that, in addition to implementing restorative options, anonymous and non-judgemental supports are made available to help guide and inform youths who choose not to report their victimisation.

Acknowledgements

The author(s) received no financial support for the research, authorship, and/or publication of this article.
References

Ahmed, E., & Braithwaite, V. (2012). Learning to manage shame in school bullying: Lessons for restorative justice interventions. Critical Criminology, 20, 79–97.
Albury, K., Hasinoff, A. A., & Senft, T. (2017). From media abstinence to media production: Sexting, young people and education. In L. Allen & M. L. Rasmussen (Eds.), The Palgrave handbook of sexuality education (pp. 527–545). Palgrave Macmillan.
Angelides, S. (2013). ‘Technology, hormones, and stupidity’: The affective politics of teenage sexting. Sexualities, 16(5), 665–689.
Appellant v Respondent School Board [2016] CFSRB No. SS16-0001.
Archibald, B., & Llewellyn, J. (2006). The challenges of institutionalizing comprehensive restorative justice: Theory and practice in Nova Scotia. The Dalhousie Law Journal, 29, 297–343.
Bloom, S. (2014). No vengeance for “revenge porn” victims: Unraveling why this latest female-centric, intimate-partner offense is still legal, and why we should criminalize it. Fordham Urban Law Journal, 42(1), 261–289.
Braithwaite, J. (1989). Crime, shame and reintegration. Cambridge University Press.
Bumiller, K. (2008). In an abusive state: How neoliberalism appropriated the feminist movement against sexual violence. Duke University Press.
Burford, G., Braithwaite, J., & Braithwaite, V. (2019). Introduction. In G. Burford, J. Braithwaite, & V. Braithwaite (Eds.), Restorative and responsive human services (pp. 1–19). Taylor and Francis Group.
Cameron, A. (2006). Stopping the violence: Canadian feminist debates on restorative justice and intimate violence. Theoretical Criminology, 10(1), 49–66.
Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 49, 345–391.
Crofts, T., & Lievens, E. (2018). Sexting and the law. In Sexting: Motives and risks in online sexual self-presentation (pp. 119–136). Palgrave Macmillan.
Curtis-Fawley, S., & Daly, K. (2005). Gendered violence and restorative justice: The views of victim advocates. Violence Against Women, 11(5), 603–638.
Del Gobbo, D. (2020). The return of the sex wars: Contesting rights and interests in campus sexual violence reform. In D. Crocker, J. Minaker, & A. Nelund (Eds.), Violence interrupted: Confronting sexual violence on university campuses (pp. 87–116). McGill-Queen’s University Press.
Deleuze, G., & Guattari, F. (1987). A thousand plateaus: Capitalism and schizophrenia. University of Minnesota Press.
Dodge, A. (2016). Digitizing rape culture: Online sexual violence and the power of the digital photograph. Crime, Media, Culture, 12(1), 65–82.
Dodge, A. (2020). Trading nudes like hockey cards: Exploring the diversity of ‘revenge porn’ cases responded to in law. Social & Legal Studies. https://doi.org/10.1177/0964663920935155
Dodge, A. (2021). ‘Try not to be embarrassed’: A sex positive analysis of nonconsensual pornography case law. Feminist Legal Studies. https://doi.org/10.1007/s10691-021-09452-8
Dodge, A., & Lockhart, E. (2021). ‘Young people just resolve it in their own group’: Young people’s perspectives on responses to non-consensual intimate image distribution. Youth Justice. https://doi.org/10.1177/14732254211030570
Dodge, A., & Spencer, D. (2018). Online sexual violence, child pornography or something else entirely? Police responses to non-consensual intimate image sharing among youth. Social & Legal Studies, 27(5), 636–657.
Fairbairn, J. (2015). Rape threats and revenge porn: Defining sexual violence in the digital age. In J. Bailey & V. Steeves (Eds.), eGirls, eCitizens (pp. 229–252). University of Ottawa Press.
Fairbairn, J., & Spencer, D. (2018). Virtualized violence and anonymous juries: Unpacking Steubenville’s “big red” sexual assault case and the role of social media. Feminist Criminology, 13(5), 477–497.
Flynn, A., & Henry, N. (2019). Image-based sexual abuse: An Australian reflection. Women & Criminal Justice. https://doi.org/10.1080/08974554.2019.1646190
Goodmark, L. (2019). Responsive alternatives to the criminal legal system in cases of intimate partner violence. In G. Burford, J. Braithwaite, & V. Braithwaite (Eds.), Restorative and responsive human services (pp. 165–178). Taylor and Francis Group.
Hamilton, A. (2018). Is justice best served cold?: A transformative approach to revenge porn. UCLA Women’s Law Journal, 25(1), 1–44.
Hasinoff, A. A. (2015). Sexting panic: Rethinking criminalization, privacy, and consent. University of Illinois Press.
Henry, N., Flynn, A., & Powell, A. (2018). Policing image-based sexual abuse: Stakeholder perspectives. Police Practice and Research: An International Journal, 19(6), 565–581.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude or sexual imagery. Routledge.
Henry, N., Powell, A., & Flynn, A. (2017). Not just ‘revenge pornography’: Australians’ experiences of image-based abuse. A summary report. RMIT University.
Karaian, L. (2014). Policing ‘sexting’: Responsibilization, respectability and sexual subjectivity in child protection/crime prevention responses to teenagers’ digital sexual expression. Theoretical Criminology, 18(3), 282–299.
Karaian, L., & Brady, D. (2020). Revisiting the “private use exception” to Canada’s child pornography laws: Teenage sexting, sex-positivity, pleasure, and control in the digital age. Osgoode Hall Law Journal, 56(2), 301–349.
Karp, D. (2019). Restorative justice and responsive regulation in higher education: The complex web of campus sexual assault policy in the United States and a restorative alternative. In G. Burford, J. Braithwaite, & V. Braithwaite (Eds.), Restorative and responsive human services (pp. 143–164). Routledge.
Kim, M. E. (2018). From carceral feminism to transformative justice: Women-of-color feminism and alternatives to incarceration. Journal of Ethnic & Cultural Diversity in Social Work, 27(3), 219–233.
Llewellyn, J. (2018, May 2). Realizing the full potential of restorative justice. Policy Options.
Llewellyn, J. (2019). Responding restoratively to student misconduct and professional regulation. In G. Burford, J. Braithwaite, & V. Braithwaite (Eds.), Restorative and responsive human services (pp. 127–142). Taylor and Francis Group.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond ‘revenge porn’: The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46.
McGlynn, C., Rackley, E., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). It’s torture for the soul: The harms of image-based sexual abuse. Social & Legal Studies. https://doi.org/10.1177/0964663920947791
Melano, D. (2014). Radical ideas in justice and regulation. Ronin Films.
Morrison, B. (2002). Bullying & victimisation in schools: A restorative justice approach. Trends & Issues in Crime & Criminal Justice, Report no. 219, 1–7.
Naezer, M., & van Oosterhout, L. (2020). Only sluts love sexting: Youth, sexual norms and non-consensual sharing of digital sexual images. Journal of Gender Studies, 30(1), 79–90.
Nelund, A. (2020). New policies, old problems? Problematizing university policies. In Violence interrupted (pp. 223–242). McGill-Queen’s University Press.
Powell, A., & Henry, N. (2017). Sexual violence in a digital age. Palgrave Macmillan.
Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse. In W. S. DeKeseredy & M. Dragiewicz (Eds.), Handbook of critical criminology (pp. 305–315). Routledge.
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian adults. Computers in Human Behavior, 92, 393–402.
Powell, A., Stratton, G., & Cameron, R. (2018). Digital criminology: Crime and justice in digital society. Routledge.
R v BMS [2016] NSCA 35.
Rau, R. (2015). No place to hide: The Rehtaeh Parsons story.
Ringrose, J., & Harvey, L. (2015). Boobs, back-off, six packs and bits: Mediated body parts, gendered reward, and sexual shame in teens’ sexting images. Continuum, 29(2), 205–217.
Russell, S., & Crocker, D. (2016). The institutionalisation of restorative justice in schools: A critical sensemaking account. Restorative Justice, 4(2), 195–213.
Segal, M. (2015). Independent review of the police and prosecution response to the Rehtaeh Parsons case. Murray D Segal Professional Corporation.
Setty, E. (2018). Young people’s attributions of privacy rights and obligations. International Journal of Communication, 12, 4533–4552.
Shariff, S., & DeMartini, A. (2015). Defining the legal lines: eGirls and intimate images. In J. Bailey & V. Steeves (Eds.), eGirls, eCitizens (pp. 281–305). University of Ottawa Press.
Steeves, V. (2014). Young Canadians in a wired world, phase III: Sexuality and romantic relationships in the digital age. MediaSmarts.
Vitas, L., & Negler, L. (2021). Public responses to online resistance: Bringing power into confrontation. In A. Powell, A. Flynn, & L. Sugiura (Eds.), The Palgrave handbook of gendered violence and technology (pp. 691–708). Palgrave Macmillan.
Westmarland, N., McGlynn, C., & Humphreys, C. (2018). Using restorative justice approaches to police domestic violence and abuse. Journal of Gender-Based Violence, 2(2), 339–358.
Wodda, A., & Panfil, V. R. (2018). Insert sexy title here: Moving toward a sex-positive criminology. Feminist Criminology, 13(5), 583–608.
29 Disrupting and Preventing Deepfake Abuse: Exploring Criminal Law Responses to AI-Facilitated Abuse

Asher Flynn, Jonathan Clough, and Talani Cooke
Introduction

We are at a critical turning point in society, where the science of Artificial Intelligence (AI) has emerged from laboratories, theory and fiction, into every aspect of human life. Governments, businesses and the public health sector—to name just a few—are using AI to improve efficiency and develop new solutions to complex human problems. Although

A. Flynn (B) School of Social Sciences, Monash University, Clayton, VIC, Australia. e-mail: [email protected]
J. Clough · T. Cooke Faculty of Law, Monash University, Clayton, VIC, Australia. e-mail: [email protected]
T. Cooke e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_29
the potential benefits of well-designed and appropriately deployed AI-technologies—trained to understand the world and simulate human task proficiency and learning capabilities—are immense, there is an increasing realisation that such technologies may not always be used for the betterment of society. The proliferation of readily available AI-technologies means that tools may be acquired and used by individuals or groups for abusive and harmful purposes. The potential dangers and consequences of the misuse and abuse of AI-technologies, and their role in transforming the landscape of technology-facilitated abuse, are demonstrated in the rapid emergence of deepfakes. Deepfakes are generated through machine learning, which creates a realistic, but entirely fabricated image or video that can include virtual performances, face swapping, lip-synching or voice synthesis. The term derives from ‘deep’ learning and the creation of ‘fake’ content. In 2017, the first documented example of amateur deepfakes appeared when a Reddit user uploaded several AI-manipulated images transposing female celebrities’ faces onto the bodies of pornography actors (Paris & Donovan, 2019). Fast forward to 2019, and an app using AI-technologies was released allowing users to create convincing deepfakes of any female—a colleague, friend, partner or stranger—by digitally replacing their clothes with breasts and a vulva (Flynn, 2019). The app eliminated the need for victims and perpetrators to have any kind of relationship or interaction. To create a realistic pornographic deepfake, all a perpetrator required was an image. In early 2020, an AI-powered bot using the technologies developed by the 2019 app was made freely available on the messaging platform Telegram (Sensity, 2020). A review in July 2020 identified over 104,800 manufactured deepfake images of different women that had been created and shared publicly using this technology (Sensity, 2020).
Much of the development of AI-technologies has been through open-source collaborations, including the first deepfake algorithm (Schnick, 2020). In this context, the rapid advancement and commercialisation of AI-technologies has broadened the accessibility of potentially abusive tools, beyond those with technological expertise, to the easy reach of the general public. AI-technologies have essentially eliminated the need for
victims and abusers to have any kind of personal relationship or interaction, as perpetrators are able to create deepfakes by using photos and videos of their targets found, for example, on social media, and superimposing these onto existing pornographic videos (Chesney & Citron, 2019, p. 1763; Citron, 2019, p. 1922). This machine learning technology is largely available through accessible desktop tools and tutorials online explaining how to ‘create realistic face swapping videos’, allowing perpetrators to make deepfake images of celebrities, people they know and anyone whose imagery they can access (Chesney & Citron, 2019, p. 1763; Citron, 2019, p. 1922). With AI experts declaring that open-source technologies producing manipulated imagery ‘impossible to detect as fake’ will be easily accessible by 2021 (Stankiewicz, 2019), the pool of potential abusers and victims is substantially expanding. The pervasiveness of AI-Facilitated Abuse (AIFA), and deepfakes in particular, is further heightened in light of the COVID-19 pandemic and the changing social and economic environments in which we now live. Between March and May 2020, the Australian Office of the eSafety Commissioner received more than 1000 reports of image-based abuse (IBA), a 210% increase on average weekly reports in 2019 (Powell & Flynn, 2020). Similarly, double the number of IBA cases were reported in England in April 2020 compared with April 2019 (Powell & Flynn, 2020). Italy experienced similar increases in reports of IBA during COVID-19 lockdowns (see Chapter 27, this volume). In addition to social restrictions leading to a probable increase in the creation and sharing of intimate images (consensually and non-consensually), this rise may also be connected to high rates of family violence, sextortion scams and financial pressures during lockdown periods (Boxall et al., 2020; Flynn et al., 2021; Powell & Flynn, 2020).
Anecdotal concerns have also been raised by academics and students on social media platforms around a spike in the creation of deepfakes as an outcome of increased online teaching, with screenshots of students taken in webinars being doctored into pornography and uploaded online. Our increased reliance on digital communications for socialising, work, education and intimacy, exacerbated by the changing lifestyle requirements imposed by COVID-19 restrictions, in addition to growing rates of victimisation and new opportunities to commit abuse, highlights the clear need
for improved knowledge on deepfakes and potential prevention and intervention strategies. The challenges in regulating deepfakes raise questions around digital criminality and citizenship, ethics, measurements of harm and social justice participation. In late 2019, some US states began introducing deepfake laws, giving victims the ‘right’ to engage in civil action, but leaving those who cannot afford litigation unable to access justice. In England and Wales, IBA laws do not include deepfakes, with some commentators noting this decision was made on the (erroneous) basis that victims do not experience the same harm as when the image is ‘real’ (Flynn & Henry, 2019a; Rackley et al., 2021). Some laws require evidence of an intention to harm the victim (e.g. Criminal Justice and Courts Act 2015 (UK) s.33(2); 9A Wash Rev Code § 86.010(1) (2020)), which means that when an image has been created for personal use or covert circulation without the victim’s knowledge, it may be near impossible to prove intended harm. Responses to deepfakes are further compounded by inconsistency among Internet intermediaries’ actions—platforms (e.g. Facebook), cloud computing (e.g. Google Drive), search engines (e.g. Yahoo) and technical providers (e.g. Telstra). In 2019, a deepfake depicting the US House Speaker, Nancy Pelosi, drunk while making a speech was deleted from YouTube on the basis that it violated acceptable content policies. Facebook, while acknowledging the video was fake and causing harm, did not prohibit it, instead stating: ‘we don’t have a policy that stipulates the information you post must be true’ (Harwell, 2019). Such inconsistency in actions shows key vulnerabilities in how intermediaries prevent or respond to deepfakes. It also suggests a neglect of ethical and social responsibilities to the public. In this regard, many laws and regulatory mechanisms appear to be failing to keep pace with deepfakes and their harms.
In this chapter, we begin by briefly pointing to the potential harms of deepfake abuse, before discussing current criminal law responses to deepfake abuse in Australia, arguing that these are not sufficient to adequately address, prevent and disrupt AI-technologies from causing harm. We conclude by providing recommendations for future, multifaceted responses to deepfake abuse and the need for further research in this space.
Deepfake Harms

A recently published review in the United Kingdom (UK) identified 20 ways that AI could be used to facilitate crime over the next 15 years. It ranked deepfakes as the most serious social threat (Caldwell et al., 2020). While little research has explored the pervasiveness and specific harms of deepfakes to date, Henry et al.’s (2020) research on IBA in the UK, Australia and New Zealand found 14.1% (n = 864) of respondents aged 16–84 years (n = 6109) had experienced someone creating, distributing or threatening to distribute a digitally altered image representing them in a nude or sexual way. Victimisation rates were higher for those identifying as lesbian, gay or bisexual (26.7% cf. heterosexual: 12.6%), as a person of colour (19.2% cf. white: 12.3%), or as having a disability (39.1% cf. no disability: 7.0%; see also Rackley et al., 2021). Other research (Melville, 2019) estimates that around 97% of deepfakes uploaded online are pornographic, and these range from imagery of celebrities to people known or unknown to the abusers (Chesney & Citron, 2019; Melville, 2019; Meskys et al., 2020). One example of this was identified by Chesney and Citron (2019, pp. 1172–1173), who documented deepfake cases from a sub-Reddit forum in which a male perpetrator recruited someone to create a deepfake of a young woman he went to school with, providing them with 308 photos he had lifted from her social media account to facilitate the abusive imagery. Abusers can create deepfake imagery to exploit, humiliate, damage and inflict harm and pain (Chesney & Citron, 2019, pp. 1772–1773; Henry & Flynn, 2020, p. 1110; Powell et al., 2019). At the time of writing, there is little research into the direct and consequential harms that may arise from deepfake abuse.
However, many of the harms identified in studies examining digital abuse and IBA specifically are highly applicable to deepfake victimisation (Bates, 2017; Citron, 2019; Henry et al., 2019b; McGlynn et al., 2019, 2020; Powell, Scott, et al., 2020). Research on the harms of digital abuse has found high rates of mental health concerns among victims, including anxiety, depression and self-injury (Bates, 2017; Flynn & Henry, 2019a; McGlynn et al., 2019, 2020; Powell et al., 2018). In their Australian survey examining the impact of IBA victimisation and perpetration involving 4274
respondents, Henry et al. (2019a, p. 41) reported that victims of IBA were almost twice as likely as non-victims to experience high levels of emotional distress. Victims may also experience humiliation, fear and intimidation from deepfake abuse (Chesney & Citron, 2019; Citron & Franks, 2014), and any feelings of shame and embarrassment may be compounded by the reactions of their friends and family (Bates, 2017). This emotional distress may extend to physiological effects such as an inability to work, concentrate or eat (Citron, 2019). Victims may also turn to negative coping mechanisms like alcohol or drug use to deal with their victimisation (Bates, 2017; McGlynn et al., 2020). Alternatively, they may be constantly on alert, in a state of what Citron describes as ‘visceral fear’ (2019, p. 1925), with uncertainty about who may see the deepfake imagery or where it may be reposted (Henry & Flynn, 2019; McGlynn et al., 2020). Once a deepfake exists on the internet, it may be viewed by potentially millions of people and difficult, if not completely impossible, to eliminate online (Bates, 2017; Cecil, 2014; Chesney & Citron, 2019; Henry & Flynn, 2020; Henry et al., 2019a). Even if the victim is able to disprove the authenticity of the video, it may be difficult to remedy any harms that have already occurred (Chesney & Citron, 2019). McGlynn et al. (2020, p. 14) also note that victims may incidentally experience further harms in their personal and professional lives by ‘hyper-analysing all of their social interactions’ in trying to establish who may be aware or have seen the deepfake. As a result, some victims may wish to withdraw from the online world entirely by deleting their social media (Henry et al., 2020). McGlynn et al. (2020, pp. 14–15) found that many victims of IBA experience the internet as a ‘dangerous place’, where they cannot feel safe, and it becomes a space of ‘re-traumatisation’.
This also extends to offline spaces, where victims may fear for their personal safety. In their Australian study, Henry et al. (2019a, p. 41) found that 39% of victims who had their image distributed were fearful for their safety. Other research has found victims may experience an increased risk of offline stalking or harassment where their contact details are shared alongside the deepfake (Bates, 2017; Citron & Franks, 2014; Henry et al., 2020). For example, Citron and Franks’ (2014, pp. 350–351) US study involving 1244 victims of non-consensual pornography revealed
that 20% had their telephone numbers or email addresses posted alongside the naked image and over 50% of victims’ social media accounts were linked alongside the intimate images (see also, Bates, 2017; Citron, 2019; McGlynn & Rackley, 2017). While the nature and extent of harms experienced by victims are not uniform, and may vary between different groups (McGlynn et al., 2020; Powell, Scott, et al., 2020), the research to date suggests they are likely to be profound, ranging from physical and psychological harms to collateral consequences which may pervade a victim’s professional and social lives, as well as their reputation (Chesney & Citron, 2019). Concerningly, as the tools to create deepfakes become more accessible, the risk of people experiencing such harm increases, and research indicates this is likely to be amplified where victims are marginalised by factors such as gender, sexual orientation, Indigeneity, age or ability (Office of the eSafety Commissioner, 2019, 2020; Powell et al., 2018; Powell, Flynn, et al., 2020; Powell, Scott, et al., 2020). This further highlights the need to examine appropriate responses.
Deepfakes and the Criminal Law

While the AI-technologies to create deepfakes are new and ever evolving, the use of manipulated imagery to harass and offend has been around for as long as technology has permitted. Given the severity of harms that may be caused by deepfakes, a legal response to this form of abuse is warranted. There are, however, very few civil remedies that could apply to such conduct. For example, Australia has no tort of privacy, even if it could be shown that a privacy interest had been breached (ABC v. Lenah Game Meats [2001]). Copyright infringement presents considerable challenges in identifying who has copyright in the image (Henry et al., 2019a) or whether using an image to train a deepfake algorithm is necessarily an infringing use at all. While defamation might be another alternative, it suffers from the same problems as all civil proceedings; it would require considerable financial means and face significant forensic challenges. Civil litigation would also likely prolong the harm associated
with the image and, for the vast majority of cases, is unlikely to be an effective response (see also, Henry et al., 2019a). While criminal prosecution of deepfake abuse also faces significant challenges (Flynn & Henry, 2019a; Henry et al., 2018), the criminal law provides a public enforcement response, which conveys the condemnation appropriate to this form of abusive conduct. In the face of new challenges, it can be tempting to enact new offences. However, there is a risk that responses to specific technologies will soon be rendered outdated by newer technologies. There is also the danger of overcriminalisation, with yet another criminal offence added to an already crowded field. It is also arguable that the misuse of deepfakes aligns with existing forms of IBA, and thus should be prosecuted under IBA offences, where possible. In broad terms, the misuse of deepfakes may fall under three categories of offence: harassment, IBA and obscenity/indecency. Below we consider how well current Australian criminal laws capture deepfake abuse, before arguing why further reform, including within and beyond criminal law, is needed.
Harassment

Harassment may be defined as 'a pattern of behaviour or course of conduct pursued by an individual designed to intimidate and distress another individual' (Australian Law Reform Commission, 2014, p. 211). In Australia, there is no general online harassment offence at the state and territory level, and such conduct is typically addressed by the federal offence of using a carriage service 'in a way (whether by the method of use or the content of a communication, or both) that reasonable persons would regard as being, in all the circumstances, menacing, harassing or offensive' (Criminal Code Act 1995 (Cth) s. 474.17). The terms 'menacing', 'harassing' and 'offensive' are not defined in the legislation and are to be given their 'ordinary' meanings. For example, the ordinary meaning of 'menacing' is 'uttering or holding out threats', while 'harassing' connotes 'troubling or vexing by repeated attacks' (Monis v R; Droudis v R [2013] 249 CLR 92, p. 156, Hayne J). Importantly, it is not necessary for a person to actually be menaced, harassed or offended,
29 Disrupting and Preventing Deepfake Abuse …
so long as a reasonable person would regard the conduct as such. This means a person can be found guilty of the offence, regardless of the harm or impact actually experienced by the victim; a key criticism of many IBA laws operating outside Australia (see, Flynn & Henry, 2019a; Henry et al., 2020; Rackley et al., 2021). In addition, the fault element for this offence includes recklessness; that is, where the person was ‘aware of a substantial risk that the use was [menacing or harassing] and, having regard to all the circumstances known to the accused, it was unjustifiable to take that risk’ (Monis v R; Droudis v R, p. 157, Hayne J). This provision has been applied to threatening messages posted on Facebook pages (Agostino v Cleaves [2010]), menacing emails (R v Ogawa [2009]), and defacement of Facebook memorial pages (R v Hampson [2011]), and may be apt to encompass the equivalent use of deepfakes. For example, an image of a person being physically abused could be seen as menacing, while publication of sexual images of the victim could be seen as harassing. In either case, an offence may be made out where a reasonable person would regard the conduct as menacing or harassing, and the accused was at least reckless as to that risk. Repeated deepfake abuse may also constitute ‘stalking’; that is, ‘a course of conduct in which one individual inflicts on another repeated unwanted intrusions and communications, to such an extent that the victim fears for his or her safety’ (Purcell et al., 2004, p. 157). Stalking offences are found in all Australian jurisdictions, and are both more serious and more limited, focusing on conduct that may cause harm or fear. Under the Victorian provision, it is an offence to engage in a course of conduct ‘with the intention of causing physical or mental harm to the victim, including self-harm, or of arousing apprehension or fear in the victim for his or her own safety or that of any other person’ (Crimes Act 1958 (Vic) s.21A). 
Online conduct is specifically included, in particular, 'publishing on the Internet or by an e-mail or other electronic communication to any person a statement or other material: (i) relating to the victim or any other person; or (ii) purporting to relate to, or to originate from, the victim or any other person'. This provision has been applied to the posting of photographs of the victim on pornographic websites (R v Wilson [2012]), and is clearly capable of applying to the publication and/or distribution of deepfakes
whether they are of the victim or any other person, such as a deepfake of the victim's partner engaged in sexual activity with another person. Although the offence requires an intention to cause harm or to arouse fear, the accused is taken to have the necessary intention if he or she either knew, or in all the circumstances 'ought to have understood that engaging in a course of conduct of that kind would be likely to cause such harm or arouse such apprehension or fear' (Crimes Act 1958 (Vic) s. 21A). A limitation of these offences is that they may not apply to single postings of images. Stalking, in particular, requires the accused to have engaged in a 'course of conduct'; in other words, a single posting or communication of a deepfake image is not sufficient. In addition, because of the requirement that the conduct must be menacing or harassing, or be intended to cause harm or fear, these offences are unlikely to apply where the accused has no relationship with, or awareness of, the victim (e.g. using celebrity images), and/or where the victim is unaware of the posting (see Henry & Flynn, 2019). This significantly limits the prosecution of deepfake abuse, given the potential for perpetrators and victims to have no personal relationship prior to the image being created, and for some victims to be unaware of the existence of the image, particularly if it is solely created for personal use or circulated surreptitiously among a deviant, private online community (Flynn & Henry, 2019a; Henry & Flynn, 2019).
Image-Based Abuse

On its face, the misuse of deepfake images as a form of harassment seems closest to IBA (Flynn & Henry, 2019b). This conduct involves the nonconsensual creation/taking and distribution of 'intimate images', as well as threats to distribute nude or sexual images. All Australian jurisdictions except Tasmania have now enacted offences which criminalise the distribution of 'invasive' or 'intimate' images without consent. Some also criminalise the threat to distribute. Looking at the Victorian provision by way of example, it is an offence for a person to intentionally distribute an intimate image of another person to a person other than the person
depicted, where the distribution of the image is contrary to community standards of acceptable conduct (Summary Offences Act 1966 (Vic) s. 41DA). The Victorian provision defines an 'intimate image' to be a moving or still image that depicts a person engaged in sexual activity in a manner or context that is sexual, or depicts the genital or anal region or female breasts (Summary Offences Act 1966 s. 40). 'Distribute' is defined very broadly and may be by any person, not just the person who was involved in the making of the initial image. In addition to distribution being an offence, it is also an offence in Victoria to threaten to distribute an intimate image (Summary Offences Act 1966 s. 41DB). For all offences, there is no need to prove that harm was caused as a result of the act. While it would seem that a sexual deepfake image could fall within the definition of 'intimate image', there are some key challenges in applying existing definitions in IBA law to deepfakes. The wording of the legislation reflects the typical context of IBA; that is, private sexual images that have been captured and/or distributed without consent. However, some of the factors relevant to community standards arguably do not apply to deepfakes (Summary Offences Act 1966 s. 40). In particular, 'the circumstances in which the image was captured' does not apply, as a deepfake image is created, not captured. Similarly, 'the degree to which the distribution of the image affects the privacy of a person depicted in the image' is more challenging in the context of deepfakes, as 'that which is true (the person's face) is not private and that which is private (the sex act) is not true' (Citron, 2019, p. 1939; Kugler & Pace, 2021, p. 3). Although privacy interests might be involved in the acquisition of the images used to generate the deepfake, in many instances these may be publicly available.
Even where this is not the case, it is arguable that the distribution of a computer-generated image, albeit one which draws upon existing images, does not impact on the privacy interests of the person depicted. An analogy may be drawn with an artist creating a hyper-realistic sexual painting based on observations of the subject. Such an image may be upsetting to the subject, and it may even be offensive or harmful, but arguably it does not infringe the subject's privacy under these provisions.
These factors are inclusive rather than exhaustive, and it may still be found that the distribution is contrary to community standards on the basis of other factors. For example, it may be argued that the 'nature and content of the image' is such that distribution without consent is contrary to community standards of acceptable conduct. Nonetheless, consideration should be given to clarifying or extending the application of these provisions to deepfake images, as has occurred in some US jurisdictions (Kugler & Pace, 2021, p. 3).
Offensive Images

Independent of the question of whether they are harassing or constitute IBA, some online communications may be so offensive that they offend community standards. Such communications may, in limited instances, be prosecuted under provisions which criminalise objectionable material, such as the Commonwealth offence of using a carriage service to menace, harass or cause offence. As discussed above, the word 'offensive' is not defined in the Act and is to be given its ordinary meaning. Because the term is used in conjunction with 'menace' and 'harass', 'it is not sufficient if the use would only hurt or wound the feelings of the recipient, in the mind of a reasonable person' (Monis v R; Droudis v R, p. 201, Crennan, Kiefel, Bell JJ). Instead, the use must be calculated or likely to arouse 'significant anger, significant resentment, outrage, disgust, or hatred in the mind of a reasonable person in all the circumstances' (Monis v The Queen [2011], p. 39). In determining whether the use is offensive, the surrounding circumstances may be taken into account, including 'the standards of morality, decency and propriety generally accepted by reasonable adults' (Criminal Code Act 1995 (Cth) s. 473.4). In the case of deepfakes, the argument may be made that the image, for example, one depicting an adult in a sexual context, is not in itself offensive. However, relevant circumstances might include that the image was created from images of the person, used without their consent, to create and distribute a sexualised image. It may be said that in all the circumstances, such a use of the carriage service is offensive (R v McDonald and DeBlaquiere [2013], para.
113). As discussed above, it is not necessary that the accused intended the use to be offensive, or knew that it was; recklessness is sufficient. Some states have specific offences that criminalise the distribution of objectionable material (Crimes Act 1900 (NSW) s. 578C; Criminal Code Act 1899 (Qld) s. 228). For example, in Victoria, it is an offence to use an 'on-line information service' to publish, transmit or make available 'objectionable material' (Classification (Publications, Films and Computer Games) Enforcement Act 1995 (Vic) s. 57(1)). Objectionable material is defined by reference to publications, films and computer games, but could conceivably apply to deepfakes (Classification (Publications, Films and Computer Games) Enforcement Act 1995 s. 3). For example, a sexual image created without the person's consent could be said to depict matters of sex in a manner that is likely to cause offence to a reasonable adult. While the majority of cases concerned with these offences relate to the transmission of child sexual abuse material, they have been applied in circumstances which could be seen as analogous to the misuse of deepfakes, such as an objectionable Facebook page (Snashall-Woodhams, 2012) and the publication of indecent photographs on Facebook (Usmanov v R [2012]). It is therefore possible these offences could be applied to deepfake abuse. A general limitation of all of these offences in relation to deepfakes, however, is that they typically require a level of distribution or publication. It is not generally an offence to possess an image, other than in limited cases such as child sexual abuse material (Clough, 2015). Nor, except in limited cases such as visually capturing a person's genital or anal region without consent (Summary Offences Act 1966 s. 41B), do they apply to the creation of material. Therefore, in cases where the perpetrator has created a deepfake for personal use, these offences may not apply.
Even if, and it is a big 'if', appropriate criminal offences apply to deepfake abuse, criminal prosecutions will face significant forensic and legal challenges. In particular, the images may well originate from, or be distributed, outside the jurisdiction. Consideration may therefore be given to whether such offences should apply extraterritorially, as is the case with the offence of stalking (DPP v Sutcliffe [2001];
Crimes Act 1958 (Vic) s. 21A; see Henry et al., 2018 for a discussion of similar jurisdictional challenges in the context of IBA law).
Conclusion: Recommendations and Implications

Deepfakes are an emergent form of abuse with the potential to generate significant harm. While existing criminal law in Australia provides a way to prosecute deepfake abuse, as we have outlined in the sections above, there are significant limitations associated with the current laws that point to the need for additional legal responses specific to deepfake abuse. This is particularly so in relation to recently introduced IBA laws, which fail to cover the creation of intimate imagery and require distribution, not merely possession, of such imagery to fit within the legal provisions. While this suggests new laws are required to deal with deepfake abuse, it is important that any laws are introduced alongside a multifaceted response, which incorporates regulatory and corporate responsibility responses, education and prevention campaigns, training for those tasked with investigating and responding to deepfake abuse, and technical solutions that seek to disrupt and prevent the abuse. Research suggests that deepfake abuse is pervasive and increasing (Melville, 2019; Schick, 2020; Sensity, 2020) across a range of different relational contexts, including in intimate or previously intimate relationships, as a form of peer-to-peer harassment, and within stranger scenarios. Research also suggests the diversity of those affected by deepfakes is wider than previously thought, moving beyond celebrity targets to affect marginalised communities, such as young people, Aboriginal and Torres Strait Islander people, sexuality, romantically and gender diverse people, and individuals who identify as having a disability (see e.g. Henry et al., 2020).
It is therefore vital that responses to deepfake abuse seek to reflect the lived experience and needs of diverse populations on the basis of gender, cultural background, sexuality, disability and age, as well as taking into account the varied situational and relational contexts in which deepfake abuse can occur.
Across Australia, there is a need for consistency in legal responses to deepfake abuse, both to address some of the limitations created by cross-jurisdictional offending and to better recognise the harms it causes. While we have not examined civil law responses in this chapter, there is a role for these, such as the civil penalties scheme operating in Australia, which gives the Office of the eSafety Commissioner the power to order the removal of deepfake imagery from online platforms (see, Henry et al., 2020). As Henry et al. (2019a) have argued in the context of IBA more generally, civil law responses may also come from strengthening privacy laws to give victims the option of pursuing civil lawsuits, should they wish to do so. There is also a clear role for Internet intermediaries to take on a higher level of corporate responsibility in detecting, preventing and responding to deepfakes. Several platforms have introduced policies restricting or prohibiting the sharing or uploading of deepfakes: for example, Reddit (2021) has banned involuntary pornography, PornHub (2016) prohibits 'any content which impersonates another person', and Facebook (2021) does not permit any sharing of 'non-consensual content'. Others, however, have been slower to respond or require significant action on the part of victims to secure some form of justice or response. It is vital that online platforms and websites work to reduce the harms victims experience from deepfakes. This includes moving beyond requiring victims or bystanders to report the abuse, to also implementing strategies to prevent and automatically identify deepfake content, for example, through photo-matching or hashing technologies (Citron, 2019; Henry & Flynn, 2020; Henry et al., 2019a), or immutable lifelogs (see Chesney & Citron, 2019, pp. 1814–1817).
Internet intermediaries could also provide more information and guidance to their users on deepfakes, including education on how to detect them through reverse Google image searches (Koenig, 2019), and where to seek support (Henry & Flynn, 2020). Legal and other regulatory responses must also be supported with education and prevention campaigns on the harms of deepfake abuse (Henry & Flynn, 2019; Kirchengast & Crofts, 2019), as well as training for police, criminal justice agencies and sector workers, such as those working in the domestic and family violence, sexual assault and specialist diversity service sectors, public health, victim support and behavioural change. This
includes training on the nature, seriousness and range of response options available to address deepfake abuse, including how deepfakes may fall within existing legal frameworks (Delfino, 2019) and how to support victims who report experiencing deepfake abuse (Henry et al., 2019a). There is also a need for further research into deepfakes, including empirical research with victims, perpetrators and bystanders to better understand deepfake abuse, its harms, nature and pervasiveness, and potential barriers to education and prevention. Comparative research across different country contexts on how best to disrupt, prevent and respond to deepfake abuse and AIFA more broadly is also needed, particularly in relation to legal, technical, social and regulatory responses. It is also vital that understandings of AI and digital criminality are explored, and that connections between AIFA and systemic drivers of inequality, such as misogyny, homophobia, racism and ableism, are examined. Deepfakes are a serious threat facing society, and as we have demonstrated throughout this chapter, Australia's criminal law frameworks are lacking in their capacity to effectively address and prevent this form of abuse. AI technologies are not to blame for the abuse in which they are implicated, but technology-facilitated abuse is being weaponised through AI, and it is vital that further research is undertaken to explore the role AI plays in the complex web of factors that inform the commission of deepfake abuse and its associated harms.

Acknowledgements The authors wish to acknowledge the Monash Data Futures Institute for funding the project that informed this chapter.
References

ABC v Lenah Game Meats (2001) 208 CLR 199.
Agostino v Cleaves [2010] ACTSC 19.
Australian Law Reform Commission. (2014, March). Serious invasions of privacy in the digital era (Discussion Paper No. 80). https://www.alrc.gov.au/wp-content/uploads/2019/08/whole_dp80.pdf.
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42. https://doi.org/10.1177/1557085116654565
Boxall, H., Morgan, A., & Brown, R. (2020). The prevalence of domestic violence among women during the COVID-19 pandemic. Statistical Bulletin, 28. https://www.aic.gov.au/publications/sb/sb28.
Caldwell, M., Andrews, J. T. A., Tanay, T., & Griffith, L. D. (2020). AI-enabled future crime. Crime Science, 9(14). https://doi.org/10.1186/s40163-020-00123-8.
Cecil, A. L. (2014). Taking back the internet: Imposing civil liability on interactive computer services in an attempt to provide an adequate remedy to victims of nonconsensual pornography. Washington and Lee Law Review, 71(4), 2513–2556.
Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753–1820. https://doi.org/10.2139/ssrn.3213954
Citron, D. K. (2019). Sexual privacy. The Yale Law Journal, 128(7), 1870–1960.
Citron, D. K., & Franks, M. A. (2014). Criminalizing revenge porn. Wake Forest Law Review, 49, 345–391.
Classification (Publications, Films and Computer Games) Enforcement Act 1995 (Vic).
Clough, J. (2015). Possession of "extreme" pornography: Where's the harm? Canadian Journal of Law and Technology, 13(2), 131–169.
Crimes Act 1900 (NSW).
Crimes Act 1958 (Vic).
Criminal Code Act 1899 (Qld).
Criminal Code Act 1995 (Cth).
Criminal Justice and Courts Act 2015 (UK).
Delfino, R. A. (2019). Pornographic deepfakes: The case for federal criminalization of revenge porn's next tragic act. Fordham Law Review, 88(3), 887–938.
DPP v Sutcliffe [2001] VSC 43.
Facebook. (2021). Community standards 14: Adult nudity and sexual activity. https://www.facebook.com/communitystandards/adult_nudity_sexual_activity.
Flynn, A. (2019, March 12). Image-based abuse: The disturbing phenomenon of the deepfake. The Lens. https://lens.monash.edu/@politics-society/2019/03/12/1373665/image-based-abuse-deep-fake.
Flynn, A., & Henry, N. (2019a). Image-based sexual abuse. In Oxford research encyclopedia of criminology and criminal justice. Oxford University Press. https://doi.org/10.1093/acrefore/978019264079.013.534.
Flynn, A., & Henry, N. (2019b). Image-based sexual abuse: An Australian reflection. Women & Criminal Justice, 31(4), 313–326. https://doi.org/10.1080/08974454.2019.1646190
Flynn, A., Powell, A., & Hindes, S. (2021). Technology-facilitated abuse: A survey of support service stakeholders (Research Report 02/2021). Melbourne: ANROWS. https://www.anrows.org.au/publication/technology-facilitated-abuse-a-survey-of-support-services-stakeholders/
Harwell, D. (2019, May 24). Facebook acknowledges Pelosi video is fake but declines to delete it. Washington Post. https://www.washingtonpost.com/technology/2019/05/24/facebook-acknowledges-pelosi-video-is-faked-declines-delete-it/.
Henry, N., & Flynn, A. (2019). Image-based sexual abuse: Online distribution channels and illicit communities of support. Violence Against Women, 25(16), 1932–1955. https://doi.org/10.1177/1077801219863881
Henry, N., & Flynn, A. (2020). Image-based sexual abuse: A feminist criminological approach. In A. Bossler & T. Holt (Eds.), The Palgrave handbook of international cybercrime and cyberdeviance. Palgrave Macmillan. https://doi.org/10.1007/978-3-319-78440-3_47.
Henry, N., Flynn, A., & Powell, A. (2018). Policing image-based sexual abuse: Stakeholder perspectives. Police Practice and Research: An International Journal, 19(6), 565–581. https://doi.org/10.1080/15614263.2018.1507892
Henry, N., Flynn, A., & Powell, A. (2019a). Responding to 'revenge pornography': Prevalence, nature and impacts. Australian Criminology Research Council. https://www.aic.gov.au/sites/default/files/2020-05/CRG_08_15-16-FinalReport.pdf.
Henry, N., Flynn, A., & Powell, A. (2019b). Image-based abuse: Victimisation and perpetration of non-consensual sexual or nude imagery. Trends and Issues in Crime and Justice, 572.
Henry, N., McGlynn, C., Flynn, A., Johnson, K., Powell, A., & Scott, A. J. (2020). Image-based sexual abuse: A study on the causes and consequences of non-consensual nude and sexual imagery. Routledge.
Kirchengast, T., & Crofts, T. (2019). The legal and policy contexts of 'revenge porn' criminalisation: The need for multiple approaches. Oxford University Commonwealth Law Journal, 19(1), 1–29. https://doi.org/10.1080/14729342.2019.1580518
Koenig, A. (2019). "Half the truth is often a great lie": Deep fakes, open source information, and international criminal law. American Journal of International Law, 113, 250–255.
Kugler, M., & Pace, C. (2021). Deepfake privacy: Attitudes and regulation. Northwestern University Law Review, 116. Advance online publication. https://doi.org/10.2139/ssrn.3781968.
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37(3), 534–561. https://doi.org/10.1093/ojls/gqw033
McGlynn, C., Rackley, E., Johnson, K., Henry, N., Flynn, A., Powell, A., Gavey, N., & Scott, A. J. (2019). Shattering lives and myths: A report on image-based sexual abuse. University of Durham. https://claremcglynn.files.wordpress.com/2019/10/shattering-lives-and-myths-revised-aug-2019.pdf.
McGlynn, C., Rackley, E., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2020). 'It's torture for the soul': The harms of image-based sexual abuse. Social & Legal Studies, 30(4), 541–562. https://doi.org/10.1177/0964663920947791
Melville, K. (2019, August 30). The insidious rise of deepfake porn videos and one woman who will not be silenced. ABC News. https://www.abc.net.au/news/2019-08-30/deepfake-revenge-porn-noelle-martin-story-of-image-based-abuse/11437774.
Meskys, E., Liaudanskas, A., Kalpokiene, J., & Jurcys, P. (2020). Regulating deep fakes: Legal and ethical considerations. Journal of Intellectual Property Law & Practice, 15(1), 24–31. https://doi.org/10.1093/jiplp/jpz167
Monis v R; Droudis v R (2013) 249 CLR 92.
Monis v The Queen (2011) 256 FLR 28.
Office of the eSafety Commissioner. (2019, February). eSafety for women from culturally and linguistically diverse backgrounds. https://www.esafety.gov.au/sites/default/files/2019-07/summary-report-for-women-from-cald-backgrounds.pdf.
Office of the eSafety Commissioner. (2020, August). Adults' negative experiences online. https://www.esafety.gov.au/sites/default/files/2020-07/Adults%27%20negative%20online%20experiences.pdf.
Paris, B., & Donovan, J. (2019, September 18). Deepfakes and cheapfakes. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/.
Pavan, E., & Lavogna, A. (2021). Promises and pitfalls of legal responses to image-based sexual abuse: Critical insights from the Italian case. In A. Powell, A. Flynn, & L. Sugiura (Eds.), The Palgrave handbook of gendered violence and technology (pp. 545–564). Palgrave Macmillan.
PornHub. (2016). What sort of content are not allowed on the site. https://help.pornhub.com/hc/en-us/articles/229890227-What-sort-of-content-is-not-allowed-on-the-site-.
Powell, A., & Flynn, A. (2020, June 3). Reports of "revenge porn" skyrocket during lockdown: We must stop blaming victims for it. The Conversation. https://theconversation.com/reports-of-revenge-porn-skyrocketed-during-lockdown-we-must-stop-blaming-victims-for-it-139659.
Powell, A., Flynn, A., & Henry, N. (2020). Sexual violence in digital society: Human, technical and social factors. In T. Holt & R. Lukfeld (Eds.), Understanding the human factor of cybercrime (pp. 134–155). Routledge. https://doi.org/10.4324/9780429460593.
Powell, A., Henry, N., & Flynn, A. (2018). Image-based sexual abuse. In W. S. DeKeseredy & M. Dragiewicz (Eds.), Handbook of critical criminology (2nd ed., pp. 305–315). Routledge.
Powell, A., Henry, N., Flynn, A., & Scott, A. J. (2019). Image-based sexual abuse: The extent, nature, and predictors of perpetration in a community sample of Australian adults. Computers in Human Behavior, 92, 393–402. https://doi.org/10.1016/j.chb.2018.11.009
Powell, A., Scott, A. J., Flynn, A., & Henry, N. (2020, February). Image-based sexual abuse: An international study of victims and perpetrators—A summary report. Royal Melbourne Institute of Technology.
Purcell, R., Pathé, M., & Mullen, P. E. (2004). Stalking: Defining and prosecuting a new category of offending. International Journal of Law and Psychiatry, 27(2), 157–169. https://doi.org/10.1016/j.ijlp.2004.01.006
R v Hampson [2011] QCA 132.
R v McDonald and DeBlaquiere [2013] ACTSC 122.
R v Ogawa [2009] QCA 307.
R v Wilson [2012] VSCA 40.
Rackley, E., McGlynn, C., Johnson, K., Henry, N., Gavey, N., Flynn, A., & Powell, A. (2021). Seeking justice and redress for victim-survivors of image-based sexual abuse. Feminist Legal Studies (online first). https://doi.org/10.1007/s10691-021-09460-8.
Reddit. (2021). Content policy. https://www.redditinc.com/policies/content-policy-1.
Schick, N. (2020). Deep fakes and the infocalypse. Monoray.
Sensity. (2020, October). Automating image abuse. https://sensity.ai/reports/.
Snashall-Woodhams, E. (2012, December 19). Facebook-sex-rate page co-creator avoids prison. Bendigo Adviser. https://www.bendigoadvertiser.com.au/story/1194669/facebook-sex-rate-page-co-creator-avoids-prison/.
Stankiewicz, K. (2019). Perfectly real deepfakes will arrive in 6 months to a year. CNBC. https://www.cnbc.com/2019/09/20/hao-li-perfectly-real-deepfakes-will-arrive-in-6-months-to-a-year.html.
Summary Offences Act 1966 (Vic).
Usmanov v R [2012] NSWDC 290.
Wash Rev Code (2020).
Part VIII Community Responses and Activism
30 Of Commitment and Precarity: Community-Based Approaches to Addressing Gender-Based Online Harm

Rosel Kim and Cee Strauss
R. Kim · C. Strauss (B)
Women's Legal Education and Action Fund, Toronto, ON, Canada
e-mail: [email protected]
R. Kim
e-mail: [email protected]

Introduction

This chapter reflects on community-based responses to gender-based online harm (GBOH), as opposed to (or in addition to) legal responses to the harm. In recent years, some advocates for non-state resolution of harm have adopted a vision of "transformative justice" (see e.g. Davis, 2003; Gilmore, 2007; Kim, 2018; Piepzna-Samarasinha & Dixon, 2020; Morris, 2000), linking their work to longstanding responses to state and sexual violence developed by Black and Indigenous women (Palacios, 2016, p. 94). Transformative justice can be defined as "ways to address
violence without relying on police or prisons" (Dixon, 2020) and is also often linked to the goal of contesting and abolishing those systems (Kim, 2018). Those who write about and practise transformative justice use the term interchangeably with "community accountability". The concept of community is often defined by two main characteristics: one of geographical proximity and one of affinity (Nilsen, 2006, p. 141). Creative Interventions, which provides resources for community-based interventions in interpersonal violence, defines "community" as "the networks of people with whom we may live, play, work, learn, organize, worship and connect" (Creative Interventions, 2012). Central to our conventional understanding of community, then, is that it is a network where people feel related—and therefore invested—through a commonly shared trait such as geography and/or interests. This relationality becomes central to accountability and justice in the transformative justice movement. Achieving justice directly through "community" is seen as more effective and beneficial, because people belonging to the community are the experts in their own needs and are invested in maintaining accountability through relationships and in preventing future harm—including state violence (Kim, 2018). The practice of community-based responses to harm arising out of the transformative justice movement prompted our inquiry into community-based responses to online harm, including their capacity to address the justice needs of victims/survivors1 better than legal responses do. Drawing on Jane (2012), Barlow and Awan (2016), Shepherd et al. (2015), and the Supreme Court of Canada's decision in Saskatchewan (Human Rights Commission) v. Whatcott, we define GBOH as any text or image that relies on technology for communication and/or publication and that abuses, denigrates, or devalues a woman, trans, or gender-diverse person.
The abuse, denigration, or devaluation does not have to specifically name or reference their gender identity.

1 In this chapter, we use the term “victim/survivor” to refer to people who have experienced gender-based online harm. In using both terms, we wish to capture both the structural nature of victimisation by a system that is colonial, misogynist, racist, homophobic, ableist and oppressive, and the resistance and strength of the survivors who persist despite the systemic violence and oppression.
30 Of Commitment and Precarity: Community-Based …
With respect to the justice needs of victims/survivors, our discussion relies on the work of feminist criminologists who have mapped out the dominant needs of victims/survivors of sexual assault (see e.g. Daly, 2017; Feldthusen et al., 2000; Herman, 2005; Powell, 2015; McGlynn et al., 2017). While the needs or interests of victims/survivors are not identical across this scholarship, they can be broadly grouped into the themes of participation (in a justice process); expression (telling one’s story); validation (being believed); vindication (confirmation from one’s community that the aggressor’s acts were wrong); and accountability (the aggressor taking responsibility and experiencing consequences).

In the next section, we focus on three instances of online community-based response to GBOH. We find that GBOH is distinct from other forms of gender-based violence: online harm is easy to enact and amplify, and the source of that harm is difficult to remove and/or contain. This globalised and amplifiable nature of GBOH likely increases the accountability needs of victims/survivors, especially when the harm is rooted in intersectional oppression. We also find that the same amplifiable nature of online community-based responses may address victims’/survivors’ participation, expression, and validation needs better than legal responses do. Further, our case studies remind us of the ongoing need to critically assess the definition of community (critical reflections already underway in transformative justice practices—see Mingus, 2020). We identify how harm facilitated by technology disrupts the notion of “community” in the practice of “community-based responses” to harm. In reflecting on the boundaries of community in the online context, we expand on Jan Fernback’s call to understand online social relations according to the extent to which “the meaning of commitment to group is enacted in the public sphere” (2007, p. 66).
Fernback focuses on online social relations, and not community, to distance herself from the “structural–functional baggage of the term community.” Yet at the same time, she advocates focusing on the “process of community-building as an active human endeavour” (p. 50), in that “the practice of community is essentially a process of community” (p. 66). We, therefore, interpret Fernback’s ideas to include community, and conceptualise this as “community as commitment.” The active maintenance or change of social
structures is the essence of community, and it only occurs through the doing. Thus, in order to provide meaningful support and to meet the needs of victims/survivors, online community-based responses must be grounded in commitment to the healing of the victims/survivors they are rallying around.
Case Studies

In this section, we focus on three instances of online community-based response to GBOH: the BADASS Army, an organisation created to help victims/survivors of non-consensual distribution of intimate images (NCDII); the Crash Override Network, an organisation dedicated to helping people facing a range of GBOH; and the response to the Twitter harassment of Inuk musician Tanya Tagaq in the context of the sealfie movement, a movement supporting the Inuit seal hunt.

These examples were chosen because, in all of them, victims/survivors engaged in non-legal responses to GBOH (or a combination of legal and non-legal responses) by mobilising their communities, and received extensive media coverage, which facilitated our understanding of the events and ensured we were not publicising private information. We wanted the case studies to reflect experiences of intersectional discrimination, though we acknowledge that, as settlers, we cannot fully comprehend the extent of settler violence inflicted upon Indigenous communities online and offline. Our analysis of the cases is based on our review of the scholarly articles and media coverage discussing them; we did not communicate with the victims/survivors, or with people who had experience using the support offered by the initiatives addressed.
The BADASS Army

In April or May 2017, Katelyn Bowden found out that an acquaintance stole her ex-partner’s cell phone and uploaded her intimate images without her consent on AnonIB, a website that allows users to upload
non-consensually shared intimate images and categorise them by type and location (Cox, 2018; Gander, 2018). Bowden began removing her images, which kept reappearing online; in doing so, she recognised intimate images of other women she knew from her town and contacted them to share how to remove the images and to explain the available legal options (Gander, 2018). Bowden was surprised to find out that the aggressor could not be prosecuted for the non-consensual sharing of her images because her state of Ohio (U.S.) did not have laws against NCDII at the time (Crawley, 2019).

Through her communications with other victims/survivors of NCDII, Bowden quickly fostered a network that became the BADASS Army Facebook group in August 2017. Members would provide support to victims/survivors by educating them about their options, including steps on how to lock down online accounts and how to send legal notices to have their images removed. Bowden said of the group’s impact: “It’s giving people that don’t have technical skills previously the ability to do something, and that feels good” (Cox, 2018).

The BADASS Army quickly grew. By January 2018, the group comprised “about 650 victims, including 200 from Ohio” (Cass, 2018). In November 2019, it had “amassed over 1,000 active members” in its Facebook group (Maddocks, 2019). As the group grew, its members proactively sought out other victims/survivors by joining Facebook groups (Maddocks, 2019). Victims/survivors could contact BADASS through its website or its Facebook page and request to join the private Facebook group, after which BADASS would require them to go through a vetting process to confirm their identity (The BADASS Army, Get Help). BADASS acquired 501(c)3 non-profit status, which enabled it to receive tax-deductible donations and to apply for grants (The BADASS Army, 2019, January 24).
It also launched a website, where BADASS described itself as a non-profit “with a mission of providing support to victims of revenge porn/image abuse, and eradicating the practice through education, advocacy, and legislation” (The BADASS Army Home Page). A blog post discussing the group’s major accomplishments estimated that the number of non-consensually shared images and videos the group removed from the internet in 2018 was “somewhere in the 5-figure range,” and noted that the group “aided in
dozens of arrests for [NCDII], telecommunications harassment, stalking, and child pornography.” The post also noted that the group had started working with digital platforms and companies (The BADASS Army, 2019, January 24), but that, despite this growth, the group had not secured regular funding; it urged its supporters to donate.

Bowden became a prominent public figure, advocating for law reform to criminalise NCDII at both the state and federal level (Cole, 2019; Gander, 2018). In September 2020, Bowden stepped away from the organisation, and the group announced it would stop accepting charitable donations while it figured out next steps (The BADASS Army, 2020, September 24).
Crash Override Network

Zoe Quinn, a U.S.-based video game developer, was the original target of Gamergate—a series of misogynist, coordinated harassment campaigns against women and their allies who spoke out about sexism in the video game industry. In August 2014, Quinn’s ex-partner, upset about Quinn leaving him, falsely alleged in a blog post that Quinn had had an unethical relationship with a journalist in order to secure a positive review of a video game that Quinn had developed (Jason, 2015). The ex-partner also doxxed Quinn, sharing their personal information online. Quinn experienced extreme harassment and attacks that included daily rape and death threats (against them and their family members), and threats to their partner’s prospective employer that led to a rescinded job offer (McIntyre, 2016).

By January 2015, The Guardian had reported that Gamergate had “faded to a background hum,” and that much of the energy had “dissipated” (Lewis, 2015). However, a Boston Magazine article published in April 2015 (which interviewed both Quinn and their ex-partner) indicated that the abuse was still ongoing (Jason, 2015). Eventually, Gamergate would find other targets of harassment, some of them Black performers, such as Leslie Jones and John Boyega, targeted for being cast in the remakes of Ghostbusters and Star Wars, which Gamergate supporters characterised as “ruining” the original franchises (Malone, 2017).
Quinn co-founded Crash Override Network with another game developer in January 2015. It described itself as a “crisis helpline, advocacy group and resource center for people who are experiencing online abuse” (Crash Override Network, About the Network), as well as a “formalized group of people who have learned how to support others through coordinated online mob harassment” (Cosimano, 2015). While it was operational, the Network provided assistance to victims/survivors in the form of an email helpline on various aspects of GBOH (Crash Override Network, About the Network).

The Network described its work as using “proven and humane methods” (Crash Override Network, About the Network). It further noted it was “not a vigilante group” and did not take “retaliatory actions against abusers” (Cosimano, 2015). In interviews, Quinn also noted that they were only interested in “self-defense measures, not going after individual people participating in these sorts of behavior, outside of filing reports if that’s what the victim wants” (Takahashi, 2017).

When Quinn and other Network members noticed the names and personal information of others being shared and discussed on online forums used by Gamergate aggressors (Hudson, 2015), they would contact those targeted to provide tips on increasing the security of their online accounts and on communicating with police about potential false reports the police might receive about them. The Network’s educational component included an automated questionnaire for users to assess their cybersecurity measures; a series of educational guides geared to victims/survivors of online abuse that provided tips on increasing cybersecurity measures; and a guide for family members and employers of victims/survivors on how best to support them (Crash Override Network, COACH).
Quinn expressed doubts about the utility of engaging law enforcement: “I worry that overinvolvement of police leads to bad stuff happening to a lot of folks” (Takahashi, 2017). In fact, Quinn announced in February 2016 they would not be pressing charges against their ex-partner, citing the failure of law enforcement to protect victims/survivors like them: “[T]he criminal justice system is meant to punish, not protect … And they’ve done nothing to protect me” (Formichella, 2016, p. 145).
In March 2016, Crash Override announced that Feminist Frequency, a registered 501(c)3 non-profit founded by another Gamergate survivor, had become its “fiscal sponsor,” a move which allowed people to make tax-deductible donations to the Network (Sarkeesian, 2016). In December 2016, the Network announced it was suspending the crisis hotline due to “overwhelming need for assistance outpacing [its] current resources” (Crash Override Network, About the Network).
Tanya Tagaq’s Sealfie

At the 2014 Academy Awards, Ellen DeGeneres took a selfie, which she posted on Twitter. Before the show was over, it had been shared over two million times, setting a new record for Twitter and briefly causing the site to crash (Addley, 2014). Samsung, the producer of the phone that DeGeneres used to take the selfie and a sponsor of the Oscars, made a $3 million donation to charities of DeGeneres’ choice. DeGeneres opted to send half of the funds to the Humane Society of the United States, which is opposed to the commercial seal hunt (Hawkins & Silver, 2017). DeGeneres’ donation was interpreted by some sealers, both Inuit and non-Inuit, to be a donation towards the campaign against seal hunting.2

Conservation/animal rights concerns about sealing first came to prominence in the 1970s and 1980s, with groups such as Greenpeace, People for the Ethical Treatment of Animals, and others campaigning against sealing as “the largest slaughter of marine mammals on planet” (currently the language on The Humane Society of the United States’ website). The animal rights groups successfully turned public opinion against sealing: in 1972, the U.S. banned seal products; in 1983, the European Economic Community banned seal pup skins and products. These bans devastated the seal industry, both Inuit and non-Inuit (Rodgers & Scobie, 2015).
2 According to Hawkins and Silver (2017), the connection between the donation and the anti-seal hunt campaign is more circumstantial than direct—the Humane Society publicly stated that it planned to use the Samsung donation towards unrelated programming, and DeGeneres made no statements about the seal hunt in connection with the donation.
The role of animal rights groups in economically devastating the Inuit has led to deep-seated resentment towards animal rights groups among Inuit communities (Rodgers & Scobie, 2015). Animal rights groups’ campaigns have also sparked an Inuit-led movement to re-frame the narrative into one that centres Inuit traditions and economic survival. In response to DeGeneres’ selfie, filmmaker Alethea Arnaquq-Baril posted to DeGeneres’ Facebook page, encouraging people to “post a picture of yourself wearing sealskin and send it to [DeGeneres’] Twitter with the hashtag #sealfie in reference to the famous and heavily retweeted #selfie.” The intention in doing so was to change perceptions about the seal hunt (Hawkins & Silver, 2017). In this moment, the #sealfie was born. On March 28, 2014, amid many other #sealfies by Inuit and nonInuit alike wearing seal clothing or standing beside freshly hunted seals,3 Tanya Tagaq, an internationally acclaimed Inuk musician from Cambridge Bay, Nunavut, posted a photo of her baby beside a freshly caught seal on Twitter with the #sealfie hashtag. The censure that she received from animal rights activists was swift and, in many cases, horrifying.4 One woman started a petition to take Tagaq’s child away from her (CBC News, 2014), while several other people called child services directly to report Tagaq. Someone in California Photoshopped a photo to depict a man stabbing Tagaq’s baby daughter as one would stab a seal, with blood on the ground (Dean, 2014). Tagaq began receiving death threats on Twitter daily. At some point, Tagaq removed her original tweet. In response to the harassment, Tagaq took to the media. Appearing on a Canadian national radio program six days after she first posted the tweet, Tagaq condemned the individual who Photoshopped her daughter being stabbed, but directed the majority of her messaging to the broader public. 
Her aim in the interview was to educate CBC’s listeners about the Inuit community’s ways of life and the fact that living on their land is healing the harms of colonialism (Off, 2014).

3 There are still 100 consecutive tweets available on Twitter using the #sealfie hashtag that were posted between March 25, 2014 and March 27, 2014 (Knezevic et al., 2018).

4 While we have opted not to reproduce the harmful messages that Tagaq received, they are preserved in news articles from the time, including Dean (2014).
Tagaq was also active on Twitter, where she was receiving tremendous support; there, her focus was the harassment itself. She asked followers to report the individual who posted the Photoshopped photo to Twitter. Many followers replied saying that they had done so. Kent Driscoll, a reporter with APTN, an Indigenous news network, replied to Tagaq’s tweet, saying that “The way this particular machine works, Twitter does nothing without complaints.” After a phone call from APTN, Twitter took the photo down (Tagaq tweet April 3, 2014).

In her first interviews in the days following the backlash, Tagaq said that she was not considering legal action (CBC News, 2014). A month later, Tagaq posted on Twitter, “I’m getting increasingly vexed with the up cropping of abuse surrounding the #sealfie picture. I grow weary” (Tagaq tweet May 1, 2014). She received encouragement and support in response. By June 2014, one harasser in particular would not stop, and Tagaq involved the police, who helped shut down his Twitter account. Rodgers and Scobie (2015, p. 91) state that charges were brought against him, but no other sources confirm this assertion. Details of the police involvement have remained private (Rule, 2018, p. 751).

Tagaq told CBC News, “I know I’m a good mother so to have a person like that talk about having my kids taken away every day for a couple of months… the first while was alright because I have a pretty thick skin but by the end I didn’t feel sorry for him anymore, I just needed him to stop.” Now that he had, she said, “It’s like my body was covered in boils and now all the boils are gone. That’s what it feels like” (Walker, 2014).
Discussion and Analysis

These three case studies are unalike in significant ways. The BADASS Army and Crash Override Network were organisations with websites and mission statements, while the response to Tanya Tagaq’s harassment was embedded within an existing social movement, yet was also carried out by a group only momentarily connected (Rathnayake & Suthers, 2018) by the hashtag #sealfie. Our analysis of the three examples highlights commonalities and differences between them that allow for observations
both about the nature of harm online and the nature of online communities. Any insights that we have gleaned rest on the hard work undertaken by these community responses, as well as on the scholars on whose work we depend; any misrecognition of the important work that these activists are doing is ours.
Globalised and Amplified Harm

One key feature distinguishing GBOH from offline harm is its globalised nature and scale, such that harm becomes infinitely amplifiable. Because the harm is online, it is impossible to contain; for example, the singular act of one person uploading a non-consensually shared image online has the impact of distributing the harm everywhere (Dodge, 2016, p. 68). Moreover, once the sharing happens, the victim/survivor is in danger of receiving further harm from those who may not be in the victim’s “community” at all (in that they may not share geography or affinity), but rather are anonymous and/or strangers to the victim/survivor. In Bowden’s case, while an acquaintance in her town originally uploaded her intimate image without consent, the online sharing was facilitated by a website whose servers were located in the Netherlands (Kapur, 2020). In the case of Tanya Tagaq’s harassment, individuals from all over the world were able to react to her #sealfie and harass her. They could (and did) build on each other’s comments to escalate the harassment.

In addition, the harms are mediated by platforms that can stop the abuse but rarely do, because the business interests of the platform do not prioritise the well-being or the rights of the victims/survivors (Khoo, 2021). While the photograph of Tagaq’s baby was removed from Twitter, she had to involve the police in order to stop the threats against her life. This is representative of Twitter’s well-documented inaction in the face of violent GBOH, rather than an outlier. Amnesty International’s report discusses how Twitter’s “inconsistent enforcement and application of the rules as well as delays or inaction to reports of abuse when users breach the Twitter rules,” in combination with the absence of any human rights policy commitments, demonstrate a “failure of the company to
adequately meet its corporate responsibility to respect human rights in this area” (2018, p. 45). Corporate interests are an omnipresent mediating influence in GBOH, one that often perpetuates—or exacerbates—harm (Khoo, 2021).

Finally, GBOH is qualitatively more abusive than offline harassment (Amnesty International, 2018)5 and occurs on a massive scale. Stories of women targeted for harassment for years abound, with platforms and police unable or unwilling to act (Elk-Young Bear & Kane, 2016; Laville, 2016). It may be that the scale and abusiveness of the harms directed at women, trans, and gender-diverse people online have contributed to a situation where GBOH is only taken seriously at its most extreme. For instance, Zoe Quinn experienced six months of daily, ongoing rape and death threats, had their social security number stolen and published online, and had to go into hiding (Formichella, 2016). In Tagaq’s case, on the other hand, the most gruesome of the photographs (in Tagaq’s estimation) was removed less than a week after it was posted, and the death threats lasted just over two months. On the scale of GBOH, Tagaq’s experience may be labelled as less severe. Yet Tagaq spoke of feeling that her body was covered in boils.

We note the contrasts (and similarities) in Quinn’s and Tagaq’s experiences to highlight the disturbing aspects of GBOH discussed earlier, and to think through how they may impact victims/survivors. GBOH is easy to enact and amplify yet difficult to remove; it is often deeply traumatic to the target; and it occurs on so massive a scale as to render that trauma misrecognised, dismissed, or simply not attended to.
The overwhelming stories of harassment and violence against women, trans, and gender-diverse people online are both a product and an exacerbation of the continued normalisation of sexual and gendered violence that Dodge references when she writes that “the pervasiveness of rape culture and the ubiquity of acts of sexism…allow the perpetration of sexual violence to become banal” (2016, p. 73). Dodge (via Hannah Arendt and Judith Butler) describes how even horrific acts of rape do not register with a broader public as being anything other than “normal,” “everyday” violence (p. 73). How, then, can an “everyday” act of gendered harassment compete for validation, vindication, or accountability?

The particularities of GBOH, in which victims/survivors face the possibility of experiencing abuse from anywhere and at any time, and where the abuse is distributed on commercial platforms motivated to profit from problematic content (rather than moderate it), likely increase the feeling of uncontrollability that already characterises experiences of gender-based violence. This, in turn, may increase the victim/survivor’s accountability needs: their need for the abuse to stop at any cost, as well as their need for consequences for the harmful behaviour. For instance, while Tagaq did not initially wish to involve law enforcement, the unrelenting nature of the harassment forced her to change course.

5 For example, Scottish Women’s Rights Activist Talat Yaqoob discusses the abuse she receives on Twitter in the following way: “I, 100%, don’t experience the level of abuse offline that I do online. I experience it – but the frequency of it and the toxic nature of it is more online than what I experience in real life because people know they get away with it more” (Amnesty International, 2018, p. 11).
Intersectional Harms

The harassment that Tagaq experienced—most notably, accusations of being an unfit mother—reflects the exponential increase of harm for those experiencing intersectional discrimination. GBOH is not simply gendered: Twitter harassment against women, trans, and gender-diverse people, for example, almost always targets their other intersecting markers of oppression as well. U.S. journalist Imani Gandy explains: “I get harassment as a woman and I get the extra harassment because of race and being a black woman. They will call white women a ‘c*nt’ and they’ll call me a ‘n*gger c*nt’. Whatever identity they can pick they will pick it and use it against you. Whatever slur they can come up with for a marginalised group – they use” (Amnesty International, 2018, p. 20). Twitter harassment is directed at the most marginalised, time and again.

The GBOH that Tagaq experienced was of a specifically colonial, gendered kind. The accusations of being an unfit mother, the petition to remove Tagaq’s daughter from her care, and the calls to child services run in deep grooves of the Canadian settler project to remove
Indigenous children from their homes and assimilate them. As of 2020, Indigenous children account for 52.2% of children in foster care, despite constituting only 7.7% of the child population according to the 2016 Census (Government of Canada, 2021). Rule describes the settler state’s understanding of Indigenous women as being the same as its understanding of the land itself: “the settler state views Indigenous women as what could be—and must be—conquered and controlled as a way to secure and maintain Indigenous dispossession” (2018, p. 743). The harassment of Tagaq, she continues, “represent[s] just one manifestation of this legacy, as settler state violence clearly informed Tagaq’s attackers when they threatened her for her activism supporting Indigenous culture.” The attacks on Tagaq thus constituted a colonial form of GBOH, at once familiar and particularly virulent. This, combined with the relentless nature of the harm, may have differentiated Tagaq’s accountability needs from those of Bowden or Quinn, who do not face colonial violence. This raises questions about how communities can support victims/survivors through the level of intensity that GBOH reaches.
Disruption of the Notion of “Community”

The ease with which both harm and support can be expressed and amplified in online spaces disrupts understandings of community. Our case studies reveal that online spaces often allow anyone to participate in both the harm-doing and the support-giving. In addition, the networked aspect of online harms necessarily involves the practices and standards of the companies hosting the content. This leads to difficulties in conceptualising where a “community” begins or ends.

Here, it may be useful to think of community as commitment. Drawing on our discussion of Fernback above, commitment can be understood as the means by which communities do the process of community-building. The active maintenance or change of social structures is the essence of community, and it only occurs through the doing. We argue that while the nature of GBOH allows anyone living anywhere to momentarily display support, this is not the type of community-based response that
will create meaningful community for a victim/survivor of GBOH. A community need not be localised or even readily identifiable, but its members must be committed to the healing of the victims/survivors they are supporting. They must be seeking to engage in validation, vindication, and accountability as active endeavours.

The support that mobilised around Tagaq’s harassment evidences communities of commitment. This is likely because the GBOH in this case occurred within an established network of relationships based in a social movement. The number of replies to Tagaq’s tweet raised the profile of the #sealfie (Knezevic et al., 2018), arguably raising the profile of the movement. At the same time, the harassment actively damaged Tagaq and anyone directly impacted by GBOH against Indigenous women. In this way, the violence both harmed and strengthened the communities fighting for Inuit rights.

In her film Angry Inuk, Arnaquq-Baril discusses the harmful tweets received after Tagaq’s post, and begins to cry on camera as misogynist tweets are displayed on the screen (Arnaquq-Baril). This is another form of expression, validation, and accountability. Here, Arnaquq-Baril tells the story of the harassment in her own way, freely expressing her pain, validating Tagaq’s, and demanding accountability from the animal rights groups that fuel the online harassment. The success of Angry Inuk has raised the sealfie movement’s profile even further. In the narrative of the sealfie movement, Tagaq’s harassment has become a site of meaning that engages communities of commitment.
The Precarity of Community-Based Responses

Focusing on commitment does not fully address the precarity of community-based responses, however. One unresolved source of precarity is the fact that online community-based responses are often hosted on the same platforms where the harms occurred, a reality that enables the possibility of those spaces being infiltrated by aggressors. Bowden admitted that operating BADASS on a mainstream platform—Facebook—where acts of GBOH take place, was a difficult but “inevitable” decision, because “it’s really the only thing that everybody
uses” (Maddocks, 2019). The potential for aggressors to infiltrate the Facebook group led the BADASS Army to impose an additional barrier to participation: a verification process that people had to complete before joining the closed Facebook group.

In addition, the commercial nature of the platforms can threaten the healing process through acts of surveillance and content moderation by the platform companies themselves. Community organising can be disrupted or even terminated by platform companies whose corporate interests do not align with the needs of victims/survivors. Bowden noted her Twitter account was briefly suspended in 2019 (Maddocks, 2019), which disrupted her ability to reach out to victims/survivors.

Moreover, undertaking and coordinating online community-based responses to traumatic instances of GBOH is draining, under-funded, and, as a result, difficult to sustain—much like “offline” community-based work. This, combined with the amplified and exponentially increased volume of harm that makes GBOH unique, can pose operational challenges for community-based responses. Crash Override’s team operated on a volunteer basis (Feminist Frequency, 2019, p. 11), and it ultimately suspended its email hotline due to overwhelming demand. BADASS noted it had not found regular funding despite operating for more than a year and receiving significant media coverage (The BADASS Army, 2019, January 24).
Increased Potential for Participation and Expression by Victims/Survivors

On a positive note, the aspects of the internet that make the harm amplifiable also lead to an increased ability for community-based responses to thrive. The BADASS Army’s quick growth reflects such potential. This ability to quickly mobilise community-based responses online can remove barriers to participation for those with regular internet access, which in turn can lead to increased opportunities for expression and validation (Fileborn, 2014; Rentschler, 2014).
30 Of Commitment and Precarity: Community-Based …
Conclusion

This chapter examined community-based responses to GBOH to assess their potential to address victims’/survivors’ justice needs. Our analysis revealed that the online aspect of the harm disrupts a conventional understanding of “community” due to the ease with which harm—and expressions of support—can be enacted, shared, and amplified on a global scale. Such disruption served as a reminder of the usefulness of the concept of community as commitment—rather than aspects of geography or affinity—as a grounding principle in any understanding of community-based responses. The amplified and global scale of both the harm and the support can exacerbate the negative impacts of GBOH and can increase the participation and expression opportunities for victims/survivors. The fact, however, that many of the support mechanisms are housed on platforms run by corporate interests—driven by profit motive rather than a desire to protect victims’/survivors’ rights—contributes to the precarity of community-based responses. This precarity is an area where government can provide meaningful support, though it must be noted that not all community-based actors may wish to receive support from the state. Governments can direct funding and resources directly to community-based responses, which can increase and sustain the capacity of the initiatives already helping victims/survivors. Finally, in our current environment, the scale and nature of GBOH will likely continue to induce victims/survivors to seek support from the state when it simply becomes too much to bear. Yet we know that the criminal justice and policing systems are the least likely to meet the justice needs of survivors (Women’s Legal Education and Action Fund, 2021).
While we recognise that not everyone is able to contact the state for help, we would recommend creating a state-based mechanism unconnected to the criminal justice system, such as an independent government agency with a clear survivor-centred mandate (Hrick, 2021). This agency could help to stop the harms victims/survivors are experiencing when they see legal intervention as their best or only option.
R. Kim and C. Strauss
Acknowledgements

The authors would like to thank the J. Armand Bombardier Foundation for supporting LEAF’s Technology-Facilitated Violence Project. The authors would also like to thank Emily Price, Sarah Wilcox, and Meghan Zhang for their research assistance. The opinions expressed in this chapter are those of the authors alone, and not those of LEAF.
References

Addley, E. (2014, March). Ellen’s Oscar selfie most retweeted ever—And more of us are taking them. The Guardian. http://www.theguardian.com/media/2014/mar/07/oscars-selfie-most-retweeted-ever.
Amnesty International. (2018). #Toxic Twitter: Violence and abuse against women online. https://www.amnesty.org/download/Documents/ACT3080702018ENGLISH.PDF.
Arnaquq-Baril, A. (Director). (2016). Angry Inuk [Film].
The BADASS Army. (2019, January 24). BADASS—Year in review. The BADASS Army. https://badassarmy.org/2019/01/.
The BADASS Army. (2020, September 24). Badass announcements. The BADASS Army. https://badassarmy.org/2020/10/.
The BADASS Army. (n.d.-a). Get help. The BADASS Army. https://badassarmy.org/gethelp/.
The BADASS Army. (n.d.-b). Home page. The BADASS Army. https://badassarmy.org/.
Barlow, C., & Awan, I. (2016). “You need to be sorted out with a knife”: The attempted online silencing of women and people of Muslim faith within academia. Social Media + Society, 2(4), 1–11.
Cass, A. (2018, January 19). Revenge porn: New bill to be introduced in Ohio Senate. The News-Herald. https://www.news-herald.com/news/ohio/revenge-porn-new-bill-to-be-introduced-in-ohio-senate/article_69755e6a-2520-528d-a7d9-b0943b28c342.html.
CBC News. (2014, April). Tanya Tagaq #sealfie provokes anti-sealing activists. https://www.cbc.ca/news/canada/north/tanya-tagaq-sealfie-provokes-anti-sealing-activists-1.2595250.
Cole, S. (2019, May 22). Kamala Harris and bipartisan legislators hope to pass federal anti-revenge porn law. Vice. https://www.vice.com/en/article/qv7zd7/kamala-harris-bipartisan-legislators-hope-to-federal-anti-revenge-porn-law-shield-act.
Cosimano, M. (2015, January 27). Zoe Quinn founds anti-harassment network Crash Override. Destructoid. https://www.destructoid.com/zo-quinn-founds-anti-harassment-network-crash-override-286719.phtml.
Cox, J. (2018, March). ‘The BADASS Army’ is training revenge porn victims to fight back. Vice. https://www.vice.com/en/article/59k7qx/revenge-porn-what-to-do-badass-army-anon-ib.
Crash Override Network. (n.d.-a). About the network. Crash Override Network. http://www.crashoverridenetwork.com/about.html.
Crash Override Network. (n.d.-b). COACH: Crash Override’s automated cybersecurity helper. http://www.crashoverridenetwork.com/coach.html.
Crawley, K. (2019). Katelyn Bowden: The cyberattack that changed my life. BlackBerry ThreatVector Blog. https://blogs.blackberry.com/en/2019/10/katelynbowden-the-cyberattack-that-changed-my-life.
Creative Interventions. (2012). Creative interventions toolkit: A practical guide to stop interpersonal violence, part 2: Some basics everyone should know. https://www.creative-interventions.org/wp-content/uploads/2020/08/CI-Toolkit-Final-ENTIRE-Aug-2020.pdf.
Daly, K. (2017). Sexual violence and victims’ justice interests. In E. Zinsstag & M. Keenan (Eds.), Restorative responses to sexual violence: Legal, social and therapeutic responses (pp. 108–139). Routledge.
Davis, A. (2003). Are prisons obsolete? South End Books.
Dean, D. (2014, April). Tanya Tagaq’s cute sealfie pissed off a lot of idiots. Vice. https://www.vice.com/en/article/4w7awj/tanya-taqaqs-cute-sealfie-pissed-off-a-lot-of-idiots.
Dixon, E. (2020). Building community safety: Practical steps toward liberatory transformation. In L. L. Piepzna-Samarasinha & E. Dixon (Eds.), Beyond survival: Strategies and stories from the transformative justice movement (pp. 15–25). AK Press.
Dodge, A. (2016). Digitizing rape culture: Online sexual violence and the power of the digital photograph. Crime Media Culture, 12(1), 65–82. https://doi.org/10.1177/1741659015601173.
Elk-Young Bear, L., & Kane, S. (2016). When movements backfire: Violence against women and online harassment. Model View Culture, 37. https://modelviewculture.com/pieces/when-movements-backfire-violence-against-women-and-online-harassment.
Feldthusen, B., Hankisky, O., & Graves, L. (2000). Therapeutic consequences of civil actions for damages and compensation claims by victims of sexual abuse—An empirical study. Canadian Journal of Women and the Law, 12, 66–116.
Feminist Frequency. (2019). 2018 annual report. Feminist Frequency. https://feministfrequency.com/wp-content/uploads/2019/01/2018femfreqannualreport-4.pdf.
Fernback, J. (2007). Beyond the diluted community concept: A symbolic interactionist perspective on online social relations. New Media & Society, 9(1), 49–69. https://doi.org/10.1177/1461444807072417.
Fileborn, B. (2014). Online activism and street harassment: Digital justice or shouting into the ether? Griffith Journal of Law & Human Dignity, 2(1), 32–51.
Formichella, J. (2016). A reckless guessing game: Online threats against women in the aftermath of Elonis v. United States. Seton Hall Legislative Journal, 41(1), 117–148.
Gander, K. (2018, February 26). ‘BADASS’: The revenge porn victim fighting for justice. Newsweek. https://www.newsweek.com/badass-revenge-porn-victim-who-turned-anger-activism-818047.
Gilmore, R. (2007). Golden Gulag: Prisons, surplus, crisis and opposition in globalizing California. University of California Press.
Government of Canada. (2021, June 7). Reducing the number of Indigenous children in care. https://www.sac-isc.gc.ca/eng/1541187352297/1541187392851. Accessed 10 February 2021.
Hawkins, R., & Silver, J. J. (2017). From selfie to #sealfie: Nature 2.0 and the digital cultural politics of an internationally contested resource. Geoforum, 79, 114–123. https://doi.org/10.1016/j.geoforum.2016.06.019.
Herman, J. (2005). Justice from the victim’s perspective. Violence Against Women, 11(5), 571–602. https://doi.org/10.1177/1077801205274450.
Hrick, P. (2021). The potential of centralized and statutorily-empowered bodies to advance a survivor-centered approach to technology-facilitated violence against women. In J. Bailey, A. Flynn, & N. Henry (Eds.), The Emerald international handbook of technology facilitated violence and abuse (Emerald studies in digital crime, technology and social harms) (pp. 595–615). Bingley: Emerald Publishing Limited. https://doi.org/10.1108/978-1-83982-848-520211043.
Hudson, L. (2015, January 20). Gamergate target Zoe Quinn launches anti-harassment support network. Wired. https://www.wired.com/2015/01/gamergate-anti-harassment-network/.
Jane, E. A. (2012). “Your a ugly, whorish, slut”—Understanding E-Bile. Feminist Media Studies, 14(4), 531–546. https://doi.org/10.1080/14680777.2012.741073.
Jason, Z. (2015, April 28). Game of fear. Boston Magazine. https://www.bostonmagazine.com/news/2015/04/28/gamergate/.
Kapur, B. (2020, September). An army of women are waging war on the web’s most notorious revenge porn site. Mel Magazine. https://melmagazine.com/en-us/story/anon-ib-revenge-porn-badass-army.
Khoo, C. (2021). Deplatforming misogyny: Report on platform liability for technology-facilitated gender-based violence. Women’s Legal Education and Action Fund. https://www.leaf.ca/publication/deplatforming-misogyny/.
Kim, M. (2018). From carceral feminism to transformative justice: Women-of-color feminism and alternatives to incarceration. Journal of Ethnic & Cultural Diversity in Social Work, 27(3), 219–233. https://doi.org/10.1080/15313204.2018.1474827.
Knezevic, I., Pasho, J., & Dobson, K. (2018). Seal hunts in Canada and on Twitter: Exploring the tensions between Indigenous rights and animal rights with #Sealfie. Canadian Journal of Communication, 43, 421–439. https://doi.org/10.22230/cjc.2017v43n3a3376.
Laville, S. (2016, March 4). Online abuse: ‘Existing laws too fragmented and don’t serve victims’. The Guardian. https://www.theguardian.com/uk-news/2016/mar/04/online-abuse-existing-laws-too-fragmented-and-dont-serve-victims-says-police-chief.
Lewis, H. (2015, January 11). Gamergate: A brief history of a computer-age war. The Guardian. https://www.theguardian.com/technology/2015/jan/11/gamergate-a-brief-history-of-a-computer-age-war.
Maddocks, S. (2019, November). The BADASS Army: A radical response to cyber-sexual harassment. Centre on Digital Culture and Society, University of Pennsylvania, Annenberg School for Communication. https://cdcs.asc.upenn.edu/sophie-maddocks/.
Malone, M. (2017, July). Zoe and the trolls. New York Magazine. https://nymag.com/intelligencer/2017/07/zoe-quinn-surviving-gamergate.html.
McGlynn, C., Downes, J., & Westmarland, N. (2017). Seeking justice for survivors of sexual violence: Recognition, voice and consequences. In E. Zinsstag & M. Keenan (Eds.), Restorative responses to sexual violence: Legal, social and therapeutic responses (pp. 179–191). Routledge.
McIntyre, V. (2016). Do(x) you really want to hurt me: Adapting IIED as a solution to doxing by reshaping intent. Tulane Journal of Technology and Intellectual Property, 19, 111–134.
Mingus, M. (2020). Pods and pod-mapping worksheet. In L. L. Piepzna-Samarasinha & E. Dixon (Eds.), Beyond survival: Strategies and stories from the transformative justice movement (pp. 119–125). AK Press.
Morris, R. (2000). Stories of transformative justice. Canadian Scholars’ Press and Women’s Press.
Nilsen, P. (2006). The theory of community based health and safety programs: A critical examination. Injury Prevention, 12, 140–145. https://doi.org/10.1136/ip.2005.011239.
Off, C. (2014, April). Inuit #SEALFIE trend sparks death threats for throat singer Tanya Tagaq. CBC. https://www.cbc.ca/radio/asithappens/thursday-syria-s-millionth-refugee-us-supreme-court-on-campaign-spending-1.2903923/inuit-sealfie-trend-sparks-death-threats-for-throat-singer-tanya-tagaq-1.2903928.
Palacios, L. C. (2016). ‘Something else to be’: A Chicana survivor’s journey from vigilante justice to transformative justice. Philosophia, 6(1), 93–108. https://doi.org/10.1353/phi.2016.0001.
Piepzna-Samarasinha, L. L., & Dixon, E. (Eds.). (2020). Beyond survival: Strategies and stories from the transformative justice movement. AK Press.
Powell, A. (2015). Seeking informal justice online: Vigilantism, activism and resisting a rape culture in cyberspace. In N. Henry, A. Powell, & A. Flynn (Eds.), Rape justice: Beyond the criminal law (pp. 218–237). Springer.
Rathnayake, C., & Suthers, D. (2018). Twitter issue response hashtags as affordances for momentary connectedness. Social Media + Society, 4(3), 1–14.
Rentschler, C. (2014). Rape culture and the feminist politics of social media. Girlhood Studies, 7(1), 65–82. https://doi.org/10.3167/ghs.2014.070106.
Rodgers, K., & Scobie, W. (2015). Sealfies, seals and celebs: Expressions of Inuit resilience in the Twitter era. Interface: A Journal for and About Social Movements, 7(1), 70–97.
Rule, E. (2018). Seals, selfies, and the settler state: Indigenous motherhood and gendered violence in Canada. American Quarterly, 70(4), 741–754.
Sarkeesian, A. (2016, March 3). Feminist Frequency and Crash Override partnership. Feminist Frequency. https://feministfrequency.com/2016/03/03/feminist-frequency-and-crash-override-partnership/.
Saskatchewan (Human Rights Commission) v. Whatcott. (2013). SCC 11. https://scc-csc.lexum.com/scc-csc/scc-csc/en/item/12876/index.do.
Shepherd, T., Harvey, A., Jordan, T., Srauy, S., & Miltner, K. (2015). Histories of hating. Social Media + Society, 1(2), 1–10. https://doi.org/10.1177/2056305115603997.
Tagaq, T. [@tagaq]. (2014, May 1). I’m getting increasingly vexed with the up cropping of abuse surrounding the #sealfie picture. I grow weary. Twitter. https://twitter.com/tagaq/status/461857264341504000.
Takahashi, D. (2017, October 25). Game Boss interview: Zoe Quinn’s Crash Override Network fights online abuse. GamesBeat, VentureBeat. https://venturebeat.com/2017/10/25/game-boss-interview-zoe-quinns-crash-override-network-fights-online-abuse/2/.
Walker, C. (2014, June). Tanya Tagaq takes on cyberbullies and stereotypes. CBC. https://www.cbc.ca/news/indigenous/tanya-tagaq-takes-on-cyberbullies-and-stereotypes-1.2669892.
Women’s Legal Education and Action Fund. (2021). Due justice for all: A survivor-focused analysis of Canada’s legal responses to sexual violence - Part One. https://www.leaf.ca/publication/due-justice-for-all-part-one/.
31 Digital Defence in the Classroom: Developing Feminist School Guidance on Online Sexual Harassment for Under 18s

Tanya Horeck, Kaitlynn Mendes, and Jessica Ringrose
T. Horeck, Anglia Ruskin University, Cambridge, UK; e-mail: [email protected]
K. Mendes, Western University, London, ON, Canada; e-mail: [email protected]
J. Ringrose, UCL Institute of Education, London, UK; e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_31

Introduction

During the first phase of lockdown for the global coronavirus pandemic in March 2020, the World Health Organization (WHO), Interpol, the National Society for the Prevention of Cruelty to Children (NSPCC), the Child Exploitation and Online Protection Command (CEOP) and other
children’s agencies warned that increased screen time during COVID-19 made young people more susceptible to online sexual exploitation, grooming and abuse. Plan International UK’s report found that since lockdown began ‘25% of girls have experienced at least one form of abuse, bullying or sexual harassment online’ (Plan UK, 2020), while others reported an upsurge in practices commonly known as ‘revenge porn’ (Davies, 2020). This is the context in which we produced and test-piloted a feminist policy for secondary schools in England and Wales on how to raise awareness and reduce online sexual harassment (OSH) through feminist strategies for digital defence. To date, limited research has explored the range of practices captured under the broad term of OSH, particularly in regard to under-18s and how these practices are understood and managed in school contexts, where the focus has been on e-safety, cyberbullying and sexting (Ringrose et al., 2021a). As schooling has moved to online contexts in response to the conditions of pandemic life and all that it brings—including the risks of intensified screen time—the need for schools to have a comprehensive, whole-school approach to tackling the problem of OSH becomes ever more urgent. In this chapter we explore the feminist pedagogical and activist practices that inform our guidance and recommendations for schools regarding curriculum responses, pastoral care and legal responsibilities in relation to online sexual harassment. Our key argument is that policies need to move beyond the limitations of the law in order for schools and other service providers to protect children at a time when online technological harms, gendered hate, online misogyny, xenophobia, ableism and racism are on the rise.
We conclude that a concerted move among educators is necessary to recognise and record the nature of gendered sexual risk and harms happening to young people, and create a culture and climate shift within schools towards consent and rights-based education.
Study Design

This chapter is based on a three-phase study design, carried out by various members of our research consortium on Digital Sexual Cultures.1 Phase one began in 2019 and involved focus group interviews with 144 children aged 13–19 about experiences with online sexual harassment (OSH)—or a ‘range of behaviours where digital technologies are used to facilitate both virtual and face-to-face sexually based harms’ (School of Sexuality Education et al., 2020a). OSH can broadly be broken down into three areas: (1) unsolicited sexualised content online, which refers to any sexual content shared online that is not wanted by the recipient. This could include content seen on apps, messaging services and websites which has not been sought out by the user, particularly cyberflashing (McGlynn & Johnson, 2021); (2) image-based sexual abuse (IBSA), which refers to the non-consensual creation and/or distribution of sexual images (McGlynn & Rackley, 2019); and (3) sexual coercion, threats and intimidation online, which include a person receiving threats of a gender/sexual nature or being coerced to engage in sexual behaviours on or offline via digital technologies. While we make distinctions between these three categories for the sake of clarity, there are evident overlaps and links. Information from these focus groups was used to inform semi-structured interviews with seven safeguarding leads, conducted between January and March 2020 by a trained research assistant.2 The interviews explored safeguarding leads’ familiarity with issues around OSH that arose during the focus groups with young people. During the interviews, safeguarding leads were asked about their familiarity with examples across the three categories of OSH, whether there were problems in their school settings, and if so, how the schools dealt with them. Of the seven safeguarding leads, two were also Head or Deputy Headteachers, and all had more than two years’ experience in the role. Safeguarding leads were drawn from a mixture of schools, including one independent boarding school, two mixed comprehensive schools, one all-girls academy, one private girls’ school, one inner-city school and one local authority school. We deliberately drew from a range of school settings to capture a diversity of experiences. Interviews with safeguarding leads lasted between 30 and 45 minutes, and were recorded, transcribed and coded using thematic analysis (Braun & Clarke, 2006). From March to November 2020, we devised an online quantitative survey to further gather empirical evidence of OSH. The survey was open to children aged 13–18 in England, and was distributed online. The survey asked questions relating to online image sharing, consensual and non-consensual, how young people responded and how it made them feel. We gathered 336 responses. Phase two involved creating a series of policy and guidance documents to tackle the problem of OSH in school settings. These documents were developed between March and May 2020, and were produced after working closely with school leaders, safeguarding leads, legal experts and our partners—the non-profit organisation the School of Sexuality Education (formerly Sexplain, hereafter SSE) and the Association of School and College Leaders (ASCL). They draw on our extensive experience doing research with young people on these issues in school settings (see Dobson & Ringrose, 2016; Mendes et al., 2019; Ringrose, 2013; Ringrose et al., 2012). Phase three of the research design is an evaluation process, beginning in October 2020 and continuing during the writing up of this chapter. To promote our policy documents and guidance, we held five webinars between October and November 2020, attended by approximately 50 school leaders, teachers, safeguarding leads and other Personal, Social, Health and Economic (PSHE) education and Relationships and Sex Education (RSE) organisations around the UK.

1 Details of the Digital Sexual Cultures Feminist Research and Engagement Consortium can be found here: https://www2.le.ac.uk/departments/media/research/featured-projects-1/digital-sexual-cultures-feminist-research-and-engagement-consortium.
2 We are very grateful to Sophie Whitehead, our research assistant, for conducting the safeguarding interviews and writing up the initial report of the findings.
From these events, we gathered 13 online questionnaire responses from participants, evaluating their understanding of the policy, how to put it into practice, and its usefulness for dealing with issues surrounding OSH in secondary schools. We have also gone back and conducted a semi-structured interview with one of the original seven safeguarding leads, whose school adopted our OSH policy shortly after it was launched in
May 2020. Further interviews are planned but have yet to be carried out owing to increased pressures on teaching staff during Covid-19. In the sections below, we begin by outlining key literature on young people’s experiences with OSH in school settings, before moving to our empirical findings with safeguarding leads, detailing information in our policy and guidance documents, and concluding with feedback from our evaluation process.
Overview of Young People’s Experiences

As noted in the introduction, during lockdown for COVID-19, there were widespread concerns about increased domestic and sexual abuse, including that directed at children, with the Internet Watch Foundation (IWF) reporting a 50% increase in public reports of child sexual abuse (Internet Watch Foundation, 2020). One of the issues that we faced, however, is that organisations responding to gender and sexual harm online have significant gaps in their scope. For instance, the Revenge Porn Helpline only offers services to those aged 18+, while CEOP and the IWF focus on child exploitation by adults, rather than abuse from peers and other children. In 2019 we worked with 144 young people aged 11–18 years old exploring networked image-sharing practices, finding that over 70% of girls in our qualitative study had been pressured to send nudes and had been sent an unwanted dick pic (Ringrose et al., 2021c). Following up on these findings, during the first nation-wide UK lockdown and into the first return to school in 2020, our consortium of feminist researchers, in conjunction with the SSE, launched a survey to learn more about young people’s image-sharing practices. Between March and November 2020 we collected responses from 336 young people across the UK. The survey looked at girls’ and boys’ experiences as well as those of gender-diverse students, finding that 37% of those who defined themselves as girls said they had been sent unwanted sexual images online, compared to 20% of boys. What’s more, 32% of girls said they had been sent an unsolicited penis photo (a ‘dick pic’), compared to 5% of boys. The majority of girls who received such images reported feeling ‘disgusted’ and ‘confused’
by them (Ringrose et al., 2021b). These findings are similar to those of a YouGov dick pic survey (Smith, 2018), which found that 41% of millennial-aged women (18–24) had been sent an unsolicited penis image, with nearly three-quarters saying they found it ‘gross’. In spite of these negative feelings, the number of young people who reported their experiences was very low, with only 6% reporting it to the social media platform, 3% telling parents and a mere 1% reporting it to their school. Reporting rates had been similarly low in our qualitative findings from the year before; girls typically said they did not feel they could report to anyone or anywhere and lacked knowledge of (or faith in) the reporting functions on the platforms themselves. Taken together, our combined qualitative and quantitative findings with 480 12–18 year olds (336 in the survey and 144 in focus groups) indicate a failure of reporting mechanisms and a lack of support for young people in mitigating these online harms, with particularly grave implications for reporting at school. They indicate a real need for schools to provide comprehensive digital sex education which addresses issues of online sexual harassment such as IBSA and cyberflashing (unwanted dick pics). These findings were echoed by our interviews with safeguarding leads, as discussed below.
The Problem of Image-Based Sexual Abuse in UK Schools: Interviews with Safeguarding Leads

From January to March 2020, we conducted interviews with seven safeguarding leads in UK secondary schools, which were critical to identifying and understanding the nature and scope of the problem regarding mobile devices, social media platforms and IBSA-related cases in a school context. All seven safeguarding leads interviewed said they were increasingly having to deal with issues caused by social media, with one head of year suggesting that they now spend around 60% of their time dealing with cases of social media use and abuse among the student population. Emerging very strongly across our interviews is the impact of the
erosion between online and offline on student life. One safeguarding lead informed us that Monday mornings were an especially busy time for them, as students (and their parents) brought social media conflicts from the weekend (which included bullying incidents, miscommunications and social maligning/ostracising) into the school setting. Despite four out of the seven schools banning mobile phone use entirely during the school day, issues arising from student engagement with smartphones and social media platforms were unavoidable in a school environment. While schools handled a wide range of social media issues, the sharing of sexually explicit images was cited as a new and pressing issue. All seven safeguarding leads gave examples of OSH and five of these recounted specific instances of IBSA, including a case of sexual imagery of a young woman being shared among several students without her consent, and cases of young people in relationships exchanging photos of an intimate sexual nature, only for those images to be viewed (even if not always shared on devices) non-consensually by others following the relationship breakdown. The school response to these cases tended to be strongly reactive and one of the key themes to emerge from the interviews was the need for a more concerted, proactive approach to tackling the problem of OSH. In the absence of specific policies regarding online sexual harassment, the educators we spoke to had to use existing internet, e-safety or anti-bullying policies to guide their piecemeal response to complex incidents of image-based sexual abuse among their students. Despite the growing concern in schools over how to handle cases of sexual image sharing, it is notable that the safeguarding leads we interviewed were unfamiliar with using a frame of online sexual harassment to understand young people’s experiences. Reference was instead made to ‘cyberbullying’, ‘revenge porn,’ ‘sexting’ or just ‘sexual images’. 
The lack of vocabulary to talk about this form of digital violence was symptomatic of wider lacunae in knowledge around the gendered nature of image-sharing and online sexual harassment, which our policies seek to address (as will be discussed below). When asked about the gendered nature of social media use, the safeguarding leads appeared uncertain about how the concept of gender might be a determining factor in digital processes and behaviours. Gender was instead mainly framed in relation to boys’ tendency to get into disagreements over gaming, and girls’ tendency to
resort to name-calling and interpersonal conflicts on social media platforms. The gendered power dynamics of image-based sexual abuse were rarely considered. Although there was awareness of ‘sexting’ as a practice among their secondary school students, a nuanced understanding of consent was absent. Sending a nude image tended to be rather simplistically framed as something that was ‘not appropriate’ and should not be ‘done in the first place’, a common attitude to sexting as deviant behaviour in health frameworks (Döring, 2014). This abstinence-based approach was, in certain instances, accompanied by a degree of victim-blaming, however unintentional, which framed the consensual production and sharing of sexual images as the inherent problem (rather than the subsequent sharing of the images in a non-consensual manner). Safeguarding leads also spoke of the struggle to manage and track the escalation of image spreading between peers within the school. One lead discussed a particular case that spread like ‘wildfire’ in their all-girls’ academy:

We’ve got an ongoing case at the moment with a student whose image has been shared by other students. We don’t know how the other students got hold of the image. It’s very possible that she might have sent it on in the first place…we don’t know but it’s had a lot of ramifications to the point that she’s now school refusing…so it’s had a huge impact and the police have actually been involved in this.
The rapid spread of the image made it difficult to track everyone involved, but the school called in the police to speak to the first seven students identified within the first 24 hours of the initial image share. Police spoke to the students ‘about the risks of doing this’ as a warning; they also spoke to the victim and her family, although the school was not present for this. There is an understandable anxiety on the part of teachers and safeguarding leads about how to deal with cases regarding sexual image sharing, especially in regard to the legal implications of child sexual imagery. One deputy head teacher from a mixed-gender independent day and boarding school spoke of the ‘terror, particularly for parents’, around youth-generated sexual imagery being treated as pornography. Most of
the leads were sensitive to the fact that young people should not necessarily be criminalised for generating sexual imagery. As the same deputy head teacher explained, while he was aware of the sensitivity around the matter, the decriminalisation of youth-generated sexual imagery would help to shift the conversation in schools away from the legalistic and punitive towards a more productive safety-based discussion. Indeed, one of the key imperatives highlighted by all of the safeguarding leads is the need to educate students about healthy communication online. It is worth quoting one of the deputy head teachers on the progression and escalation of negative online behaviours through the school years:

One of the biggest issues we’ve identified is in the lower end of the school so year 7, year 8, which is about healthy relationships, positive communication and cyberbullying…and sharing information without consent. So, they’ll talk about each other’s personal lives without any idea of the fallout of that and obviously the long-term consequences of that being online forever. As you move through the school obviously it changes so you end up in year 10 to 11, going into sixth form, you end up with youth generated sexual imagery, sexual grooming, and that kind of stuff.
Although we aim to shift the use of catastrophic language and thinking that instils fear in young people about images and comments ‘being online forever’, what is clear from this commentary is how important it is to build learning around consent into the curriculum from an early age. For the teachers and safeguarding leads we spoke to, what was urgently needed was more support and guidelines for them to follow on image-sharing and the ‘online footprint’. As the safeguarding lead at an all-girls’ school told us:

I think that the policy and guidance needs to be taught to the students and they need to understand that it is a form of abuse as opposed to the normalisation of [it], you know this is what I have to cope with.
This observation about the need to identify and label forms of online image sharing as abuse corresponds to feminist research on how the unsolicited ‘dick pic’, for example, has become normalised by many
T. Horeck et al.
teenaged girls as something they have to tolerate. As one study revealed (Ringrose et al., 2019, p. 271), a focus group of 15-year-old girls, all of whom had received unsolicited dick pics, ‘said they would not bother reporting the incident either to the social media company or school, as it was simply a normal tedious part of life’. Just as there needs to be education for students about what online behaviours constitute sexual harassment, and what to do if they experience any form of online harm, so too there needs to be continual education and training for teachers on the realities of what students have to contend with in the rapidly changing digital world they inhabit. As one teacher said in specific reference to the issue of dick pics, it is necessary for people who work in secondary schools to understand ‘what the students have to go through and the pressures and the shock value of receiving an image on their phone [dick pics]’. Developing strategies for digital defence, and finding ways to support educators in implementing best-practice approaches to understanding, preventing and managing young people’s experiences of online sexual harassment, is the main aim of our set of school policies on Online Sexual Harassment, which includes developing understanding and literacy around framing unwanted dick pics as cyberflashing. Given our empirical data from safeguarding leads and pupils, in the section below we outline our efforts to create resources for schools and pupils on how to manage and combat the problem of online sexual harassment.
Online Sexual Harassment (OSH) Policy Resources

In early 2020, we began to work closely with the School of Sexuality Education, the Association of School and College Leaders, legal expert Professor Clare McGlynn (Durham University), secondary school safeguarding leads, heads of schools and teachers to develop a series of resources to tackle online sexual harassment. Although the resources are aimed towards secondary school students, we recognise that primary schools can draw lessons from these documents. That being said, we
argue that tailored resources are needed to address OSH specifically in the context of primary schools. The resulting documents are based on insights and research findings derived from our many years of experience working with young people around digital sexual cultures and violence in school settings (see Ringrose, 2008; Ringrose et al., 2012), research conducted from 2014 to 2016 on digital resistance to rape culture, including among young digital activists (Mendes et al., 2019), recent research on image-sharing practices among young people, including the rise of the ubiquitous ‘dick pic’ (Ricciardelli & Adorjan, 2018; Ringrose et al., forthcoming), and interviews with seven safeguarding leads across the UK detailed above. The documents were produced through an iterative process, going through multiple re-drafts after input from key stakeholders, particularly safeguarding leads, head teachers and secondary school teachers. Access to these groups was facilitated by the ASCL and the SSE. Special consideration was given to ‘translating’ academic research findings into a lay context that non-specialists, including secondary school students, could digest, and to how this information was all the more urgent in light of the Covid-19 pandemic, in which young people were spending increased time online. The end result was three documents. The OSH Comprehensive Guidance for Schools (School of Sexuality Education et al., 2020a) provides detailed advice for schools on how to deal with the issue of online sexual harassment. This document lays out the complete context of online sexual harassment, including who is likely to be a target, how the abuse might be manifested and the impact each type of harassment may cause to pupils.
Furthermore, the document provides an overview of laws relating to online sexual harassment, and demonstrates why a whole-school, joined-up approach is necessary, alongside a move to consent-oriented education, replacing the current abstinence-based approaches, which do not work. The document concludes with key recommendations for schools, including the need for dedicated Relationships and Sex Education (RSE) and Personal, Social, Health and Economic (PSHE) education lessons to address this issue, and zero-tolerance stances on (online) sexual harassment. For staff, the document
offers recommendations on what to do if students encounter these issues, how to support them, and options for formal and informal reporting. The second document is the OSH Guidance for Students (School of Sexuality Education et al., 2020b), which provides easy-to-digest guidance, visualised through flowcharts, for students who experience online sexual harassment. The document provides step-by-step guidance for students to understand the nature of OSH, when it may constitute a criminal offence, information about their rights and how to safely and anonymously report these practices. Furthermore, it provides information on ‘digital defence’, such as how to manage social media apps, including increased privacy settings and information on how to report harmful content to the app. The document provides detailed information on organisations which might be able to help: the IWF, which removes harmful images of children from the internet and can stop images from being re-uploaded; CEOP, for reporting instances of online sexual abuse and grooming; and Report Harmful Content, for reporting bullying, harassment, pornographic content or unwanted sexual advances online. The final document is the OSH School Policy (School of Sexuality Education et al., 2020c), which seeks to support senior leaders in ensuring that all school staff and students understand what ‘online sexual harassment’ is, and what mechanisms, policies or practices either enable it or prevent victims from reporting it. The document argues that a whole-school, joined-up approach is needed to simultaneously tackle online sexual harassment and support victims. This means schools should have dedicated PSHE and RSE sessions on the topic, address it via assemblies and ensure staff are well trained and well informed. Indeed, the policy argues that all staff should have a baseline understanding of what constitutes OSH, and be aware of recommended responses and curriculum approaches.
Senior leaders should ensure that policies and curriculum support a message of equality and a zero-tolerance stance on OSH and violence for children and young people of all genders and sexualities. They should also recognise that, while it is possible for all young people to experience these issues, these practices are highly gendered. As such, an understanding of gender and sexual inequity is essential to help young people understand and challenge online sexual harassment. In
sum, the policy equips staff with knowledge needed to deal with disclosures. Finally, it discusses how curriculum in RSE and other subjects can address issues related to OSH, including ensuring young people understand what constitutes online sexual harassment, their rights and how to seek support. This document is designed to be read alongside the Online Sexual Harassment Comprehensive Guidance.
Evaluating the OSH Policy

Our OSH policy was made publicly available in May 2020 and disseminated to local schools through our contacts at SSE and ASCL. We held an official launch in October 2020, bringing together members of the academic team, ASCL, SSE and a Deputy Head Teacher at one school that adopted the policy shortly after it was launched. From October to December 2020, we ran five webinars for teachers, Senior Leaders and Newly Qualified Teachers (NQTs) on the policy, advertised through ASCL’s network of 19,000 school leaders across the UK. Although the policy evaluation is an ongoing process, and one that has been hindered by lockdowns caused by Covid-19, our preliminary data indicate that schools adopting the policy, and a whole-school approach to the issue of online sexual harassment, see dramatic improvements in the form of fewer OSH incidents, greater teacher confidence in handling these issues and improved student mental health. Indeed, as one safeguarding lead stated, since adopting our policy and taking on a rights-based approach to RSE more generally, their school has seen a dramatic improvement in students’ mental well-being and fewer instances of unsolicited nudes being sent and received. As this safeguarding lead stated, their school went from dealing with issues around ‘dick pics’ once a week to once a term. Taking a cue from the policy and guidance, he credited open access to support: ‘Just having an open and honest access to pastoral care, the results have been phenomenal’. When pressed further on the most useful part of these resources, the safeguarding lead stated it was the guidance for students. This is because it wasn’t ‘just used by students, but it’s used by tutors and chalk faced teachers. It tells students and staff where to go [to find resources they
need to support them]’. The safeguarding lead said that teachers learnt about resources and students’ rights from this guidance, and found it useful to see their options laid out so clearly. We know from interviews with safeguarding leads, and from our experience delivering RSE teacher training programmes in the UK, that many teachers feel uncomfortable addressing these issues. This discomfort was at times entwined with issues around faith, and with abstinence approaches to youth sexuality. As one safeguarding lead indicated, discussing issues relating to OSH is often particularly worrying for older members of staff, those who are less IT literate, and those who feel ‘genuine embarrassment’ discussing such personal details with young people. As our safeguarding lead told us, staff often ‘worry about saying the wrong thing’ and either making things worse or being ‘on the wrong end of a disciplinary hearing’. It became clear that pushback from parents was a complicated reality of introducing rights-based RSE into school curriculums, with some parents worrying that the content would make children more sexually promiscuous. As our safeguarding lead told us, in having conversations about the new curriculum and school policies, it was important to show that ‘The policies aren’t there to make sure children find loopholes, but to help them understand themselves and therefore become more holistically sound human beings’. While this safeguarding lead said that most parents were open to having dialogues around these issues, this was certainly not always the case. The reality that some children will be excluded from RSE content is therefore something that schools must contend with, and we anticipate this will be a barrier that we will face as we encourage schools across the UK to adopt our policies and guidance.
What is encouraging, however, is that 84% of our teacher survey participants stated they would do something different in their school relating to OSH because of what they learnt in our workshops. This included practices such as sharing information with colleagues (100%), reviewing existing RSE resources (53%), accessing additional training (46%), reviewing the RSE curriculum and training (46%), implementing our OSH policy (23%) or reviewing existing policies and procedures (7% each) in line with our recommendations. While our evaluation of these resources is still ongoing, our findings show that having a whole-school approach
with better-trained staff and an educational curriculum that takes a rights-based approach to these issues, supported by evidence-based policy guidance, is an important step in reducing instances of online sexual harassment and ensuring perpetrators are held accountable for their actions.
Conclusion

We write this chapter during yet another period of extensive lockdown in the UK. There are many reasons to be worried about teenagers during—and after—extended periods of lockdown. They have been socially isolated for extended periods of time now, on and off, for over a year. They are having to self-motivate to learn and organise their school days from home, something that most adults would struggle to do. Many of them are facing mental health problems that are likely to have a long-lasting impact. Amidst everything that has to be contended with, it might seem that something like Relationships and Sex Education should fall far down the list of things for schools to be concerned about. Indeed, recognising the pressures of the pandemic, the UK Department for Education delayed the compulsory introduction of the new RSE curriculum, which was supposed to be introduced in schools in September 2020 (Busby, 2020; Department for Education, 2020). While the attempt to be flexible about introducing the new RSE curriculum is understandable, our research into young people, digital sexual cultures and online sexual harassment tells us that RSE, with its emphasis on students’ digital lives, needs to be prioritised as a matter of urgency. Alongside others (the WHO, Plan International UK), we have argued that the pandemic has worsened gender and sexual risks online, with the scope for online gendered harms and forms of sexual harassment, such as image-based sexual abuse and cyberflashing, increasing significantly. As noted, our survey of 336 students during the first phase of lockdown showed that at least 37% of girls and 20% of boys had been sent unwanted sexual images online. Our in-depth qualitative research indicated that girls reported much higher rates when in the safe space of a qualitative workshop environment where they could discuss these
issues with researchers and sex educators. It was also revealing that young people almost never felt comfortable reporting these episodes to their schools because of fear of recrimination and further victimisation. These findings around the nature and prevalence of online harms are important to acknowledge as a daily reality for some young people during and after the pandemic. With only a phone, tablet or laptop for company, young people’s use of social media apps like Instagram has inevitably increased, in tandem with a rise in body image struggles and anorexia admissions at paediatric hospitals (Haripersad et al., 2020). This sits alongside a myriad of challenges to young people’s education. In the vast majority of cases, school leaders, teachers and students are doing their best with what they have, but ‘what they have’ can vary dramatically depending on schools’ resources, the actions of government and the socio-economic situations of young people and their families. As well as having to manage the constant trials that glitching tech and video lessons can bring, seeking informal advice from a teacher—about online harassment, about body image issues, about mental health—becomes more difficult in online settings where organic conversations are less likely to take place. As a research team working with a range of third-sector stakeholders, we have developed a series of policies and guidance documents which move beyond the limitations of the law in order for schools and other service providers to protect children at a time when online technological harms, gendered hate, online misogyny, xenophobia, ableism and racism are on the rise.
Furthermore, our recommendations include the delivery of ‘digital defence’ lessons, where young people can be taught how to use and manage social media apps, be referred to helpful resources or activism pages, be shown how to apply certain privacy settings on different platforms and be pointed to the reporting and blocking functions on the apps they use. Conceptually, it is also valuable to teach young people about how human rights, consent and respect all apply in online spaces just as much as they do away from the keyboard. Although our policy focuses on the online dimensions of RSE, it is also important to remember the other key elements which young people need to know: anatomy, building positive relationships, conflict resolution, sexual health, reproductive health, body image, consent. All of these
topics are still essential learning for teens. We recognise that not everything can be covered in this challenging time, so we advocate for RSE and PSHE more broadly not only as a vital area of learning now, but as a pathway to recovery in the future. We as researchers, educators and activists must bear this in mind when school systems begin a return to ‘normality’; we must look to prioritise young people’s recovery and wellbeing above all else, and PSHE and RSE can function as valuable tools towards this goal. More research is needed to evaluate the impact and effectiveness of our guidance and resources, but the signs so far are promising, with teachers feeling that our resources are or will be helpful, and that they are necessary to create a culture and climate shift within schools that recognises the highly gendered and sexualised nature of harms happening to young people.

Acknowledgements The authors would like to thank the University of Leicester for QR funding and the AHRC (AH/T008938/1) for supporting some of the research underpinning this chapter.
References

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.
Busby, E. (2020). Schools can delay teaching compulsory sex education lessons until Summer 2021. Msn.com, June 4. https://www.msn.com/en-gb/news/uknews/schools-can-delay-teaching-compulsory-sex-education-lessons-untilsummer-2021/ar-BB152tr0.
Davies, S. (2020). Revenge porn soars in Europe’s coronavirus lockdown as student fights back. Reuters, 5 May. https://www.reuters.com/article/us-health-coronavirus-europe-porn-trfn-idUSKBN22H2I6.
Department for Education. (2020). Relationship and Sex Education (RSE) and health education. https://www.gov.uk/government/publications/relationships-education-relationships-and-sex-education-rse-and-health-education.
Dobson, A. S., & Ringrose, J. (2016). Sext education: Pedagogies of sex, gender and shame in the schoolyards of Tagged and Exposed. Sex Education: Sexuality, Society and Learning, 16(1), 8–21.
Döring, N. (2014). Consensual sexting among adolescents: Risk prevention through abstinence education or safer sexting? Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 8(1), Article 9. https://doi.org/10.5817/CP2014-1-9.
Haripersad, Y. V., Kannegiesser-Bailey, M., Morton, K., et al. (2020). Outbreak of anorexia nervosa admissions during the COVID-19 pandemic. Archives of Disease in Childhood. https://doi.org/10.1136/archdischild-2020-319868.
Internet Watch Foundation. (2020). ‘Definite jump’ as hotline sees 50% increase in public reports of online child sexual abuse during lockdown, 16 July. https://www.iwf.org.uk/news/%E2%80%98definite-jump%E2%80%99-as-hotline-sees-50-increase-public-reports-of-online-child-sexual-abuseduring.
Kreger, M., Brindis, C. D., Manuel, D. M., & Sassoubre, L. (2007). Lessons learned in systems change initiatives: Benchmarks and indicators. American Journal of Community Psychology. https://doi.org/10.1007/s10464-007-9108-14.
McGlynn, C., & Johnson, K. (2021). Criminalising cyberflashing: Options for law reform. Journal of Criminal Law, 85(3), 171–188.
McGlynn, C., & Rackley, E. (2017). Image-based sexual abuse. Oxford Journal of Legal Studies, 37(3), 534–561.
Mendes, K., Ringrose, J., & Keller, J. (2019). Digital feminist activism: Girls and women fight back against rape culture. Oxford University Press.
Plan International UK. (2020). https://plan-uk.org/media-centre/new-researchshows-barrier-to-girls-rights-exacerbated-by-coronavirus-crisis.
Ricciardelli, R., & Adorjan, M. (2018). ‘If a girl’s photo gets sent around, that’s a way bigger deal than if a guy’s photo gets sent around’: Gender, sexting, and the teenage years. Journal of Gender Studies. https://doi.org/10.1080/09589236.2018.1560245.
Ringrose, J. (2008). ‘Every time she bends over she pulls up her thong’: Teen girls negotiating discourses of competitive, heterosexualized aggression. Girlhood Studies: An Interdisciplinary Journal, 1(1), 33–59.
Ringrose, J. (2013). Postfeminist education? Girls and the sexual politics of schooling. Routledge.
Ringrose, J., Gill, R., Livingstone, S., & Harvey, L. (2012). A qualitative study of children, young people and ‘sexting’: A report prepared for the NSPCC. National Society for the Prevention of Cruelty to Children.
Ringrose, J., Mendes, K., Whitehead, S., & Jenkinson, A. (2021a). Resisting rape culture online and at school: The pedagogy of digital defence and feminist activism lessons. In Y. Odenbring & T. Johansson (Eds.), Violence, victimisation and young people: Education and safe learning environments (pp. 129–154). Springer.
Ringrose, J., Regehr, K., & Milne, B. (2021b). Understanding and combatting youth experiences of image based sexual harassment & abuse. School of Sexuality Education & Association of School and College Leaders. https://schoolofsexed.org/ibsh-ibsa-report.
Ringrose, J., Regehr, K., & Whitehead, S. (2021c). Teen girls’ experiences negotiating the ubiquitous dick pic: Sexual double standards and the normalization of image based sexual harassment. Sex Roles. https://doi.org/10.1007/s11199-021-01236-3.
Ringrose, J., Whitehead, S., Regehr, K., & Jenkinson, A. (2019). Play-Doh Vulvas and Felt Tip Dick Pics: Disrupting phallocentric matter(s) in sex education. Reconceptualizing Educational Research Methodology, 10(2–3), 259–291. https://doi.org/10.7577/rerm.3679.
School of Sexuality Education, Mendes, K., Ringrose, J., & Horeck, T. (2020a). Online sexual harassment comprehensive guidance for schools. School of Sexuality Education. https://static1.squarespace.com/static/57dbe276f7e0abec416bc9bb/t/5f86b37c409ee95b26cf27e6/1602663308003/School+of+Sex+Ed+OSH+Comprehensive+Guidance.pdf.
School of Sexuality Education, Mendes, K., Ringrose, J., & Horeck, T. (2020b). Online sexual harassment guidance for students. School of Sexuality Education. https://static1.squarespace.com/static/57dbe276f7e0abec416bc9bb/t/5f86b397b2e54d10a44ae46f/1602663355088/School+of+Sex+Ed+OSH+Guidance+for+Students.pdf.
School of Sexuality Education, Mendes, K., Ringrose, J., & Horeck, T. (2020c). Online sexual harassment school policy. School of Sexuality Education. https://static1.squarespace.com/static/57dbe276f7e0abec416bc9bb/t/5f86b3b1117dd72a57c88d48/1602663347896/School+of+Sex+Ed+OSH+School+Policy.pdf.
Smith, M. (2018). Four in ten female millennials have been sent an unsolicited penis photo. YouGov, 16 Feb. https://yougov.co.uk/topics/politics/articlesreports/2018/02/16/four-ten-female-millennials-been-sent-dick-pic.
32 “GirlsDoPorn”: Online Pornography and Politics of Responsibility for Technology Facilitated Sexual Exploitation Ashlee Gore and Leisha Du Preez
Introduction

Using force and coercion, pornography company GirlsDoPorn defrauded and trafficked a number of women between 2010 and 2019. In 2020, a U.S. court ruled in favour of 22 women who had sued the company directly for damages. However, there remains an ongoing concern about platforms, like Pornhub (owned by tech company “MindGeek”), that host such content. Indeed, there is little consensus about the responsibilities of technology companies to mitigate or prevent harm perpetrated through their platforms, often implicitly reinforced by an assemblage of
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_32
socio-cultural discourses which responsibilise girls and women instead. In this context, it is not a coincidence that technology companies and online platforms like Pornhub/MindGeek proved so indifferent to the suffering of women exploited on their site. This chapter builds an account of how the “technological rationality” (Marcuse, 1985; Salter, 2018, p. 256), or the particular logics and values instantiated within technological platforms, intersects with broader cultural logics of postfeminism and popular misogyny in ways that can promote instrumental attitudes and exploitative relations in the production of image-based abuse. The chapter begins with a brief overview of the facts of the case, drawn from the lawsuits lodged against GirlsDoPorn (Jane Doe Nos 1–22 v. GirlsDoPorn, 2019) and MindGeek by up to 40 women (Jane Doe Nos 1–40 v. MindGeek, 2020). From here, we argue that the logics and values embedded in the technological design of MindGeek’s pornography-sharing social platforms reflect a reactive form of popular misogyny, which intersects with technocapitalist values. In this context, the proliferation of technology-facilitated image-based abuse is not simply indicative of a failure of these platforms to take responsibility for regulating content. Rather, in the dialectic relationship between popular misogyny and postfeminism, corporations such as MindGeek and GirlsDoPorn co-opt and remobilise strategic elements of “postfeminist” discourse, including problematic entanglements of sexualisation, empowerment, choice and responsibilisation, turning them against women in ways which capitalise on image-based abuse and exploitation as a spectacle of humiliation. Such an analysis offers a way of understanding the intersections of cultural and technological factors that facilitate gendered exploitation and an abnegation of responsibility.
Making “GirlsDoPorn”: Consent, Coercion and Fraud in the Production and Distribution of Online Pornography

Launched in 2009, GirlsDoPorn, owned and operated by Michael Pratt and Douglas Wiederhold, claimed to feature 18–22-year-old women in amateur pornographic videos. The allure of the videos, according to Pratt, was the fact that they only featured “normal everyday” girls, “first timers” with “no prior experience” in pornography, and no intention to pursue further pornographic work (Jane Doe v. GirlsDoPorn.com, 2019 at 115). The videos produced were intended to be distributed on the tubesite “Pornhub”, which in 2020 alone was the 8th most visited website in the United States and the 10th most visited in the world (Khalil, 2020), with over 42 billion visits in 2019 in the United States and 14 million free-to-view videos. This was facilitated through a formal partnership with Pornhub’s parent company “MindGeek”, which promotes itself as a technology company and a “leader in the design, development … and management of highly trafficked websites” (MindGeek, 2021). As members of MindGeek’s “Content Partner Program”, GirlsDoPorn received a personalised channel on a range of sites operated by MindGeek, including Pornhub, which hosted 70 GirlsDoPorn videos with more than 700,000 subscribers. MindGeek generated millions of dollars in affiliate fees and premium subscriptions from selling, marketing and exploiting videos featuring victims of GirlsDoPorn’s sex trafficking venture (Jane Doe v. MG Ltd, 2020 at 111). In order to procure “normal”, “everyday” girls for these videos, GirlsDoPorn relied on several strategies of deception, fraud, coercion and violence. These included bait-and-switch advertisements for clothed modelling, in which the defendants posed as a modelling agency. In their advertisements, they claimed to be seeking models, and included links to www.beginemodelling.com or www.modelinggigs.com, with no reference to pornography work.
Some of the women who inquired about the job were convinced over the phone to agree to film a pornographic video, but under the express agreement that the videos would go to DVD for an overseas-based private collector, that they would not be
posted online or distributed in the United States, and that the women would not be named in the video. As part of the deception, other young women were enlisted to act as “reference models” and assure the women of the security, limited distribution and anonymity of the process. Other women had no idea the job would entail filming pornography until they arrived at the site. Once the women arrived at the hotel room for the job, they were pressured to drink alcohol and/or smoke marijuana and were then instructed to take off their clothes. If they refused to shoot the film, the defendants threatened to sue the women, and threatened to cancel their return flight, which the defendants had booked and had control over (Jane Doe v. GirlsDoPorn.com, 2019 at 51). Once naked and confined in the hotel room, the women were coerced into signing contractual documents without being allowed to read them (at 52). The resulting videos often demonstrated the extreme distress of the women, including footage of them crying, bleeding and trying to lock themselves in bathrooms away from the defendants. One of the women recounted that she gagged, vomited and choked, and was in so much pain she started to cry. The defendants told her to finish the scene, that she needed to “get over it” and that they would keep re-recording the scenes until she looked “into it” (at 68). After the videos were distributed online, the women also alleged that the defendants strategically leaked the women’s real names through the website PornWikileaks, which they bought in November 2015. The posts on PornWikileaks contained links to the women’s social media accounts, their families’ social media accounts, high school information and other personal information. Posts also included five-to-ten-minute “trailer videos” of the women, embedded with advertisements for their subscription websites where the full videos could be viewed.
This appeared to have been designed by the defendants to make the videos go viral, but it also resulted in third parties using this information to stalk, harass and blackmail the young women and their families (at 56). From as early as 2010 until they were shut down in October 2019, GirlsDoPorn openly admitted to using deceptive and fraudulent advertising to attract young women. It was not until October 2019, when
GirlsDoPorn was shut down, that MindGeek’s partnership with the company ceased. Evidence was presented in the civil case against GirlsDoPorn that by at least 2016, and possibly as early as 2009, MindGeek would have been aware that GirlsDoPorn was engaging in fraudulent, coercive and abusive practices. Following this evidence of MindGeek’s apparent complicity, in 2020 the same women who sued GirlsDoPorn lodged a legal case against Pornhub/MindGeek. The women allege that Pornhub and parent company MindGeek knew of the charges against GirlsDoPorn. Despite this knowledge, MindGeek never investigated or questioned its business partner regarding the mounting evidence of sex trafficking and continued to profit from the videos. The lawsuit also claims that the MindGeek-owned websites did not comply with dozens, if not hundreds, of requests by women to take down the videos. MindGeek continued its partnership with GirlsDoPorn until October 2019, when the Department of Justice shut down GirlsDoPorn by arresting and indicting its principals. At this point, according to the legal complaint, there was no longer a company left for MindGeek to partner with. Even after severing its partnership with GirlsDoPorn, MindGeek continued to profit from the women’s videos. As of December 2020, MindGeek was still hosting victims’ videos on its websites (Brodkin, 2020). The videos were surrounded by hyperlink advertisements that, if clicked, redirected visitors to MindGeek’s paysite. Digital media platforms across the board have been slow to respond to the problem of online abuse, persistently representing themselves as neutral intermediaries. Indeed, there is little consensus about the responsibilities they have to mitigate or prevent harm perpetrated through their platforms (Suzor et al., 2018).
With the exception of the Online Harms Bill currently being implemented in the United Kingdom, digital platforms are generally not legally responsible when their networks are used in abusive ways; this is particularly so in the United States, where many major Internet companies are based (Pavan, 2017). The legal contours of MindGeek’s responsibility remain unclear, with the lawsuit still in process. The women suing MindGeek are relying on section 1595 of the US Trafficking Victims Protection Reauthorization Act 2008, which allows victims of trafficking to bring civil actions against perpetrators, or against those who benefit when they “knew or should have known” about the
crimes. MindGeek’s defence against the lawsuit could cite Section 230 of the Communications Decency Act 1996, which gives legal immunity to online platforms that host user-submitted content. However, the lawsuit argues that the Fight Online Sex Trafficking Act (FOSTA), enacted in 2018, could override a Section 230 defence. It is not within the scope of this chapter to consider the legal dynamics of responsibility; rather, we are concerned with the assemblage of socio-cultural discourses at play that operate to responsibilise girls and women, while obscuring the responsibility of content hosts, digital media platforms and corporations.
Postfeminism, Popular Misogyny and Discourses of (Non)-Responsibility and Women’s Responsibilisation in the Digitised Pornography Industry

We locate these combined practices of exploitative production, distribution and apathy towards victims along a broader continuum of (men’s) violence against women (see Boyle, 2019; Kelly, 1988; McGlynn et al., 2017), which encompasses image-based sexual abuse and the nonconsensual production and distribution of pornography (McGlynn et al., 2017). These forms of abuse have largely been facilitated by technological developments and can therefore be conceptualised on a continuum of technology-facilitated sexual violence (Henry & Powell, 2016). The pornography industry has gone through a period of rapid expansion and change, particularly with increasing consumption levels and the shift to digital technologies (Forrester, 2016). With the rise of digital technologies, the pornography industry has also become more monopolistic, with MindGeek being the biggest distributor, owning all of the major pornography websites (called “Tubesites”). MindGeek has also bought many of the traditional pornography production companies, consolidating production and distribution in a single monopolistic owner (McVey et al., 2021). As free tube sites, the commerce model of these businesses works to build traffic, promote paid content and sell advertising (Dines, 2016; McVey et al., 2021). These changes reflect
broader shifts in the capitalist economy, sometimes referred to as “techno capitalism” (Wajcman, 2006), in which raw factory materials have been displaced by technological innovation. In an increasingly digitised economy, the core of production and profit instead becomes the endless traffic and circulation of content on digital platforms, or what has been called “platform capitalism” (Srnicek, 2016). Alongside these technology-based changes in the pornography industry have been important representational shifts in the meaning of pornography. This reflects what some scholars consider a sexualisation or “pornification” of culture (Attwood, 2009; McNair, 2013), in which the boundary between pornography and mainstream culture is increasingly blurred. This has been linked to contemporary postfeminist and neoliberal sensibilities, which operate through discourses of choice and empowerment (see Gill, 2007), denoting a shift from women portrayed as passive, desirable sexual objects (objectification) to active, confident, and desiring sexual subjects (subjectification) (Gill, 2003). Gill (2003, p. 5; 2007) refers to this as “sexual subjectification”, describing the ways in which contemporary neoliberal societies interpellate a feminine subject who is “incited to be compulsorily sexy and always ‘up for it’ … Beauty, desirability and sexual performance constitute her ongoing projects” (Harvey & Gill, 2011, p. 56). This new feminine subject is compulsorily required to display and perform “technologies of sexiness” and her power no longer derives from traditional feminine virtues, but from her bodily capital, sexual skills, and appropriately “made over” sexual subjectivity (Gill, 2011, p. 65). In other words, women are compelled to choose to present themselves in an objectified manner. 
For Gill (2003), this sexual subjectification represents a new “new sexism”, in which there is a deliberate re-sexualisation and re-commodification of bodies that relies on depictions of women as “knowing, active, and desiring”, marking a shift from an “external male judging gaze to a self-policing narcissistic gaze” (Gill, 2003, p. 104). In this context, far from providing a full range of sexual freedom and potential, this postfeminist sexual subjectification instead foregrounds “looking like pornstars” (Anderson, 2018, p. 31). This agentic, autonomous and desiring female subject is represented as no longer consigned to sexual objectification by men; instead, she is free to “choose” her sexual subjectification because it
pleases her liberated interests, which happen to coincide with discourses from heterosexual imagery and pornography aimed at men (Amy-Chinn, 2006). For Coy and Garner (2010), this is reflected in the rise in aspirations of girls to be “glamour models” over doctors, lawyers or teachers (see also Deeley, 2008). Glamour modelling typically refers to topless or nude modelling, or modelling that is otherwise more sexually suggestive than other forms, and is typically geared towards a male audience. To connect these issues, we draw on the notion of “technological rationality” (see Marcuse, 1964; Salter, 2018). The theory of technological rationality suggests that

neither technology nor users are the sole origin of online abuse, but rather their interaction is mediated by the dominative and instrumental rationality that characterizes the technological base; a rationality that is gendered and deeply entrenched in technological cultures, industries and associated subcultures. (Salter, 2018, p. 256)
For Salter (2018), the concept of technological rationality consolidates the co-constitution of the design of technology and the broader culture in which it is designed and operated. Drawing on this argument, we suggest that the platform design and administration of tube sites like Pornhub are clearly structured according to the demands of platform capitalism, in which profit is linked to a logic of visibility predicated on metrics of clicks, likes, shares and so on, on which the business models of digital media platforms depend. In this context, platforms curate content in response to user tastes, but in markets defined by male dominance and misogyny, this may provide an atmosphere which reinforces, and then actually elicits, tendencies to disrespect women (Barak, 2005). In this sense, we argue that a broader cultural logic of popular misogyny is simultaneously encoded into, and privileged by, the platform design and administration of MindGeek/Pornhub, and the exploitative practices of GirlsDoPorn. This produces an “enclosed set of social and technical arrangements that mirror the other” (Salter, 2018, p. 256) to normalise and capitalise on exploitative practices of production and consumption in a broader context of platform capitalism. Furthermore, we argue that in a dialectic relationship between popular misogyny and
postfeminism, these corporations not only crafted a position of non-responsibility, but relied on and redeployed postfeminist conceptions around sexualisation, choice and empowerment to responsibilise women for their own exploitation. We suggest that this inversion of the logic of choice, empowerment and responsibility became part of the spectacle of humiliation of the image-based abuse produced by GirlsDoPorn and distributed by MindGeek. Popular misogyny describes the reactive response to the visible forms of feminism in contemporary popular and media culture (see Banet-Weiser, 2018). The existence of “backlash” against feminism is well documented (see Faludi, 1991). However, for Banet-Weiser (2018) the current cultural climate extends beyond a conservative “backlash” to encompass a normative reaction to feminist constructions circulating overtly in the public sphere (Banet-Weiser, 2018; Gill, 2016). Misogyny here is normative and “popular” because it is expressed and practised on multiple popular media platforms, it attracts popularity within like-minded groups of individuals, and it manifests in a terrain of struggle with competing demands for power (Banet-Weiser, 2018, p. 2). However, it is inherently reactive, operating as a mirror to forms of visible feminism, taking up feminist themes but distorting their meaning and then redeploying them actively against women (Banet-Weiser, 2018, pp. 3–4). In this sense, popular misogyny represents a distinct kind of anti-feminism that operates specifically in a “postfeminist” context. For Gill (2007), postfeminism is distinctive because it is a response to feminism that is characterised by an “entanglement” of feminist and antifeminist discourses. McRobbie (2004) uses the term “double entanglement” to describe the “co-existence of feminism as at some level transformed into a form of Gramscian common sense, while also fiercely repudiated, indeed almost hated” (McRobbie, 2004, p. 262).
A key characteristic of this dialectic relationship between postfeminism and popular misogyny is the way that dominant themes of postfeminism are taken up, reframed and rearticulated as misogynistic statements and practices (Banet-Weiser & Miltner, 2016). This is frequently reflected in the way that postfeminist articulations of gender equality, choice and empowerment become appropriated in inverted narratives of men’s disempowerment (see Puente & Fuentes, 2017). We contend that the focus on
choice, empowerment and sexual subjectification were central themes that were appropriated and redeployed both to facilitate the exploitative practices of “GirlsDoPorn” and to construct a problematic cultural carte blanche for companies and platforms like Pornhub/MindGeek to take a position of (non)responsibility. This includes the intertwining of sexual subjectification and empowerment, and the interlocking of choice with responsibility and responsibilisation. Each of these is briefly considered in turn. The appropriation and redeployment of postfeminist sexual subjectification discourse was evident in many of the marketing statements from GirlsDoPorn originally displayed on their website. The way this was mobilised reflects the inherently reactive nature of popular misogyny, which simultaneously relies on, and ridicules, postfeminist sexual subjectivity in its exploitative production practices. For example, as of 2010 the GirlsDoPorn homepage indicated:

Girlsdoporn is the only website that uses only 100% amateur girls. There are a lot of websites out there that claim they have first timers only. I myself have joined these kinda websites and then days later started recognising the girls on other websites all over the internet and been dissapointed. This is why I built Girlsdoporn.com here you will find nothing but amateurs. I refuse too shoot any girls who have prior experience. All the girls you will find on my site are normal everyday girls you would find in the city streets - malls - colleges and normal 9-5 jobs. I personally hunt out each and every one for your viewing pleasure [sic]. (Jane Doe v. GirlsDoPorn.com, 2019 at 115)
This dual desire for and expectation that “normal everyday girls” engage in a process of pornification reflects the conditions of postfeminist sexual subjectification, in which everyday women are encouraged to embrace porn culture, always be “up for it”, and look and act like pornstars. However, in this case, the logic of sexual subjectification has been taken up and redeployed in a clear display of popular misogyny. In place of any reference to empowerment through sexual subjectification there is instead a move to “hunt out” and “instrumentalise women as objects” (Banet-Weiser, 2018) of men’s desire. Importantly, this is not just a straightforward display of the objectification of women for a pornified male gaze, rather, it needs to be understood in the context of the
reactive nature of popular misogyny and the broader cultural context of postfeminism. Take, for example, the following statement from the GirlsDoPorn homepage:

You would be surprised how quickly the offer of quick cash turns these girls into part time pornstars. Everything you read or see on this website is 100% real and true. We have no need to trick or lie to you.. ENJOY GUYS ! [sic]. (Jane Doe v. GirlsDoPorn.com, 2019 at 115)
According to Harvey and Gill (2011), in the wake of the “sexual revolution” and women’s movement, and alongside the acceleration and intensification of neoliberalism and consumerism, we have witnessed the rise of the “sexual entrepreneur” as a new feminine subjectivity. Evans and Riley (2011) found that for the everyday women in their study, female celebrities were figures of successful neoliberal entrepreneurial selves, with the capacity to make money from their bodies. This also echoes the core postfeminist shifts in representing women’s self-subjectification as agentic, knowing, desiring and entrepreneurial, instead of as the passive object of the male gaze (see Gill, 2007). Furthermore, this representation is supposedly severed from conventional feminine passivity through being performed wittingly and ironically (see McRobbie, 2009). In this context of contemporary neoliberal consumer capitalism, McRobbie (2001, p. 366) argues that young women have become “bearers to the new economy” and “creators of wealth”, enabled to “suspend ownership of their bodies and instead have the sense to sell them”. However, the GirlsDoPorn statement demonstrates both a reliance on, and a reaction against, women who use their bodily capital and conform to a postfeminist ethos of feminine entrepreneurial agency and sexual subjecthood. GirlsDoPorn, as a company specialising in “amateur” pornography, clearly relies on these feminine modes of sexual entrepreneurism from everyday women who may seek to use their sexual capital to make “quick cash”. However, the logic of postfeminist sexual entrepreneurism is simultaneously redeployed and ridiculed from a reactionary mirror perspective of popular misogyny. Just as the postfeminist sexual entrepreneur is presented as knowingly and ironically playing with her sexuality to her
advantage, here the assumption that women will become “part time pornstars” for money is presented by GirlsDoPorn with derision and a tone of “knowingly and ironically” telling an inside joke at their expense. The statement, we suggest, can be read as a strategic performance in taking advantage of the entrepreneurial postfeminist subject who willingly engages in sexual subjectification for self-profit. The postfeminist assumption that women may be empowered (personally or economically) through sexual subjectification is seemingly mocked (“You would be surprised how quickly the offer of quick cash turns these girls into part time pornstars”) and redeployed for the pleasure of the male gaze (“ENJOY GUYS”). The women, rather than being presented as the knowing and desiring sexual subjects of postfeminist culture, are reclaimed and (re)represented in popular misogynistic terms, as a means to an end, devalued and dehumanised (Banet-Weiser, 2018). Furthermore, while the statement reassures its imagined male audience that “we have no need to trick or lie to you”, it is difficult to ignore the ironic reality in which the content was openly procured through tricking and lying to the prospective women. This is further demonstrated in a GirlsDoPorn caption added to one of the videos on the website: This smokin hot 18 y/o teen named jessica was trying to find some money so that she could get a boob job done. She contacted us regarding an add I had placed for beauty models wanted, having no idea it was actually for adult videos instead ha :) [sic]. (Jane Doe v. GirlsDoPorn.com, 2019 at 115)
Again, there is an obvious invocation of an inside joke based on mocking women who had been tricked into performing in adult videos. However, there is also a more subtle derision of aspects of postfeminist sexual subjectivity. Within the regulatory discourses of postfeminism, young women are presented with a range of constructions and practices for embodied identities based on conceptions of empowerment via sexuality and consumption. In this context, the allure of breast enhancements and modelling jobs, while not wholly unproblematic, are relatively normalised aspirations. However, this has also created a space and a market for reactive forms of popular misogyny that both rely on, and ridicule women who participate in or mimic such sexualisation. For the
women exploited by GirlsDoPorn, the desire for plastic surgery (“[she] was trying to find some money so that she could get a boob job done”) and proclivity towards careers in modelling (“She contacted us regarding an add I had placed for beauty models wanted”) were repackaged and turned into part of the spectacle of humiliation (“[she had] no idea it was actually for adult videos instead ha”) to be consumed by their audience. This spectacle was strategically amplified through the affiliated PornWikiLeaks website, which also featured narratives from the women detailing the fraud and coercion used by GirlsDoPorn as part of its recruitment and filming process (Jane Does v MG FREESITES, LTD, 2020, at 107). The doxing forums, virality of the videos and publicly available videos on MindGeek’s Tubesites created significant traffic to GirlsDoPorn’s paysite, where it maintained ten to fifteen thousand subscribers per month (at 108). The staging, recording and distribution of these videos across online platforms such as Pornhub, through their content partnership with MindGeek, can all be read as part of this spectacle, which was further enabled by the platform’s indifference. Indeed, the women in their case allege that the parent company, and everyone else in the pornography industry, were acutely aware of PornWikiLeaks.com, its doxing practices and the culture of exploitation (at 107). This indifference was clearly present in the consistent disregard shown to the women who reached out to MindGeek/Pornhub for help. MindGeek’s Tubesites such as Pornhub include a takedown portal, which the victims of GirlsDoPorn repeatedly used to no avail. Below are just some of the takedown requests cited during the legal case, with some women making several consecutive requests:

Jane Doe No.11 Aug 8 2016: Reason: Im going to kill myself if this stays up here. I was scammed and told this was only going to be on dvds in another country. Please im begging you please ill pay!
Agree to Distribution: No. [sics in original]. (Jane Doe v. GirlsDoPorn.com, 2019 at 117)

Jane Doe No.11 Aug 13 2016: They scammed me and told me it was only going to dvds in another country. Please this is ruining my life. (Jane Doe v. GirlsDoPorn.com, 2019 at 118)
Jane Doe No. 11 May 31 2017: I WAS SCAMMED. THIS COMPANY LIED TO ME ABOUT THIS BEING ON THE INTERNET! THEY TOLD ME IT WOULD ONLY BE AVAILIBLE ON DVD IN AUSTRALIA. MY WORK FRIENDS AND FAMILY ALL KNOW AND THIS VERY LINK IS BEING SENT AROUND. I WANT TO JUST DIE [sics and capitalisation in original]. (Jane Doe v. GirlsDoPorn.com, 2019 at 119)

Jane Doe No.36 January 2016: That’s what I am trying to explain is that I did not consent to being online!!! :(((( me and other girls are being brutally harassed. [sics in original]. (Jane Doe v. GirlsDoPorn.com, 2019 at 121)

Jane Doe No.36 December 14 2016: I was told this video went to a private viewer, and now it is all over the internet. I was lied to, and this isn’t okay. I have reached out to them with no response. (Jane Doe v. GirlsDoPorn.com, 2019 at 120)
Evidence presented in the trial brief indicated that MindGeek received dozens, if not hundreds, of similar takedown requests from GirlsDoPorn victims over the years and never conducted an investigation of the repeated claims of fraud or coercion (Jane Doe v. GirlsDoPorn.com, 2019). The ability of MindGeek/Pornhub to ignore these takedown requests is embedded in a broader cultural logic that aggressively responsibilises girls and women, while obscuring conditions of coercion and exploitation. Indeed, a core aspect of postfeminist sensibilities is the notion that “all our practices are freely chosen” (Gill, 2007, p. 153). In the interlocking of feminist and antifeminist ideas with neoliberal invocations of freedom and choice, a new feminist subject is forged, who bears a new level of personal responsibility (see also Brown, 2005; Larner, 2000; Rottenberg, 2014). This notion of “choice”, with its attendant associations of freedom and equality, enables a kind of contract identity, where the subject, by virtue of “having chosen”, is then firmly bound to their choices, as being in some way reflective of their self. This tethering of the subject and choice embodies a new form of restriction, in which the circumstances of one’s life are always seen as the direct and logical result of one’s own decisions (Burns, 2015, p. 101). Here, patriarchal capitalism and the broader conditions of popular misogyny are
exculpated and, instead, the focus shifts to scrutinising women’s choices. The resulting inequality experienced by women is framed as the result of their own decisions. These discourses both hold women responsible for avoiding violence and victimisation, and negate their victimisation under the guise of individual failure, without recognising the structural conditions producing such inequality. McRobbie (2009, p. 51) refers to this predicament as the “new sexual contract”. Freedom and agency in the popular context have been resignified to refer to women’s voluntary choice of self-objectification, and of willing participation in whatever is prescribed by patriarchal heterosexual norms and capitalist commodity culture (Budgeon, 2001). In this context, companies like MindGeek were able to rely on and redeploy this logic to eviscerate the difference between coercion and choice, and between specific and implied consent. While these women all adamantly argued that they did not consent to the distribution of these videos online, MindGeek/Pornhub are arguably able to rely on a broader cultural logic of choice and responsibilisation. By virtue of “having chosen” to produce an adult DVD, even if under conditions of fraud and deception, the women were further bound to the distribution of the videos online as being a direct result of their “decision”. This language of “choice”, the responsibilisation of girls and women to avoid victimisation, and the framing of exploitation as a personal failure were also explicitly mobilised by the GirlsDoPorn defence case. The defendants argued repeatedly that the women “chose” to come to the hotel room, “chose” to sign the documents and “chose” to go through with the shoot (Jane Doe v. GirlsDoPorn.com, 2019 at 31).
The defence attorney for GirlsDoPorn argued that the women did not appropriately assess the risks before agreeing to appear in porn, stating that the women “did not do their due diligence because they did not obtain an explanation for how the defendants could guarantee that a video (once distributed in any medium) would not be uploaded onto the internet” (at 17). While the dialectic relationship between popular misogyny and postfeminism provides a convenient discourse for inverting responsibility, the economic incentive for MindGeek also cannot be overstated. While the technical qualities of tubesites may seem unintentional, they are curated and crafted by technology companies like MindGeek. These
companies engage in the same pioneering techniques as their more well-known social media counterparts, such as Facebook and Twitter, “making strategic choices about information management and the graphic organisation of content that translates into large profits…” (Keilty, 2018, p. 338). The attention that advertising-driven digital media platforms derive from the spread of highly inflammatory abusive content can create economic disincentives to deal with abuse (Suzor et al., 2018). If Pornhub primarily profits from website traffic and engagement, it is unsurprising then that design features on the website reflect attempts to capitalise on this. In this sense, the oft-cited claim that technology companies and platforms are neutral actors ignores the way that some companies clearly have a vested interest in blurring the distinctions between choice and coercion, and between personal responsibility and corporate responsibility. This requires an account of the technological rationality underpinning the base of such companies, which includes material interests vested in platform capitalism and ideological affiliations with the logics of popular misogyny. The intersection of these logics underpins the business model of online pornography platforms, producing a cultural and technological context in which the responsibility to protect women and girls from exploitation is eclipsed. Similarly, MindGeek’s “hands-off” approach was evident in the lack of policies and procedures to properly investigate claims made against partner companies such as GirlsDoPorn.
This was further apparent when the BBC questioned MindGeek as to why videos with titles along the lines of “teen abused while sleeping” and “drunk teen abuse sleeping” were still allowed on Pornhub. The company responded (Mohan, 2020):

We allow all forms of sexual expression that follow our Terms of Use, and while some people may find these fantasies inappropriate, they do appeal to many people around the world and are protected by various freedom of speech laws.
This privileging of, and reverence for, “freedom of speech” and laissez-faire capitalism has been a point of pride for many of the technology companies operating online platforms (including Facebook and Twitter). Indeed,
this libertarian ethos underpins the ability of these platforms to evade the responsibility to pay for moderation and content regulation (Jeong, 2015). User and content regulation is expensive and runs contrary to the business models underpinning digital media platforms, in which income is generated by encouraging and commodifying content, rather than restricting it (Salter, 2018). The ability of MindGeek to obscure the presence of abuse and exploitation through a language of freedom of sexual expression catering to a free market is illustrative of the intersections of popular misogyny with capitalist values in the context of techno- or platform capitalism.
Conclusion

Despite the accelerated rates of consumption and production and the increasing demand for non-consensual imagery, online pornography is more often framed as an “active choice on the part of a strong-willed woman” (Gabriel, 2017, p. 113). In this way, the discourse of “choice” is often purposefully used by the online pornography market to refute concerns about exploitation and ethical issues of production (Gabriel, 2017). For McVey et al. (2021), these discourses are also heavily invested in by the market because they work to obscure violence and legitimise problematic practices of production, representation and consumption, while flattening important differences. For Dines (2016), this strategic use of discourse goes beyond just language. With lobbying strategies and PR campaigns, the market links itself to wider social aspirations (like sexual emancipation and free speech), while drawing the line between acceptable and unacceptable pornography (for example, child sexual abuse images) in a way that ensures protection of its key markets. This highlights the need to interrogate how responsibility is configured by companies with a vested market interest in the production and distribution of online pornography. This requires a critical consideration of the discourses around responsibility, non-responsibility and responsibilisation that are available in the current cultural moment, and how these are taken up.
The proliferation of abusive and exploitative content on platforms such as Pornhub speaks to the underlying rationales that informed their design and shaped their governance. A critical discursive reading of statements made by GirlsDoPorn, Pornhub and MindGeek collectively suggests that these companies operated within a rationality of popular misogyny. This rationality is inherently reactive, and simultaneously relies on, and ridicules, postfeminist sexual subjectivity in their exploitative production and distribution practices. In a neoliberal, postfeminist political economy that pivots on women’s sexual subjectification as a strategy for self-empowerment, women are encouraged to capitalise on their capabilities and make the most of their female power (frequently flaunted as sexual power). However, as we argue through the case of GirlsDoPorn and Pornhub, popular misogyny appropriates this discourse, turning it against women by blaming them for purportedly leaning into the postfeminist neoliberal logic, grounded in deriving benefit from sexual commodification.

Acknowledgements The author(s) received no financial support for the research, authorship and/or publication of this article.
References

Jane Doe Nos 1–22 v. GirlsDoPorn.com. (2019). Superior Court of California 37-2016-00019027-CU-FR-CTL.

Jane Doe Nos 1–40 v. MG FREESITES, LTD. (2020). United States District Court 20CV2440 W RBB.

Amy-Chinn, D. (2006). This is just for me(n): How the regulation of post-feminist lingerie advertising perpetuates woman as object. Journal of Consumer Culture, 6(2), 155–175. https://doi.org/10.1177/1469540506064742

Anderson, K. J. (2018). Modern misogyny and backlash. In C. B. Travis, J. W. White, A. Rutherford, W. S. Williams, S. L. Cook, & K. F. Wyche (Eds.), APA handbooks in psychology®. APA handbook of the psychology of women: History, theory, and battlegrounds (pp. 27–46). American Psychological Association.
Attwood, F. (2009). Dirty work: Researching women and sexual representation. In R. Flood & R. Gill (Eds.), Secrets and silences in the research process: Feminist reflections (pp. 177–187). Routledge.

Banet-Weiser, S. (2018). Empowered: Popular feminism and popular misogyny. Duke University Press.

Banet-Weiser, S., & Miltner, K. (2016). #MasculinitySoFragile: Culture, structure, and networked misogyny. Feminist Media Studies, 16(1). https://doi.org/10.1080/14680777.2016.1120490

Barak, A. (2005). Sexual harassment on the internet. Social Science Computer Review, 23(1), 177–192.

Boyle, K. (2019). What’s in a name? Theorising the inter-relationships of gender and violence. Feminist Theory, 20(1), 19–36. https://doi.org/10.1177/1464700118754957

Brodkin, J. (2020, October 12). Pornhub blocks uploads and downloads in crackdown on child-sexual-abuse videos. Ars Technica. https://arstechnica.com/tech-policy/2020/12/pornhub-bans-uploads-by-unidentified-users-amid-child-abuse-investigations/

Brown, W. (2005). Neo-liberalism and the end of liberal democracy. Theory & Event, 7(1), 37–59.

Budgeon, S. (2001). Emergent feminist(?) identities: Young women and the practice of micropolitics. European Journal of Women’s Studies, 8(1), 7–28. https://doi.org/10.1177/135050680100800102

Burns, A. (2015). In full view: Involuntary porn and the postfeminist rhetoric of choice. In C. Nally & A. Smith (Eds.), Twenty-first century feminism. Palgrave Macmillan. https://doi.org/10.1057/9781137492852_5

Coy, M., & Garner, M. (2010). Glamour modelling and the marketing of self-sexualization: Critical reflections. International Journal of Cultural Studies, 13(6), 657–675. https://doi.org/10.1177/1367877910376576

Deeley, L. (2008, July 28). I’m single, I’m sexy and I’m only 13. The Times.

Dines, G. (2016, April 9). Is porn immoral? That doesn’t matter: It’s a public health crisis. The Washington Post.
https://www.washingtonpost.com/posteverything/wp/2016/04/08/is-porn-immoral-that-doesnt-matter-its-a-public-health-crisis/
Evans, A., & Riley, S. (2011). Immaculate consumption: Negotiating the sex symbol in postfeminist celebrity culture. Journal of Gender Studies, 22(3), 268–281. https://doi.org/10.1080/09589236.2012.658145
Faludi, S. (1991). Backlash: The undeclared war against American women. Crown Publishing Group.
Forrester, K. (2016). Making sense of modern pornography. The New Yorker.
A. Gore and L. Du Preez
Gabriel, K. (2017). Power of porn cultures: The Transnational Institute Sixth Annual State of Power Report (pp. 1–12). The Transnational Institute. Retrieved 12 March 2020 from https://longreads.tni.org/state-of-power/power-of-porn-cultures/
Gill, R. (2003). From sexual objectification to sexual subjectification: The resexualisation of women's bodies in the media. Feminist Media Studies, 3(1), 100–106.
Gill, R. (2007). Postfeminist media culture: Elements of a sensibility. European Journal of Cultural Studies, 10(2), 147–166. https://doi.org/10.1177/1367549407075898
Gill, R. (2011). Sexism reloaded, or, it's time to get angry again! Feminist Media Studies, 11(1), 61–71. https://doi.org/10.1080/14680777.2011.537029
Gill, R. (2016). Post-postfeminism? New feminist visibilities in postfeminist times. Feminist Media Studies, 16(4), 610–630. https://doi.org/10.1080/14680777.2016.1193293
Harvey, L., & Gill, R. (2011). Spicing it up: Sexual entrepreneurs and The Sex Inspectors. In R. Gill & C. Scharff (Eds.), New femininities. Palgrave Macmillan. https://doi.org/10.1057/9780230294523_4
Henry, N., & Powell, A. (2016). Sexual violence in the digital age: The scope and limits of criminal law. Social and Legal Studies, 25(4), 397–418.
Jeong, S. (2015). The internet of garbage. Forbes.
Keilty, P. (2018). Desire by design: Pornography as technology industry. Porn Studies, 5(3), 338–342. https://doi.org/10.1080/23268743.2018.1483208
Kelly, L. (1988). Surviving sexual violence. Polity Press.
Khalil, J. (2020, July 20). These are the most popular websites in the world—and they might just surprise you. TechRadar Pro. https://www.techradar.com/au/news/porn-sites-attract-more-visitors-than-netflix-and-amazon-youll-never-guess-how-many
Larner, W. (2000). Neo-liberalism: Policy, ideology, governmentality. Studies in Political Economy, 63, 5–25.
Marcuse, H. (1964). One-dimensional man: Studies in the ideology of advanced industrial society. Routledge.
Marcuse, H. (1985).
Some social implications of modern technology. In A. Arato & E. Gebhardt (Eds.), Essential Frankfurt School reader (pp. 138–162). Continuum.
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond 'revenge porn': The continuum of image-based sexual abuse. Feminist Legal Studies, 25(1), 25–46.
McNair, B. (2013). Porno? Chic! How pornography changed the world and made it a better place (1st ed.). Routledge.
McRobbie, A. (2001). Good girls, bad girls? Female success and the new meritocracy. In D. Morley & K. Robins (Eds.), British cultural studies: Geography, nationality and identity (pp. 361–372). Oxford University Press.
McRobbie, A. (2004). Post-feminism and popular culture. Feminist Media Studies, 4(3), 255–264. https://doi.org/10.1080/1468077042000309937
McRobbie, A. (2009). The aftermath of feminism: Gender, culture and social change. Sage.
McVey, L., Gurrieri, L., & Tyler, M. (2021). The structural oppression of women by markets: The continuum of sexual violence and the online pornography market. Journal of Marketing Management, 37(1–2), 40–67. https://doi.org/10.1080/0267257X.2020.1798714
Mohan, M. (2020, February 10). I was raped at 14, and the video ended up on a porn site. BBC. https://www.bbc.com/news/stories-51391981
Núñez Puente, S., & Gámez Fuentes, M. J. (2017). Spanish feminism, popular misogyny and the place of the victim. Feminist Media Studies, 17(5), 902–906. https://doi.org/10.1080/14680777.2017.1350527
Pavan, E. (2017). Internet intermediaries and online gender-based violence. In M. Segrave & L. Vitis (Eds.), Gender, technology and violence (pp. 62–78). Routledge.
Rottenberg, C. (2014). The rise of neoliberal feminism. Cultural Studies, 28(3), 418–437. https://doi.org/10.1080/09502386.2013.857361
Salter, M. (2018). From geek masculinity to Gamergate: The technological rationality of online abuse. Crime Media Culture, 14(2), 247–264. https://doi.org/10.1177/1741659017690893
Srnicek, N. (2016). Platform capitalism. Wiley.
Suzor, N., Dragiewicz, M., Harris, B., Gillett, R., Burgess, J., & Van Geelen, T. (2018). Human rights by design: The responsibilities of social media platforms to address gender-based violence online. Policy and Internet, 11(1), 84–103.
Wajcman, J. (2006).
TechnoCapitalism meets TechnoFeminism: Women and technology in a wireless world. Labour & Industry: A Journal of the Social and Economic Relations of Work, 16(3), 7–20.
33 Image-Based Sexual Abuse, Legal Responses and Online Activism in the Aotearoa New Zealand Context

Fairleigh Evelyn Gilmour
F. E. Gilmour (B)
University of Otago, Dunedin, New Zealand
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_33

Introduction

Image-based sexual abuse has been increasingly acknowledged as an issue in Aotearoa New Zealand. In 2015, in response to a lack of recourse for victims of digital harms such as online harassment, cyberbullying and image-based sexual abuse, the Harmful Digital Communications Act (HDCA) was enacted. The majority of prosecutions under the Act have reportedly resulted from image-based sexual abuse—specifically the non-consensual sharing of intimate images and videos (Netsafe, 2018). Meanwhile, in online feminist spaces there has been increasing commentary about rape culture in Aotearoa New Zealand, as well as about the non-consensual sharing of intimate images and videos. In this chapter, I explore both legislation and activism in Aotearoa New Zealand, as well as tensions
between the discursive framing of image-based sexual abuse in these two settings. The chapter begins by discussing image-based sexual abuse as a specifically gendered harm, before turning to developments in both law and activism in the Aotearoa New Zealand setting, and to how the framing of victimhood in each is sometimes underpinned by gendered assumptions. It argues that despite efforts to address the harms of image-based sexual abuse, its contextual and gendered harms continue to be inadequately conceptualised and addressed.
Image-Based Sexual Abuse as Gendered Harm

Image-based sexual abuse has increased exponentially with the contemporary ubiquity of the smartphone, and there are aspects of the internet that encourage damaging online behaviours. For example, the cloaking benefits of anonymity can play a key role in perpetuating online abuses (Jane, 2014; Lapidot-Lefler & Barak, 2012), allowing people to escape detection and ignore social norms. Meanwhile, the disinhibiting features of the internet (Suler, 2004) can amplify offline sexual aggression (Zhong et al., 2020). Nonetheless, the gendered nature of online harms indicates that they emerge from the same social norms and power imbalances that shape offline abuse. Offline inequalities, including gender inequalities, are replicated and amplified within technospaces. Patterns of online harassment are gendered and can be understood as systemic sexual violence (Eikren & Ingram-Waters, 2016) that works to "marginalize and hinder individual public participation based on gender and sexuality" (Fairbairn, 2015, p. 244). A feminist analysis of image-based sexual assault situates it within a broader culture of controlling and punishing women and limiting their agency (Eikren & Ingram-Waters, 2016). Thus, non-consensual image sharing needs to be interpreted more broadly in terms of the cultural dynamics of violence against women, including, but not limited to, patterns of intimate partner violence and control (Eaton et al., 2020). Online abuses are often perpetrated in the context of offline relationships—technology has offered additional ways to undertake surveillance, and to control, threaten and abuse a partner both
during a relationship and at the time of separation (Dragiewicz et al., 2019). The impacts of image-based sexual abuse are similar to those of offline violence. Women who have experienced image-based sexual abuse may experience harms similar in nature to those who have experienced sexual assault, including trust issues, PTSD, anxiety, depression, loss of control and impacts on self-esteem (Bates, 2017) (although this emphasis on mental health outcomes has also been critiqued [McGlynn et al., 2020], as this chapter will discuss). Images are shared and interpreted within the context of people's gendered lives. In an Australian study, Crofts et al. (2015) found a gender disparity in perceptions of the pressure placed upon young people to share intimate images: young women and girls were significantly more likely to agree that there was pressure to post sexual images (Crofts et al., 2015). Harassment is also gendered in terms of the heterosexist social norms that underpin abusive conduct. The use of naked images relies, for its impact, on the gendered double standard—the notion that girls' and women's bodies are inherently sexual, and thus potentially shameful. As New Zealand feminist commentator Lizzie Marvelly (2016, n.p.) has stated in relation to the #MyBodyMyTerms campaign, which this chapter will discuss in more detail, while "our devices are constantly updated, some of our attitudes are basically Victorian". Qualitative research with young people has suggested that having naked photos seen by family members, peers or potential employers is seen as something that could "ruin your life" if you are a girl (Dobson & Ringrose, 2016). Furthermore, as research into non-consensual image sharing has shown, while it is sometimes undertaken against male victims, the impacts are fundamentally different, because women's sexuality is constructed differently in a patriarchal society (Dobson & Ringrose, 2016).
Girls' bodies and sexualities are understood to risk being marked as shameful, and their images are interpreted through "hierarchical codes of gendered morality" (Ringrose et al., 2013, p. 314). Dobson and Ringrose (2016, p. 18), in their study on responses to non-consensual sharing of images, found that young people are able to identify and critique the sexual double standard, but what "seems more difficult for youth, as for adults, is to imagine the
possibility that girls are legitimately entitled to digitally mediate sexuality or express sexual desire". Gender-blind perspectives are therefore inadequate—risks and impacts are clearly shaped by gender, and image-based sexual abuse is a specific risk from current and former intimate male partners (DeKeseredy & Schwartz, 2016). Australian research has indicated that while women and men experience image-based sexual abuse at similar rates, perpetrators are more likely to be men, while women are more likely to be victimised by a current or former partner, more likely to fear for their safety (Henry et al., 2017), and report greater negative impacts (Powell et al., 2020). Stigma and blame are often directed at victims (Powell et al., 2020), particularly women, with victim-blaming narratives relying on the sexual double standard and assumptions around traditional gender roles (Mckinlay & Lavis, 2020). Australian research by Henry et al. (2017, pp. 7–8) has found that while most Australians think it should be a crime to share sexual or nude images without permission, 70% of participants agreed that "People should know better than to take nude selfies in the first place, even if they never send them to anyone" and 62% agreed that "If a person sends a nude or sexual image to someone else, then they are at least partly responsible if the image ends up online". However, while these behaviours fit into existing feminist theorisation of gendered violence and harassment, the practices and impacts of online harms are nonetheless particular to the digital realm. The dynamics of online space result in the permanence, searchability, duplicability and scalability of content (boyd, 2010) in ways that alter the meanings attributed to communications. The digital reach means that a non-consensually shared image, for example, can be shared an exponentially larger number of times. There is also a high degree of permanence to online information.
McGlynn et al.'s (2020, p. 12) study of the impacts of image-based sexual abuse on victims noted that the constancy and perceived endlessness of this form of abuse was a key facet of the harm experienced: "The material was 'out there', beyond their control: constantly available to be shared online, viewed and re-discovered". Research undertaken by Netsafe (Melhuish & Pacheco, 2020) into the extent of digital harms in Aotearoa New Zealand found that 2% of New
Zealanders had undertaken harmful behaviour in the past 12 months in the form of sharing intimate images or recordings of someone without their permission. In terms of experiences of victimisation, 4% of adult New Zealanders have had someone threaten to share their intimate pictures or videos online, while 3% have actually had intimate content shared. Young women in particular are at higher risk of threats of non-consensual image sharing (Netsafe et al., 2019). Victim-blaming was also expressed by participants—70% of adult New Zealanders think that people in a relationship should be "aware of the risks associated with sharing intimate pictures with a partner" (Netsafe et al., 2019). While both men and women experience image-based sexual abuse, the context is different: women's images were more likely actually to be shared, usually by an ex-partner seeking revenge (Netsafe et al., 2019).
The Introduction of the Harmful Digital Communications Act (2015)

Not only does online harm have serious consequences, but there are also significant limitations to legal redress for online harassment and abuse. Legislation to combat image-based sexual abuse has thus emerged in many jurisdictions, including Australia, the UK and, in 2015, New Zealand. The Law Commission (2012), in the Ministerial Briefing Paper that informed the development of the HDCA, observed that there were serious harms resulting from electronic communication, which were often going unaddressed. Existing laws were too limited—there was no accessible route to civil litigation, and claims under breach of confidence or privacy proved time-consuming and expensive (Smith, 2015). The purpose of the Act is to deter, prevent and mitigate harm caused to individuals by digital communications, and to provide victims of harmful digital communications with a quick and efficient means of redress. The HDCA creates a hybrid scheme, which includes both civil enforcement and criminal offences, and can be separated into four broad
components (Post, 2017): a new civil complaints regime, a new criminal offence provision, the 'safe haven' regime, and amendments to a number of other statutes. The new criminal offence provision, which is the primary focus of this chapter, is set out in Section 22(1) and provides that a person commits an offence if:

i. the person posts a digital communication with the intention that it cause harm to a victim; and
ii. posting the communication would cause harm to an ordinary reasonable person in the position of the victim; and
iii. posting the communication causes harm to the victim.

There have been numerous critiques of the Act, including around its breadth (Panzic, 2015), the safe harbour provisions and the need for express takedown orders (Smith, 2015). In terms of the criminal provisions, the key issue I wish to focus on in this chapter is that of subjective harm, specifically the requirement that the victim suffer "serious emotional distress". Some commentators have argued that the subjective harm requirement is inappropriate (Smith, 2015; Upperton, 2015): first, because victims are caused extreme distress by having to provide this evidence in court and are often reluctant to do so; and secondly, because "revenge porn is an exception that is so inherently offensive and distinctly different from the use of words, that the subjective harm requirement is not necessary" (Smith, 2015, p. 27). From this perspective, the focus should be on the elements of the communication itself, if other perpetrators are to be deterred (Upperton, 2015).
Martin Crocker, from Netsafe—the approved agency under this legislation—has been quoted as saying that victims must show distress such as:

You couldn't sleep for a period of time, you had to go and see a counsellor, you constantly have this on your mind and you can't get out of it, you can't work properly, you can't eat properly, perhaps you have experienced backlash from friends or acquaintances. It's early days but the courts are saying that you can't just turn up to court and say 'hey, I was really upset'. (quoted in Whyte, 2018)
In one case, a man was charged under the HDCA as well as with breaching a protection order. After the breakdown of his relationship with his wife, he had tracked her on Google Maps, followed her and threatened to post pictures of her online. He then posted intimate photographs of her on Facebook, which a friend alerted her to. While the Judge accepted these facts, the respondent was not found to have met the criteria because "While the evidence clearly points to some degree of emotional distress, it is not sufficient to satisfy me it has reached the threshold of serious emotional distress" ("The Queen v Partha Iyer," 2016, p. 73). While an appeal was allowed and the discharge was quashed ("Police v B," 2017), this raises an important question: why is serious emotional harm required at all? As the research shows (Dragiewicz et al., 2019) and the facts of this case indicate, non-consensual image sharing can be deployed as part of a broader pattern of coercive control and harassment in the context of intimate partner violence, so proving that serious emotional harm resulted from the specific incident seems an unreasonable requirement. Presumably, a person attempting to separate in a context of abuse and control is already going to be experiencing some level of fear and distress. Yet the legislation seems to require that a specific instance of image-based sexual abuse, when used as part of a pattern of violent and controlling behaviour, provoked a measurable increase in that fear and distress. In the contemporary context, where patterns of coercive control are increasingly being acknowledged by legislative frameworks,1 this incident-based approach is inadequate to address the context in which much image-based sexual abuse occurs. To some extent, this is understandable—the HDCA is not specific to image-based sexual abuse; rather, it covers a range of potentially harmful communications.
And indeed, while the majority of claims relate to what is colloquially referred to as "revenge porn", the Act is broad enough to include a wide range of harmful communications and has avoided the tendency for legislation to be "piecemeal" and to focus "only, or mainly, on the paradigmatic case of the vengeful ex-partner" (McGlynn et al., 2017). However, the breadth of the Act means

1 The New Zealand Family Violence Act 2018, for example, has as one of its guiding principles, at 4(b), that "decision makers should, wherever appropriate, recognise that family violence often is or includes coercive or controlling behaviour".
that it involves a particularly careful balancing of free speech and protection from harm, and thus struggles to contextualise image-based sexual abuse as part of a pattern of behaviour. A further issue is the focus on psychological distress. There are key problems with legislation that requires intent to cause harm or distress, including that harm is difficult to quantify and that such requirements place victim experiences on a hierarchy (Powell et al., 2020). As McGlynn et al. (2020) note, there has been a tendency to emphasise a medicalised trauma framework in exploring responses to image-based sexual abuse—as they observe, deploying a trauma lens "can help to legitimise victim-survivors' experiences as 'real'" and thus validate their harm and suffering (McGlynn et al., 2020, p. 4). Indeed, the articulation of the HDCA seems almost to require a trauma response—and the ability to document such a response—from victims. Yet this medical trauma model faces two key problems: it does not adequately describe the range of victim experiences (as the above case indicates), and such a framework can "end up inadvertently colluding with the gendered status quo to decontextualise and depoliticise the harms of sexual violence" (McGlynn et al., 2020, p. 5). This requirement for evidenced distress needs to be understood within a broader pattern of pathologising women's reasonable reactions to male violence in a variety of legal and discursive contexts (Allen, 1987; Orenstein, 2009; Stubbs, 1991). Moreover, the emphasis on psychological damage fails to account for the fact that conventional empirical psychology shows that some women are not enduringly traumatised by abuse (Gavey, 2005). The limits of a medicalised trauma model in making sense of image-based sexual abuse are rendered even more apparent when contrasted with the discursive construction of intimate image sharing, and non-consensual sharing, in the feminist hashtag campaign #MyBodyMyTerms, which the next section explores.
Feminist Activism: #MyBodyMyTerms

The #MyBodyMyTerms campaign, which launched on the feminist New Zealand blog Villainesse, involved videos of prominent Aotearoa New Zealand personalities discussing sexual autonomy, ethics and consent.
The Roastbusters incident was a "driving force in the creation of #MyBodyMyTerms", and the campaign aims to respond broadly to the rape culture in New Zealand that allowed Roastbusters to occur. In the Roastbusters case, a group of young men in Auckland posted on Facebook about allegedly assaulting intoxicated and underage girls. The young victims' names were published online, causing them additional distress (Casey, 2019). This case demonstrates how real-world violence can expand into cyber-violence with far-reaching and serious consequences. While the main video in the #MyBodyMyTerms campaign discussed a range of issues—including emphasising consent in sexual interactions and directly challenging common rape myths—it also specifically focused on image-based harms (#MYBODYMYTERMS Campaign, 2017a):

Taking, or posing for naked photos doesn't make me a 'whore'. Having a naked picture or video of myself with a partner does not give anyone else the right to view it. When you share private pictures of someone without their consent, you violate them. When you view private videos of me without my consent, you violate me. I'm a human being. Not some stranger on a screen.
The campaign involved prominent New Zealanders posing undressed with #MyBodyMyTerms written on their bodies; it also encouraged people not only to share the video, but also to take selfies with #MyBodyMyTerms written on their own bodies and to share what the campaign means to them. In this way, the campaign sought to maximise the potential of hashtag activism. Hashtag feminism, the most prominent example being #MeToo, is:

the use of hashtags to address feminist-identified issues primarily through Twitter by sharing personal experiences of inequality, constructing counter-discourses, and critiquing cultural figures and institutions. (Linabary et al., 2020, p. 1828)
Considering the key role that community education can play in responding to image-based sexual abuse (Powell et al., 2020), this kind of popular campaign is important. Furthermore, the engagement by
young women with the specific practices of the cyber realm in order to combat gendered abuses subverts the popular discursive portrayal of online spaces as primarily risky for girls and young women—they are also spaces successfully mobilised for feminist activism (Vitis & Gilmour, 2017). Online activism can expose gendered violence, allowing women to testify to it (Megarry, 2014) and affording a public and communal record of patterns of gendered violence, which can engender "critical witnessing" (Girling, 2004) by exploring issues often hidden from view.2 The originator of the campaign, Lizzie Marvelly (2015), condemned the victim-blaming that often emerges in response to image-based sexual abuse:

A common response to revenge porn is something along the lines of the incredulous, "why would someone send naked pictures of themselves anyway?" a question heavy with overtones of judgment, victim-blaming and slut-shaming (…) As with all victim-blaming, the focus is shifted away from the perpetrators and their abusive actions to become a judgment of the moral characters of the victims.
Marvelly also specifically articulated image-based sexual abuse in terms of the sexual double standard:

While revenge porn also has male victims, the majority are female, and attitudes around female sexuality make revenge porn damaging for women [because we live] in a culture that condones victim-blaming, where the ideas of purity and promiscuity form a 'good' or 'bad' binary, where the representation of a woman as a sexual being is used as a weapon to shame, threaten and degrade her.
Other participants also shared their personal experiences with image-based sexual abuse and emphasised the problematic constructions of

2 In this section, I only analyse the discursive framing of image-based sexual abuse in the official #MyBodyMyTerms videos and posts—made by prominent Aotearoa New Zealand personalities; and the blog posts on the feminist blog Villainesse, where the campaign originated. However, the campaign prompted further discussion on Twitter, Facebook and beyond, and the videos were viewed hundreds of thousands of times in total—the national reach and discussion was therefore much broader.
female sexuality which underlie image-based sexual abuse. Another blog author on Villainesse stated: "Every time some woman has them [naked images] leaked (and it's almost always a woman) society acts as though she's been caught satanically sacrificing bunnies" (Johnson, 2018). These narratives not only challenge victim-blaming rhetoric but also strategically resist hegemonic frames of representing gender-based violence, by emphasising the underlying issue in terms of how feminine bodies and sexualities are shamed. By representing a collective response to violence, and inviting others to share their own images and testimony, this movement is part of a broader trend of feminist hashtag activism intervening in dominant narratives (Linabary et al., 2020; Puente et al., 2021). Having famous New Zealanders—in a range of body shapes, genders and ethnicities—pose naked for the campaign went further than articulating a lack of shame with the naked body. It deployed bodies in a way that subverted the discursive construction of feminine bodies as always potentially at risk of being leaked, of being marked as shameful. In this way, it constitutes a powerful example of women's use of their bodies as a form of activism against the cultural vilification of women's sexuality (cf. Ringrose & Renold, 2012; Vitis & Gilmour, 2017). As the Law Commission in New Zealand itself argued, legal change is inadequate in and of itself; online harms are "symptomatic of wider social problems which require address, as far as that is possible, by extra-legal techniques" (2012, p. 133). Feminist activism like #MyBodyMyTerms is a powerful tool in shaping discussion. The movement engaged a feminist lens that articulated not only a critique of victim-blaming, but also an emphasis on women's right to express themselves as sexual beings.
Villainesse—a feminist blog—has often articulated responses to image-based sexual abuse in terms of an emphasis on the need for criminalisation, prosecution and jail terms, running headlines such as "When will authorities start treating revenge porn like the horrendous sexual crime it is?" (Marvelly, 2016) and asking "Are we finally seeing the justice system support victims of revenge porn? A Hamilton man is jailed after filming a female friend having sex and uploading it to Facebook" (Raj, 2016). While it has its limits (cf. Baer, 2016), hashtag activism has the potential to be transformative—both in the sense that it fosters social change by addressing
social norms; and in that it offers an alternative to perceiving the carceral state as the only valid forum for addressing harms (Rentschler, 2017). Tensions within the campaign highlighted a broader tension in feminist theory around gendered violence (particularly in terms of IPV; cf., for example, Stubbs, 1991), wherein psychological harm is often a requirement for legitimate victim status and redress, yet it can sometimes reify an interpretation of women as emotional, pathologised, passive 'victims'. Participants emphasised the permanence of online images—"for many it's too late; the damage is done (…) her topless photo may resurface online at any time for the rest of her life" (De Young, 2015a). Commentators note that image-based sexual abuse can be life-destroying—discussing suicide attempts and identity changes by victims—yet also note that "this situation [the shaming of victims of image-sharing] is nonsensical when juxtaposed against the proliferation of nudity in mainstream media" (De Young, 2015b). However, in the case of non-consensual image sharing, while trust and betrayal are clearly at issue, the gendered difference in responses to image sharing clearly demonstrates that the social context is fundamental to the experience of victims—the meanings inscribed onto the naked and sexualised feminine body shape the social, and thus personal, response to image sharing. As one article in Villainesse observed:

The 'revenge' part of these situations relies on the power of these photos being seen as shameful. Now, if we all saw nudes as simply a normal part of sexual expression, it takes away the moral stigma, leaving these blackmailers with no weapons to use against these women. (Johnson, 2018)
The feminist blogging in this context thus asks us to acknowledge the more nuanced interpretation of harm that McGlynn et al. (2020) argue for: one that understands that the harms of image-based sexual abuse are contextual, and thus contingent on their social location and the dominant sexual scripts within which those images are produced and interpreted. While there are increasing discussions of ethics and personal responsibility around the circulation of images of women's bodies, what is often missing is a critical account of how and why "a discourse of 'ruin' still mediates the treatment of female sexuality that is open and visible"
(Chun & Friedland, 2015, p. 9). The #MyBodyMyTerms campaign uses a feminist lens to render these discussions central to the discourse of non-consensual image sharing. The campaign also sought to straddle the line between condemning victim-blaming and encouraging young women to take precautions. One representative stated, "I should be able to do what I want, dress how I want and not have to worry about anyone else", before also recommending caution around sharing images: "you need to have a very good level of trust with someone before you do something like that with them" (Schedewy, 2017). Another participant observes that while "Just because you took [a picture] doesn't mean that you're a bad person", they stress that "You have to remember that stuff is out there for life, and so be proud of who you are, be proud of what you do and take ownership of what you present to the world" (#MYBODYMYTERMS Campaign, 2017b). In the neoliberal era, the concept of "personal responsibility" and the need to guard against victimisation have framed discussions of gendered violence (Stringer, 2014, p. 2), as is illustrated here, where advocating for sexual freedom and bodily autonomy is tempered by the need for responsible behaviour. As Powell (2015, p. 217) argues, the "politics of (sexual) choice" typifies neoliberal and postfeminist narratives, ignoring underlying gender inequalities while casting "the individual as solely responsible for their life choices". Elements of gendered respectability politics also emerged in the comments of Ritchie Hardcore, who was one of the faces of the campaign.3 In response to a naked selfie posted by Kim Kardashian, Hardcore agreed with another commentator on Instagram who said that she was flaunting terrible values: "Good words brother, Tautoko.4 We need to teach healthy ways of validation", he said, before referring to Levy's "raunch culture" (2005). As McAllen (2016) observed:
3. The co-creators of #MyBodyMyTerms made a statement that Hardcore's comments were "unrelated" to the campaign: Marvelly, L., & Raj, J. (2016). To our wonderful #MyBodyMyTerms supporters. Villainesse.
4. Tautoko means to support or agree with what someone has said. It is a Te Reo Māori word commonly used by New Zealanders.
686
F. E. Gilmour
…you’d think a man who posted a photo of his half-naked body emblazoned with the words ‘my body, my terms’ would be okay with a woman posting a picture of her own body on her terms.
As Karaian (2014, p. 284) observes, responses to young women sharing online images may reveal anxieties around changing social norms, while also constituting young women and teenage girls' "unintelligibility as sexual subjects (while simultaneously fetishising them as sexual objects)". Hardcore's concern around the impact of Kardashian's post on young women reflects these anxieties. While young women do experience pressure to share naked images, articulating the complex subject position that young women navigate as purely a quest for "validation" implies that they are incapable of autonomous sexual exploration. This discursive framing reinscribes a rendering of young women as unable to challenge the normative sexual order. Women and girls do have to negotiate their digital engagement in an increasingly sexualised culture, yet "their experimentations should not be viewed as sexually subjectifying them in only a negative sense" (Ringrose, 2011, p. 102).
Conclusion
While both the legislative shifts and the feminist activism described in contemporary Aotearoa New Zealand indicate a deliberate move away from traditional victim-blaming narratives, anti-victim elements remain apparent. In the context of the HDCA, psychological distress is not just a possible outcome, but one that is required in order for women to be granted legitimate victim status. In this way, this form of gendered violence is de-politicised: it becomes an issue of individual mental health. This form of violence is highly gendered in that women experience negative consequences precisely because women's bodies and sexuality can be shamed. If we accept a framework of medicalised trauma, it becomes harder to articulate alternatives, including a social reaction to nudes that is less fraught and thus causes less harm (the harms here being fundamentally gendered and contextual). Meanwhile, the feminist activism in #MyBodyMyTerms specifically articulates that
image-based sexual abuse needs to be understood in terms of social harms, which are underpinned by the patriarchal mythos that makes women's bodies potential targets for shame. Nonetheless, in trying to walk the tightrope between empowerment and protecting women from gendered violence, this activism also articulates a postfeminist rhetoric of personal responsibility and choice that undercuts its broader critique of gender norms.
Acknowledgements The author received no financial support for the research, authorship and/or publication of this chapter.
References
#MyBodyMyTerms Campaign. (2016, June 27). Check out the TEDx Talk on #MyBodyMyTerms. Villainesse.
#MyBodyMyTerms Campaign. (2017a, January 26). #MyBodyMyTerms. Villainesse.
#MyBodyMyTerms Campaign. (2017b, January 26). Watch Michelle Dickinson's #MyBodyMyTerms video diary. Villainesse.
Allen, H. (1987). Rendering them harmless: The professional portrayal of women charged with serious crimes. In P. Carlen & A. Worrall (Eds.), Gender, crime and justice. Open University Press.
Baer, H. (2016). Redoing feminism: Digital activism, body politics, and neoliberalism. Feminist Media Studies, 16(1), 17–34.
Bates, S. (2017). Revenge porn and mental health: A qualitative analysis of the mental health effects of revenge porn on female survivors. Feminist Criminology, 12(1), 22–42.
boyd, d. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In Z. Papacharissi (Ed.), A networked self: Identity, community, and culture on social network sites (pp. 39–58). Routledge.
Casey, A. (2019). 'I'm still living it': A Roast Busters survivor's story. The Spinoff.
Chun, W. H. K., & Friedland, S. (2015). Habits of leaking: Of sluts and network cards. Differences: A Journal of Feminist Cultural Studies, 26(2), 1–28.
Crofts, T., Lee, M., McGovern, A., & Milivojevic, S. (2015). Sexting and young people. Palgrave Macmillan.
De Young, M. (2015a, September 19). Growing up in the time of revenge porn: Part One. Villainesse.
De Young, M. (2015b, September 20). Growing up in the time of revenge porn: Part Two. Villainesse.
DeKeseredy, W. S., & Schwartz, M. D. (2016). Thinking sociologically about image-based sexual abuse: The contribution of male peer support theory. Sexualization, Media, & Society, 2(4), 1–8.
Dobson, A. S., & Ringrose, J. (2016). Sext education: Pedagogies of sex, gender and shame in the schoolyards of tagged and exposed. Sex Education, 16(1), 8–21.
Dragiewicz, M., Woodlock, D., Harris, B., & Reid, C. (2019). Technology-facilitated coercive control. In W. S. DeKeseredy, C. M. Rennison, & A. K. Hall-Sanchez (Eds.), The Routledge international handbook of violence studies. Routledge.
Eaton, A. A., Noori, S., Bonomi, A., Stephens, D. P., & Gillum, T. L. (2020). Nonconsensual porn as a form of intimate partner violence: Using the power and control wheel to understand nonconsensual porn perpetration in intimate relationships. Trauma, Violence & Abuse. https://doi.org/10.1177/1524838020906533
Eikren, E., & Ingram-Waters, M. (2016). Dismantling 'You get what you deserve': Towards a feminist sociology of revenge porn. Ada: A Journal of Gender, New Media & Technology, 10.
Fairbairn, J. (2015). Rape threats and revenge porn: Defining sexual violence in the digital age. In J. Bailey & V. Steeves (Eds.), eGirls, eCitizens.
Gavey, N. (2005). Just sex? The cultural scaffolding of rape. Routledge.
Girling, E. (2004). 'Looking death in the face': The Benetton death penalty campaign. Punishment & Society, 6(3), 271–287.
Harmful Digital Communications Act 2015. (2015).
Henry, N., Powell, A., & Flynn, A. (2017). Not just 'revenge pornography': Australians' experiences of image-based abuse: A summary report.
Jane, E. (2014). "Back to the kitchen, cunt": Speaking the unspeakable about online misogyny. Continuum: Journal of Media & Cultural Studies, 28(4), 558–570.
Johnson, V. (2018, April 18). What's our problem with naked selfies? Villainesse.
Karaian, L. (2014). Policing 'sexting': Responsibilization, respectability and sexual subjectivity in child protection/crime prevention responses to teenagers' digital sexual expression. Theoretical Criminology, 18(3), 282–299.
Lapidot-Lefler, N., & Barak, A. (2012). Effects of anonymity, invisibility, and lack of eye-contact on toxic online disinhibition. Computers in Human Behavior, 28(2), 434–443.
Law Commission. (2012). Ministerial briefing paper. Harmful digital communications: The adequacy of the current sanctions and remedies.
Levy, A. (2005). Female chauvinist pigs: Women and the rise of raunch culture. Free Press.
Linabary, J. R., Corple, D. J., & Cooky, C. (2020). Feminist activism in digital space: Postfeminist contradictions in #WhyIStayed. New Media & Society, 22(10), 1827–1848.
Marvelly, L. (2015, June 11). We need to talk about revenge porn. Villainesse.
Marvelly, L. (2016, May 10). 5 Questions: Revenge porn woes in the UK, Round of applause for Steve Dunne, The Bachelor is finally over + more. Villainesse.
Marvelly, L., & Raj, J. (2016). To our wonderful #MyBodyMyTerms supporters. Villainesse.
McAllen, J. (2016, March 9). Here are all the terrible things New Zealanders did on International Women's Day. The Spinoff. https://thespinoff.co.nz/media/09-03-2016/here-are-all-the-terrible-things-new-zealanders-did-on-international-womens-day/
McGlynn, C., Johnson, K., Rackley, E., Henry, N., Gavey, N., Powell, A., & Flynn, A. (2020). 'It's torture for the soul': The harms of image-based sexual abuse. Social & Legal Studies. https://doi.org/10.1177/0964663920947791
McGlynn, C., Rackley, E., & Houghton, R. (2017). Beyond 'revenge porn': The continuum of image-based sexual abuse. Feminist Legal Studies, 25, 25–46.
Mckinlay, T., & Lavis, T. (2020). Why did she send it in the first place? Victim blame in the context of 'revenge porn'. Psychiatry, Psychology and Law, 27, 386–396.
Megarry, J. (2014). Online incivility or sexual harassment? Conceptualising women's experiences in the digital age. Women's Studies International Forum, 47, 46–55.
Melhuish, N., & Pacheco, E. (2020). Factsheet: Who is sending and sharing potentially harmful digital communications?
Netsafe. (2018). Image based abuse. https://www.netsafe.org.nz/image-based-abuse/
Netsafe, Pacheco, E., Melhuish, N., & Fiske, J. (2019). Image-based sexual abuse: A snapshot of New Zealand adults' experiences.
Orenstein, A. (2009). Evidence and feminism. In B. W. Taylor, S. Rush, & R. J. Munro (Eds.), Feminist jurisprudence, women and the law. Fred B. Rothman Publications.
Panzic, S. F. (2015). Legislating for e-manners: Deficiencies and unintended consequences of the Harmful Digital Communications Act. Auckland University Law Review, 21, 225–247.
Police v B. (NZHC 2017).
Post, S. (2017). Harmful Digital Communications Act 2015. New Zealand Women's Law Journal, 1, 208–214.
Powell, A. (2015). Young women, activism and the 'politics of (sexual) choice': Are Australian youth cultures post-feminist? In S. Baker, B. Robards, & B. Buttigieg (Eds.), Youth cultures and subcultures: Australian perspectives (pp. 215–226). Ashgate Publishing Ltd.
Powell, A., Scott, A. J., Flynn, A., & Henry, N. (2020). Image-based sexual abuse: An international study of victims and perpetrators. Summary report.
Puente, S. N., Maceiras, S. D. A., & Romero, D. F. (2021). Twitter activism and ethical witnessing: Possibilities and challenges of feminist politics against gender-based violence. Social Science Computer Review, 39(2), 295–311.
The Queen v Partha Iyer. (District Court at Manukau 2016).
Raj, J. (2016, May 25). 5 Questions: Revenge porn assailant jailed, Stranger steps in to help, Facebook fat-shames + More. Villainesse.
Rentschler, C. A. (2017). Bystander intervention, feminist hashtag activism, and the anti-carceral politics of care. Feminist Media Studies, 17(4), 565–584.
Ringrose, J. (2011). Are you sexy, flirty, or a slut? Exploring 'sexualization' and how teen girls perform/negotiate digital sexual identity on social networking sites. In R. Gill & C. Scharff (Eds.), New femininities (pp. 99–116). Palgrave Macmillan.
Ringrose, J., Harvey, L., Gill, R., & Livingstone, S. (2013). Teen girls, sexual double standards and 'sexting': Gendered value in digital image exchange. Feminist Theory, 14(3), 305–323.
Ringrose, J., & Renold, E. (2012). Slut-shaming, girl power and "sexualisation": Thinking through the politics of international SlutWalks with teen girls. Gender and Education, 24(3), 333–343.
Schedewy, R. (2017, January 26). Watch Rachael Schedewy's #MyBodyMyTerms video diary. Villainesse.
Smith, C. (2015). Revenge porn or consent and privacy: An analysis of the Harmful Digital Communications Act 2015. Victoria University of Wellington.
Stringer, R. (2014). Knowing victims: Feminism, agency and victim politics in neoliberal times. Routledge.
Stubbs, J. (1991). Battered women's syndrome: An advance for women or further evidence of the legal system's inability to comprehend women's experience. Current Issues in Criminal Justice, 3, 267–270.
Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7(3), 321–326.
Upperton, T. J. (2015). Criminalising "revenge porn": Did the Harmful Digital Communications Act get it right? Victoria University of Wellington.
Vitis, L., & Gilmour, F. (2017). Dick pics on blast: A woman's resistance to online sexual harassment using humour, art and Instagram. Crime, Media, Culture, 13(3), 335–355.
Whyte, A. (2018, February 7). 'I've never hated myself more in my life'—Revenge porn law, does it really protect the victim? 1 News.
Zhong, L. R., Kebbell, M. R., & Webster, J. L. (2020). An exploratory study of technology-facilitated sexual violence in online romantic interactions: Can the Internet's toxic disinhibition exacerbate sexual aggression? Computers in Human Behavior. https://doi.org/10.1016/j.chb.2020.106314
34 Public Responses to Online Resistance: Bringing Power into Confrontation Laura Vitis and Laura Naegler
Introduction
The #MeToo movement, starting in 2017 when women and girls began to use the hashtag to share experiences of sexual harassment, ignited a global discussion on sexual violence against women (Clark-Parsons, 2019) and on the political potential of social media as a site of resistance. While feminist scholars have long paid attention to how victim-survivors use online spaces as sites of justice-seeking, the movement's centralisation of anti-violence testimony was recognised as a historical moment of consciousness raising (Fileborn & Loney-Howes, 2019).
L. Vitis
2 George Street, Brisbane, QLD, Australia
e-mail: [email protected]
L. Naegler
University of Liverpool, Foundation Building, Brownlow Hill, Liverpool, UK
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 A. Powell et al. (eds.), The Palgrave Handbook of Gendered Violence and Technology, https://doi.org/10.1007/978-3-030-83734-1_34
The movement here capitalised on the capacity of testimony and "narrative politics" (Serisier, 2018) to shift the public's frame of reference for interpreting sexual violence (Clark, 2016). Moreover, researchers have shown that women are not "passive victims" of online public spaces but negotiate these spaces and their affordances to critique and disrupt harassment, violence and violence supportive attitudes (Clark, 2016; Jane, 2017b; Powell, 2015; Vitis & Gilmour, 2017) and seek informal justice (Powell, 2015). The examination of modalities of online resistance is crucial for feminist interrogations of technology and violence. However, in order to fully grasp the complexities of online resistance, it is vital to consider public responses to these resistant forms. Resistance is commonly defined as an action or practice enacted in opposition to a certain order, situation, condition or behaviour (Hollander & Einwohner, 2014; Naegler, 2018). It requires the existence of an opposite force towards which it is targeted, and which recognises the oppositional act as such. Resistant acts are never solely defined by the meaning-making processes of those enacting them: they enter into an interactional relationship with external definitions and reactions of observers and targets (Hollander & Einwohner, 2014). As this interactive process of meaning-making is shaped by surrounding power relations and cultural, structural and political frameworks, the moment of confrontation between forces of resistance and reactions to it further reveals the dynamics and organisation of power relationships. The moment of confrontation makes visible those power structures, including gender hierarchies within patriarchal societies, which are concealed in everyday life by various cultural mediations (Naegler, 2018).
In these moments of confrontation, if the legitimacy of these structures is threatened and/or refused, the threat of violence underlying unequal power relations (Graeber, 2009) materialises in the response. Drawing from a case study involving online naming and shaming of image-based abuse and institutional failure in Singapore, this chapter examines how moments of online resistant confrontation—which visibilise gendered power structures—are managed and embraced by the public in online settings. In doing so, we highlight the importance of questioning how and whether popular forms of online resistance shape
the public’s frame of reference for violence against women. This questioning, as we argue, necessitates a consideration of the way in which resistance forms interact with extant discourses and power relations.
Online Resistance to Violence and Harassment
Wider research into the relationship between gender, technology and violence has highlighted how technologies are used to facilitate violence against women and entrench violence supportive attitudes (Dragiewicz et al., 2018; Harris & Vitis, 2020; Harris & Woodlock, 2019; Jane, 2012, 2015). However, focusing solely on the perpetration of violence against women via technology has the potential to reproduce online spaces as simply "unsafe" and women as passive victims unable to use these technologies and/or spaces to their advantage (Vitis & Gilmour, 2017). In response, researchers have highlighted how women use technology to resist gendered violence, whether this violence is facilitated by technology or not. This scholarship has documented several important practices which reflect shifts in both resistance forms and justice-seeking and the changing technological landscape within which they are located. Before outlining some examples, it is important to note that strictly categorising such practices is difficult, as many share common tools and techniques but have differing motivations and forms. For example, Jane (2017b, p. 51) recognises that women subjected to gendered cyberhate respond in a variety of ways. Some reduce or constrain their internet use in order to control perpetrator access. Others confront and punish cyberhate by speaking directly with perpetrators, engaging in online advocacy, participating in performance-based activism (that is, the remediation and ridicule of hate speech and misogyny) and engaging in digital vigilantism, or "digilantism" (Jane, 2017b). The latter involves using online spaces to hold perpetrators to account outside the confines of the formal justice system. These practices are clearly varied: some align more with advocacy over confrontation, some target individual perpetrators and others address the public and supporters. Despite the
difficulties of categorisation, in the following section we detail two examples that reflect the relationship between resistance, justice, technology and violence against women: naming and shaming, and hashtag activism. Naming and shaming practices could easily be situated within Jane's (2017b, p. 46) conceptualisation of "digilantism", or the "politically motivated extrajudicial practices in online domains that are intended to punish or bring others to account…" Naming and shaming involves exposing the names and deeds of perpetrators of rape, sexual assault, harassment and misogyny in online settings (Fileborn, 2017; Salter, 2013; Vitis et al., under review). Notable examples of naming and shaming in the context of anti-violence activism include women and girls posting the names of rape and sexual assault perpetrators (Salter, 2013; Vitis et al., under review) and recordings of harassment, misogyny and entitlement within public or private social media spaces (Jane, 2017a). They also include screenshotting or remediating examples of cyberhate, disrespect and entitlement. For example, Vitis and Gilmour's (2017) work on naming and shaming as exo-judicial punishment explored a case study of an artist who created an Instagram account dedicated to illustrating men's misogynistic and harassing comments on dating websites. This type of practice has become popularised and creates living documents of harassment, misogyny and disrespect that the public can readily access, engage with and critique. While this example focuses on publicising behaviours as opposed to personal information, most practices of naming and shaming involve making behaviours and/or individuals visible for ridicule, derision and public censure. Other practices of resistance are more advocacy based and focus on collating shared experiences (Jane, 2017b).
For example, hashtag activism involves community members sharing their personal testimony of violence and harassment via popular hashtags to politicise their experiences and intervene in widespread myths and misconceptions of violence against women (Clark, 2016). This has been demonstrated through popular hashtags like #MeToo, #whyistayed and #mencallmethings (Megarry, 2014). The function of these hashtags is varied; however, it has been argued that collating and visibilising shared experiences can produce “enduring frames of reference for interpreting and
responding to current and future social phenomena" outside of online spaces (Clark, 2016, p. 14). Moreover, others have suggested these hashtags function as a contemporary form of consciousness raising by allowing space for women to make personal experiences public in a way that builds solidarity and represents a "political challenge to male social dominance" (Megarry, 2014, p. 51). For example, Mendes et al. (2018, p. 244) note that hashtags are "extremely positive in generating community, connection and support for feminist views, and solidarity in calling out rape culture". Additionally, because hashtags provide a site for people to collectively circulate their experiences outside geographical and temporal boundaries, they have provided insight into survivor experiences that would have been previously hidden (Megarry, 2014). Practices like online naming and shaming and hashtag activism, which seek justice as a primary or key motivator, must be situated in a context where formal institutions have failed to deliver adequate, transformative or meaningful responses to violence against women (Jane, 2016; Salter, 2013). For example, naming and shaming practices have been described as "extrajudicial" (Jane, 2016, p. 464), "exojudicial" and "informal" because they are situated in contexts with limited institutional responses to harassment and violence (Fileborn, 2017; Powell, 2015; Vitis & Gilmour, 2017). As such, they reflect a desire to pursue accountability and punishment outside the confines and limitations of the formal criminal justice system and regulatory apparatuses. Additionally, hashtag activism can seek to foster change in meaningful ways that de-centre the criminal legal domain as the site of understanding and addressing violence against women. This includes translating personal testimony into collective practice and encouraging and facilitating opportunities to build solidarity with survivors to shape the public understanding of violence against women.
Indeed, Alcoff and Gray (1993, p. 261) suggest that “speaking out” in the public sphere can educate the public on the nature of men’s violence against women and ensure that violence against women is recognised as a social problem rather than an individual issue. This form of recognition challenges the incident-based understandings embedded within the legal system.
Context and Anti-Violence Speech Acts
This ongoing work exploring the complexities and nature of tech-facilitated resistance can be advanced by considering the cultural, structural and political contexts in which these resistance forms are situated. Doing so enables further understanding of the forces which facilitate, constrain and shape the public's access and response to women's resistance. The role of context in shaping public access to anti-violence resistance, particularly resistance that comes in the form of public testimony, is well established. Here, Serisier's (2018) seminal survey of the transition from the "grudging acknowledgment" (p. 24) of rape in the 1970s to the public recognition and circulation of survivor testimony is useful. She argues that there has been a substantial cultural shift in the English-speaking West which has "enabled and preceded ordinary women being able to publicly tell their stories of rape and claim cultural authority on the basis of their experience" (Serisier, 2018, p. 24). Her work demonstrates that the circulation of women's testimonies of violence is primed by feminist activism, which began in the early 1970s, and the subsequent popularisation of rape narratives in popular culture and discourse. In the context of technology-facilitated resistance forms (such as online naming and shaming and hashtag activism), material, technological and social dimensions also shape how testimony can be heard and circulated. For example, in his work on #MeToo and political economy, Salter (2019, p. 331) argues that the attention economy undergirding online platforms has the potential to discipline online political activism in ways that serve its commercial imperatives. He notes that "[m]ass outrage and grief over sexual violence can be hijacked by 'old' and 'new' media companies seeking to redirect and rework political movements to profitable ends" (Salter, 2019, p. 331).
Additionally, Wood et al.’s (2019) analysis of “survivor selfies” highlights how technological affordances and cultural conditions shape the amplification, circulation and recognition of these images within the wider online public. They note that the techno-social conditions which enable women’s testimonies of violence to “go viral” include the convergence of platforms to enable wider circulation and an algorithmic power responsive to graphic posts
and contingent upon the affective characteristics of these posts (Wood et al., 2019, p. 8). Additionally, the networked affordances of online spaces can amplify voices, build solidarity and mobilise the wider public. In her analysis of survivors sharing their experiences of domestic violence under the popular #whyistayed and #whyileft hashtags in 2014, Clark (2016, p. 13) suggests that the confluence of "networked power of hashtags, the political fervor of digital activists, and the discursive influence of collective storytelling" amplifies the potential for personal testimony to build collective action. Clark (2016) also found that networked hashtags facilitated the active participation of audience members, who responded by sharing tweets and encouraging their followers and friends to pay attention to survivor narratives. Actors within these settings also play a crucial role in shaping public access to and understanding of discursive activism. For example, when testimony is shared in online spaces populated by supportive actors, communities of practice can be built into counterpublics (Salter, 2013). Alternatively, actors can constrain speech. For example, in an analysis of #mencallmethings, Megarry (2014, p. 51) noted that while this hashtag was used to name men as perpetrators of violence, some users co-opted the discussion to critique the gendered framing and suggested the hashtag be changed to '#peoplecallmethings' or '#misogynisticassholesruintheinternet'. In light of this, Megarry (2014, p. 51) argues that:

[C]onversations surrounding the title and appropriateness of the hashtag occupied significant space in the discussion, and redirected the focus away from women's collective articulation of their experiences by demanding that they first justify their right to engage in such a conversation.
She notes further that these criticisms must be situated within the open and public space of Twitter, which allows for public engagement as opposed to specific networks of support. Indeed, attempts to discipline women's testimony of violence in networked publics are commonplace, and women and girls who participate in these practices have been subjected to hostility, trolling and threats (Mendes et al., 2018; Vitis & Gilmour, 2017).
Co-opting Testimony
As the circulation of anti-violence speech acts is architected by these social, political, technological and material landscapes, feminist scholars have accordingly highlighted how attempts to publicly address sexual violence have intersected with and been co-opted by law-and-order discourses. For example, in their early work on survivor testimony and public disclosure, Alcoff and Gray (1993, p. 261) questioned whether the increased circulation of survivor testimony was transformative or being "taken up and used but in a manner that diminishes its subversive impact". These points have been reiterated by Serisier (2018, p. 24), who draws upon historical case studies where women speak out against rape to argue that women's testimonies of sexual violence are being uncoupled from feminist politics and are now "open to the competition between different discourses." Serisier (2018, p. 24) notes how these stories can be incorporated into criminal justice discourses, situating the criminal legal system as the site of understanding and managing sexual violence. Similar observations have been made in relation to the repurposing of women's narratives of violence in the media. A common example is when political actors co-opt survivor narratives and experiences of sexual violence to gain support for "tough on crime" policies by individualising and othering sexual violence in the news media and reproducing women's bodies as the landscapes upon which social tensions are cohered (Gleeson, 2004; Nilsson, 2018). While this has been an enduring issue, as Alcoff and Gray's (1993) work illustrates, when discursive activism takes place in online spaces the potential for repurposing is particularly acute (Powell, 2015). For example, Powell (2015, p. 582) notes that survivors face an "inherent loss of control" when personal testimony is circulated within online publics. This is affirmed by Wood et al. (2019, p. 9), who note that when survivors post selfies in an attempt to narrate their own experiences and advocate for public recognition of domestic violence "…[they] lose control over their image and their narrative, and the swarms can move in entirely different directions to those intended by the original poster."
In addition to repurposing, online naming and shaming can align with dominant state discourses, as calls for punitiveness in formal responses to sexual violence can easily create an "uneasy alliance" between "conservative law and order politics" (Powell, 2015, p. 573) and carceral feminism. For example, in her work on digilantism, Jane (2017b, p. 57) argues that public affirmation of digilantism as a response to online hate speech may reproduce longstanding discourses which both responsibilise women for men's violence and cohere with neoliberal crime prevention strategies encouraging citizens to manage their own risks of victimisation. In sum, these varied works highlight some of the discursive, social, material and technological conditions which architect the way in which the public access (Clark, 2016; Wood et al., 2019), engage with (Salter, 2019) and understand (Serisier, 2018) women's use of public testimony to address violence, particularly online. They also demonstrate the interplay between these contextual factors and public responses, and how this interaction makes meaning of these speech acts. In the following section, we build upon this work by exploring how context and affordances were evident in the public response to the Monica Baey case in Singapore.
The Monica Baey Case
In November 2018, Nicholas Lim, a student at the National University of Singapore (NUS), filmed fellow student Monica Baey while she showered in the residential halls. After Baey reported Lim to police and the university, Lim received a 12-month conditional warning from the Singapore police, stating that he must not commit any offences during the period; the warning would not be placed on his record or disclosed to employers (Seah, 2019). NUS ordered Lim to write an apology letter to Baey and undergo mandatory counselling. He was suspended from university for one semester and banned from entering the student hall where Baey lived (Ang, 2019; Teng, 2019). Disappointed with this lenient response, Baey posted about her experience on Instagram on April 18th and 19th 2019. In her posts, Baey recounted the shock of discovering that Lim had filmed her, and expressed disappointment with his punishment, which she found to be a "slap on the wrist" (Teng, 2019).
702
L. Vitis and L. Naegler
She stated that this was her “last resort” to bring attention to the matter in light of NUS’s and the police’s failure to take it seriously (Ang, 2019). The Instagram posts were soon picked up by the local media and received widespread public attention and support. Around 300,000 people signed an online petition calling for the university and police to punish Lim more harshly (Ng, 2019), and Baey received support from politicians (Teng, 2019). Spurred to action by this public response, NUS reviewed its disciplinary frameworks for sexual misconduct on campus. This resulted in the adoption of tougher sanctions for offenders, including a minimum one-year suspension for serious offences and immediate expulsion for severe or repeated cases (Ang, 2019). Campus security was enhanced, including increased technological surveillance. Changes were also made to improve support for victims, including the launch of a victim care unit and greater victim involvement in disciplinary procedures (Ang, 2019; Teng, 2019). Over the past decade, cases of image-based abuse such as the one perpetrated against Baey have been recognised as a significant problem in Singapore. Media reports have long documented incidents in which men have been prosecuted for acts of camera sexual voyeurism (often referred to as “upskirting”) in public places like subway train stations (Vitis, 2020a). More recently, research into image-based abuse has demonstrated that Singaporean women have been targeted with threats to distribute intimate images, the non-consensual distribution of intimate images and the non-consensual production of intimate images (AWARE, 2019; Vitis, 2020b). For example, over the past four years Singapore’s leading sexual violence service, the Sexual Assault Care Centre (SACC), has been documenting cases of technology-facilitated violence against women.
In 2019, they reported that the number of image-based abuse cases reported to their service had more than doubled, from 30 in 2016 to 64 in 2018 (AWARE, 2019). Around this time, the Criminal Law Reform Bill 2019 was introduced into the Singapore Parliament and subsequently created several image-based abuse offences (Criminal Law Reform Bill 2019, ss. 377BD–BE). Against this backdrop, and in the wider context of the rise of online resistance to sexual violence, the Monica Baey case exemplifies the role of online testimony as a mechanism for achieving transformative change
34 Public Responses to Online Resistance …
703
and recognition for violence against women. Yet in the Singaporean context, it is important to unpack naming and shaming as a mechanism both of resistance and of what Ibrahim (2018, p. 219) calls “everyday authoritarianism”, the bottom-up processes through which authoritarian control is maintained. In Singapore, the ruling party, the People’s Action Party (PAP), exercises an authoritarian rule that employs the legal system to discipline transgression (Ganapathy, 2000; Kaur et al., 2016) and constrain civil liberties. Consent to this authoritarian rule is maintained through careful management of the economy and investment in social policies, ensuring a high social standard of living (Kaur et al., 2016). While not absent (see e.g. Singam & Thomas, 2017), political resistance in Singapore is constrained by a number of laws restricting free speech and criminalising any protest outside the narrow boundaries of the PAP government’s permit process. However, the limited political opposition to the PAP government is not solely attributable to these control mechanisms. As Ibrahim (2018) argues, it is also the result of citizens’ consent to, and active participation in, authoritarian rule. Through practices of “everyday authoritarianism”, authoritarian rule is maintained “bottom-up”, rather than “top-down”, by citizens who give “permission to the state to behave in an authoritarian fashion” (p. 228). This is exemplified in the conscious enacting and enforcing of rules and social control, shown, for example, in citizens’ willingness to report transgressions to authorities. One example of everyday authoritarianism in Singapore is the public shaming of antisocial and transgressive behaviour on various online platforms (Jiow & Morales, 2015). In Singapore, vigilantism is thus available to community members as a means of engaging in politics.
For example, the popular “citizen journalist” website Stomp, a subsidiary of the government-owned news outlet Straits Times, encourages Singaporeans to post pictures and videos of public transgressions, ranging from minor social transgressions to cases of assault and the incitement of racial or religious hatred. According to Ibrahim (2018), practices of “everyday authoritarianism” can also manifest in punitive and aggressive reactions of citizens towards rule-breakers or those engaging in what is considered unacceptable behaviour, particularly acts seen as insulting significant symbols of Singaporean nationalism.
The presence of this wider political context, and the consequent availability of online naming and shaming as a tool for addressing transgressions, raises questions as to whether responses to the Monica Baey case were determined within or outside the parameters of online narrative politics. We recently explored this question in a research project examining public responses to this case on a popular Singaporean message board forum, Hardwarezone (Vitis et al., under review). This project involved a thematic analysis of 6820 comments about the Monica Baey case posted on this forum, and used these comments to consider whether online resistance shifts the public’s frame of reference for sexual violence and how the wider socio-political context shapes reactions to online resistance. We found that public responses on the forum were largely supportive of Baey’s act of naming and shaming Lim. Some commenters also engaged in practices of participatory vigilantism, continuously naming Lim and encouraging others to share his information and post about the case in order to keep the story in the public sphere (Vitis et al., under review). However, both support of Baey and criticism of Lim, NUS and/or the police centred on accusations of corruption and class privilege. The lenient punishment of Lim was seen as the result of his assumed “elite status”, with users suggesting he had either wealthy or politically connected parents. In addition, users problematised what they considered to be a violation of the principle of punishment as deterrence, and the failure of punishment in this case. Specifically, they noted the unfairness of other offenders receiving more serious punishments for similar crimes and stated that Lim’s punishment was not harsh enough. Comments also emphasised the opinion that, without strict carceral responses, there would be no deterrence for perpetrators of sexual violence.
These responses centred on the ways in which Lim, and the initial response to him, violated principles understood as part of national identity: specifically, zero tolerance and meritocracy. Less emphasis was placed on the gender inequalities which both produce violence and shape poor responses from police and formal institutions (Vitis et al., under review). User responses demonstrated participation in supportive vigilantism and affirmation of extant state discourses of institutional punishment as
the ultimate form of deterrence. At the same time, there was less emphasis on the wider conditions which shape image-based abuse, such as gender inequality and the commodification of non-consensual intimate images between male peers. Everyday authoritarianism allows active participation in the restoration of social order (Ibrahim, 2018). Therefore, while participatory and supportive comments did not illustrate a fundamental shift in the frames of reference for sexual violence, they did demonstrate that the political permission for everyday authoritarianism, particularly in the form of naming and shaming transgressions as a way of cohering social order, was available to and exercised by users in response to this case. The emphasis on class injustice and on the failure of authorities to implement zero-tolerance policies for crime control demonstrates the role that extant practices of shaming public transgressions, as part of “bottom-up” authoritarian control, played in supportive responses. That these responses reflect these political conditions can also be illustrated by contrasting them with recent backlashes against other anti-violence speech acts focused on violence against women in Singapore during 2019. One important example was a public dispute between a local not-for-profit women’s organisation, the Association of Women for Action and Research (AWARE), and the Singaporean Crime Prevention Council (Chong, 2019). In 2019, the Crime Prevention Council published an awareness-raising campaign for “outrage of modesty”2 offences. The campaign featured a man sitting next to a woman, presumably on public transport, with his hand approaching her leg. A price tag with the words “2 years’ imprisonment: It is not worth it” was attached to his wrist. AWARE publicly criticised this campaign for its representation of punishment as the “cost” of assault, claiming that it de-emphasised victims’ experiences (Chong, 2019).
This resulted in a backlash from the police, who affirmed that the focus on punishment as the cost of sexual violence was an appropriate form of crime prevention messaging (Chong, 2019). This backlash against an attempt to publicly situate sexual violence outside the discursive parameters of the carceral state by prioritising women’s experiences can be directly contrasted with the positive responses to Baey’s posts and the criticism of the failures of institutional punishment on Hardwarezone. Both responses illustrate the importance of extant discourses in shaping the terms of public support for discursive activism.

2 Defined as the use of criminal force to any person, intending to outrage, or knowing it to be likely that he will thereby outrage, the modesty of that person (Singapore Penal Code s. 354).
Conclusion

As noted above, resistant acts are never solely defined by the meaning-making processes of those enacting them: they enter into an interactional relationship with the external definitions and reactions of observers and targets (Hollander & Einwohner, 2014). As we note in this chapter, women’s online resistance to sexual violence intersects with an array of discourses and forces, from the political to the technological, and these can be partially evidenced in public responses to these speech acts. As our own case study shows, public responses provide insight into the role of political context in shaping the distribution and reception of anti-violence speech acts and are themselves fundamentally involved in the processes of meaning-making and of “making visible” violence against women. This case study revealed a contextualised recognition of sexual violence, one which demonstrated demands for order without explicitly focusing on change to, and recognition of, gendered violence and injustices. This demonstrates the importance of utilising public responses to online testimony as a source of insight into contemporary understandings of both sexual violence and online resistance.

Acknowledgements The author(s) received no financial support for the research, authorship and/or publication of this article.

Legislation
Criminal Law Reform Bill 2019.
References

Alcoff, L., & Gray, L. (1993). Survivor discourse: Transgression or recuperation? Signs, 18(2).
Ang, J. (2019, June 10). NUS accepts recommendations for tougher penalties on sexual misconduct; minimum 1-year suspension for serious offences. Straits Times. https://www.straitstimes.com/singapore/education/nus-accepts-all-recommendations-by-committee-on-sexual-misconduct-cases
AWARE. (2019). A recap: Taking ctrl, finding alt 2019. https://www.aware.org.sg/2019/11/a-recap-taking-ctrl-finding-alt-2019/
Chong, E. (2019, November 18). Police defend anti-molestation posters after AWARE criticism. Straits Times, pp. 1–27.
Clark, R. (2016). “Hope in a hashtag”: The discursive activism of #WhyIStayed. Feminist Media Studies, 16(5), 788–804.
Clark-Parsons, R. (2019). “I SEE YOU, I BELIEVE YOU, I STAND WITH YOU”: #MeToo and the performance of networked feminist visibility. Feminist Media Studies. Online First. https://doi.org/10.1080/14680777.2019.1628797
Dragiewicz, M., Burgess, J., Matamoros-Fernández, A., Salter, M., Suzor, N. P., Woodlock, D., & Harris, B. (2018). Technology facilitated coercive control: Domestic violence and the competing roles of digital media platforms. Feminist Media Studies, 18(4), 609–625.
Fileborn, B. (2017). Justice 2.0: Street harassment victims’ use of social media and online activism as sites of informal justice. British Journal of Criminology, 57(6), 1482–1501.
Fileborn, B., & Loney-Howes, R. (2019). Introduction: Mapping the emergence of #MeToo. In B. Fileborn & R. Loney-Howes (Eds.), #MeToo and the politics of social change (pp. 1–18). Springer International Publishing.
Ganapathy, N. (2000). Conceptualising community policing, framing the problem. Australian & New Zealand Journal of Criminology, 33(3), 266–286.
Gleeson, K. (2004). From centenary to the Olympics, gang rape in Sydney. Current Issues in Criminal Justice, 1(2), 183–201.
Graeber, D. (2009). Direct action: An ethnography. AK Press.
Harris, B. A., & Woodlock, D. (2019). Digital coercive control: Insights from two landmark domestic violence studies. British Journal of Criminology, 59(3), 530–550.
Harris, B., & Vitis, L. (2020). Digital intrusions: Technology, spatiality and violence against women. Journal of Gender-Based Violence. Online First. https://doi.org/10.1332/239868020x15986402363663
Hollander, J., & Einwohner, R. (2014). Conceptualizing resistance. Sociological Forum, 19(4), 533–554.
Ibrahim, N. A. (2018). Everyday authoritarianism: A political anthropology of Singapore. Critical Asian Studies, 50(2), 219–231.
Jane, E. A. (2014). “Your a ugly, whorish, slut”: Understanding E-bile. Feminist Media Studies, 14(4), 531–546.
Jane, E. (2015). Flaming? What flaming? The pitfalls and potentials of researching online hostility. Ethics and Information Technology, 17, 65–87.
Jane, E. A. (2016). Online misogyny and feminist digilantism. Continuum, 30(3), 284–297.
Jane, E. A. (2017a). ‘Dude … stop the spread’: Antagonism, agonism, and #manspreading on social media. International Journal of Cultural Studies, 20(5), 459–475.
Jane, E. A. (2017b). Feminist flight and fight responses to gendered cyberhate. In M. Segrave & L. Vitis (Eds.), Gender, technology and violence (pp. 45–61). Routledge.
Jiow, H. J., & Morales, S. (2015). Lateral surveillance in Singapore. Surveillance and Society, 13(3/4), 327–337.
Kaur, S., Tan, N., & Dutta, M. J. (2016). Media, migration and politics: The coverage of the Little India riot in The Straits Times in Singapore. Journal of Creative Communications, 11(1), 27–43.
Megarry, J. (2014). Online incivility or sexual harassment? Conceptualising women’s experiences in the digital age. Women’s Studies International Forum, 47, 46–55.
Mendes, K., Ringrose, J., & Keller, J. (2018). #MeToo and the promise and pitfalls of challenging rape culture through digital feminist activism. European Journal of Women’s Studies, 25(2), 236–246.
Naegler, L. (2018). ‘Goldman-Sachs doesn’t care if you raise chicken’: The challenges of resistant prefiguration. Social Movement Studies, 17(5), 507–523.
Ng, H. (2019, April 23). Peeping Tom caught filming in NUS hall shower: Thousands sign petitions calling for tougher action. Straits Times. https://www.straitstimes.com/singapore/peeping-tom-caught-in-nus-hall-shower-thousands-sign-petitions-calling-for-tougher-action
Nilsson, G. (2018). Rape in the news: On rape genres in Swedish news coverage. Feminist Media Studies. Online First. https://doi.org/10.1080/14680777.2018.1513412
Powell, A. (2015). Seeking rape justice: Formal and informal responses to sexual violence through technosocial counter-publics. Theoretical Criminology, 19(4), 571–588.
Salter, M. (2013). Justice and revenge in online counter-publics: Emerging responses to sexual violence in the age of social media. Crime, Media, Culture, 9(3), 225–242.
Salter, M. (2019). Online justice in the circuit of capital: #MeToo, marketization and the deformation of sexual ethics. In B. Fileborn & R. Loney-Howes (Eds.), #MeToo and the politics of social change (pp. 317–334). Springer International Publishing.
Seah, L. (2019, April 23). A detailed timeline of the Monica Baey incident. Alvinology. https://alvinology.com/2019/04/23/monica-baey-nus-nicholaslim-instagram-petition/
Serisier, T. (2018). Speaking out: Feminism, rape and narrative politics. Springer International Publishing.
Singam, C., & Thomas, M. (2017). The art of advocacy in Singapore. Ethos Books.
Teng, A. (2019, June 19). Harsher penalties for sexual misconduct at NUS to take immediate effect. Straits Times. https://www.straitstimes.com/singapore/education/harsher-penalties-for-sexual-misconduct-at-nus-to-take-immediate-effect
Vitis, L. (2020a). Media representations of camera sexual voyeurism in Singapore: A medicalised, externalised and community problem. Feminist Media Studies. Online First. https://doi.org/10.1080/14680777.2020.1810095
Vitis, L. (2020b). Private, hidden and obscured: Image-based sexual abuse in Singapore. Asian Journal of Criminology, 15(1), 25–43.
Vitis, L., & Gilmour, F. (2017). Dick pics on blast: A woman’s resistance to online sexual harassment using humour, art and Instagram. Crime, Media, Culture, 13(3), 1–13.
Vitis, L., Naegler, L., & Salehin, A. (under review). ‘This is not a case of gender inequality: This is a case of injustice’: Perceptions of online resistance to camera sexual voyeurism in the context of everyday authoritarianism in Singapore. Crime, Media, Culture.
Wood, M., Rose, E., & Thompson, C. (2019). Viral justice? Online justice-seeking, intimate partner violence and affective contagion. Theoretical Criminology, 23(3), 375–393.
Index
A
Ableism 598, 632, 646 Aboriginal. See Indigenous Accountability 58, 59, 61, 70, 213, 418–423, 428–436, 520, 522, 557, 574, 608, 609, 619–621, 697 Activism 2, 8, 10–12, 20, 56, 77, 78, 84, 375, 396, 399, 400, 402, 568, 620, 646, 673, 680, 682, 683, 686, 687, 695, 696, 698–700, 706 Affordances 2, 166, 181, 380, 397, 550–552, 557, 558, 694, 698, 699, 701 Africa 5, 77, 93–97, 99, 100, 104, 105, 107–111, 120, 125, 265, 276, 277 Agency 35, 45, 47, 50, 57, 58, 63, 83, 85, 100, 184–186, 211,
263, 272, 282, 290, 301, 303, 310, 342, 366, 418, 435, 462, 469–472, 515, 521, 551, 555, 574, 597, 623, 632, 653, 661, 665, 674, 678 AI-Facilitated Abuse (AIFA) 585, 598 Algorithmic 44, 360, 556, 698 Anti-rape 461, 463, 464, 467–469, 472–474 Aotearoa. See New Zealand Artificial intelligence (AI) 24, 26, 57, 68, 483, 583–587, 589, 598 Assemblage 4, 193, 342, 548, 550–552, 555, 556, 559, 560, 651, 656 Attitudes 26, 64, 65, 70, 122, 141, 167, 172, 356, 376, 380, 475, 557, 571, 652, 675, 682, 694, 695
Australia 5, 6, 42, 45, 47, 49, 51, 55, 56, 62, 65–68, 70, 77, 117, 119, 124, 127, 166, 182–192, 234, 262, 263, 265, 299, 300, 326, 336, 338, 343, 344, 418–421, 424, 433, 435, 482, 514, 517, 522, 585–592, 596–598, 675–677
B
Backlash 63–65, 144, 146, 339, 356, 364, 369, 379, 616, 659, 678, 705 Blackmail 148, 303, 387, 549, 654, 684 Blackpill 357, 359, 365, 368–370 Blog/blogging 20, 22–24, 61, 400, 403, 611, 612, 680, 682–684 Bodily autonomy 282, 513, 515, 565, 567, 573, 685 Body-worn cameras (BWCs) 9, 417–436, 445, 473 Butler, Judith 327, 619 Bystander 5, 400, 463, 466, 467, 471, 486, 522, 552, 570, 571, 573, 597, 598
C
Cameras 116, 297, 298, 321, 327, 441–444, 446, 447, 449–451, 453, 473, 519, 621, 702 Canada 5, 6, 32, 41, 42, 45–47, 49, 117, 120, 188, 265, 443, 444, 446, 521, 576, 608, 620 Child-sexual abuse material (CSAM) 533, 534 Civil law 597
Coercive control 158, 163–167, 171, 172, 466, 679 Collective action 469, 472, 699 Colonisation 184, 186, 187, 337 Community standards 100, 593, 594 Consent 7, 20, 26, 59, 105, 137, 143, 167, 234, 281, 282, 286, 289–292, 298–302, 319–329, 340, 362, 420, 442–444, 447–449, 452, 454, 463, 464, 466, 468, 470, 472, 508, 515, 516, 518, 522, 523, 547, 549, 553–555, 567, 592–595, 610, 617, 632, 637–639, 641, 646, 653, 664, 665, 680, 681, 703 Coping 210, 211, 213, 265, 270, 271, 273, 275, 304, 588 Counselling 102, 105, 108, 267, 271, 701 Courts/court-trials 27, 41, 46, 48–51, 105, 366, 442, 443, 446–448, 450, 451, 454, 455, 518, 519, 531, 553, 554, 646, 651, 678 COVID-19 22–25, 147, 286, 308, 507, 553, 585, 632, 635, 641, 643 Crenshaw, Kimberle 5, 82, 121, 509 Criado-Perez, Caroline 27, 531 Crime 26, 29, 33, 45, 47, 48, 94, 143, 166, 204, 216, 222, 262, 276, 314, 329, 364, 396, 398, 404, 418, 444, 446–448, 469, 470, 473, 482, 483, 485, 486, 491, 496, 509, 517, 534, 539, 549, 552, 554, 587, 656, 676, 683, 700, 701, 704, 705
Criminal law 105, 552, 557, 586, 589, 590, 596, 598 Culturally and linguistically diverse (CALD) 5, 63, 116–128, 517 Cyberbullying 139, 158, 219, 220, 535, 632, 637, 639, 673 Cyberfeminism 109 Cyberstalking 4, 27, 30, 34, 120, 158, 164, 165, 168, 203–222, 229, 262, 264–269, 272, 273, 276, 277
D
Dating apps 5, 83, 167, 182, 188–191, 196, 231, 299, 309, 310, 335–337, 339–343, 345–351 Deepfake 56, 57, 67–69, 75, 135, 144–147, 584–598 Deterrence 530, 556, 557, 704, 705 Digilantism 400, 405, 407, 695, 696, 701 Digital Charter 532, 540 Digital communication 164, 203, 461, 464, 465, 585, 677, 678 Digital dating abuse (DDA) 158, 160–162, 172 Digital media 165, 445, 547, 548, 550, 551, 555, 556, 560, 655, 656, 658, 666, 667 Digital technologies 81, 93, 122, 125, 161, 181, 189, 193, 262, 319, 323, 356, 360, 380, 463, 482, 552, 633, 656 Disassemblage 554 Disclosures 8, 274, 398, 643, 700 Discrimination 4, 5, 117, 123, 127, 128, 137, 189–191, 336, 337, 339, 347–351, 378, 540, 554, 559, 610, 619 Diversity 2, 7, 11, 12, 94, 122, 188, 274, 276, 522, 523, 540, 569, 596, 597, 634 Domestic violence 106, 121, 122, 158, 160, 161, 163, 164, 166, 167, 171, 172, 277, 424, 511, 517, 520, 552, 574, 699, 700 Doxing/doxxing 99, 144, 164, 303, 398, 549, 663
E
Education 11, 26, 27, 34, 45, 60, 61, 82, 99, 107, 117, 118, 190, 208, 269, 270, 276, 283, 289, 290, 292, 298, 320, 328, 419, 470, 475, 573, 576, 577, 585, 596–598, 611, 613, 623, 632, 634, 636, 640, 641, 645, 646, 681 Embodied 205, 398, 467, 532, 534, 551, 662 Empirical 12, 76, 78, 263, 264, 470, 471, 473, 552, 598, 634, 635, 640, 680 England 9, 29, 265, 401, 486, 514, 515, 519, 530, 531, 534, 537, 539, 540, 585, 586, 632, 634 eSafety Commissioner 116, 118–120, 124, 522, 585, 589, 597 Evidence 8, 45, 47, 77, 84, 95, 106, 123, 125, 263, 264, 273, 274, 297–300, 315, 329, 342, 357, 399, 420, 427, 428, 433, 442, 443, 445–450, 452–454, 463, 465, 467–470, 472–475, 482, 483, 485, 519, 520, 531, 575, 586, 621, 634, 645, 655, 664, 678–680, 706 Exploitation 34, 35, 79, 94, 96, 264, 536, 631, 632, 635, 652, 659, 663–667
F
Facebook 20, 23–25, 32, 42, 76, 97, 100, 101, 103, 104, 107, 231, 234–236, 239, 240, 242–245, 247, 250, 266, 301, 309, 342, 382, 406, 492, 514, 531, 547, 556, 558, 586, 591, 595, 597, 611, 615, 621, 622, 666, 679, 681–683 Family violence 2, 9, 11, 116, 117, 121, 123, 125, 159, 163, 418, 419, 423–425, 427, 428, 473, 585, 679 Fear 25, 27, 30, 59–61, 68, 102, 103, 117, 118, 128, 147, 163, 169, 195, 204–207, 209, 212, 217, 218, 230, 232, 233, 239, 243, 245, 248, 249, 262, 265, 266, 268, 269, 308, 310–312, 322, 346, 349, 365, 396, 398, 399, 402–406, 462, 466, 469–471, 512, 517, 568, 576, 588, 591, 592, 639, 646, 676, 679 Feminist/feminism 3, 4, 8, 25, 26, 43, 62, 76, 80, 82, 85, 94, 95, 107, 109–111, 121, 137, 138, 143, 144, 146–148, 158, 169, 171, 183, 275, 304, 320, 356, 358, 359, 363, 364, 369, 370, 376–378, 385, 386, 396–398, 400–404, 406, 407, 445, 448, 508, 515, 574, 609, 614, 622, 632, 635, 639, 659, 664, 673–676, 680–686, 693, 694, 697, 698, 700, 701
G
Gender-based abuse online (GBAO) 9, 529–541 Gender-based online harm (GBOH) 6, 10, 607–610, 613, 617–623 Gender-diverse 187, 596, 608, 618, 619, 635 Gendered hate 396–404, 406, 632, 646 Gendered violence 1–3, 7–12, 23, 24, 27, 29, 33, 35, 172, 553, 572, 618, 676, 682, 684–687, 695, 706 Gender-inclusive 530, 541 Google 23, 31, 56, 57, 60, 62, 99, 264, 464, 556, 586, 597, 679
H
Harassment 2, 4, 7, 8, 10, 21, 24, 25, 28, 31, 44, 80, 95, 96, 99, 101, 102, 107, 108, 120, 126, 135, 140, 142, 165, 206, 207, 215, 216, 219, 231, 232, 236–238, 248, 262, 271, 329, 336, 347, 358, 377, 381–383, 386, 395, 464, 472, 495, 511, 518, 521, 530, 535, 537, 549, 566, 588, 590, 592, 596, 610, 612, 613, 615–621, 642, 673–677, 679, 694–697 Haraway, Donna 6
Harmful Digital Communications Act (HDCA) 673, 677, 679, 680, 686 Hashtag 8, 25, 27, 31, 101, 361, 400, 615, 616, 680, 681, 693, 696, 697, 699 Hashtag activism 681, 683, 696–698 Hegemonic masculinity 357, 358, 377 Help-seeking 117, 122, 123, 204, 211, 475 Home Office 29, 533 Homophobia/homophobic 385, 386, 395, 567, 569, 598, 608
I
Ideology 33, 186, 356–360, 368, 369, 377, 388 Image-based abuse (IBA) 2, 4, 11, 12, 45, 47, 48, 50, 57, 119, 126, 141, 143, 145, 159, 161, 167, 168, 172, 310, 482, 550, 585–588, 590–594, 596, 597, 652, 659, 694, 702, 705 Image-based sexual abuse. See Image-based abuse (IBA) Images 20, 23, 26, 35, 48, 56–61, 67, 68, 78, 79, 116, 118, 141, 143, 144, 162–164, 167, 168, 185, 219, 220, 236, 262, 265, 266, 269–271, 281–283, 285– 292, 298, 300, 302–305, 307, 311–313, 320, 324, 327–329, 340, 361, 397, 419, 447, 452, 454, 455, 508, 511, 514, 515, 517, 522, 548–550, 552, 553, 555, 556, 559, 565–568, 576, 584, 585, 591–595, 611, 633,
635, 637–639, 642, 645, 667, 675–677, 683–686, 698 Impacts 4, 5, 7, 12, 20, 24, 30, 31, 50, 59, 69, 78, 94, 95, 102, 103, 108, 117, 121, 123, 124, 128, 139, 158, 159, 162, 163, 165, 166, 168, 169, 171, 172, 187, 192, 203, 204, 208–211, 215, 217–222, 232, 243, 263, 265–267, 269, 272–274, 276, 277, 289, 298, 301, 304–311, 313–315, 320, 335, 344, 388, 419–421, 424, 429–431, 435, 462, 467, 531, 532, 537, 538, 556, 567, 568, 570, 572, 574, 576, 578, 587, 591, 593, 596, 611, 617, 618, 621, 623, 636, 638, 641, 645, 647, 675, 676, 686, 700 Incel 5, 31–33, 355–370 India 75, 357, 507, 514–516 Indigenous 5, 42–44, 50, 182–189, 191–196, 336–342, 344, 346–351, 453, 607, 610, 616, 620, 621 Inequality/inequalities 4–6, 10, 12, 82, 97, 127, 167, 172, 364, 379, 509, 538, 540, 598, 665, 674, 681, 685, 704, 705 Instagram 21–23, 25, 30, 136, 137, 231, 250, 340, 345, 533, 646, 685, 696, 701, 702 Internet Safety Strategy (ISS) 532–534, 539, 540 Intersectionality/intersectional 5, 33, 77, 82, 95, 109, 116, 117, 121, 122, 126–128, 376, 507–509, 511, 513, 519, 520, 523, 535, 540, 609, 610, 619
Intimate images 26, 57, 62, 70, 281–283, 285, 290, 298–300, 304, 509, 512, 513, 515–517, 520, 521, 549, 550, 554, 566, 576, 585, 589, 592, 593, 596, 610, 611, 617, 673, 675, 677, 680, 702, 705 Intimate partner violence (IPV) 2, 157, 158, 163, 166, 169, 172, 204, 208, 217, 222, 232, 248, 481, 483, 492, 496, 674, 679, 684 Italy 9, 547, 552, 559, 585
J
Judiciary 45, 554 Justice 2, 9, 11, 12, 25–27, 29, 43, 45, 47, 49–51, 58, 61, 65, 66, 69, 70, 81, 100, 105, 123, 128, 164, 262, 272, 276, 379, 400, 418, 419, 443, 444, 467, 473, 508, 513, 516, 518–521, 523, 538, 547, 552, 554, 565–571, 573, 575, 576, 586, 597, 607–609, 613, 623, 655, 683, 693–697, 700 Justice needs 10, 608, 609, 623
K
Kelly, Liz 158, 169, 170, 233, 248, 249, 404, 462, 469, 656
L
Law 4, 9–11, 26, 28, 32, 41–49, 51, 55, 56, 58, 60, 61, 65–68, 70, 105, 106, 108, 118, 125,
128, 184, 212, 221, 272, 301, 303, 310, 328, 329, 350, 408, 420, 445, 448, 483, 507, 508, 514–518, 529–531, 537–540, 547, 548, 552, 554, 557, 559, 586, 591, 593, 596, 597, 611, 613, 619, 632, 641, 646, 655, 674, 677, 700, 701, 703 Law Commission of England and Wales 539 Law reform 9, 12, 45, 55, 65, 67, 531, 533, 536, 539, 540, 612 Left-wing 5, 375–379, 384, 388 LGBTQ/LGBTIQ+ 5, 78, 298–301, 305, 306, 308, 314, 315, 335–347, 350, 516, 518 Lived experience 4, 5, 43, 62, 94, 97, 109, 110, 263–265, 267, 276, 277, 307, 596 Lockdown 22, 286, 553, 585, 631, 632, 635, 643, 645
M
Machine learning (ML) 483–485, 487, 490, 491, 493–495, 497, 498, 584, 585 Male radicalisation 76, 86 Masculinity 140, 143, 144, 167, 339, 358, 364, 366, 370, 377, 382, 384 #MenCallMeThings 696, 699 #MeToo 8, 27, 28, 364, 681, 693, 696, 698 Migrant women 116, 120, 122 MindGeek 651–653, 655, 656, 658–660, 663–668 Misandry 357, 359, 362–364, 367, 369, 381
Misogyny 4, 5, 10, 21, 27–29, 31, 33, 34, 77, 81, 98, 136, 149, 186, 355–357, 360–363, 368–370, 377, 378, 380–382, 385–388, 395, 396, 400, 532, 540, 598, 652, 656, 658–662, 664–668, 695, 696 Monitoring 119, 120, 126, 159, 161–166, 168, 215, 216, 218, 220, 230, 232, 236–240, 245, 470, 492 #MyBodyMyTerms 680–683, 685, 686
N
National Society for the Prevention of Cruelty to Children (NSPCC) 277, 533, 631 Neoliberal 358, 467, 474, 657, 661, 664, 668, 685, 701 Network/networked 21, 22, 27, 29, 30, 58, 75, 94, 126, 140, 164, 213, 222, 231, 234, 235, 269, 270, 272–276, 281, 303, 360, 380, 382, 450, 451, 551, 552, 568, 569, 608, 610–614, 616, 620, 621, 635, 643, 655, 699 New Zealand 5, 10, 42, 45–47, 117, 144, 265, 284, 300, 418, 587, 673–677, 679–683, 685, 686 Non-consensual pornography 80, 565–568, 570, 571, 573–577, 588 Nonhuman 44, 47, 49, 548, 551, 552
O
Objectification 60, 136–138, 140, 146–148, 184, 186, 188, 511, 657, 660, 665 Office of the eSafety Commissioner. See eSafety Commissioner Online harms 81, 530, 532–541, 555, 608, 609, 620, 636, 640, 646, 655, 674, 676, 677, 683 Online hate 539, 540, 701 Online misogyny 396–398, 400, 401, 408, 530, 532, 533, 632, 646 Online resistance. See Resistance Online safety 98, 250, 531–534 Online violence against women (OVAW) 9, 94, 95, 97, 530, 531, 534, 537, 540
P
Patriarchy 82–86, 98, 100, 109, 137, 148, 149, 370, 540 Perpetration/perpetrators 2, 3, 7, 12, 30, 58, 63–66, 68, 76, 83, 84, 100, 103–105, 107–109, 116, 118–120, 126, 127, 135, 157, 159, 161–166, 168, 170, 171, 182, 203, 204, 207, 209, 212, 214–222, 229, 231, 239–241, 249, 250, 265, 269, 271, 286, 290, 291, 297, 298, 300, 302–305, 307, 310, 329, 337, 347, 351, 361, 366, 380, 387, 400, 404, 405, 407, 444, 447, 462, 464, 466, 467, 474, 492, 495–497, 548, 551, 552, 555, 558, 566–577, 584, 585, 587,
592, 595, 598, 618, 645, 655, 676, 678, 695, 696, 699, 704 Plant, Sadie 6 Police/policing 9, 25–28, 45, 47–51, 58, 84, 85, 99, 105, 118, 125, 127, 128, 148, 211, 212, 232, 233, 248, 262, 267, 291, 417–435, 443, 446, 449, 453, 467, 469, 473, 487, 493, 516–518, 553, 567, 568, 597, 608, 613, 616–618, 623, 638, 657, 679, 701, 702, 704, 705 Policy 9, 10, 12, 22, 24, 30, 43, 47, 49, 51, 66, 77, 80, 106, 108, 117, 123, 125–128, 158, 159, 170, 184, 185, 192, 231, 250, 289, 320, 328, 329, 350, 383, 386, 388, 418–420, 428, 433, 445, 485, 517, 523, 529–540, 557, 559, 569, 586, 597, 617, 632, 634, 635, 637, 639, 640, 642–646, 666, 700, 705 Politics 3, 5, 10–12, 23, 24, 30, 63, 75–79, 81–86, 96, 100, 121, 122, 144–146, 148, 185, 356, 360, 362, 369, 376–379, 381, 383–388, 531, 547, 668, 685, 693, 694, 696–701, 703–706 Populism 376, 377, 384, 387, 388 Pornhub 597, 651–653, 655, 658, 660, 663–666, 668 Pornography 10, 26, 67, 79, 83, 106, 135, 137, 145, 146, 148, 159, 324, 378, 443, 509, 512, 515, 516, 534, 576, 584, 585, 597, 612, 638, 651–654, 656–658, 661, 663, 666, 667 Post-feminist/postfeminism 652, 656, 659, 661, 662, 665, 685
Prevalence 3, 94, 95, 120, 126, 161–163, 166, 204–207, 212, 215, 217, 218, 263, 298–300, 326, 329, 357, 463, 512, 531, 532, 646
Prevention 125, 463, 465, 466, 470, 472–474, 522, 586, 596–598, 701, 705
Privacy 42, 43, 45–47, 49, 51, 63, 106, 108, 126, 141, 145, 161, 243, 244, 269, 281, 283–286, 289, 290, 292, 315, 346, 362, 395, 442, 466, 486, 515, 516, 518, 520, 521, 523, 548, 550, 565, 567, 589, 593, 597, 642, 646, 677
Procedural fairness 419–423, 428–434, 436
Public 4, 7, 8, 10, 21, 26, 27, 32, 42, 45, 57, 58, 61–65, 67, 69, 76, 78, 80, 83, 84, 94, 101, 105, 118, 123, 126, 136–140, 142, 144, 146–148, 217, 233, 248, 261, 266, 267, 283, 305, 307–309, 324, 329, 357, 361, 364, 375, 379–383, 388, 395–397, 400, 402, 404, 406–408, 419–423, 428–434, 436, 443–448, 450, 451, 454, 462, 467, 470–473, 511, 515–519, 529, 537, 538, 540, 541, 549, 550, 557–559, 583, 584, 586, 590, 593, 597, 609, 612, 614, 615, 619, 635, 643, 659, 663, 674, 682, 694–706
Punishment 105, 119, 144, 147, 215, 520, 557, 566, 570, 575–577, 696, 697, 701, 704–706
R
Racism/racist 6, 117, 118, 123–125, 128, 181, 182, 186, 189–192, 195, 196, 336–339, 346–351, 360, 379, 381, 385, 386, 388, 395, 509, 598, 608, 632, 646
Radicalisation 79
Rape culture 147, 400, 618, 641, 673, 681, 697
Reddit 31, 361, 362, 386, 558, 584, 587, 597
Rejection 140, 187, 207, 209, 214, 347, 357, 362, 365–367, 369
Resistance 2, 9, 10, 23, 34, 80, 85, 138, 183, 193, 196, 356, 398, 399, 408, 469–474, 552, 608, 641, 693–696, 698, 702–704, 706
Responsibilisation 467, 652, 656, 660, 665, 667
Responsibility 10, 26, 63, 126, 213, 250, 283–286, 290, 292, 366, 376, 434, 467, 473, 533, 534, 536, 540, 541, 556, 557, 586, 596, 597, 609, 618, 632, 651, 652, 655, 656, 659, 660, 664–667, 684, 685, 687
Restorative justice 571, 574, 575
Revenge 136, 145, 207, 209, 271, 298, 549, 677
Revenge porn. See Image-based sexual abuse
Rhizomatic harm 567, 569–571
Rhizomatic justice 566
S
Safety theatre 462, 474, 475
School 9, 10, 49, 56, 60, 99, 107, 109, 125, 141, 161, 162, 235, 283–285, 287–291, 380, 387, 522, 566–569, 571, 573, 577, 587, 632–647, 654
Scotland 164, 486
Sexism/sexist 4, 6, 28, 63, 78, 137, 163, 186, 360, 376, 378, 379, 383–386, 388, 509, 530, 531, 538, 567–569, 571–573, 612, 618, 657
Sexting 161, 220, 264, 276, 277, 281–284, 286, 288–290, 292, 299, 300, 329, 516, 567, 632, 637, 638
Sextortion 60, 61, 303, 585
Sexual assault 9, 27, 29, 30, 164, 182, 189, 286, 320, 321, 385, 441–445, 447, 448, 450–452, 454, 455, 463–465, 469, 471, 472, 474, 512, 518, 597, 609, 674, 675, 696
Sexual harassment 2, 11, 76, 79, 83, 94, 99, 101, 136, 138, 159, 167, 168, 233, 262–264, 269, 272, 273, 276, 277, 287, 329, 398, 462, 472, 632, 636, 637, 640–643, 645, 693
Sexuality 5, 63, 109, 122, 140–142, 145–147, 159, 168, 182, 185–187, 189, 193, 194, 261, 274, 276, 281, 282, 288–293, 302–304, 320, 328, 336, 337, 340–342, 344, 347, 348, 358, 377, 384, 512, 516, 596, 644, 661, 662, 674–676, 683, 684, 686
Sexual violence 2, 4, 7, 8, 10, 11, 29, 43–45, 48, 136, 143, 146, 147, 157–160, 166–168, 170, 172, 186–188, 190, 192, 233, 248, 261–264, 267, 270, 271, 274, 277, 319, 320, 327, 336, 337, 361, 367, 369, 381, 387, 395, 397, 398, 443–445, 448, 450, 461–463, 465–467, 469, 471, 473–475, 549, 555, 565–568, 574, 575, 577, 578, 607, 619, 674, 680, 693, 694, 698, 700–702, 704–706
Singapore 5, 694, 701–705
Slut-shaming 135, 138, 141–143, 145, 146
Sociality 7, 558
Social media 4, 8, 20–22, 24, 25, 27, 29, 30, 34, 35, 50, 57, 62, 63, 70, 75, 78, 95, 97, 99, 104, 108–110, 116, 118, 119, 124, 126, 136, 137, 141, 188, 189, 193, 208, 216, 218, 229–231, 233, 234, 241, 250, 269–271, 276, 298, 303, 308–310, 320, 321, 327, 329, 336, 337, 340–342, 345, 346, 348–350, 357, 358, 360, 362, 382, 384, 388, 395, 399, 408, 483, 532, 533, 541, 552, 555, 585, 587–589, 636–638, 640, 642, 646, 654, 666, 693, 696
Social policy 703
Spacelessness 165
Spying 559
Stalking 2, 4, 8, 11, 20, 49, 79, 94, 95, 99, 119, 120, 126, 157, 159, 163, 164, 168, 203–208, 210, 211, 213–215, 229–234, 239–241, 243–245, 247–249, 264, 276, 277, 397, 466, 496, 588, 591, 592, 595, 612
Stanko, Elizabeth (Betsy) 158, 169, 404, 469
Stigma 128, 346, 512, 520, 559, 676
Support 21, 25, 28, 32, 34, 59, 62, 65, 67, 75, 77, 79, 81, 83, 85, 103, 107–109, 121, 122, 124, 126, 128, 166, 170, 212, 213, 221, 222, 250, 265, 267, 269–275, 301, 314, 323, 357, 359, 369, 377–383, 396, 400, 402, 405–407, 429, 430, 433, 444, 449, 450, 463, 473, 475, 490, 513, 514, 521, 522, 533, 552, 557, 566, 570, 572, 573, 575, 576, 578, 597, 598, 610–613, 616, 620, 621, 623, 636, 639, 640, 642–645, 683, 685, 694, 695, 697, 699, 700, 702, 704–706
Surveillance 2, 4, 12, 97, 119, 126, 157, 161, 164, 166, 240, 245, 441–444, 446–455, 466, 470, 472, 622, 674, 702
T
Technofeminism 82
Technology facilitated abuse/technology-facilitated abuse 4, 5, 8–12, 55, 56, 68, 69, 120, 158, 159, 169, 170, 172, 220, 481, 485, 493, 584, 598
Technology-facilitated sexual violence/technology facilitated sexual violence (TFSV) 5, 167, 168, 262–269, 272–275, 337, 398, 656
Telstra 586
Trans/transgender 3, 5, 139, 142, 183, 335, 341, 343, 344, 346, 349, 350, 462, 511, 512, 518, 608, 618, 619
Transformative justice movement 608
Transparency 110, 418–423, 428–432, 434–436
Trauma 63, 69, 266, 267, 282, 304, 307, 309, 310, 315, 327, 329, 348, 419, 434, 466, 508, 510, 618, 622, 680, 686
Turkle, Sherry 6, 96
Twitter 8, 20, 23, 25, 26, 75, 94, 101, 104, 136, 139, 144, 266, 301, 341, 342, 361, 380, 381, 385, 403, 519, 531, 610, 614–619, 622, 666, 681, 682, 699
V
Victim blaming/victim-blaming 10, 62, 63, 68, 101, 104, 265, 270, 274, 290, 292, 509, 516, 549, 554, 566–568, 572, 573, 575, 577, 638, 676, 677, 682, 683, 685, 686
Victimisation 4, 95, 161, 162, 166, 168, 169, 205–208, 210, 211, 213, 217, 220, 221, 230, 231, 233, 250, 262, 270, 274, 304, 359, 366, 462, 467–469, 472–474, 510, 511, 552, 553, 565, 576, 578, 585, 587, 588, 646, 665, 677, 685, 701
Video evidence 443, 449–451, 454
Vigilantism 215, 400, 695, 703, 704
Viral 25, 28, 32, 137, 282, 287, 556, 654, 663, 698
Voyeurism 559, 702
U
United Kingdom (UK) 5, 22, 26, 27, 29, 32, 46, 56, 94, 205, 231, 263, 265, 298, 300, 315, 482, 483, 485, 508, 512, 514, 517–519, 531, 532, 534, 587, 632, 634–636, 643–645, 655, 677
United States (US) 5, 30, 33, 42, 46, 49, 94, 96, 120, 136, 158, 160, 188, 231, 232, 265, 299, 300, 315, 326, 360, 378, 485, 518, 519, 521, 586, 653–655
W
Wajcman, Judy 6–8, 82, 137, 467, 657
Wales 9, 29, 401, 486, 514, 515, 519, 530, 531, 534, 537, 539, 540, 586, 632
Women of colour 5, 63, 77, 94, 97, 194, 380, 381, 388
World Health Organisation (WHO) 2, 157, 168, 248, 261, 262, 631, 645
Y
Yahoo 586
Young women 5, 24, 107, 162, 263, 267, 270, 281–283, 285, 287–293, 654, 661, 662, 675, 677, 682, 685, 686
Youth/young people 10, 24, 44, 80, 96, 139, 163, 250, 262, 263, 265, 270, 273, 274, 277, 281–286, 289–292, 327, 365, 383, 520, 533, 565–576, 578, 596, 632–647, 675