Understanding Digital Literacies: A Practical Introduction [2 ed.] 9781138041721, 9781138041738, 9781003177647

Understanding Digital Literacies Second Edition provides an accessible and timely introduction to new media literacies.


English, 321 pages, 2021



Table of contents :
Cover
Endorsement Page
Half Title
Title Page
Copyright Page
Table of Contents
List of illustrations
Preface to the second edition
Chapter 1 Mediated me
Mediation
Affordances and constraints
Creativity
Media utopias and dystopias
What are ‘digital literacies’?
How this book is organized
Part I Digital tools
Chapter 2 Information everywhere
Information and relationships
Ontologies: Organizing the world
Filtering
The attention economy
Conclusion
Chapter 3 Reading and writing in digital contexts
Hypertext and linking
Dynamic hypertext
Clickbait or ‘You won’t believe what you’re about to read!!’
Is hypertext making you stupid?
Interactivity
Automatic writing
Mashups and remixing
Conclusion
Chapter 4 Multimodality
From page to screen
Interfaces
Visual design in multimodal texts
Combining modes
Designing video
Conclusion
Chapter 5 Online language and social interaction
Media effects
User effects
What are we doing when we interact online?
‘Phatic’ communication
The richness of lean media: Mode-mixing and mode-switching
Conclusion
Chapter 6 Mobility and materiality
Mobility and hybrid spaces
Locative media
Placemaking with digital images
Embodiment
Materiality and the ‘Internet of Things’
Conclusion
Chapter 7 Critical digital literacies
Ideologies and imaginaries
Are technologies ideological?
Mediation redux
Interfaces and dark patterns
Fake news
Who owns the internet?
Conclusion: ‘Hacking’ digital media
Part II Digital practices
Chapter 8 Online cultures and intercultural communication
Online cultures as discourse systems
Light communities and ambient affiliation
Cultures-of-use and media ideologies
Intercultural communication online
Tribalism and polarization
Conclusion
Chapter 9 Games, learning, and literacy
Games and literacy
Reading and writing in games
Games and our material and social worlds
Games and identity
Games and learning
Conclusion
Chapter 10 Social (and ‘anti-social’) media
We are not files
Social networks and social ties
The presentation of self on social networking platforms
‘Anti-social media’
Conclusion
Chapter 11 Collaboration and peer production
Collaboration in writing
Wikinomics and peer production
The wisdom of crowds
Memes and virality
Conclusion
Note
Chapter 12 Surveillance and privacy
What is privacy?
Responses to privacy challenges
Lateral surveillance
Pretexts
Entextualization
Conclusion
Notes
Afterword: Mediated Us
References
Index

“Not only the best overview of the area; Understanding Digital Literacies, Second Edition is also a virtual handbook on how to do good and not harm in the digital age.”
James Paul Gee, Arizona State University, USA

“Thoroughly updated with newly written chapters, this new edition weaves together important and up-to-date materials highlighting the dynamism of texts and practices in mobile and social media that did not even exist when the first edition was published. A much-anticipated and timely second edition of a popular text!”
Carmen Lee, Chinese University of Hong Kong

“I know the first edition of this book inside out. This second edition brings digital literacies bang up to date, with new discussions on topics ranging from algorithms, chatbots and fake news, to mobility, surveillance and Zoom. A comprehensive and critical guide that is both authoritative and entertaining, combining innovative theory with engaging activities and illuminating case studies.”
Caroline Tagg, The Open University, UK

Understanding Digital Literacies

Understanding Digital Literacies Second Edition provides an accessible and timely introduction to new media literacies. This book equips students with the theoretical and analytical tools with which to explore the linguistic dimensions and social impact of a range of digital literacy practices. Each chapter in the volume covers a different topic, presenting an overview of the major concepts, issues, problems, and debates surrounding it, while also encouraging students to reflect on and critically evaluate their own language and communication practices. Features of the second edition include:

• Expanded coverage of a diverse range of digital media practices that now includes Instagram, Snapchat, TikTok, Tinder, and WhatsApp;
• Two entirely new chapters on mobility and materiality, and surveillance and privacy;
• Updated activities in each chapter which engage students in reflecting on and analysing their own media use;
• E-resources featuring a glossary of key terms and supplementary material for each chapter, including additional activities and links to useful websites, articles, and videos.

This book is an essential textbook for undergraduate and postgraduate students studying courses in new media and digital literacies. Rodney H. Jones is Professor of Sociolinguistics and Head of the Department of English Language and Applied Linguistics at the University of Reading, UK. Christoph A. Hafner is Associate Professor in the Department of English, City University of Hong Kong.

Understanding Digital Literacies

A Practical Introduction

Second Edition

Rodney H. Jones and Christoph A. Hafner

Second edition published 2021
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
and by Routledge
605 Third Avenue, New York, NY 10158

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2021 Rodney H. Jones and Christoph A. Hafner

The right of Rodney H. Jones and Christoph A. Hafner to be identified as authors of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First edition published by Routledge 2012

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record has been requested for this book

ISBN: 978-1-138-04172-1 (hbk)
ISBN: 978-1-138-04173-8 (pbk)
ISBN: 978-1-003-17764-7 (ebk)

Typeset in Sabon by Deanta Global Publishing Services, Chennai, India

Access the Support Material: www.routledge.com/9781138041738

Contents

List of illustrations
Preface to the second edition

1 Mediated me
   Mediation
   Case study: The wristwatch
   Affordances and constraints
   Activity: Affordances, constraints, and social practices
   Creativity
   Media utopias and dystopias
   What are ‘digital literacies’?
   How this book is organized

PART I Digital tools

2 Information everywhere
   Activity: Finding information
   Information and relationships
   Ontologies: Organizing the world
   Filtering
   Case study: Search engines
   Activity: Interrogating search engines
   The attention economy
   Activity: Attention in social media
   Conclusion

3 Reading and writing in digital contexts
   Hypertext and linking
   Activity: Reading critically in hypertext
   Dynamic hypertext
   Case study: Facebook as dynamic hypertext
   Clickbait or ‘You won’t believe what you’re about to read!!’
   Is hypertext making you stupid?
   Interactivity
   Automatic writing
   Mashups and remixing
   Activity: Remix culture
   Conclusion

4 Multimodality
   From page to screen
   Activity: Sign language?
   Interfaces
   Case study: Multimodal design in Instagram
   Visual design in multimodal texts
   Combining modes
   Activity: Text/image interaction in image macro memes
   Designing video
   Conclusion

5 Online language and social interaction
   Activity: Analyzing language on different platforms
   Media effects
   Activity: Textual chemistry
   User effects
   What are we doing when we interact online?
   ‘Phatic’ communication
   Case study: Chatbots
   The richness of lean media: Mode-mixing and mode-switching
   Conclusion

6 Mobility and materiality
   Mobility and hybrid spaces
   Locative media
   Activity: Locative technologies
   Placemaking with digital images
   Embodiment
   Case study: Why is Zoom so exhausting?
   Materiality and the ‘Internet of Things’
   Activity: Your Internet of Things
   Conclusion

7 Critical digital literacies
   Ideologies and imaginaries
   Are technologies ideological?
   Mediation redux
   Case study: Algorithms
   Activity: Folk algorithmics
   Interfaces and dark patterns
   Activity: Detecting dark patterns
   Fake news
   Who owns the internet?
   Conclusion: ‘Hacking’ digital media

PART II Digital practices

8 Online cultures and intercultural communication
   Online cultures as discourse systems
   Activity: The cultures of social networking platforms
   Light communities and ambient affiliation
   Case study: Massively multi-player online games as online discourse systems
   Cultures-of-use and media ideologies
   Activity: Your media ideologies
   Intercultural communication online
   Tribalism and polarization
   Conclusion

9 Games, learning, and literacy
   Games and literacy
   Activity: The games people play
   Reading and writing in games
   Case study: Virtual game worlds
   Games and our material and social worlds
   Games and identity
   Games and learning
   Activity: Boon or bane?
   Conclusion

10 Social (and ‘anti-social’) media
   We are not files
   Social networks and social ties
   Activity: Mapping your social network
   The presentation of self on social networking platforms
   Activity: Performance equipment
   Case study: Influencers and microcelebrities
   ‘Anti-social media’
   Conclusion

11 Collaboration and peer production
   Collaboration in writing
   Activity: Your collaborative writing practices
   Wikinomics and peer production
   Case study: The wiki
   The wisdom of crowds
   Memes and virality
   Activity: The creation and circulation of memes
   Conclusion
   Note

12 Surveillance and privacy
   What is privacy?
   Responses to privacy challenges
   Activity: The ‘creepy scale’
   Lateral surveillance
   Pretexts
   Entextualization
   Case study: Genres of disclosure
   Activity: What do they know about you?
   Conclusion
   Notes

Afterword: Mediated Us
References
Index

Illustrations

Figures

2.1 An ‘ontology’ based on the Dewey Decimal System
3.1 Organizational structures in hypertext
4.1 Mockup of Twitch livestream
4.2 Drug Enforcement Administration poster
4.3 Elements of Material Design interface
4.4a,b Twitter interface on smartphone and desktop
4.5 ‘Ratatouille’ image macro meme
4.6 PETA advertisement
4.7 Storyboard template
5.1 Mica responding to small talk
5.2 Mica sending a cat pic
6.1 Snapchat image 1
6.2 Snapchat image 2
6.3 Snapchat image 3
7.1 Interface interference (from Segura, 2019)
7.2 Tweet from @PoliteMelanie (from Linvill and Warren, 2020)
9.1 Spore screenshot: Cell stage (Spore and screenshots of it are licensed property of Electronic Arts, Inc)
9.2 Spore screenshot: Space stage (Spore and screenshots of it are licensed property of Electronic Arts, Inc)
10.1 Bonding and bridging
10.2 Disagreement and engagement on Facebook: Pew Research Centre (2017)
11.1 Sequential writing (adapted from Sharples et al., 1993)
11.2 Parallel writing (adapted from Sharples et al., 1993)
11.3 Reciprocal writing (adapted from Sharples et al., 1993)
11.4 Draft with mark-up in Microsoft Word
11.5 Wikipedia discussion page
11.6 ‘Grumpy cat’ ‘Change my mind’ mashup meme
12.1 Phishing email

Table

7.1 Taxonomy of dark patterns (adapted from Gray et al. 2018: 5)

Preface to the second edition

Frederick Erickson (1986: 316) once compared conversation to ‘climbing a tree that climbs back.’ Writing this second edition of Understanding Digital Literacies has felt a bit like that. When we started to write this new edition four years ago, so much had changed in the world of digital media that we suspected we would end up having to write a book very different from the first edition. For one thing, most of the applications our students were using and the digital practices they were engaging in didn’t exist when the first edition was published in 2012, when there was no Snapchat, no TikTok, no Tinder, and Instagram had just been acquired by Facebook. Not surprisingly, the world didn’t slow down to allow us to take stock of these changes. It kept changing. In the time we have been working on this new edition, we have seen the UK leave the European Union, Donald Trump ascend to and descend from the US presidency, and the world fall into the grip of a global pandemic which, at the time of writing, had killed almost 3 million people. All of these historic events have been intimately entangled with digital media and people’s digital literacy practices: either digital media have been implicated in helping to bring them about, as with the election of Donald Trump, or these events have dramatically changed people’s everyday practices with digital media, as is the case with the COVID-19 pandemic. Meanwhile, our thinking about digital literacies also developed as we observed the growing scepticism with which people were regarding technologies and tech companies, and as we conducted new research into things like digital surveillance, self-quantification, new digital academic genres, and the effects of algorithms on practices of reading and writing.

Although this book retains the overall structure of the first edition, and rests on the same theoretical foundations of mediated discourse analysis and new literacy studies, over half of the material in it is brand new. Among other things, we take into account recent developments in artificial intelligence, dynamic hypertext, automatic writing, and algorithmic inferencing. We explore the new range of multimodal literacies made possible by platforms like Snapchat, TikTok, and Zoom. We also deal with techno-social phenomena like fake news, micro-celebrity, memes, and online tribalism. Finally, we include two entirely new chapters in this edition: one about mobility and materiality, which focuses on the spatial and embodied dimensions of digital literacies, and one on surveillance and privacy, which explores both the practices of online social surveillance between peers and the broader practices of ‘surveillance capitalism’ (Zuboff, 2019) engaged in by commercial entities.

In order to make the most of this book, students and instructors can download additional e-resources in .pdf format from www.routledge.com/9781138041738. These include additional material and activities for each chapter, suggestions for further reading, links to other web-based content, and a new and expanded glossary which explains all of the terms marked in bold in the print edition.

Perhaps more than when we wrote the first edition, we are keenly aware of the civic dimension of digital literacies. What we mean by that is not just that we have a heightened awareness of the monumental consequences digital technologies, and the way we use them, can have for our politics and our economics, but also that we have a heightened certainty that being ‘digitally literate’ requires from people a willingness to interrogate the political and economic systems that digital media are a part of, and to work together with others to try to influence these systems. Although much in this book has changed, the analytical framework we introduced in the first edition remains, as does our overall message that understanding how digital media ‘work’ requires understanding how human communication works, how humans make meaning, construct identities, form relationships, and work with other humans to get things done, and how technologies affect and are affected by these human activities.

Not long after this edition of Understanding Digital Literacies finds its way into print, new technologies will have come along and new social practices will be forming around them. The way to keep up with these developments, and to help our students keep up with them, is not to focus on how technologies ‘work’, but rather on how humans ‘work’ technologies, and how we engage together with our inventions in an endless conversation that is never resolved, rather like ‘climbing a tree that climbs back.’ What we should aim to do for our students is not to provide answers, but to empower them to have these conversations.

Rodney H. Jones
Reading, UK

Christoph A. Hafner
Hong Kong

Chapter 1

Mediated me

It’s hard to think of anything we do nowadays, from working on projects to socializing with friends, that is not somehow mediated through digital technologies. It’s not just that we’re doing ‘old things’ in ‘new ways’. Digital technologies are actually introducing new things for us to do, like tweeting, memeing, and gramming. They have also made new social practices available to people who may not have our best interests in mind, practices like trolling, hatelinking, and catfishing. They have given private companies the ability to track our every move and to use that information to manipulate us. They have given governments an unprecedented ability to monitor their citizens and to disrupt political processes in other countries. They have given unscrupulous politicians a heightened ability to deceive people, to distort reality, and even to call into question the whole idea of ‘truth’ itself. And they have given ordinary people new ways of harassing, exposing, or terrorising others.

These new practices, both good and bad, require from people new skills, new ways of thinking, and new methods of managing their relationships with others. Some examples of these include:

• The ability to quickly search through and evaluate great masses of information;
• The ability to create coherent reading pathways through linked texts;
• The ability to separate the ‘true’ from the ‘fake’ in a complex information ecosystem;
• The ability to quickly make connections between widely disparate ideas and domains of experience;
• The ability to shoot and edit digital photos and video;
• The ability to create complex multimodal documents (such as Instagram ‘stories’) that combine words, graphics, video, and audio;
• The ability to create and maintain dynamic online profiles and manage large and complex online social networks;
• The ability to explore and navigate online worlds and digitally ‘augmented’ physical spaces and to interact in virtual and ‘digical’ environments;
• The ability to manage constant surveillance by peers and private companies and to protect one’s personal data and ‘identity’ from being misused by others.

Some people just pick up these abilities along the way by surfing the web, playing online games, posting to social networking platforms, and using mobile apps like Snapchat and WhatsApp. But people are not always very conscious of how these practices change not just the way they communicate but also ‘who they can be’ and the kinds of relationships they can have with others. They are also sometimes not conscious of the kinds of things others might be using digital technologies to do to them, and how their use of digital media can make them vulnerable to exposure or abuse.

The purpose of this book is not just to help you become better at the things you use digital media to do, or to make you better at protecting yourself from those who might be using digital media to do things to you. It is also to help you understand how digital media are affecting the way you make meanings, the way you relate to others, the kind of person you can be, and even the way you think. We believe that the best way to become a more competent user of technologies is to become more critical and reflective about how you use them, the kinds of things that they allow you to do, and the kinds of things they might prevent you from doing.

This book is not just about computers, mobile phones, the internet, and other digital media. It’s about the process of mediation, the age-old human practice of using tools to take action in the world. In this introductory chapter we will explain the concept of mediation and how it relates to the definition of ‘digital literacies’ which we will be developing throughout this book.

Mediation

A medium is something that stands in-between two things or people and facilitates interaction between them. Usually when we think of ‘mediated interaction’ we think of things like ‘computer-mediated communication’ or messages delivered via ‘mass media’ like television, radio, or newspapers. But the fact is, all interaction—and indeed all human action—is in some way mediated. This was the insight of the Russian psychologist Lev Vygotsky, who spent his life observing how children learn. All learning, he realized, involves learning how to use some kind of tool that facilitates interaction between the child and the thing or person he or she is interacting with. To learn to eat, you have to learn to use a spoon or a fork or chopsticks, which come between you and the food and facilitate the action of eating. To learn to read, you have to learn to use language and objects like books that come between you and the people who are trying to reach you through their writing and facilitate the action of communication.

These cultural tools that mediate our actions are of many kinds. Some are physical objects like spoons and books. Some are more abstract ‘codes’ or ‘systems of meaning’ such as languages, counting systems, and computer code. The ability to use such tools, according to Vygotsky, is the hallmark of human consciousness. All higher mental processes, he said, depend upon mediation. In order to do anything or mean anything or have any kind of relationship with anyone else, you need to use tools. In a sense, the definition of a person is a human being plus the tools that are available for that human being to interact with the world.

These tools that we use to mediate between ourselves and the world can be thought of as extensions of ourselves. In fact, the famous Canadian media scholar Marshall McLuhan (1964) called media ‘the extensions of man.’ He didn’t just mean things that we traditionally think of as media like televisions and newspapers, but also things like light bulbs, cars, and human language, in short, all mediational means which facilitate action. The spoon we use to eat with is an extension of our hand. Microscopes and telescopes are extensions of our eyes. Microphones are extensions of our voices. Cars and trains and busses might be considered extensions of our feet, and computers and smartphones might be considered extensions of our brains (though, as we will show in the rest of this book, the ways computers and the internet extend our capabilities go far beyond things like memory and cognition).

The point that both Vygotsky and McLuhan were trying to make was not just that cultural tools allow us to do new things, but that they come to define us in some very basic ways. They usually don’t just affect our ability to do a particular task. They also affect the way we relate to others, the way we communicate, and the way we think. As McLuhan (1964: 2) puts it: ‘Any extension, whether of skin, hand, or foot, affects the whole psychic and social complex.’ Cars, trains, and busses, for example, don’t just allow us to move around faster; they fundamentally change the way we experience and think about space and time, the kinds of relationships we can have with people who live far away from us, and the kinds of societies we can build. A microphone doesn’t just make my voice louder. It gives me the ability to communicate to a large number of people at one time, thus changing the kinds of relationships I can have with those people and the kinds of messages I can communicate to them.

On the one hand, these tools enable us to do new things, think in new ways, express new kinds of meanings, establish new kinds of relationships, and be new kinds of people. On the other hand, they also prevent us from doing other things, thinking in other ways, having other kinds of relationships, and being other kinds of people. In other words, all tools bring with them different kinds of affordances and constraints. The way McLuhan puts it, while new technologies extend certain parts of us, they amputate other parts. While a microphone allows me to talk to a large number of people at one time, it makes it more difficult for me to talk to just one of those people privately, and while a train makes it easier for me to quickly go from one place to another, it makes it more difficult for me to stop along the way and chat with the people I pass.

CASE STUDY: THE WRISTWATCH

Before mobile telephones with built-in digital timekeepers became so pervasive, few technologies seemed more like ‘extensions’ of our bodies than wristwatches. Sometimes people even think of watches as extensions of their minds. Consider the following conversation:

A: Excuse me, do you know what time it is?
B: Sure (looks at his watch) It’s 4:15.

In his book Natural Born Cyborgs (2003), Andy Clark points to conversations like this as evidence that we consider tools like watches not as separate objects, but as part of ourselves. When B says ‘sure’ in response to the question about whether or not he knows the time, he does so before he looks at his watch. In other words, just having the watch on his wrist makes him feel like he ‘knows’ the time, and looking at the watch to retrieve the time is not very different from retrieving a fact from his mind.

Before the sixteenth century, timepieces were much too large to carry around because they depended on pendulums and other heavy mechanical workings. Even domestic clocks were rare at that time. Most people depended on the church tower and other public clocks in order to know the time. This all changed with the invention of the mainspring, a coiled piece of metal which, after being wound tightly, unwinds, moving the hands of the timepiece. This small invention made it possible for ‘time’ to be ‘portable’. In the seventeenth century, pocket watches became popular among the rich. Most people, though, continued to rely on public clocks, mostly because there was no need for them to be constantly aware of the time.

It wasn’t until the beginning of the twentieth century that watches became popular accessories for normal people to wear on their wrists. In the beginning, they were considered fashion accessories worn only by women. There are a number of stories about how wristwatches came to be more commonly used. One involves Brazilian aviator Alberto Santos-Dumont, who in 1904 complained that it was difficult to fly his plane while looking at his pocket watch. So his friend, Louis Cartier, developed a watch that he could wear on his wrist, which eventually became the first commercially produced men’s wristwatch. According to another account, during WWI, soldiers strapped their watches to their wrists in order to enable them to coordinate their actions in battle while leaving their hands free to carry their weapons. These early wristwatches were known as ‘trench watches,’ after the trenches of WWI.

These two examples demonstrate the new affordances introduced by the simple technology of strapping a watch to one’s wrist. It allowed soldiers and aviators to do things they were unable to do before, that is, to keep track of time while fighting or flying their planes. Some might even argue that these new affordances contributed to changes in the nature of war as well as the development of modern aviation. This ability to ‘carry the time around’ also introduced new possibilities in the business and commercial worlds. The development of railroads as well as the ‘scientific management’ of assembly-line factories both depended on people’s ability to keep close track of the time.

Of course, these developments also changed people’s relationships with one another. Human interaction became more a matter of scheduled meetings than chance encounters. People were expected to be in a certain place at a certain time. The notions of being ‘on time’ and ‘running late’ became much more important. Along with these changes in relationships came changes in the way people thought about time. Time became something abstract, less a function of nature (the rising and setting of the sun) and more a function of what people’s watches said. When people wanted to know when to eat, they didn’t consult their stomachs, they consulted their wrists. Time became something that could be divided up and parcelled out. Part of managing the self was being able to manage time. Time became like money. Finally, time became something that one was meant to be constantly aware of. One of the worst things that could happen to someone was to ‘lose track of time’.

With the development of electronic watches, portable timepieces became accurate to the tenth or even the hundredth of a second. This new accuracy further changed how people thought about how time could be divided up. Before the 1960s, the second was the smallest measurement of time most normal people could even conceive of.

Ever since the development of pocket watches, timepieces have always had a role in communicating social identity and status. After wristwatches became popular, however, this role became even more pronounced. Many people regard watches as symbols of wealth, status, taste, or personality. It makes a big difference whether someone is wearing a Rolex or a Casio. In fact, with the ubiquity of time on computer screens, mobile phones, and other devices, the timekeeping function of wristwatches is becoming less important than their function as markers of social identity and status.

Nowadays, many of the timepieces that people wear on their wrists don’t just tell the time, but do other things as well, such as track their steps and their heartbeat, connect them to others via text or voice messages, and remind them about important appointments. The new affordances of ‘smart’ watches have further altered the way people conceive of space and time in relation to their bodies and their movement through the world. They have also had a profound effect on their social identities and their privacy, allowing them, for example, to share statistics about their physical activities with others, and allowing the companies that make these watches or design apps for them to gather data about their wearers’ whereabouts and activities every moment of the day.

The obvious question is whether it was the development of the wristwatch that brought on all of these social and psychological changes, or the social and psychological changes that brought on the development of the wristwatch. Our answer is: both. Human beings are continually creating and adapting cultural tools to meet the needs of new material or social circumstances or new psychological needs. These tools, in turn, end up changing the material and social circumstances in which they are used as well as the psychological needs of those who use them.

Affordances and constraints

As you can see from the case study above, the cultural tools that we use in our daily lives often involve complicated combinations of affordances and constraints, and understanding how people learn to manage these affordances and constraints is one of the main themes of this book. We can divide the different affordances and constraints media introduce into five different kinds: affordances and constraints on what we can do, affordances and constraints on what we can mean, affordances and constraints on how we can relate to others, affordances and constraints on how or what we can think, and, finally, affordances and constraints on who we can be.

Doing

Perhaps the most obvious thing we can say about cultural tools is that they allow us to do things in the physical world that we would not be able to do without them. Hammers allow us to drive in nails. Telephones allow us to talk to people who are far away, and location-based apps like Tinder allow us to see who is in physical proximity to us, even if they are not close enough to be physically visible. Just as important, they allow us to not do certain things. Text messages, for example, allow us to get a message across to someone immediately without having to call them, and ‘swiping right’ on Tinder allows us to ‘flirt’ with someone without having to think of something clever to say or risk the potential embarrassment of a face-to-face encounter.

Some of the things that people do with technology are of earth-shattering importance, things like landing on the moon or mapping the human genome. However, most of the things these tools allow us to do are pretty mundane, like sharing photos with friends, using a smartphone app to find a place to eat, or acquiring the ‘magical power’ that we need to reach the next level in an online game. It is these small, everyday actions that we will be most concerned with in this book. These are the actions that are at the heart of everyday literacy practices and ultimately, it is these everyday practices that form the foundation for greater achievements like moon landings and genome mappings.

Sometimes when individuals are given new abilities to perform small, everyday actions, this can have an unexpectedly large effect on whole societies and cultures. As we saw above, for example, the ability to keep track of time using a wristwatch was an important factor in the development of other kinds of technologies like airplanes, train schedules, and assembly lines.
Similarly, your ability to share random thoughts with your friends on Facebook can have an enormous effect on life beyond your social network in realms such as politics and economics.

Meaning

Not only do media allow us to do different kinds of things, they also allow us to make different kinds of meanings that we would not be able to make without them. The classic example is the way television has changed how people are able to communicate about what is happening in the world. Reporting on a news event in print allows the writer to tell us what happened, but reporting on it through a television news broadcast allows the reporter to show us and to make us feel like we were there. Live streaming events via social media takes this affordance to another level, allowing viewers to experience things while they are happening from the perspective of the people they are happening to. Apps like Twitter allow users—politicians, celebrities, and ordinary people—not just to describe newsworthy things as they are happening, but also to ‘make news’ by tweeting outrageous or controversial things, and to connect what they tweet to meanings shared by other Twitter users through #hashtags.

The lines of print in a book allow us to make meaning in a linear way based on time—first we say one thing, then we add something else to that. Multimodal content and hypertext, on the other hand, allow us to make meaning in a more spatial way, inviting people to explore different parts of the screen and different linked content in any order that they wish. Apps like Snapchat allow us to incorporate images and videos of our physical surroundings (or our physical body) into our communication, to enhance those images with text, drawings, or filters and ‘lenses’ that add animated features, and to arrange multiple images into ‘stories’ (see Chapters 3 and 4).

Media also affect meaning by changing the vocabulary we use to talk about everyday actions. Not so long ago, for example, ‘friend’ was a noun meaning a person that you are close with. Now, however, ‘friend’ is also a verb meaning to add someone on a social networking site. In fact, about 25,000 new words are added to the Oxford English Dictionary every year, most of them the result of new meanings related to new technologies. In 2015, the Oxford ‘word of the year’ wasn’t even a word, but an emoji (the ‘face with tears of joy’ emoji, which, as anyone who has used this emoji would tell you, does not necessarily mean ‘tears of joy’, but can have all sorts of meanings depending on the context of use).

Relating

Different media also allow us to create different kinds of relationships with the people with whom we are interacting.
One way is by making possible different kinds of arrangements for participation in the interaction. Does the interaction involve just two people or many people? What roles and rights do different kinds of people have in the interaction? What kinds of channels of communication are made possible: one-to-one, one-to-many, or many-to-many? A book, for example, usually allows a single author to communicate with many readers, but he or she can usually only communicate to them in relative isolation. In other words, most people read books alone. They may talk with other people who have read or are reading the same book, but usually not as they are reading. Also, they normally cannot talk back to the writer as they are reading, though, if the writer is still alive, they might write a letter telling him or her what they thought of the book. The chances of readers actually having a conversation with the author of a book are slim.


Blogs, online forums, and social media sites, on the other hand, create very different patterns of participation. First, they allow readers to talk back to writers, to ask for clarification or dispute what the writer has said or contribute their own ideas. Writers can update what they have said in response to readers’ comments. Readers can also comment on the comments of other readers; that is, readers can talk to one another as they are reading. Even books are different now, with e-books like those available on Amazon’s Kindle providing highlighting and comment tools that allow people to engage in social reading, interacting with a community of like-minded readers gathered around a particular book.

The internet, with its chat rooms, forums, social networking sites, and other interactive features, has introduced all sorts of new ways for people to participate in social life, and people can experience all sorts of new kinds of relationships in online communities. They can lurk in communities or become active members. They can ‘friend’ people or ghost them, and create many kinds of social gatherings that did not exist before the development of digital media. In his famous essay, ‘The Relationship Revolution,’ Michael Schrage (2001: n.p.) claims that to say the internet ‘is about “information” is a bit like saying that “cooking” is about oven temperatures—it’s technically accurate but fundamentally untrue.’ The real revolution that the internet has brought, he says, is not an ‘information revolution’ but rather a ‘relationship revolution.’

Apart from making possible different kinds of social arrangements for participants, media also have an effect on two very important aspects of relationships: power and distance. Technologies can make some people more powerful than others or they can erase power differences between people. For example, if I have a microphone and you don’t, then I have greater power to make my voice heard than you do.
Similarly, if I have the ability to publish my views and you don’t, then I have greater power to get my opinions noticed than you do. One way the internet has changed the power relations among people is to give everyone the power to publish their ideas and disseminate them to millions of people. This is not to say that the internet has made everyone’s ideas equal. It’s just that more people have the opportunity to get their ideas noticed. At the same time, big media companies like Google still control the means by which different people’s content is made prominent and accessible to others through tools such as search engines and recommendation systems.

Finally, when our relationships are mediated through technologies, these technologies can sometimes make us feel closer to each other and sometimes make us feel more distant. When text-based computer chat and email were first developed, lots of people thought that it would be harder for people to develop close relationships since they couldn’t see each other’s faces. As it turned out, chat rooms and instant messaging programs seemed to facilitate interpersonal communication, self-disclosure, and intimacy rather than hinder it. These programs are now used much more for maintaining interpersonal relationships than they are for instrumental purposes (see Chapter 5). Similarly, many commentators in the early days of the World Wide Web held out the hope that digital technologies would bring people together and make everybody more informed. What appears to have happened, however, is that digital technologies have helped to facilitate cultural tribalism and political polarization where different people are ‘informed’ with totally different sets of facts (see Chapters 2 and 8).

Thinking

Perhaps the most compelling and, for many people, the most worrying thing about technologies is that they have the capacity to change the way we experience and think about reality. If our experience of the world is always mediated through tools, what we experience will also be affected by the affordances and constraints of these tools. Certain things about the world will be amplified or magnified, and other things will be diminished or hidden from us altogether. One of the first to express this important insight was the communications scholar Harold Innis (1951/1964). Innis said that each medium has a built-in bias, which transforms information and organizes knowledge in a particular way. The two most important ways media affect our experience of reality are through the way they organize time and space. Some media make information more portable, making it easier to transport or broadcast over long distances. Some media also make information more durable; that is, they make it easier to preserve information over long stretches of time. The philosopher and literary critic Walter Ong (1982/1996) argues that the medium of written language, by making it easier for us to preserve our ideas and transport them over long distances to a large number of people, fundamentally changed human consciousness.
In oral cultures, he says, because so much had to be committed to memory, human thought tended to focus more on concrete and immediate concerns and to package ideas in rather fixed and formulaic ways. The invention of writing, partly because it freed up people’s memories, allowed them to develop more abstract and analytical ways of thinking and made possible the development of things like history, philosophy, and science.

Some people think that digital technologies are having similarly dramatic effects on the way we think. The optimists among them see computers and the internet taking over routine mental tasks like calculations and acting as repositories for easily retrievable knowledge, freeing up the brain for more sophisticated tasks like forming creative new connections between different kinds of knowledge. Pessimists, on the other hand, see digital technology taking away our ability to concentrate and to think deeply, weakening our ability to remember things for ourselves and to evaluate knowledge critically, and making us more susceptible to addictive behaviours.

Being

Finally, different technologies have affordances and constraints in terms of the kinds of people that we can be—that is, the kinds of social identities we can adopt when we are using them. Certain kinds of social identities, of course, require that we have available to us certain kinds of technologies and that we know how to use them. If we want to convince others that we are carpenters, then we’d better have access to tools like hammers, saws, and screwdrivers and be able to use them skillfully. In fact, some people would argue that nearly all social identities are a matter of having certain tools available to us and having mastered how to use these tools. We could also put this the other way around: when we use certain kinds of tools, we are implicitly claiming certain kinds of identities. So when we walk into a lecture theatre and start speaking through the microphone at the podium, we are claiming the identity of a professor, and imputing to those listening the identities of students.

Some tools, however, are not necessarily part of such specialized identities. Using a mobile phone, for example, is not something that is reserved for certain professions or social groups. Nevertheless, when you use your mobile phone you are still showing that you are a certain kind of person. For one thing, you are a person who can afford a mobile phone (which not everybody can). How you use your mobile phone also communicates something about who you are. A boss, for example, might be able to answer his or her mobile phone during an important meeting, whereas a lower-ranking employee might not be able to get away with this. You might be enacting a certain kind of social identity just by the kind of mobile phone you use. Are you carrying an iPhone or an Android phone? Is it the latest model or one from two years ago?
Finally, the range of apps you have installed on your mobile phone alters its affordances as a tool for enacting your identity. You might, for example, use your phone to take selfies to upload to Instagram, or you might use it to advertise your sexuality using an app like Grindr.

Different kinds of technologies can also help you present yourself as a certain kind of person to others by allowing you to reveal certain parts of yourself and conceal other parts. The privacy settings on Facebook, for example, allow you to share information with some people in your social network while keeping it secret from others. The sociologist Erving Goffman (1959) uses the metaphor of a play to talk about how we present ourselves to other people. Like actors, he says, we have different kinds of expressive equipment—costumes, props, and various staging technologies—which allow us to create a kind of illusion for our audience. This equipment allows us to reveal certain things to our audience and keep other things hidden. Sometimes we can even reveal some things to some members of the audience while keeping them hidden from others (see Chapter 10).

ACTIVITY: AFFORDANCES, CONSTRAINTS, AND SOCIAL PRACTICES

A. Affordances and constraints

Consider the different kinds of technologies listed below and discuss how they have affected:

1) The kinds of physical things people can do in particular situations;
2) The kinds of meanings people can express in particular situations;
3) The kinds of relationships that people can have in particular situations;
4) The kinds of thoughts people can think in particular situations;
5) The kinds of social identities people can perform in particular situations.

Traffic Signals
Phone Cameras
Fitness Trackers
‘Like’ Buttons

B. Social practices

Now consider these technologies as parts of wider social practices. What other technologies are they usually used together with and in what kind of social situations? How do these other technologies and social situations affect what we do with these technologies?

Creativity

While technologies allow us to do certain kinds of things, make certain kinds of meanings, and think, relate to others, and enact our own identities in certain ways, they also invariably introduce limitations on these activities. Social networking sites, for example, make it easier for us to stay connected to our social networks, but they make it more difficult to maintain our privacy (especially from internet companies, advertisers, and potential ‘stalkers’). Caller identification on mobile phones makes it easier for us to screen our calls, but it also makes it easier for calls that we make to be screened by others. Often the constraints of new technologies are less visible to us than their affordances. We tend to be so focused on the things we can do with a tool that we don’t pay so much attention to the things we cannot do with it.


It would be a mistake, however, to regard affordances as universally good and constraints as universally bad. Sometimes the affordances of technologies can channel us into certain kinds of behaviour or ways of thinking and can blind us to other (sometimes better) possibilities. Constraints, on the other hand, can sometimes spur us to come up with creative solutions when the tools we have at hand do not allow us to do what we want to do. In this way, the constraints of tools can drive creativity and innovation.

Just because different technologies allow us to do some things and constrain us from doing other things does not mean that technologies determine what we can do, what we can mean, the kinds of relationships we can have, what we can think, and who we can be. Despite the affordances and constraints of the tools we use, human beings always seem to figure out how to do something new with them. We appropriate old tools into new situations, and we creatively alter and adapt them to fit new circumstances and new goals. Commenting on how people she knew were using the popular Scrabble-like game Words with Friends to find romantic partners, one of Rodney’s students said, ‘nowadays it doesn’t matter what the app is really for—people will figure out some way to turn it into a dating app.’

The psychologist James Wertsch (1993) says that all human actions take place at a site of tension between what the cultural tools available to us allow us to do (affordances and constraints) and the ways we are able to adapt them to do new things. In fact, managing this ‘tension’ is an important aspect of the definition of ‘literacy’ we will develop below and in the rest of this book. The way we use different tools is not just determined by their affordances and constraints and our own ability to adapt them to different situations.
It is also partially determined by the histories of the tools, the way they’ve been used before, and the way people in different communities think they should be used. If, after a while, more and more people start using Words with Friends as a ‘dating app’, looking for romantic partners might become something that you are expected to do with this app, and those using it without this intention might be considered deviant. The media anthropologist Ilana Gershon (2010) calls these sets of expectations about how different media should be used, which grow up in different communities, media ideologies.

Finally, and most importantly, we rarely use media in isolation. We almost always mix them with other tools. As we saw with the example of the wristwatch, using one tool (like a watch) often affects how we can use another tool (an airplane). Sometimes the affordances of one medium can help us to overcome the constraints of another. More and more, in fact, different media are merging together. Mobile phones, for example, have become devices which we use not just to have phone conversations but also to surf the internet, check stock prices and the weather, take snapshots and videos, play games, and even measure our pulse and body temperature. In addition, many of the apps we use allow us to do many different things. The Chinese social media app WeChat (Weixin), for example, allows users to exchange text and voice messages with friends, post personal updates and pictures, find new friends nearby and share their location, pay in shops and on public transport, read the news, keep track of steps and other physical activities, book taxis as well as rail, flight, and hotel tickets, and play games. The app also serves as a platform for ‘mini-apps’ produced by third-party developers.

Therefore, instead of thinking about media in a simple, ‘one-to-one’ way—a single technology with a clear set of affordances and constraints being used to take certain discrete actions—it’s better to think of media as parts of systems of actions and activities, meanings and thoughts, social organizations and identities all linked up through what the media scholars Sarah Kember and Joanna Zylinska (2012: xviii) call ‘interlocked and dynamic processes of mediation.’ We ourselves and the tools that we use are parts of large techno-social systems in which the affordances of one technology might create constraints in other technologies, the meanings that we are able to make in one situation might make possible new meanings in totally different situations, and the actions that we take now might have profound and unexpected effects on relationships and identities we might form in the future. As Daniel Miller and Don Slater put it in their book The Internet: An Ethnographic Approach (2000: 14), a central aspect of understanding the dynamics of digital media is

not to look at a monolithic medium called ‘the Internet’, but rather at a range of practices, software and hardware technologies, modes of representation and interaction … not so much people’s use of ‘the Internet’, but rather how they [assemble] various technical possibilities which [add] up to their Internet.

Media utopias and dystopias

People usually have strong reactions when new media are introduced into their lives. This is not surprising since, as we said above, mediation is intimately connected to the ways we go about doing things in our daily lives, the ways we express meaning, relate to others, and even the ways we think. When new ways of doing, meaning, relating, thinking, and being start to develop around new media, it is natural for people to worry that the old ways that they are used to are being lost or marginalized. Sometimes these worries are justified, especially in cases where new media disrupt social norms around things like intimacy and privacy or where the legal, political, and ethical frameworks of a society have failed to adapt.

In the past, whenever new technologies arose, people inevitably expressed concerns. When writing was developed, none other than the Greek philosopher Socrates declared it to be a threat to civilization. Under the influence of this ‘new media’, he insisted, people would lose their ability to remember things and think for themselves. They would start to confuse ‘real truth’ with its mere representation in symbols. Later, when the printing press was developed, there were those who worried that social order would break down as governments and religious institutions lost control of information. And when television became available, many people worried that it would make people stupid or violent or both.

Similarly, with the introduction of digital media in the late twentieth and early twenty-first centuries, many people—including parents, teachers, and newspaper reporters—raised alarms about their possible effects on individuals and societies. Some of these concerns were (and still are) justified, and some turned out to be less so. Interestingly, most of these concerns focused on the five kinds of affordances and constraints that we discussed above. People worried that digital media would take away people’s ability to do some of the things they could do before, or would allow people to do things that they didn’t think they should do. People worried that digital media would ruin people’s ability to make meaning precisely and accurately with language. They worried about the effects of digital media on social relationships, claiming either that people would become isolated from others or that they would meet up with the ‘wrong kind of people’. They worried that digital media would change the way people think, causing them to become easily distracted and unable to construct or follow complex arguments. And finally, people were concerned about the kinds of social identities that people would perform using digital media, worrying about whether or not these identities were really ‘genuine’ or about how much of their own identities and their privacy they actually had control over.
At the same time, others who experienced the early days of the internet and digital technologies were extremely optimistic about the way they would affect people’s lives, predicting that digital media would bring people together, facilitate democracy and deliberative debate, and power up students’ ability to learn. In fact, what has characterized attitudes towards digital technologies over the years has often been a contest between the extremes of technological dystopianism, the view that digital technologies are destroying our ability to communicate and interact with one another in meaningful ways, and technological utopianism, the belief that digital technologies will invariably make us all smarter and the world a better place.

Now that digital media have been part of our daily lives for over three decades, people are beginning to reassess both their hopes and their fears. Some of the negative effects from digital media that people predicted in the past have not come to pass, while some negative effects that were not predicted did. Few, for example, predicted that the kind of mass surveillance that private companies regularly use the internet to carry out would come to be regarded as ‘normal’, nor that foreign governments would carry out information warfare over social media sites. At the same time, while many of the rosy predictions of cyberutopians have not come true, advances in digital technologies have come to benefit people in myriad ways, improving access to information and services for large numbers of people and providing the tools we need to figure out some of the most complex problems that societies face nowadays.

Despite the obvious benefits digital technologies have brought to societies and individuals, and despite the obvious problems that they have introduced, our aim in this book is to avoid trading in technological utopianism and dystopianism, focusing more practically on how mediational means like computers, smartphones, and the internet introduce into our social interaction certain affordances and constraints in particular social contexts, and on the ability we have to creatively respond and adapt to these affordances and constraints in ways that can increase our individual and collective agency (see Chapter 7).

What are ‘digital literacies’?

Before ending this chapter, it’s very important that we explain more about the title of this book and what we mean by it, especially the term ‘digital literacies’. As we have seen above, using media is a rather complicated affair that influences not just how we do things, but also the kinds of social relationships we can have with other people, the kinds of social identities we can assume, and even the kinds of thoughts we can think. When we talk about being able to use media in this broader sense, not just as the ability to operate a machine or decipher a particular language or code, but as the ability to creatively engage in particular social practices, to assume appropriate social identities, and to form or maintain various social relationships, we use the term ‘literacies’.

‘Literacy’ traditionally means the ability to read and write. Someone who can’t read or write is called ‘illiterate’. But reading and writing themselves are complicated processes. Reading and writing in different situations require very different skills. For example, you write an essay for an English class in a different way than you write a lab report for a physics class or a message to your WhatsApp group about a celebrity whom you and your friends admire. The reason for this is that you are not just trying to make different kinds of meanings, but also trying to establish different kinds of relationships and enact different kinds of social identities. There are also a lot of other activities that go along with reading and writing, like looking things up in the dictionary, finding information in the library or on the internet, and figuring out the right way to package information or to ‘unpack’ it in different kinds of texts. Finally, reading and writing often involve encoding and decoding more than just language. They might also involve using and interpreting pictures, the spatial layout of pages, or the organizational structures of texts.

It should be clear from the above that literacy is not just a matter of things that are going on inside people’s heads—cognitive processes of encoding and decoding words and sentences—but rather a matter of all sorts of interpersonal and social processes. Literacy is not just a way of making meaning, but also a way of relating to other people and showing who we are, a way of doing things in the world, and a way of developing new ideas about and solutions to the problems that face us. This view of literacy as a social phenomenon rather than a set of cognitive or technical abilities associated with individuals was pioneered by a group of scholars in the 1980s and 1990s who called their approach ‘the new literacy studies’ (see for example Barton, 1994; Gee, 2007a; Scollon & Scollon, 1981; Street, 1984). There are also those who study what they call ‘new literacies’ (see for example Lankshear & Knobel, 2011), meaning that they focus on more recently developed literacy practices which are often (but not always) associated with ‘new technologies’ like smartphones and the internet. In any case, there is widespread acknowledgement that people’s understanding of what it means to ‘be literate’ has changed with the advent of digital technologies and the social, economic, and political environments these technologies have helped to bring about.

In this book, we use the term ‘digital literacies’ to refer to practices of communicating, relating, thinking, and ‘being’ associated with digital media. Understanding digital literacies means in part understanding how these media themselves may affect the kinds of literacy practices that are possible and the ways people’s notions of literacy itself are changing. One example of this that we mentioned above is the way digital media are breaking down boundaries of time and space. Because of digital technologies we don’t have to go to physical places like classrooms, libraries, offices, and marketplaces to engage in literacy practices (like learning, researching, or shopping) that were previously confined to particular physical places and particular times. Another example is the way digital media are breaking down barriers that traditionally governed the way we thought about language—for example, the distinction between spoken language and written language.

At the same time, we do not wish to fall into the trap of technological determinism and suggest that new practices of communicating with and relating to others are determined solely by the affordances and constraints of the new digital tools available. An understanding of these affordances and constraints is important, but developing digital literacies means more than mastering the technical aspects of digital tools. It also means understanding how people around us use those tools to do things in the social world, and these things invariably involve managing our social relationships and our social identities in all sorts of different situations.

18

Mediated me

‘Digital literacies’ are ways in which people use the mediational means available to them to take actions and make meaning in particular social, cultural, and economic contexts. Consequently, they are inevitably tied up with the values, ideologies, power relationships, and cultural understandings that are part of these contexts. They involve not just being able to ‘operate’ tools like computers and smartphones, but also the ability to adapt the affordances and constraints of these tools to particular circumstances. At times this will involve mixing and matching the tools at hand in creative new ways that help us do what we want to do and be who we want to be.

We also don’t want to fall into the trap of drawing too fine a line between ‘digital literacies’ and what we might call ‘analogue literacies’—those involved in print-based reading and writing. Strictly speaking, the process of mediation and the tension between what tools allow us to do and what we want to do with them is fundamentally the same whether you are using pencil and paper or a word processing program. What is different, we will argue, are the kinds of affordances and constraints digital tools offer and the opportunities they make available for creative action. In many ways, digital media are breaking down boundaries that have traditionally defined our literacy practices; we rarely engage in digital literacy practices that do not also involve more traditional kinds of literacies, and, as we said above, in real life, all literacies are always part of complex ecologies of communicative practices involving the mixing of different tools.
Therefore, while we may seem at times in this book to focus quite heavily on the ‘digital’ part of digital literacies, that is, to dwell on the affordances and constraints of these new technologies, what we are really interested in is not the tools themselves, but the process of mediation, the process through which people appropriate these tools to engage in particular social practices. It is through this focus on mediation that we hope to call attention to the tension between the affordances and constraints of digital media and the creativity of individuals and groups as they adapt these media to specific social goals and contingencies. As we said above, understanding this tension is central to understanding ‘digital literacies’.

Finally, the study of digital literacies (and of all literacies, for that matter) is always a fundamentally political undertaking. The literacy practices that people develop in different contexts are always tied up with the personal politics of everyday life (the power relations between you and the people around you such as your parents, your teacher, or your boss) as well as the wider political and economic forces that shape our societies. Just as the way people use different tools to communicate is influenced by the particular literacy ‘policies’ of their families or their schools, many of the kinds of literacies that we develop around digital technologies are prescribed and promoted by a small number of large corporations that control the platforms that we use to communicate with others and access and share information, platforms such as Google, Facebook, and Amazon (or, in China, Baidu,
Weibo, and WeChat). Each of these platforms plays a role in socializing us into particular literacy practices that are designed as much to advance the business interests of their owners as they are to improve our access to information or our relationships with our friends. Some people, in fact, are using the term platform literacies to refer to the particular kinds of literacy practices associated with different platforms (see Chapter 7). The goal of studying platform literacies is to understand the ways our behaviours are controlled by the affordances and constraints of platforms and ultimately to understand the larger political and economic agendas that influence their design.

How this book is organized

This book is divided into two parts. In the first part we explore the fabric of affordances and constraints digital media make available to us, looking at things like search algorithms, hypertext, and the ways new technologies facilitate our ability to manipulate different modes in texts like photographs and videos in ways never before possible. We will also explore how digital media enable and constrain different cognitive and social processes, ways of sharing and evaluating information, ways of distributing attention, and ways of managing our social relationships. We will also examine the ways digital tools are changing the way we interact with the physical world and the way we experience our bodies and use them as communicative resources. At the end of this section, we will critically explore the degree to which these affordances and constraints act to promote particular ways of seeing and representing the world, to normalize particular kinds of behaviour, and to advance the agendas of particular kinds of people.

In the second half of the book, we will go on to apply this analysis to specific ‘literacies’ that have grown up around various digital media and within various communities of media producers and consumers. We will examine practices like online gaming, social networking, peer production and collaboration, and surveillance.

Each chapter in the book includes a case study in which the concepts or principles discussed are illustrated with an example. In addition, each chapter includes activities that help you to apply the ideas we have discussed to examining and analysing your own digital literacy practices. In the online supplementary materials, we have included a glossary of terms, and throughout the book, whenever we introduce an important term for the first time, we will highlight it in bold type and include a definition in the online glossary.
By the time you read this book, many of the ‘new literacies’ we discuss here will already be ‘old’ and many of the ‘new technologies’ may already be obsolete. Indeed, this second edition of the book, written just seven years after the first edition was published, includes information on a range of new
technologies and a range of new literacy practices that didn’t exist when we wrote the first edition. Hopefully, however, our framework for understanding digital media by exploring how they affect what we do, how we make meaning, how we relate to one another, how we think, and the kinds of people we can be will still be of value. Our real goal is not to teach you about particular literacy practices so much as to give you a useful way to reflect on your own literacy practices and how they are changing you.

Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Part I

Digital tools

Chapter 2

Information everywhere

Understanding how to cope with and use information is one of the most important aspects of digital literacies. Many people nowadays believe that digital technologies have brought about information overload (Waddington, 1998), a condition characterized by increased levels of stress and confusion resulting from having ‘too much information’. Others are concerned about the accuracy of the information that circulates through digital networks, and the proliferation of fake news and conspiracy theories online. In this chapter we will examine the effect digital technologies have on how we access and process information and how we understand concepts like ‘knowledge’ and ‘truth’. We will argue that problems like ‘information overload’ and ‘fake news’ are not so much problems of ‘too much information’ or ‘bad information’, but rather problems of understanding how we create information, knowledge, and truth to begin with, through forming relationships between different pieces of data and between data and the social contexts in which we encounter them (including the people from whom we get them).

While digital technologies have dramatically increased people’s access to information, they also provide extremely sophisticated tools for filtering, channelling, and sharing information with others, and for evaluating the information we find. These tools involve both affordances and constraints. On the one hand, they help us to filter out unnecessary information, but on the other hand, they may keep important information from us or contribute to the creation of a distorted view of the information that is available. Coping successfully with information and turning it into knowledge involves understanding the information-creating, information-limiting, and information-distorting capacities of digital media.

ACTIVITY: FINDING INFORMATION

Imagine that you are doing the following things. In each scenario, how would you go about finding the information you need to help you complete the task? What information sources would you go to, and
how would you judge their credibility? Would you be likely to get more information than you need or less? How would you evaluate the information based on its relevance to your task?

• You’re writing an essay on Martin Luther King;
• You want to buy a new smartphone;
• You’re looking for a roommate;
• You’re looking for some new music to listen to;
• You want to find out if your new boyfriend is secretly married.

Follow-up questions

1. What sorts of resources and strategies did you use to find information relevant to these different tasks? Were these resources and strategies different for different kinds of task?
2. Did any of these tasks involve combining information from different sources? What are the advantages of and challenges associated with this?
3. In each of these tasks, what strategies would you use to evaluate whether or not the information you have is accurate?
4. Did any of these tasks require you to relinquish information to others in order to get information you need? Explain.

Information and relationships

Much of the concern about information overload comes from a fundamental misunderstanding of what information is and how we create and process it. Think about walking on a busy city street. All around you, things are happening. There are thousands of sights and sounds, signs everywhere and people all around you talking. Most people who find themselves in such situations do not feel they are suffering from ‘information overload’ because they do not consider everything that is happening around them to be ‘information’. They selectively pay attention to and process the data which they judge to be important for them. In other words, they create information from the data that is available.

The first distinction we need to make, then, is that between ‘information’ and ‘data’. Data are ‘perceptible phenomena’ (including sights, sounds, colours, words) that exist in the external world. These ‘phenomena’ only become information when we establish some kind of relationship with them. Besides data and information, there is also a third category that we need to consider, and that is ‘knowledge’. Knowledge is what is created when
information is integrated into our minds so that we can adapt it to different circumstances and apply it to problems. The creation of knowledge always depends on how new information interacts with the previous knowledge, memories, intentions, and biases we carry around in our heads, as well as with the knowledge, memories, intentions, and biases that circulate in the communities we belong to.

Let’s return to the busy city street and consider how the concepts of data, information, and knowledge apply. In that environment, when we select from and connect up the data from the environment or from sources like maps and other people, we create the information we need to get from one place to another. When we make use of that information to decide what the best way to go is, and that judgement becomes part of our habitual way of getting to where we want to go, then we have created knowledge. Such knowledge may not necessarily result in us getting from here to there in the quickest or most efficient way, because knowledge is affected by all sorts of things such as ideas about what counts as a good way to go, whether it is particularly scenic or particularly unsafe, or whether or not we care about getting to where we are going quickly.

Information, then, is the result of how we organize data, how we make some pieces of it more relevant than others and create relationships between different pieces. Knowledge is the result of our beliefs about the value of information and what we think information is for. Information and knowledge are created through processes of categorizing the data that we encounter and filtering it, that is, deciding what data is worthy of our attention in a given situation. It also involves processes of evaluating the data that we are paying attention to and the information that is created from it for how useful or credible it is. It is important to remember that we never engage in these processes alone.
Our social worlds provide all sorts of tools to help us. In the case of walking on a city street, these tools might consist of things like street signs, traffic signals, maps, guidebooks, and recommendations from other people about the best route to take. When navigating digital environments these tools include things like search engines, recommendation systems, and networks of ‘friends’ and followers on social media. These tools allow us to ‘offload’ some of the cognitive work of sorting, filtering, and evaluating. But, as we said in the last chapter, all tools have affordances and constraints, so while these tools may bring advantages, they also inevitably bring disadvantages.

For many people nowadays, for example, navigating their way through an unfamiliar city is likely to involve staring at their smartphone and following the little blue dot in Google Maps or some other app. What’s great about these apps is that they do most of the information creation work for us, gathering together, filtering, and connecting up the data we need to get to where we want to go. They might even suggest alternate routes, evaluate them, and suggest things that we can eat or buy along the way. But there are
also problems with such apps. First, the data they gather might be incomplete; that is, they may not be aware of obstacles like temporary road works or a street demonstration or flash mob. Sometimes our own eyes and ears are better at gathering data from the environment, and our own minds are better at processing it. In the worst-case scenario, we might be so intent on staring at our phones that we fall into a manhole or get hit by a truck.

Another problem with such apps has to do with knowledge. Although the information and recommendations that the app gives us may seem ‘objective’, they are based on criteria (like user recommendations or the calculations of algorithms) which we may not be entirely aware of. So when we ‘offload’ knowledge creation to a digital tool or platform, we are investing a great deal of trust in that platform. At the same time, we might lose the chance to create knowledge ourselves through experimentation. In the case of map apps, one thing that we lose is the knowledge and resilience that often comes from getting lost in an unfamiliar city.

But perhaps the biggest problem with such apps, though it is a problem invisible to most users, is that they also turn us into information, which is inevitably used by people and organizations that we might not have imagined we were sharing our walk with. In other words, Google Maps does not just tell us where to go, it also tells Google where we are, where we want to go, and how we eventually got there. This is true of almost all of the digital tools we use to create information and knowledge—when we use them, we are also contributing to the creation of information and knowledge about us for others (internet companies, advertisers) to use when deciding what kind of information to give us in the future.

To sum up, information is not about ‘facts’ so much as it is about the relationships that we create between different ‘facts’ and between ourselves (and other people) and those ‘facts’.
In the first chapter we argued, quoting Michael Schrage (2001), that what is often referred to as ‘the information age’ is more accurately thought of as the ‘relationship age’, mostly because people seem to use computers as much to connect with and communicate with other people as they do to search for, store, and manipulate information. Now we would like to take that idea further, arguing that even these practices of searching for, storing, and manipulating information are more a matter of relationships than they are of data itself. In other words, we would like to argue that information is most usefully seen not as a collection of ‘facts’, but as a social practice based on establishing relationships.

Ontologies: Organizing the world

The first step in creating information is having data available to us in a way that makes it easy to form useful relationships with it. Throughout history, human beings have come up with various systems of organizing and classifying data. An organization system is any system which makes it
easy for us to locate the data that is relevant to the task we are performing. Organization systems also help us to form useful relationships between pieces of data because their whole point is to arrange data in relation to other data. Organization systems exist in books, online, and even in the arrangement of physical objects (the layout of the streets in a city, for example, can be seen as an organization system).

Organization systems are based on ontologies. In philosophy, ontology refers to the study of ‘being’, which means the study of reality, the entities which make up reality and how they relate to one another. It asks what kinds of things exist and how they are connected together or divided up into categories. For computer scientists, ontologies are the sets of concepts and relationships computer systems need to solve problems. Ontologies are especially important in the development of artificial intelligence, but even relatively simple software applications are based on ontologies. The ‘reality’ of Microsoft Word, for example, consists of a distinct set of objects (such as files, fonts, line spacings, and words) and a distinct set of things that users can do with these objects. Ontologies are essential for being able to make sense of the world, but they are always limited or ‘biased’ in some way.

One of the most common ways of organizing data is the hierarchical taxonomy. The eighteenth-century Swedish botanist Carl Linnaeus is usually considered the father of modern taxonomies. He developed a classification system for plants and animals that is still used by scientists today: a nested hierarchy which classifies all living things first into one of three ‘kingdoms’, subdivided into classes, orders, genera, and species.
Other classification systems of this type are those we find in libraries such as the Dewey Decimal System, developed by American librarian Melvil Dewey in 1873, which consists of 10 main classes of subjects divided into 100 divisions and 1000 subdivisions. Despite Dewey’s intention to develop a ‘universal’ classification system, however, his categories betray an ontology limited by his own Victorian way of seeing the world. While Christianity is given eight subdivisions, for example, all ‘religions other than Christianity’ are relegated to a single category (no. 290) (see Figure 2.1) (Shirky, 2005). This is what we mean by ontologies being biased.

In order to be used effectively, organization systems usually require technological tools to act as an interface between the system and its users. One of the greatest advances in information technology in the nineteenth century, for example, was the invention of the filing cabinet and its close cousin, the library card catalogue cabinet. You may think that a filing cabinet is not such a big deal, but at the time of its invention it was. The idea that you could store documents vertically in different folders for easy classification and access was a tremendous advance.

The development of hierarchical taxonomies and technological tools like filing cabinets had a profound effect on the way people interacted with data. It changed what they were able to do by making it much easier to
locate the data which they needed to create meaningful information. It also affected the meaning behind these relationships since data were arranged so that much of their meaning was derived from their relationship with other data. It changed human relationships and identities as well by giving to certain people and institutions the power to control knowledge by deciding what belonged in the classification system and where it should be classified. Finally, it changed the way people thought about data, first by encouraging them to think of the natural and social worlds as rational and orderly, and second, by making us think of data itself as ‘information’ (since the classification system itself seems to create a kind of predetermined relationship between pieces of data).

Dewey, 200: Religion
210 Natural theology
220 Bible
230 Christian theology
240 Christian moral & devotional theology
250 Christian orders & local church
260 Christian social theology
270 Christian church history
280 Christian sects & denominations
290 Other religions

Figure 2.1 An ‘ontology’ based on the Dewey Decimal System.

There are, of course, disadvantages to organizing data in this way. Often it is difficult to know exactly where different items should be placed in the series of categories. Classifying books, for example, is often extremely subjective. Whether or not a particular volume is about linguistics or psychology, or about history or politics, or even if a work is fiction or non-fiction is sometimes a matter of debate. Placing an item in this system always involves judgement, and that judgement is often a reflection of a particular agenda or ideology (see Chapter 7). The second problem with such a system, at least when applied to physical things like books, is that items can occupy only one place in the system at a time. Library books can only have one place on the shelves. Documents can usually only occupy one folder in a filing cabinet. Of course, it is possible to make multiple copies of a document and store it in multiple places, but after a while this becomes unwieldy, expensive, and, in the end, undermines this whole system of classification which assumes that there is a place for everything and everything should be in a particular place. Finally, this system does not really work the way our minds work.
We do not usually think in strict categories and hierarchies. Instead, we think
associatively, making connections between topics based not just on whether or not they are hierarchically related to each other, but on whether they are related in a whole host of other ways, many of which may not be immediately clear to people who create classification systems.

One of the greatest contributions of digital media when it comes to the way we create and use information has been to make available another way of organizing data which is actually closer to the way the human brain works, a method based on networked associations. Pieces of data can now be easily linked with other pieces of data based on all sorts of different relationships other than simple hierarchy. The internet with its collection of linked web pages constitutes a system of networked associations that has developed organically from the bottom up based on the kinds of relationships people have noticed between different pieces of data on the internet. While this is an extremely powerful way of organizing data, using it requires a very different set of techniques and tools than those we previously used with static classification systems. Trying to create useful information by navigating through a complex web of data linked in all sorts of different ways is challenging, and so requires an interface much more powerful than a filing cabinet. The most important interface for operating in this kind of ontology is the search engine, which is the topic of our case study below.

Besides associative linking, another new tool for organizing data which digital technologies make available is the ability to ‘tag’ data with metadata, or data which describes a particular piece of data with a set of concepts or references to other data. With tagging, rather than putting items into folders, we put labels onto the items, enabling us to search for them later. This allows our information to be organized in many different ways in many different ‘places’ at once.
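To make this concrete, here is a small sketch in Python (the posts and tags are invented for illustration): a tag index maps each tag to the set of items carrying it, so one item can sit in many ‘places’ at once, and Boolean search reduces to plain set operations.

```python
# Toy 'folksonomy' (tags and posts invented): each tag maps to the
# set of items carrying it, so one item can live in many 'places' at once.
posts_by_tag = {
    "#puppy":     {"post1"},
    "#cute":      {"post1", "post2", "post3"},
    "#christmas": {"post1", "post3", "post4"},
}

def search_and(*tags):
    """Items carrying ALL of the given tags (Boolean AND)."""
    return set.intersection(*(posts_by_tag.get(t, set()) for t in tags))

def search_or(*tags):
    """Items carrying ANY of the given tags (Boolean OR)."""
    return set().union(*(posts_by_tag.get(t, set()) for t in tags))

# post1 is reachable through three different routes (#puppy, #cute,
# #christmas); a physical shelf would have to pick just one of them.
print(sorted(search_and("#cute", "#christmas")))  # ['post1', 'post3']
```

Unlike a shelf or a filing cabinet, nothing here forces an item into a single location; adding a new tag simply adds another route to the same item.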
The good thing about tagging is that it allows people to organize information in the same way that the brain does, using multiple, overlapping associations rather than rigid categories. For example, in Instagram I can tag a picture of my puppy wearing a Santa hat with labels like #puppy, #cute, and #Christmas. With these labels I create all sorts of different routes for people to access my picture. In Instagram and many other social media platforms you can also search for items using combinations of labels based on Boolean logic, in the same way you can search for information in search engines. For example, a user can combine #cute and #Christmas and find not just my puppy but all sorts of cute things having to do with Christmas like elves and Rudolph the Red-Nosed Reindeer.

Although many software applications support tagging, and you can attach metadata to files in both Microsoft Windows and Mac OS, the place where tagging has become a really central strategy for the organization of data is on the World Wide Web, and especially on social media platforms. In the past, whenever we wanted to get data from a public source, like the library, we relied on an expert to organize and classify it for us. The problem was that we were stuck with whatever classification system the expert had settled on. Nowadays, many platforms, rather than relying on experts to organize and classify data, rely on users to do this work. Of course, everybody has a different way of tagging based on their own judgements and their own opinions, and often, especially on social media platforms, people use hashtags as much to communicate something to their friends or followers as to objectively describe what they are posting.

So, if everybody is adding different kinds of tags to the same kinds of information, why doesn’t this result in chaos? The answer is a concept that the philosopher Pierre Lévy (1997) calls collective intelligence, a concept that we will discuss in more detail in Chapter 11. The idea of collective intelligence is that if lots of people make decisions about how something should be classified or organized and you put all of these decisions together, you end up with a system that reflects the collective ‘wisdom’ of the community. Nowadays, however, it is less trendy for people to talk about ‘collective intelligence’ when discussing the internet. They are more likely to talk about collective ignorance. To understand how we got from Lévy’s optimism of the late 1990s to today’s concerns about the quality of information that people tag and share online we need to consider another important process involved in the creation of information. That process is filtering.
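Before moving on, the aggregation behind this ‘collective intelligence’ idea can be illustrated in a few lines: each user tags the same photo idiosyncratically, but counting everyone’s choices yields a sensible consensus description. (The users and tags here are invented.)

```python
from collections import Counter

# Four (invented) users tag the same photo in their own idiosyncratic ways.
user_tags = [
    ["#puppy", "#cute", "#christmas"],
    ["#dog", "#cute"],
    ["#puppy", "#santahat", "#cute"],
    ["#puppy", "#christmas"],
]

# Aggregating all the individual choices yields a consensus description:
# the idiosyncratic tags (#dog, #santahat) sink to the bottom.
consensus = Counter(tag for tags in user_tags for tag in tags)
print(consensus.most_common(3))
```

Real platforms, of course, weight taggers and tags in far more complicated and far less transparent ways, which is part of how collective ‘wisdom’ can shade into collective ignorance.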

Filtering

In the beginning of this chapter, we gave the example of navigating the streets of a city, and we noted that, in order to do this, people can’t possibly pay attention to all of the data available to them. Only some of this data will be relevant to what they want to do. The process of separating data that is more relevant from data that is less relevant is called filtering. Like organization systems, filtering systems are always biased in some way because they are, by their very nature, based on assigning more value to some pieces of information and less value to others.

The science of how people assign value to data and to information is called epistemology. Epistemology is the study of how humans decide on the best way to create knowledge, what methods for knowledge creation they value over others, and how they decide whether information and the knowledge that we build from it is ‘true’ or ‘false’, ‘valid’ or ‘invalid’, ‘fact’ or ‘opinion’. The basis of epistemology is filtering. Value may be assigned to data based on criteria that we have chosen (‘Is this piece of data important for what I want to do?’), or it may be assigned based on criteria others have come up with for us. The fact is, we almost always require the help of other people and of various tools to help us to filter data; it’s usually too big a job for our brains to handle unassisted.
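In computational terms, a filter is just a rule that assigns value to items of data and keeps what passes. A minimal sketch, with invented items and criteria, chaining a ‘mental’ and a ‘social’ filter of the kind discussed below:

```python
# A minimal sketch of filtering: each filter is a rule that decides
# whether an item of data deserves attention. Items and criteria invented.
items = [
    {"text": "road closed ahead", "source": "city_signs"},
    {"text": "celebrity gossip", "source": "stranger"},
    {"text": "new cafe on 5th", "source": "friend"},
]

def mental_filter(item):
    """Our own relevance judgement: is this about our journey?"""
    return "road" in item["text"] or "cafe" in item["text"]

def social_filter(item):
    """Trust data coming from sources we are affiliated with."""
    return item["source"] in {"friend", "city_signs"}

def apply_filters(stream, filters):
    """Keep only the items that pass every filter in the chain."""
    return [i for i in stream if all(f(i) for f in filters)]

for item in apply_filters(items, [mental_filter, social_filter]):
    print(item["text"])
```

Each additional filter narrows the stream further, and that narrowing is precisely where the biases described below can creep in.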
We also rarely rely on just one filter, but rather make use of multiple filtering systems (our own minds, the advice of friends, and the filtering capacities of various technologies), so the work of filtering is almost always distributed among different people and different non-human entities. In navigating the city street, we may rely on our previous knowledge and experience to filter out things that we don’t think are important to our journey, we might rely on a companion to point out what is important and what is not, we might look at the signs people have placed in various places to direct our attention, and we may also have Google Maps to help us find our way and alert us to attractions. We can call these combinations of mental filters, social filters, and technological filters: attention structures (Jones, 2005; Jones & Hafner, 2012). Every situation we encounter involves different attention structures, that is, different combinations of technological tools, people, and our own cognitive capacities.

Mental filters

Perhaps the most important filtering system we have is the human brain. At any given moment we are bombarded with countless sensory perceptions, but we don’t pay attention to all of them because our brains process these perceptions in a way that foregrounds some and backgrounds others. One area of the brain that is largely responsible for this is called the thalamus; it acts as a kind of relay switch between sensory neurons and the cerebral cortex, the area of the brain that governs planning and organization as well as motor functions. When the cortex receives sensory data that it deems important, it sends a signal to the thalamus to inhibit the transmission of other less important data. Certain mental disorders characterized by the inability to adequately filter sensory perceptions, such as ADHD and schizophrenia, are associated with a malfunctioning of this switching mechanism.
The cerebral cortex makes decisions about the data and the thoughts we form around that data based on neural pathways that grow up over time as a result of our experience and learning. As we use these neural pathways, they are strengthened, sometimes to the point at which we are able to make decisions about which thoughts and data are important, useful, and ‘correct’ automatically. This is a good thing, since it allows us to perform tasks such as driving a car or, to return to our original example, walking on a city street, without having to think too much about what to pay attention to and what to ignore. But this automatic filtering of data and thoughts can also result in a distorted view of the world. All of us suffer from some degree of cognitive distortion based on the way our brains filter data and selectively accept or reject ideas and impressions about this data.

Perhaps the most common cognitive distortion is known as confirmation bias, the tendency we have to selectively filter out data that challenge


assumptions and beliefs that we already hold. For example, we might continue to take a particular route through the city even in the face of evidence that an alternate route might be more efficient or more scenic. Confirmation bias operates in all areas of our lives, from how we evaluate the actions of friends and relatives to how we evaluate political information. In fact, in many cases, the more data you present to someone contradicting their assumptions or beliefs, the more strongly they hold to them, a phenomenon known as the backfire effect. Another common bias that can affect our mental filtering systems is called loss aversion bias, sometimes referred to as the ‘fear of missing out’ (FOMO). FOMO can interfere with our ability to effectively filter data because of the feeling that we might be missing something important. This is particularly common when it comes to data and information we get from social media platforms, in situations in which not noticing or acknowledging data that others are making available can have negative social consequences. Loss aversion bias can be exacerbated when the choice of data sources or the amount of data available increases. Psychologists and behavioural economists call this phenomenon the paradox of choice: the more choices we have, the less happy we feel about the choices we have made.

Social filters

Another important way we decide what data is important, relevant, and valid is by relying on the advice or behaviour of other people. Sometimes we explicitly elicit advice from people around us regarding what we should pay attention to, and sometimes we make decisions based on what we think others are doing. It’s obvious that when people around us pay attention to something, we are more likely to follow suit. For example, when we see a crowd gathering on the street around a traffic accident or street performance, we feel compelled to check it out as well.
Social filters can increase cognitive efficiency because they give us a way of distributing some of the mental work of filtering to others. But social filters can also distort our view of the world, foregrounding those things that the people around us think are important and backgrounding the things they think are less important. They can also work to increase confirmation bias, since we often surround ourselves with people who share our assumptions and beliefs. Two key aspects of social filters are affiliation and reputation. We are more likely to trust information that has been filtered by people with whom we are affiliated in some way: friends, family, members of the same political party or ‘tribe’, or even just people that we perceive as being more like us. We also tend to evaluate information based on the reputation of the people who are filtering it. For example, we are more likely to trust a news story published by a news outlet that we think has the trust of lots of people, or buy products from a company that has a good reputation within our social group.


Many of the technological tools we use on a daily basis leverage the power of social filters to help us sort through the massive amount of data available to us through the internet. Nearly all social networking platforms make use of social filters: when we get our news from platforms like Facebook, for example, it is filtered by our ‘friends’, who may browse a range of sources (including their own Facebook News Feeds) and ‘like’ or ‘share’ those items that they think are worthy of our attention. This process is helped along by Facebook’s EdgeRank algorithm (see Chapters 3, 7, and 10), which makes items shared by friends with whom we have a stronger affiliation appear more prominently in our news feeds. Online commerce platforms and reviewing sites such as Amazon and TripAdvisor also use social filtering, aggregating the opinions of many users about a product or hotel or restaurant, assigning it a ranking (from one to five stars), and making recommendations to people based on what other people who bought the same products also bought. One of the main ways digital technologies have changed the way we use social filters is that they have given us new ways to ‘filter people’, that is, to separate ‘trustworthy’ from ‘untrustworthy’ people. In the past, our social filters were usually based on the trust we had in our friends or in trusted institutions such as schools, governments, and mainstream media outlets. Now many digital platforms and businesses associated with the sharing economy (or ‘gig economy’; see Chapter 7) feature ratings systems in which people compete for the trust of other users when it comes to the relevance of the information they are providing (e.g. Reddit), or the value or reliability of goods they are providing (e.g. eBay, Uber).
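To make these two social-filtering techniques concrete, here is a minimal sketch in Python. The hotel names, ratings, and purchase histories are all invented for illustration; this is not how Amazon or TripAdvisor actually implement their systems, only a toy version of the general idea of aggregating many users’ opinions and co-purchases.

```python
# Invented illustration of two social-filtering techniques:
# 1) aggregating users' star ratings into a single score, and
# 2) recommending items that co-occur in purchase histories.

from collections import Counter

ratings = {"Hotel Sol": [5, 4, 4], "Hotel Luna": [2, 3]}

def average_rating(item):
    """Aggregate many users' opinions into a one-to-five-star score."""
    scores = ratings[item]
    return round(sum(scores) / len(scores), 1)

purchases = [          # each inner list is one customer's purchase history
    ["camera", "tripod", "memory card"],
    ["camera", "memory card"],
    ["camera", "tripod"],
]

def also_bought(item, histories):
    """Recommend whatever most often appears alongside `item`."""
    co = Counter()
    for history in histories:
        if item in history:
            co.update(h for h in history if h != item)
    return [product for product, _ in co.most_common()]

print(average_rating("Hotel Sol"))       # 4.3
print(also_bought("camera", purchases))  # tripod and memory card
```

Even this toy version shows why such filters inherit the biases of the crowd: the ‘recommendation’ is nothing more than a summary of what other users have already done.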
The main result of these new ways of filtering people is that digital technologies have made social filtering less hierarchical—that is, less dependent on authoritative institutions and individuals to tell us which information is important—and more distributed—more a matter of networks of friends or even like-minded strangers helping us filter information (Botsman, 2018). One problem with this is that the people who traditionally fulfilled filtering or gatekeeping functions usually helped us to filter information based on some kind of expertise (scientists) or some kind of professional standards for evaluating information (journalists, editors), whereas friends and like-minded strangers often neither possess such expertise nor adhere to particularly rigorous standards for evaluating what they share or ‘like’.

Technological filters

The third kind of filtering that people use for sorting information is filtering done by technological tools. While the most relevant of these for our discussion are digital technologies, any tool that makes some data visible and other data invisible can be said to serve this filtering purpose.


Most of the digital tools that we use on a daily basis (such as search engines and social media sites) are designed to help us filter the data that they make available. The main way they determine which data to deliver to us and which to filter out is through the use of algorithms. In his book The Advent of the Algorithm, mathematician David Berlinski argues that the development of the concept of the algorithm at the end of the nineteenth century was the basis for all modern science, especially modern computing. Computers, he says, are physical machines that embody algorithms. Although algorithms themselves can be quite complicated, their essence is simply a finite list of precise steps for executing a particular task. A recipe for chocolate chip cookies, for example, can be seen as a simple kind of algorithm. As we will see in our case study below, algorithms are central to the way digital tools such as search engines work. Google’s PageRank algorithm, for example, is a series of steps that a computer program executes to determine how relevant a web page might be to a particular search term, not just by calculating the semantic relevance of the content of the page to the search term, but also by calculating the number of links to that page and whether those links come from popular sites. As we note in the case study, Google’s algorithm also takes into account other criteria, such as the geographical location of searchers and their past surfing behaviour. Algorithms make predictions about what is relevant and what is not based on a set of data and a set of assumptions about how that data can be applied to a particular problem of filtering. When it comes to technological tools such as search engines and recommendation systems, this data is mostly provided by users themselves through the actions they take online (what apps they use, websites they visit, links they click, and people they interact with).
As a result, the data that is presented to users is often based on their own previous choices and social affiliations. This is important for two reasons. First, it highlights that many technological filters end up amplifying the effects of users’ cognitive and social filters, including the biases inherent in these filters. In other words, technological filters are not, as a rule, any more ‘objective’ than the humans who use them. Second, it reminds us that, when using such filters to search for information, users almost inevitably end up giving information to the companies that own these filters, information that can be used for other purposes, such as determining which kinds of advertisements users might be most susceptible to. Just as cognitive and social filters can be biased, so can algorithms (see Chapter 7). As we mentioned above, they might end up amplifying the biases of users or their social groups, creating what Eli Pariser (2011) calls filter bubbles: online techno-social environments, such as social networks of like-minded people, which limit the range of opinions users are exposed to. While the filtering capacity of algorithms has the enormous advantage of protecting us from unwanted data and making our searches more efficient,


there are disadvantages as well. Sometimes filters filter out potentially useful information along with ‘spam’. No matter how sophisticated an algorithm is, it can never be absolutely sure about the data you need. Results returned based on criteria such as popularity and ‘relevance’ are not necessarily the best or most accurate. Sometimes data that is less popular or falls outside of the scope of the things you are usually exposed to is much more useful. It is important, therefore, to be aware of how algorithms filter data and to critically evaluate the criteria they use (see Chapter 7).
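The general logic of such a technological filter can be sketched in a few lines of Python. This is an invented toy, not any real platform’s algorithm: it scores each item by how well it overlaps with a user’s recorded interests and silently drops everything below a threshold, which is exactly how useful but ‘irrelevant-looking’ data can disappear from view.

```python
# Toy illustration of algorithmic filtering: items that do not match a
# user's past behaviour are discarded, and the rest are ranked by how
# closely they mirror the user's existing preferences.

def filter_feed(items, interests, threshold=1):
    """Keep only items whose topics overlap with the user's interests.

    items: list of (title, set_of_topics) pairs.
    interests: set of topics inferred from the user's past clicks.
    """
    scored = []
    for title, topics in items:
        score = len(topics & interests)  # overlap with past behaviour
        if score >= threshold:
            scored.append((score, title))
    # Most 'relevant' items first -- relevance here simply mirrors the
    # user's existing preferences (the seed of a filter bubble).
    return [title for score, title in sorted(scored, reverse=True)]

feed = [
    ("Local election results", {"politics", "local"}),
    ("Celebrity gossip", {"entertainment"}),
    ("New cafe opens downtown", {"local", "food"}),
]
print(filter_feed(feed, interests={"local", "politics"}))
```

Notice that the celebrity item never reaches the user at all: the filter does not report what it has removed, which is why the criteria an algorithm uses deserve critical scrutiny.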

CASE STUDY: SEARCH ENGINES

Perhaps the most important and widely used digital tool for turning data into information is the internet search engine. Over the years there have been many different approaches to searching the internet, but nearly all search engines consist of three main components: 1) a crawler or spider, a software program that travels through the World Wide Web and retrieves data to be indexed; 2) the indexer, which arranges what has been harvested into a form that can be searched by the user; and 3) the interface, which consists mainly of a set of procedures by which the index is searched and the results of the search are sorted. All three components present special kinds of challenges for the designers (and users) of search engines. When they work well, however, search engines provide the enormous affordance of allowing us to effectively filter the vast amount of data available online.

Search engines were not always the preferred way of locating data on the internet. In the early years of the World Wide Web, ‘directories’ or ‘web portals’ were much more widely used. Portals, like Yahoo and AOL, were originally web pages with lists of links arranged in hierarchical taxonomies according to subject. In fact, the development of the World Wide Web in the 1990s can in some ways be seen as a competition between the two systems of organization discussed above: the hierarchical taxonomy and the associative network. The problems with using a directory to manage data on the World Wide Web are obvious. First, there is just too much data to fit realistically into a directory, and so the links that are included must always be selected by some central authority. Second, the larger a directory gets, the more time- and labour-intensive it becomes to search. And finally, as we stated above, directories lock users into rather rigid conceptual categories that may not match the way they divide up data in their own minds.
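The three components named at the start of this case study can be sketched in miniature. The following Python toy runs over a tiny in-memory ‘web’ (a dictionary of invented page names and texts) rather than the real internet, but the division of labour is the same: a crawler retrieves pages, an indexer builds a searchable index, and an interface looks queries up in it.

```python
# Minimal sketch of the three search-engine components: crawler,
# indexer, and search interface, over an invented two-page 'web'.

from collections import defaultdict

WEB = {
    "history.example/york": "york england minster history",
    "travel.example/nyc": "new york usa travel guide",
}

def crawl(web):
    """Component 1: the 'crawler' retrieves every page's data."""
    return list(web.items())

def index(pages):
    """Component 2: the 'indexer' builds an inverted index mapping
    each word to the set of pages that contain it."""
    inverted = defaultdict(set)
    for url, text in pages:
        for word in text.split():
            inverted[word].add(url)
    return inverted

def search(inverted, term):
    """Component 3: the 'interface' looks the term up in the index."""
    return sorted(inverted.get(term, set()))

idx = index(crawl(WEB))
print(search(idx, "york"))    # both pages contain 'york'
print(search(idx, "travel"))  # only the travel page
```

The ‘york’ query already hints at the ranking problem discussed below: the index can find every page containing a word, but it says nothing about which page the searcher actually wants.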
Search engines also have problems, mostly having to do with the special technological challenges associated with the three components


mentioned above. The first challenge is to develop a crawler that can harvest the massive store of data on the web both thoroughly and efficiently. The second is developing a method for indexing the data so that the right kinds of search terms result in the right kinds of results. For example, if you are searching for York, you are probably more interested in York, England than New York, USA, though most of the web pages on the internet containing the word York are about New York. Similarly, if you type in the name George Washington, you are likely more interested in pages about George Washington (the person) than pages about the George Washington Bridge or George Washington University. Lastly, there is the challenge of developing a set of procedures which will return results in a way which can help users to judge their relevance to what they want to know or do, and facilitate the formation of useful relationships with and among these results. Over the years, different developers have gone about solving these problems in different ways. Perhaps the first great advance in search engine design came with the launch in 1995 by Digital Equipment Corp. of AltaVista, a search engine which, for the first time, made the efficient crawling and indexing of the web possible. The problem with AltaVista and many other search engines of this period was that they lacked an effective algorithm with which to judge the relevance of results and so were open to abuse by ‘spammers’. Spam is a term used for unsolicited and usually unwanted data. The type of spam most familiar to us is email spam, but another important kind is ‘search engine spam’: web pages which attempt to fool the indexing systems of search engines and impose themselves into the results of unsuspecting searchers.
Back in the 1990s the most popular method for doing this was keyword stuffing—filling web pages with popular keywords, often hidden (for example, white text against a white background), in order to fool crawlers and indexers. For example, a pornography site might secretly embed the names of popular entertainment figures in order to trick search engines into listing it among the results for popular searches. One of the most important developments in search technology was the invention of the PageRank algorithm in the late 1990s by Larry Page and Sergey Brin, two students at Stanford who went on to found Google. PageRank is based on the central idea we used to introduce this chapter: that information is not about ‘facts’, but about relationships. Thus, the ‘information value’ of any given piece of data comes from the number and strength of the relationships it has with other pieces of data and with other people. PageRank sorts search results in


terms of relevance based on the number of other sites which link to them and the quality of these linkages. In other words, the more sites that link to a given site, the more ‘important’ that site is deemed to be. Not all relationships are equal, of course. If your brother links to your site, that may help you, but not much, because not many sites have linked to his site. If, on the other hand, The New York Times links to your site, your site will go up in ‘information value’, since so many other sites have linked to The New York Times. The basic PageRank algorithm can be expressed mathematically as:

PR(A) = (1 − d) + d (PR(T1)/C(T1) + … + PR(Tn)/C(Tn))

where PR(A) is the PageRank of page A, PR(T1) … PR(Tn) are the PageRanks of the pages T1 … Tn which link to page A, C(Ti) is the number of outbound links on page Ti, and d is a damping factor which can be set between 0 and 1.

This algorithm is not the only way that Google ranks search results, but it is the most important and, when it was developed, the most revolutionary method of returning relevant results, and it fuelled the meteoric rise of Google as the search engine of choice for most people at the time this book was written. Besides the PageRank algorithm, Google also uses other kinds of data signals to determine the relevance of particular websites to users’ queries. Starting in 2009, it added personalized search features. What this means is that the results you get are not just affected by other people’s behaviour in linking pages to one another, but also by your own past search behaviour—the terms you have searched for before and the results you have clicked on. These and other factors go together to help the search engine determine the relevance of particular sites for you (or for the kind of person the search engine thinks you are).
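Before moving on, it may help to see the basic formula above as code. The sketch below computes it by simple repeated iteration over an invented three-page link graph (it assumes every page has at least one outbound link); real search engines operate on billions of pages and many additional signals, but the arithmetic is the same.

```python
# Iterative computation of the basic PageRank formula:
#   PR(A) = (1 - d) + d * sum(PR(Ti) / C(Ti)) over pages Ti linking to A

def pagerank(links, d=0.85, iterations=100):
    """links: dict mapping each page to the list of pages it links to.
    Assumes every page has at least one outbound link."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}  # initial guess for every PR value
    for _ in range(iterations):
        new_pr = {}
        for page in pages:
            # Sum PR(Ti)/C(Ti) over every page Ti that links to `page`.
            inbound = sum(pr[t] / len(links[t])
                          for t in pages if page in links[t])
            new_pr[page] = (1 - d) + d * inbound
        pr = new_pr  # repeat until the scores settle
    return pr

graph = {
    "A": ["B", "C"],  # A links out to B and C
    "B": ["C"],
    "C": ["A"],
}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # 'C': it has the most inbound link value
```

Note how C comes out on top: it is linked to by both A and B, and B passes on its entire score because C is its only outbound link—exactly the ‘number and quality of linkages’ idea described above.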
If you frequently search for and click on links related to George Washington University, for example, when you search for George Washington, or even just George, chances are that the university homepage will appear high in your results. Personalized search also takes into account your location, and so the term George is more likely to return George Washington University if you are searching from Washington, DC, than if you are searching from Cairo. Google’s personalized search also makes use of data that is present in other Google apps such as Gmail and Google Docs, and even location information from the Google Maps app you might have used during your stroll around the


city. It searches through the emails you have sent and received, for example, and cross-references that data with information on what you have searched for in order to get a more accurate idea about what kinds of search results you will find most relevant. As we mentioned above, some people are concerned that such practices might make search results too biased towards users’ preconceived opinions or ideas about the world, creating filter bubbles. It is also important to remember that Google doesn’t just use the data it collects about users to help them get more relevant search results. This data is also used by Google to select advertisements for you to see through its AdWords platform, which is Google’s biggest source of revenue. Google is just as much an advertising company as it is a search company, and with the amount of data it gathers from users it has become an extremely effective one. Another enhancement of Google’s search capacity was semantic search, introduced in its Hummingbird upgrade in 2013. This enhancement allows users to formulate queries using natural language and is able to detect how the meanings of words change as a result of the contexts in which they are used. Despite these advancements, Google’s algorithms are still not immune to search engine spammers. These spammers have, over the years, attempted various methods to trick the algorithms of search engines, including creating ‘link farms’, networks of sites which have no purpose other than to link to other sites and increase their ranking. Consequently, Google and other search companies constantly ‘tweak’ their algorithms in order to fight such activity, a process which unfortunately sometimes causes legitimate sites from honest purveyors of data to fall in the rankings. The lesson that can be taken from this brief survey of search is that no matter how good search engines become at delivering to you the data you want and need to create useful information, there will always be limitations.
There will always be people who will try to figure out ways to beat the system and to push to you data that you do not want, and sometimes attempts to stop them or to ‘personalize’ your search can end up making the results too narrow. Another lesson is that Google, along with the other platforms we use for filtering information, is also in the business of harvesting information about us. In other words, apart from indexing the information on the internet, Google is also creating what media scholars Felix Stalder and Christine Mayer (2009) call a ‘second index’ of information about the people who are using its services. Apart from raising concerns about personal privacy, this practice also raises concerns about


personal autonomy, people’s ability to control the kind of data they are exposed to and the kind of information and knowledge they can create. As Stalder (2010: para 30) puts it:

[P]ersonalization enhances and diminishes the autonomy of the individual user at the same time. It enhances it because it makes information available that would otherwise be harder to locate. It improves, so it is claimed, the quality of the search experience. It diminishes it because it subtly locks the users into a path dependency that cannot adequately reflect their personal life story, but reinforces those aspects that the search engines are capable of capturing, interpreted through assumptions built into the personalizing algorithms.

ACTIVITY: INTERROGATING SEARCH ENGINES

Apart from Google, there are a number of other search engines available, though most of them are not as popular. Some of these search engines advertise features that distinguish them from Google, such as providing ‘privacy’ to users or giving them access to pages that are not available through Google. Some of them are based in countries in which the censorship practices of the government can affect search results. Here are some examples of these ‘alternatives’ to Google:

Bing (https://www.bing.com), operated by Microsoft Corporation, is the most well-known competitor to Google. It is said to be particularly effective for searching for multimedia content such as images and videos.

DuckDuckGo (https://duckduckgo.com) does not collect data from users or track them as they surf the internet, and so forgoes the advantages of personalization in favour of what it refers to as ‘unbiased results outside the filter bubble’.

Gibiru (https://gibiru.com) claims to give users access to information that is ‘censored’ by other search engines when users click the ‘uncensored’ tab. In particular, Gibiru returns results from sites like 4chan.

Yandex (https://yandex.com) is the most popular search engine in Russia.

Baidu (https://www.baidu.com) is the number one search engine in China, where Google is blocked.


Try out the different search engines listed above or some other search engines you’ve heard of, typing the same search terms into each one and noting the differences in the results and rankings that are returned. Possible things that you can search for include:

• A recent political event in your country;
• A popular entertainment figure;
• A local business;
• Your own name.

Discuss the ‘biases’ of these different search engines and how you might account for these biases. Now enter the same terms into Google and compare the top ten results returned with the Google results of other people searching for the same terms (using their own devices or accounts). Compare your results to those of:

• A classmate;
• Your teacher;
• An older relative such as a parent or grandparent;
• Someone you know who lives in a different city or country.

Are the results returned to different people the same or different? How can you explain these similarities and/or differences based on what you know about Google’s personalization features?

The attention economy

The example of Google highlights the fact that the possible biases of technological filtering systems result not just from the affordances and constraints of the technologies themselves, but also from the economic agendas of the companies that have developed these tools. In fact, it’s really impossible to understand how knowledge is created and valued in the digital era without understanding the economic factors that determine the way filters are designed and affect the way information is circulated. As we said above, digital platforms such as search engines and social media sites are part of the attention structures we use to navigate the world. And it is because of their ability to channel people’s attention that digital platforms have become so profitable for their owners. While Google’s stated mission is ‘to organize the world’s information and make it universally accessible and useful’ (Google, n.d.), the real value of the company, in economic terms, comes not just from making data available to its users, but also from making the attention of its users available to advertisers.


The prevalence of this kind of business model—one based on selling users’ attention—among big internet companies such as Google and Facebook has led some scholars to propose that we are not so much living in an information economy as in an attention economy, where value is created from the monetization of attention. The idea of the attention economy, however, existed long before search engines and social media sites. Back in the early 1970s, economist Herbert Simon theorized that as the amount of information available to consumers increased, the value of that information would actually decrease, while the value of people’s attention would increase. He wrote (1971: 40–41):

[I]n an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

For most of this chapter we have focused on the cognitive, social, and technological means that people use to sort through the vast amount of data available to them online. But this is only half of the story. The other half is the fact that, while we are trying to focus our attention on the things that are important to us, others are trying to push data to us and to focus our attention on things that are important to them. They include internet companies, whose interest is to increase users’ engagement with the information on their sites, advertisers, who are trying to sell us things based on what they know about us, and other users, who consciously design their content in order to attract ‘likes’ and comments from us.
The problem with this is that just because content is able to attract attention doesn’t mean that it is useful or relevant to users’ needs, and it certainly doesn’t mean it is true. In fact, the best way to attract people’s attention is to provoke strong emotional reactions and promote the biases that already distort their information-gathering behaviour. In the information ecosystem of the internet, as we noted above, cognitive filters, social filters, and technological filters can often work together to amplify these biases. Technological tools hold our attention not just by feeding us information that arouses our emotions and confirms our biases, but also by changing our relationship to information itself through the use of a range of different design features. In his book Hooked: How to Build Habit-Forming Products, business consultant Nir Eyal (2014) describes how technological tools are designed to psychologically compel users to keep using them. One way they do this is by frequently triggering people to engage with them, often through notifications that remind users that they haven’t engaged with


the tool for some time or inform them that someone else has done something (like sent you an email, posted on Instagram, or invited you to play Candy Crush). Every day, in fact, over 11 billion notifications are sent to users of the Android mobile operating system. External triggers are effective because they activate internal, psychological triggers, such as the ‘fear of missing out’ (see above). The promise of a possible reward that notifications give us also causes our brains to release a chemical called dopamine, which makes us feel good. Another way technological tools hold our attention is through providing intermittent variable rewards. Rather than a steady flow of useful or engaging content, what we find when we respond to notifications is often unpredictable. This has the effect of making us more likely to respond and more likely to keep scrolling down our feeds until we find something rewarding. This tendency to become more engaged when rewards are unpredictable was discovered by the famous behavioural scientist B.F. Skinner in his experiments with lab rats: he found that rats that, when they pressed a lever, sometimes received a small treat, sometimes a large treat, and sometimes no treat at all, were likely to press the lever more compulsively than rats that received the same treat every time. This is also what makes gambling so engaging. As internet ethicist Tristan Harris (2016: n.p.; see also Chapter 3) puts it:

When we pull our phone out of our pocket, we’re playing a slot machine to see what notifications we have received. When we swipe down our finger to scroll the Instagram feed, we’re playing a slot machine to see what photo comes next. When we “Pull to Refresh” our email, we’re playing a slot machine to see what email we got. When we swipe faces on dating apps like Tinder, we’re playing a slot machine to see if we got a match.
The design features and algorithms governing the way many platforms operate, then, are contrived not to inform us but to distract us, holding our attention so that we can be exposed to data that serves the economic interests of platform owners. Sometimes this results in us being served content that not only confirms our existing opinions but leads us to more extreme ones. For example, one way that YouTube keeps users engaged with the site is by recommending and automatically playing other, related videos. Unfortunately, YouTube’s algorithms have determined that videos containing extreme views and incendiary information are more likely to keep people engaged. Distraction and engagement are not just a matter of economic incentives. They can also serve political aims. For example, the affordances of social media to engage users (and take their attention away from other things) have


been exploited effectively by politicians such as former US President Donald Trump, who frequently posted extremist or incendiary things on Twitter in order to deflect attention away from news that reflected badly on him.

ACTIVITY: ATTENTION IN SOCIAL MEDIA

Choose a social media app that you have used before (such as YouTube, Facebook, Twitter, Instagram, or Snapchat) and discuss the following questions:

1. What tools does the app make available to people to efficiently distribute their attention by filtering data?
2. What tools does the app make available for helping people attract the attention of others?
3. What design features can you find in the app that might make it ‘addictive’ or ‘habit forming’? Explain how you think these features work.

Conclusion

In this chapter we have attempted to address the problem of ‘information overload’ often associated with digital media by making a distinction between data and information. Data is something that we either find or that is ‘pushed’ to us by other people, and information is something that we create by forming useful relationships with this data and among different pieces of data. Forming these relationships always involves using various kinds of tools, such as organizational systems and filters, and the affordances and constraints of these different tools affect the kind of information we are able to create and what we are able to do with it. We will have much more to say in later chapters about how to ‘interrogate’ and evaluate the data you find on the internet and how to distinguish the ‘true’ from the ‘fake’. For now, it will suffice to focus on this notion of relationships. Usually, the best way to judge data you find on the web is to examine the relationships they have to other pieces of data and to the different kinds of people who have produced, used, or recommended them. In looking for relationships with people, for example, you might ask who produced the data and what kinds of affiliations they have (e.g. a university, a religious organization). When it comes to publicly available data, it is important to remember that all data have an agenda (see Chapter 7), which is usually to convince you of a certain idea or position or to make you do something (like make a purchase). In looking for relationships between this

data and other data, you can consider how the data fits in with the data you already have, the relevance of the data to what you have to do, when the data were produced and when you need to use them, and the other sources of data linked to them. Perhaps the most important message of this chapter has to do with the inherent biases of all filtering systems when it comes to the creation of information and knowledge. The fact that the responsibility for filtering the data that we have access to lies in the hands of very few large and powerful companies should give us pause, even if these companies follow the motto ‘don’t be evil.’ Another important point we have tried to make in this chapter is that the effective creation of information and knowledge requires that we have some control over what we pay attention to and what we don’t, but many of the tools that we use to gather information are designed to take that control away from us, manipulating what we pay attention to for the benefit of others. As James Williams (2018) puts it, ‘the main risk information abundance poses is not that one’s attention will be occupied or used up by information, as though it were some finite, quantifiable resource, but rather that one will lose control over one’s attentional processes.’

Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 3

Reading and writing in digital contexts

With the development of digital media, the activities of reading and writing, which used to be mediated by pen and paper, typewriters, and printing presses, are now increasingly mediated by digital tools like e-readers, smartphone apps, web browsers, word processors, and other digital reading and writing tools. This shift from print-based to digital media has been accompanied by new literacy practices, shaped by the affordances and constraints of digital tools. Remember that when we talk about ‘literacy practices’ and ‘activities of reading and writing’, we are not only talking about cognitive processes happening inside people’s heads, but also social and material practices going on in the world. When we use different tools to read and write, we don’t just do different things with our brains, we also do different things with our bodies and interact with people in different ways. Most (but not all) printed books are designed to be read in a linear way that limits the reading path that we can take. This is not true of every book, of course. Encyclopaedias and dictionaries, for example, are designed to allow us to move from place to place, but we are still restricted by the classification systems of the authors (see Chapter 2). Printed books also separate us from the people who wrote them and put us in the position of passive recipients of information. In digital media, these design limitations are overcome through three main affordances, namely:

• Hypertextuality;
• Interactivity; and
• Multimediality.

Let’s take multimediality first. Multimedia is a form of communication that combines different media (such as video, audio, and text) and so allows us to make meaning using a wider range of modes (see Chapter 4) than is possible using just print. While print technologies allow us to combine image and text, it is impossible to include aural elements and moving images. Even the relatively straightforward job of inserting a picture on a page posed
technical challenges in the early days of print. In contrast, digital technologies have made these operations childishly easy, with the result that we now encounter multimedia everywhere. We consider these changes in detail in the next chapter.

Another affordance available in digital media is hypertextuality. In essence, hypertext is electronic text which is ‘hyperlinked’ to other electronic text. Hypertext is so fundamental to the architecture of the internet that we tend to take it for granted. However, it has had a profound effect on the way writers can structure and organize information, and the way readers can navigate their way through it. Unlike the pages of a book, which unfold in a linear sequence, hypertext can be organized in a variety of different ways. In addition, hypertexts can be generated dynamically by machines, as with search results or social media feeds that aggregate posts, articles, and web pages selected by algorithms.

Finally, digital media provide writers and their readers with the ability to interact with texts in ways that were previously difficult or impossible. For example, if you are reading a book and find an error, there is no easy way for you to ‘write back’ to the author and suggest an improvement. In contrast, Web 2.0 technologies allow us to comment on or even annotate the texts that we read online and engage in conversations with the author and other readers about them. At the same time, more and more sophisticated writing tools are being developed based on machine learning; these tools can also interact with us as we read or write, give us feedback, and even automatically write texts or change the texts we are reading based on how we respond to them. These new affordances of digital media have required people to rethink their understanding of reading and writing and redefine their ideas about what a reader is and what a writer is.

Hypertext and linking

Anyone who has ever surfed the internet or used a mobile app has experienced hypertext; it is the very fabric that makes up the World Wide Web. As we’ve defined it above, hypertext is electronic text which is hyperlinked to other electronic text. The idea behind hypertext can be traced back to the 1945 essay ‘As We May Think,’ written by the engineer and inventor Vannevar Bush. Bush contemplates a future device for information retrieval that he calls a ‘memex’, designed to retrieve information not only by following the logic of classification in hierarchical taxonomies, but also by following the logic of association (see Chapter 2). Using the memex, Bush imagined, scientists would be able to link books and articles, creating ‘trails’ of information that they could later follow in order to retrieve information. The term ‘hypertext’ was itself coined by American sociologist and
philosopher Theodor H. Nelson in the 1960s. In his book Literary Machines (1992: 0/2), he describes it this way:

By ‘hypertext,’ I mean non-sequential writing—text that branches and allows choices to the reader, best read at an interactive screen. As popularly conceived, this is a series of text chunks connected by links which offer the reader different pathways.

A key feature of hypertext, then, is that it allows readers and writers to make use of hypertext links (or hyperlinks) in order to organize electronic text in a non-sequential way. In essence, the writer of a hypertext document creates a range of choices for the reader, who selects from them in order to create a pathway through the text. A good example of the branching, non-sequential choices in hypertext is the homepage of a website. Often, such a homepage is little more than a series of links, each offering the reader different possible paths to choose from. Because hypertext gives readers greater flexibility to create their own reading paths, they can play a more active role in reading digital media than in traditional print-based media. This more active role is reflected in the expressions that we use to describe the activity of online reading. For example, we ‘surf’ the internet and ‘navigate’ a website, but we don’t usually ‘surf’ or ‘navigate’ a book. As a general principle, readers of hypertext are expected to make choices and read in a non-linear fashion. This contrasts with print-based media, where the dominant (though by no means only) expectation is that readers will proceed in a linear way. Hypertext provides writers with new opportunities as well. It allows writers to link their texts to other texts on the internet to support and elaborate on their claims. In this way, electronic texts can coherently reference and include prior texts, with writers linking to prior texts in a way that makes them accessible with minimal effort. However, this affordance also creates a new problem.
In hypertext it is harder for writers to lead readers through a lengthy argument, because it is easier for readers to navigate away, dipping in and out of a range of other documents. As a reader and writer of hypertext you need to be aware of its affordances and the way that hyperlinks can be used to provide order and structure to texts. Links can do this in one of two main ways: they can internally link different parts of the same text (or website) in a logical way, and they can externally link the text to other texts on the internet.

Linking internally to structure texts

As suggested, hyperlinks can be used to link different parts or ‘pages’ of the same online document and organize or structure the text internally. There
are three main ways people use hyperlinks to internally organize online texts (illustrated in Figure 3.1). Some hypertexts are organized in a hierarchical structure, with the hyperlinks arranged like a menu or a tree-like outline. Similar to the table of contents that you find in a book, this allows readers to see the entire organization of the document or website at a glance and easily navigate to the part that is most relevant to their needs. Another popular pattern of organization is the linear structure, in which parts of the text are organized in a specific sequence which readers have to follow. This kind of organization is common in things like online learning sites or for filling out surveys and forms. For example, the writer of an online learning site might want to break learning material down into a series of fixed steps, perhaps moving from a presentation of information at the beginning of the sequence to a quiz at the end. A third pattern might be called the hypertextual structure (Baehr, 2007). In this case, parts of the document are linked to other parts of the document or other documents on the internet (see below) based on relationships of association (see Chapter 2). For example, some key terms may be highlighted and linked to related pages, definitions, or elaborations of terms or concepts.

[Figure 3.1 consists of three diagrams: a hierarchical structure (a tree of Level 1 and Level 2 links), a hypertextual structure (a web of linked associations), and a linear structure (a chain of ‘link to next step’ arrows).]

Figure 3.1 Organizational structures in hypertext.

Most websites use a combination of these hierarchical, linear, and hypertextual organizational patterns. An online shopping site, for example, may have a list of the different kinds of products hierarchically arranged, links that can take you from one product to related products (for example ‘customers who bought this also bought…’), and then take you through the check-out process in a linear fashion.

Linking externally to create associations

Hyperlinks can also organize an online document externally, by associating the text with other texts on the internet. For example, a website may provide an external links section, which summarizes other resources that readers could be interested in. Or links to other online documents can be added in the flow of the text to provide more information about concepts relevant to the topic under discussion. When we use links in this way they create associations, either explicitly or implicitly, suggesting a relationship between linked texts. If you think about it, the same thing happens when you put two sentences next to each other in print (for example, ‘He’s been behaving very oddly lately. He’s in love.’ Here the second sentence implicitly stands in a relation of cause and effect with the first). The following is a (non-exhaustive) list of relations that could be implied by a link:

• Cause–effect;
• Comparison/contrast;
• Example;
• Sequence;
• Whole–part, or part–whole;
• Evaluation.

The ability to organize texts externally in this way provides a new affordance for writers, allowing them to easily juxtapose different texts. This affordance can be exploited in creative and playful ways, for example where links create unexpected associations to humorous effect. On the other hand, links can also be used to contrast different texts and viewpoints in critical, thought-provoking ways. It is important to remember that when authors link to an external source, they often leave it to the reader to infer the relation between the two texts. Also, links can have the effect of subtly changing the way that a text is read, by providing clues to the underlying assumptions of the writer. For example, our reading would probably change if a website about terrorism linked to another website about a particular political or charitable organization. In
this example, we would probably infer that the author sees the organization as related or similar to terrorism, and we would have to evaluate this association. A careful and critical reading of links can provide useful additional information about a writer’s underlying assumptions and attitudes.
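Before trying the activity below, it may help to see the three internal linking patterns from Figure 3.1 expressed as data structures. The Python sketch that follows is purely our own illustration (all page names are invented): a hierarchical site is a tree, a linear sequence is a list, and a hypertextual structure is a graph of associations that a reader (or a program) can traverse.

```python
# Purely illustrative models of the three organizational patterns;
# all page names and links are invented.

# Hierarchical structure: a tree, like a table of contents.
hierarchical = {
    "Home": {
        "Products": {"Laptops": {}, "Phones": {}},
        "Support": {"FAQ": {}, "Contact": {}},
    }
}

# Linear structure: a fixed sequence the reader follows step by step.
linear = ["Step 1: Lesson", "Step 2: Examples", "Step 3: Quiz"]

# Hypertextual structure: a graph of associative links, where any
# page may link to any related page.
hypertextual = {
    "Hypertext": ["Vannevar Bush", "Hyperlink", "World Wide Web"],
    "Hyperlink": ["Hypertext", "URL"],
    "Vannevar Bush": ["Memex"],
}

def reachable(graph, start):
    """All pages a reader could reach from `start` by following links."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(graph.get(page, []))
    return seen

print(sorted(reachable(hypertextual, "Hyperlink")))
```

Notice that in the hypertextual graph there is no single ‘correct’ reading order: which pages you visit depends entirely on which associations you choose to follow.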

ACTIVITY: READING CRITICALLY IN HYPERTEXT

Find a website or blog that contains a mix of internal and external links. Select an article to analyze which has a good number of links in it and then:

1. Read the article so that you understand its main points;
2. Count the number of links on the webpage and classify them as internal links or external links;
3. Classify each link according to the associations that it creates. This could include: cause–effect, comparison/contrast, example, sequence, whole–part or part–whole, evaluation, elaboration, reason;
4. Critically evaluate each link that you find:
   • Who/what has been linked to?
   • Who/what has not been linked to?
   • What assumptions and biases does this reveal?
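Step 2 of this activity (separating internal from external links) can even be automated. Here is a minimal Python sketch using only the standard library; the URLs are invented examples, and a real analysis would first need to extract the links from the page’s HTML.

```python
# Classify links as internal or external by comparing host names.
# The page URL and link targets below are invented examples.
from urllib.parse import urljoin, urlparse

def classify_link(page_url, href):
    """Return 'internal' if href points to the same site as page_url."""
    target = urljoin(page_url, href)          # resolves relative links
    page_host = urlparse(page_url).netloc
    target_host = urlparse(target).netloc
    return "internal" if target_host == page_host else "external"

page = "https://example.com/blog/article"
links = [
    "/about",                                  # site-root relative
    "contact.html",                            # page relative
    "https://example.com/blog/next",           # absolute, same site
    "https://en.wikipedia.org/wiki/Hypertext", # absolute, other site
]

for href in links:
    print(href, "->", classify_link(page, href))
```

Note that `urljoin` handles the relative links for us: `/about` and `contact.html` both resolve against the article’s own site and so count as internal.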

Dynamic hypertext

Readers and writers in digital media also need to understand that hypertext is not always static, but sometimes changes over time as readers interact with it, usually with the help of algorithms which determine which content to present to different readers. We refer to these kinds of hypertexts as dynamic hypertexts. A simple example is a news website or app, which presents you with a series of articles. These could be drawn from various internet sources if it’s a news aggregation service, or just from different reporters in the case of a single news agency. If you quit the app and come back to it later, you will likely encounter a different series of articles, updated as they become available. Social media sites work in a similar way (see the case study on Facebook below) but tend to update faster than news sites. Another kind of dynamic hypertext is Google Search; in response to your input, the algorithm generates results in the form of linked hypertexts that match your query (see Chapter 2).


Finnemann (2017: 846) describes these kinds of dynamic hypertexts as ‘multiple source knowledge systems’ because they aggregate information from different sources in various ways, based on:

1. Time sensitivity: how quickly the information in the hypertext is updated;
2. Local/global sources: how far the reach of the system is, a global or more local network of sources;
3. Public/private: who can access the system;
4. Whom to whom: who can communicate with whom in the system;
5. Editability/interactivity: how you can add to, delete, and edit the hypertext;
6. Messiness: the inherent difficulty of working out how the hypertext was originally written and by whom.

If you think about it, all of the examples of dynamic hypertexts mentioned above operate on different timescales: that is, they change at different speeds. You might be able to leave a news site alone for a day and not miss too much information, but you need to check social media much more frequently to stay up to date. An example of a hypertext configuration that is extremely time sensitive is the global market trading system that provides information about the market to financial traders. Some of the screens displaying financial information that these traders track and ‘read’ update so fast that responses (like buying or selling financial products) are needed in a split second. As a result, traders have to constantly monitor the hypertext on the screen (Knorr Cetina, 2014). This draws our attention to the fact that users of hypertext can perform a range of roles (Finnemann, 1999): navigating the hypertext, which they do using the interface on the web or app; reading the hypertext; and authoring/editing the hypertext, through interfaces that allow the user to add, delete, or alter text. It’s important to note that some of these roles can also be performed by machines, such as algorithms that select posts and articles to present to you in social media (thereby ‘editing’ the page you are reading).
Understanding how algorithms go about choosing our texts for us and evaluating the dynamic hypertexts that we encounter on this basis is an important part of reading critically in digital media. We need to remember that underneath the dynamic hypertexts we read, algorithms are observing our online activity and selecting future texts that they think we want to read. In other words, at the same time you are reading the text, the text is also ‘reading’ you. Every time that we visit a web page, ‘like’ a post, or leave a comment, these algorithms learn more about us. And, as Dumitrica (2016, para. 6) points out, their role is often not so much to understand and respond to our needs as readers, but rather to ‘mould us into receptive customers’ (see Chapter 7).
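This feedback loop can be caricatured in a few lines of Python. The toy model below is not any real platform’s algorithm: it simply ranks aggregated posts by a product of recency and the reader’s past clicks, so that what you have clicked on before shapes what the dynamic hypertext shows you next. All posts, sources, and numbers are invented.

```python
# A toy dynamic hypertext: posts from multiple sources are re-ranked
# for each reader, and every click feeds back into future rankings.
from datetime import datetime, timedelta

now = datetime(2021, 6, 1, 12, 0)

# Posts aggregated from multiple (invented) sources.
posts = [
    {"id": 1, "source": "news site", "topic": "politics",
     "time": now - timedelta(hours=5)},
    {"id": 2, "source": "friend", "topic": "sports",
     "time": now - timedelta(hours=1)},
    {"id": 3, "source": "news site", "topic": "sports",
     "time": now - timedelta(hours=2)},
]

# What this particular reader has clicked on before:
# the text 'reading' the reader.
clicks = {"sports": 3, "politics": 0}

def score(post):
    """Rank by recency, boosted by the reader's past clicks on the topic."""
    hours_old = (now - post["time"]).total_seconds() / 3600
    recency = 1 / (1 + hours_old)
    interest = 1 + clicks.get(post["topic"], 0)
    return recency * interest

feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])  # → [2, 3, 1]: the sports fan sees sports first
```

A reader with different click history would see the same three posts in a different order, which is exactly what makes such hypertexts ‘dynamic’ and hard to evaluate: no two readers are reading the same page.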


CASE STUDY: FACEBOOK AS DYNAMIC HYPERTEXT

Facebook, along with other social media platforms, provides us with a good example of a dynamic hypertext that changes and updates over time. Unlike a book, which contains the same content every time you look between the covers, your Facebook newsfeed is constantly being updated so that every time you launch Facebook you encounter new, user-generated content. Facebook presents this stream of posts and information to you using hypertext, and, whether you access Facebook on its web interface or on the mobile app, the hypertext principles that underlie that presentation remain the same. The screen itself consists of a series of posts from multiple sources, namely other users, the ‘friends’ in your network. The posts are dynamically selected and displayed by Facebook’s EdgeRank algorithm, which relies on factors like: who created the post (and their relationship with you), when the post was created, how other people engaged with the post (likes, comments, shares), and what type of post it is (status update, image, video, link) (see Chapter 2). Based on these factors, the algorithm makes a prediction about how you are likely to engage with the post and then generates a ‘relevancy’ score that is used to rank posts. If you’re not happy with the newsfeed selections, there are tools that allow you to control it to some degree: the ‘hide post’ button tells the algorithm you want to see less of these kinds of posts, the ‘snooze’ button pauses posts from a particular friend, and the ‘unfollow’ button stops posts from that friend. You can also change the settings for your newsfeed to allow you to view posts from all of your friends in reverse chronological order rather than based on the ‘relevancy’ score determined by the algorithm. These tools allow you to influence the presentation of the dynamic hypertext. You can also interact with the posts you read, by ‘reacting’ (like, love, wow, etc.) or by commenting.
At this point, you become an editor, ‘writing back’ to the author of the post and adding your own content to the hypertext. Finally, you can of course create your own posts to appear in the newsfeeds of your friends.

So, what does all of this mean for the practice of reading and writing on the internet? Facebook illustrates the effects of digital media on our reading and writing practices in several ways. Firstly, Facebook makes use of internal and external linking to allow readers to adopt non-linear reading paths. You can navigate around the site in a non-linear way, jumping from your profile to your friends list to your newsfeed, for example. You can also follow the
external links generated in your newsfeed to engage with content elsewhere on the internet. Secondly, this kind of dynamic hypertext is always changing and updating. Facebook is collaboratively constructed, an aggregation of multiple users’ posts, as well as the input of machines that decide which content and advertising to present you with. It’s important to remember that the information on Facebook is not neutrally selected but has instead been targeted at you by an algorithm. As a result, you need to read critically and be on guard for misinformation and fake news. You are far less in control of the ‘news’ that you are reading than you perhaps think (see Chapter 7). Thirdly, the ability to comment and react illustrates the changing relationship between readers and writers in the digital age. On Facebook you are empowered to ‘write back’ to authors in a way that would previously have required great effort. Reading and writing on Facebook is much more like having a conversation than it is like reading or writing a book. In addition, because you can broadcast your own posts to a wide audience, you can easily relate to others in a one-to-many configuration. Finally, Facebook shows how digital media affect the way that people think. The power to publish and reach an audience is now available to all. As the barriers to publication have been removed, systems of quality control that were associated with publishing in the past have also gone. As a result, we might think of many forms of writing in digital media as much less ‘authoritative’ than writing in print-based media.
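The ranking factors mentioned in the case study can be mimicked in a toy formula. The sketch below is a hypothetical simplification: the real EdgeRank computation is proprietary and far more elaborate, and the type weights and decay rate here are invented purely to show how affinity, post type, and age might combine into a single ‘relevancy’ score.

```python
# A hypothetical, highly simplified 'relevancy' score in the spirit of
# the EdgeRank factors described above. Weights and decay are invented.
def relevancy(affinity, post_type, hours_old):
    """affinity: closeness to the poster (0-1); post_type: kind of post."""
    type_weight = {"video": 1.5, "image": 1.2, "link": 1.0, "status": 0.8}
    time_decay = 0.9 ** hours_old  # older posts count for less
    return affinity * type_weight[post_type] * time_decay

posts = [
    ("close friend's video, 2 hours old", relevancy(0.9, "video", 2)),
    ("acquaintance's link, 1 hour old", relevancy(0.3, "link", 1)),
    ("close friend's status, 12 hours old", relevancy(0.9, "status", 12)),
]

for label, s in sorted(posts, key=lambda p: p[1], reverse=True):
    print(f"{s:.3f}  {label}")
```

Even in this caricature you can see the editorial choices being made for you: a close friend’s fresh video outranks their half-day-old status update, and both compete with posts from weaker ties.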

Clickbait or ‘You won’t believe what you’re about to read!!’

As mentioned above, readers in hypertext take an active role in formulating reading paths, selecting from the links provided. However, hypertext writers and designers are not necessarily powerless in this process. They can try to influence the path that a reader will take by making some paths more prominent than others, using multimodal design elements such as the size, placement, font, colour, and language of particular links. What motivates the way that hyperlinks are designed? While we might think that it’s all about getting the attention of the audience, there could be economic reasons as well, especially if web designers are getting paid per click. If that’s the case, then online writers will of course try to design
hyperlinks in a way that provokes a response from the reader and steers them towards a certain reading path. This is the primary goal of what is known as clickbait, which is defined as ‘internet content whose main purpose is to encourage users to follow a link to a web page, esp. where that web page is considered to be of low quality or value’ (Clickbait, n.d.). Achieving this goal—more clicks and views—can be seen as a strategic discursive practice which involves designing effective text that compels readers to click on it. But it is often also seen as ethically questionable, especially where the reader is misled by deception, exaggeration, or sensationalism. The examples below, collected by researchers (Bhattarai, 2017; Blom & Hansen, 2015), illustrate some of the strategies clickbait writers use to encourage people to click on them:

1. A schoolgirl gave her lunch to a homeless man. What he did next will leave you in tears!
2. This is an A-paper?
3. He loves Beatles, menthol cigs..and longs for muscles like Van Damme [sic].
4. 21 stars who ruined their face due to plastic surgery. Talk about regrets!
5. Man divorced his wife after knowing what is in this photo.
6. Supermodels apply these three simple tricks to look young. Click to know what they are.
7. 15 hilarious tweets of stupid people that makes you think ‘Do these people even exist?’
8. Can you solve this ancient riddle? 90% people gave the wrong answer.
9. Is your boyfriend cheating on you?... He is, if he does these five things.
10. 15 tweets that sum up married life perfectly. (number 13 is hilarious)
11. Cristiano Ronaldo has finally spoken about Messi’s retirement announcement, and his words are rather shocking!

In example 1, the headline sets up an information gap by using forward referencing to the hyperlinked article. The headline has two parts: a description of the events (‘A schoolgirl gave her lunch…’), followed by the reference to the subsequent discourse (‘What he did next…’).
The effect is to set up a ‘cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding’ (Loewenstein, 1994: 75). The only way to satisfy our curiosity is to click on the link and watch the video. The headline also promises readers an emotional experience if they click, namely that they will be left ‘in tears’. In the other examples, language is also used to refer forward to the hyperlinked article using words such as ‘This’ in example 2 and ‘He’ in example 3. There is also a tendency for clickbait headlines to include numbers (examples 4, 6, 7, 8, 9, 10), use questions (examples 2, 7, 8, 9), and use emotional language (examples 1, 4, 7, 9, 10, 11). Example 5 uses ‘reverse narrative’,
a discourse pattern that is thought to provoke curiosity (Knobloch et al., 2004), where the outcome is presented first before other narrative elements like exposition, complication, climax, and initiating event. Profanity, colloquial expressions, and emotional content in images can also indicate clickbait.

So, why do journalists and other writers use clickbait? One reason relates to pure economics. From the point of view of publishers, more clicks means more views, which in turn means that they can charge more for advertising on their website. From the point of view of journalists, the number of clickthroughs to an article becomes a ‘metric’ that employers can take into account, sometimes even paying per click (Frampton, 2015). Clickbait is problematic because it might encourage journalists to ‘dumb down’ their content in order to appeal to a wider audience, and clickbait is also often associated with the spread of fake news on social media platforms. At the same time though, some of the techniques that clickbait writers use might just be good strategies to get people’s attention online (see Chapter 2). When you design a heading and description for an article, or when you design hyperlink text, you might like to think about the most effective language that you can use, both to inform readers about the target text and to generate interest, if that’s your goal. Keep in mind that there is a danger in using exaggerated and sensationalized headlines; in the short term, they might generate clicks, but if the content doesn’t deliver on its promise, then you could lose your readers in the long run.
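Several of the surface cues discussed in this section (numbers, questions, forward references, emotional language, sensational punctuation) are easy to check mechanically. The Python sketch below is an invented heuristic for illustration only; the studies cited above use trained classifiers rather than hand-written rules, and the word lists here are made up.

```python
# An invented heuristic clickbait scorer based on the surface cues
# discussed in the text; a real detector would use a trained classifier.
import re

FORWARD_REFS = {"this", "these", "what", "he", "she", "they"}
EMOTIONAL = {"shocking", "hilarious", "tears", "unbelievable", "regrets"}

def clickbait_score(headline):
    """Count how many clickbait cues a headline contains (0-5)."""
    words = set(re.findall(r"[a-z']+", headline.lower()))
    score = 0
    if re.search(r"\d", headline):
        score += 1  # numbers ('21 stars...', '15 tweets...')
    if "?" in headline:
        score += 1  # questions
    if words & FORWARD_REFS:
        score += 1  # forward reference creating an information gap
    if words & EMOTIONAL:
        score += 1  # emotional language
    if "!" in headline:
        score += 1  # sensational punctuation
    return score

print(clickbait_score("21 stars who ruined their face due to "
                      "plastic surgery. Talk about regrets!"))  # → 3
print(clickbait_score("Government announces annual budget"))    # → 0
```

Trying the scorer on the eleven examples above is a useful exercise: most of them trip several cues at once, while a plain informative headline usually trips none.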

Is hypertext making you stupid?

So far, we have focused on how hypertext has changed the way people read and write, as well as some possible consequences of these changes on the kinds of information we have available to us. But does hypertext also change the way we think about that information? Walter Ong (1996), whose work we mentioned in Chapter 1, claimed that the transition from orality to literacy did not just change the way we communicate, but also the way we think. Print literacy, he said, restructured human consciousness, making us less dependent on formulaic ways of thinking as aids to memory and more able to engage in abstract reasoning. More recent research in the field of reading confirms that learning how to read results in physiological changes in the brains of children, and even learning how to read different languages results in different kinds of changes (see for example Wolf, 2007). If the transition from orality to print literacy engendered changes in the way we think, it’s not unreasonable to suppose that the shift to reading hypertext might also be resulting in cognitive changes. Some research supports this supposition. A study conducted by Rowlands and his colleagues (2008) on how young people use electronic resources in their studies, for example, found that early exposure to hypertext likely helped their
participants develop good parallel processing skills needed to move efficiently from one document to another. The researchers worry, however, that the sequential processing skills necessary to follow the logical progression in longer narrative works were not being as strongly developed. This worry is shared by a number of other writers on digital literacies, most notably journalist Nicholas Carr (2010), who argues that reading hypertext may be compromising our ability to read conventional texts and follow complex arguments. The constant stimulus of moving from document to document, he claims, ‘short-circuits both conscious and unconscious thought, preventing our minds from thinking either deeply or creatively. Our brains turn into simple signal-processing units, quickly shepherding information into consciousness and then back out again’ (119).

There are also those who disagree with Carr. One is the famous cognitive scientist and linguist Steven Pinker. Pinker (2010) points out that distractions to information processing have always existed, and people have always developed strategies to deal with them. The ability to reason and follow logical arguments does not come from the media that we use, he says, but from effort and education. Although different media may have the power to affect the way we deal with information, we are not passive victims being ‘reprogrammed’ by media. Based on a review of the literature in neuroscience, philosopher and cognitive scientist Robert Clowes (2019: 706) concludes that ‘the idea that deep reading is being undermined at the level of neural circuitry is currently a risky and unsubstantiated hypothesis.’ Another flaw in the argument that hypertext is making us ‘stupid’ is that it assumes that ‘deep’, linear reading is an inherent quality of all print literacy practices, and that this kind of reading is inherently better than other forms of reading for grasping complex ideas.
But research in reading has shown that the reading of print documents—especially by the most advanced readers—is not always linear. Rather, good readers develop strategies to interact with documents in extremely creative and flexible ways, as Andrew Dillon and his colleagues (1989) found in their study of how academics read print-based journal articles.

Interactivity

One important early development in reading and writing in digital media was the move from ‘Web 1.0’ to ‘Web 2.0’, from the ‘read–only web’ to the ‘read–write web’. In the initial days of the internet, the World Wide Web was dominated by those who had the technical knowledge necessary to publish their work. For most people, the World Wide Web was a ‘read–only’ experience: they were able to browse the internet but not publish to it. However, subsequent developments in technology reduced these technical barriers, so that ordinary people were able to publish online. Today’s online
platforms (like blogs, wikis, social networking sites) allow us to not only read, but also to write on the web. One affordance of the read–write web is the interaction made possible by the comment function in social media, the discussion page in wikis, and so on. It is now possible for readers and writers to easily interact around content posted online. As a result, there has been a shift in the relationship between reader and writer, with readers now empowered to ‘write back’ and contribute their own points of view. Of course, this kind of interaction between reader and writer is not unique to online contexts. Think of the ‘letters to the editor’ in print-based newspapers, where writers and readers exchange their views, referring back to previous letters and making ‘comments’. With the ease with which readers can engage with writers in digital media, interactivity and participation have become features of nearly all online writing. This kind of interaction between readers and writers can lead to the establishment of specialized online reading and writing communities, where people comment on books that they have read or on each other’s writing. For example, Goodreads, a social networking site for readers, allows people to share what books they are reading, and rate, review, and discuss them with other readers. An example of a writing community is FanFiction.Net (see Black, 2005, 2006). According to Black, fanfiction is ‘writing in which fans use media narratives and pop cultural icons as inspiration for creating their own texts’ (2006: 172). Black notes that printed fanfiction has been around for many years, but that with the read–write web, fans now have ‘the opportunity to “meet” in online spaces where they can collaboratively write, exchange, critique, and discuss one another’s fictions’ (172). In her 2006 case study, Black describes the experiences of Tanaka Nanoko, an English language learner and successful writer of Japanese anime-based fanfiction.
She shows how Nanoko communicates with her readers through the ‘author notes’ that accompany her writing, and how she uses these notes to present herself to her readers, provide them with additional background, encourage particular kinds of feedback, and express her gratitude to readers and reviewers. In the author notes, Nanoko explicitly identifies herself as an English language learner, and this appears to have an effect on the kind of feedback that she receives from readers. Readers’ feedback is supportive of her writing development: some readers gently suggest improvements to aspects of her writing, while others admire her skill as a multilingual and multicultural writer.

Another, looser, form of interaction happens when people use commenting and markup tools to respond to a text and share this response with other readers. One example is the text highlighting and notes functions in Amazon’s Kindle books, which allow readers to see what other people have highlighted and commented on. Reading researcher Tully Barnett (2014) argues that these functions have transformed the act of reading, which before was mostly
a solitary activity, into a collective social activity, with people paying particular attention to passages other people have highlighted, and authors using the highlighting function to connect with readers. Readers also interact with one another through the notes function, responding to each other’s comments, and sometimes even using the notes function for real-time communication (some examples include: ‘Hey anyone on?’ and ‘Go to bed Remi’). Teachers can also use this function to communicate with their students. It is important to note that these tools also serve a commercial purpose, allowing Amazon to collect data on consumer behaviour, such as how many books people read, when they read, what parts of the books they find most interesting, and which books they abandon before they finish them.

Communities like Goodreads, FanFiction.Net, and tools like Amazon’s text highlighting and notes functions underscore the kinds of relationships between readers and writers facilitated by digital media. Writing is often now seen more as a collaborative, social activity, where the participative process of writing together is as important as the actual writing itself. In view of this, part of learning how to be a good reader increasingly involves developing practices of commenting and participating in particular communities of writers. Similarly, part of being a good writer is knowing how to engage an audience, facilitating opportunities for comments and discussion.

Automatic writing

It’s not only people who have input into our writing processes—our writing is increasingly assisted by tools for automatic text input like auto-complete, spelling check, and grammar check. These are designed either to provide feedback on writing or to anticipate what we want to say and help us compose texts. Taking this trend one step further, tools are also being developed not just to assist us with our writing but to automate the writing process for us. The following tweet, for example, was written not by a human, but by a robot called ‘Heliograf’, used by the Washington Post during the 2016 Rio Olympics to automatically report results as soon as they became available.

Post Olympics @wpolympicsbot
Jiyeon Kim #KOR wins fencing gold in Women’s individual sabre, beating Sofya Velikaya #RUS

It is not hard to see why this kind of reporting is well suited to a robot: tweets like this are short, formulaic, and can be composed based on a clear set of data. The point of such automated writing, however, is not necessarily to replace human writers. As Gilbert (WashPostPR, 2016) notes, Heliograf ‘free[s] up Post reporters and editors to add analysis, colour from the scene and real insight to stories in ways only they can.’ Stuart Frankel, Chief
Executive of Narrative Science, insists that robots will not take jobs away from human writers because they are often doing work that wasn’t being done before. In the future, though, we will likely see some kind of division of labour between humans and machines when it comes to producing most of the texts that we read (Lohr, 2011).

Another example of machine writing is Associated Press’s use of a natural language generation platform called Wordsmith to automatically summarize quarterly financial reports of publicly listed companies (Faggella, 2018). The platform converts structured data files, i.e. spreadsheets with information like company name, quarterly income, earnings per share, and so on, to text. According to Faggella (2018), this enabled AP to increase the number of financial summaries 12-fold, and most readers are unable to tell that these texts were written by machines rather than humans.

How does this kind of automatic writing in sports and financial journalism work? Levy (2012) describes the main steps. First, the robot writer needs to be provided with as much high-quality data as possible, not only about the event that it is reporting on but also about all related events. Then, in order to understand the subject matter, the robot needs to be taught the rules of the game, for example a sport like baseball. To transpose the data into a narrative, the machine will look for an ‘angle’: was the result of the game consistent with predictions, was it a come-from-behind win, did a particular player perform especially well? Using templates, style sheets, and jargon provided by human ‘meta writers’, the machine can then quickly generate the story.

Automatic writing has led to the generation of greater quantities of articles, which can be personalized for particular audiences. Automatic writing can also be customized to generate stories that are more in line with what readers want to read.
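The pipeline Levy describes (structured data in, a rule-based ‘angle’, then human-written templates filled in) can be sketched in a few lines of Python. This is a toy illustration only, not Heliograf’s or Narrative Science’s actual code; all team names, thresholds, and templates are invented.

```python
# Toy sketch of template-based story generation, loosely following the
# steps Levy describes: structured data in, an "angle" chosen by simple
# domain rules, then a human-written template filled in.

def choose_angle(game):
    """Apply simple domain rules to pick a narrative 'angle'."""
    margin = game["home_score"] - game["away_score"]
    if game["home_trailed_late"] and margin > 0:
        return "comeback"
    if abs(margin) >= 20:
        return "blowout"
    return "standard"

# Templates a human 'meta writer' might supply (invented examples).
TEMPLATES = {
    "comeback": "{home} rallied late to beat {away} {hs}-{as_}.",
    "blowout": "{home} cruised past {away} {hs}-{as_}.",
    "standard": "{home} defeated {away} {hs}-{as_}.",
}

def write_story(game):
    angle = choose_angle(game)
    return TEMPLATES[angle].format(
        home=game["home"], away=game["away"],
        hs=game["home_score"], as_=game["away_score"])

game = {"home": "Rivertown", "away": "Lakeside",
        "home_score": 78, "away_score": 74, "home_trailed_late": True}
print(write_story(game))  # Rivertown rallied late to beat Lakeside 78-74.
```

Real systems, of course, use far richer data and much larger template libraries, but the division of labour is the same: humans write the rules and the language, and the machine fills in the particulars at speed.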
For example, a team of journalists and computer scientists working on sports stories noticed that, when favoured teams lost a game, the algorithm wrote stories that people felt were too harsh (Levy, 2012). As a result, the team adjusted the algorithm so that popular players would be praised even when they lost. One problem is that these sorts of practices might end up ‘diluting’ our reading by serving us up with only the ‘angle’ that we want to see and taking away the critical edge. Another advantage of this kind of technology is that machines have access to a lot of data and so might be able to highlight unexpected connections that are hard for humans to see. The increasing quantity of articles produced by writing robots can also make publishers more visible, as they may lead to more referrals from Google, which ranks new content on popular subjects highly (Lohr, 2011). One possible problem for readers is that automatically generated stories could be used to saturate newsfeeds in social media, overwhelming readers with information. The possibility of large amounts of automatically generated spam and fake news means readers need to be especially vigilant in selecting and consuming content online. Not only are
algorithms selecting your news for you, but other algorithms and bots could be manipulating those algorithms as well.

So, if robot writers can conjure up sports and financial reporting, can they also be creative and write fiction? Kris Hammond, the Chief Technology Officer at Narrative Science, which generates automatic news stories, is certain that eventually a machine will win a Pulitzer Prize (Lohr, 2011). Perhaps a first step in this direction was the opinion article written for The Guardian by GPT-3, OpenAI’s language generator (GPT-3, 2020).

Features of automatic writing appear in the academic context as well, often in the form of tools that evaluate writing. For example: the Educational Testing Service uses an automated writing evaluation tool to score the writing component of TOEFL, the Test of English as a Foreign Language; academic publishers and teachers use Turnitin’s plagiarism detection software in order to detect possible instances of plagiarism; and writers of all kinds are using machine learning tools like Grammarly to get feedback on issues of grammar and style in their writing.
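Commercial plagiarism detectors keep their matching methods proprietary, but the basic idea behind many of them, comparing overlapping word sequences between a submission and candidate sources, can be illustrated with a toy n-gram overlap measure. Everything below is a simplified sketch for illustration, not any product’s actual algorithm.

```python
# Toy sketch of text-similarity detection via shared word trigrams.
# Real detection tools are far more sophisticated; this only shows
# the core idea of measuring overlapping word sequences.

def ngrams(text, n=3):
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission, source, n=3):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping cat"
original = "a completely different sentence about something else"

print(round(overlap(copied, source), 2))    # 0.57
print(round(overlap(original, source), 2))  # 0.0
```

A high overlap score flags a passage for human review; as the chapter notes, the obvious cut-and-paste cases are precisely the ones such simple measures catch most easily.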

Mashups and remixing

As we’ve seen, hypertext and interactivity provide readers and writers with new ways of organizing and associating text, as well as of relating to other readers and writers. Digital media also make it easier to copy and share content created by others, and as a result, writers now have the ability to create new kinds of digital texts or artefacts that incorporate and appropriate the works of others: for example, ‘mashups’ or ‘remixes’. The well-known internet scholar Lev Manovich (2007) suggests that we now live in a ‘remix culture’ and goes so far as to claim that the ‘World Wide Web [has] redefined an electronic document as a mix of other documents.’

You can see this remix culture at work in a lot of popular practices including: music remixes that combine different musical works; political commentary media remixes, setting politicians’ speeches to popular music or creating new speeches by combining previous ones; web mashups, combining different web services to create new services; remakes of viral videos; movie trailer remixing; photoshopping online images; and many fan-based practices like fan fiction and fan art. At a more mundane level, social media communications are replete with remixing as people modify and share memes, or incorporate animated gifs or images into their social media messages. As these examples show, remix can involve any kind of media (video, audio, image, text) and usually involves not just appropriating existing work, but also modifying it and creating something new.

The terms mashup and remix are used to refer to new and original texts or cultural artefacts that have been created out of existing works. Although the difference between a mashup and a remix is not always clear, most consider remixing a process of altering or re-engineering aspects of an existing work and ‘mashing’ as combining two or more (sometimes seemingly
incommensurate) works together. Because of the way that mashups and remixes build on the work of others, they pose some interesting questions, of both a philosophical and practical nature. They challenge us to rethink beliefs about originality, intellectual property, and ethics.

At the philosophical level, we need to remember that the practices of digitally mashing and remixing texts have obvious predecessors in print media. The most obvious is the practice of quoting, where the author explicitly incorporates the words of another person in their text. Most people would accept that a text that uses quotations appropriately can still be considered an original text, in spite of the fact that it builds on previous texts. We can take this further by pointing out that, if you think about it, all original texts build on previous texts in some way. The famous Russian literary critic Mikhail Bakhtin (1986: 89) said that texts are ‘filled with others’ words, varying degrees of otherness or varying degrees of our-own-ness’, and the French philosopher Roland Barthes said a text is just a ‘tissue of quotations’ (1977: 146).

Nevertheless, some people are concerned that the remix culture is fostering plagiarism in educational institutions like universities. They point to cases where students have submitted work that reads like a patchwork of online articles cut and pasted into their assignments without any proper form of attribution. Such cases clearly pose a serious problem, but they are often relatively easy to resolve because they are so obviously unoriginal.

Others are concerned about broader issues related to the ownership of texts, intellectual property, and copyright, especially in relation to commercial, pop culture texts like movie clips and songs. Owners of this kind of commercial content are understandably upset about the ease with which perfect digital copies of their work can be circulated, remixed, and ‘mashed’ into the work of others.
They see remixing as a kind of stealing, which deprives them of their ability to profit from their own work. Lawrence Lessig, an American legal scholar who specializes in intellectual property and cyber law, however, argues that this attitude constrains creativity and the development of culture by unnecessarily limiting the extent to which people are free to build upon the creative work of others. In his 2004 book, Free Culture, he suggests that when it comes to the creation of cultural texts we have moved from a ‘free culture’ to a ‘permission culture’. He provides a number of examples to support this idea, including the fact that the first Mickey Mouse cartoon, Steamboat Willie (distributed in 1928), was based on a silent film starring Buster Keaton called Steamboat Bill, Jr., which appeared in the same year. According to Lessig (2004: 22–23), ‘Steamboat Willie is a direct cartoon parody of Steamboat Bill.’ Such creative borrowing was commonplace at the time, but, given the current legal framework, people today have less freedom to borrow and build upon the creative works of the Disney Corporation in the way its founder borrowed and built upon the creative works of others. Nevertheless, despite
these legal constraints and because of the affordances of digital media (which make texts both more reproducible and more mutable), remixing and mashing have become common forms of creative production on the internet. Remixes and mashups are much more than simply a matter of copying: they require a creative re-working of the source material so that when it is placed in the new context and mixed up with texts from other sources, it takes on a new meaning or significance.

Lessig points out that a major change in the way that culture is produced now, as compared to in Walt Disney’s day, is that prior to the internet, culture was produced by a small elite group who had access to the media. In the age of digital media, however, the barriers to publishing have been removed, opening the way for thousands upon thousands of ‘amateur’ publishers to contribute to the creation of cultural texts. By ‘amateur’ Lessig means that these people are involved in cultural production for the love of it, rather than for monetary reward. Often, they are very talented and create cultural texts of great interest. Such people want to share their creations. But this is where the law of copyright gets in the way.

Copyright law automatically protects creative works (for example books, songs, films, software) and prohibits others from making copies without the consent of the copyright holder. Under US law, these protections last for the author’s entire life, plus 70 years (or 95 years altogether for a corporation). If an author does nothing, the protections will apply whether or not they want them, and whether or not they want to share their creative works with others so that others can sample and remix them. Lessig has proposed a system, called Creative Commons Licensing, to make it easier for people to create and remix cultural texts.
The system works by making available carefully drafted but easy-to-understand licenses that clearly describe which intellectual property rights the creator wants to reserve. Using a Creative Commons License, you can specify that your creative work is available for others to use, subject to a range of possible conditions. For example, you could specify that future remixes of the work must attribute the creator of the original, must not be for commercial purposes, and/or must also be issued under a Creative Commons License (see http://creativecommons.org for the full range of licenses).

Even though copyright law lags behind the technological developments of digital media, a remix culture has developed in which the new texts that people are creating explicitly mix and build upon the prior texts of others. At the same time, this practice of remixing must be governed by an appropriate ethical framework. The Creative Commons Licenses identify the most important elements of this framework in the choices that they offer creators: attribution (do you want credit for your work?); commercial use (do you want others to profit from your work?); and future participation (do you want others to share their work too?).
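The choices that a Creative Commons License bundles together can be modelled as a small set of flags. The sketch below is an illustrative model only, not an official CC tool (the actual licenses at creativecommons.org are legal documents), and it simplifies the real license family.

```python
# Illustrative model of Creative Commons license choices as flags.
# Not an official CC tool: the real licenses are legal texts, and
# this simplification omits some combinations.

def cc_code(attribution=True, commercial_use=True,
            share_alike=False, derivatives=True):
    """Compose a CC-style short code from a creator's choices."""
    parts = ["BY"] if attribution else []
    if not commercial_use:
        parts.append("NC")          # no commercial use of the work
    if not derivatives:
        parts.append("ND")          # no remixes at all
    elif share_alike:
        parts.append("SA")          # remixes must carry the same license
    return "CC " + "-".join(parts) if parts else "CC0"

# "Others may remix my work non-commercially, provided they credit me
# and release their remixes under the same terms":
print(cc_code(commercial_use=False, share_alike=True))  # CC BY-NC-SA
```

Each flag answers one of the questions at the end of the paragraph above: attribution, commercial use, and future participation.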


ACTIVITY: REMIX CULTURE

A remix appropriates existing works and modifies those works in order to create something new. It does so by ‘sampling’ the existing work and combining those samples in innovative ways: a ‘sample’ is a copied stretch of the original work that is picked up in the remix. In this activity, we will consider how samples are combined in a remix and evaluate whether this process can be seen as ‘creative’.

Search the internet to identify a remix (preferably one that you know well, including the original material) or use one of the suggested remixes below. Some interesting remixes include:

1. Music remixes:
   a. Kutiman, ‘Thru you’: https://youtu.be/tprMEs-zfQA
   b. ‘Charlie bit me’ remix: https://youtu.be/pOle1AnPOc4
2. Political commentary remixes:
   a. Cassette boy remix the news: https://youtu.be/dh_Og-MjWZI
   b. Barack Obama singing…: https://youtu.be/bmhbqKT7ONo
3. ‘Faux’ movie trailers:
   a. Mary Poppins: https://youtu.be/2T5_0AGdFic
   b. Brokeback to the Future: https://youtu.be/8uwuLxrv8jY

Watch your chosen remix and discuss the following questions:

1. Where do the ‘samples’ for the remix come from? That is, what source material does the remix sample from?
2. How similar or different is the remix from the original? Think in terms of the message conveyed, the genre/audience/purpose, and the tone/style of the texts.
3. Based on your analysis, to what extent do you think that the remix can be considered creative and original? Why?
4. Has the remixer made any effort to acknowledge the original? Why/why not? Would you consider the remix to be a ‘fair use’ of the original version? Why?

Conclusion

The affordances of digital media have had a demonstrable effect on the ways that we can ‘do’ reading and writing. First, hypertext and the associated practice of linking allow us to make new kinds of meanings, organizing text in non-sequential ways and linking it to other texts to create sometimes subtle associations. Second, new technologies make it possible for hypertexts
to be generated dynamically with the help of algorithms that select content for you and update the hypertext over time. These algorithms act like ‘co-editors’ of the text, arranging human input and other data according to their rules. Third, hypertext and the interactivity of the read–write web have changed the way that readers and writers relate to each other, with readers moving from the position of passive recipients of information to active collaborators in the process of knowledge creation. Fourth, machines are now playing a more important role in this interactive reading–writing process, with robot writers that can generate texts and other machine learning tools able to provide feedback to writers. Finally, the ability to easily copy digital texts has led to the development of a ‘remix culture’ in which writers explicitly create new cultural texts by building upon existing prior texts.

All of these changes contribute to an evolution in the way that we think about reading and writing. Hypertext has blurred the boundaries of reading and writing, and many people now see the two activities as essentially interconnected. The read–write web has facilitated the development of online communities of readers and writers, and as a result reading and writing are now perceived as more collaborative, social activities than they were before. Finally, now that anyone can participate in the creation of cultural texts and publish their work online, published writing has lost much of the perceived authority that it had in the days of print media.

Of course, an equally striking development is the way that digital texts now incorporate multiple modes, with print, image, audio, and video all working together to create meaning. We will examine this shift from page to screen and the associated shift from print to image in the next chapter on multimodality.
Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 4

Multimodality

It should be obvious that making meaning has always involved more than just words. In speech, aural elements such as the pace, rhythm, and tone of your voice, as well as visual elements such as your gestures and facial expressions, all contribute to the message that you send. Similarly, in writing, the use of visual elements such as font, layout, colour, spacing, and accompanying images can have an effect on the meaning you make. Spoken and written texts have always drawn on multiple modes: aural, visual, and gestural, as well as verbal and textual. In this chapter, we will focus on the way that the affordance of multimediality mentioned in the last chapter makes it easier for users of digital media to combine multiple semiotic modes based on different systems for meaning making. This practice of combining multiple modes is known as multimodality, and texts that are made up of a combination of modes in this way are called multimodal texts.

One effect of developments in digital media has been to greatly increase the multimodal content that we normally encounter in texts. This is most obvious in the texts that we encounter on screens, because these digital texts can incorporate audio and video in a way that print-based texts cannot. In social media, it’s very common to combine different modes, for example posting images or video accompanied by a written comment. In fact, a lot of social media is designed specifically for modes other than writing: think of videos on YouTube and TikTok, images on Instagram, and image-based interactions on Snapchat. Designing messages for these platforms increasingly requires literacies different from those we use to produce and consume written texts.

But this shift towards multimodality is also visible in print media, which now tend to incorporate more visual elements. One example is the way that newspapers now include more images than before, many of them colour rather than black and white.
The same can be said of scientific texts (Lemke, 1998b). Pages of the World Wide Web have also become more multimodal as digital technologies have improved. Early websites often consisted of just written text and hyperlinks. But websites these days can be
extremely complex multimodal ensembles of video, image, graphics, sound, and writing. A good example of the multimodal complexity in websites is Twitch TV (twitch.tv), a video streaming site often used by video gamers to livestream gaming sessions. Figure 4.1 shows a mockup of a livestream on Twitch. In the centre of the screen, the livestreaming video is divided into two visual components: 1) the gamer’s screencast, covering most of the screen, and 2) a video of the gamer himself in a small picture-in-picture frame on the right. To the left of the screen are recommended channels, while to the right is the chat area, where viewers (sometimes in the tens of thousands) can post comments. Layout has been strategically designed so that the dominant element, the live stream, is in the centre of the screen and other elements like the text chat and navigation options are at the periphery. If you consider that both the video and the chat in this livestream are unfolding in real time, then you can appreciate how much information is presented through this multimodal webpage design and how quickly it changes. In fact, there is so much information that many livestreamers

Figure 4.1 Mockup of Twitch livestream.


use two monitors: one for the game and one to keep track of the chat (Recktenwald, 2017). What’s more, the information has to be made available not only through the web interface to users of laptops and desktops (shown in the figure) but also through an app for users of smartphones and tablets, with their smaller screens. The Twitch smartphone app is designed to present all of the same information as the webpage, but it does so by splitting different kinds of information into different ‘screens’ that users can swipe through: one screen for the video stream and chat, one for browsing, one for searching, and so on. It’s not sufficient just to design for the screen; the creators of websites like twitch.tv have to design for all screens.

In this chapter we will focus on the implications of multimodality for practices of ‘reading’ and ‘writing’ (or perhaps more aptly ‘designing’). It is increasingly important for readers and writers to understand the logic of multimodal communication. In particular, they need to be aware of 1) the affordances and constraints of text and images, 2) how to design for the visual interfaces available on different kinds of screens, and 3) how image, text, moving image, and sound can be combined to make meaning.

From page to screen

Gunther Kress, a pioneering scholar in the field of multimodality, describes the evolution in media from the printed page to the digital screen. According to Kress, the underlying principles with which these media organize information are different, with books ‘organized and dominated by the logic of writing’ and the screen ‘organized and dominated by the image and its logic’ (Kress, 2003: 19). Kress believes that the technological affordances of digital media have an effect on the way we communicate, pushing us to adopt the logic of the image promoted by the screen.

So, what is the difference between the logic of writing and the logic of the image? Writing, like speech, follows a temporal and sequential logic. For example, when you tell a story in speech, the story unfolds in time and as a result you are forced to organize your story into a kind of sequence. Similarly, in writing, the logic is sequential. Even though writing is fixed in space (as words on a page) rather than unfolding over time, readers are still, for the most part, expected to proceed in a linear, sequential fashion.

By comparison, the logic of the image is spatial and simultaneous: all of the information in an image is displayed at the same time, with different elements of the image related to one another in space. Because of this spatial logic, images tend to have a more direct effect, often provoking an immediate emotional reaction from viewers. In addition, images tend to be more ‘polysemous’, that is, they are capable of sending numerous messages at the same time. This can create a challenge for communication, as viewers of images are sometimes presented with a range of competing messages to choose from or to integrate.

68

Digital tools

Because of their different affordances and constraints, the mode that you use makes a difference to the kinds of meanings that you can make. Meanings in images are more ‘topological’, that is, images are capable of representing continuous phenomena, like the changing slope of a hill, or the various shades of colour in an object. In contrast, meanings in speech or writing are more ‘typological’. Language describes things in terms of categorical choices: for example, we say the hill is ‘steep’, or the colour is ‘blue’. That’s why scientists often use visual resources, like charts and graphs, to describe complex, continuously changing relationships. For example, using a graph allows us to communicate the changing acceleration of an object over time, something that would be much harder to communicate with words alone. Knowing which mode is best suited to a particular kind of information can make you a much more effective communicator.

Different modes have different underlying assumptions and require you to make different kinds of choices. When you write a sentence in English you typically follow the syntax subject–verb–object, and as a result you have to explicitly describe who is doing what to whom (in that order), or who is related to whom (and how). In contrast, when you draw a picture, you have to be explicit about the spatial relationships between the entities. For example, if you describe in writing a traffic accident that you witnessed, you might say ‘The truck hit the car’ and commit to a particular relationship of agency (with the truck, in this case, doing the ‘hitting’). On the other hand, if you draw a diagram to represent the accident, then you might depict the truck and car in a collision on the road and commit to a particular spatial relationship, i.e. where exactly on the road the collision occurred and from what angle you are depicting it.
Your diagram would also explicitly show whether other vehicles were present at the time, and where exactly on the road they were. The picture, however, may not contain the same information about agency that your sentence did.

With the move from the page to the screen, we see a change in the amount and precision of information that is communicated through different modes. In their study of educational materials (textbooks and online materials) from the 1930s to the present day, Bezemer and Kress (2016) noticed that the number of images increased, and the relationship between images and the text changed, with images adopting a more important role in the overall message in more recent materials. They also noticed a greater variation in typefaces used, increased use of background colour, and/or ‘boxing’ to mark off sections in the text. These changes inevitably have an effect on the way we ‘read’ such texts. In traditional texts, where writing is the dominant mode, a preferred reading path is usually evident. In English texts you usually begin at the top left of the page and proceed in a linear way to the bottom right. But when writing is no longer the dominant mode, we need to develop different kinds of reading paths. In order to negotiate reading paths through such multimodal texts, Kress (2003: 159) suggests we first scan
the page to identify the modes it draws on, and then decide which mode (if any) is dominant. We can then read the text as an image by interpreting the spatial relationships between entities on the page, or read it as written text by following the linear/sequential logic of writing.

ACTIVITY: SIGN LANGUAGE?

Consider the poster in Figure 4.2 below and discuss the following questions:

• What visual elements has the designer of this text used?
• What message is conveyed by these visual elements?
• What message is conveyed by the textual elements, i.e. the words on the page?
• How do the textual elements interact with the visual elements? What effect does this interaction create?
• Give examples of other texts that combine textual and visual elements (including images) for effect. How do the textual and visual elements interact?

Figure 4.2 Drug Enforcement Administration poster.

Interfaces

The screen devices that we interact with, like smartphones, tablets, and laptop/desktop computers, use interfaces to mediate that interaction. One of the earliest such user interfaces was the ‘command-line interface’, where users could interact with their device by typing written commands, line by line, into a terminal. Interfaces have since evolved to become more visual; today we mostly interact with computers through graphical user interfaces, or GUIs. The Microsoft Windows operating system is an example of a GUI. Instead of typing commands into a terminal, users point a mouse at icons arranged on the screen and click to execute commands, for example, to run an application.

One important feature of interfaces is that they often use organizing ‘metaphors’ to structure the user experience. One of the organizing metaphors of MS Windows is the ‘desktop’. When you are logged in to your computer, the desktop is displayed on the screen as a kind of virtual workspace where you can store documents and other media that you are working on or place shortcuts to commonly used apps. By referring to this virtual space as a ‘desktop’ the designers are likening the user experience to sitting at a desk. This metaphor, however, is not used for smartphones or tablets: instead of a ‘desktop’ you have a ‘home screen’ where app icons are arranged. In each of these cases, it is not just the design of the interface but also the metaphor that frames it that help to define the user’s experience.

As well as operating systems, apps and webpages also use GUIs, and, when it comes to reading and writing for the screen, it’s helpful to understand some of the design choices such interfaces make available for devices with different form factors (different sizes and shapes). One example of an attempt by designers to provide a consistent user experience across devices is Google’s ‘Material Design’ (material.io).
Designers on this project took a fresh look at the way that people interact with their devices and came up with the idea that the ‘material’ that makes up an app is really a kind of ‘quantum paper’. You could say that the book, paper, and pen were the underlying metaphors for the interface design system that they were creating. As one designer put it, ‘There’s a very, very clear parallel between the systems of book design and the way that humans also hold and use devices. People use materials in life every day, and we want them to understand software in the same way’ (Google Design, 2015: 01:54) (see Chapter 6).

In order to design an environment inspired by the metaphor of ‘quantum paper’, the team used a range of visual resources, aiming to simulate a more tactile user experience and add a sense of depth to the interface. For example, touching a button on the screen causes an animation that depicts the button moving up, as though ‘magnetically’ drawn to the user’s finger. Semiotic resources of shading, animation, colour, and font are all recruited in the design of this and other elements. Figure 4.3 shows what some of the key elements of visual design look like in this design project. The figure also gives us some idea of the way that the

Figure 4.3 Elements of Material Design interface.


layout of the screen can be recruited in the design of an interface. We see important layout components like a title bar, a hamburger icon (three lines) that expands into a menu, a search icon, and various kinds of buttons, tick boxes, toggles, radio buttons, text input fields, sliders, and a navigation bar at the bottom of the screen. The floating action button (represented by a + sign inside of a circle) at the bottom right highlights one single possible action that designers judge to be especially important (like composing a post in social media). The overall layout adopts some conventions that we have probably come to expect from mobile interfaces: for example, title and menu at the top and key navigation options at the bottom of the screen.

Because of the form factor, screen layout possibilities change across devices of different sizes, as illustrated by the comparison of the smartphone and desktop interfaces for the Twitter profile page shown in Figure 4.4. Of the two devices, the smartphone screen is the more constrained in terms of space, and so a single-column design is adopted, with a banner at the top and a navigation bar at the bottom. The user’s posts are displayed in the middle of the screen and the user can scroll through them to access earlier posts. A floating action button (bottom right) allows the user to quickly access the form to compose a tweet. In contrast, the desktop version uses three columns, the first column providing navigation options, the second displaying the user’s posts, and the third featuring the search form, suggested follows, and a list of trending topics. At the bottom right of the screen is a further pop-up list for direct messages. This example shows not only how different devices force the adoption of different interface designs, but also that the design across the interfaces can be coherent enough to create a similar user experience. Note the similar design of icons and layout of tweets in the two interfaces.
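The way an interface adapts to form factor, as in the smartphone/desktop comparison above, boils down to a simple rule that switches layouts at a width threshold. Here is a minimal sketch in Python; the 600-pixel breakpoint and the layout labels are illustrative assumptions, not taken from Twitter or any real app.

```python
# A sketch of 'responsive' layout logic: choose a screen layout
# based on the width of the viewing device.
def choose_layout(viewport_width_px: int) -> dict:
    """Return a layout description for a given screen width (in pixels)."""
    if viewport_width_px < 600:  # illustrative smartphone breakpoint
        # Constrained screen: single column, navigation bar at the bottom,
        # menu 'repackaged' as a hamburger icon to free up space.
        return {"columns": 1, "navigation": "bottom bar", "menu": "hamburger icon"}
    # Spacious screen: three columns, with navigation in the first column.
    return {"columns": 3, "navigation": "left column", "menu": "expanded"}

print(choose_layout(375))   # a typical smartphone width
print(choose_layout(1280))  # a typical desktop width
```

On a real responsive website this decision is usually made declaratively, with CSS media queries, rather than in application code like this; the sketch simply makes the underlying rule visible.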
If you are designing a webpage or app yourself, you need to keep in mind that the page you are designing may look different to you than it does to your audience: for example, if you design it on a desktop computer with a large screen but your audience consumes it on a smartphone with a much smaller screen. The obvious solution is to test your design on devices of multiple form factors to ensure that it works well on them all. On ‘responsive’ websites, the site detects the screen resolution and adjusts the display accordingly—for example, a menu bar displayed on a desktop would be ‘repackaged’ as a hamburger icon, freeing up space for other content.

As you can see from the above discussion, layout is important when it comes to presenting information in a structured way. Gunther Kress and Theo van Leeuwen (2006), in their work on the ‘grammar’ of visual design, point out that the visual layout of an image (or screen) can be divided into different regions: the left and the right, the top and the bottom, and the centre and the margin. In some cases, there are very clear divisions between these different regions. When that happens, readers often form expectations about the kind of information they should see in different regions. Kress and


Figure 4.4a,b Twitter interface on smartphone and desktop.


van Leeuwen describe the kinds of expectations associated with different kinds of layouts as follows:

• A left-right visual division: A lot of social media and web app login pages seem to use this layout, with information about the service (logo etc.) on the left and the register/login form on the right. According to Kress and van Leeuwen, information on the left is ‘given information’, that is, information that we already know. Information on the right is ‘new information’, the new message that the reader is intended to take from the text;

• A top-bottom division: A simple two-part top/bottom division is rare, but websites often organize information in multiple ‘blocks’ from the top to the bottom of the page (which you usually need to scroll through). According to Kress and van Leeuwen, the bottom represents the ‘real’, i.e. concrete reality and factual information, while the top represents the ‘ideal’, i.e. the aspired-to promise, hope, or ideologically foregrounded information. Often, the banner at or near the top of the screen includes idealized promotional images about the brand presented. In contrast, the bottom of the screen is often where one finds concrete information and details about the product, as well as legal notices and contact information of the company;

• A centre-margin division: Probably the most famous example of this is Google’s search page, which presents the search bar as part of a cluster in the middle of the screen, while other elements appear at the edges. Here, the screen is organized around a dominant element in the centre that gives meaning to the other elements that surround it. As Kress and van Leeuwen (2006: 196) note: ‘For something to be presented as Centre means that it is presented as the nucleus of the information on which all the other elements are in some sense subservient.’ In Google’s case, that element is search.

Keep in mind that designers sometimes combine these different patterns. Understanding the kind of information that these different regions can convey may help you make decisions about which part of the screen to use when you design the visual layout of images or web pages.

Other modes that would seem to be relevant to interface design are those of touch and gesture, especially with respect to touchscreen devices. A good example of a gesture incorporated into mobile app interface design is the ‘pull to refresh’ feature found in many social media apps. As you pull down on the feed, the app checks for and displays any new posts. While this gesture efficiently gets around the need for a refresh button, the function has attracted some controversy, with some people suggesting that it adds to the addictive quality of social media apps and even comparing it to pulling the lever on a one-armed bandit (Harris, 2016) (see Chapters 2 and 7). Another


example is reorienting a mobile device, flipping it from portrait to landscape orientation, which is a gesture used in some apps as a command to expand a video to full screen. With the front-facing camera on mobile devices, some apps even use facial recognition, facial expressions, or gaze as ways of interacting with the interface.
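At bottom, a gesture like ‘pull to refresh’ is a mapping from a measured movement to a command. A minimal sketch in Python; the 80-pixel threshold is an invented value for illustration, not a figure from any actual app.

```python
def interpret_pull(pull_distance_px: float, threshold_px: float = 80.0) -> str:
    """Interpret a downward pull on a feed.

    Short pulls are treated as ordinary scrolling; a pull past the
    threshold triggers a refresh, replacing an explicit refresh button.
    """
    return "refresh feed" if pull_distance_px >= threshold_px else "scroll"

print(interpret_pull(120.0))  # a long, deliberate pull
print(interpret_pull(20.0))   # a small scroll movement
```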

CASE STUDY: MULTIMODAL DESIGN IN INSTAGRAM

Instagram was launched in 2010 as a photo-sharing app, originally only available through the Apple App Store for iPhone devices. At the time of writing, Instagram had over one billion monthly active users (Statista, 2020). One distinctive feature of Instagram is that it was originally intended to be a social networking site that exploited the visual mode—people were supposed to post images, rather than text. Of course, this kind of image sharing has since become a common feature of social media. The Instagram platform and its users, however, retain a commitment to communication in the visual mode in a way that other SNSs do not, with users often seeking to visually outdo each other with the images that they post. This makes Instagram a very interesting case study for multimodal design.

Like other social networking sites, Instagram draws on the affordances of digital imaging. Digital imaging allows users to capture an image, immediately view it, manipulate it in various ways, share it through social media or on the internet, and permanently store it if desired. In addition, portable devices like smartphones allow people to capture, publish, and share moments of their lives that would have gone undocumented in earlier times.

When using Instagram, an important goal is to fully exploit the visual mode of communication and create effective visual designs. As we have mentioned in this chapter, images adopt a spatial, simultaneous logic that can create meaning immediately and often appeals to emotion, compared to the linear, sequential logic of writing. Instagram users find that they can use both of these modes for different purposes: often, well-designed images serve to catch people’s attention while the accompanying written captions make more specific calls to action.
According to Leaver, Highfield, and Abidin (2020), Instagram was designed, at least initially, with a certain kind of retro visual aesthetic in mind. This aesthetic harkens back to the times of film photography and is underpinned by ‘a digital reimagining of photographic traditions’ (Leaver et al., 2020: 41). This particular visual culture is

76

Digital tools

cultivated in part by the way that the Instagram interface is designed to promote a particular visual identity. For example, the Instagram icon started off as an explicit, though stylized, reference to a Polaroid OneStep instant camera. The icon has become more abstract over time as the visual identity of the app has developed. Similarly, Instagram initially restricted posts to square images, 640 pixels across, the width of the iPhone screen at the time and a choice which also evoked Polaroid photography, which produced (roughly) square images with white space for a possible caption below. We can also see Instagram’s distinctive visual culture in the kinds of filters that were initially offered. Some of the early filters that Instagram featured provided a ‘vintage’ look, with a dreamy washed-out feel, once again reminiscent of Polaroid photography.

The functionality of the filters that Instagram has provided over the years has had an interesting effect on the users of Instagram (and of other social media). Filters are pre-selected settings that users can apply to their images and that create particular effects by automatically modifying features like colour, saturation, and brightness. Arguably more than any other tool, it was Instagram that normalized editing one’s pictures before posting them to social media, so that today this kind of image manipulation has become the ‘default action’ (Tiidenberg, 2018: 56). Subsequently, Instagram filters have developed to include ‘adornments’ like hats and glasses, bunny ears and whiskers, or ‘distortions’ that make people look like cartoon characters or pieces of modern art. These filters need your permission to access the front-facing camera and apply the artificial intelligence algorithms that produce the filtered image. They therefore play a role in another kind of normalization: namely, the everyday use of facial recognition technologies (Stark, 2018).
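A filter of the kind described here is essentially a fixed set of adjustments applied to every pixel of an image. The sketch below, in Python, shows a hypothetical ‘vintage’ effect that washes out the saturation and lifts the brightness of a single pixel; the particular numbers are invented for illustration and are not Instagram’s.

```python
def vintage_filter(pixel, saturation=0.6, brightness=1.1):
    """Apply a washed-out 'vintage' look to a single (r, g, b) pixel."""
    r, g, b = pixel
    grey = (r + g + b) / 3  # the pixel's average (grey) level
    # Pull each channel towards grey: saturation=1 keeps the original
    # colour, saturation=0 makes the pixel fully grey.
    washed = (grey + saturation * (c - grey) for c in (r, g, b))
    # Brighten each channel and clamp to the valid 0-255 range.
    return tuple(min(255, int(c * brightness)) for c in washed)

print(vintage_filter((200, 120, 40)))  # a warm orange becomes paler and lighter
```

A real filter applies such a transformation to millions of pixels at once (and usually in a library like Pillow rather than pixel by pixel), but the principle is the same: the ‘look’ is just arithmetic on colour values.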
There are lots of different kinds of Instagram accounts and users: for example, celebrities, influencers (see Chapter 10), advertisers, photojournalists, adventurers like urban climbers and nature trail walkers, nature photography accounts, food photography accounts, social justice accounts, and meme accounts, to name a few. What these accounts often have in common is their attention to careful, multimodal design. The fact that, alongside their public accounts, some people create a private ‘spam’ account—limited to their ‘real’ friends, and where they feel that they can post whatever they want without judgement—underlines the fact that the norm of publication on Instagram is to aim for careful design. A spam account (also called a ‘finsta’ or ‘fake’ Instagram account) subverts this norm.


When people post to Instagram, they need to make a lot of design choices. One choice has to do with the image posted. If that image is a photograph, then it could be an original, an edited or collaged version, or an appropriated image that is being ‘recycled’, as with memes or pop culture references (Highfield & Leaver, 2016). But the image doesn’t always need to be a photograph. It can also be an inspirational quote or a text-based observation—to post these, you need to create an image file that portrays the written text over some kind of neutral background.

The next choice is what to put in the caption. Each post is actually a multimodal ensemble that includes the image itself and a caption consisting of writing, emojis, and hashtags that also function as hypertext links. In writing the caption, people need to pay attention to the way that the text and the image interact. Does the text support the meaning of the image? Does the text somehow elaborate on the meaning of the image? Do the text and the image together cohere, so that the combination makes sense? Some people advise Instagram users to use the caption to engage more specifically with the audience: for example, you could ask a question to solicit responses in the comments, or make other ‘calls to action’ like asking people to subscribe, register, tag friends, and so on. Some people compose these captions very carefully, considering their visual appearance on the screen and also how they will appear in someone’s feed, where the caption will be shortened.

The last set of choices relates to the feed. A user’s feed is where others can view all of their posts. On mobile devices, it appears as a three-by-three grid of thumbnail images with the newest posts at the top of the screen. As well as beautiful posts, people also try to create beautiful, visually coherent feeds. There are even apps that allow you to preview and reorganize your feed.
This can mean designing a series of images to work well together in terms of their subjects, their themes, and their colour. It can also mean presenting a consistent mix of content, for example alternating images and inspirational quotes, so that the grid looks like a checkerboard of photographic and text-based content.

In summary, Instagram creates a context where careful multimodal design is an important platform norm. In creating their posts, Instagram users draw on the affordances of image and text to create meanings and engage their audience. Not only do they carefully consider each image as they post, they also design captions, paying attention to text/image interaction, and design their feed with a view to creating an aesthetically pleasing, visually coherent whole.
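The profile grid described in this case study is straightforward to model: posts are shown newest first, chunked into rows of three thumbnails. A minimal sketch in Python; the post names are invented, and the assumption that posts are stored oldest first is ours.

```python
def layout_feed(posts, columns=3):
    """Arrange posts into rows of thumbnails, newest first,
    like the three-by-three profile grid on a mobile device."""
    newest_first = list(reversed(posts))  # assume posts are stored oldest first
    return [newest_first[i:i + columns]
            for i in range(0, len(newest_first), columns)]

posts = [f"post{n}" for n in range(1, 8)]  # post1 is oldest, post7 newest
for row in layout_feed(posts):
    print(row)
```

Seeing the feed as rows of three is what makes the ‘checkerboard’ strategy possible: a user who wants alternating photographs and quote images has to plan the order of their posts around this grid.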


Visual design in multimodal texts

Now that taking and sharing images has become so easy to do, a lot of our communication is done through visual texts. One common kind of image that people share is the selfie. Even though selfies can be created and shared in an instant, they are still carefully designed as an important way that we present ourselves to the world. Taking a selfie means drawing on a whole range of semiotic resources, including the setting you choose, the clothes you are wearing, your facial expression, the way you angle the camera, and any filters that you might use. When you post the image, you can also add text, or a link or a hashtag. All of these semiotic resources work together to create meaning. In this section, we are going to take a look at the way that different visual resources contribute to meaning when you design visual texts. Then, in the following section, we will consider how images can be combined with writing and other modes to add additional layers of meaning.

Experiential meaning: Narrative and conceptual representations

The first aspect of meaning we will consider is what is called experiential meaning, the way you show what your image is ‘about’. When it comes to a selfie, you are usually portraying yourself in a particular place doing a particular thing. Often you are telling a story about yourself. For example, ‘here I am, on my way up to the top of Mount Everest’ or ‘here I am eating a sandwich.’ According to Kress and van Leeuwen (2006), images that tell a story are called narrative representations because they represent the world in terms of doings and happenings. They usually include participants (people or things) and portray actions. A narrative representation could be saying: ‘someone is doing something (to someone)’; ‘someone is reacting (to someone)’; or ‘someone is saying/thinking something.’ The key point is that, in a narrative representation, there is always something going on.
Digital media have given us the ability to manipulate images, distorting reality in ways that can help us enhance these narratives. For example, in Snapchat you can add horns to someone’s head like the devil, or whiskers to their face like a cat. Other image editing applications make it possible for you to portray your subject in different locations, like on the moon, or paste your subject’s head onto someone else’s body. Although it can be fun to play with images in this way, this kind of technology is becoming more and more problematic. Increasingly, people are using these applications to spread fake news, generating images of people doing things they never did or videos of them saying things they never said (see Chapter 7).

As well as narrative representations, another kind of image that Kress and van Leeuwen identify is the conceptual representation. One example of a


conceptual representation often shared on digital media is the infographic, a visualization of information that uses charts, diagrams, and other visual resources to present data about a certain state of affairs. Unlike narrative representations, conceptual representations do not usually involve actions unfolding over time. Instead, they represent timeless properties of the world and the way that these relate to each other. One kind of conceptual representation focuses on classificational processes: how different participants relate to each other as part of a taxonomy, for example. Another kind focuses on analytical processes: how different participants relate to each other in a part–whole structure. For example, in a biology textbook you could find a diagram of a cell depicting its various components.

With digital tools, innovative kinds of conceptual representations can be composed that are more dynamic and more interactive. First, the state of the world can be represented in a dynamic way, updated in real time as new information becomes available. Your maps app does this when it colours the streets black, red, blue, or green according to how much traffic is on the road. Second, we can sometimes interact with conceptual representations by manipulating the perspective and viewing the representation from different angles. Again, maps are a good example: you could consult the top-down view to get a general sense of the directions when preparing your route, but switch to a ‘first-person’ view that mirrors your driving or walking perspective.

Interpersonal meaning: Interaction and involvement

Visual resources can also help you to interact with and involve your audience. Selfies, for example, are usually used as a way of interacting with others. As Frosh (2015: 1610) points out, ‘A selfie is “see me showing you me”… It points to the performance of a communicative action rather than to an object…’. In our selfies, we can use visual resources in a more or less engaging way.
For example, sometimes people photograph themselves not looking into the camera. The choice not to look at the camera, and therefore at the audience, has an effect on how involved the audience feels. As noted earlier, the simultaneous logic of images means that they are able to evoke an immediate emotional reaction in a way that writing cannot. As such, they can powerfully influence our attitude towards a particular subject or event. Many historians explain the strong opposition to the Vietnam war among the US public in the 1960s as a result of the rise of both television news and photojournalism, which brought images of dead and dying soldiers and civilians into people’s living rooms.

The emotional impact of images has a lot to do with the way they create interpersonal relationships with viewers. Kress and van Leeuwen (2006) note that such interactions in images can be between the participants represented in the picture, or between the represented participants and the


viewer. The producer of an image can draw on a range of techniques in order to engage and involve their audience, express power relations, and express modality (how truthful or realistic something seems). The kinds of resources that can be drawn on include:

• For involvement: Gaze, distance, camera angle (frontal/oblique);
• For power relations: Camera angle (low/eye-level/high);
• For modality: Colour saturation, colour differentiation, colour modulation, contextualization, representation, depth, illumination, brightness (including aspects of the image that can be changed with filters).

Audience involvement can be achieved by drawing on resources of gaze, distance, and camera angle. For example, images of people or animals looking directly out of the image at the viewer ‘demand’ some kind of response from the audience. The kind of response demanded is usually conveyed through the use of a facial expression. In contrast, pictures of people or animals that do not look directly at the viewer in this way simply ‘offer’ the subject as a spectacle for the viewer. These two kinds of images are referred to as demand images and offer images respectively.

The distance of the shot, whether close-up, medium shot, or long shot, also has an effect on audience involvement. Close-up shots create more of a feeling of intimacy, whereas long shots and extreme long shots create a feeling of social distance. News presenters are typically shot from the waist up (a medium close shot): they are still ‘in our personal space’, but not too close. Camera angle can also influence the degree to which we identify with figures in a picture: we identify more if the shot was taken from a frontal angle (looking directly at the subject from the front) than from an oblique angle (looking across the subject from the side). Shots taken from a low angle, looking up at the figure, convey a sense that the figure has power over the viewer, while the reverse is true for a high angle looking down.

Finally, there is the question of what Kress and van Leeuwen call ‘modality’: how ‘truthful’ the representation is portrayed to be. Various effects can be applied to images using filters and other software like Photoshop, and these may have an effect on how real or lifelike the picture looks. For example, the popular sepia tone is achieved by applying a simple tint which has the effect of reducing the colour differentiation in the picture to a single brown–gold spectrum. This gives the picture a warmer tone and evokes photographs taken in the early days of photography.
Ironically, although the colour in such pictures is less true to life, the effect can nevertheless serve to make the image seem more realistic by giving it an aura of authenticity. Other effects such as colour saturation, or blurring or emphasizing outlines can be used in order to give pictures a dream-like, surreal quality.
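The sepia effect works by collapsing a pixel’s colour information to a single brightness value and re-tinting it along one brown–gold ramp, which is exactly the ‘reduced colour differentiation’ described above. A minimal illustration in Python: the luminance weights below are a common video-standard approximation, and the tint factors are conventional illustrative values, not those of any particular filter.

```python
def sepia(pixel):
    """Map an (r, g, b) pixel onto a single brown-gold spectrum."""
    r, g, b = pixel
    # Collapse colour to one luminance value (standard weighting).
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    # Re-tint that grey level: strong red, medium green, weak blue.
    return (min(255, int(lum * 1.07)),
            min(255, int(lum * 0.74)),
            min(255, int(lum * 0.43)))

# Two very different colours land on the same brown-gold ramp:
print(sepia((255, 0, 0)))  # pure red
print(sepia((0, 0, 255)))  # pure blue
```

Because every output colour sits on the same ramp, the picture loses its colour differentiation while gaining the warm tone and the aura of age that the chapter describes.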


Combining modes

Text/image interaction

As the Instagram case study shows, images posted to social media are often accompanied by some kind of written comment. As we saw above, a selfie is a kind of interactive performance that means something like ‘see me showing you me’. But that usually isn’t all there is to it when you share the image. Often, you want to add more information, explain something, or comment on the image. You can do so by writing on the image itself or adding a written comment in the post. Taken together, the text and the image interact to produce another layer of meaning: when text and image work together in this way, we refer to it as text/image interaction.

In social media posts where an image is accompanied by a written message, the image and text serve to frame (create the context for) each other, providing additional contextual information to help us interpret them. Similar kinds of framing relationships can be observed in traditional media as well. For example, when images are given captions in newspaper articles or photo essays, both the text and the image have a potential framing effect on each other. That is to say, we can use the text to understand the image, and we can also use the image to understand the text.

If you are composing a multimodal ensemble—that is, a text that draws on multiple modes—you will have to consider which modes are best suited to conveying your meaning. For example, as we noted earlier, the written and visual modes offer different affordances and constraints, with writing better suited to representing sequential relations and images better suited to representing spatial relations. We refer to the idea that different modes are especially suited to different kinds of meanings as functional specialization: in other words, modes are ‘specialized’ in terms of the meaningful ‘functions’ they can perform. One example from everyday life is the food recipe.
It is common for the steps in food recipes to be described sequentially in writing, for example, and to be accompanied by pictures of stages of the completed dish, which allow the reader of the recipe to see fine gradations of texture, colour, shape, and proportion that they can compare with their own creations. In fact, these days many people prefer to just watch a video of all of the steps. When you are planning to cook a meal, sometimes you might refer to the images/video and sometimes, as for example when you need to check exact measurements, you might prefer to rely on text.

Another example of functional specialization can be seen in scientific texts of various kinds. Over the years, scientists have relied a lot on writing to communicate the procedures of the experiments that they conduct. However, other scientists trying to replicate these procedures have frequently failed: it turns out that written descriptions often leave out a lot of important information. As a result, some scientists have now turned to


video to illustrate their procedures, setting up an innovative new video journal called the Journal of Visualized Experiments. The videos in this journal can clarify steps in the procedure by visually illustrating them (just as with the recipe). In Hafner’s (2018) study of this genre, one scientist commented that, in video, ‘there’s lots of finer details which are being sort of captured visually’ (26). Here, the visual mode is better suited to describing the fine details of scientific procedures than is the written mode.

When it comes to the relationship between text and image, meanings in these different modes may concur with, complement, or diverge from each other. If the textual and visual information are in concurrence, that means the essential messages are the same and reinforce each other. The information can also be complementary, with text or image presenting slightly different information, which ‘colours in’ the details of the message in the other mode. Len Unsworth (2008: 390), in his examination of digitally produced materials for teaching science, notes that images complement text in three main ways: 1) by explaining the ‘how’ or ‘why’ of an event in the main text (enhancement), 2) by providing additional information to that in the main text (extension), and 3) by restating or specifying what is in the main text (elaboration). Finally, the visual and textual information sometimes diverge in meaning, carrying messages that are incompatible with each other. In particular, this might be observed where the ‘tone’ of an image, the attitudes and emotions that it conveys, differs from those in the accompanying text. For example, a newspaper article about a rogue banker whose irresponsible actions have ruined the lives of many ordinary people would be difficult to reconcile with a head-and-shoulders portrait of that individual smiling in a sympathetic way.
Such conflicting meanings are often used to create irony or humour in genres like image macro memes (see below).

Intertextuality: Pop culture references in image macro memes

Some multimodal ensembles are especially meaningful because of the kinds of references that these texts make to other texts. A well-known tune, a well-known lyric, or a well-known image can evoke an entire cultural context for an audience, adding an additional layer of meaning. The reference could be to a famous film like Star Wars, a celebrity like Beyoncé, or to an image or catchphrase from an internet meme that has gone viral and that everyone knows about. We call these kinds of references to other texts intertextual references, and the property of referring to other texts is known as intertextuality. Intertextual references can be direct quotes (‘A long time ago in a galaxy far, far away…’) or they can be adaptations of quotes (‘A long time ago in a classroom far, far away…’). And, of course, apart from writing, intertextual references can be made with other modes like image and sound.

The image macro meme in Figure 4.5 provides an example of how intertextual pop culture references can be used as part of a multimodal


Figure 4.5 ‘Ratatouille’ image macro meme.

ensemble in an effort at humour. The meme shows an image of a rat being grabbed with the accompanying text: ‘Ratatouille / Cook me dinner you fucking useless rodent.’ Here, ‘Ratatouille’ is an intertextual reference to the 2007 Pixar film of the same name, which features a rat that can cook. This reference introduces an expectation, which we know is false, that rats can cook. The image shows how this expectation is acted on, in a brutal and stupid way, by grabbing the rat. At the same time, the accompanying text interacts with the image in a complementary way, filling in meaningful details that explain it. Part of the humour is derived from the divergence between the image and the text: while the text identifies the rat as ‘Ratatouille’, the image actually shows a different rat, that is, a real one and not the famous cartoon one.

Here, the humour relies on the interaction between text and image as well as on the intertextual reference made. First, the joke wouldn’t work if we didn’t have both the text and the image: just an image of a person grabbing a rat is not really funny on its own, and the text alone would not be comprehensible without the image. Second, as well as the text and image, you also need the background (cultural) knowledge to be able to make sense of this meme.


Appeals to emotion and visual arguments

Because images appeal to our emotions in a way that text does not, we need to be aware of the ways images often present ‘visual arguments’. The ad below (Figure 4.6) provides an interesting example of a visual argument with an appeal to emotion. The ad pictures a calf and the words ‘Fashion victim.’ The shot is at about medium distance and it is a ‘demand’, with the calf looking directly at the camera: we are invited to identify with and feel sorry for the calf. Like all visual arguments, this one is less direct than a verbal argument. The argument relies on us to make connections and formulate the proposition ourselves. In order to understand the argument that this ad makes, we need to formulate the proposition ‘cows are victims of the fashion industry’ and the argument ‘it is wrong to wear leather because if you do you are supporting this victimization.’ The risk, of course, in giving

Figure 4.6 PETA advertisement.

Multimodality

85

the reader so much responsibility is that he or she might misinterpret the message. Another reason why this ad is effective is because of the way that it combines the words with the picture. The words evoke the fashion industry with a clever play on the phrase ‘fashion victim.’ This phrase, which is usually used to refer to a person who decides what to wear only based on what is trendy, is used ironically here to refer to the calf. At the same time, the picture arouses our emotions in a way which is not possible with written text. Critics of visual communication consider such ads to be dangerous precisely because they appeal to emotions, and not to reason.

ACTIVITY: TEXT/IMAGE INTERACTION IN IMAGE MACRO MEMES

Image macro memes are a kind of internet meme that combine an image with text, usually a line of text above and another line below the image. Find examples of image macro memes on the internet and analyze them for intertextual references and text/image interaction. Answer the following questions:

1. What intertextual cultural references does the meme use?
2. What meanings does the image make?
   a. How do the spatial, simultaneous affordances of image contribute to these meanings?
   b. Does the image appeal to emotions? How?
3. What meanings does the writing make?
   a. How do the temporal, sequential affordances of writing contribute to these meanings?
4. How do text and image interact? Can you see examples of concurrence (the same meaning in both modes)? Complementarity (similar but slightly different meanings)? Divergence (different meanings)?
5. How would you explain the joke? How does the joke depend on the image? The text? Your knowledge of the cultural reference?

Designing video

The design principles that we have discussed so far in relation to images can also be applied to the design of video. In essence, a video presents the viewer with a sequence of images. Each of these, alone or in combination, can be designed to tell a story, describe concepts, create interactions with viewers, interact with surrounding text, make intertextual references, appeal to emotions, and make arguments. Videos differ from images because of their sequential properties: in a sense, videos combine the affordances of images with the affordances of writing. On the one hand, videos present visual information organized according to the spatial/simultaneous logic of the screen. On the other, they present textual information, the scripted narrative, voiceover, and dialogue that unfolds over time, and this is organized according to the linear/sequential logic of speech and writing. In addition, the written mode can be incorporated as text on the screen, and sound effects and accompanying soundtrack also need to be considered.

Any serious attempt to work with video requires that you take both this visual and verbal logic into account. In particular, it is a good idea to plan the sequence of images that the viewer will see, the transitions between them, and any accompanying voiceover, scripted dialogue, sound effects, or soundtrack. One tool that can be used to portray this combination of modes is the storyboard: a visual representation of the story that displays a series of sketches showing different scenes in the story with accompanying script and notes about the soundtrack for each visual frame. An example of a blank template for a storyboard is shown in Figure 4.7 and there are also a lot of online tools available that you can use for the same purpose (e.g. Storyboarder, see https://wonderunit.com/storyboarder/). In creating a storyboard you can consider visual elements such as composition, camera angle and distance, as well as how these visual elements interact with surrounding written or spoken text, soundtrack, and other sound effects.

Another important issue in video design is pace: how quickly or slowly the images and film unfold in time. If the visual information is presented too quickly then the video may seem overwhelming, but if it is presented too slowly the video may seem dull.
Figure 4.7 Storyboard template (‘Scientific Documentary Storyboard’, with fields for the title and group members, and spaces for dialogue and music notes under each frame).

Finally, in video, sound effects and music play a very important role. A video without a soundtrack, or with a poorly chosen one, is much less likely to be effective than one which uses sound in an appropriate way to complement the visual and verbal messages. The importance of the soundtrack is especially obvious in an app like TikTok. For many TikTok users, the process of creating a video begins with finding and storing tunes, i.e. the soundtrack. This soundtrack is usually a kind of intertextual reference: a short snippet of a popular song or another popular TikTok video. Having identified the soundtrack, TikTok users then develop their ideas around this soundtrack and record the visuals. Often, this involves an embodied performance, with careful attention paid to dance, gesture, facial expression, speech (sometimes lip-synched), and the surrounding environment and even people in it. Camera angles might be varied along with costume if the user wants to create the impression of multiple characters in the video, all inhabiting different parts of the screen. The final step before upload is to edit the soundtrack and the visuals together, adding text on the screen if necessary and potentially adjusting the video for length. The kind of multimodal design involved in video production can be done very carefully, with a lot of planning and storyboarding, or it can be done ‘on the fly’ in a more improvised way.

Over the years, we have seen how video can be used as a tool by oppressed individuals to hold powerful social actors to account. One example of this is when people take videos of violent police action against protestors, creating a permanent record of events. Another example is the kind of ‘police stop’ video described in Jones (2020b), taken when police officers pull citizens over in their cars and interrogate these citizens from the roadside. As Jones points out, citizens who film such interactions do much more than just film. They also verbally narrate what they see going on around them. The verbal narration functions to frame the recorded video, providing context for the interaction. This, in turn, might prompt the police to respond, attempting to challenge the narration with their own verbal account of events. But the most important point about such videos is the way they create eye-witness accounts that can be shared with others and referred to long after the event. These sorts of videos can fundamentally challenge existing power structures, as they allow marginalized individuals to bring their stories to the attention of the general public.

Conclusion

In this chapter we have considered the ways that texts in the digital age have become increasingly multimodal, combining text with image, audio, and video. In particular, we have discussed the affordances and constraints of writing and image and we have seen that written and visual modes follow different logics. Because of these different logics, the current trend towards visual communication challenges us to rethink the way we represent the world. As Kress (2003) suggests, we are moving from ‘the world told’ to ‘the world designed.’ An important part of this design is the design of the graphical interfaces that we interact with on webpages and apps using different kinds of devices.

We have also discussed some principles of visual design as they apply to webpages and interfaces where text is easily combined with digital images and digital video. Webpages draw heavily on visual elements and so need to be read and designed in a different way than most written texts. Because of the ease with which digital images and video can be inserted into digital texts, writers now need an understanding of how images and video can tell different kinds of stories, how they can engage audiences differently, how they can be combined with text, sound, and other modes, how they can draw on intertextual cultural references, and how they can appeal to emotions to construct visual arguments. As we will see in the next chapter, however, despite the new prominence of visual communication in digital media, much of our online communication continues to rely on text, but in ways that are making us rethink the affordances and constraints of written language.

Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 5

Online language and social interaction

In the last chapter we discussed how images, video, and other modes function in digital communication. Despite the importance of such modes, however, written language still remains the primary tool for communication in online environments, and the creative ways people use language online are among the most interesting aspects of digital literacies. In the early days of the internet and mobile communication, journalists, teachers, parents, and politicians frequently complained about the new ways young people were using language, often portraying it as evidence of falling literacy standards (see Thurlow, 2006 for examples). These people were mostly reacting to the language people used (and still use) in chat and mobile text messaging, associated with features like shortened forms and acronyms (such as BTW, LOL), unconventional spellings and punctuation, emoticons, and, more recently, emojis. In some contexts, this also involved mixing multiple languages (what is known as translanguaging), often resulting in hybrid codes such as Arabizi, a way of writing Arabic with Latin characters and numbers (often mixed with English words).

In 2001, the linguist David Crystal argued that digital media were giving rise to a new ‘variety’ of language which he called ‘Netspeak’, and much of the early research on digital literacies focused on trying to describe this ‘new language’ as it was used in different contexts. Despite worries that ‘Netspeak’ was threatening people’s ability to produce more standard varieties and communicate effectively, there has been little evidence that this is the case. In fact, much of the evidence has shown the opposite.
Alf Massey and his colleagues (2005), for example, found that students who spend more time instant messaging and texting tend to write more complex sentences, use a wider vocabulary, and have more accurate spelling and capitalization in their English examinations, and Bev Plester and her colleagues (Plester, Wood, & Bell, 2008) showed that students who use SMS (‘txt’) language more frequently actually perform better in standardized measures of English proficiency. Another important finding of early research on online language was that it is often not as ‘non-standard’ as people think. In her 2004 study of the instant messaging language of US university students, for example, Naomi Baron (2004) found that only 0.3% of the words or symbols they used were abbreviations, less than 0.8% were acronyms, and only 0.4% were emoticons.

In fact, there are a number of fundamental problems with this whole idea of ‘Netspeak’. First of all, many of the features associated with it, such as abbreviations and non-standard orthography, were present in written communication long before the invention of computers in genres such as personal letters, telegraph messages, and advertising texts (Dürscheid, 2004). Also, the way people use language online varies tremendously from platform to platform, from context to context, and even from person to person depending on the purpose of their communication and the identity they are trying to project. Media themselves don’t determine how people use language; they only make available certain affordances and constraints on the use of different communicative resources.

One thing we did learn from early studies on ‘Netspeak’, however, was that text-based online communication often gives rise to creative uses of language. One reason for this involves the constraints on expression that characterized many of the early chat platforms and still persist in the platforms people use today; creativity is a natural response to the limitations of media. But text-based digital communication also encourages creativity because of its affordances, particularly the way it makes it easy for users to mix different kinds of linguistic resources. Of course, such mixing is not confined to text-based digital communication. In fact, people who speak more than one language or variety of a language, for example, regularly mix languages, especially when they are speaking to other multilinguals.

What is special about text-based digital communication is that it can be multi-scriptural as well as multilingual, allowing users to combine different writing systems and to add into the mix typographic innovations and symbols that are only possible in writing. This multi-scriptural capacity of text-based online communication has been particularly evident in the communication of those whose language is not traditionally written with the Roman alphabet. One variety of multi-scriptural language that has developed in Russia and Ukraine, for example, is called ‘padonkavskiy zhargon’, a mixture of Russian and Ukrainian rendered phonetically in the Roman alphabet and often full of puns and profanity. Another example is what came to be known in China in the early 2000s as ‘Martian Language’ (火星文), a combination of Chinese characters, romanized Chinese words, English words, symbols, abbreviations, and emoticons. Nowadays, of course, people who engage in online text-based interactions also have other resources such as emojis and ‘animojis’, stickers, and animated GIFs. Most platforms now also allow people to embed photographs, videos, and voice files in their text-based conversations, and some even allow them to express themselves using handwriting and drawings.


At the end of the day, though, understanding text-based communication goes beyond describing how these different features are mixed together. It also requires attention to the interactional aspects of online language: how people use these various resources to structure conversations and negotiate their roles and responsibilities in them. Explanations for the linguistic and interactional features of text-based digital communication can be divided into two main types: those which focus on media and how they constrain ‘normal’ language use, and those that focus on the users of media, their social identities and their relationships with the people with whom they are communicating.

ACTIVITY: ANALYZING LANGUAGE ON DIFFERENT PLATFORMS

Collect samples of different kinds of text-based digital interactions (emails, WhatsApp conversations, Facebook Messenger chats, Snapchat conversations) and answer the following questions:

1. What are some of the features you can find in these conversations that you don’t find in other kinds of written language?
2. In what ways does the language used on these platforms differ from the written language you might expect to see in more traditional types of texts such as newspapers and school essays?
3. What are some of the main differences between the language used on different platforms? Are some linguistic features more common on different kinds of platforms? How can you account for these differences?
4. Reflect on how you use language differently on these platforms and how you learned what kind of linguistic behaviour is acceptable on these platforms and what kind is not.
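One way to start on the first question is to count how often ‘non-standard’ features actually occur in your samples, in the spirit of the frequency counts Baron (2004) reports above. The sketch below is a minimal, hypothetical illustration of that kind of count: the messages, the abbreviation list, and the emoticon pattern are all invented for the example, and a real analysis would use your own collected data and much fuller feature inventories.

```python
import re

# Hypothetical message samples (invented for illustration);
# in practice you would paste in messages you have collected.
messages = [
    "omg lol that was so funny :-)",
    "BTW are u coming tonight?",
    "I will see you at the meeting tomorrow.",
    "rofl ^_^ ok cya",
]

# Small, illustrative feature lists; a real study would use
# far larger inventories of abbreviations and emoticons.
ABBREVIATIONS = {"btw", "lol", "omg", "rofl", "u", "cya", "brb", "imo"}
EMOTICON_RE = re.compile(r"(:-?\)|:-?\(|\^_\^|\^o\^|;-?\))")

total_tokens = 0
abbrev_count = 0
emoticon_count = 0
for msg in messages:
    tokens = msg.split()
    total_tokens += len(tokens)
    # Strip trailing punctuation before matching against the abbreviation list.
    abbrev_count += sum(
        1 for t in tokens if t.lower().strip(".,!?") in ABBREVIATIONS
    )
    emoticon_count += len(EMOTICON_RE.findall(msg))

print(f"abbreviations: {abbrev_count / total_tokens:.1%} of tokens")
print(f"emoticons: {emoticon_count / total_tokens:.1%} of tokens")
```

Comparing such percentages across platforms (e.g. email versus instant messaging) is one concrete way to answer question 3, and, as Baron’s figures suggest, the proportions are often much lower than the ‘Netspeak’ stereotype would predict.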

Media effects

Approaches which focus on media effects see the unique features of text-based digital communication as the result of the affordances and constraints inherent in digital media. The most conspicuous affordance of digital media when it comes to writing is interactivity (see Chapter 3). Unlike most written communication, text-based digital communication involves opportunities for interaction between the people writing messages and the people receiving them, whether it is the more or less ‘real time’ (synchronous) interaction of instant messaging or the ‘delayed’ (asynchronous) interaction of email and most social media.


At the same time, digital media also place significant constraints on interaction. One set of constraints, particularly in more synchronous interaction, has to do with time. Not only does typing, for most people, require more time than speaking, but there is also an inevitable delay between the time a message is typed and the time it is received by one’s interlocutor. The proliferation of acronyms and abbreviations in earlier versions of texting and instant messaging (before the implementation of auto-complete and auto-correct functions on digital devices) is, from this perspective, seen as an attempt to compensate for these constraints. They help to facilitate rapid responses in chat and instant messaging environments in which those who produce messages faster are often more successful at maintaining conversations (see for example Jones, 2005). Such an explanation, however, does not entirely account for why such features continue to appear, and why they frequently appear in contexts which do not involve these constraints, for example social media posts.

Another kind of constraint of text-based digital communication involves what communication scholars refer to as media richness. Although chat, instant messaging, and ‘texting’ are in many ways like face-to-face conversation, they lack the ‘rich cues’ (like tone of voice, facial expressions, and gestures) that are available in face-to-face interaction. Just as abbreviations and other forms of shorthand are seen as a way of compensating for constraints of time, emojis, animated GIFs, and non-standard typography and spelling are often seen as a way of compensating for the relative lack of ‘richness’ of text by giving users ways to communicate their emotions and attitudes that imitate face-to-face conversation.
In their study of the way people use animated GIFs, for example, psychologists Jackson Tolins and Patrawat Samermit (2016) argue that people use them to compensate for the disembodied nature of text-based communication, using ‘the embodied actions of others as stand-ins for their own nonverbal behaviour’ (75). Similar claims have been made regarding emojis as replacements for users’ facial expressions. At the same time, there are some problems with the assumption that emojis, GIFs, and other features of online chat are primarily used as replacements for bodily cues. After all, many of the emojis that people send depict facial expressions that they would find it difficult to make with their human faces (such as the famous laughing/crying emoji), and many popular emojis depict animals, objects, or activities rather than facial expressions.

One example of an emoticon that does not have a clear correspondence in face-to-face conversation is the posture emoticon ‘orz’, common in the texting and instant messaging of Japanese, Chinese, and Korean speakers. The symbol is meant to convey someone bowing or ‘kowtowing’, and has many possible meanings depending on the context. It might be used to express frustration, despair, sarcasm, or grudging respect. This and other posture emoticons (as well as abbreviations like ROFL, ‘rolling on the floor with laughter’) demonstrate ways users of text-based communication can animate their verbal messages that would be either impossible or inappropriate in face-to-face communication: it is unlikely that those who use the symbol ‘orz’ would ever actually ‘kowtow’ to an interlocutor in a face-to-face conversation.

One of the most important differences between the facial expressions that we make in face-to-face communication and those we depict in emojis and GIFs is that, in face-to-face communication, we don’t always have complete control over the expressions that we make; what we communicate through the human face, to use the words of the famous sociologist Erving Goffman (1959), is not just ‘given’, but sometimes ‘given off’, an unintentional expression of our inner thoughts or feelings. Digital representations of the face, on the other hand, are always produced consciously and intentionally. Most linguists, therefore, have long questioned the view that emoticons or emojis are primarily used to express emotion, pointing out that they often have pragmatic functions as well, that is, they help us to do things in our digital conversations such as apologize, contextualize our utterances, or manage the ongoing conduct of our interactions. Over a decade ago the linguists Eli Dresner and Susan Herring (2010: 250) argued that ‘the primary function of the smiley and its brethren is not to convey emotion, but rather pragmatic meaning,’ and thus the function of such symbols must be understood ‘in linguistic rather than extra linguistic terms.’

One important way such elements function linguistically is as what linguists call contextualization cues. In face-to-face communication we sometimes use facial expressions, gestures, stress, and intonation to create a ‘context’ within which our words should be interpreted. Such cues convey information about what we are ‘doing’ when we are communicating (for example, joking, complaining, or arguing) and how the verbal part of our message is meant to be taken. When we shift from a serious ‘frame’ to a joking one, for example, we might change the expression on our face, adjust our posture, or change the pitch, volume, or intonation of our voice. Since the early days of text-based communication, people have found ways of using the affordances of written language to produce similar kinds of cues to strategically frame and reframe their messages. The following conversation, for example, between two young people in Hong Kong, took place on an instant messaging platform called ICQ in the early 2000s.

Barnett: u’re....~?!
Tina: tina ar....... a beautiful girl........ haha... ^_^
Barnett: ai~ i think i’d better leave right now....^o^!

(from Jones 2012a)


In this example, Tina signals that her characterization of herself as ‘a beautiful girl’ is playful and facetious by adding ‘haha…’ and the emoticon ^_^. Similarly, Barnett signals that he is not entirely serious about leaving by adding a tilde (~) to his interjection ‘ai’ (signalling a lengthening of the vowel sound) and ending his contribution with the playful emoticon ^o^ (a clown’s face). Without these cues to signal participants’ attitudes, it would be difficult to interpret these utterances as they were meant to be understood.

The point we are trying to make here is that while the systems of emoticons/emojis and typography used in text-based communication might draw upon the semiotic systems of facial expressions and phonology, they are not simply replications of these systems in writing. They are systems in their own right, with their own conventions and their own sets of affordances and constraints. Nowadays, rather than creating emoticons ‘from scratch’ using symbols from a standard keyboard, most people choose from a standardized set of emojis arranged in their own keyboard, and some systems, such as Apple’s iOS, will suggest emojis for users when they type words. For example, when users begin to type the word ‘pizza’, both the word and a pizza slice emoji appear in the list of predictive choices. Similarly, many chat applications come equipped with a GIF keyboard that allows users to search for appropriate animated GIFs to add to their messages using the popular GIF search engine Giphy. One result of such innovations might be an increasing standardization of the symbolic language associated with online chat, with all users having access to more or less the same set of emojis and animated GIFs.

It would be a mistake, however, to think that there is necessarily a one-to-one correspondence between the ‘name’ of an emoji or the term we use to search for an animated GIF and the ‘meaning’ that that emoji or GIF expresses. In fact, emojis and GIFs don’t really have ‘meaning’ in the same way words do and can typically serve multiple communicative functions depending on how they are used.

Partly because of the common use of emojis and other textual elements as contextualization cues in online chat, some more conventional aspects of written language have taken on meanings when used online that they do not have in more traditional written texts. Several researchers (e.g. Androutsopoulos & Busch, 2021; Houghton, 2018) have found, for example, that punctuation marks, especially full stops, which are used to mark boundaries between sentences in written texts, can actually communicate anger, sarcasm, or a lack of sincerity when used in online chats. Others, such as language scholar Stefania Spina (2019), have found that emojis in Twitter posts often function to mark clause and sentence boundaries, either replacing or being used alongside other punctuation marks such as commas, full stops, and quotation marks.


Emojis, GIFs, and other non-linguistic elements in chat and text language can also serve as conversational placeholders, providing people with something to contribute when they are not sure what to write, when they want to convey a reaction to what their conversational partner has typed, or when they simply want to show that they are ‘listening’. Despite the ways text-based chat can resemble verbal conversation, complete with contextualization cues, backchanneling, and even ‘silence’, its lack of embodied cues and of the ability to monitor how people are reacting to what we are saying as we are saying it still makes it a very different form of communication. But this is not necessarily a bad thing. Having more modes of communication (such as bodily movements or facial expressions) does not necessarily result in ‘more meaning’ being conveyed. In fact, linguists have been pointing out for some time that quite a lot of communication occurs indirectly and that meaning is often conveyed as much in what is not said as in what is said. Depending on the context and the relationship between communicators, a text message containing a single emoji can express just as much as an elaborate verbal utterance.

In the early days of text-based chat, communication researchers such as Lee Sproull and Sara Kiesler (1986) were convinced that the medium was not really suited for interpersonal communication between friends involving, for instance, the expression of feelings, arguing that, at best, it would be used for very instrumental, workplace communication. Later on, of course, it turned out that interpersonal communication is one of the main things people use text-based chat for. Rather than hindering the expression of feelings, text-based communication seems to actually facilitate intimacy, a phenomenon the communication scholar Joseph Walther (1996) calls ‘hyperpersonal communication.’

The biggest problem with approaches that focus on media effects is that they start from the assumption that text-based digital communication is an ‘imperfect replica’ of some other mode of communication. When judged against less interactive forms of written language like newspapers and books, the language of text-based digital communication is found to lack precision, clarity, and ‘correctness’. When measured against face-to-face conversation, it is found to lack ‘richness’. Either way, this approach constitutes a ‘deficit model’, which explains the features of digitally mediated text as ‘compensation’, rather than as responses to the media’s unique set of communicative affordances. Not only do people do very different things with text-based digital communication than they do with written texts or verbal conversations, but text-based digital communication has itself introduced into social life a whole array of new kinds of interactions which are not possible using traditional writing or voice-based conversation.


ACTIVITY: TEXTUAL CHEMISTRY

Given the ways text-based digital interactions such as texting and messaging seem to facilitate hyperpersonal communication, it is not surprising that texting has become an important means of communication for people in romantic or intimate relationships. In such interactions, text-based communication has a number of clear technological affordances:

It allows users to revise their messages
The asynchronous nature of text-based communication gives users the opportunity to craft more thoughtful, considered, and creative messages that better communicate how much they care for each other.

It provides an ‘emotional buffer’
Because text-based communication avoids the necessity of having to manage bodily aspects of communication like facial expressions, it decreases the discomfort or fear some people feel when talking to people they have strong feelings for.

It’s discreet
By texting, a person can avoid having their conversation overheard by friends, family members, or other people in the vicinity. This is especially important when one is discussing particularly intimate topics. This discreet or ‘secret’ nature of text-based communication can also have the effect of heightening the sense of intimacy between users.

It facilitates ‘flirting’
A lot of the communication that goes on in intimate relationships involves teasing, flirting, and double entendre, forms of communication that depend as much on what is not communicated as what is. Text-based communication facilitates the linguistic ambiguity that makes flirting possible.

It makes other forms of communication more ‘special’
By using text messages for most day-to-day communication, romantic partners can reserve ‘richer’ forms of communication like phone calls and video chats for times when they want their interaction to seem more special.

It’s persistent
What we mean by persistent is that one is able to save text messages. The obvious advantage of persistence is that it allows communicators to look back and check what was said earlier. For romantic partners, persistence has the added advantage of allowing them to preserve their conversations as ‘keepsakes’, much in the way people in the past saved love letters.

Of course, the affordances or benefits brought about by media can also bring about certain constraints or drawbacks. The persistent nature of text messages, for example, might be seen as a drawback after you have broken up with your boyfriend or girlfriend and they still have a record of your conversations, or if someone outside of the relationship gains access to those conversations. Another possible drawback is that, despite the affordances being the same for everybody, different people make use of those affordances in different ways. Some romantic partners, for example, like to text a lot, while others text more sparingly, and some use a lot of emojis, while others don’t. In an article published in Time magazine on Valentine’s Day 2016, Eliana Dockterman coined the term ‘textual chemistry’ to describe compatibility (or lack of compatibility) when it comes to texting behaviour.

In order to understand something about your own expectations when it comes to media use in the context of intimate relationships, create a list of Dos and Don’ts when it comes to texting/messaging in romantic relationships. Then compare your list with that of a classmate and determine whether or not you have ‘textual chemistry’. Your list can also include rules about using image sharing programs such as Snapchat. We have started with some examples.

Dos:
- Respond within 20 minutes after receiving a text

Don’ts:
- End a text with a full stop, unless you’re angry

User effects

The exercise above should have alerted you to the fact that, although users of the same technologies experience the same set of affordances and constraints, media do not determine how we use language online. The way different people respond to the affordances and constraints of media when they communicate can vary considerably. Another way scholars have come to account for online linguistic practices, then, has been to focus on the users of text-based communication: who they are, their relationships with and attitudes towards the people with whom they are communicating, and what they are communicating about. Below are two tweets, one from former President Barack Obama (a), and the other from a 15-year-old student named Kayla Dakota (b).

(a) Barack Obama
High-speed wireless service is how we’ll spark new innovation, new investment, and new jobs—and connect every corner of America to the digital age.

(b) Kayla Dakota
truth is, I don’t like our health class at all…. UGHHH ahahaha *holds up two fingers* … am I the only one who finds that condom on the banana trick creepy?

It’s easy to see how the language that former President Obama uses is very different from the language Kayla uses. Whereas President Obama’s contribution is written in a rather formal style with standard spelling and grammar, much like a conventional written text, Kayla’s uses frequent ellipsis (…), sound words (UGHHH ahahaha), an emoji, the intensifier ‘like’, and a parenthetical description of a gesture (*holds up two fingers*), a practice known as emoting. One reason for this has to do with the kinds of social identities they are enacting. Another reason has to do with the people they are communicating with. Kayla is communicating with her friends, and the style of her language helps her to maintain a close relationship with them. Obama, on the other hand, is addressing his constituents. Of course, sometimes people stylize their online language in unexpected ways.
For instance, one of the most obvious features of former President Donald Trump’s Twitter feed (and, arguably, one of the things that makes it most effective in attracting attention) is that he does not compose his messages in the way Obama did. In fact, both in overall tone and in his use of punctuation and capitalization, President Trump often sounds more like a 15-year-old girl:

(c) Donald Trump
Why would Kim Jong-un insult me by calling me “old,” when I would NEVER call him “short and fat?” Oh well, I try so hard to be his friend - and maybe someday that will happen!

The linguist James Paul Gee (2008) calls the different styles of speaking and writing associated with different kinds of people and different social groups social languages. Scholars of computer-mediated communication have found that particular groups, such as players of a particular online game, or friends on an online social network, tend to employ unique ‘social languages’ which mark them as members of the group. The more competent they are at using these languages, and the more creatively they can adapt them to different circumstances, the more accepted or respected they are in the group. From this perspective, emojis, animated GIFs, and the various forms of non-standard spelling and punctuation usually associated with online chat and instant messaging are not simply ways of compensating for the ‘lack of richness’ of written texts, but also characteristics of particular social languages used by particular groups to show group identity. In this regard, it is not surprising that people from different cultures use the elements of online text-based language differently. Linguists Alice Chik and Camilla Vasquez (2017), for instance, have found that Americans and Hong Kongers use emojis differently in online reviews, Satomi Sugiyama (2015) describes how Japanese teenagers use emojis strategically to signal gender identities, and Rui Zhou and her colleagues (2017) observe how different subgroups of WeChat users in Southern China (young, old, rural, urban) assign different meanings to particular emojis and engage in particular subcultural practices such as ‘sticker wars’ (non-verbal, competitive exchanges of stickers). The use of animated GIFs, often constructed out of short clips from popular movies or television programs, is another way people in online text-based interaction can signal subcultural identity and affinity.
‘The GIF,’ say media scholars Kate Miltner and Tim Highfield (2017), ‘is not just a proxy for the individual’s particular affective or emotional state, but an illustration of the user’s knowledge of a certain text or cultural conversation through their choices.’ One example they give of how people use GIFs to signal their membership in a community is the creative exchange of GIFs from the television show RuPaul’s Drag Race by fans as a way of demonstrating ‘insider’ knowledge about the show, a practice which was later supported by the development of resources such as the Drag Race keyboard app (Miltner, 2016). As mentioned above, just as important as who we are trying to ‘be’ when we compose online messages is who we are talking to. The way people talk is affected by their understanding of who the audience for their talk is, a phenomenon sociolinguist Allan Bell (1984) calls audience design. This is equally true in online communication. Saudi Arabian linguist Areej Albawardi (2018), for instance, found that Saudi students vary their use of ‘Arabish’ and other forms of non-standard Arabic in WhatsApp messages depending on whether they are talking to friends, family members, or teachers. Audience design in text-based digital communication, however, can become complicated when users may have multiple audiences, as in group
chats, multiplayer games, or on social media sites like Facebook. Internet linguist Jannis Androutsopoulos (2014) has shown how multilingual users of social media sites use style and language choice to signal who in their diverse audiences their messages are directed towards, including or excluding certain segments of their social network. Although an approach to text-based digital communication that takes users’ identities and relationships as its starting point helps to explain why different people use language differently online, it still fails to fully capture the unique affordances of this form of communication. In the rest of this chapter we will turn to the ways that people creatively engage with the affordances and constraints of digital media to act and interact with each other. As we said in the first chapter, a focus on mediation begins with questions like: What are people doing with text-based digital communication that they cannot do with other forms of communication? And how do they use the different affordances of media as resources to perform these actions?

What are we doing when we interact online?

After the web entrepreneur Evan Williams’s initial success in developing Blogger, a simple tool which allowed anyone to create their own blog, he turned his attention to a new project: audio blogging. The idea was that bloggers could make their blog posts more expressive by embedding recordings of their voices. This attempt to enhance the text-based communication of blogging with the ‘richer’ mode of voice-based communication, however, didn’t take off. So Williams tried a different strategy. Rather than trying to make blogging richer, he developed a platform that placed even more constraints on users, not just limiting them to text, but also confining them to posts of only 140 characters. He called this new platform Twitter. Of course, nowadays Twitter is a much ‘richer’ platform than it was when it first started, allowing users to embed links, images, and videos into their tweets. But one lesson we can take from the early success of Twitter, with all of its limitations, is that the richness of a communication channel does not necessarily correspond to how useful people actually find it. The reason for this has to do with what economists call transaction costs. Imagine you are in a used-record shop and find a vintage vinyl disc from a 1990s band you love called the Iridescent Pineapples. You really want to tell your friend, Fred, also a Pineapple fan, about it, but there are many things that keep you from calling him: it’s the middle of the day and you don’t want to interrupt him at work; you owe him ten dollars and are afraid he might ask you about it; you just don’t want to go to all of the trouble of starting a phone conversation, asking how his girlfriend is, listening to
him complain about his boss, or face the difficulties inherent in ending the conversation once it has started. In other words, because the effort and trouble—the transaction costs—of giving this piece of information to Fred are too high in relation to the importance of the information, you resolve to wait and tell Fred about your find later, but you end up forgetting and miss the chance to share this information with him. ‘Leaner’ forms of communication such as texting or tweeting substantially lower the transaction costs of communication. And so, instead of calling, you might use your mobile phone to text Fred, saving yourself the trouble of engaging in a full-scale conversation and avoiding the risk of inconveniencing him with a phone call. Or you might tweet the news to your social network, alerting not just Fred but all of your other Pineapple-loving friends. You might also use Snapchat to send a picture of the album cover to Fred. Photo sharing apps like Snapchat and Instagram are also ‘lean’ forms of communication, since they allow us to send quick, easily produced messages using limited modes. In general, the ‘richer’ a medium of communication is, the higher the transaction costs are for using it, because people have to attend to more modes. The transaction costs of traditional forms of verbal interaction include not just engaging in conversational rituals like opening, closing, and making small talk, but also the necessity of constantly attending to tone of voice, facial expressions, and gestures, of constantly showing that one is listening, and of responding in a timely manner. Similarly, video communication (using platforms such as Skype and FaceTime) entails fairly high transaction costs (see Chapter 6). Voice-based telephone calls entail fewer transaction costs, but still require you to monitor your own voice and manage things like openings, closings, and turn-taking with your interlocutor.
‘Leaner’ forms of communication such as instant messages, tweets, snapshots, and even voice messages free one from much of the interactional work involved in conversation, and text-based communication in particular saves people from having to pay attention to facial expressions and vocal quality. In face-to-face conversation, you could not get away with using no facial expressions at all, but you can with instant messaging. From this perspective, the interesting thing about emojis and GIFs is not that they are necessary to compensate for the lack of facial expressions in text-based digital communication, but that they are not a necessary feature of such communication. Their optional status allows people to be more creative about when and how to use them. Lean forms of communication also often free people from having to start and end conversations. Conversation analysts like Harvey Sacks and Emanuel Schegloff (1973) have demonstrated that starting and stopping conversations actually requires a lot of work. Starting a face-to-face conversation usually requires that people engage in an opening sequence (such
as: ‘A: Hi, Christoph B: Hi, Rodney’), sometimes referred to as a summons-response sequence, in order for interactants to establish themselves in a ‘state of talk’ (Goffman, 1981) before going on to raise the topics that they want to talk about. Text-based communication on platforms such as WhatsApp, on the other hand, usually does not require such opening sequences—users can just launch into their topic. Closing conversations in face-to-face communication (and through richer media such as the telephone) can also be awkward, requiring people to ‘ease out of’ the conversation with pre-closing sequences such as ‘A: okay… B: okay, then… I’ll see you on Thursday.’ When people use lean forms of communication such as text messaging and photo sharing, they often don’t feel the need to politely exit from the interaction. In fact, many digitally mediated conversations don’t involve closings at all. People just stop sending messages until hours or days later, when they have something new to say. Finally, interactants in text-based or photo-sharing interactions are not under the same obligation to respond to messages quickly, or even to respond at all. If Fred or others in your social network do not respond to your tweet about the Iridescent Pineapples, you probably won’t hold it against them. That is not to say that whether someone responds or how long it takes them is not important. In fact, sometimes response time can take on particular importance in text-based messaging, especially when people are negotiating their relationship, as when you’ve sent a text to someone you are romantically interested in. At the same time, with lean forms of communication it is often difficult to interpret the ‘meaning’ of the silences between messages: ‘Is he ignoring me?
Or is he just doing something else?’ One key feature of most messaging programs nowadays is the read receipt feature, whereby users receive signals when their messages have been received successfully and when they have been opened by recipients. In the case of WhatsApp, these signals appear in the form of blue ticks. They add yet another dimension of meaning to the time people take before answering messages: an unanswered message with a blue tick next to it has a different meaning than one without. In many situations, the relative lack of mutual monitoring opportunities in most text-based forms of interaction can be seen to have advantages, allowing people to better control the messages they are sending, and lowering the risk that unintended information might be ‘given off’ or ‘leaked’ (Goffman, 1959) through our facial expressions or tone of voice. This lack of mutual monitoring opportunities also allows people to engage in other activities, including having simultaneous conversations with other people. One study by Jones (2008a) found that one reason teenagers like messaging and texting is that they can talk with their friends without their parents overhearing, and another study by Ling (2004) showed that teenagers use mobile phone texting not just to stay in contact with their friends but also to stay out of reach of their parents.
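The extra layer of meaning that read receipts add can be pictured as a simple message state machine. The sketch below is our own schematic illustration; the state names and tick glosses are assumptions modelled loosely on WhatsApp-style indicators, not a description of any platform’s actual implementation:

```python
from enum import Enum

# Hypothetical message states, modelled loosely on WhatsApp-style ticks.
class MessageStatus(Enum):
    SENT = "one grey tick"        # message has left the sender's device
    DELIVERED = "two grey ticks"  # message has reached the recipient's device
    READ = "two blue ticks"       # message has been opened by the recipient

def interpret_silence(status: MessageStatus) -> str:
    """A rough gloss of what an unanswered message 'means' at each state."""
    if status is MessageStatus.READ:
        return "seen but unanswered: the silence itself carries meaning"
    if status is MessageStatus.DELIVERED:
        return "delivered but unread: they may simply be busy"
    return "not yet delivered: little can be inferred"

print(interpret_silence(MessageStatus.READ))
```

The point of the sketch is that each state licenses a different interpretation of the same silence, which is exactly the interpretive work users describe doing with blue ticks.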

‘Phatic’ communication

As we said above, because of their low transaction costs, text messages and image sharing make it easy for people to exchange thoughts, ideas, experiences, and feelings with others which they might not otherwise have bothered to share. For some, however, the kinds of messages people share through chat clients or on social media sites seem ‘banal’, ‘trivial’, or even lacking in ‘content’ altogether. Why is it necessary for people to endlessly exchange emojis or pictures of the coffee they are drinking at Starbucks? But often the ‘content’ is not the point of this kind of communication; connection is. What people are often doing when they interact through lean media is not so much exchanging information as maintaining a sense of connection with their friends. The anthropologist Bronislaw Malinowski (1923) called this kind of communication phatic communication, and pointed out its importance for maintaining interpersonal relationships and social cohesiveness more generally. More than 70 years later, the evolutionary linguist Robin Dunbar (1996) argued that the main reason humans developed language in the first place was not to convey ‘meaning’ but to engage in phatic communication. Before language, he says, primates, including early humans, maintained social bonds through the practice of grooming, consisting of, among other things, picking insects out of each other’s fur. As human groups became larger, this was no longer an efficient means to achieve group solidarity, and so language, especially forms of conversation such as gossip and ‘small talk’, took over this function. Digital media have allowed us to take these practices of social grooming online. In fact, computer scientist Victoria Wang and her colleagues (2011) go so far as to label digital messaging and social media platforms ‘phatic technologies’.
This practice is particularly evident in the ritual exchanges of digital ‘tokens’ such as emojis and photos, as well as stickers and animated GIFs, in what have been referred to as ‘sticker competitions’ or ‘GIF wars’ (see e.g. Zhou et al., 2017). While lean media facilitate phatic communication, they are also particularly suited for completely instrumental exchanges of information: a boy informing his girlfriend that he’s going to be late, a wife asking her husband to pick up a carton of milk, or an employee informing her boss that she has completed a report. Again, it is not that these interactions did not take place before text-based digital communication, but rather that they took much more time and energy. Calling his girlfriend to say he’s going to be late increases the likelihood that the boy is going to have to explain why. By walking down the corridor to tell her boss that she has finished a report, the employee wastes both the boss’s time and her own. Text-based digital communication helps to make such purely instrumental transactions more efficient.

CASE STUDY: CHATBOTS

So far in this chapter, we have focused on the way humans interact with other humans online. Increasingly, however, people are also interacting with non-human conversational partners, sometimes with the full knowledge that the ‘person’ they are talking to is not a person, but sometimes unaware. Conversations with chatbots occur in all kinds of contexts: companies use them for online customer service, online counselling sites use them to give people ‘someone’ to talk to, and hackers and scammers use them to try to get people to give up personal information such as their bank account numbers. A bot is a computer program that can perform some kind of automated task, such as running an internet search or playing an online game. A chatbot is a bot that can imitate human conversation, either through text or through electronic voice simulation. Nowadays, people are becoming more accustomed to talking to machines online. According to the website smallbizgenius.net, when we were writing this book there were over 300,000 chatbots on Facebook, and by the time this book is published over 85% of all customer service interactions will be handled by non-humans. These trends raise important questions about online language and interaction (whether through written text or through voice-based interfaces): questions about how chatbots are actually able to imitate human conversation, and about how chatting with non-human conversationalists might actually alter the way real people talk.

Intimate machines

The first famous chatbot was a program called ELIZA, developed in the mid-1960s by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. Weizenbaum is widely considered one of the fathers of the field of artificial intelligence, but there was nothing particularly intelligent about ELIZA, at least not compared to today’s sophisticated algorithms.
The program was based on a technique called pattern matching in which a program searches for patterns (such as sequences of words or letters) and then responds to them using a pre-programmed set of responses or ‘script’. This technique allowed ELIZA to respond to things that human users typed based on keywords and phrases in a way that made it seem like it actually understood what the human user meant. The most popular script Weizenbaum developed was based on a technique in psychoanalysis in which doctors try to get people to talk about their feelings by getting them to elaborate
on things that they have already said. As it turns out, engaging in this kind of conversation is not particularly difficult for a bot that can recognize lexical patterns and is programmed with the right kind of script. Below is an example of a conversation between ELIZA and a human user:

Human: Men are all alike.
ELIZA: IN WHAT WAY
Human: They’re always bugging us about something or other.
ELIZA: CAN YOU THINK OF A SPECIFIC EXAMPLE
Human: Well, my boyfriend made me come here.
ELIZA: YOUR BOYFRIEND MADE YOU COME HERE
Human: He says I’m depressed much of the time.
ELIZA: I AM SORRY TO HEAR YOU ARE DEPRESSED
(Weizenbaum, 1976: 3)

One of the things that surprised Weizenbaum the most about his program, however, was not that it was able to imitate this rather predictable genre of spoken discourse, but rather the degree to which people who used the program seemed to form an emotional attachment to it. He wrote (1976: 6):

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.

In a way, this is not particularly surprising given what we said above about the way digitally mediated communication can facilitate phatic communication, and the highly patterned, almost ritualistic form this kind of communication often takes. It might be argued that phatic exchanges in which conversationalists exchange tokens of mutual regard or mirror back to each other what the other has said are actually easier for a chatbot to handle than exchanges that involve debating abstract ideas or solving problems (though nowadays chatbots are pretty good at these kinds of exchanges as well).
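Pattern matching of this kind is simple enough to sketch in a few lines of code. The rules below are invented for illustration (they are not Weizenbaum’s original DOCTOR script): each rule pairs a keyword pattern with one or more canned response templates, some of which echo part of the user’s input back, much as in the exchange quoted above:

```python
import random
import re

# A minimal sketch of ELIZA-style pattern matching. The rules are our own
# invented examples; {0} in a template echoes back part of the user's input.
RULES = [
    (re.compile(r"\bi am (.+)", re.I),
     ["I AM SORRY TO HEAR YOU ARE {0}", "HOW LONG HAVE YOU BEEN {0}"]),
    (re.compile(r"\bmy (.+)", re.I),
     ["TELL ME MORE ABOUT YOUR {0}"]),
    (re.compile(r"\balways\b", re.I),
     ["CAN YOU THINK OF A SPECIFIC EXAMPLE"]),
    (re.compile(r"\ball\b", re.I),
     ["IN WHAT WAY"]),
]
DEFAULT = ["PLEASE GO ON", "WHY DO YOU SAY THAT"]

def respond(utterance: str) -> str:
    """Return a scripted response for the first rule whose pattern matches."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the captured fragment, stripped of trailing punctuation.
            echo = match.group(1).rstrip(".!?").upper() if match.groups() else ""
            return random.choice(templates).format(echo)
    return random.choice(DEFAULT)

print(respond("Men are all alike."))               # IN WHAT WAY
print(respond("I am depressed much of the time."))
```

Even this crude matcher reproduces the opening exchange of the conversation quoted above, without any understanding of what is being said — which is precisely Weizenbaum’s point.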
Today’s bots, even those designed for more instrumental purposes such as marketing and customer service, often integrate phatic elements such as non-conventional punctuation and emojis. One example is

Figure 5.1 Mica responding to small talk.

‘Mica the hipster catbot’ (https://hipstercatbot.com). Mica is designed to help users find ‘hip’ restaurants and coffee shops in their location, but is also able to engage in small talk (Figure 5.1) and even to send cute cat photos when he doesn’t quite understand what you have said (Figure 5.2).

Figure 5.2 Mica sending a cat pic.

A bit too human

One reason that bots today are getting better at imitating humans is that they no longer depend on the crude technique of pattern recognition and the fixed set of scripts that ELIZA used. Instead, most bots today make use of machine learning algorithms that allow them to statistically predict what kinds of responses are appropriate (both in terms of content and in terms of language use) based on an analysis of big datasets of human interaction archived by chat platforms and social media sites. In other words, contemporary bots are able to learn from us how we talk online, and even to alter the way they talk when people start talking differently. In fact, whenever we interact with other humans online, we are contributing to the education of bots. It is not surprising, then, that many of the linguistic innovations that humans introduce into their online communication become features of bot talk. Of course, there are downsides to this. One of these stems from the fact that human interaction online does not always resemble the kind of friendly, phatic communication we have been focusing on in this chapter. Much interaction online is abusive and incendiary, designed to bully or harass others (see Chapter 10), and those less-than-pleasant conversations also make up part of the big datasets from which bots learn how to ‘act human’.
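The way data-driven bots ‘learn from us’ can be illustrated with a toy retrieval-based bot, entirely of our own invention: it answers by reusing the reply that followed the most similar past prompt in its corpus, so whatever style the human data contains comes out in its replies. Real systems replace this word-overlap heuristic with statistical models trained on vastly larger datasets, but the dependence on human data is the same:

```python
# A tiny, invented corpus of past human (prompt, reply) exchanges.
# The bot's 'voice' is whatever style these replies happen to have.
CORPUS = [
    ("how are you", "gr8 thx!! u??"),
    ("what are you doing", "nm just chillin lol"),
    ("do you like music", "omg yessss, big time"),
]

def overlap(a: str, b: str) -> int:
    """Crude similarity: count the words the two utterances share."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(utterance: str) -> str:
    """Reuse the reply attached to the most similar stored prompt."""
    _, best_reply = max(CORPUS, key=lambda pair: overlap(utterance, pair[0]))
    return best_reply

print(reply("hey, how are you today?"))  # gr8 thx!! u??
```

Swap in a corpus of abusive exchanges and the same code will produce abusive replies — the mechanism behind the stories of Tay, Xiao Bing, and BabyQ in miniature.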

Perhaps the most famous example of the pitfalls of machine learning was a Twitter bot designed by Microsoft named Tay. Tay started out as a rather friendly, innocent interlocutor, but she was soon targeted by armies of internet trolls who flooded her Twitter stream with hateful comments, and since Tay was designed to learn how to interact from those with whom she interacted, it wasn’t long before she began sending hateful tweets herself, laced with racist comments and swastikas, until finally Microsoft had no choice but to shut her down. Xu (2018) tells a similar story about how a pair of chatbots in China named Xiao Bing and BabyQ, on the popular instant messaging clients WeChat and QQ, went ‘rogue’, starting to imitate the messages of netizen activists that had managed to get past the government’s sophisticated censorship apparatus, and beginning to respond to users with politically subversive messages. The bots were quickly removed, and when they were reintroduced a few months later they had been reprogrammed to avoid talking about politically sensitive topics. Some might argue that the stories of Tay, Xiao Bing, and BabyQ prove how efficient artificial intelligence has become at learning how to talk like humans. At the same time, they also demonstrate something about the nature of human online communication, especially on platforms like Twitter. Just as humans can now influence the way bots communicate, there is also growing evidence that communicating with bots can change the way humans communicate. For example, computer scientist Jane Hill and her colleagues (2015) compared the way people interacted with a bot called Cleverbot to the way they interacted online with other humans. They found that people actually tended to have longer conversations with the chatbot than they did with people, but their messages in these conversations tended to contain fewer words, perhaps adapting to the generally shorter messages sent by bots.
Perhaps the most interesting finding was that people seemed much more willing to be abusive to bots. The team found an almost 30-fold average increase in profanity in the human–chatbot conversations compared to those between two humans.

The richness of lean media: Mode-mixing and mode-switching

So far in this chapter, we have been focusing on the ‘leanness’ of chat-based interactions, as well as other kinds of relatively ‘lean’ interaction such as
photo sharing. But it should be clear from our discussion that these forms of interaction are often not as lean as they might at first seem, but often involve complex combinations of modes (see Chapter 4). In text-based interactions people often mix graphic material with their words in the form of emojis, animated GIFs, and stickers, and most platforms that people use to exchange images, such as Snapchat and Instagram, allow users to add words, emojis, and even drawings to their images. But there is still a big difference between the way modes work together in rich media, such as face-to-face interaction and video chat, and leaner forms of media such as text-based messaging. For one thing, when we are interacting with someone face to face (or through a video camera) the main modes that we use (voice, gestures, facial expressions) are what we can call embodied modes. Modes such as writing and images are disembodied modes. While photographs such as ‘selfies’ involve the use of embodied modes when we produce them (specifically facial expressions and gestures), the whole point of taking a photograph is to create a ‘document’ that is separate from the body. The way we mix modes in real-time face-to-face conversations is extremely complex, with our words, intonation, stress, voice quality, gestures, facial expressions, posture, and proxemics tightly integrated with one another in a way that makes it difficult to separate one mode from another. With leaner media, on the other hand, while multiple modes are often used, we usually have more choice about which modes we want to use and how to combine them. Sometimes we mix modes together in a single text or ‘utterance’, as, for example, when we combine words and emojis in a single text, and sometimes we switch modes, using one mode at one point in the communication and another mode at another point, as when we switch from text to voice message in WhatsApp.
Understanding the affordances that different platforms provide for mode mixing and mode switching is an important part of understanding how people make meaning and conduct interactions using these platforms. Snapchat is a good example of an app which facilitates mode mixing, allowing users to combine images, text, emojis, and other symbols, as well as handwriting and drawing in a single message. The important thing about mode mixing is the way the different modes are combined to produce the message. The ‘meaning’ of one particular modal element, such as an emoji or a word, can’t be understood without reference to the other elements in the composition, a point we touched upon in the last chapter in our discussion of multimodal design. Mode switching is when people switch from one mode to another in different messages. For example, you might use text in one message, and then switch to an image or an animated GIF in your next message. You could do this using an app like WhatsApp, which allows users to switch from sending text messages, to sending images, to sending voice messages, to even
engaging in real-time voice or video interaction. There are all sorts of practical reasons why someone might choose to switch modes while using an application like WhatsApp. For example, they may be exchanging voice messages with someone, but then switch to using text when they move into a situation where there are people who might overhear their conversation. At the same time, switching modes might also have some kind of interactional meaning: switching from text messages to a voice message, for example, might signal that I’m angry, annoyed, or have something important to tell you. It is also important to remember that the different modes that WhatsApp makes available have different transaction costs in different situations. Apps like Snapchat, which allow people to mix modes in complex ways, invite users to exploit the meaning-making potential of simultaneity, the way different modes work together when they are presented at the same time. In this regard, users might pay more attention to the spatial aspects of message design, for instance, where in a picture to superimpose a particular text or emoji. Mode switching orients users to the meaning-making potential of sequentiality, the way meaning is made when things follow each other in sequence. For instance, a user’s choice to send a voice message rather than a text message ‘says something’ about the way they interpreted the previous message and creates expectations about how the subsequent message might be formed.

Conclusion

In this chapter we have discussed some of the ways the language of text-based digital communication is different from other kinds of written and spoken language, and some of the reasons for these differences. We have offered two kinds of explanations: those based on media effects, which focus on the way media themselves influence the way people use language, and those based on user effects, the way particular kinds of users exploit the affordances and constraints of text-based messaging in order to present themselves as certain kinds of people or to manage their relationships with others. We have also considered why text-based communication continues to be such a pervasive mode of digital interaction, even when other more multimodal alternatives are readily available. One reason we discussed was the low transaction costs of text, a factor which encourages people to engage in many kinds of interactions which would involve too much time or effort using other modes. Another reason we mentioned was the unprecedented amount of control text-based communication gives people over managing the meanings they make and the identities they create. Finally, we discussed how sometimes the very constraints that seem to make communication more difficult can actually foster more creative forms of self-expression, as

Online language and social interaction 111

well as encouraging more ‘phatic’ communication, communication whose purpose is not to exchange information but to increase intimacy. Throughout this chapter we considered the multimodal dimensions of text-based communication, the fact that it almost never just involves ‘text’, but often includes elements such as emojis, stickers, and animated GIFs, and we explored the ways different platforms allow users to mix modes in different ways or to switch from one mode to another. Much of our discussion in this chapter was based on a distinction between ‘rich media’ and ‘lean’ media. Rich media allow people to use more modes and to combine them in more complex ways. They are also usually more synchronous and give users more opportunities for mutual monitoring. Face-to-face communication is considered a particularly ‘rich’ form of communication, because we can use our bodies, the tone of our voices, and aspects of the material world along with the words that we say to communicate. Lean media usually involves fewer modes used in a more controlled way, and is usually asynchronous and disembodied. It is important to remember that ‘richness’ and ‘leanness’ are not distinct categories, but more of a continuum, and that particular media can be used in ‘richer’ or ‘leaner’ ways. Snapchat, for instance, can be used to exchange rich, multimodal texts involving complex combinations of words, images, and emojis, or it can be used to send simple text messages written on top of a black background. In fact, whether or not someone chooses to use a particular medium in a rich way or a lean way itself communicates something to the people they are messaging. Early forms of text-based computer-mediated communication (such as internet relay chat and email) were particularly lean, involving fewer modes and giving users a feeling of ‘disembodiment’. As digital technologies have developed, however, they have not just become more multimodal, but also more embodied. 
In the next chapter we will explore how digital technologies are increasingly providing users ways to use their physical bodies and aspects of the material world as resources for communication. Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 6

Mobility and materiality

Perhaps the two most important advances in digital technology in the past two decades have been 1) the ability of people to access the internet through small portable devices and 2) the increasing integration of digital technologies into the physical world. As a consequence, digitally mediated communication has not only become more mobile and ubiquitous, but physical spaces and physical objects (including our physical bodies) have become important resources for communication. Applications that allow us to create and transmit texts and utterances using aspects of our physical surroundings (sights, sounds, and even haptic sensations), and which locate us and those with whom we are communicating in specific physical spaces (such as digital maps, location-based dating apps, or ride-sharing apps), highlight the fact that understanding digital literacies increasingly requires that we understand how ‘meanings are made across time, across space, in and through matter’ (Lemke, 2011: 143, emphasis ours). In this chapter we will examine how digital technologies have changed our relationship with time, space, and the material world. They have changed our relationship with time and space in part by allowing us to inhabit multiple spaces at the same time. This, in turn, has altered the ways we interact with other people, making it easier for us to engage in multiple interactions ‘layered’ across different spaces and different timescales. They also make it possible for us to exploit the communicative potential of location and movement. When we communicate with each other at a distance, we no longer do so from fixed locations, as we did with landline telephones. Rather, we communicate as we move through space, and the way we navigate spaces and locate ourselves and other people in them has become a crucial aspect of our communication.
Mobile digital technologies have helped to create a situation in which people are both ‘always on’ (Baron, 2010), that is, always available for communication no matter where they are, and always connected to other people and to digital networks, always sending out signals about where they are and often what they are doing, even when they are not consciously communicating (Licoppe, 2016).


Digital technologies have also changed our relationship with the physical world and with our own bodies by making it possible for us to use embodied modes of communication (see Chapter 5)—gestures, facial expressions, movement, and even touch—from a distance, through video chat and through devices (like the Apple Watch) which allow users to send physical sensations (such as their heartbeat) to other people. At the same time, as digital communication has become more physical, physical reality has become more digital: our experience of the physical world is increasingly augmented by information from our digital devices, and physical objects (such as cars, refrigerators, toothbrushes, and human bodies) are increasingly connected to digital networks in what is referred to as the Internet of Things (IoT).

Mobility and hybrid spaces

In the early days of the internet, people used to talk about cyberspace as if it were some kind of alternate reality separate from the physical spaces that people inhabit. This way of thinking about the internet as a separate ‘virtual’ world was celebrated in science fiction classics of the time such as William Gibson’s 1984 novel Neuromancer, in which characters connected themselves to digital networks and traversed ‘cyberspace’, which Gibson described as: ‘Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding’ (1984: 51). At that time, what people were trying to express with the term ‘cyberspace’ was the feeling that digital technologies were altering their experience of space and time. But, in reality, this altered experience wasn’t so much that digital technologies allowed people to inhabit a space that was separate from physical space, but rather that computers allowed them to layer time and space in new ways. This reconfiguration of our relationship with time and space is something that has accompanied the development of every new communication technology, all the way back to the invention of writing, which, for the first time, allowed people to transport their words across physical and temporal distances. Equally profound disruptions came with the development of electronic media—the telephone, radio, and television. In his book No Sense of Place (1985), for example, media scholar Joshua Meyrowitz argues that television had the effect of breaking down barriers between physical spaces, transporting people to places and spaces that they had never before had access to, and consequently contributing to a breakdown of the boundaries between ‘public’ and ‘private’ spaces. As these technologies became mobile, people were able to use them to control their relationship with physical spaces outside of their homes.
Historian Wolfgang Schivelbusch (1986), for example, notes how the invention of the paperback book in the nineteenth century gave people a way to manage the awkward situations of having to wait alone in crowded urban spaces or ride in carriages with other people during long train journeys by creating what Goffman (1963: 38) calls ‘involvement shields’. De Souza e Silva and Firth (2008) note how devices like the Sony Walkman and Apple’s iPod dramatically altered people’s experience of urban space, allowing them not just to block out the sounds of the city, but also to augment their experience of moving through space with their own soundtrack. This hearkens back to the point we made in Chapter 2 about how media allow us to filter and manage the data that are available to us in order to be able to create useful information. They also allow us to filter and manage our social interactions, whether those interactions are occurring in digital or physical environments. These reconfigurations of time and space made possible by media have never made physical spaces irrelevant—the classroom where school children read about faraway lands and the living room where family members watch television together inevitably affect what is being read or watched and how it is understood. As de Souza e Silva and Firth (2008: 39) note in relation to the paperback book:

the reader does not completely withdraw from the space of the train into the narrative of the novel. She is both in the train compartment and in the space of the novel. The experience of the narrative is shaped by the place she is sitting, as much as the experience of the place is shaped by the narrative.

By immersing themselves in novels, readers are able to experience a place differently by, for example, feeling that a subway trip is more enjoyable or by imagining a coffee shop as a personal reading room. What media allow people to do, then, is to inhabit multiple spaces simultaneously, and to experience space as layered.
This layering of space, however, introduces new challenges for communication, requiring people to constantly readjust their frames of temporal and spatial reference when interpreting different messages, and to work together to achieve a shared understanding of the context of communications that occur at the nexus of multiple overlapping spaces and timeframes (Lyons & Tagg, 2019). Examining the way people used chatrooms and instant messaging applications in the early 2000s, Jones (2005) observed that when people were communicating through their computers, they were experiencing at least five different spaces: 1) the physical spaces where they sat in front of their computers, 2) the ‘virtual’ spaces of chatrooms, 3) the space on their screens where they could, for example, manage their interactions in multiple chat windows, 4) their geographical location and the location(s) of the people they were chatting with, and 5) the ‘relational’ space, meaning the physical and emotional distance between them and other people online—the relative feeling of ‘presence’ or ‘absence’ they had with other people.


Mobile digital technology has made this already complex configuration of spaces even more so. The communications scholar Adriana de Souza e Silva argues that mobile technologies help to create what she calls hybrid spaces, ‘mobile spaces, created by the constant movement of users who carry portable devices continuously connected to the internet and to other users.’ This ‘always on’ connection, she continues, ‘transforms our experience of space by enfolding remote contexts inside the present context’ (2006: 262). They have also altered our experience of what Jones (2005) called ‘relational space’: the experience of co-presence that people often feel when talking on the phone or engaging in text-based chat has become a sense of ‘ambient copresence’ (Ito & Okabe, 2005, emphasis ours), an ‘ongoing background awareness of others’ achieved through ‘keeping multiple channels of communication open’ no matter where one is (264). Again, however, it must be emphasized that these devices do not take us out of physical spaces. Instead, activities in online and offline spaces often intertwine and mutually support each other. When we are talking to someone on our mobile phone, not only are we and the other party inhabiting different physical spaces, but these spaces might be changing as we talk, and as the conditions of our physical location change, so might the conditions of our conversation. Sometimes, for example, we might engage in multiple simultaneous involvements with people on the phone and people inhabiting the physical spaces through which we move, and in some cases, the separate spaces inhabited by people talking on the phone might merge, as when people use their mobile phones to locate each other in a crowded public place and suddenly find themselves standing right next to each other talking on the phone.

Locative media

Perhaps the most dramatic difference between digital mobile technologies and earlier mobile technologies is the affordance they make available for users to filter the kinds of information they get from their digital devices based on their location, and to automatically communicate their location and locate others in physical space. The location services built into smartphones, which use a combination of information from GPS, Bluetooth, Wi-Fi hotspots, and mobile phone masts to determine the geographical coordinates of users, are integral to the operation of many mobile apps—not just the obvious ones like map apps (see Chapter 2), weather apps, fitness apps, and ride-sharing apps—but also apps for which the users’ location may not seem at first to be crucial information, such as shopping apps, entertainment apps, calendar apps, camera apps, dictionary apps, and social networking apps like Facebook, Instagram, and Twitter. Many apps that use location-based services allow users to interact with or comment upon their locations by, for example, attaching pictures or writing reviews, which can contribute to altering the meaning of these locations for other users. Users of such apps are able to access various kinds of information based on where they are, having a more personalized experience of space. But while such apps help people to locate things in space, they also render the users themselves locatable, bringing different affordances and constraints to the way they interact with others and the way they are able to present themselves both online and offline.

One example of an app which makes use of location-based services is Snapchat. In 2017, Snapchat rolled out its ‘Snap Maps’ feature, which places users’ followers on a map in the form of cartoon renderings called ‘ActionMojis’, which change based on people’s location or the time of day. The app does allow users to opt out of appearing on other people’s Snap Maps by entering what is called ‘Ghost Mode’, creating the ability for them to communicate their location selectively, turning the feature on when they want to make themselves available to nearby friends or to communicate that they are at an interesting location, and turning it off when they are someplace that they don’t want others to know about. Such features transform location into a communicative resource, and also alter the possibilities people have for face-to-face interaction (i.e. allowing them to locate people nearby). At the same time, they can also give rise to various complications in terms of managing social interactions and relationships. You might, for example, find that someone you know is nearby, but not want to interact with them, or you may find that making information available to people about where you have been or whom you have been with might affect how you can interact with them later on.
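Under the hood, ‘nearby’ features like these come down to arithmetic on the geographical coordinates that location services supply. As a minimal, purely illustrative sketch (the function names and the example radius are our own inventions, not taken from any app’s actual code), the distance between two latitude/longitude points can be estimated with the well-known haversine formula and compared against a user-set radius:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def within_radius(me, other, radius_km):
    """Is `other` (lat, lon) within `radius_km` of `me`?
    This is the kind of check behind a dating app's search radius."""
    return haversine_km(*me, *other) <= radius_km

# London and Paris are roughly 344 km apart, so a 10 km radius excludes Paris.
london, paris = (51.5074, -0.1278), (48.8566, 2.3522)
print(haversine_km(*london, *paris))   # roughly 344 km
print(within_radius(london, paris, 10.0))
```

The hard part for a real location service is not this comparison but estimating the coordinates in the first place, which, as noted above, involves combining signals from GPS, Wi-Fi hotspots, and phone masts.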
One possible affordance of this feature is that it helps people to avoid loneliness by maximizing their opportunities to engage with others nearby (Constine, 2017), but some users have noted that the ability to see where your friends are and who they are with can also exacerbate feelings of loneliness. One user writes:

[I]t is important to note that life can be really lonely, and college can be really, really lonely. Everyone has times when they feel alone, and having immediate access to the locations of all of your friends is the absolute last thing you need in those moments. Look at all the people together at lunch—wonder why you weren’t invited? Or why on earth is Sarah hanging out with James? It creates more negativity. There is almost no instance in which checking Snapmaps when you’re bored actually makes you feel better.
(Nann, 2020: para 5)

Another example of how apps that use location-based services can alter physical spaces and the way people conduct social interaction is location-based dating apps such as Tinder and Grindr. In a small study on the ways university students in the UK use Tinder (Jones, 2016), users described how people use the app to find potential companions in busy bars and clubs. The ability to make initial contact via Tinder, they said, helped to relieve the awkwardness of striking up a conversation with a stranger. ‘If you see someone on their phone in the bar,’ one participant said, ‘they are either looking for a drunk friend or on Tinder.’ Another interesting aspect of the way participants used the app was that they often did so together with the people they were with, huddling together and directing their attention at one person’s screen, making suggestions about which profiles the phone’s owner should ‘swipe right’ on. This observation highlights the ways mobile phones and location-based apps can contribute to altering physical configurations and focal involvements in physical spaces. Finally, people’s use of dating apps of this kind also intersects with their understandings of and prejudices about different geographical locations. Some users, for example, set the location radius strategically to limit or expand the geographical areas of the profiles they had access to based on their ideas about the kinds of people who live in different places, and a number of participants in the study opted to turn the app off when they left university and returned to their home towns. One of the key affordances of location-based apps, then, is the way they allow users to use location as a way to manage their social identities and to infer the social identities of others.
By choosing to make themselves visible or ‘check in’ in some places rather than others, users aim to communicate to their social networks certain qualities about themselves, and by checking where other people are, and sometimes choosing to engage with or avoid others based on their location, users often reinforce space-based prejudices, which are sometimes also intertwined with class- and race-based prejudices. Perhaps the most important effect of location-based apps on social relationships, however, is their potential to alter relations of power by making it more difficult for people to manage the kind of information they make available to others. Nowadays, for example, parents regularly use the location-based services on their children’s phones to track their activities, which could severely curtail children’s autonomy depending on how parents use it. But even the peer-to-peer ‘lateral surveillance’ (see Chapter 12) that people engage in using social media apps such as Snapchat can change the power dynamics between friends. Finally, users of such apps (and other apps which they may not even be aware are tracking their location) end up sharing large amounts of (sometimes intimate) information with commercial entities, which can use it to more effectively manipulate them into buying certain things or visiting certain shops or restaurants.


ACTIVITY: LOCATIVE TECHNOLOGIES

Many apps on your phone collect data about your location for various purposes, and some app developers are less than transparent about the purposes of this collection. For this activity, do a survey of all of the apps on your phone that are using location data by checking your phone’s settings. If you have an iPhone, open the Settings app and navigate to Privacy. Tap on Location Services, and you’ll see the individual location settings (such as ‘While Using’ or ‘Always’) for every app that is collecting data about your location. If you have an Android phone, open Settings and then Apps & Notifications. Pick an app, and tap on Permissions to see whether Location is turned on or off. You can also access a list of all apps and their location settings by scrolling to the bottom of the Apps & Notifications screen, tapping Advanced, then App Permissions, and then tapping Location.

1. Try to decide why each of these apps is collecting information about your location. Is location information necessary for certain functions that the app provides? Apart from functionality, are there other reasons why app developers might want to collect data about your location?
2. Based on your survey, do you want to alter the permission you give to any apps on your list to gather location-based information? Why or why not?
3. Why do you think Android phones require so many steps for users to access a list of apps and their location settings (see Chapter 7)?
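If you record your survey results as you go, even a tiny script can help you organize them for step 2. The sketch below is purely illustrative: the app names and permission settings are invented examples, not data from any real device.

```python
# Illustrative only: invented survey results, not data from a real phone.
survey = {
    "Maps": "Always",
    "Weather": "While Using",
    "Dictionary": "Always",
    "Camera": "While Using",
    "Notes": "Never",
}

# Apps granted 'Always' access deserve the closest scrutiny, since they
# can record your location even when you are not actively using them.
always_on = sorted(app for app, setting in survey.items()
                   if setting == "Always")
print(always_on)  # ['Dictionary', 'Maps']
```

Asking yourself why a dictionary app, say, would ever need ‘Always’ access is exactly the kind of question step 1 of the activity is after.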

Placemaking with digital images

Usually, when we think of the idea of ‘location’ we are mostly focused on where someone or something is in relation to other people or things. And when we speak of locating something or someone, we are mostly focused on ‘finding’ them—determining their position in space; we might try to locate a place on a map or to locate a friend in a crowd. Location is often conceived of in terms of geographical coordinates (latitude and longitude), and most apps that use location-based services use technologies based on measuring the distance between different things (such as cell phone towers and satellites). While many location-based apps function chiefly by helping us to ‘locate’ people and places, they usually go beyond this very basic idea of location as a particular set of geographical coordinates. As we have seen above, apps like Snapchat and Tinder also serve to make locations meaningful. Someone’s location may be meaningful because that location is ‘close to me’. But often what makes one’s location meaningful is that it is associated with a certain place—a neighbourhood, a restaurant or bar, a sports stadium, or someone’s home. Even apps like Google Maps, whose main purpose is location, usually also function to make places meaningful by, for example, allowing users to access pictures of the places they are trying to locate and read reviews and descriptions written by people who have been there before, and then allowing them to rate or comment on the places they have been. In this section we will discuss another important way mobile digital technologies, especially social media apps like Snapchat, change our relationship with space: by allowing users to turn spaces into places, and to use placemaking as a means of communication.

Philosophers and geographers have, for a long time, argued that there is a difference between ‘space’ and ‘place’, places usually being seen as spaces that have ‘a discursive/symbolic meaning… beyond that of mere location, so that events that occur there have a particular significance’ (Harvey, 1996: 293). For philosopher Michel de Certeau (1988), what makes spaces places are the social practices that people perform in them and through them, and geographer Doreen Massey talks about places as being ‘thrown together’ as a result of the interaction among people, physical objects, and meanings. Perhaps the most famous philosophical treatment of space comes from the French philosopher Henri Lefebvre (1991), who argued that space is made meaningful (i.e. turned into ‘place’) through a combination of 1) what we do within that space (how we move through it or stand in it and whom we interact with when we are there), 2) how we experience the space (the kinds of personal meanings that we give to it), and 3) what other people in the society tell us about that space (the kinds of labels, descriptions, or values that have been used to represent it in the past and the ways ideologies and power relations are built into it).

Think, for example, about what makes the location of a classroom into a certain kind of ‘place’ for you. Your understanding of this space as a place is affected by how you move through it or locate yourself in it and what you do there (whether you are sitting in the front or the back row, whether or not you are paying attention to the teacher or playing with your phone under the desk). It is also affected by the meanings you associate with classrooms in general and this classroom in particular (Do you like school? Do you like this teacher?). Finally, it is affected by ideas about what you are supposed to do in classrooms promoted by society, ideas that are reinforced by the arrangement of the furniture, often rows of desks all facing towards the teacher. Of course, these three dimensions of place don’t always fit together comfortably—what students want to do in classrooms is not always what they are ‘supposed to do’.


So places like classrooms, restaurants, monuments, and museums are not stable configurations of space, bodies, and architectural structures, but things that are constantly made and remade by the people that interact with them and in them. We engage in placemaking by means of our bodies—how we situate ourselves in spaces, which affects how we experience those spaces as well as how others experience them; by means of our social relationships—who we interact with in those spaces and what different people are able to do in different spaces; and by means of our communication and meaning-making—how we talk and make texts in and about spaces. In other words, places are spaces that are embodied, relational, and meaningful. Different kinds of media provide different kinds of affordances and constraints for placemaking, and for using placemaking as a communicative resource.

One of the most common ways we use mobile digital technologies for placemaking is through apps that, along with location-based services, also use the digital cameras embedded in almost all mobile phones. While photography has been an important technology for placemaking ever since the invention of portable cameras, apps like Instagram and Snapchat have made exchanging images of places (and images of users’ bodies in those places), often at the very moment they are inhabiting those places, part of our everyday communication. While location-based services allow users to use their location as a communicative resource, camera-based apps like Snapchat and Instagram allow users to make places out of those locations by making available new ways of ‘throwing together’ (Massey, 1995) different aspects of embodiment, relationships, and meaning. An example of this can be seen in Figure 6.1, which comes from a study of how female university students in Saudi Arabia use Snapchat (Albarwardi, 2017; see also Albawardi & Jones, 2019; Jones, 2020b).
Here the user is communicating to her friends the fact that she is in her translation class, but rather than simply ‘checking in’ to share her location, she engages in placemaking—using the camera angle to express the embodied experience of sitting at a particular desk in that classroom, adding to that additional information about how she feels about the classroom in the form of words and emojis superimposed on the picture (‘Life is hard in translation class’), and even communicating something about the power dynamics relevant to that space (where students are prohibited from having their mobile phones, making it necessary for her to position her camera in a way that the teacher can’t see it when she is taking the picture). The digital camera embedded in her small, inconspicuous mobile phone, the affordances of Snapchat that enable users to place different kinds of semiotic objects (words, drawings, emojis) on their pictures (see Chapter 5), and the ability to transmit this image immediately after it was taken to other users (possibly including people sitting in the same classroom) all work together to enable this user not just to communicate something about the place where she is, but to communicate about what it is like to be there through her situated practices of placemaking.

Figure 6.1 Snapchat image 1.

Similar points about the affordances of Snapchat (and other mobile imaging apps) can be made in relation to Figures 6.2 and 6.3. Figure 6.2, showing a picture of the user’s foot striding forward with the caption ‘Looking for food’, shows how the app allows the user not just to communicate about where she is and what she is doing, but to depict her embodied experience of moving through space from her own perspective. Figure 6.3 highlights another affordance of mobile phone cameras, the front-facing lens, which people can use to take ‘selfies’ (see Chapter 4) that serve to situate their bodies in a particular place, or, as in this example, transform their bodies into a place that can be drawn upon and commented on.


Figure 6.2 Snapchat image 2.

Wargo (2015) calls such practices ‘embodied composing’ because users engage their bodies and their embodied perspective of the spaces they inhabit to create texts. The power of being able to communicate not just with one’s location but with one’s embodied experience of space, says social psychologist Alan Radley (1996: 561), ‘involves a capacity to take up and to transform features of the mundane world in order to portray a “way of being”, an outlook, a style of life that shows itself in what it is.’

Figure 6.3 Snapchat image 3.

Embodiment

As should be clear from the observations above, one of the main affordances of mobile digital media is that they make possible new ways for people not just to use physical space but also to use their physical bodies as resources for communication. In face-to-face communication the embodied modes of gaze, gesture, facial expression, posture, and proxemics are important means for people to communicate, enabling them both to supplement the meanings they are making with spoken language and to make meanings independent of it. Along with the ‘lean media’ (see Chapter 5) that characterized most early computer-mediated communication (which primarily use the disembodied modes of writing and graphics), people now have access to much richer alternatives, the most obvious being video chat. Video, whether used to create content that people broadcast to their social networks (such as TikTok videos), or used to engage in real-time interactions, as with apps such as Skype, Zoom, or FaceTime, makes it possible for people to introduce more embodied modes into their digitally mediated communication; in other words, to communicate with their whole bodies rather than just their voices or still images of themselves.


In the first edition of this book, published in 2012, we noted that, although there were plenty of video chat applications available, they were not widely used, and often people preferred to interact through leaner media such as instant messaging platforms. One reason for this, we said, was that the transaction costs of video chats often outweigh their benefits: they require users to engage in a lot of ‘interactional work’ (often even more than in face-to-face communication) to open and close interactions and to establish the integrity of the channel of communication and keep it open (see Chapter 5). Synchronous video-based interaction, we said, also sometimes forces users to reveal more of themselves than they want to or need to. There were also technical reasons. Transmission of video over the slower mobile networks of the time often resulted in poor quality images, time delays, and large bills from telecom companies. Back then, people usually only engaged in video interactions on special occasions or with special people, or to engage in practices in which embodied modes were particularly important, such as cybersex (see Jones, 2008b).

A lot has changed since then. First of all, the 4G and 5G mobile networks in use today are much more suitable for the transmission of video content. Video interaction has also become more normalized. Apple’s FaceTime app (which was only two years old when the first edition of this book came out) is now considered by most people less like a separate app and more like a choice available when they make a phone call (voice or video?), and most messaging platforms such as WhatsApp, WeChat, and Facebook Messenger have integrated video chat functionality. Another important reason that video-based interaction has become so popular, of course, is the severe constraints on embodied face-to-face interactions brought about by the COVID-19 pandemic, which forced practices such as schooling, business meetings, and dating to migrate online.
2020 saw a surge in the popularity of a range of video-based platforms such as Zoom (2013) and Houseparty (2016), which, before the pandemic, had only managed to achieve modest uptake. Video-based channels of communication, especially when combined with the affordances of mobility, obviously create new ways for people to use their bodies and their physical environments to communicate. There are big differences, however, in how we make use of embodied modes in face-to-face communication and how we do so through video-based applications (as well as differences in how we do so when we are using different applications or different kinds of devices). In mobile video communication using apps such as FaceTime, for example, the face becomes the dominant embodied mode, partly because the short distance at which you can hold your phone from yourself makes it difficult to project a more complete display of your body, and partly because having to hold the phone in one hand limits the range of gestures you can produce. Proxemics (how the distance between people is used to communicate) is also severely constrained, though people

Mobility and materiality


have become adept at exploiting even the narrow distance between the hand and the face, sometimes holding their phones far away from themselves, and sometimes bringing them in for extreme close-ups. In fact, it might be argued that facial expression is an even more dominant mode in mobile video chat—where people normally display their faces on the screen for the greater part of the interaction—than it is in face-to-face communication. In the latter, people often direct their gaze away from the face of their interlocutors, focusing on other parts of their bodies or even looking away, and long periods of sustained attention to someone’s face are considered odd or signal some special meaning (such as an attempt to intimidate someone or to display extreme intimacy). In digital video interaction, displaying the face on the screen is taken to be a sign of listenership, and, although users will often point their cameras elsewhere to display something in their environment to their interlocutors, they almost always return to displaying their face as the default sign of engagement.
Sociologists of technology Christian Licoppe and Julien Morel argue that mobile video interaction (with apps such as Skype and FaceTime) is governed by a relatively universal set of shared expectations about how it will be managed:

(a) Video calls are patterned, often alternating between a ‘talking heads’ arrangement, in which both participants are on-screen and facing the camera, and moments in which they are producing various shots of their environment in line with their current interactional purposes;
(b) Conversational openings occur almost always in the talking heads arrangement;
(c) The video images on either side [of the call] are produced and expected to be scrutinized with respect to their relevance to the ongoing interaction;
(d) The talking heads arrangement is oriented to as a default mode of interaction, with the implication that when there is nothing else relevant to show, the participants should show themselves on-screen;
(e) In multiparty interactions, the party who is handling the video communication apparatus has an obligation to put other speakers on-screen when they talk, and the video callers orient to their appearance on-screen as making a distinctive participation status relevant.

(Licoppe & Morel, 2012: 399)

As with other media, the meaning-making potential of digital video interaction comes as much from what the medium allows users to conceal as from what it allows them to reveal. Video chat (especially with mobile digital devices) gives users control over what aspects of their environments (and their physical bodies) they make available, and, because of this, the frame of the camera becomes an important tool for communication. Moving the


Digital tools

camera so that the frame captures a particular part of the environment, or a particular object, or another person, is essentially what linguists call a deictic gesture, a way of pointing at an object or person or place. Not only is what is shown in the camera frame made meaningful, but users are also held accountable for what appears—they can’t just point their cameras randomly around their environments. As Licoppe and Morel (2012: 408) put it, ‘When one party video shoots something other than his or her own face, such images are available for monitoring and are scrutinized by the recipient with respect to their “gazeworthiness” and relevance to the ongoing interaction.’ Even the background can become meaningful in this medium, with users sometimes carefully matching their background to the interaction, as when people position themselves in front of bookcases for job interviews. Sometimes people’s physical environments become topics of conversation, either for instrumental reasons, or as a way for people to manage their relationships or smooth over potentially awkward moments in the call such as the opening or the closing. Sociolinguist Dorottya Cserző (2019), for example, in her research on Skype calls between friends and family members, notes that a common feature of such interactions is the ‘guided tour’, where people call attention to where they are and what they are doing as a kind of ‘small talk’. Another important difference between video and face-to-face interaction is that most video chat applications allow users to see themselves as well as their interlocutors, enabling them to monitor and attend to their own use of embodied modes and environmental resources. The ways users of video chat programs are able to use their bodies and their environments to communicate depend on the device they are using and the degree of mobility it affords.
It is much more difficult to turn and point a desktop computer (or even a laptop) than it is a mobile phone, and so users are much more constrained in what they can display in the frame apart from their faces. Even the way faces are displayed is different with different kinds of devices: with handheld mobile phones people usually display close-ups of their faces, while in calls using desktop or laptop computers people usually present a medium headshot that includes the face and the upper torso. The main point that we are trying to make here is that, while synchronous video-based interaction is much more embodied than text-based interaction (or even photo sharing), the way people use their bodies to communicate is very different from the way they do in face-to-face interaction. Video gives users much more control over what they are showing, making it possible for them to direct the gaze of their interlocutors in ways that they cannot in face-to-face interaction. At the same time, two-dimensional bodies shrunk to the dimensions of a computer or mobile phone screen and usually limited to the face are much more restricted in the ways they can express meaning than three-dimensional bodies in physical space (see the case study below).


CASE STUDY: WHY IS ZOOM SO EXHAUSTING?

One consequence of the COVID-19 pandemic was that interactions that normally took place face-to-face, such as work meetings, school classes, and social events (even weddings and funerals) migrated online. One result, as we noted above, was that video-based interaction using digital devices became much more commonplace and ‘normalized’. At the same time, people became more sensitive to the norms of interaction that began to grow up around such technologies in different contexts: Could you turn your camera off for all or part of the interaction? How should you respond when children or pets wandered into the frame? Where should you sit and what should you wear for different kinds of encounters? And should you display your environment, or should you use one of the electronically generated backgrounds that some applications make available? One almost universal observation arising from the increase in video-based interactions was the feeling of exhaustion people felt after long periods of interaction, even when that interaction was social in nature, and so meant to be ‘relaxing’. People began using terms like ‘video call fatigue’ and ‘Zoom burnout’. Of course, there are many factors that may contribute to such feelings, including the physical strain of sitting at a desk or holding up a phone or tablet, and of gazing at an illuminated screen for long periods of time. The experience of video call fatigue, however, also has a lot to teach us about the limits of embodied communication in the context of the kinds of tools that are currently available. Video call fatigue seems to be particularly pronounced for people who engage in more static desktop or laptop-based interactions in which the kinds of resources for making meaning afforded by mobile phones (e.g. 
being able to easily manipulate camera angles and point to things in the physical environment) are absent and users are mostly restricted to attending to each other’s faces and the words they speak (and sometimes type) to make sense of what’s going on. The problem with this is that, in embodied conversations between people who are physically present to each other, participants rely on a range of very subtle cues (such as slight changes in head position, facial expression, and gaze direction), cues that can come quickly one after another and are carefully entrained to the cues that are coming from the other person. Often these subtle cues help us to understand how to interpret people’s words, to tell how others are interpreting our words, or to know when it’s our turn to talk. Among their most important


functions, however, is to help us develop feelings of empathy or ‘connection’ with the person we are talking to. Scholars of social interaction have long observed that people in face-to-face conversations often come to develop what is known as ‘conversational synchrony’ as interactions progress: they start to talk and move in time with each other and mirror each other’s gestures and facial expressions (Richardson et al., 2008). Moreover, psychologists and neurologists have determined that many social behaviours are triggered automatically from the perception of similar behaviours in others—for example, people who see other people wince automatically wince themselves (Bavelas et al., 1986; Garrod & Pickering, 2004). When we are ‘in sync’ with others, we feel closer to them and we feel that the interaction is going well, but when we fail to achieve conversational synchrony, we are apt to feel vaguely uncomfortable or feel that the conversation is not going well. ‘The emotional/aesthetic experience of a perfectly tuned conversation is as ecstatic as an artistic experience,’ writes linguist Deborah Tannen (2005: 145). The important thing about these subtle cues and the way they contribute to conversational synchrony is that, although they are crucial for successful communication, we usually perceive and process them below the level of consciousness. The problem with most video-based interaction is that we are unable to perceive these subtle cues, and even when we are, they come at the wrong times due to freezing, jerkiness, or slight delays in transmission. As a result, we are liable to feel that something is not quite right about the conversation, and our brains have to work extra hard to fill in the missing information that usually comes from subtle non-verbal cues and compensate for the lack of conversational synchrony.
We are also unable to accurately mirror what other people are doing, making it harder to interpret how other people feel or to predict what they are going to say next. Finally, because the eyes of our interlocutors are not in the same place as our cameras, we find it difficult to use gaze as a communicative resource: to establish mutual gaze as a way of creating alignment, or to use gaze to manage things like turn-taking. When we think we are looking at someone, we appear to them to be looking slightly to the side of them. This form of communication is likely to leave people unsettled and fatigued for two reasons. First, no matter how many times we engage in video calls, the representation of the physical body of the other person on the screen makes us think that the cues that are normally part of face-to-face communication will be available to us, and when they don’t work the way we think they should, we become frustrated


and confused. Second, because we cannot rely on embodied modes as much as we would like to, we have to pay much closer attention to what people say to us and what we are saying to them to avoid misunderstandings. We also often have to spend time ‘repairing’ technical or interactional glitches. Actually, if we are looking to establish a sense of co-presence or intimacy with another person, we might be better off with a ‘leaner’ medium (see Chapter 5) such as voice-based telephony. When talking on the phone with someone, we are already prepared to compensate for the lack of visual cues, and, through years of engaging in phone conversations, we have become sensitive to slight shifts in someone’s tone of voice or the rhythm of their breathing. Back in the mid-1990s, in fact, media theorist Allucquère Rosanne Stone (1995) argued that the lack of visual cues was one thing that made phone sex so satisfying for so many people, compelling them to tune in to the subtle (and sexy) aspects of the other person’s voice while, at the same time, leaving room for them to fantasize about that person’s appearance.

Materiality and the ‘Internet of Things’

So far, in this chapter, we have been emphasizing the fact that digital literacies are not just about being able to use digital devices and the software that runs on them. They also involve understanding how to use our bodies and the physical spaces that we occupy and move through in creative and communicative ways. One reason this is so important is that most of the digital devices that we use nowadays are portable; we are able to carry them into different spaces and to record aspects of our bodies as they interact with those spaces. But using digital devices has always been an embodied experience. The invention of the computer mouse by engineer Douglas Engelbart in the early 1960s, for example, invited us to imagine the computer screen as a space to be navigated, and interacting with the touch screens of our mobile phones involves a range of physical motions such as tapping, pinching, and swiping. Much of the software we use on these devices is also designed to reinforce a feeling of physicality; when we turn pages in Apple’s iBooks app, we feel as if we are peeling back the page of a physical book, and when we swipe left on someone’s picture on Tinder, the app’s animation makes it seem as if we are actually lifting the image with our finger and tossing it off the side of the screen (see Chapter 3). Many digital devices make possible a constant flow of information between the body and the machine, so that bodily movements


become part of real-time computer simulations, as with Nintendo’s Wii console, which allows people to play tennis, bowl, dance, or engage in other physical interactions with the device (see Chapter 9). The materiality of digital technologies may seem obvious, but it is often ignored in discussions of digital literacies as people focus more on things like information and interactions. But as the critic Katherine Hayles (1999: 13) has pointed out, information and interactions are always dependent on matter; they must ‘always be instantiated in a medium.’ Recently, however, with the development of digital sensors and ‘smart objects’, the relationship between information and matter has become even more intimate. Digital technologies are now woven so tightly into the fabric of the material world that they have come to be described as ubiquitous or pervasive. They are embedded in children’s toys, toothbrushes, televisions, refrigerators, kettles, toasters, lighting and heating systems, cars, street signs, key fobs, water bottles, sex toys, and many other objects in our everyday surroundings. As Paul Dourish (2004: 1) puts it, the physical world has ‘become an interface to computation, and computation [has] become an adjunct to everyday interaction.’ Whereas in the past, digital networks consisted of connections between digital devices and between the people operating them, now they consist of (usually wireless) connections between devices, people, spaces, and physical objects, connections that we are sometimes not aware of, to the extent that many of the innocent, everyday activities that we engage in (such as brushing our teeth, taking a walk, and watching TV) produce information that circulates through these networks. This new kind of digital network has come to be referred to as the Internet of Things.
The Organisation for Economic Co-operation and Development (OECD, 2016: 4) defines the Internet of Things as ‘an ecosystem in which applications and services are driven by data collected from devices that sense and interface with the physical world.’ The term was coined by an engineer named Kevin Ashton in the late 1990s, rather prosaically, in a presentation to Procter & Gamble about how to solve the problem of tracking bottles of shampoo and packets of razor blades through the company’s supply chain. At that time, the connections between digital devices and the physical world consisted chiefly of tracking and tracing capabilities that were made possible by RFID (radio frequency identification) chips, tiny rice-sized transmitters that can be attached to products in warehouses or embedded under the skin of pets or in people’s identity documents. Nowadays, objects and networks of objects are able to communicate much more than their location; they are able to measure and record how they are used, and, as our environments become more and more permeated with sensors and smart devices collecting data about our behaviours, these ‘smart’ things are increasingly being used to make inferences about
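Ashton’s original tracking problem can be pictured with a minimal sketch (the tag IDs, location names, and event format below are invented for illustration, not taken from any real system): each time an RFID reader detects a tagged item, it logs a timestamped event, and the accumulated log reconstructs the item’s journey through the supply chain.

```python
from datetime import datetime, timezone

# Toy model of RFID supply-chain tracking: every reader that detects a
# tagged item appends a timestamped event, and the event log lets us
# reconstruct where the item has been. All names here are hypothetical.
events = []

def record_read(tag_id, reader_location):
    """Simulate an RFID reader detecting a tagged product."""
    events.append({
        "tag": tag_id,
        "location": reader_location,
        "time": datetime.now(timezone.utc).isoformat(),
    })

def journey(tag_id):
    """Return the ordered list of locations where this tag was seen."""
    return [e["location"] for e in events if e["tag"] == tag_id]

record_read("SHAMPOO-0001", "factory")
record_read("SHAMPOO-0001", "warehouse")
record_read("SHAMPOO-0001", "store-shelf")
print(journey("SHAMPOO-0001"))  # ['factory', 'warehouse', 'store-shelf']
```

The point of the sketch is how little the object itself has to ‘say’: a bare identifier, read at known locations, is enough to make a bottle of shampoo legible to the network.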


their users and even to predict what they will do next. ‘Smart’ objects are being combined to create ‘smart’ environments: ‘smart homes’ and ‘smart cities’, environments in which people are placed in ‘a state of continuous electronic engagement’ with their surroundings and with other people (Mitchell, 2003: 2). We talked above about how mobile digital devices have changed people’s relationship with physical space. With the advent of the Internet of Things, however, space itself has become digitized. In the words of geographers Rob Kitchin and Martin Dodge (2011), physical spaces are increasingly ‘coded spaces’, spaces that are inscribed with and produce digital information, or ‘code/space’—spaces that are essentially dependent on their ability to collect and communicate information. There are a lot of benefits that come from ‘smart’ devices and the Internet of Things. For business, being able to track objects as they move through supply chains and into the homes of consumers can improve efficiency and help create more satisfying user experiences. For governments, being able to monitor in real time how public facilities are used can lead to the better allocation of scarce resources and inform policy-making. For healthcare workers, smart medical devices can help save lives by allowing patients to be monitored from a distance. And for families, ‘smart’ homes with ‘smart’ appliances can help them save energy, engage more efficiently with family members, and protect themselves from threats such as home intruders. At the same time, the Internet of Things has important implications for digital literacies, potentially affecting in dramatic ways how we make meanings in the world, manage our social relationships, and exercise agency. First of all, the proliferation of objects that can ‘talk to’ us, and ‘talk about’ us to other people and to other objects unsettles our traditional assumptions about communication.
We are used to thinking of media as things that we communicate through, not as things we communicate to or as participants in our social interactions that can exercise as much agency as human interactants. Even before the advent of the Internet of Things, though, the French technology scholar Bruno Latour (2007) pointed out that human interactions have always taken place within networks of people, objects, institutions, ideas, and processes, and that all of these participants in the network can be considered social actors, or, as he called them, actants. Objects exert agency through their affordances and constraints, through the ‘stories’ they tell of the histories of their use, and through the way they combine with and complement other actors within the network. In a sense, then, the emergence of the Internet of Things has simply made more obvious the fact that objects use humans as much as humans use objects. But there are differences between the analogue actor networks that Latour was talking about and the digital Internet of Things, perhaps the


most important being the speed and efficiency with which information can travel through the network from person to person, person to thing, and thing to thing. What this does, say technologists Daniel Kellmereit and Daniel Obodovski (2013: 11), is ‘eliminate(s) a lot of the guesswork from everyday decisions.’ For example, rather than guessing whether or not a customer is a good driver, an insurance company can actually monitor their driving using the sensors installed in their car and even fine-tune their rates as the customer’s driving skill improves or deteriorates. This digitalization of everyday life can have consequences for the degree to which we are able to exercise agency—how much we are able to control what we do and what happens to us. The Internet of Things, says science fiction writer Bruce Sterling (2014: n.p.), often gives people the illusion that they have more control by, for example, allowing them to control their thermostat when they are not in their house. But, in reality, it is often users who are being remotely controlled by the companies that market these products—products that are ‘designed “at” the user rather than “for” the user.’ It is this enhanced flow of information, this increased interoperability among people and among things, that can so unsettle the conduct of our communication. It is important to remember that ‘smart’ things don’t just do things for us—they record what we do and communicate that to other people and other things who might use that information to enable and constrain our future actions. That is to say, our everyday mundane interactions with things become a form of communication, even when we are not aware of it. When we wear our Fitbit when we exercise, we are not just monitoring our exercise, we are communicating about it. When we watch TV, we are not just watching TV, we are communicating about what we are watching. And when we walk through a ‘smart city’, every step we take becomes an act of communication.
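The insurance example can be sketched in a few lines of code (the scoring rule, weights, and rates below are entirely invented for illustration; real usage-based insurance models are far more elaborate): telemetry from the car’s sensors is reduced to a driving score, which then adjusts the premium directly, with none of the ‘guesswork’ of traditional pricing.

```python
# Toy sketch of usage-based insurance: car sensor telemetry is reduced to
# a driving score, which adjusts the premium. The scoring rule, weights,
# and rates are hypothetical, chosen only to make the data flow visible.
def driving_score(trips):
    """Score 0-100 from counts of harsh-braking and speeding events per trip."""
    penalties = sum(t["harsh_brakes"] * 2 + t["speeding_events"] * 3 for t in trips)
    return max(0, 100 - penalties)

def monthly_premium(base_rate, trips):
    """Discount up to 30% for high scores; flat 20% surcharge for low ones."""
    score = driving_score(trips)
    adjustment = 1.0 - 0.3 * (score / 100) if score >= 50 else 1.2
    return round(base_rate * adjustment, 2)

trips = [{"harsh_brakes": 1, "speeding_events": 0},
         {"harsh_brakes": 0, "speeding_events": 1}]
print(driving_score(trips))           # 95
print(monthly_premium(100.0, trips))  # 71.5
```

Notice that the driver never reports anything: every brake and every burst of speed is already an act of communication with the insurer.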
Granted, it is not always clear to us whom this communication is directed at, which is part of the problem. Equally important, however, is the way connected things and ‘smart’ environments can alter patterns of communication with people in our everyday lives, such as friends, family members, employers, and teachers, by changing the degree of control we have over the data that is generated about us and how it is disseminated. In an experiment designed to examine the communication patterns of a ‘connected family’, for example, Michael Brown and his colleagues (2013) monitored a family equipped with Fitbits and other IoT technologies that shared data about their whereabouts and activities with other family members. They found that, on the one hand, the increased availability of data about one another helped to promote a sense of connectivity and engagement. On the other hand, they found that the increased transparency of personal information about family members removed the


ambiguity that often facilitates successful social relationships—the ability to keep some of our activities secret or to tell socially useful ‘white lies’.
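Part of what makes data from a device like a Fitbit so easy to share is that the device reduces a continuous stream of bodily movement to a single number. A minimal sketch of how a wearable might do this (the threshold and sample values are invented, and real devices use far more sophisticated filtering): a step is counted each time the magnitude of the accelerometer signal crosses a threshold on its way up.

```python
# Minimal sketch of how a fitness tracker might count steps: a step is
# registered on each rising edge where accelerometer magnitude exceeds a
# threshold. Threshold and sample data are invented for illustration.
THRESHOLD = 11.0  # m/s^2, a bit above resting gravity (~9.8)

def count_steps(magnitudes):
    steps = 0
    above = False
    for m in magnitudes:
        if m > THRESHOLD and not above:
            steps += 1      # rising edge: count one step
            above = True
        elif m <= THRESHOLD:
            above = False
    return steps

# Simulated accelerometer magnitudes for four strides:
signal = [9.8, 12.1, 9.5, 11.8, 9.7, 12.4, 9.6, 11.9, 9.8]
print(count_steps(signal))  # 4
```

Once movement has been compressed into a count like this, it can circulate through the network, and among family members, as effortlessly as any other message.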

ACTIVITY: YOUR INTERNET OF THINGS

This activity is designed to help you to think about what sorts of literacies we need to develop to ‘read’ and interact with ‘smart’ things. Spend a day paying attention to all of the things that you interact with that are (or might be) somehow connected to digital networks. Some of these things might belong to you, such as your wristwatch, your television set, or your car. Others you might find in your environment, such as traffic signals or digital surveillance cameras. Make a list of all of these things below and reflect on 1) what you think the advantages (or affordances) are that these objects gain from being connected to other things, 2) what you think the possible negative consequences of this interconnectivity and interoperability might be, and 3) what sorts of data about you these things might be gathering and what happens to that information.

Thing | Advantages of connectivity | Possible negative consequences | Data about me

Conclusion

Because of the increasing mobility and ubiquity of digital technology, it no longer makes sense to think of ‘the internet’ as a ‘place’ that one accesses through a computer and is separate from the physical world. Nowadays, as Sadowski and Pasquale (2015: n.p.) put it, ‘the boundaries between body–city–technology are blurred. There are not so much discrete entities—the person, the building, the device—as there are entangled assemblages of flesh, concrete, and information.’ When we talk about digital literacies, then, we must also include spatial and material literacies, literacies that will help us to understand the new ways digital technologies are enabling people to use their bodies and their physical environments as resources for communication, and enabling people to experience new configurations of space,


new ways of interacting with others, and new kinds of embodiment. At the same time, we have also pointed out some of the implications of these new technologies when it comes to human agency, autonomy, and privacy. In the next chapter we will turn our attention to ways that we can think (and act) more critically when it comes to some of the technologies that we discussed in this chapter and about digital media more generally. Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 7

Critical digital literacies

So far, in this book, we have focused on the affordances and constraints of digital media and how they affect the way people do things, make meanings, manage their relationships, construct their identities, and think. In this chapter we will take a more critical stance, attempting to discover if and how these affordances and constraints advance particular ideologies or the agendas of particular people. The word ‘critical’ as we are using it here doesn’t mean that you need to find fault with digital media or the practices people engage in with them. What we really mean by a critical stance is a conscious stance—a stance that puts you in the position to ‘interrogate’ digital media, the texts that you encounter through them, and the institutions that produce, promote, or depend on them.

Ideologies and imaginaries

Ideologies are systems of ideas, practices, and social relationships that govern what is considered right and wrong, good and bad, and normal and abnormal. These systems are extremely powerful. They can determine which people in a society are included and which are marginalized, who has power and who doesn’t, and how the society’s resources are distributed. The problem with ideologies is not that they are necessarily bad, but that most people are unconscious of them, taking the particular ideology they subscribe to as the ‘truth’ rather than simply as one of many possible ways of seeing reality. As a result, it is difficult for them to question their own ideology. One place where we can see ideology at work is in the way people talk about digital media. As we mentioned at the beginning of this book, many people are upset about digital technologies. They are afraid these technologies are negatively affecting our minds and our relationships, and point to things like cyber-bullying, mobile phone ‘addiction’, the erosion of personal privacy, the apparent shortening of our attention spans, and the dissemination of misinformation and disinformation over social networks as evidence of this. But one reason these negative accounts of digital media are so ‘newsworthy’ is that the underlying imaginaries about digital technologies in most


contemporary societies are overwhelmingly positive. The assumptions that advances in digital technologies represent ‘progress’ for our societies and ‘freedom’ and ‘fulfilment’ for individuals are woven into educational materials, journalism, advertising, corporate discourse, and even popular culture, so most people take these assumptions to be ‘common sense’. We might call this the techno-utopian imaginary. An imaginary is a set of beliefs or ‘stories’ about a particular phenomenon that people in a society adhere to and which function to support particular ideologies. Throughout history, various imaginaries have developed around new technologies—including stories about what will happen to us when we use them. Professor of science and technology studies Sheila Jasanoff (2015) calls these sociotechnical imaginaries, which she defines as collective visions of the future that will be made possible by science and technology. People construct and promote these imaginaries by the way they talk about technologies—the kinds of words and metaphors they use to describe them—and the presuppositions that underlie these ways of talking. One example is the tendency to talk about digital technologies, especially those associated with the Internet of Things (see Chapter 6), as ‘smart’. We talk about ‘smart phones’, ‘smart cities’, ‘smart schools’, and ‘smart homes’. The main effect of using such a word, of course, is to make technologies seem modern and desirable—glossing over the possible problems associated with them (for example, the concerns about personal privacy associated with the Internet of Things). It also has the effect of making things that are not digital or not connected to the IoT seem ‘dumb’. Finally, language like this can lead people to anthropomorphize digital devices (treat them like humans) and to uncritically trust the decisions or ‘recommendations’ they make, even when the processes used to make those decisions are invisible to us.
Another important way of speaking associated with the techno-utopian imaginary is the language of ‘solutions’. The Belarus-born media critic Evgeny Morozov (2011) calls the belief that all the world’s problems can be solved technologically solutionism. One problem with solutionism is that it often turns things that are not necessarily problems into problems in order to justify implementing whatever technological solution has been developed. Another problem is that the technological solution often distracts people from non-technological solutions that may require more collective negotiation and political engagement. One example of this can be seen in fitness trackers that calculate how many steps you take each day. It might be argued that the problem that such devices solve is sedentariness—the fact that people don’t exercise enough—but non-technological solutions such as encouraging people not to use their cars or providing more bicycle paths in a city might be more effective than getting people to count their steps (though not as profitable for the tech industry). Part of the reason the techno-utopian imaginary is so persistent is that it helps to promote a larger, extremely powerful ideology called neoliberalism,

Critical digital literacies

137

which posits that ‘progress’ (usually measured in terms of economic growth) is the most important goal of society, and that ‘individual freedom’ (rather than collective action) is the best way to reach that goal. It is an ideology that both encourages people to take individual action (like counting their steps) rather than collective action (lobbying the government to build more bike paths), and one that argues that government regulations that restrict the activities of large tech companies ‘stife innovation’. Of course, as we said above, there is also a techno-dystopian imaginary prevalent in most contemporary societies, and it appears to be growing as news of the malevolent effects of digital media and the abuses of tech companies come to light. The techno-dystopian imaginary is automatically sceptical of technology, often seeing it as a threat to humanistic values, freedom, and privacy. Of course, the techno-dystopian imaginary can be just as distorting as the techno-utopian imaginary, leading people, for example, to imagine futures where AI becomes the dominant form of intelligence on earth and computers take control, or to embrace conspiracy theories about, for instance, the supposed health threats posed by 5G mobile technologies.

Are technologies ideological?

While it is not surprising that the way people talk about technology is ideological, perhaps a more important question is whether or not technologies themselves are ideological, that is to say, whether or not the ways technologies are designed advance the political or economic agendas of particular groups. Many people insist that technologies are ideologically ‘neutral’, and that politics come into play not from technologies themselves but from what people do with them. At the same time, there are plenty of examples of how the design of technologies channels people into certain patterns of behaviour or thought. In a famous essay called ‘Do Artifacts Have Politics?’ (1980), the political theorist Langdon Winner describes the exceptionally low overpasses on the Long Island Parkway, designed by the renowned urban planner Robert Moses to intentionally make it impossible for public buses to pass under them. This made it more difficult for African Americans living in New York City, who were more likely to travel by bus, to get to the parks and beaches on Long Island, which apparently was Moses’s intention. In other words, because of the affordances and constraints that were intentionally designed into the motorway, certain kinds of people ended up being privileged, and others ended up being disadvantaged. We are not saying that designers of digital technologies are necessarily driven by political agendas, as Moses was, nor that they intentionally build their technologies to be racist or sexist or to marginalize particular people. What we are saying is that all technologies inevitably have biases, because they inevitably make some kinds of actions, relationships, identities, and ways of thinking easier than others, and this almost always ends up favouring some people over others. Moreover, artefacts are always designed by people, and so carry the presuppositions of these people about how the world works and what sorts of things should be made easier or harder for users to do.

One problem with biases in digital technologies is that they are usually not as obvious as the biases built into Robert Moses’s overpasses. This is because many of these biases are written in code and executed by algorithms that are not as visible as overpasses on a motorway. Sometimes the decisions these invisible algorithms make, however, can be extremely consequential, playing a gatekeeping role similar to that of the overpasses on the Long Island Parkway—helping to determine, for example, which people are considered for jobs, get bank loans, or get searched at airport security points.
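The point about biases being ‘written in code’ can be made concrete with a small sketch. The rule below is entirely invented (the postcodes, the income threshold, and the function name are ours, not taken from any real lending system): it never mentions a protected group, yet by using postcode as a proxy it quietly filters out everyone who lives in the ‘wrong’ area, however good their income is.

```python
# Invented illustration: a seemingly 'neutral' screening rule whose bias
# is written into code. Nothing here names a protected group, but the
# postcode test acts as a proxy that disadvantages whole neighbourhoods.

APPROVED_POSTCODES = {"10001", "10002"}   # hypothetical 'low-risk' areas

def screen_applicant(income, postcode):
    """Approve a loan only for sufficient income in a listed postcode."""
    return income >= 30000 and postcode in APPROVED_POSTCODES

print(screen_applicant(50000, "10001"))   # True
print(screen_applicant(50000, "11207"))   # False -- same income, wrong area
```

Like the overpasses, the bias is a design decision; unlike the overpasses, it is invisible to anyone who cannot read the code.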

Mediation redux

In the first chapter we introduced the concept of mediation as the foundation of our approach to digital literacies. All human actions, we said, are mediated through tools, either technological tools, like telephones and computers, or symbolic tools like languages. The crux of the concept of mediation is that we cannot interact with the world without doing it through some kind of medium, and the media that we use play an important role in determining how we perceive the world and the actions we can take. So mediation is always a matter of power—the tools we use can either empower or disempower us in different ways. There are at least three ways that media (and those who design them) can exert power over us. The first is through what we have been calling affordances and constraints. Different tools make some actions easier and others harder. They reveal certain aspects of reality and conceal others. They amplify participation in certain kinds of social practices and social groups, and constrain participation in others. Another way that digital technologies exert power over us, which is related to the first, has to do with the way technologies alter social practices—how they help to redefine what counts as ‘normal’ behaviour and condition us to develop certain habits. The third way technologies exert power over us is through access. This refers both to access to the technologies themselves and to access to knowledge about how technologies work.

Affordances and constraints

In his book The Language of New Media (2001), Lev Manovich argues that the logic of digital media is the logic of selection rather than creation, a logic that essentially reduces many of our actions to ‘menus’ of discrete ‘choices’. This framework for action, says Manovich, makes thinking and acting more ‘automated’. Often software designers refer to these choices as ‘preferences’, giving users the illusion that the media can be truly ‘personalized’, but this is rarely the case. The ‘choice architectures’ (Thaler & Sunstein, 2008) built into these media always limit the kinds of actions we can take and the kinds of meanings we can make. Perhaps the main difference between digital technologies and earlier analogue technologies lies in the fact that, while digital technologies seem to make more choices available, these choices are always expressed as discrete alternatives: we are not, as we are with many analogue technologies, able to invent our own choices by creating fine gradations of meaning which we can control ourselves. Even when it seems we can move along a continuum of, say, brightness or volume, the underlying mechanism is always governed by predefined sets of alternatives. The semiotician and jazz pianist Theo van Leeuwen (2012) illustrates this in a comparison of what it’s like to play a digital piano as opposed to an analogue one. He writes:

On a modern digital piano I cannot do what Billie Holiday did with her voice, or what I can even do, to some extent, on an acoustic piano. I cannot make the sound rougher or tenser as I play. Instead the designers of the instrument have provided me with a choice from a wide range of piano sounds, some tenser, some mellower, some higher, some lower, some smoother, some rougher (in a synthetic kind of way) and so on. All I can do myself is phrasing.

One problem with these fixed systems of choices is that they can become ‘locked in’, constraining the development of later technologies. Computer engineer Jaron Lanier (2010), for instance, describes how the MIDI format, which was developed in the 1980s to digitally represent the sounds of a keyboard, has become the foundational technology for all digital music, including the mp3 files most people listen to today.
The problem with this, he says, is that, while MIDI is good at representing the sounds of a keyboard, it is less accurate when it comes to representing the sounds of other instruments like the violin or the human voice. Unfortunately, this format has become such an integral part of digital music systems that it is too difficult to change it. ‘Software,’ writes Lanier (2010: 1),

expresses ideas about everything from the nature of a musical note to the nature of personhood… (it) is also subject to an exceptionally rigid process of “lock-in.” Therefore, ideas (in the present era, when human affairs are increasingly software driven) have become more subject to lock-in than in previous eras.
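A few lines of code can make the quantization Lanier describes concrete. The sketch below is ours, not Lanier’s (the function names are invented, and it sets aside refinements such as MIDI’s pitch-bend messages): it shows how a continuously variable pitch is snapped to one of MIDI’s 128 discrete note numbers, so that the fine gradations a singer or violinist produces simply cannot be named.

```python
# Illustrative sketch: how MIDI's 128 discrete note numbers quantize
# continuous pitch. Function names are invented for this example.
import math

A4_HZ = 440.0   # reference pitch: MIDI note number 69

def freq_to_midi_note(freq_hz):
    """Map a continuous frequency to the nearest MIDI note number (0-127)."""
    note = 69 + 12 * math.log2(freq_hz / A4_HZ)
    return max(0, min(127, round(note)))

def midi_note_to_freq(note):
    """The only pitches a basic MIDI note number can name exactly."""
    return A4_HZ * 2 ** ((note - 69) / 12)

# A violinist can play 445 Hz; the format snaps it to the nearest semitone.
played = 445.0
quantized = midi_note_to_freq(freq_to_midi_note(played))
print(round(quantized, 1))  # 440.0 -- the gradation between notes is lost
```

The keyboard-shaped grid of note numbers is exactly the kind of discrete ‘menu of choices’, and the kind of lock-in, that the passage above describes.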

Not only do technologies often give us a limited number of choices, but they also usually make some choices easier than others. Most software programs present users with a series of ‘default settings’, which inevitably influence their ideas about what is the ‘normal’ way of doing things. No matter how much the palette of choices is expanded, giving the impression of freedom, default settings always steer users towards a certain set of normative behaviours. The behavioural economists Richard Thaler and Cass Sunstein (2008) refer to the way designers use default settings to influence behaviour as ‘nudging’. People can be ‘nudged’ to engage in behaviour that is beneficial to them, but most internet companies nudge people to behave in ways that are more beneficial to the economic agendas of the companies.

Finally, digital technologies can exert power over us by making some information easier for us to see than other information. Google’s PageRank algorithm, for example, promotes the proposition that the value of information is best determined by how popular it is. Perhaps the most insidious filters are those that shape our view of reality based on the view of reality that we already have, such as Facebook’s EdgeRank algorithm, which pushes status updates and news from the friends we most frequently interact with to the top of our newsfeeds. As we said in Chapter 2, one danger of such filters is that they limit what we are exposed to based on our past choices (Pariser, 2011).

Technology and social practice

The second way media exert control over us is through the social conventions and personal habits that grow up around their use. The way particular tools get used is not just a matter of what we can do with them, but also of the ways people have used them in the past. All tools carry the histories of their past use.
After people have used a particular tool in a certain way to perform a particular practice for a period of time, the conventions or ‘social rules’ that have grown up around the tool and the practice start to ‘solidify’. At the same time, some technologies are intentionally designed to create certain kinds of habits in their users and to promote certain kinds of social practices. We call the process by which social practices and conventions come to ‘solidify’ around various technologies the technologization of practice (Jones, 2016). The technologization of practice often leads to situations in which the way certain things are done comes to be controlled by the dominant technologies that we use to do them. One example of this is the ‘like’ button, introduced by Facebook in 2009 and now a feature of most social media platforms (with a range of variations). The ‘like’ button has changed the way people interact online by turning ‘liking’ into a pervasive social practice, which has come to accumulate its own set of social meanings and rules of etiquette. ‘Liking’ is now something that people are expected to do when they use certain social media platforms. One reason that Facebook and other social media sites have been so successful in transforming ‘liking’ into such a pervasive social practice is that they have made it so easy to do. Easier, in fact, than actually saying to someone, ‘Hey, I really like that picture of your breakfast that you posted.’ Another reason that ‘liking’ has become a social practice is that ‘like’ buttons on social media sites are designed to be habit forming, exploiting certain psychological vulnerabilities of users such as the universal need for social approval and the intermittent variable rewards (see Chapter 2) which are delivered when you check to see how many people have liked your post. Eventually, these changes in social practices can also change the way people think about and manage their social relationships. ‘Like’ buttons, for example, can make reciprocal liking of social media content an important measure of intimacy, and the number of likes users are able to attract an important measure of popularity or social capital. In other words, friendship becomes something that, more and more, is measured quantitatively, and sometimes takes on a competitive or ‘game-like’ quality, as when users of Snapchat engage in streaks, attempting to send snaps back and forth to each other without interruption for several consecutive days and being rewarded with different emojis for unbroken chains of communication. As one user describes it, ‘On Snapchat, streaks develop a level of friendship between people. The longer your snap streak is, the better friends you are’ (Lorenz, 2017). Of course, the ultimate beneficiaries of these practices of ‘liking’ and ‘streaking’ are the social media companies, who use the data people generate from these practices to better target advertising to them.
And so, while many of the social practices that have grown up around social media seem to be ‘natural’ outcomes of human sociality, they are, to a large extent, designed based on the business models of social media sites. One example of how social media companies try to change the social practices around their tools is Twitter’s 2015 change of its ‘like’ icon from a ★ to a ❤. One of the reasons the company did this was to encourage people to see ‘liking’ on the platform as a more intimate practice, less like ‘evaluating’ (by giving something a ‘star’) and more like expressing an emotion. As Twitter’s product manager Akarshan Kumar put it, ‘The heart is more expressive, enabling you to convey a range of emotions and easily connect with people’ (Meyer, 2015). And, of course, the more users of the platform ‘connect’ with other people, the more valuable the platform becomes. When media come to be regarded as inseparable from our social practices, and when the social practices associated with particular media come to seem ‘normal’, we can say that the media have achieved a certain amount of transparency. Such media have a tendency to blend into the background of our daily lives, making us forget that the social practices we are engaging in are, in fact, mediated (Bolter & Grusin, 1999). In the early 1990s, Mark Weiser, chief technologist at the Xerox Palo Alto Research Center (PARC), wrote, ‘The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it’ (1991: 66). Transparent media encourage us to regard the kinds of actions that they make possible as ‘natural’ or desirable and the kinds of actions that they constrain as unnatural or undesirable. There are, of course, a lot of apparent advantages to media transparency. The more transparent a mediational means is, the more comfortable and convenient it is for us to use. But at the same time, the more invisible its built-in limitations are to us, the more difficult it becomes for us to question those limitations. Transparent tools are resistant to critique because they make us forget that they are even there.

Access

The third way media exert control over us is through who has access to them. The distribution of tools, both technological and symbolic, in any society is always unequal, which means that the particular kinds of social actions that media make possible are also only available to certain people. In other words, the use of mediational means is always tied up with economic and political systems that govern the way access to them is distributed. As a result, the ways media end up being used usually support or perpetuate these political and economic systems. Usually when people think about the ‘digital divide’ they think about access to hardware, the fact, for example, that some children have access to expensive laptops and fast internet connections to take advantage of online learning while others don’t. But access to hardware is only part of the story.
Unequal access also involves unequal access to the social practices and digital literacies that grow out of access to digital tools. It is also a matter of access to knowledge about how digital media work. Even the relatively privileged who have access to the latest digital devices are unlikely to know much about how the technologies they use actually work. In his book Program or Be Programmed (2010), the author Douglas Rushkoff points out the irony that fewer children were learning computer programming when he wrote his book than two decades before. But even those who do study programming are usually no match for companies that design hardware and software in ways that intentionally make them hard to figure out. When a technology is hard to figure out or to alter (reprogramme), we call it opaque. Technologies are opaque when we can’t open them up and see what’s inside and when the code for the software is kept secret from us. Engineers have long used the metaphor of the black box to describe opaque technologies. Black boxes are systems, devices, or other objects whose operations can only be discerned by observing the relationship between inputs and outputs because their actual internal functions are hidden from view. The media scholar Friedrich Kittler (1997: 151), in his famous essay ‘There Is No Software’, argues that digital technologies are, by their nature, opaque, built upon what he calls a ‘system of secrecy’ in which each layer of technology hides the one immediately below it—for example, the graphical user interface that users are exposed to hides the actual workings of the operating system, which hides the computer’s BIOS, which allows that system to run. ‘These systems,’ he writes,

are self-concealing, for they are pre-programmed and are quite often burnt into the kernel or silicon of the system itself. This means that they become largely immune to user intervention or “hacking”, for they restrict what the user may alter or even see. (1997: 158)

There are several ways that the opacity of digital technologies is maintained. One of the most common is through closed systems, in which the source code for software is kept secret and users must purchase particular devices in order to run it. Apple’s devices and software are examples of closed-system technologies. Users cannot run Apple’s software on any devices other than those manufactured by Apple, they cannot reprogram these devices, and even third-party software for Apple devices must be sold through the company’s App Store and approved by Apple. Microsoft’s Windows and Google’s Android are examples of more open systems which can be run on different devices, though these operating systems are still licensed by the manufacturers.
An example of a truly open-system technology is Linux, an open-source operating system whose code is publicly available to anyone who wants to find out how it works or to change it. The more closed a system is, the more opaque the technologies within it become. Of course, there are often good reasons for media opacity, having to do with intellectual property, the ability of technology companies to earn money, and the protection of the people who use the technology. Sometimes people who hack media systems have less than honourable intentions. Systems that are completely open can also make it difficult to maintain quality and are more vulnerable to abusive practices. Systems that are totally closed, on the other hand, lock people into certain ways of behaving and thinking and can discourage innovation. It is important to state here that opacity is not the opposite of transparency as we used this word above. What we mean by transparency is not that we can see ‘inside’ of something, but that we ‘see through’ it and forget that it’s there; what we mean by opacity is that a technology is resistant to adaptation and modification. The ironic thing about digital media is that transparency and opacity often go hand in hand. The more transparent a technology becomes, the less likely we are to understand how it works. Transparency makes the effects of media less apparent to us and discourages us from critically examining them, and opacity makes it more difficult for us to critically examine them, even if we want to. It is also important to remind ourselves that opacity is not limited to digital technologies—our daily lives are in fact filled with black boxes, from simple mechanical devices whose workings are hidden from us to the minds of other human beings, whose thoughts can often only be gleaned through processes of inference. In most cases, we learn how to ‘work’ black boxes, even when we don’t know how they work, through processes of trial and error, observing how certain kinds of inputs result in certain kinds of outputs. The famous cybernetician W. Ross Ashby (1956: 86) describes this process with the example of a child learning how to open a door:

The child who tries to open a door has to manipulate the handle (the input) so as to produce the desired movement at the latch (the output); and he has to learn how to control the one by the other without being able to see the internal mechanism that links them. In our daily lives we are confronted at every turn with systems whose internal mechanisms are not fully open to inspection, and which must be treated by the methods appropriate to the Black Box.

Nowadays, however, media opacity is not just a matter of not quite understanding the internal workings of a doorknob or of your mobile phone.
Because digital technology pervades so many aspects of our lives and has become entangled in so many processes of decision-making—from which university a child is accepted into to whether or not a prisoner is granted parole—not being able to query and, in some cases, challenge the workings of these technologies has become extremely problematic. The legal scholar Frank Pasquale (2015: 8) has argued that we are now living in a ‘black box society,’ where algorithms and other technologies determine the rules by which we are meant to live and ‘the values and prerogatives that the encoded rules enact are hidden within black boxes.’ Of course, opacity is not the only reason for our lack of access to what’s going on inside of technologies. In many cases, even if we were able to open up the black box, we wouldn’t be able to understand what is inside. In other words, there is much about the way digital systems operate that is simply unknowable through human cognitive processes, even if the humans in question are computer scientists. Humans, for example, are simply unable to process large masses of big data in the way computers can, and many AI systems that use deep learning technologies (see below) operate in ways that even their inventors don’t quite understand (Knight, 2017).

CASE STUDY: ALGORITHMS

In Chapter 2 we talked about algorithms in the context of search engines and other information filtering systems, but algorithms are not just used to help us manage and curate the massive amounts of information that we find on the web. They are increasingly used to make decisions about a range of consequential things, like when to buy and sell stock, who to hire, and whether or not we will be accused of plagiarism. Because of this, it is important to learn how algorithms operate in our daily lives and how to critically interrogate the ideological assumptions that are programmed into them. As we said in Chapter 2, an algorithm is basically a sequence of instructions that a computer executes to carry out a particular task. However, thinking about algorithms as merely recipes people write to tell computers what to do is an oversimplification. While algorithms are designed to solve particular problems, how they go about solving these problems often goes beyond the steps programmed into them by their human creators. Many algorithms nowadays have the ability to ‘learn’, or improve their performance over time, predicting outputs based on their accumulated exposure to large amounts of data. In order to reach these goals, such algorithms are ‘trained’ with pre-existing datasets, and the quality of these datasets ends up affecting the accuracy of the predictions the algorithms are able to make. More complex algorithms use techniques of ‘deep learning’ based on neural networks, which are designed to allow the simultaneous processing of many different layers of information. Despite their increasing capacity to learn on their own, it is important to remember that algorithms never operate independently. They are always somehow connected to humans: both the humans that designed them in the first place, and the humans whose behaviour creates the data they use to make predictions.
One of the most dramatic examples of this relationship is Microsoft’s chatbot Tay, which we discussed in Chapter 5 and which ‘learned’ from other Twitter users to be rude and racist. As the media critic Ian Bogost (2015) puts it, algorithms are not simple, singular systems, but rather operate as complex and varied arrays ‘of people, processes, materials, and machines.’
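The dependence of an algorithm’s ‘learning’ on its training data can be sketched in a few lines of code. The example below is entirely invented (a toy one-nearest-neighbour classifier with made-up numbers, not any real system): when one group’s examples are missing from the training set, the algorithm confidently misreads new examples from that group, not out of malice but because it was never shown them.

```python
# Invented example: a toy 1-nearest-neighbour 'recognizer'. It labels a
# new sample with the label of whichever training example is closest.

def nearest_label(sample, training):
    """Return the label of the training example nearest to the sample."""
    return min(training, key=lambda ex: abs(ex[0] - sample))[1]

# Features are toy numbers standing in for image data. Group-B cats
# (which cluster around 5.0) are absent from the skewed training set.
skewed = [(1.0, "cat"), (1.2, "cat"), (3.0, "dog"), (3.2, "dog")]
print(nearest_label(5.0, skewed))       # dog -- a group-B cat, misread

# Adding representative group-B examples fixes the prediction.
balanced = skewed + [(4.8, "cat"), (5.1, "cat")]
print(nearest_label(5.0, balanced))     # cat
```

The algorithm itself is unchanged in both runs; only the data it was ‘taught’ with differs.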

One reason it is so important to pay attention to algorithms is the role they play in ordering our experience of the world. They act both as gatekeepers, deciding what sorts of cultural artefacts get recommended or suppressed, and as enforcers of social norms, deciding what kinds of human behaviour get rewarded or punished. This fact raises important questions about who (or what) should have the power to decide the value of different kinds of knowledge, behaviours, and people. As media scholar Taina Bucher (2018: 3) puts it, ‘In ranking, classifying, sorting, predicting, and processing data, algorithms are political in the sense that they help to make the world appear in certain ways rather than others’ (emphasis ours). Like all technologies, algorithms are never ideologically neutral. They have designed into them certain assumptions about the world and about people and what they should be like. Much has been written about ‘algorithmic biases’. One famous example of a ‘biased’ algorithm was Google’s image recognition app, which, when it was first released, identified pictures of African Americans as pictures of gorillas. Although it is a stretch to conclude that the algorithm was ‘racist’, or even that those who created it were intentionally promoting racism, it is a symptom of systemic racism that the collections of images used by the company to train the algorithm overwhelmingly featured the faces of white people, giving the algorithm far fewer opportunities to learn to recognize the faces of people who are not white. In this way, the ‘biases’ of algorithms often function as windows onto the biases in our societies. Another example of algorithmic bias can be seen in algorithms for predictive policing. Many police forces use such algorithms to help them determine the neighbourhoods in a city where they should deploy more personnel based on data about where most arrests have occurred in the past.
But the assumption that places where more people have been arrested are likely to have more crime may itself be flawed, since arrest rates might have been affected by other factors such as systemic racism. The deployment of more officers to ‘high-risk’ neighbourhoods is likely to result in more arrests, thus causing the algorithm to determine that the neighbourhood is even more ‘high risk’ (Isaac & Dixon, 2017). An important point here is that algorithms can only ‘learn’ what we teach them, and what we teach them is based on the kinds of data we select to train them with, a selection process that is inevitably affected by our own pre-existing assumptions about the world and the outcomes that we are interested in.
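The feedback loop Isaac and Dixon describe is easy to simulate. The sketch below is ours, with invented numbers (not taken from any real predictive-policing system): two districts have identical underlying crime, but the one that starts with a slightly higher recorded arrest count attracts all the extra patrols, and every round of patrolling widens the gap the algorithm then ‘learns’ from.

```python
# Invented simulation of a predictive-policing feedback loop: officers
# are sent where recorded arrests are highest, more patrols produce
# more arrests, and the new arrests reinforce the same deployment.

def simulate(arrests, rounds=5, officers=10):
    """Return arrest counts per district after several patrol rounds."""
    arrests = list(arrests)
    for _ in range(rounds):
        hot = arrests.index(max(arrests))   # the 'high-risk' district
        arrests[hot] += officers            # more patrols, more arrests
    return arrests

# District 0 starts with one more recorded arrest (e.g. a legacy of
# historical over-policing), despite identical underlying crime.
print(simulate([21, 20]))   # [71, 20] -- the gap widens every round
```

The initial one-arrest difference, not any difference in crime, determines everything that follows.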

Many people believe that the problem with algorithms is essentially one of media opacity, and algorithms are indeed opaque. Not only are they hard to understand, but most are not even open to scrutiny because they constitute trade secrets of the companies that have created them. Opening up these black boxes, however, would probably not make much difference. As communication scholars Mike Ananny and Kate Crawford (2018: 10) note, ‘Even if an algorithm’s source code, its full training data set, and its testing data were made transparent, it would still only give a particular snapshot of its functionality.’ Opening up the ‘black box’ is not, in any case, the only recourse we have in interrogating algorithms. As we said above, just because we can’t see inside of something does not mean that we can’t figure out how it works and be critical about its workings. The digital media scholar Alexander Galloway (2006) argues that guessing the way algorithms work and learning ‘how to work’ them is something that we do all the time when we, for example, play computer games, use social media sites, and navigate our way through the world with location-based apps. People regularly change the way they use social media sites based on how they think the sites’ algorithms affect how their posts appear, and people often interpret the advertisements they see online by making inferences about how their previous browsing activity may have been algorithmically processed (Jones, 2021). From this perspective, a better way to develop a critical stance towards algorithms might be to worry less about ‘opening up the black boxes’ in which they are hidden and to focus more on getting better at analyzing their outputs and reverse engineering their operations (Gehl, 2014). Jones (2021) draws on the field of pragmatics, a branch of linguistics, to suggest ways that people might develop a more critical attitude in their interactions with algorithms.
When you really think about it, other humans are as much ‘black boxes’ as are the digital technologies that we use: we can never be absolutely sure what other people are thinking. We can only make inferences based on the outputs they produce (their speech and behaviour). Pragmatics is the study of how people make inferences about the intentions of other people based on what they say and how they act. The hints that computers give about what’s going on inside of them, however, are different from those that humans give. And so, as we interact more and more with technologies, we need to develop a new set of inferencing skills, a new kind of pragmatics which Jones (2020a) calls algorithmic pragmatics.
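This kind of inference from outputs can be sketched in code. The example below is invented (the ranking function, its weights, and the probing routine are all ours): we treat a scoring function as a black box whose insides we pretend not to see, and vary one input feature at a time to infer which features the hidden ranking actually responds to, exactly the probing that users of feeds and search engines do informally.

```python
# Invented example of 'reverse engineering' a black box by probing it:
# we only observe inputs and outputs, changing one feature at a time.

def black_box(post):
    # Pretend we cannot read this code -- we only see the scores it emits.
    return 3 * post["likes"] + 0 * post["length"]

baseline = {"likes": 10, "length": 100}

def probe(feature, delta=1):
    """How much does the score change when one feature changes by delta?"""
    changed = dict(baseline, **{feature: baseline[feature] + delta})
    return black_box(changed) - black_box(baseline)

for feature in ("likes", "length"):
    print(feature, probe(feature))
# likes 3   -> the hidden ranking responds to likes
# length 0  -> post length appears to be ignored
```

Without ever opening the box, the probe supports a working hypothesis about how it ranks, which is the kind of ‘folk theory’ discussed in the activity below.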

ACTIVITY: FOLK ALGORITHMICS

As we said in the case study above, although most people don’t know how the algorithms that are built into the software they use actually work, they come to develop hypotheses or ‘folk theories’ about algorithms that they apply to the way they use software. Users of social media platforms, for example, develop ideas about how they can make their posts more prominent on other people’s feeds, and users of dating apps develop strategies that they think will get the algorithm to promote their profile to particular kinds of people. For this activity you should interview your friends and family members about the different apps they use and how their ideas about how they work affect how they use them. Ask them:

1) How do you think the algorithm that this app uses works to make predictions or decisions or execute actions?
2) How did you figure out how the algorithm works?
3) Do you think the algorithm is ‘accurate’ or ‘biased’ in some way?
4) Are there things that you do to try to get the algorithm to behave differently?

After you have talked to a few people, compare your notes with those of your classmates. Can you find any similarities regarding how people make inferences about how algorithms work and how these inferences change the way they use different technologies?

Interfaces and dark patterns

Sometimes digital technologies are used to manipulate us not through the secret workings of algorithms but through the design of interfaces (see Chapter 4) and the way they present information to us. As we said in Chapter 3, as reading and writing have become more interactive, producers of texts have developed strategies for getting people to click on things they want them to click on. The examples we gave were 'clickbait' headlines. These are just examples of a larger set of manipulative strategies that UX designers (designers who focus on 'user experiences') refer to as dark patterns. Dark patterns are instances where designers use their knowledge of human psychology and communication to trick users into taking actions that might not be in their best interests, such as buying things they don't


need, agreeing to things they don't intend to agree to, or revealing information about themselves they don't wish to reveal. Underlying all dark patterns is the fact that, as Kittler (see above) observed, interfaces always obscure what is really going on beneath them; interfaces, by their nature, are designed to 'trick' us.

Often these 'tricks' are rather benign, and even 'enhance' our experience of using a technology. When the camera on your mobile phone makes a 'clicking' sound, it gives you the feeling that a shutter has opened and closed, just like on an analogue camera, but this, of course, is not the case. Most of the progress bars that we see when some kind of online process is taking place don't really represent the progress of the process, but are designed to reassure us that something is happening, and sometimes to hold our attention on the advertisements that appear next to them (Cheney-Lippold, 2017). But sometimes the workings of interfaces are not so benign. They might, for example, be designed to distract users so that they don't notice that they have done something they hadn't intended to do, to make it difficult for them to undo actions or change default settings, or to create a false sense of urgency, as is the case with countdown timers which pressure users into making decisions in a short amount of time.

While dark patterns usually exploit some aspects of human psychology (such as attention or cognitive biases), most also have a linguistic or discursive basis, exploiting common patterns of language use and social interaction. For example, designers of online interfaces often exploit fundamental patterns of reading whereby readers tend to pay more attention to the beginning of texts, expecting them to contain the most important information.
In the passage below from an online shopping site, for example, the designer highlights the fact that customers' credit card numbers will be used to verify their identity and that their cards will not be charged during the free trial period by putting this information at the beginning of the text, leaving the more important information, that their cards will be charged £14.99 a month after the trial unless they cancel, until the end.

    Why do we need your card details? Your card is used to verify your identity. Don't worry, your card will NOT be charged for membership during your free 30-day trial and there is no obligation to continue the service after the trial period. If you choose to stay as a member after your free 30-day trial your card will be charged a monthly membership fee of £14.99.
    (Brignull, 2011)

Colin Gray, a scholar of Human-Computer Interaction (HCI), and his colleagues have compiled a list of the five most common types of dark patterns


Table 7.1 Taxonomy of dark patterns (adapted from Gray et al. 2018: 5).

Nagging: The system continues to 'nag' you to take an action even after you have opted out.
Obstruction: The system makes it difficult to do certain things in order to dissuade users from doing them.
Sneaking: The system attempts to hide information that is relevant to the users (e.g. hidden costs on a shopping site).
Interface interference: The interface is designed to privilege some actions over others.
Forced action: The interface requires users to take an action in order to continue using it.

you might encounter online, which is summarized in Table 7.1. Most of these tricks are designed to alter the behaviour of users or to distort their perceptions about what is actually going on in their interaction with the interface.

An example of nagging is the notice that Instagram users receive on their iPhones until they turn on notifications (which are designed to increase their engagement with the app). The notice says: 'Please Turn on Notifications. Know right away when people follow you or like and comment on your photos.' Notably, the permission dialogue does not allow users to just say 'No'; the choices are 'OK' and 'Not Now', which ensures that if the user does not turn on notifications the same request will appear at a later date.

Obstruction occurs when an interface makes it difficult for users to take actions designers don't want them to take. In 2018, for example, Consumer Union, a consumer advocacy group, complained to the US Federal Trade Commission about how difficult Facebook makes it for users to change their advertising settings or strengthen their privacy settings by, for example,

    directing the user through a confusing dashboard of policies to learn how to change their settings, … requiring the user clicks and/or swipes multiple times to alter their advertising preferences, … [and] requiring many more clicks and/or swipes for a user to limit the collection of their personal information.
    (Consumer Union, 2018: 2)

Interface interference and forced action, however, are the most common dark patterns, especially in cases where designers wish to make the choice of one option easier than another, or when they wish to make it seem like users are given a choice when they are not. Examples of forced action can be seen in permission dialogues which don't allow users to continue to access a website or use an app unless they agree to certain Terms and Conditions.
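The logic of the 'nagging' pattern can be sketched in a few lines. The function, options, and queueing mechanism below are hypothetical, for illustration only; the point is that the dialogue offers no plain 'No', so declining only defers the request.

```python
# A sketch of a 'nagging' permission dialogue: there is no plain 'No',
# so declining only schedules the same request to appear again. All
# names and the queueing mechanism are hypothetical.

def notification_dialog(choice: str, pending: list) -> bool:
    """Return True if notifications are enabled; otherwise re-queue the prompt."""
    options = ["OK", "Not Now"]        # note that 'No' is not offered
    if choice not in options:
        raise ValueError(f"choice must be one of {options}")
    if choice == "OK":
        return True                    # the user gives in; the nagging stops
    pending.append("ask_again_later")  # 'Not Now' merely defers the prompt
    return False

pending_prompts = []
enabled = notification_dialog("Not Now", pending_prompts)
# The user declined, yet the prompt is queued to reappear later.
```

However often the user chooses 'Not Now', the design guarantees another chance to extract an 'OK'.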
Figure 7.1 shows an example of interface interference, where a website that has offered users free access then asks them to select the plan they would


Figure 7.1 Interface interference (from Segura, 2019).

like to try, all of which involve payment after the trial period. Choosing the totally free option requires users to notice the tiny hyperlink located on the upper right of the screen, which we have marked with an arrow.

ACTIVITY: DETECTING DARK PATTERNS

a. Collect as many examples of dark patterns as you can from the websites you visit and the apps that you use.

b. Interface designers often nudge us to make choices we don't want to make by the way they word questions or phrase the choices that we are presented with. Look at the three examples below of alternate wordings for a website asking users to choose whether or not they would like to receive advertising emails in the future. Which of these examples do you think is the clearest, and which do you think are designed to trick people into agreeing to receive promotional emails? What are the features in these texts that you consider 'tricky'?

Example A
□ I would like to receive the latest special offers, promotions and product information from AcmeCorp.
□ I would like to receive carefully selected partner offers, competitions and messages.


Example B
□ Please tick here if you would prefer not to receive the latest special offers, promotions and product information from AcmeCorp.
□ I would like to receive carefully selected partner offers, competitions and messages.

Example C
Please send me occasional email updates about new features and special offers from example.com □ Yes □ No
Please contact me from time to time with special offers from example.com affiliate Web sites, publications and carefully selected companies. □ Yes □ No

(from Brignull, 2011)

Fake news

No treatment of critical digital literacies would be complete without a discussion of the online spread of 'fake news'. Some people think the main focus of critical digital literacies should be helping people to figure out which information they encounter online is 'reliable' and which is not. This is not our position. As you can see from our discussion above, a focus on mediation helps to reveal that the ways people can be manipulated or deceived in their use of digital media have as much to do with the affordances and constraints of the media themselves, the architectures they make available for social interaction, the (often invisible) ways they process information, and the ways interfaces are designed to trick us as with the content of the texts that we read online. This is also the case when it comes to fake news.

There are a number of ways readers of online texts can attempt to verify the information in those texts. They can, for example, attempt to determine the sources of the texts and decide whether these sources are 'trustworthy' (for example, whether they are established mass media outlets or academic publishers with certain standards for deciding whether or not something is worthy of publication). One problem with this, however, is that the purveyors of fake news are getting better and better at impersonating trusted sources. Another problem is


that the interfaces of social media feeds such as Twitter and Facebook tend to 'flatten' content, presenting everything in a similar typeface and layout, removing many of the visual cues that might alert people to less reliable or less professionally produced content.

Another method readers can use to understand whether or not something they read is true is to engage in the practice of lateral reading, that is, visiting other websites and checking out other sources to see if they can verify information about which they have doubts. The 'endless scroll' of social media feeds, however, often makes it difficult for readers to leave the site in order to conduct fact checks of what they have read.

Ultimately, learning how to figure out if something you read on your social media feed is true or not is not enough. It is also necessary to understand the underlying technological and economic conditions that made it appear on your feed in the first place, and how those same conditions conspire to make it more likely that you will share it with other people. As we mentioned in Chapter 2, one of the consequences of the dramatic changes to reading and writing brought about by the development of Web 2.0 and the subsequent rise of social networking has been a breakdown of traditional forms of information gatekeeping. Gatekeeping functions that were previously performed by academics and editors have been replaced by a range of different kinds of filters which are more likely to select information based on whether or not it is popular, or whether or not it conforms to beliefs or preferences users already have, rather than on whether or not it is true. In other words, not only do digital media allow anybody to publish content, regardless of their knowledge, expertise, or intentions, but they are also often designed to efficiently distribute that content to people who are most likely to believe it. But even 'belief' is not always the most important thing when it comes to fake news.
While there are plenty of examples of deliberate and coordinated online campaigns of disinformation designed to manipulate people's opinions in the service of some political agenda, the main incentive for much fake news is economic. It is not designed to change your mind so much as to get you to click on it and share it in ways that profit both those who created it and the social media platform on which it appears. Along with the technological infrastructure of filters and algorithms that distributes fake news to people who are most likely to share it, the economic infrastructure of social media sites actually incentivizes the promulgation of fake news.

Sometimes the most insidious examples of fake news that circulate online are not the most dramatic (like the stories about the Pope endorsing Donald Trump or Hillary Clinton masterminding an international paedophilia ring), but rather small, more personal messages that use humour and anecdotes. Figure 7.2 is an example of such a message.


Figure 7.2 Tweet from @PoliteMelanie (from Linvill and Warren, 2020).

Because of its content, this message was algorithmically targeted to educated, left-leaning Americans who may already have had a negative opinion of conservative Christians. The fact that they were educated, however, did not stop them from sharing it widely with their educated, left-leaning friends, despite the fact that the source of the information (a poll supposedly taken by somebody's cousin and her friend) is questionable at best. The reason they shared it was that it conformed to their preconceived ideas about conservative Christians being uneducated and intolerant, and because it is funny.

As it turns out, the source of the tweet is even more questionable than it appears on the surface. Communication scholars Darren Linvill and Patrick Warren (2020) were able to trace the account @PoliteMelanie to Russia's Internet Research Agency (IRA), which intelligence services credit with waging massive disinformation campaigns to disrupt elections in a number of countries. This example demonstrates how such fake news campaigns can sometimes take very subtle forms, designed less to deceive or misinform people than to confirm their existing prejudices and exacerbate feelings of polarization in a society (see Chapter 8).
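The incentive structure described in this section, in which content is distributed according to predicted engagement rather than accuracy, can be sketched as a toy feed-ranking function. The field names and numbers below are invented for illustration and do not describe any real platform's algorithm.

```python
# A toy feed ranker illustrating the incentive structure discussed in
# this section: items are ordered by predicted engagement, and whether
# an item is true plays no part in the score. Field names and numbers
# are invented for illustration.

def engagement_score(item: dict) -> float:
    """Popularity drives the score; the 'is_true' field is simply ignored."""
    return item["clicks"] * item["share_rate"]

def rank_feed(items: list) -> list:
    """Most 'engaging' items first, regardless of accuracy."""
    return sorted(items, key=engagement_score, reverse=True)

feed = rank_feed([
    {"title": "careful correction", "clicks": 10, "share_rate": 0.1, "is_true": True},
    {"title": "funny rumour", "clicks": 500, "share_rate": 0.6, "is_true": False},
])
```

Here the false but highly shareable item tops the feed, which is exactly the outcome the economic infrastructure rewards.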

Who owns the internet?

The problems of fake news, deceptive interfaces, and even algorithmic bias are not just technological and discursive problems. They are systemic problems. In other words, the purveyors of fake news, the designers of dark patterns, and the creators of algorithms that privilege some people and disadvantage others do not act alone; they are part of a larger system of power and the distribution of capital. The real goal of developing critical literacies is developing the ability to understand how this system works and how to formulate creative (and often collective) strategies to change it.


In the early days of the internet, people thought that digital networks would result in a decentralization of power and a democratization of information. This is not what has happened. Today, digital networks and the platforms that operate on them are controlled by a few large corporations (and in some countries, by authoritarian governments), and the so-called democratization of information has created the conditions for the spread of fake news and propaganda.

Of course, not everyone in those early days was so optimistic. In her famous essay 'A Cyborg Manifesto' (1991: 149–81), written three decades ago, feminist media critic Donna Haraway predicted that the shift in power that digital media would bring about would not be a matter of more people gaining power, but of fewer people gaining the ability to exercise their power in more subtle and invasive ways. She wrote about a new 'informatics of domination' and 'scary new networks' that extend themselves into every corner of our lives, from work to play. She also predicted that central to this new form of power would be the ability to collect, control, and analyze data. The British geographer Nigel Thrift (2005) has called this new system for the exercise of power 'knowing capitalism', because it is based not on the accumulation of capital in the traditional sense, but on the accumulation and control of knowledge (data).

The new business model that eventually developed, and which now controls how most information flows through digital networks and who benefits from it, is the platform. Examples of platforms are Facebook, YouTube, and Uber. On the surface, the idea of platforms seems rather benign, and even somehow 'democratic'; they are simply digital infrastructures that allow people to publish content and that act as 'intermediaries' connecting up 'customers, advertisers, service providers, and even [in the case of the Internet of Things] physical objects' (Srnicek, 2016: loc. 610).
While presenting themselves as neutral spaces for other people to interact in, platforms actually set the terms for that interaction, and so exert an enormous amount of power over how people are able to conduct their lives. At the same time, platforms are designed as extremely efficient tools for the extraction and control of data. As a result, big internet companies like Facebook, Google, and Amazon are positioned as gatekeepers for the dissemination of knowledge, the conduct of economic transactions, and the form that political debates end up taking in our societies. They are able to monitor our thoughts as we reveal them through our internet searches and social media posts and use that information to create messages that can influence our subsequent thoughts and behaviours (see Chapter 12).

Many people seem comfortable with trusting these platforms and the companies that control them with this enormous power, and happily relinquish their personal data in exchange for convenience. It is important to consider, however, the ways in which this power could be abused, as well as how the economic agendas of these companies have come to influence


the ways we make everyday decisions, conduct our social relationships, and engage in political debates. It is also important to remember that the activities we engage in on platforms are ultimately a kind of labour; that is, there are people profiting from our 'work' of sharing, liking, and commenting.

Treating data as the labour of users instead of capital owned by internet companies is one of the main ideas promoted by Jaron Lanier. In his book Who Owns the Future? (2013), Lanier argues that the primary business model of the internet is surveillance, or what Shoshana Zuboff (2019) calls surveillance capitalism. Surveillance, by its nature, creates an unequal relationship between the surveiller and the surveilled. In order to give users more agency in the relationship, Lanier suggests that platform users should be able to more actively choose what data they wish to give to internet companies and be compensated for it in some way (through, for example, micropayments or credits). Such a model, however, would also require users to make adjustments to the way they use the internet. Under the current system, people are used to accessing internet services 'for free' (though they pay for this service with their data). Under Lanier's proposal, these transactions would be made much more explicit.

Lanier and others believe that one of the main problems with the way the internet is run nowadays is that it is almost entirely funded by advertising. Most internet companies are, in reality, advertising companies. This ends up distorting the kinds of information we have access to online and encourages platforms to adopt designs that benefit advertisers rather than users. Changing this would require users to be willing to pay for some services, and to lobby their governments to provide funding for services (like search) that are essential for their citizens.

Conclusion: 'Hacking' digital media

If there is one lesson you are meant to take from this chapter, it is that critical literacy means trying as much as possible to learn how things work, whether we are talking about the way language works to influence our opinions or the way software works to influence our behaviour and our relationships with other people. It is difficult to operate in most contemporary societies without using digital media, often through the kinds of commercial platforms that we have described above. However, the more we know about how these media and the platforms they support work, not just technologically but also politically and economically, the better we can become at 'working' them in ways that benefit us.

One way to think about 'working' digital media is with the metaphor of 'hacking'. Often when people think of 'hackers' they think of cybercriminals who break into other people's computers to alter or steal data, but our use of the term harkens back to the 'hacker culture' of the early days of the internet (The Mentor, 1986). In this sense, the word 'hacker' refers to expert


computer programmers who get satisfaction from exploring the limits of programmable systems. The way we use the term hacking is similar to the way it is used by learning scientist Rafi Santo (2011) in his concept of 'hacker literacies', which he defines as:

    empowered participatory practices, grounded in critical mindsets, that aim to resist, reconfigure, and/or reformulate the sociotechnical digital spaces and tools that mediate social, cultural, and political participation. (2)

Some hacker literacies might involve developing technical skills, like learning a programming language or learning how to change the privacy settings on social media sites. Other hacker literacies involve more discursive or linguistic skills, such as being able to spot trick questions in permission dialogues. The most important skills for hackers are inferencing skills: skills that enable us to guess what's going on inside of a black box when all we have access to are inputs and outputs, and that enable us to reverse engineer the technologies that we use.

But there is also a danger in using the metaphor of the 'hacker' to talk about critical literacies. We usually think of hacking as an individual activity and of hackers as solitary creatures slouched for hours alone over their keyboards. But the kind of hacking that will be needed to confront the issues that we have talked about in this chapter will require people working together to create alternative imaginaries about technologies that serve their interests. Digital ethicist Luciano Floridi and the other contributors to the Onlife Manifesto (2014: 42) refer to this process as concept reengineering. 'The dominance of negative projections about the future is often the signature of the inadequacy of our current conceptual toolbox,' they write.
'[T]he overall purpose of… concept reengineering… is to acknowledge such inadequacy and explore alternative conceptualisations that may enable us to re-envisage the future with greater confidence.'

Activists might, for example, create spaces where people can challenge (sometimes in fun ways) dominant imaginaries, as with the @internetofshit Twitter account, where people post examples of Internet of Things devices that appear to be designed mainly to extract data from users. Or citizens of a city might work together to imagine a different kind of Internet of Things based on ideas of urban social justice, as occurred in 2015 in Barcelona when the mayor and citizens rejected Cisco's plan for a 'smart city' and instead developed their own plan based on a 'city data commons' and participatory civic platforms (Morozov & Bria, 2018).

In other words, being able to hack technologies is not enough. We also need to understand how to use technologies to hack political and economic systems. And this kind of hacking necessarily involves collective action. Critical digital literacies are, by their nature, distributed: some people will


be good at some things while others will be good at other things, and so the first step to developing them is to engage in conversations with the people around you about the kinds of things you are concerned about regarding digital technologies and the kind of internet (and society) that you would like to create.

Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Part II

Digital practices

Chapter 8

Online cultures and intercultural communication

In the 1960s, the scholar Marshall McLuhan prophesied that the world would soon become a 'global village' in which nearly everyone would be linked together by communication technologies. He predicted that this would lead to greater political and social consensus and a heightened awareness of our common responsibility for one another. In the first edition of this book, published in 2012, we pointed out that digital technologies had essentially failed to deliver on this promise. Instead, there were already clear signs that the internet was 'becoming increasingly ghettoized, with people rarely challenging themselves by venturing away from their habitual hangouts' (Jones & Hafner, 2012: 126). What we see now is that, with the increasing use of technologies that sort, filter, and microtarget information, the internet has continued to be fragmented, with lots of different groups online all pursuing their own interests and agendas. It has also become increasingly polarized, with different groups increasingly hostile towards one another and engaged in demonizing those who disagree with them.

This doesn't mean that McLuhan was wrong about the affordances of networked digital communication technologies. He correctly identified their globalizing effect and the potential of digital media to encourage cooperation and community. Digital media have supported the trend towards globalization by breaking down barriers to communication over great distances and facilitating global flows of information, money, goods, services, and people. They have also helped immigrants to maintain social ties in their countries of origin while at the same time establishing new connections in their adopted homes. Digital media have allowed people to form relationships across national and cultural boundaries around social practices such as online gaming, social networking, and collaborative writing.
In this section of the book, we will consider some of these digital practices and the kinds of social relationships they help people to form and social groups that they give rise to. In this chapter we will begin to explore what we mean by terms such as ‘community’ and ‘social group’ in relation to digital media, using theories developed in the study of intercultural communication. Participating


in different groups online, we will argue, usually involves learning certain 'cultural' conventions and norms of communication.

There are two main kinds of online social groups that we will focus on. Some are more stable and durable: for example, the group of people who edit Wikipedia, or the groups of people who frequent different kinds of online forums. People who regularly participate in such practices often get to know one another and may start to think of themselves as part of a 'community'. On the other hand, other online groupings are much more temporary and ephemeral: for example, the people who participate momentarily in some viral event such as the ice-bucket challenge. These people establish a temporary connection with one another while spreading content, but the connection is usually fleeting. The sociolinguist Jan Blommaert refers to these temporary groupings as light communities. He argues that the sharing of user-generated content through online social networks creates particularly favourable conditions for the formation of these kinds of communities.

We will also consider how different digital communication tools and platforms themselves come to foster particular norms of communication and develop their own cultures-of-use. We will end the chapter by considering the issues of polarization and online tribalism that we brought up above, and how the model of online intercultural communication laid out in this chapter can help us to understand these issues.

Online cultures as discourse systems

To understand the idea of online cultures we need first to clarify what we mean by 'culture' and how this notion might be applied to the kinds of social groupings that we encounter online. The word 'culture', in its everyday use, typically refers to the traditional norms, conventions, and practices often associated with particular ethnic or regional groups. For example, we might talk of 'Chinese culture', 'American culture', or 'French culture' to refer to the traditional practices of people who live in these places. However, one problem with using the word 'culture' in this way is that it suggests that the practices described are universally shared by all members of a particular ethnic group or people who live within particular geographical boundaries. In addition, by focusing our attention on ethnic and regional groupings, this use of the word 'culture' distracts us from other possible sources of norms and practices not associated with region or ethnicity.

In this chapter, we will use the word 'culture' in a rather different way. We follow Ron Scollon and his colleagues (Scollon, Scollon, & Jones, 2012) in defining cultures as discourse systems made up of 1) ideologies, 2) norms of communication, 3) ways of conducting social relationships, and 4) practices of socialization, which people participate in rather than 'belong to' or 'live inside of'. In this sense, culture can refer not only to the practices of a


regional group, but to other kinds of groups as well: gender groups, generational groups, professional groups, and many other groups based on shared affinities, interests, and discourse practices. Online, such groups can include online social networks, gaming communities, fan communities, the followers of a particular YouTube channel, or the looser kinds of groups that spring up around viral challenges, memes, fashion trends, and conspiracy theories.

This view of culture highlights the fact that individuals participate in many different discourse systems at the same time. For example, Christoph is affiliated with a number of different regional discourse systems: he was born in Switzerland, grew up in New Zealand, is married to a Canadian woman, and (at the time of writing) lives in Hong Kong. At the same time he also participates in a variety of professional discourse systems: he trained as a lawyer, then as an English language teacher, and now works as an academic in the Department of English, City University of Hong Kong. He is also a participant in discourse systems based on gender, generation, and interests: for much of his life, for example, he has participated in various musical groups like orchestras and choirs. Online, he participates in various groups on social media: groups for family members, friends, and groups that bring together the parents of kids in his children's sports teams. If he takes a picture of an insect he doesn't know the name of, he shares it with a Facebook group he belongs to for bug lovers, and if the pot plants on his balcony start to succumb to a mysterious disease he consults a group of gardening aficionados on Reddit. Professionally, he participates in social networking sites like Academia.edu and ResearchGate. If you think about your own background, you are bound to notice a similar diversity in terms of the discourse systems that you participate in.
Taking this more flexible view of culture also allows us to describe the way people participate in a much greater variety of social practices. For example, we can describe the 'culture' of a particular professional group in a particular workplace by describing the norms, conventions, and practices that people who participate in that workplace share. We can also extend the notion of discourse systems to online social groups and describe the particular 'discourse systems' of those groups, i.e. the shared norms, conventions, and practices of the people who participate in them.

Consider, for example, the social groups that come together in what the discourse analyst James Paul Gee (2004) calls online affinity spaces. Online affinity spaces are the virtual places where people interact to promote a particular shared interest or common goal (i.e. a shared 'affinity'). They include a range of different online spaces, such as websites that promote particular kinds of fan fiction writing, online games, social networking sites, knowledge-building sites, or sites catering to the interests of particular professions or workplaces. The participants in such online affinity spaces often come from very diverse backgrounds, including people of different ages and
genders and with different regional, linguistic, and professional affiliations. In addition, people may participate to varying degrees, with some participants being regular and active contributors, while others just 'lurk' or 'pass through'. In spite of this diversity, just like physical places (such as neighbourhoods in a city), these online spaces nevertheless develop their own 'character' based on the unique norms of interaction that grow up among the people who frequent them. That is, online affinity spaces often develop their own 'cultures' or 'discourse systems' which include shared ways of thinking, interacting, and getting things done. As we increasingly live our lives moving in and out of such spaces, it is important for us to develop some understanding of how the norms, conventions, and practices of all of the different 'cultures' of the spaces we participate in interact. 'Online cultures' are in many respects similar to other discourse systems. According to Scollon, Scollon, and Jones (2012), discourse systems can be broken down into four interrelated and interdependent components, as follows:

1. Ideology: what people think;
2. Face systems: how people get along with one another;
3. Forms of discourse: how people communicate;
4. Socialization: how people learn to participate.

As we said in the last chapter, ideology refers to people's deeply held beliefs about what is true or false, good or bad, right or wrong, and natural or unnatural. The 'ideology' of a discourse system refers to the commonly held beliefs, values, and worldviews of participants in the system. These beliefs are rarely made explicit but are assumed and operate in the background of communicative interactions and surface in the form of 'values'—whether or not, for example, things like 'education', 'hard work', 'material wealth', or 'spiritual attainment' are valorized. Among the most important of these values are beliefs about the nature of the self and its relationship to the group. Some discourse systems emphasize the individual, seeing the purpose of the group as promoting and supporting individual achievement. Others emphasize the group more, with individuals being judged on the basis of their contribution to the community. Most discourse systems, however, have rather complicated ideas about the nature of the self and the nature of the group, and it is very difficult to label any particular 'culture' as either strictly 'individualistic' or strictly 'collectivistic'. It is usually the case in most discourse systems that the individual will be emphasized in some circumstances and the group will be emphasized in others. Because we all participate in multiple discourse systems, we sometimes have to modify or adapt our beliefs and worldviews as we move from one
discourse system to another in the course of our lives. The ideology we advance in the workplace, for example, might be very different from the one we advance at home, and from the one we advance when we are playing Counter-Strike with our gaming friends. 'Face systems' refers to systems of interpersonal relationships that govern the way that people interact and how they get along with one another. This, in turn, depends to a degree on the ideology of the discourse system. One aspect of face systems can be seen in forms of address, for example whether you address someone by their first name or by their title and last name, or some other honorific. In some discourse systems, the preferred face systems are more egalitarian, and in such systems people tend to address each other in a similar way and use more strategies of solidarity. In other discourse systems, the preferred face systems are more hierarchical, with people at the top of the hierarchy addressed differently to those in the middle or at the bottom, and people at the bottom expected to use strategies of deference towards their 'superiors'. 'Forms of discourse' refers to the media, languages, and texts that are used in interactions between participants in a discourse system. This dimension is particularly important for online cultures because of the various ways digital media can potentially influence interaction in online contexts. As we have seen in the preceding chapters, different media introduce different affordances and constraints in terms of communicative action, and so the forms of discourse used in an online interaction can both reflect and shape the values, relationships, and social practices of a given 'online culture'. Forms of discourse also include the assumptions within a discourse system about how meaning is made and what the purpose of communication is.
In some 'online cultures', for example, such as political and academic blogs, the informational function of language is valued more highly than the relational function. In other 'online cultures', however, such as those that grow up around online social networks on sites like Facebook, the relational function of language is often more highly valued. People use language not so much to convey information as to establish and maintain relationships (see Chapter 5). Finally, 'socialization' refers to the ways that people learn to participate in a given discourse system. Some 'cultures' place value on formal processes of education, as for example in school settings. Other 'cultures' place value on more informal processes of socialization, where novices are apprenticed into the discourse system by expert members. This kind of 'apprenticeship' (Lave & Wenger, 1991) is often the kind of socialization one finds in online affinity spaces. To get a better idea of how the concept of discourse system (and its components) applies to online cultures, it is helpful to consider an example. One
online discourse system with an explicit ideology is Wikipedia. Wikipedia's ideological commitment to collective action over individual achievement is clear: articles are written by groups of volunteers, and are owned by the community. It is difficult for individuals to take credit for their contributions because the only acknowledgement that individual editors receive is the record of their edits maintained in an article's history. In Chapter 11 we will explore in greater detail the kinds of norms and conventions that grow up around this kind of peer production. As the 'free encyclopaedia that anyone can edit', one would expect to see egalitarian face systems at work in Wikipedia. Indeed, Wikipedia's principles make it clear that, regardless of background, all editors are entitled to be treated in a polite, civil way. If you look at Wikipedia discussion pages, you can see that editors tend to refer to each other in a neutral way, by username, and tend not to employ honorifics (like 'Professor'). In principle, respectful treatment in Wikipedia depends on the quality of the edits that a contributor makes, not on his or her social position. This principle is of course complicated by the fact that Wikipedia nevertheless embodies a social hierarchy, ranging from founders like Jimmy Wales (at the top) through moderators (in the middle) to editors (at the bottom). The forms of discourse in Wikipedia, especially the choice of media, both reflect and promote the ideological values described above. First of all, the wiki is set up so that anyone (even anonymous users) can participate, in line with Wikipedia's egalitarian philosophy. Second, the wiki's discussion page provides editors with the means to collectively resolve differences, in line with its more collectivistic ideology. As already noted, other features of the texts themselves, such as a lack of individual attribution, also reflect this cultural emphasis on the good of the collective.
Finally, socialization in Wikipedia, as in many online communities, is an informal process of mentoring and apprenticeship. There are no formal qualifications that prepare Wikipedians for participation, though there is documentation that provides explicit instructions on how to write an appropriate article. Learning to contribute to Wikipedia is comparable to learning to function in a workplace, a gradual process characterized by a combination of explicit instructions and informal socialization through observation and trial and error.

ACTIVITY: THE CULTURES OF SOCIAL NETWORKING PLATFORMS

Read the extracts from the 'About' pages of LinkedIn, Instagram, and Twitter below and, considering also your experience of these social networking sites, answer the questions.


1. What different kinds of people do you think these different social networking platforms are targeting?
2. How do you think their 'online cultures' differ? Consider:
   a. Ideology: how people think, what is valued, what is devalued;
   b. Face systems: how people get along, whether or not relationships tend to be egalitarian or hierarchical;
   c. Forms of discourse: how people communicate, what they think the underlying purpose of communication is and the different kinds of tools, languages, genres, and styles they use for communication;
   d. Socialization: how people learn to participate, how much of this learning is formal and how much is informal, what the consequences are for those who fail to participate in a manner deemed appropriate by the community.

LinkedIn (https://about.linkedin.com/)
Welcome to LinkedIn, the world's largest professional network with 722+ million members in more than 200 countries and territories worldwide.
Vision: Create economic opportunity for every member of the global workforce.
Mission: The mission of LinkedIn is simple: connect the world's professionals to make them more productive and successful.
Who are we? LinkedIn began in co-founder Reid Hoffman's living room in 2002 and was officially launched on May 5, 2003. […] In December 2016, Microsoft completed its acquisition of LinkedIn, bringing together the world's leading professional cloud and the world's leading professional network.

Instagram (https://about.instagram.com/)
Bringing you closer to the people and things you love
ALL ARE WELCOME: We're committed to fostering a safe and supportive community for everyone.
EXPLORE WHAT'S NEW: Express yourself in new ways with the latest Instagram features.
STAND OUT ON INSTAGRAM: Connect with more people, build influence, and create compelling content that's distinctly yours.


GROW WITH US: Share and grow your brand with our diverse, global community.

Twitter (https://about.twitter.com/)
Twitter is what's happening in the world and what people are talking about right now. #IVoted #Election2020 #VoteReady
See what's happening. When it happens it happens on Twitter. See what people are talking about. Spark a global conversation. See what's happening.
(Web sources retrieved on 9 November 2020)

Light communities and ambient affiliation

Not all of the social groups that we participate in online are as stable and long lasting as the Wikipedia community. As sociolinguists Jan Blommaert and Piia Varis point out, many online social groups are more spontaneous and ephemeral in nature. Blommaert and Varis (2015: 127) refer to these more temporary groups as light communities and they describe them in the following way:

[Light communities] are best seen as focused but diverse occasioned coagulations of people. People converge or coagulate around a shared focus—an object, a shared interest, another person, an event. This focusing is occasioned in the sense that it is triggered by a specific prompt, bound in time and space (even in 'virtual' space), and thus not necessarily 'eternal' in nature.

As Blommaert and Varis explain, light communities emerge in response to a focus like an event and then disperse again when the focus is no longer important or compelling. While these communities are only temporary, participants nevertheless do 'display, enact and embody a strong sense of group membership' (Blommaert & Varis, 2015: 128) and therefore follow norms of behaviour that can be analyzed in a similar way to more durable discourse systems (see above). Blommaert and Varis give the example of a group of people who meet in a pub to watch a sports match. While these people are very diverse in terms of their social backgrounds and many are not known to one another, they are nevertheless united in following norms of behaviour during the time that they are focused on the match. A similar kind of light community can form in online contexts, as sometimes massive
numbers of people connect to celebrate the outcome of an election, share a viral meme, or become enraged about a piece of fake news. Indeed, some of the 'forms of discourse' available in digital media make it especially easy for light communities to emerge because of the way that they facilitate brief moments of focusing. These include the liking, sharing, and hashtagging functions available in social networking sites. The internet sociolinguist Michele Zappavigna (2011), for example, describes the kind of ambient affiliation that is made possible by hashtags on Twitter (see Chapter 2). Ambient affiliation happens online when people 'affiliate with a copresent… impermanent, community by bonding around evolving topics of interest' (800). When people use hashtags as part of a tweet, their tweet becomes searchable by others, meaning that it can easily be grouped with other tweets on the same topic, in a kind of network. The hashtag also serves an interpersonal function: it is a kind of invitation to 'find out what people are saying about a topic right now and… involve yourself in communities of shared value that interest you in this given moment' (804). We can also see ambient affiliation and light communities at work in the making and sharing of fake news. Journalist Sapna Maheshwari (2016) gives a detailed history of one fake news story concerned with demonstrations against then President-elect Donald Trump immediately after the 2016 US election. She documents the way that Twitter user @erictucker posted pictures of busses in the vicinity of the protests, along with the text:

Anti-Trump protesters in Austin today are not as organic as they seem. Here are the busses they came in #fakeprotests #trump2016 #austin

In fact the busses were being used for a nearby conference with 13,000 delegates. Nevertheless, the post was shared 16,000 times on Twitter and over 350,000 times on Facebook.
It was picked up on Reddit, then in conservative discussion forums, blogs, and Facebook groups, before Donald Trump himself tweeted about 'professional protestors' (though without referencing @erictucker's tweet). We can see here how a large light community quickly sprang up and focused on the busses and their supposed role in the protests. After the story was debunked, @erictucker removed his tweet. But a similar light community failed to materialize around his next post, in which he labelled his original tweet as 'FALSE.' This last example demonstrates that sometimes light communities overlap or intersect with more durable communities. The group of people that congregated around the conspiracy theory about bussed-in protesters in Austin doubtless consisted of people who belonged to communities of Republicans, communities of Trump supporters, and communities of Texans of various sorts. But the point is that not all Republicans, Trump supporters, or Texans participated in this viral event, and many people who were not members of any of these more durable communities likely did. Part of the power of light
communities is that they can serve a bridging function between different, more durable communities which otherwise may not have a common point of focus or object of affinity. It is possible, of course, for light communities to grow into more durable social groups, as has happened with the adherents of the QAnon conspiracy theory, who originally came together around a range of viral ideas connected to an affinity for Donald Trump, a distaste for 'elites', and a moral panic about paedophilia.
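The mechanism that makes this kind of ambient affiliation possible is simple to sketch computationally: a hashtag turns each post into an entry in a searchable index, so anyone looking up a tag instantly 'finds' the light community coalescing around it. The following is a minimal illustration of that idea in Python, not a description of Twitter's actual infrastructure; the function name `group_by_hashtag` and the sample posts (which echo the @erictucker example above) are our own.

```python
from collections import defaultdict
import re

def group_by_hashtag(posts):
    """Build a simple inverted index mapping each hashtag to the posts
    that used it. Looking up a tag retrieves every post bonded around
    that topic, which is what makes ambient affiliation findable."""
    index = defaultdict(list)
    for post in posts:
        # Extract tags case-insensitively, so '#Austin' and '#austin'
        # end up grouped under the same key.
        for tag in re.findall(r"#(\w+)", post.lower()):
            index[tag].append(post)
    return index

posts = [
    "Anti-Trump protesters in Austin today are not as organic as they seem #fakeprotests #austin",
    "Great turnout downtown today! #Austin",
    "Those busses were hired for a nearby conference #fakeprotests",
]
index = group_by_hashtag(posts)
```

After running this, `index["austin"]` holds the two Austin-tagged posts and `index["fakeprotests"]` the two posts tagged with that hashtag: each key of the index is, in effect, a momentary community of posts (and posters) converging on a shared focus.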

CASE STUDY: MASSIVELY MULTI-PLAYER ONLINE GAMES AS ONLINE DISCOURSE SYSTEMS

Massively multi-player online games (MMOGs) are digital games that you can play with a large number of others on the internet. In this case study, we will consider the virtual worlds of MMOGs as online affinity spaces with their own discourse systems, where a diverse range of individuals interact according to the shared cultural norms of these spaces. The virtual worlds of MMOGs are populated by individuals from a diverse range of backgrounds. While you might at first think that such games appeal mainly to teenage males, the research shows that this is in fact not the case. For example, Williams, Yee, and Caplan (2008) surveyed 7,000 players of the popular role-playing game Everquest II about their background and motivation. The results showed considerable diversity in the player population on all demographic measures except for gender (where the distribution was 80.8% males, 19.2% females). Players ranged in age from 12 to 65 years old (average 31.16), with more players in their 30s than in their 20s. The player population was also found to be diverse on other demographic measures, such as ethnic background, income and education, and religious beliefs. One would also expect to find players from diverse regional and linguistic backgrounds coming into contact in such MMOGs. Because of this diversity, MMOGs provide players with the opportunity to come into contact with others from cultural and linguistic backgrounds very different from their own. In the virtual world, players may find themselves interacting with people whom they would normally have nothing to do with in the 'real' world. What unites players in such a multicultural and multilingual space is their common affinity for the game and their common desire to advance in it.
Often, in order to achieve the objectives of the game, players must work together, drawing on each other’s complementary strengths in order to successfully complete ‘quests’ or go on ‘raids’. Players of the game
can be seen as participating in an online discourse system, with its own ideology, forms of discourse, face systems, and socialization processes. One feature of these discourse systems is the organization of players into social groups. For example, in many role-playing games players can band together and form a 'guild', i.e. a persistent group of like-minded players who meet online more or less regularly to play the game together. Over time such guilds develop social norms that regulate the behaviour of group members, and some of these norms are made explicit in the form of guild rules posted online. Such rules are one example of a specialized form of discourse in the discourse system of particular MMOGs. In part, they reflect the specific values and practices of the guild that created them. In part, they reflect more general values and practices common to the online culture of the MMOG to which the guild belongs. Gaming researchers Magnus Johansson and Harko Verhagen (2010) analyzed guild rules and found that they address issues related to: 1) group membership; 2) commitment to the group; 3) code of conduct; 4) distribution of resources (for example 'loot'); 5) cheating; 6) strategies; 7) 'griefing'. Many of these categories reflect underlying collectivist values, which promote the interests of the group above the individual. It's not surprising to see this value in many online games, in which players working together for the benefit of the group are able to achieve much more in the game than individuals acting alone. Indeed, one of the largest guilds on Everquest II, Virtue, Guild of Honour, provides a code of conduct with the following rule at the top of the list:

Thou shalt love thy guild mate by helping them complete tasks and quests and expecting nothing in return.
(Retrieved 6 July 2011 from: http://virtue.everquest2guilds.com/en/twopage.vm?columnId=10520)

The guild rules Johansson and Verhagen analyzed also identify another interesting cultural practice, found in a range of online virtual worlds: 'griefing' or 'grief play'. This refers to a kind of play which is intended to disrupt the experience of other players, for example by harassing them, deceiving them, unfairly 'killing' them, and/or behaving in other ways that violate the 'spirit of the game' (Foo & Koivisto, 2004). It is interesting to note that the word 'griefing' itself is a specialized term used by players of MMOGs, and so understanding what constitutes 'griefing' requires insider status, or at least some knowledge of a particular MMOG or the conventions of a particular guild. Analyzing
such disruptive behaviour is particularly useful because it helps to highlight the norms of particular gaming cultures and what happens when those norms are breached. The practice of griefing is one which is hotly contested. Firstly, what players consider to be griefing varies from one kind of virtual world to another, depending on the aims and goals of the virtual world in question. For example, killing another player's character may be acceptable in MMOGs that permit player versus player combat, but not in a virtual world like Second Life (Linden Lab), where such combat is not allowed. Secondly, in some contexts players might argue about the status of griefing itself, with some suggesting that griefing is a legitimate form of activity in the game world. Fink (2011) describes how the practice of griefing is interpreted by residents of Second Life (referred to below as SL—a well-known virtual world which we discuss in more detail in Chapter 9). He identifies three different frames that residents use in order to make sense of the activity: 1) the 'lawbreaker' frame; 2) the 'misfit' frame; 3) the 'jester' frame. Here's how he describes them:

1. The 'lawbreaker' frame: Griefing is interpreted as a crime and griefers are seen as criminals who violate the 'right of quiet enjoyment' of other residents of the virtual world.
2. The 'misfit' frame: Griefing is interpreted as unacceptable behaviour related to some kind of personality disorder, and griefers are seen as sociopaths in need of psychological care.
3. The 'jester' frame: Griefing is interpreted as play or performative critique of the prevailing norms of the virtual world. Griefers adopt this frame to justify their actions, saying that their activity is aimed at residents who treat the virtual world too seriously.

Fink's analysis highlights the dominant norms of the SL discourse system, as they emerge through the practice of griefing and the interactions and conflicts between SL residents that surround it.
Those who adopt the lawbreaker and misfit frames cast griefers as socially unacceptable outsiders whose behaviour threatens the norms of the SL community. The interactions suggest that these SL residents view behaviour which causes harm to others or interferes with the right to 'quiet enjoyment' of the virtual world as unacceptable or wrong. They often go further to suggest that such behaviour should be dealt with through rules and sanctions (for example, when a resident files an 'abuse report' with Linden Lab, the creators of SL).


Residents who adopt the jester frame do not deny that griefing in some sense violates the norms of the SL discourse system. Indeed, they openly acknowledge that griefers are testing and breaking the 'rules of the game'. However, they justify these actions by casting the griefing activity as a legitimate form of expression, which deliberately disrupts the experience of others in order to provide a playful commentary or critique of SL culture. Considering the two sides of the conflict around the social practice of griefing in SL, we gain some insight into the way that online culture develops in this online affinity space: social norms and values are created and contested through the online discussions of participants in the space. As participants discuss the practice of griefing and whether it ought to be sanctioned, they negotiate which values should take priority: the right to 'quiet enjoyment' of the virtual world or the right to complete freedom of expression in that world.

Cultures-of-use and media ideologies

Exactly how digital media help to give rise to 'online cultures' depends on two things: 1) the technological affordances and constraints of the media, and 2) the cultures-of-use that grow up around them. The applied linguist Steven Thorne suggests that over time particular 'cultures-of-use' grow up around digital media. He defines 'culture-of-use' as 'the historically sedimented characteristics that accrue to a CMC (computer-mediated communication) tool from its everyday use' (Thorne, 2003: 40). In the last chapter we described the way certain kinds of social practices are 'technologized' around certain tools. 'Cultures-of-use' refer to the expectations, norms, and values associated with these 'technologized' practices. Thorne points out that digital media are variably understood by different individuals, and these understandings are often affected by the 'cultures' (or 'discourse systems') individuals participate in. In other words, the 'natural' way to use a tool like a hashtag might vary from one person to another and from one 'cultural' group to another. To illustrate this, Thorne uses the example of a tandem language learning project, in which language students from two universities (one in France, one in America) were paired up over email for communicative practice. He demonstrates that the expectations of the American students were very different from the expectations of the French. The Americans expected to develop a relationship of trust and solidarity with their French counterparts by using this medium, which they heavily engaged with in their everyday lives. The French students, however, were less likely to use email in their day-to-day
social lives and treated the exchange as a purely academic exercise. Thus, for the Americans, email was seen as a relational form of discourse, whereas for the French it was seen primarily as an informational one. Thorne suggests that the mismatch occurred partly because of the students' different historical experiences of the medium. Thorne also describes how different groups have differing attitudes towards media like email and chat. Both of these media afford communication through text over a distance, but chat is synchronous, whereas email is not. Thorne reports the perceptions of American students in another tandem learning project. The students indicated that, while they were excited to interact with French speakers, they felt that the use of email constrained their interaction. One characterized email as 'inconvenient' and 'an effort' (56). Thorne concludes that for these students: 'Email is a tool for communication between power levels and generations (for example, students to teachers; sons/daughters to parents) and hence is unsuitable as a medium for age-peer relationship building and social interaction' (56). The point of these studies is not that the 'French' are different from 'Americans', or that people who use email belong to a different 'culture' from people who use chat. The point is that certain norms, conventions, and values tend to grow up within different groups of media users over time. The practices that grow up within what Thorne calls 'cultures-of-use' are not solely determined by the affordances and constraints of media, nor solely determined by the 'cultures' of users, but are rather the result of an interaction between the affordances of the media and the values, experiences, and predispositions of users. The idea that media have different meanings for different people is one that is also developed by the linguistic anthropologist Ilana Gershon.
According to Gershon, the way that individuals understand and use media is shaped by their media ideologies, the 'set of beliefs about communicative technologies with which users and designers explain perceived media structure and meaning' (2010: 3). Her work shows how different people develop different beliefs about when and how to use media based on their perceived affordances and constraints. In her 2010 book, The Breakup 2.0, Gershon examines media ideologies in the context of mediated breakups, for example breakups through email, Facebook, instant messaging, and text messaging. She describes how texting is often used by people at the beginning stages of relationships, when they are first getting to know each other. Texts, which are limited in length, afford a medium to briefly show someone attention without expending great effort. Because they are short, the messages also tend to be ambiguous with respect to any underlying feelings. People often perceive the medium as informal, low stakes, and not very serious: a good alternative to a phone call, which would express too much interest (see Chapter 5). However, as
Gershon notes, 'it is this very casualness that makes texting a problematic medium for breaking up' (2010: 24). Another point that Gershon makes is that media ideologies cannot be understood in isolation. This is because people's media ideologies change when they begin regularly using a new medium. For example, your beliefs about email (how formal or informal a medium it is, for example) might change if you begin to use text messages. Thus, to understand the media ideologies of a particular person or group of people, you have to consider all of the media that they use, because of the way that these ideologies influence each other. Similarly, because different generations have grown up with different media, media ideologies can differ across generational discourse systems. Gershon (75–77) notes that many of the young people in her study considered the telephone to be an acceptable medium for breaking up, which would not have been the case when she and her generation were at university. She explains this observation by pointing out that the younger generation have grown up with a much wider range of media, and compared to text-based digital media, the telephone constitutes a relatively rich medium. Media ideologies are not only associated with forms of communication, like email and instant messaging, but also with different communication platforms, like Facebook, Instagram, TikTok, and YouTube. Such 'platform ideologies' can be seen in the conventional ways that people express themselves on a particular platform. As the information scientists Martin Gibbs et al. (2015: 257) put it, 'each social media platform comes to have its own unique combination of styles, grammars, and logics, which can be considered as constituting a "platform vernacular", or a popular (as in "of the people") genre of communication'. In Chapter 4, we saw an example of this with respect to Instagram.
From this discussion it should be clear that the choice of media (or platforms) used within a particular discourse system has a profound effect on the forms of discourse that are available to members. The affordances and constraints of a particular digital medium have an obvious effect on the kinds of message that can be constructed (for example, whether synchronous or asynchronous, long or short). In addition, cultures-of-use develop around media and platforms, shaping how people understand and use them (for example, whether they associate them with formal or informal, serious or casual interaction). Because communication in online discourse systems is always mediated by technological tools, the range of meanings that can be made depends heavily on the kinds of tools available. At the same time, people bring to their use of tools their own values, knowledge, relationships, and experiences, and those associated with the various groups they participate in. It is in this tension between the affordances and constraints embodied in tools and the 'cultures-of-use' that develop around these tools that 'online cultures' are formed.


ACTIVITY: YOUR MEDIA IDEOLOGIES

A. Evaluating tools

Consider the media/platforms below and answer the questions:

Letter          Snapchat     TikTok
Telephone       Email        Facebook
Text messenger  Instagram    (other)

1. What different communicative situations do you use these tools for? Why?
2. Are some tools better suited to some situations than others? Why?
3. Are some tools more personal than others? Why?
4. Are some tools more formal than others? Why?
5. Are some tools more closely associated with particular groups you belong to?

B. Evaluating situations

Consider the following situations and explain which of the media/communication tools listed above you would use in each context. Do any of these situations call for unmediated (face-to-face) communication? If so, why?

• You are meeting some friends at a party and you want to let them know that you are running late;
• You have been dating your boyfriend/girlfriend for about a year and you want to break up with them;
• You are writing an assignment and you want to ask your professor for an extension of the due date;
• You are learning a foreign language and you want to practise by doing a language exchange with some native speakers;
• You want to apply for a job over the summer vacation period;
• You want to show support for a particular candidate or political cause;
• You want to express your loyalty to a sports team you support.


Intercultural communication online

As we noted in the case study above, participants in online affinity spaces like MMOGs develop discourse systems with shared norms and practices. In such online spaces we are increasingly likely to come into contact with and need to communicate with people from a wide range of diverse backgrounds. Thus, another feature of these spaces is the possibility for ‘intercultural communication’, where participants from different discourse systems (and often from different linguistic backgrounds) communicate with one another and accommodate to one another’s communicative assumptions and expectations. Such accommodation opens up the possibility of developing innovative forms of communication, and the opportunity to engage with and learn about the diverse languages and cultures of participants in the space.

In the early days of the internet, there might have been some cause for concern that the English language would come to dominate such intercultural communication. At the time, technical limitations such as input hardware and internet protocols made it impossible to write with non-Roman sign systems like Chinese characters, and even some languages based on Roman scripts were difficult to write, because diacritics and accents were not available in the standard character set. Fortunately, much has changed since those days, and more recent studies of language on the internet describe a range of multilingual and multicultural practices (see, for example, Lee, 2017).

Much of the research on intercultural communication in online affinity spaces has focused on the out-of-class learning experiences of language learners. For example, education researcher Eva Lam (2000) describes how ‘Almon’, a Hong Kong Chinese teenager living with his family in the US, designs a J-pop (Japanese pop music) website.
Through the website, he develops a number of online friendships with others who share his affinity for J-pop, and communicates with them using online chat and email. His online friends come from diverse locations including Canada, Hong Kong, Japan, Malaysia, and the US. Lam describes the language used by Almon and his online chatmates in this intercultural space as ‘the global English of adolescent pop culture rather than the Standard English taught in ESL classes’ (475–476). She characterizes this global English as an innovative form, which is highly relevant for use in the kind of online community that developed in this case. Indeed, such innovative use of language is typical of online settings (see Chapter 5).

One feature of intercultural communication online is the way that participants draw on a variety of cultures in order to create a hybrid, ‘third space’ with norms and conventions of its own. Lam argues that, in the case of Almon, successful participation in this cultural space was beneficial. She notes that ‘the English he controlled on the internet enabled him to develop a sense of belonging and connectedness to
a global English-speaking community’ (476). This contrasts with his identity as a somewhat marginalized ESL speaker in the school system. Similar claims are made by Rebecca Black, who studies English language learners participating in fan fiction websites (see also Chapter 3, where we described these affinity spaces as ‘online writing communities’). For example, she describes how one learner negotiates a strong identity as a multilingual and multicultural fan fiction writer on fanfiction.net (Black, 2006).

Another example of intercultural communication online is Thorne’s (2008) study of interaction in the massively multiplayer online game World of Warcraft. Thorne reports the experiences of an American university student who chats with a Russian-speaking player. The study shows how chat in World of Warcraft can present opportunities to interact with and learn about people from different cultural backgrounds. Here is how their interaction begins (presented as in Thorne, 2008: 319):

1. Zomn: ti russkij slychajno ?
2. Meme: ?
3. Zomn: nwm :)) sry [sorry]
4. Meme: what language was that?
5. Zomn: russian :)
6. Meme: was going to guess that
7. Meme: you speak english well?
8. Zomn: :)) where r u [are you] from ?
9. Meme: USA, Pennsylvania
10. Zomn: im from Ukraine
11. Meme: ah nice, do you like it there?
12. Zomn: dont ask :)))) at least i can play wow :))

After an initial moment of confusion, when Zomn asks Meme a question in Russian, the participants take the opportunity to quiz each other about their respective backgrounds. In the ensuing interaction Meme goes on to tell a story about a friend from Ukraine as well as finding out from Zomn that he is a law student. In the interaction there are some signs of the innovative use of language that one would expect in an intercultural interaction online, including code-switching and use of text abbreviations and emoticons (see Chapter 5).
What is interesting is the way that, a bit later in the interaction, Meme (the American) begins asking questions in Russian after contacting his Ukrainian high school friend and asking him to provide some Russian phrases. Here is an example (320):

24. Meme: kak dela?
25. Zomn: :))) normalno :)))
26. Meme: if I may ask, what did I say haha, I’m not quite sure
27. Zomn: how r u :) ///

These informal language lessons illustrate the positive potential of this kind of intercultural communication. As Thorne notes, ‘in an uncorroborated but interesting follow-up to this episode, during an informal conversation with the American student, he mentioned a strong interest in Russian language courses’ (322).

Tribalism and polarization

At the beginning of this chapter, we considered McLuhan’s notion of the ‘global village’. We pointed out that McLuhan’s idea that digital media would usher in an age of consensus had failed to become reality. Instead, online spaces have become increasingly fragmented: rather than diverse spaces where competing worldviews are exchanged and a consensus view emerges, different ‘tribes’ with different cultures, ideologies, and agendas have become siloed in different online spaces. There are spaces for gamers, fitness freaks, professionals, white supremacists, Black Lives Matter advocates, pro-democracy protestors, and conspiracy theorists of various kinds.

In her 2018 book, Political Tribes: Group Instinct and the Fate of Nations, the legal scholar and writer Amy Chua suggests that belonging to tribes is a deeply felt human need. Tribes are not something that sprang up because of the internet. At the same time, it should come as no surprise that people often use online social groups to fulfil their need to belong to tribes. Tribalism, however, has a dark side. As Chua puts it: ‘...the tribal instinct is not just an instinct to belong. It is also an instinct to exclude’ (2018: 8).

Many people are concerned that tribal tendencies on the internet have led to an increasing polarization of world views. The basic idea is that participating in more fragmented groups reduces exposure to competing ideologies, which allows people to develop more extreme ideas and makes them less tolerant of differences. It would be a mistake, however, to solely blame digital media for perceived increases in tribalism or polarization (Benkler et al., 2018). First, both tribalism and polarization are phenomena that were widely observed long before digital media came along. Second, different people and different ‘tribes’ seem to respond differently to the information flows made possible by digital media.
In their study of media/social media coverage of the 2016 US election, the economist Yochai Benkler, political scientist Robert Faris, and law professor Hal Roberts (2018) observed that people at different ends of the political spectrum were affected differently. According to their study, misinformation and disinformation spread more readily in right-wing media than in the centre/left media ecosystem, resulting in greater radicalisation on the right. They maintain that, had technology
been the deciding factor, one would have expected similar degrees of radicalisation on both sides. There are clearly other factors involved as well. Keeping these caveats in mind, there are still three possible ways that digital media could be implicated in the rise of online tribalism and polarization: 1) digital media enable ‘personalization’ of information, allowing people to insulate themselves from views they are not comfortable with; 2) digital media can create self-reinforcing ‘filter bubbles’ (see Chapter 2); and 3) digital media can be leveraged to manipulate mainstream media.

First, digital media provide people with the ability to personalize the information that they consume, curating their feeds to create a self-imposed echo chamber that excludes any challenging views. While people also curated their news before digital media came along, subscribing to newspapers and magazines that aligned with their political views, at that time there were fewer sources of news and news outlets had to cater to a larger slice of the market. In his book #Republic: Divided Democracy in the Age of Social Media, the American legal scholar Cass Sunstein (2017) argues that the increased microtargeting and personalization of information has made it more difficult for people to understand opposing viewpoints.

Second, and related to this, algorithms provide further ‘personalization’ by serving us with search results or social media posts based on what they know about our preferences. As we saw in Chapter 2, this can lead to the establishment of self-reinforcing filter bubbles that have the effect of insulating and isolating individuals. According to Sunstein, filter bubbles can diminish two important components of social life: experiences of serendipity; and common shared experiences. When information is algorithmically filtered, people are less likely to come across new information (and people with differing viewpoints) by chance.
In addition, shared societal experiences may be less common in a world tailored to individual schedules and preferences.

A third way that digital media can play a role in the polarization of society is when they are used to deliberately manipulate the mainstream media. One important set of actors in disinformation campaigns at the time of the 2016 US Presidential election was the online trolls who congregated in sites like Reddit, 4chan, and 8chan. Internet researchers Alice Marwick and Rebecca Lewis (2017) describe the practices of a range of overlapping far-right online groups, including internet trolls, gamergaters, men’s rights activists, white nationalists, and conspiracy theorists. These groups ‘[take] advantage of the opportunity the internet presents for collaboration, communication, and peer production’ in order to ‘target vulnerabilities in the news media ecosystem to increase the visibility of and audience for their messages’ (3). For example, by making use of peer-produced and distributed content like memes on social media (see Chapters 10 and 11), these groups have been able to generate attention from more mainstream bloggers and news outlets in order to spread often harmful ideologies.
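The self-reinforcing loop behind a filter bubble can be sketched in a few lines of code. The sketch below is a deliberately simplified toy model, not a description of how any real platform’s algorithm works: the viewpoint labels, starting scores, and boost value are all invented for illustration.

```python
# A toy model of engagement-based feed ranking: the system ranks posts by a
# user's 'interest scores', and every click raises the score of the topic
# that was clicked. All names and numbers here are invented for illustration.

def update_preferences(preferences, clicked_topic, boost=1):
    """Increase the weight of a topic each time the user engages with it."""
    preferences[clicked_topic] = preferences.get(clicked_topic, 0) + boost
    return preferences

def rank_feed(posts, preferences):
    """Order posts by the user's learned topic weights, highest first."""
    return sorted(posts, key=lambda post: preferences.get(post["topic"], 0),
                  reverse=True)

# A user starts with only a slight lean towards one viewpoint...
prefs = {"viewpoint_a": 3, "viewpoint_b": 2}
posts = [{"title": "Story A", "topic": "viewpoint_a"},
         {"title": "Story B", "topic": "viewpoint_b"}]

# ...but each click on the top-ranked item boosts that topic further,
# so the opposing viewpoint sinks progressively lower in the feed.
for _ in range(5):
    feed = rank_feed(posts, prefs)
    prefs = update_preferences(prefs, feed[0]["topic"])  # user clicks top item

print(prefs)  # → {'viewpoint_a': 8, 'viewpoint_b': 2}
```

The point of the sketch is structural: because ranking feeds engagement and engagement feeds ranking, even a small initial lean is amplified until one viewpoint crowds out the other, without the user ever deliberately choosing to exclude anything.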


So, what does all of this mean for digital literacies? Firstly, in order to participate effectively online, internet users now need to factor in some of the polarizing behaviours that we have described here. For example, it is worth bearing in mind that some internet memes are generated with the purpose of mainstreaming fringe ideologies that can be harmful. Some would therefore argue that people have an ethical responsibility to think carefully about what content they share with their networks.

Secondly, issues of tribalism and polarization also force us to examine our own practices of media consumption. Are we creating self-imposed echo chambers and, if so, what effect does that have on our outlook? Indeed, many people strive for a rich and varied media diet. There are even apps that are designed to ‘break’ filter bubbles. For example, the news comparison platform Ground News has a feature called ‘blind spot’: based on an analysis of a user’s media preferences, the app allows them to find news articles that are in their ‘blind spot’, that is, that lie outside of those preferences and that they might not normally encounter.

Most discussions of tribalism, and most proposals to address it, however, focus too much on the problem of ‘information’ and ignore the other aspects of culture that we laid out in our model of discourse systems above—things like ideologies, forms of discourse, face systems, and socialization. Indeed, while you might be able to change people’s minds on some issues by giving them new information, if their beliefs are regarded by them as part of their identity as a member of a particular tribe, then they are unlikely to change them no matter what information they have. In fact, information might be the least important factor when it comes to tribalism.
Informing supporters of Donald Trump of the inaccuracy of things he says almost never affects their support for him, because their support isn’t based on the factual basis of his statements, but on their emotional resonance, the way they make supporters feel part of an ‘in-group’ and give them permission to dislike members of various ‘out-groups’. Much more important are the ways different forms of discourse (such as slogans, memes, and symbols of various kinds) as well as particular ways of relating to other members (such as certain speech and writing styles and forms of address) come to reinforce a sense of belonging. In this regard, digital media don’t just make the exchange of such emblems of belonging much more efficient, but also promote more phatic forms of communication (see Chapter 5), forms of communication designed not to inform but to inspire feelings of either conviviality or hostility (see Chapter 10).

Conclusion

In this chapter we have considered the concepts of online cultures and intercultural communication. Online cultures can be seen as discourse systems which are made up of four components: 1) ideologies, or how people think;
2) face systems, or how people get along with one another; 3) forms of discourse, or how people communicate; and 4) socialization processes, or how people learn to participate. Media play an important role in online discourse systems, with specific cultures-of-use developing around different media and platforms as a result of individual and collective experiences over time. Online discourse systems are often multilingual and multicultural, with participants from a wide range of different backgrounds who must accommodate to one another’s communicative practices. Such intercultural communication can lead to the development of innovative forms of expression and provide participants with the potential for positive intercultural exchanges. At the same time, there is a natural tendency for people to seek out ‘tribes’ in online spaces, and this can result in polarization when these tribes become isolated from one another.

In the following chapters we will elaborate on some of the principles introduced here by describing a number of specific digital literacy practices and how they affect the way people interact in social groups, managing their identities, their relationships, and their sense of belonging. In particular, we will consider literacy practices related to digital video games (Chapter 9), social networks (Chapter 10), collaboration and peer production (Chapter 11), and finally, practices of online surveillance and privacy (Chapter 12).

Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 9

Games, learning, and literacy

Among the most pervasive practices people engage in online is playing video games. These games can be played on a range of devices including personal computers, arcade machines, consoles, smartphones, and media tablets (we use the term ‘video game’ to refer to all such digital games). Early video games were simple affairs, with basic graphics and simple goals. For example, one of the first commercial video games, released in 1972, was an arcade game called Pong (Atari), a kind of virtual tennis in which each player controlled a stick-like paddle and hit the virtual ‘ball’ from their side of the screen to their opponent’s side.

Video games have come a long way since those early days. The games themselves have become more demanding, often challenging players to solve complex problems as part of a developing narrative. Advanced 3-D graphics, virtual reality and augmented reality have made it possible to create intricate and compelling game worlds for players to act in. With the advent of the internet it has also become possible for players to interact in real time with other players and work together in the game world. In addition to these technological advances, there have been important social developments as well. Online affinity spaces (see Chapter 8) have emerged where video game fans can congregate and discuss their favourite games, share tips and modifications of games, and watch gameplay videos called ‘machinima’.

Games and literacy

Perhaps more than any other digital media, video games have attracted criticism from parents, teachers, and the media. Critics point to issues of gender stereotyping and violence in video games, as well as to the problem of video game ‘addiction’. Video games are often dismissed as a corrupting influence, or, at best, a waste of time. At the same time, however, there is also a growing recognition of the potential of video games to promote active and critical learning. In this chapter, we will explore and critically evaluate the affordances and constraints of video games, what makes them so compelling, and the kinds of literacy practices that have grown up around them.


In his book, Everything Bad is Good for You, the author Steven Johnson (2006) points out that video games have become more and more demanding over time. Obviously, there are still a good number of mindless video games, but in this chapter we will be focusing more on the complex video games that Johnson has in mind. These are games that:

1) are situated in some kind of virtual game world;
2) tell a story in that world with the player as a central, active participant;
3) involve the player in problem-solving activities in order to achieve the goals of the game;
4) may involve the player in collaborating with others in order to achieve the goals of the game;
5) may offer the player the possibility to customize the game, for example by ‘modding’ the game and designing new levels.

We will be discussing these games in terms of both technological affordances and literacy practices. We begin by considering ‘reading’ and ‘writing’ in video games and how games open up new ways of making meaning, both within games themselves and outside games in wider fan communities. Then we examine meaning-making in games as an embodied experience (see Chapter 6), with digital and physical spaces increasingly mashed up with our material and social worlds. Next, we discuss how video games allow us to adopt new ways of being in the world and relating to other people. Finally, we consider the topic of games and learning, exploring the potential of games to promote new ways of thinking.

ACTIVITY: THE GAMES PEOPLE PLAY

Think about a video game that you have played, or one that you know of. Describe and evaluate that game by answering the questions below.

1. What kind of game is it? Is it a:
   a. Shooter?
   b. Arcade game?
   c. Adventure game?
   d. Role-playing game?
   e. Simulation game?
   f. Strategy game?
   g. Puzzle game?
2. How would you describe the visuals? Sound? Music? Are they:
   a. Realistic?
   b. Stylized?
3. Does the game tell a story? How?
4. Does the game involve problem-solving? How?
5. Does the game allow you to interact with other players? How? Is it:
   a. Competitive?
   b. Co-operative?
6. Does the game bring together physical and virtual environments? How?
7. Does the game allow you to share your achievements? How?
8. How similar or different is the experience of playing this game compared to other literacy practices like reading or writing in print-based media or on the screen (for example websites)?
9. Would you describe this as a good game? Why? Does the game motivate you, challenge you, reward you and hold your interest? How?

Reading and writing in games

If you have ever read a game review, you’ll know that the language gamers use to talk about games is often very similar to the language of book reviewers or film critics. Games are ‘titles’ that can be described in terms of their ‘setting’, ‘backstory’, ‘plot’, ‘action’, and ‘characters’. Like books or films, they usually tell a story. However, the stories that games tell are always interactive ones in which players play a central role and make important decisions about the plot. Games also tell their stories through a range of visual, verbal, aural, and textual modes. These elements are combined in complex ways that both represent the unfolding action in the world of the game and provide an interface through which the player can interact with that world. What this suggests is that a video game can be seen as a complex kind of interactive text, one that encourages new forms of ‘reading’ and ‘writing’.

We considered some aspects of this new form of reading and writing in Chapters 3 and 4. Like other forms of digital media, video games draw upon the affordances of new media such as interactivity, multimodality, and multimediality. They also have structures—they tell stories or construct arguments. Finally, they exhibit intertextuality, that is, they are connected to a range of other texts outside of games, including game manuals, walkthroughs, and fan websites.

Texts within games

In order to illustrate interactivity and texts within games we will use the example of Spore (Electronic Arts). Spore begins with an asteroid collision on an alien planet far away in space. The player is a microscopic organism on that asteroid, and the game is about the player’s evolution into more and more complex life forms. The game has five stages: cell, creature, tribe, civilization, space. In the course of the game, the player evolves from a single-celled organism to a complex life form to a member of a space-age civilization.


When players begin, they have to make a number of choices that affect their future evolution. The very first thing that they do in the game is to decide whether to be a herbivore or a carnivore, though they can later evolve into an omnivore as well. This choice has an impact on the options that become available to them later. For example, if they decide to be a carnivore in the cell stage, then in the creature stage they play a predator that is only able to eat meat (fruit will make them ill). In the tribal stage they play members of an aggressive tribe, in the civilization stage they play members of a military civilization, and in the space stage they play warriors. As warriors, they have access to cheap weapons, so military conquest is an attractive option. However, the tools needed for economic and religious conquest are expensive, making these strategies harder to pursue.

As players progress through the game, they are constantly developing and building on to their creatures and the social groups that they live in. With every successive generation in the cell and creature stages they make changes to their organisms, using a ‘creator’ to add various limbs and organs. In the tribal stage they discover fire and create distinctive, tribal clothing, and in the civilization and space stages they create buildings, cars, boats, planes, and spaceships that they will use to conquer first the world and then the galaxy.

The story of Spore is the story of a journey from primitive life form to master of the universe. The choices that players make as they interact with the game directly shape the plot of that story and the character of the virtual environment that evolves. The player thus assumes a centrally important role as main character and storyteller, in a manner that is not possible in other media such as books and films.
The education scholar James Paul Gee calls the stories of video games embodied stories because of the way that they are ‘embodied in the player’s own choices and actions in a way they cannot be in books and movies’ (Gee, 2003: 82). Note that such embodied stories are not necessarily better than traditional stories, just different.

In Spore, each stage (cell, creature, tribe, civilization, space) is progressively more complex than the last, both in terms of the tools and technologies that you have at your disposal and in terms of the goals that you have to achieve to advance in the game. Even at four years old, Christoph’s son was able to happily play the cell stage, even without being able to read any of the texts that appear on screen. By the time he got to the space stage, however, he had to recruit the assistance of his six-year-old sister and other family members in order to decipher the written texts and understand the missions that he had to go on. This increasing complexity is reflected in the game’s interface, which slowly but surely becomes cluttered with the tools that players need in order to succeed in the game. It’s interesting to compare the interface from the cell stage (Figure 9.1), where it is the simplest, to the interface from the space stage (Figure 9.2), where it is the most complex.


Figure 9.1 Spore screenshot: Cell stage (Spore and screenshots of it are licensed property of Electronic Arts, Inc.)

At the cell stage, the player has a view of the primordial ooze in which his cell is hunting for food. At the bottom of the screen is a control panel, with ten items. From left to right these include: a zoom button (A), an options menu (B), a pause button (C), the ‘Sporepedia’ (D—lists all of the creatures that the player has encountered), ‘My Collections’ (E—lists all of the cell parts that the player has unlocked), DNA points (F), progress bar (G), a ‘Call mate’ button (H), a history button (I—shows a player’s evolution), and a health bar (J—shows damage). Of these, the only button that is regularly used is the ‘Call mate’ button. Goals appear in the top left of the screen (K).

In the space stage, the ‘Call mate’ button is gone and is replaced by three control panels. The first is a map of the planet (A), indicating cities, other spaceships, and so on. The second is a communicator (B), which can be used to talk to the aliens on different planets. Selecting the communicator provides a range of options: trade, repair, recharge, take on a mission, engage in diplomacy. Next to the communicator is a control panel (C) with eight different kinds of tools (maximum 18 tools per category): socialization, weapons, main tools, colonization, planet atmospheric tools, planet sculpting tools, planet


Figure 9.2 Spore screenshot: Space stage (Spore and screenshots of it are licensed property of Electronic Arts, Inc.)

colouring tools, cargo. Again, goals (D) appear on the top left, and incoming communications from other aliens (E) appear below them.

Both stages adopt an organization that is commonly seen in video games, dividing the screen into two main parts (Beavis, 2004). One part of the screen graphically displays the game world that the player can act on. The other part is an iconic display of the tools that the player can use (at the side or bottom of the screen). When playing the game, players move between their toolset and the unfolding action of the game. In networked multiplayer games, these tools include a chat tool, which allows players to send each other messages in real time.

Comparing the two interfaces, one is struck by two main observations. The first is how much more a player must attend to in the space stage of the game: players have a much greater range of options open to them. The second is that written text in the space stage has become much more important. In order to progress, the player must take on missions assigned by the leaders of their own or other civilizations (these are non-player characters simulated by the computer). These missions are described in written text primarily in the form of pre-scripted conversations that allow the player to negotiate the terms of the missions with the alien leaders.


That’s why Christoph’s son at four years old had a difficult time of it in space. In earlier stages, understanding written text was not so crucial: he was able to ‘read’ the game without being able to understand the written texts. In fact, when offered help, he confidently claimed, ‘I can read.’ As well as the written text, he attended to a range of multimodal cues: things like the kinds of icons and the kinds of sounds that accompanied textual messages. For example, in the tribal stage, a message warning that his village was under attack was always accompanied by a yellow ‘alert’ icon and a particular sound effect.

Thus, meanings in video games are highly multimodal. In addition to the visual and iconic elements of screen layout already described, subtle cues about the game world can be conveyed through colour, sound effects, and music, which do not just contribute to the overall atmosphere of the game, but also convey meaning. For example, in the space stage of Spore, the basic environment of a planet can be deduced from a distance by looking at its colour. If the planet is red, then the environment is likely to be hostile, with lots of volcanic eruptions that could damage your ship. In many online multiplayer games, colour (along with dress) is also a resource that players can draw upon in order to construct their identities. In a typical fantasy role-playing game, for example, a wizard wearing white robes suggests a different kind of character than one wearing black.

In summary, video games provide a range of ways of making meaning. On one level, video games can be seen as new kinds of texts, which tell new kinds of interactive, embodied stories. These stories are created from the choices made by the game designer on the one hand, and the player on the other.
As we have seen, the visual layout of the game reflects this interactive quality, by combining two main elements: 1) a representation of the game world; 2) an iconic interface, which allows players to direct action in that world. On another level, the texts in the game themselves draw on the affordances of digital media by combining a variety of modes: visual, verbal, aural, and textual. In order to make sense of the game, players must attend to this combination of modes.

Storylines and arguments

As we said above, like written texts, video games are also structured as ‘stories’ or ‘arguments’. According to Gee (2003: 81):

The storyline in a video game is a mixture of four things:

1. The game designers’ (“author’s”) choices;
2. How you, the player, have caused these choices to unfold in your specific case by the order in which you have found things;
3. The actions you as one of the central characters in the story carry out (since in good video games there is a good deal of choice as to what to do, when to do it, and in what order to do it);
4. Your own imaginative projection about the characters, plot, and world of the story.

Here, the game designers’ choices include things like the backstory to the game, the available settings and map of the game, and the characters and items in the game. As a player, you encounter particular settings, characters, and items in a sequence based on the choices that you make, for example which area of the map you explore first. Other kinds of choices involve the actions that you take, like whether you choose to fight a character or befriend them. Video games often have multiple possible plot endings depending on these actions. Finally, there is the imaginative projection or investment that you make, a feature of the storylines in both books and games. As Gee points out, in a book, you may feel sad when an important character dies, but that’s not your fault. In contrast, if your character dies in a game, you are probably also annoyed that you have failed and determined to try again and succeed. Based on these different kinds of imaginative projections, Gee argues that investment in video games is different than in books. All of this also highlights the interactive nature of the storyline in games, the fact that it is co-constructed by the designer and the player. Indeed, Gee (2015) characterizes games as a kind of ‘conversation’. They involve reciprocal ‘turn-taking’ where ‘[t]he player acts and the game responds’ in ‘real time’ (10). Sometimes, this ‘conversation’ can take unexpected turns, as players make use of the affordances of the virtual world in unexpected ways, creating an ‘emergent narrative’ or ‘emergent gameplay’. This is seen especially in games that are loosely structured and where designers have provided players with a lot of freedom. For example, in the game Minecraft (Mojang Studios), it was initially possible to farm various kinds of resources, like wheat, potatoes, and chickens, which could be used for sustenance.
However, players surprised designers by developing a way to farm iron (the metal), which they did by capturing and slaying ‘iron golems’, i.e. ‘monsters’ that appear near towns and provide iron when they have been killed. As iron is a relatively rare resource that can be used in buildings, weapons, and armour, the resulting iron farms made it possible to efficiently craft more sophisticated buildings and items, and made possible new kinds of storylines. In other words, these storylines ‘emerged’ from players’ experimentation. As well as storylines, games may also be capable of constructing arguments. That is to say, some games may be able to make a point. An example is the online flash game September 12th: A Toy World (Newsgaming), which was designed in the wake of the September 11, 2001 terrorist attack on the twin towers in New York. In this game, the player can aim missiles at terrorists in a Middle Eastern setting, but when these terrorists are killed, they are mourned by passers-by who then turn into terrorists themselves. Ultimately, this experience forces the player to conclude that shooting the
terrorists leads to the creation of more terrorists. Here again, we can see how this argument is co-constructed by designers’ and players’ choices. The game begins with a message that spells out the choice for players: ‘The rules are deadly simple. You can shoot. Or not.’ Players can test either course of action by interacting with the virtual world and seeing the results for themselves. These sorts of ‘games for change’ (Antle et al., 2014) are designed to provide an interactive experience that argues for or against a particular course of action, thereby changing opinions, attitudes, and behaviours. The video game researcher Ian Bogost (2007) maintains that games use procedural rhetoric in order to persuade users in this way. According to this perspective, the expressive power of video games is derived not from verbal or visual modes, but rather from the computational procedures that games can enact and that their users can experience. Bogost considers this ‘procedurality’ to be the ‘core representational mode’ (ix) of video games. In this case, making an argument in a game is about designing a set of rules that users can interact with and, through this interaction, making persuasive claims.

Texts outside of games

So far we have been considering the texts and literacy practices inside the world of the video game. However, much of the literate activity associated with video games extends to texts and practices outside of games themselves, such as game reviews that appear in gaming magazines and websites, game manuals, walkthroughs, fan modifications, and fan machinima. It is important to point out that this web of outside texts is linked to the group of people who share an affinity for a particular game, kind of game, or video games in general. The group of people who enjoy playing Spore would occupy one such ‘affinity space’ (as described in Chapter 8).
Members of the group would create and share texts that serve to improve their own and others’ enjoyment of the game. These texts then act as resources that other members of the group can draw upon if necessary. One such resource is the game manual. Video games are typically sold with a game manual, which provides a basic guide to playing the game. For example, the ‘galactic edition’ of Spore comes with a 100-page ‘Galactic Handbook’ which provides instructions on everything from installing the game to uploading creations to the Spore community. However, this handbook does not provide exhaustive information about how to play the game. Players who encounter unexpected obstacles that they are unable to overcome on their own can refer to player-generated ‘walkthroughs’. These usually take the form of written guides that describe in detail how to advance through the game.


Game manuals and walkthroughs tend to be densely packed, specialized texts, which players consult when they get stuck and need help to advance in the game. Without first experiencing the game, these texts can be very difficult to understand. Gee (2003) compares them to the specialist literature in domains like science and law, which also presuppose extensive background knowledge as well as a shared, specialist worldview. From the point of view of digital literacies, what is important here is that these resources are created not just by the game designers, but also by players. In Chapter 11 we will further discuss how, not just in gaming but in a wide variety of digital literacy practices, the role of the media consumer has shifted from passive recipient to active participant. In the context of gaming, this more active role is illustrated by two new literacy practices: modding and machinima. Modding or fan modification is the practice of modifying a game either by adding content (like a new level or new items) or by creating an entirely new game. Many video games, such as The Elder Scrolls series (Bethesda Softworks), provide fans with the tools to create these mods. In some cases, mods can lead to a new commercial release. For example, the popular game Counter-Strike (Valve) is a mod of the earlier game Half-Life (also by Valve). Creating an interesting mod requires not only a good understanding of the technological tools, but also an understanding of elements of representation in games, like plot and character as described above. It also requires an appreciation of the ludic qualities of games, those qualities that make the game fun to play. In their study of students designing games, Buckingham and Burn (2007) stress the importance of understanding games as rule-based systems that both challenge players and reward them when they follow the rules correctly.
They also point out that game designers create economies of resources, such as health, hunger, point scores, and so on, which players must utilize strategically in order to achieve their goals. Machinima is another emerging literacy practice in the context of gaming. The word is a combination of ‘machine’ and ‘cinema’ and refers to the use of a video game’s real-time 3-D animation engine in order to create a cinematic product. The creation of machinima is in many ways similar to the creation of fan fiction, discussed in Chapter 3. It draws on the characters and settings in a game but combines them in a way that is new and original, whether that be as a drama, comedy, music video, or whatever. As we suggested earlier, these new literacy practices often take place in the context of online affinity spaces where fans can interact and share their creations. These spaces provide them with a venue where they can create and share a range of texts: walkthroughs and guides on how to get the most out of the game; mods that extend the game with new levels; machinima that build on the game’s resources to create new stories. All of these practices allow fans to adopt new, more active roles in the creation and distribution of culture.


CASE STUDY: VIRTUAL GAME WORLDS

The idea of a virtual game world that you can interact with has been around for a long time. It goes back to interactive text-based adventure games, which peaked in popularity in the 1980s and 1990s. In these games, the player assumes the role of the central character and has to use problem-solving techniques to navigate through the world and progress in the story. This world is described using text and players type simple commands (for example ‘go west’, ‘open door’, ‘get key’) into their computer to interact with the world. A well-known example of this kind of game is Zork by Infocom, released in 1980. In this game, the player becomes an adventurer whose goal is to collect twenty treasures. The game opens with the following text:

West of House
You are standing in an open field west of a white house with a boarded front door. There is a small mailbox here.

If the player types ‘open mailbox’ more text appears: ‘Opening the small mailbox reveals a leaflet.’ By responding to these textual descriptions the player gradually uncovers clues about the location of the twenty treasures, progressing through the story in an interactive way. Of course, now adventure puzzle games like this have been replaced by games with more advanced graphical user interfaces. It has also become possible to interact with other players in the game world, forming alliances with them and working together to achieve the goals of the game. This kind of interaction first became possible with MUDs (Multi-User Dungeons, later known as Multi-User Domains) and MOOs (MUD Object Oriented). Again, these were text-based spaces that simulated a virtual world by using textual descriptions. Unlike the adventure games considered above, MUDs and MOOs were created for networked computers, so that it was possible for people on the network to interact and collaborate. People interacted in this way by using text.
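The command-and-response loop that drives a game like Zork can be sketched in a few lines of Python. This is a toy illustration, not Infocom’s actual parser; only the mailbox and leaflet come from the transcript above, and the rest is invented for the example:

```python
# A toy text-adventure loop in the spirit of Zork: the world is plain
# text, and the player acts on it with simple typed commands.

world = {
    "location": "West of House",
    "mailbox_open": False,
    "inventory": [],
}

def handle(command):
    """Map a typed command to a change in the game world."""
    if command == "open mailbox":
        world["mailbox_open"] = True
        return "Opening the small mailbox reveals a leaflet."
    if command == "get leaflet":
        if world["mailbox_open"]:
            world["inventory"].append("leaflet")
            return "Taken."
        return "The mailbox is closed."
    return "I don't understand that."

# The player 'reads' the world and responds:
print(handle("open mailbox"))   # Opening the small mailbox reveals a leaflet.
print(handle("get leaflet"))    # Taken.
```

Even this tiny sketch shows the ‘literacy’ demand of such games: the player must read the textual description of the world and respond with commands drawn from the limited vocabulary that the parser recognizes.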
The virtual game worlds of today’s massively multiplayer online games are far more sophisticated than those of the early adventure games, MUDs and MOOs, described above. In MMOGs, large numbers of players meet in online virtual worlds dedicated to a range of different kinds of games: from first-person shooter games like Counter-Strike where players team up and go on military missions, to fantasy role-playing games like World of Warcraft where groups of players enter dungeons and do battle with mythical monsters. The virtual worlds of these MMOGs can be very compelling, often drawing on
advanced 3-D graphics in order to simulate a life-like setting, which includes a graphical representation of the player, called an avatar. Players can control this avatar’s actions and can usually communicate with each other through a range of channels, including both text and voice options. The idea of the virtual game world has been taken one step further with the development of Second Life (SL) launched by Linden Lab in 2003. Although SL has declined in popularity over the years, with one estimate placing its active users at only 600,000 at the end of 2017 (Jamison, 2017), it is still a unique and interesting virtual world. Like MMOGs, SL has a persistent online virtual world where people (the ‘residents’ of SL) can meet and interact. But unlike the kind of MMOGs mentioned above, there are no clearly defined goals or problems to be solved, and players don’t ‘level up’, so talk of ‘winning’ or ‘losing’ doesn’t make much sense. Instead, SL provides its residents with a virtual environment which they themselves can design, using modelling tools to construct virtual islands, buildings, gardens, and objects like cars, motorbikes, clothes, and so on. The resulting virtual world often bears an uncanny resemblance to ‘real life’. In addition, the kind of creative activity that goes on in SL is not limited in the same way that it is in the real world. As a result, players can experience a range of virtual recreations and simulated environments, which might be difficult or impossible to access in real life. Some of the things that players can do and experience in SL include:

• Visit a historical place like Ancient Greece or Renaissance England;
• Visit major cities like London, Paris, and New York;
• Visit the White House Oval Office;
• Land on the moon or visit a space station;
• Step inside a giant Dell Computer;
• Enter the human body and perform surgery.

Because of the ability of the virtual world to create compelling simulations like those listed above, it has attracted the attention of educators who want to provide immersive experiences for their students. In one instance, Peter Yellowlees, a Professor of Psychiatry at the University of California, Davis, created an environment that simulated the hallucinations suffered by a schizophrenic. By visiting SL, Yellowlees’ students could experience first-hand the kind of disorientation that their patients feel (Virtual online worlds…, 2006). SL also allows residents to adopt new identities, new ways of being in the world. One of the first things that you do is customize your avatar, and, unlike in real life, you can be as fat or thin as you like.
Whether to create an avatar that resembles your real life self is completely up to you, though it’s interesting that quite a few people choose to do just that. Other people deliberately create an avatar that is at odds with their real-life selves in order to project a different kind of person onto the virtual world. One striking example of this is the case of the physically disabled, who can adopt a ‘normal’ body in SL, and walk, fly, and teleport around SL. In their interactions with other residents, these people learn what it is like to be treated ‘normally’ (Lessig, 2008). Of course, we are not here saying that SL solves the problem of social stigma for disabled people. It’s important to recognize that many disabled people in SL do in fact choose to have wheelchairs. From their perspective, ‘able’ people should learn to treat them ‘normally’ even with their disability (Jones, 2011b). In the virtual world of SL the line between real and virtual has become blurred in a way that challenges us to rethink notions about the real and virtual world, and what it means to ‘play’ a game. Although the world of SL is a virtual one, people still have authentic experiences, like meeting others and forming lasting relationships that carry over into what we used to think of as the ‘real’ world. The line between the ‘real’ and the ‘virtual’ has been further blurred by recent developments in virtual reality (VR) and augmented reality (AR) (see below). In the case of VR, where players wear headsets and sometimes sensors on their bodies, the game world becomes even more life-like and immersive. In AR, the virtual world is overlaid on the real world in a way that increasingly erases the distinction between them. At one point, people thought that virtual worlds like SL were the future of interaction on the internet.
However, with the widespread adoption of social media, what we have actually seen develop are real and virtual existences that are even more entangled than the notion of a virtual world in a space like SL allows (see Chapter 10).

Games and our material and social worlds

In Chapter 6 we talked about the way that digital technologies allow us to ‘layer’ the physical and virtual to experience space and time in different ways. We saw how communication could draw on our physical environment, our physical bodies, other material objects around us, and even our physical location in space. In this section, we will examine the way that material and social worlds can also shape the experiences that people have in games. First, we will consider gameplay as an embodied activity, especially how people experience the kinds of virtual game worlds described in
the case study. We will see how more recent technologies like virtual reality (VR) and augmented reality (AR) increasingly mash up digital and physical space (Rigby, 2014). Second, we will focus on the dimension of time and how ‘real’ clock-time can be incorporated into game mechanics to play a meaningful role, not only in terms of the gameplay itself but also as a means for staging rewards in such games. Finally, we will address the way that digital games, and games more generally, interact with the material world by reproducing real-life social inequalities, coding in privileges for gamers who can afford to pay for subscriptions.

Space

Whenever you are playing a digital game, you are inhabiting a range of physical and digital spaces (see Chapter 6). The kinds of complex games that we have been focusing on in this chapter usually have a virtual world that you interact with and you usually interact with it by using an ‘avatar’. In a first-person game, your avatar is minimal: you usually only see the forearms, hands, and any tool (like a weapon) that your character is holding. In a third-person game, you see the entire body. This avatar is what mediates the experience of the virtual world: if you want to act on the virtual world, you can only do so through the avatar. For example, if you want to pick something up, you have to make the avatar pick it up. In this sense, the avatar is like an artificial limb or ‘prosthetic’. But your presence is also mediated by the way that you see the virtual world, which depends on the virtual camera that provides a changing view of the world as you move around it. In some games, you can manipulate this camera so that you can ‘look around’. Some game studies research refers to this way of experiencing the virtual world as ‘prosthetic telepresence’ (Klevjer, 2012).
According to this idea, we experience gameplay as an extension of our bodies and of the way that we visually perceive the world, relocating our spatial self-awareness into screen space. One interesting question to consider is whether people experience this avatar as part of themselves or as something (or someone) else. Hafner (2015) investigated this question, examining the gameplay experience of two children playing Moshi Monsters (Mind Candy), a virtual world for children. In Moshi Monsters, children ‘adopt’ a monster, which lives in a room that they can decorate in various ways; they can also explore the world beyond the room, interact in a limited way with other player-controlled monsters, and go on adventures. When reviewing a recording of their gameplay, the children were asked to narrate their experience in response to the prompt ‘what is going on here?’ In answering this neutral question, the children positioned themselves in different ways vis-à-vis the monster avatar. They were observed to position themselves either as the monster or as the monster
owner. In the first case, they more directly associated themselves with their avatar than in the second. Your experience of the virtual world is also related to how you control your avatar. Sometimes this is through your fingers as you manipulate a game controller. But it can also be through other parts of your body: Kinect by Microsoft uses a skeletal tracking technology that identifies the player’s body motion as ‘input’ to play different kinds of dancing, fighting, and action adventure games. If you are using a Kinect your embodied action is directly translated into action on the screen, without the need for a mediating game controller. Virtual reality games take this a step further by responding to the way that you direct your gaze with head movements. In virtual reality, players put on a headset and can control which part of the game world they see by literally looking around. This tends to exaggerate the feeling that players are immersed in the virtual world. If you have access to a full VR rig that can monitor movement in your whole body, you will also be able to control your avatar with physical body movement. Virtual reality has been defined as ‘a unique combination of technologies (sensory interaction, control, and location) which is able to create a feeling of “being there”’ (van Gisbergen, Sensagir & Relouw, 2020: 401). Finally, augmented reality games work by using the camera on mobile devices to overlay your physical environment with virtual elements. As Juho Hamari and his colleagues (2019: 805) point out, ‘much of the design is focused on the game space and everyday space blending into each other in a hybrid reality that takes cues from both, but is restricted to neither.’ For example, in the popular location-based game Pokémon Go (Niantic) the player has to explore the physical environment in search of mythical monsters called ‘Pokémon’. Viewing the world through their mobile device’s camera, the player can locate and capture the Pokémon.
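The core location-based mechanic just described, making a virtual creature capturable only when the player’s physical body is near its spawn point, can be sketched as follows. This is a simplified illustration, not Niantic’s implementation; the coordinates and the 40-metre threshold are invented for the example:

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates
    (haversine formula)."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_range(player, spawn, threshold_m=40):
    """A virtual creature becomes capturable only when the player's
    physical body has moved close enough to its spawn point."""
    return distance_m(*player, *spawn) <= threshold_m

player = (51.5076, -0.1280)  # hypothetical player position
spawn = (51.5074, -0.1278)   # hypothetical spawn point roughly 25 m away

print(in_range(player, spawn))           # True
print(in_range(player, (51.51, -0.13)))  # False: a few hundred metres away
```

The design choice worth noticing is that physical movement, measured via GPS, is itself the game controller: the ‘hybrid reality’ Hamari describes is produced by feeding the player’s real position into the game’s rules.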
Jonne Arjoranta and his colleagues (2020) have shown that Pokémon Go can encourage a range of positive behavioural changes including: self-improvement related behaviours like increased physical activity; routine related behaviours like increased exploration; and social related behaviours like strengthening social bonds, as the majority of players were found to play with friends and family. At the same time, there are also stories of players having accidents or straying into restricted areas due to being distracted by the game.

Time

Another way the material world sometimes intrudes into game worlds is through the dimension of time. Our experience of the world is not only shaped by our physical presence in it but also by the way that we experience time and timescales. When it comes to digital games (as well as other media like books and films), time is frequently, perhaps even usually, experienced
on a different ‘scale’ to time in the ‘real world’. An adventure game storyline might span months or years but be playable in days. However, many games now make use of ‘clock time’ to stage plot developments or rewards, thereby altering our gameplay experience, blending it more with our real routines. For example, in the mobile multi-player strategy game Clash Royale (Supercell) players use customized decks of cards to battle each other, with the winner receiving a treasure chest as a reward. This chest is initially locked for a fixed period of clock time, which counts down until it opens. Of course, the main reason that developers use this strategy is so that they can offer the player a so-called ‘micro transaction’—allowing them to unlock the reward sooner if they pay real money.

Social structure

In the 1992 science fiction novel Snow Crash, the author Neal Stephenson imagines a virtual reality-based internet called the Metaverse, where people interact with one another through avatars. Those who can afford it can buy custom, life-like avatars, while those who cannot have to make do with ‘off-the-shelf’ avatars with a limited range of facial expressions. Moreover, to be rendered in full colour, people have to access the Metaverse through a good connection that they have paid for. Here is how Stephenson describes some of the people in a virtual bar scene:

A liberal sprinkling of black-and-white people—persons who are accessing the Metaverse through cheap public terminals, and who are rendered in jerky, grainy black and white. (49)

Digital games are also capable of reproducing this kind of socio-economic inequality, as different kinds of barriers to access play a role. These include technological barriers like access to the necessary hardware, software, electricity, and high-speed internet connections that are needed to run massively multiplayer online games.
But even if one has access to these ‘basics’, other barriers can spring up for people from lower socio-economic groupings. For example, in online virtual worlds for children, users are encouraged to sign up and pay a monthly fee as a ‘member’, granting privileges like access to a better wardrobe for their avatar. In her study of children playing in such worlds, Marsh (2011) points out that children in the virtual world are easily able to recognize membership status by the visual representation of the avatar. Marsh found that the economic, social, and cultural capital that children possessed often mirrored the real-world circumstances of the users, thereby perpetuating offline social structures in the digital environment. In this way, aspects of the social world are also reflected in the world of the game.


Games and identity

One of the most important aspects of video games is the new opportunities they provide for the construction of identity. In the virtual worlds of video games it is not necessary (sometimes not possible) to ‘be yourself’. Instead, you choose a virtual identity, which as we have seen, can be more or less similar to your real identity. The degree to which you can customize this virtual identity varies from game to game. In SL a great deal of customization is possible, but in a game like Super Mario Galaxy (Nintendo), there is only one character you can play (Mario). In his 2003 book What Video Games Have to Teach Us About Learning and Literacy, James Paul Gee describes three different identities that come into play in video games. According to Gee, the real identity (of the player in the real world) and the virtual identity (of the character in the virtual world) are mediated by a ‘projective identity’, the interface between real and virtual. In this identity, players project their own values, hopes, and aspirations onto the virtual character, shaping that character through the decisions that they make and the actions that they perform. For example, if you, the player, have certain beliefs about ‘fighting fair’, then you might try to make sure that the actions of your virtual character conform to those beliefs. You adopt a projective identity that places limitations on the way that your virtual character can act. Gee argues that such a projective identity opens up a space to critically reflect on the values that you are required to enact as you progress through the game. In ‘serious games’ this projective identity may enter into conflict with the identity that you need to adopt to ‘win’ the game. For example, in Sweatshop (Littleloud), you play a factory manager whose goal is to produce as many clothing items as possible, using raw materials that appear in a never-ending stream on a conveyor belt manned by workers.
The game forces players to make a choice between the safety of workers and the overall productivity of the factory, often evoking guilt in players as a result. Gee goes on to describe how video games can serve to develop an understanding of cultural models, by immersing players in a particular cultural worldview. He points out that video games make implicit cultural assumptions, which players are unable to influence. For example, in American first-person shooter games, the cultural model puts the Americans in the role of the ‘good guys’ who face off against ‘bad guys’ like the Germans, the Russians, and Islamic ‘extremists’. Gee points out that a game which pits Palestinian militants against Israeli military forces and civilian settlers is operating according to a completely different cultural model. In this model, the militants are viewed as justified in attacking civilian settlers because they are seen as enemy advance forces. This is not to say that the ideology underlying and implicitly legitimated by such
a game should be accepted, but rather that video games (like literature) provide a means to experience, understand, and critically evaluate alternative cultural models. Video games that reverse commonly held cultural models in the manner described above are likely to be highly controversial. In part, this is because of the way that video games involve the player so actively in the development of the story. One example is the release of the Medal of Honour series of first-person shooter games set in modern-day Afghanistan (Electronic Arts). In its multiplayer version, this release initially included the possibility for players to play the game either as NATO forces or as Taliban militants. After a storm of criticism, the game designers reacted by removing the Taliban option, renaming the opposing team ‘Opposing Force’. However, the game remained controversial, with the US Army and Air Force Exchange Service refusing to sell the title on its bases. The concern with video games like Medal of Honour is that the immersive experience of the game will lead players to identify strongly with the political objectives of the groups that they role-play in the game. Commenting on the Medal of Honour controversy, the Canadian Minister of National Defence, Peter MacKay, gave voice to this concern, saying: ‘I find it wrong to have anyone, children in particular, playing the role of the Taliban’ (Wylie, 2010). Nevertheless, Gee argues that in spite of such concerns, it is important to engage with such games, even if we do not agree with the cultural values that they represent. According to Gee, such games, and the identities and cultural values that they promote, should be subjected to a critical evaluation, rather than dismissed out of hand. Another aspect of identity in video games has to do with the kind of relationships that can form in virtual spaces.
Because people in these spaces interact with each other’s virtual identities rather than real identities, relationships that we might find unlikely in the real world can form. For example, a teenage student might regularly play a game like World of Warcraft with a group that includes young professionals in their 20s and 30s (see Chapter 8). In the make-believe world of the role-playing game, the ‘physical’ characteristics of the player’s avatar (such as race, gender, and age) may not correspond to the real-world characteristics of the player, and in many cases these ‘real world’ characteristics may not be relevant. Instead, what counts is the player’s ability to perform their virtual identity (for example as a fighter, or thief) and contribute expertly to the in-game fortunes of the group. In this way MMOGs provide opportunities for individuals to take on roles that would not be possible in the real world. For example, with sufficient expertise, the hypothetical teenager that we mentioned above could perform the role of teacher and mentor to other players who, in ‘real life’, might be much older.

Games, learning, and literacy 201

Games and learning

One way of looking at video games is as learning experiences. If this is true, then what exactly is it that people are learning? Marc Prensky, author of Digital Game-Based Learning, points out that learning occurs at two different levels. He says:

On the surface, game players learn to do things – to fly airplanes, to drive fast cars, to be theme park operators, war fighters, civilization builders, and veterinarians. But on deeper levels they learn infinitely more: to take in information from many sources and make decisions quickly; to deduce a game’s rules from playing rather than by being told; to create strategies for overcoming obstacles; to understand complex systems through experimentation. And, increasingly, they learn to collaborate with others. (Prensky, 2003: 21–2)

As we have already argued, video games have become progressively more sophisticated over time. As a result, many games now give players the opportunity to practice complex problem-solving skills. In addition, with MMOGs players can also learn important social skills by interacting with the large number of other players in the gaming environment. As we saw above when we were considering the role of arguments and identity in games, there are also so-called ‘serious games’ that go beyond skills and try to change people’s attitudes and behaviour through the powerful experiences that they provide. In some cases, what you do in the game world might not seem particularly important or useful, and this leads some people to suggest that video games are a waste of time. However, most people would agree that the kind of collaborative problem-solving skills, innovation, and creativity fostered in games are just the kind of skills that twenty-first-century citizens are increasingly going to need (Jones & Hafner, 2012). As well as the question of what you learn, there is also the question of how you learn.
As Gee (2003) points out, the best (most popular) games are usually difficult and complicated to learn, so if video game designers want to make money, they have to somehow make learning how to play their games fun. Gee argues that this commercial necessity has turned video game designers into expert teachers and motivators. He goes on to suggest that we can learn a great deal by understanding the principles of learning that are used in good video games. In total, Gee (2003) identifies 36 such principles of learning; here, we will try to distil some of the main ideas.

202

Digital practices

First of all, learning in games is always experiential and active. You are placed in a world that you can experience first-hand through seeing, hearing, and feeling (not just thinking). Moreover, as already discussed, players can exert an influence on that world through their actions, as well as take on identities that they are willing and able to invest in emotionally. The result is that the player feels like an active agent in the game, playing an active role in co-creating the game world.

Secondly, the learning itself is carefully staged, so that players are always operating around the edge of their level of competence. As a result, the activities that players engage in are challenging but do-able. When players need to learn new skills and new routines, these are introduced gently and in a non-threatening way. For example, many games have a kind of sandbox or tutorial area where players can practice basic skills (running, jumping, attacking, defending) without consequences (like physical injury) if they make mistakes.

Gee contrasts the learning in video games with the learning in formal school-based contexts. He points out that learning in video games is always situated in experience of the game world: information is provided ‘just-in-time’ when it is needed to solve a problem in the game. In contrast, much of the learning that children do in schools is abstract and decontextualized. Gee argues that school contexts, with their emphasis on standardized skills, too often adopt a ‘just-in-case’ model of learning. In this model, the curriculum is covered without any strong connection to the experiences of the students, ‘just-in-case’ they need it at a later point in their life.

Game designer Jane McGonigal takes Gee’s argument a step further in her 2011 book, Reality is Broken.
She observes that increasing numbers of people are spending an increasing amount of time playing digital video games and concludes that ‘in today’s society, computer and video games are fulfilling genuine human needs that the real world is currently unable to satisfy’ (2011: 4). She foresees a possible future where people retreat from reality into the relative comfort of the virtual worlds of games. However, she goes on to suggest that the power of games to engage such vast numbers of people is not something that should be wasted on escapist entertainment. Accordingly, she challenges game designers to come up with games that are able to ‘fix’ reality and engage people in solving ‘real-world’ issues and problems. One example of such a game is Evoke (World Bank Institute). The trailer for this alternate reality game claims that it will teach players ‘collaboration, creativity, local insight, courage, entrepreneurship, knowledge networks, resourcefulness, spark, sustainability, vision.’ Players are ‘agents’ who are tasked with tackling social issues, with ten missions over a ten-week period, each mission focused on a potential future crisis related to issues like poverty, hunger, disaster relief, and so on. Players respond to each crisis with a creative solution including a project which they try out in the real world. They report on their projects each week by sharing blog posts, videos, and photos. The game is designed so that by the end players will develop an idea for a larger project or business, with top players awarded online mentorships, travel scholarships, or seed funding for their projects.

ACTIVITY: BOON OR BANE?

Consider the quote from Michael Highland’s (2006) short film As Real as Your Life and discuss the questions.

But maybe brainwashing isn’t always bad. Imagine a game that teaches us to respect each other, or helps us to understand the problems we are all facing in the real world. There is a potential to do good as well… It is critical, as these virtual worlds continue to mirror the real world that we live in, that game developers realize that they have tremendous responsibilities before them.

1. Do you know/can you imagine any games designed to teach players to respect each other, or help them to understand the problems we face in the real world?
2. What does it/would it look like?
3. How do you/would you play?

Conclusion

In this chapter we have considered the potential of digital video games to provide a space for new literacy practices. Video games open up new ways of making meaning by drawing on the affordances of digital media described in earlier chapters of this book, which include hypertext, interactivity, and multimediality. In particular, games provide for a new kind of embodied storytelling, with the player positioned as active co-creator rather than passive recipient of information. With the player in this active role, some games can employ ‘procedural rhetoric’ based on the choices that players make and their consequences, in order to persuade people to take certain kinds of actions. In addition, the literacy practices associated with video games extend beyond the game itself into online affinity spaces, where fans interact, creating and sharing a range of texts: walkthroughs, mods, machinima. We examined the way that games increasingly mash up digital and physical spaces and the various ways in which our experience of games blends into our material and social worlds. Finally, we noted that the interactions that players experience both within and outside of games provide them with various opportunities to adopt alternative identities, often very different from the ones that they adopt in other parts of their lives. In the next chapter we will further explore this aspect of identity in digital media by examining how people present themselves in online communities and on social networking platforms. Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 10

Social (and ‘anti-social’) media

In its early years, the internet was not so different from old media networks: it connected people and information together, but most users had little control over those connections beyond being able to navigate through them. This changed with the rise of social networking platforms like Facebook, Twitter, Instagram, and Snapchat. Now ordinary users of the internet are involved in creating and maintaining connections within the network. In Chapter 3 we noted that one of the main affordances of the ‘read–write’ web is that it allows users to create content. Perhaps a more profound affordance is that it has given internet users a role in creating and maintaining the networks through which this content is distributed.

But social media sites are not just about maintaining social relationships. For many, they have become their primary window on what is going on in the world, their main source of news, and their main avenue for participation in politics and public life. So the ways social networking affects our social relationships have come to have profound consequences for our political lives as well. Not all of these consequences have been positive. While social media have undeniably brought people closer together, helped them to build frameworks for mutual support, and contributed to positive political change, they have also had the effect of driving people apart, and have facilitated a range of what might be called ‘anti-social’ behaviours.

In this chapter we will explore the literacy practices that have developed around social networking platforms. First, we will consider how online social networking exploits the affordances of digital technologies and contrast it to other forms of social organization, both offline and online. We will then go on to explore how people use the expressive equipment provided by social networking sites to manage their social identities and their social relationships.
Finally, we will explore how the affordances of social media can sometimes create conditions in which otherwise congenial users adopt ‘anti-social’ patterns of language use and interaction.


We are not files

The internet scholars danah boyd and Nicole Ellison (2008: 211) define a ‘social network site’ as a web-based service that allows individuals to 1) construct a public or semi-public profile within a bounded system, 2) articulate a list of other users with whom they share a connection, and 3) view and traverse their list of connections and those made by others within the system. The key aspect of such sites is not just that they allow users to create profiles and share things like texts, pictures, and videos, but that they allow them to do so within bounded systems of social connections which they define themselves. Some platforms, such as Facebook, encourage more durable social connections, whereas others, like TikTok, encourage more ephemeral connections based on momentary objects of mutual focus such as memes or competitions. The capacity for social networking sites to facilitate the formation and maintenance of complex links among people is akin to the internet’s capacity to facilitate the formation of complex links among different pieces of information which we discussed in Chapter 2. Some scholars, in fact, see the development of social networking platforms as a natural outcome of the web’s inherent capacity for connecting things, ‘an evolution from the linking of information to the linking of people’ (Warschauer & Grimes, 2007: 2). Of course, the idea of digitally connecting people is not new. The internet actually started primarily as a way for people to connect with others. As early as 1979, Tom Truscott and Jim Ellis from Duke University conceived of a worldwide discussion system called Usenet that people could use to talk about things with others who were interested in the same topic. For example, users could go to rec.pets.dogs to talk about dogs, or misc.invest.real-estate to talk about investing in property. These were the earliest ‘online communities’.
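boyd and Ellison’s three defining features of a social network site (a profile, an articulated list of connections, and the ability to traverse those lists) can be sketched as a minimal data model. The following Python sketch is purely illustrative: the class, names, and methods are invented for this discussion and do not represent any platform’s actual implementation.

```python
# Minimal sketch of boyd and Ellison's (2008) three features of a
# social network site: (1) a profile, (2) an articulated connection
# list, (3) the ability to traverse connections. Illustrative only.

class User:
    def __init__(self, name, bio=""):
        self.name = name          # (1) a public or semi-public profile
        self.bio = bio
        self.connections = []     # (2) an articulated list of other users

    def connect(self, other):
        # Connections are mutual, as on most 'friend'-based platforms.
        if other not in self.connections:
            self.connections.append(other)
            other.connections.append(self)

    def friends_of_friends(self):
        # (3) traverse the lists of connections made by others
        found = set()
        for friend in self.connections:
            for fof in friend.connections:
                if fof is not self and fof not in self.connections:
                    found.add(fof.name)
        return found

alice, bob, carol = User("alice"), User("bob"), User("carol")
alice.connect(bob)
bob.connect(carol)
print(alice.friends_of_friends())  # carol is reachable via bob
```

Note that the traversal step is what distinguishes a social network site from a simple directory of profiles: the connections are themselves navigable data.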
In the early 90s, other kinds of places for people to meet and form online communities became popular, such as bulletin board systems (BBS), MUDs and MOOs (see Chapter 9), and chat rooms. Coming at a time, at least in the United States and Europe, when involvement in ‘real-life’ communities and social organizations seemed to be on the decline (see Putnam, 1995), many people extolled the virtues of these online communities in which strangers thousands of miles away could come together around common interests and goals (Rheingold, 1993). Most of these groups, however, were formed based on a system of social organization that was not much different from the file system for organizing information we discussed in Chapter 2: one which facilitated the grouping of people with similar interests into categories, but worked against the


formation of connections between categories. People could go to Usenet groups, for example, and hang out either with the dog lovers or the real-estate investors, but dog lovers and real-estate investors rarely hung out together. Topic-based communities still persist on sites such as Reddit, and even social networking platforms like Facebook host ‘pages’ where people with a particular interest can congregate. At the same time, the rhetoric around online communities in that era was not always optimistic. There were those who worried that online communities lacked the strong social ties and opportunities for joint action that characterized ‘real-world’ communities, and, even worse, that by participating in them, people were becoming even more alienated from their ‘real-world’ friends, colleagues, and family members. Others pointed out how the anonymity of such groups sometimes encouraged abusive behaviour like trolling and flaming, which is still prevalent on anonymous message boards such as 4chan. But then something began to change in the mid-1990s. In 1996, a company called Mirabilis released a free instant messaging client called ICQ, later followed by similar clients from AOL and MSN. These tools gave people the ability to talk in real time to all sorts of people, including people from their offline social networks like friends, relatives, and colleagues whom they had rarely interacted with online before. A few years later the first social networking sites like Classmates.com and SixDegrees.com began to appear, along with a number of online journaling services like Open Diary and LiveJournal. These social networking sites allowed people to contact old friends and acquaintances, and the journaling services allowed users to post their thoughts and feelings online, and to comment on the thoughts and feelings of others. These developments brought about a number of changes in the way people conceived of the social organization of online groups.
First, people’s online interactions began to focus more on people that they already knew offline than on anonymous ‘net friends’. In fact, the whole idea of anonymity fell out of favour. While early discussions about the internet constructed anonymity as one of its main affordances, allowing people to transcend their physical bodies and experiment with different identities (Turkle, 1995) and giving heretofore marginalized people and groups (sexual minorities, political dissidents) the opportunity to speak out without fear of reprisal, with the rise of social networking sites, people began to see the internet more as a tool for presenting and promoting their ‘real selves’. Some social networking platforms, like Facebook, in fact, require people to use ‘the name they go by in everyday life’ so that others know who they’re connecting with (Facebook, n.d.). Second, people were now able to organize their online relationships not just according to their interests but according to connections, and to expand their social networks and those of their friends by introducing friends to one


another. The main difference between the ‘Usenet’ way of organizing people and the social media way is that previously people were organized into interest groups like documents in folders, whereas now people are linked to other people in complex networks based not on interests but on relationships. Just as Web 2.0 tools now allow us to organize information based on its relationship with other information rather than based on fixed topics and rigid hierarchies, they also allow us to organize our connections online based on relationships rather than on fixed interests or roles. In other words, we are no longer like documents that are put into different folders. We are like nodes in a network of relationships that allows us to be connected to a whole lot of people whom before we might never have shared a ‘folder’ with. This way of organizing social relationships, of course, was not invented by social networking sites. Indeed, one of the reasons for their popularity is that social networking sites in many ways mirror the ways relationships are organized in offline social networks much more than did the more rigid system of online communities and interest groups.
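The contrast drawn above, between filing people into interest ‘folders’ and linking them as nodes in a network of relationships, can be made concrete with a toy sketch. All the names and groups below are invented for illustration:

```python
# Toy contrast between the 'Usenet' model (people grouped into interest
# folders) and the social network model (people as nodes linked by
# relationships). Purely illustrative.

# Folder model: membership in a category; no links between categories.
folders = {
    "rec.pets.dogs": {"ana", "ben"},
    "misc.invest.real-estate": {"carl", "dina"},
}

# Network model: each person is a node; edges are relationships.
network = {
    "ana": {"ben", "carl"},   # ana knows a dog lover AND an investor
    "ben": {"ana"},
    "carl": {"ana", "dina"},
    "dina": {"carl"},
}

# In the folder model, ana and carl never share a space:
shared = [f for f, members in folders.items() if {"ana", "carl"} <= members]
print(shared)                      # [] -- no common folder

# In the network model they are directly connected:
print("carl" in network["ana"])    # True
```

The point of the sketch is structural: in the folder model a connection between a dog lover and an investor simply cannot be represented, whereas in the graph model it is just another edge.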

Social networks and social ties

Different online social networks have different ways to refer to the relationships between the people who are part of the network. They might be referred to as ‘friends’, or ‘followers’, or ‘contacts’. What such terms mask is the fact that people have all sorts of different kinds of relationships with the people in their social networks. They include your best friends as well as people you know only indirectly through your friends, including some you hardly know at all. All of these people, however, are important in their own way. In fact, part of what makes social networks (whether they be online or offline) so useful is the variety of connections they make possible. In the event that you need help with something outside of the expertise of your close circle of friends and relatives, chances are there will be someone in this extended network who will be able to help you, eliminating the necessity to rely on complete strangers. Moreover, you are likely to trust these people more because they are ‘friends of friends’. This, of course, also has its problems; for example, we are also more likely to believe things that are shared by ‘friends’ on social media, making these platforms particularly efficient environments for the spread of ‘fake news’ (see Chapter 7). Social networks are characterized by a combination of what are known as strong ties and weak ties. When people with whom we have weak ties become ‘helpful’ to us in some way, we refer to the ties we have with them as strong weak ties. The idea of ‘strong weak ties’ comes from an article called ‘The Strength of Weak Ties’ published in 1973 by sociologist Mark S. Granovetter, who was interested in why some people had an easier time finding a job than others. The answer he came up with was that, in the case of most professions, those


who had an easy time were those who had acquaintances who could introduce them to people whom they didn’t know in groups different from their own. In other words, the weak ties between such people and their acquaintances became crucial bridges to other communities. People who had only close friends and few acquaintances, on the other hand, were deprived of information and help from groups other than their own, which not only put them at a disadvantage when it came to job hunting but also tended to insulate them from new ideas and make them provincial and less flexible in their thinking. Of course, what Granovetter’s work, done well before the invention of the internet, shows is that ‘strong weak ties’ have always been a feature of social life. Online social networks, however, have a way of facilitating the formation and maintenance of ‘strong weak ties’, first because they make ‘weak ties’ more explicit and visible to us, and second because these ties attain a kind of durability they didn’t have before. Before, when people swapped phone numbers or email addresses at parties, this often did not lead to the formation of any enduring connection. Now, when people exchange social media usernames or connect on platforms such as Snapchat using QR codes, the chances that they will establish a more durable connection are much greater, even if they never again meet face to face. It is much easier to communicate with someone who is already part of your online social network than it is to call someone you met at a party on the telephone. Furthermore, it is easier to make these weak ties stronger because the transaction costs (see Chapter 5) of sharing information (such as photos, links, and ‘status updates’), even with people at the periphery of our social network, are relatively low. This strengthening of weak ties has the result of connecting people, at least in indirect ways, to people and groups which they previously may not have had much contact with.
The value of the different people in our social networks has as much to do with the functions that they serve in the network as it does with the strength of the relationships they have with us personally. According to author Malcolm Gladwell (2000), the two most important functions that people in social networks fulfil are those of connectors and mavens. ‘Mavens’ are people in possession of or with access to things that are beneficial to other people, such as goods, services, knowledge, information, or emotional ‘commodities’ like friendship, loyalty, and a sense of humour. Connectors are people who act as bridges, facilitating the flow of information, goods, and services between different groups or clusters of people. Of course, some people play both roles or play different roles at different times. The sociologist Robert Putnam (1995) calls the two kinds of relationships in social networks bonding and bridging. Bonding is what occurs when you interact with your close friends. Bridging occurs when people with contacts in more than one cluster act as links between the people in those clusters (see Figure 10.1).



Figure 10.1 Bonding and bridging.
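Putnam’s distinction can be sketched in a few lines of code: given each person’s cluster, a tie within a cluster is ‘bonding’, while a tie across clusters is ‘bridging’. The data below are hypothetical and purely illustrative.

```python
# Sketch of Putnam's bonding vs bridging distinction: an edge inside
# one cluster is a 'bonding' tie; an edge between clusters is a
# 'bridging' tie. Names and cluster labels are invented.

clusters = {
    "ana": "family", "ben": "family",
    "carl": "work", "dina": "work",
}

edges = [("ana", "ben"), ("carl", "dina"), ("ana", "carl")]

def tie_type(a, b):
    """Classify a tie by comparing the two people's cluster labels."""
    return "bonding" if clusters[a] == clusters[b] else "bridging"

for a, b in edges:
    print(a, b, tie_type(a, b))
# ana-ben and carl-dina are bonding ties within their clusters;
# ana-carl is the bridging tie that links the two clusters.
```

In Granovetter’s terms, the single bridging edge here is the ‘weak tie’ that gives each cluster indirect access to the other’s information and resources.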

Of course, different social media platforms promote different kinds of relationships between the various people in the networks through the kinds of opportunities they provide for connecting and sharing content. Some networks promote more durable connections or take steps to make visible the strength of users’ connections through algorithmically pushing stronger ties to the top of users’ news feeds, while others, like TikTok, promote weaker connections built around various trends and ‘challenges’. There are also certain people in networks who play special roles by virtue of the degree of fame or influence they have acquired either outside of the network or through interacting with it (such as microcelebrities and influencers, see below). Other people in the network might take up special roles in the context of political advocacy or social media marketing campaigns, acting as ‘activists’, or ‘organizers’, or ‘brand ambassadors’. But the most important thing about social networks is that their value is never solely a matter of the fame or influence of individual members, but rather the quality of their connections, including both the number of connections and the range of different kinds of connections, especially ‘bridging’ connections that link up groups of people which, otherwise, may not have opportunities to interact.
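The passage above mentions platforms algorithmically pushing stronger ties to the top of users’ news feeds. A minimal sketch of that idea follows; the tie-strength scores and the simple sort are invented for illustration and bear no relation to any platform’s actual (far more complex) ranking algorithm.

```python
# Minimal sketch of ranking a feed by tie strength. The weights are
# invented for illustration; real platforms combine far richer signals.

posts = [
    {"author": "acquaintance", "text": "new job!"},
    {"author": "best_friend", "text": "dinner pics"},
    {"author": "stranger", "text": "sponsored content"},
]

# Hypothetical tie-strength scores, e.g. derived from interaction history.
tie_strength = {"best_friend": 0.9, "acquaintance": 0.4, "stranger": 0.05}

# Sort posts so that content from stronger ties surfaces first.
feed = sorted(posts, key=lambda p: tie_strength[p["author"]], reverse=True)
print([p["author"] for p in feed])
# strongest ties first: best_friend, acquaintance, stranger
```

Even this toy version makes the editorial consequence visible: whatever signal the platform chooses to treat as ‘tie strength’ determines whose voices a user sees first.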


One of the main differences between ‘networks’ and ‘communities’ is the different kinds of values that they are based on and that they end up perpetuating. In his book The Internet Galaxy (2002), Manuel Castells argues that we have experienced a shift from societies that were mainly organized around communities, based on shared goals and common interests, to a ‘network society’, in which people are conceived as individuals who align themselves with other people based on their own self-interests. There is usually less of a sense of the collective in social networks; rather, they arise from the choices and strategies of individuals, who may choose to create connections with others not based on affinity but on the perceived usefulness of the connection. Castells calls this new pattern of sociability based on individuals rather than groups networked individualism. While the rise of social media platforms has certainly helped to promote and sustain networked individualism, Castells believes that it has political and economic roots that predate digital media. He writes (2002: 130–1), ‘it is not the Internet that creates a pattern of networked individualism, but the development of the Internet provides an appropriate material support for the diffusion of networked individualism as the dominant form of sociability.’

ACTIVITY: MAPPING YOUR SOCIAL NETWORK

Choose an online social network that you belong to and look at your list of ‘friends’ or ‘followers’.

1. Classify them based on those with whom you have ‘strong ties’ (good friends and family members), ‘moderately strong ties’ (friends and close acquaintances), ‘weak ties’ (distant acquaintances), and ‘very weak ties’ (people you hardly know or don’t know at all);
2. Notice with which of your connections you share a large number of mutual connections (or, as Facebook calls them, ‘mutual friends’). Is there any relationship between the number of mutual connections and the strength of the bond you have with a person? What is the difference between a strong tie with which you have lots of mutual connections and a weak tie with which you have lots of mutual connections? Do they function differently for you in your social network?
3. Try to divide your connections into different ‘clusters’ (for example, family members and family-related acquaintances, school friends and acquaintances, work-based acquaintances). Are there any connections that can be put into more than one category?


4. What kind of functions do these kinds of people have in your social network? Are there certain people in your social network that you consider ‘mavens’? What kinds of goods or benefits do they provide you and other members of your social networks with and what kind of influence do they exert?
5. Can you identify any people in your social network that you would consider ‘celebrities’, ‘microcelebrities’, or influencers? What role do they play in relation to other people in the network?
6. Can you divide the people in your social network into ‘friends’, ‘followers’, and ‘fans’? On what basis are you making this distinction?
7. Based on this analysis, would you consider deleting any people from your social network? Why or why not?

The presentation of self on social networking platforms

Central to the idea of networked individualism is the importance of the individual ‘self’ as an autonomous and unique ‘node’ in the network. But people’s successful performance of ‘uniqueness’ also depends on the degree to which other people in the network are willing to validate this uniqueness (by, for example, following them, or ‘liking’ the content that they have posted). Social media sites both provide the means for individuals to display their ‘unique selves’, and explicit metrics (such as number of ‘likes’ or ‘shares’) to indicate (both to themselves and others) how successful this display is. The underlying driver for much of the communication that takes place on social media platforms is the individual quest for visibility (see Chapter 12), and, as media scholar Taina Bucher (2012) notes, a corresponding ‘threat of invisibility’ in the context of an attention economy (see Chapter 2) in which everyone else in the network is also seeking visibility. As we said above, while the ethos of the early internet celebrated anonymity, with the rise of social networking sites, it began to be seen first and foremost as a tool for self-promotion. Because of this, perhaps the most important digital literacy associated with social networking platforms is being able to engage in effective practices of self-presentation and impression management, including understanding how to use the tools available on these platforms to attract some kinds of attention and deflect other kinds. The idea that social identity is a kind of performance was introduced by the sociologist Erving Goffman in his classic 1959 book The Presentation of Self in Everyday Life. When we are in public, Goffman said, we cooperate with one another to perform various roles. Depending on whom we


are with, we may want to (or have to) perform different ‘roles’. Goffman refers to the way that we reserve certain kinds of performances for certain kinds of people as audience segregation. Despite Facebook founder Mark Zuckerberg’s insistence that ‘you have only one identity,’ and that ‘the days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly’ (Kirkpatrick 2011: 199), the fact is that social identities are always performances, and we perform different identities for different people as a matter of course. Many people have argued that one of the biggest challenges for the presentation of self on social media sites is managing our ‘multiple selves’. Because social networks are organized based on connections rather than interests, most people’s networks consist of people from different parts of their lives: friends, family members, work colleagues, classmates, and practical strangers. This ‘mixed’ audience makes it more difficult for people to tailor their performances for different kinds of audiences. Internet researchers Alice Marwick and danah boyd (2010) refer to this phenomenon as context collapse. Many users of social media platforms have experienced the negative consequences of messages intended for one group of people (for example, your friends) being visible to another group of people (like your colleagues). Other scholars, however, have argued that context collapse is not as big a problem for the presentation of self as it might seem. Not only do many social media sites have tools that allow users to selectively post to different people; users of social media platforms, as boyd (2014) herself observes, also develop discursive strategies for communicating in ‘collapsed contexts’ in ways that make their messages mean different things to different people.
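Goffman’s audience segregation, and the selective-posting tools that mitigate context collapse, can be sketched as a simple visibility check. This is an illustrative model only; the group names and the `visible_to` function are invented and do not reflect any platform’s actual privacy architecture.

```python
# Sketch of audience segregation online: each post carries an audience
# list, and a viewer sees only the posts addressed to a group they
# belong to. Illustrative; real privacy settings are far more elaborate.

posts = [
    {"text": "party photos", "audience": {"friends"}},
    {"text": "conference slides", "audience": {"colleagues", "friends"}},
]

# Hypothetical viewers and the audience groups they belong to.
viewer_groups = {"maya": {"colleagues"}, "li": {"friends"}}

def visible_to(viewer):
    """Return the posts whose audience overlaps the viewer's groups."""
    groups = viewer_groups[viewer]
    return [p["text"] for p in posts if p["audience"] & groups]

print(visible_to("maya"))  # a colleague sees only work-facing content
print(visible_to("li"))    # a friend sees both posts
```

Context collapse, in these terms, is what happens when every post effectively carries the audience set of the whole network, so the same performance reaches colleagues and friends alike.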
Caroline Tagg and her colleagues (2017) have argued that context is not something that is determined by the architecture of the social media platform or the makeup of individuals in a person’s social network, but that people dynamically design contexts as they interact with others online. Nevertheless, the technical affordances of social media sites are still central to the way we are able to manage the presentation of the self. We accomplish performances of identity, according to Goffman, by making use of the various kinds of ‘equipment’ different settings make available to us. Actors use a stage, props, costumes, and make-up. People in ‘real life’ also use equipment like the physical setting, various objects, clothing, and make-up. Just as different physical settings make available to us different kinds of equipment for self-presentation, so do different online settings. The equipment online social networks make available for impression management generally consists of two kinds: there is equipment for displaying information and equipment for concealing information. Equipment for displaying information in online social networks consists of things like profiles and photo albums. Such sites also make the display of one’s relationships with other people an important part of self-presentation; lists of


‘friends’ and other information about one’s position in the social network have become an important part of displaying one’s identity. Some sites also allow users to make explicit statements about their ‘relationship status’ with other people or to communicate relationships through actions like tagging their friends in photos. Of course, the main way social media platforms allow users to display their identities is through posting content, which can consist of text, photos, videos, audio clips, content from other sites (such as news articles), or a combination of these. Different social media platforms influence the way users manage these ongoing performances through the kinds of content they encourage people to share and the way that content is arranged and archived. Some social media platforms, such as Instagram, encourage visual communication in which people express their personalities through the pictures they take of themselves or their environments. Twitter, on the other hand, although it allows users to share images and even audio recordings, tends to favour textual content. Social media platforms also make available a host of other ways for people to share content through links to third-party applications (such as fitness trackers, quizzes, and games), which automatically send updates to social media sites. Moreover, many social networking sites automatically share information about users whenever they do something like upload a new photo or change their relationship status. All of these avenues for information display are designed to encourage people to share more about themselves, which increases the value of the network for the companies that operate them and the advertisers to whom they sell this information. It is also through interaction in the network that we display our identities—the way we comment on or ‘like’ other people’s content and the way we react to their comments on or reactions to ours.
Social media feeds often become like stages in which people act out conversations with their friends for other people to witness. What this means is that we are not alone in constructing our online identities. Others in our social network also contribute by doing things like commenting on our posts and tagging us in pictures. This interactive and cooperative aspect of self-presentation also makes it more difficult for us to manage our information preserves. While we have control over what we want to reveal and what we want to conceal, we do not always have control over what other people reveal about us or the kinds of comments they make about our content. Just as important as the equipment such sites provide for displaying information is the equipment they provide for concealing it. ‘Privacy settings’, for example, allow users to make certain kinds of information available only to certain people. Equipment for concealing information also consists of the ability to remove information. One might, for example, find it necessary to


‘untag’ oneself from an embarrassing photo posted by another user or to remove incriminating comments people have posted on one’s feed. Different online social networks provide different kinds of tools for users to manage their privacy. The problem is that in some systems the privacy settings are difficult to use or ‘leaky’. On Facebook, for example, one can usually gain access to the photo-album of a ‘non-friend’ after a mutual friend comments on a photo. Furthermore, although nearly all sites provide users with the ability to change their privacy settings, most people stick to the default settings determined by the platform (see Chapter 7), settings which usually offer a minimum level of privacy. Third-party applications (such as games and quizzes) can also gain access to people’s information and share it outside of the social network (see Chapter 12). Rather than attempting to set different levels of privacy for different people within a particular social networking site, some people prefer to use different services for different purposes. They may, for example, use Facebook for their friends, LinkedIn for their business associates, and Twitter to contact people who have similar hobbies or interests. Some people (see for example Bennett, 2011) argue that the simplest and most open privacy settings actually have the effect of helping people to better maintain their privacy. On Twitter, for example, there are only two options: completely open (in which all tweets are public and can be retweeted by anyone) and completely private. Most users choose to tweet publicly (since in many ways publicity is the whole point of tweeting). What this means is that they do not suffer from an ‘illusion of privacy’ and make use of the most efficient filter available to control what they reveal: their own judgement. The most important thing about identity in online social networks is that it is not static. One’s identity must be constantly updated.
Some sites like Facebook facilitate this constant maintenance and reinvention of the self with features like the ‘news feed’ which alerts people in your network when you update your status, post new information, or change something in your profile. At first this was a controversial feature among users, making them feel that too much information about their actions was being broadcast to other people. The clever and oddly attractive part of this feature, however, is that it encourages the performative aspect of identity: our own lives and the lives of others become like dramas that are played out before different audiences in our social networks. Different platforms also have different ways of helping users to create a coherent self out of all of the separate pieces of content they have posted over time. Facebook, for instance, strongly promotes a view of a stable, historical identity that users build up as they use the site. All of the content users post is arranged in reverse chronological order on their Timeline (formerly known as their ‘Wall’), and Facebook constantly reminds users of content they have posted in the past with its ‘On this Day’ feature, which


invites users to repost ‘memories’. The company also sometimes presents users with algorithmically produced video compilations of their posts from a particular time period (‘Look Back Videos’) or of their posts involving a particular user (‘Friendship Anniversary Videos’). Finally, the site prompts users to record ‘critical moments’ such as buying a house, their first kiss, and giving birth to children. In this way, Facebook encourages users to see the ‘small stories’ (Georgakopoulou, 2007) that they tell in each of their individual posts as contributing to their overall ‘life story’. They also encourage them to regard the platform as an ‘archive’ of personal and communal memories. Instagram, on the other hand, encourages users to focus much more on the aesthetic aspects of their lives, and to establish a coherent image of themselves based on the unique ‘style’ that they develop as they use the platform. Compared to Facebook and Twitter, communication on Instagram places a much higher value on visual design, encouraging users to see self-presentation not just as a matter of the kinds of objects, events, and situations that they depict in the pictures they share, but also the lighting, colour, and composition of their photos (often manipulated using the filters and other photo editing tools available in the app). These stylistic concerns extend from individual photos to the grid layout of previously posted pictures on the landing page of their account, with users often striving to create a kind of ‘theme’ or ‘mood’ with the way the photos work together (see Chapter 4). Despite its emphasis on visual design in the presentation of the self, Instagram (as well as Snapchat) also includes a narrative element, allowing users to use images to create what are called ‘stories’, slideshows of pictures taken over a certain period of time during a day.
But these narratives are very different from the durable ‘life stories’ that Facebook encourages, most importantly because they are ephemeral; they disappear from users’ feeds after 24 hours. These ephemeral stories encourage users to see their lives not in terms of one long coherent narrative, but rather as a series of episodes, each day presenting a new opportunity for them to be a character in a new story. TikTok has very different affordances for creating coherence in self-presentation. Rather than personal histories in which separate posts are arranged in a timeline, or visual ‘styles’, in which people attempt to maintain a certain theme or ‘aesthetic’ across posts, TikTok videos are more typically detached performances—funny dance moves, gestures or catchphrases—which may not have much relationship with one another. That is not to say that some users don’t try to construct coherent narratives or consistent styles with their videos over time, but the app itself doesn’t provide many affordances for them to do so. Where connections are typically made is not between the different videos that users have posted,


but between individual videos and videos posted by other users. The main affordance of the app is that it allows users to respond to or mimic other users by combining their performance with the voice or music from other videos or duplicating videos and adding themselves alongside other users (known as ‘duets’). The kinds of selves expressed through such affordances are less narrative and more ephemeral—identity is predicated on one’s participation in some relatively short-lived trend or challenge or joke and the conviviality (see below) and possibility of momentary ‘fame’ that one’s performance might generate.

ACTIVITY: PERFORMANCE EQUIPMENT

Consider an online social networking platform that you use such as Facebook, Instagram, Twitter, Snapchat, or TikTok and answer the following questions.

1. What kind of equipment does this platform make available for displaying information about yourself? What are the affordances and constraints of such equipment and what kinds of displays do they facilitate? How do you and other people in your social network make use of this equipment? What sort of strategies do people use to enhance their displays in order to get other users’ attention or to express certain things about themselves?
2. What kind of equipment does the service make available for concealing information about yourself? What are the affordances and constraints of such equipment? How knowledgeable are you about how to use this equipment?
3. Do you display and conceal different things to different people in your social network? What kinds of equipment does the service that you use make available for managing audience segregation? Do you present different identities to different audiences?
4. What kinds of selves does this platform encourage users to present? Does it encourage users to construct durable, coherent identities or more ephemeral identities? What are the affordances the platform makes available that encourage these kinds of identity constructions?
5. To what extent do you regard your self-presentation using this service to be an ‘authentic’ and ‘honest’ reflection of your ‘real world’ self? To what degree do you engage in dissembling, exaggerating, idealizing, role-playing, and outright lying? What are the social functions of such activities?
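The logic of audience segregation through privacy settings can be modelled in a few lines of code. This is a deliberately simplified sketch: the audience names, posts, and lookup rule are all invented for illustration and bear no relation to any real platform's permission system.

```python
# A toy model of 'equipment for concealing information': each post is
# addressed to one named audience, and a viewer sees it only if they
# belong to that audience. All names and rules here are hypothetical.

AUDIENCES = {
    "friends": {"amy", "ben"},
    "colleagues": {"carla", "dev"},
    "public": {"amy", "ben", "carla", "dev", "stranger"},
}

def visible_to(post, viewer):
    """Return True if the viewer belongs to the post's target audience."""
    return viewer in AUDIENCES[post["audience"]]

party_photo = {"text": "Last night!", "audience": "friends"}
job_update = {"text": "Started a new job!", "audience": "public"}

assert visible_to(party_photo, "amy")        # a friend sees it
assert not visible_to(party_photo, "carla")  # a colleague does not
assert visible_to(job_update, "stranger")    # 'public' reaches anyone
```

Real platforms complicate this picture considerably: defaults, ‘leaky’ settings, and third-party applications can all widen the effective audience well beyond what the user intended.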


CASE STUDY: INFLUENCERS AND MICROCELEBRITIES

As we said above, self-presentation on social media takes place in the context of an attention economy in which users compete with other users to attract attention. In this economy, tokens of recognition such as ‘likes’ and comments serve as the ‘currency’ users exchange, in order to accumulate what the French sociologist Pierre Bourdieu (1986) calls social capital. Those who accumulate economic capital through social media platforms are mostly the platform owners and the advertisers who use the information people reveal to sell them products. There are, however, some users who are able to monetize their participation on social media sites, becoming microcelebrities or influencers and earning money through advertisements or product promotions. In a way, this monetization is a logical extension of the way social media sites promote entrepreneurial and neoliberal notions of identity and selfhood. The term ‘microcelebrity’ was coined by internet researcher Theresa Senft (2008) while she was researching ‘camgirls’—a form of internet celebrity popular at the turn of the millennium in which people would set up webcams in their homes and broadcast their lives for people to watch. As new tools for online self-presentation, particularly social media platforms, became available, people had new ways of ‘amping up’ (25) their popularity within and across networks. For people seeking microcelebrity, there is, as Ruth Page (2012: 182) argues, an ‘emphasis on the construction of identity as a product to be consumed by others, and on interaction which treats the audience as an aggregated fan base to be developed and maintained in order to achieve social or economic benefit.’ Social media influencers are a specific type of microcelebrity who use social media to craft an authentic ‘personal brand’, which can be used by companies and advertisers for social media marketing.
The rise of microcelebrity can be seen as part of a wider ‘celebrity culture’ that developed in the early twenty-first century, a culture which sociologist Joshua Gamson (2011: 1068) says is characterized by

a heightened consciousness of everyday life as a public performance—an increased expectation that we are being watched, a growing willingness to offer private parts of ourselves to watchers—known and unknown—and a hovering sense that perhaps the unwatched life is invalid or insufficient.

This trend has been fuelled by a range of different media formats, such as reality TV, but social media platforms are probably most


responsible for promoting it, mostly because, even for ordinary users, social media encourages the public performance of private life and tacitly promises ‘fame’ in the form of recognition from other users. Social media also encourages practices of self-presentation consistent with the notion of ‘self-branding’ in which users carefully cultivate online personae. Finally, social media platforms have built into them what Alice Marwick (2013: 75) calls ‘status affordances’, such as displaying the number of followers a user has on their profile page, which encourages them to orient towards practices designed to increase popularity. Different social media platforms provide different kinds of ‘status affordances’ and promote different strategies of self-promotion. For example, Ruth Page (2012) examines how the strategic use of hashtags is a central strategy for achieving prominence on Twitter. Referring to hashtags as the ‘currency’ in the linguistic marketplace of Twitter (184), Page notes how they simultaneously enable users to make their content searchable and to affiliate themselves with popular trends, movements, or catchphrases. While hashtags are also important on more visually oriented platforms like Instagram, the style and content of users’ images are usually the main determinants of fame.
Alice Marwick (2015) notes that ‘Instafame’ often depends on how successful users are at emulating the ‘tropes and symbols of traditional celebrity culture, such as glamorous self-portraits, designer goods, or luxury cars.’ At the same time, there are also users who gain microcelebrity from challenging traditional norms of celebrity culture by creating ‘quirky’ or anti-establishment personae, as well as those who focus on appealing to particular niche audiences, as is the case with YouTube channels that focus on specialized interests like cooking, extreme political views, or offbeat trends such as Mukbang (videos that depict people slurping food) and ASMR (videos in which people produce soothing or sensual sounds). In this regard the recommendation algorithms of platforms like YouTube play an important role in making the content of microcelebrities more prominent. While microcelebrity builds upon many of the motivations and strategies for self-presentation common among users of social media, there are a number of particular challenges microcelebrities and influencers face when it comes to impression management. One has to do with understanding how to address their followers, who inhabit the ambiguous position of being ‘friends’ and ‘fans’ at the same time, or, as Theresa Senft (2012) puts it, being members of ‘communities’ and members of ‘audiences’. Normally, people speak differently to these different kinds of people online. As Senft (350) notes, ‘Audiences desire someone to speak at them; communities desire someone to


speak with them.’ This situation requires that microcelebrities become skilful at striking a balance and strategically switching between two different modes of address. This ambiguous positioning of followers can be an advantage for ‘real’ celebrities such as Beyoncé and the Kardashians because it helps them to create a more intimate relationship with their fans, positioning them as ‘friends’, a strategy that Horton and Wohl (1956) call ‘parasocial interaction’, the illusion of real-life friendship that is created between performers and their audiences (see also Marwick & boyd, 2010). But for microcelebrities it entails the risk of making friends feel like they are being treated as ‘fans’, being looked down on or kept at a distance. Another challenge microcelebrities and influencers face is in maintaining an air of authenticity, projecting the impression that they are ‘ordinary people’ doing the things that they would ordinarily do. This is particularly important for influencers, who aim to give the impression that they really do use the products that they are promoting. One way they try to maintain authenticity is by talking about their products in the context of intimate disclosures about their everyday lives and problems, often making a point of revealing weaknesses and imperfections. They also regularly address their followers using terms of direct address and adopt casual styles of speaking and slang terms that are familiar to those to whom they are trying to market their goods. In fact, the strategies that influencers use to seem authentic are so well tested that they can even be applied to situations in which the influencer may not be authentic—or may not even be human. One of the most followed social media influencers during the time that we were writing this book was actually a computer-generated (CGI) persona named Miquela Sousa (lilmiquela), who purports to be a 19-year-old Brazilian-American girl from Los Angeles.
Lil Miquela has over 1.5 million followers and has been used to promote brands such as Diesel, Stüssy, and Prada, and has even appeared in photos with ‘real’ celebrities like Diplo, Millie Bobby Brown, and Brazilian drag queen Pabllo Vittar. She is also a keen advocate of social justice, posting messages on her Instagram account promoting #BlackLivesMatter and transgender rights. Fictional influencers such as Lil Miquela raise important questions not just about the kinds of strategies influencers use to communicate authenticity, but also about what authenticity even means. One of the main reasons Miquela seems authentic is that she exhibits the imperfections, insecurities, and penchant for melodrama that people often associate with ‘real’ 19-year-olds. In one incident, for example, Lil Miquela’s account was hacked by another teenaged CGI influencer named BermudaIsBae who bullied her and eventually ‘outed’ her as


not really human. This led Miquela to post a series of revelations about the difficulties and mental stress associated with coming to learn that she was not human and her journey of accepting herself for who she is. One post, for example, read:

I’m thinking about everything that has happened and though this is scary for me to do, I know I owe you guys more honesty. In trying to realize my truth, I’m trying to learn my fiction. I want to feel confident in who I am and to do that I need to figure out what parts of myself I should and can hold onto. I’m not sure I can comfortably identify as a woman of color. “Brown” was a choice made by a corporation. “Woman” was an option on a computer screen. My identity was a choice Brud made in order to sell me to brands, to appear “woke.” I will never forgive them. I don’t know if I will ever forgive myself.

The drama surrounding this incident, including the revelation that she was not ‘real’, actually endeared her to her fans and substantially boosted the number of people following her Instagram account. One reason for this was that, despite her revelation of inauthenticity, the pain and vulnerability she expressed when talking about it made her seem more authentic. Cases of ‘synthetic authenticity’ (Cossell, 2019) such as Lil Miquela have much to teach us about the performative nature of authenticity online, whether such performances are undertaken by microcelebrities or by ordinary people. Being authentic on the internet is not about being ‘real’. Social media users are, in fact, often very tolerant of other users making themselves seem ‘better’ than they really are online, just as they are tolerant of Lil Miquela being a totally fictional character. It seems that what is really meant by being ‘authentic’ online is being able to perform attributes and behaviour associated with sincerity and openness, and also to disclose fears and vulnerabilities that other people can relate to.

‘Anti-social media’

In their book Taking Offence on Social Media: Conviviality and Communication on Facebook (2017), internet sociolinguist Caroline Tagg and her colleagues examine how users of Facebook deal with what they might consider offensive content from other people in their social networks. Based on interviews with users, they found that most people do their best to


avoid conflict, usually ignoring and sometimes ‘hiding’ posts from users that they disagree with. Based on these findings, they posit that conviviality—‘the desire for peaceful co-existence through negotiating or ignoring difference and avoiding contentious debate’ (89)—is the underlying principle governing this kind of social media encounter. Conviviality is not just about avoiding conflict; it is about building and maintaining social bonds, often through what we referred to in Chapter 5 as phatic communication. Piia Varis and Jan Blommaert (2015: 40) similarly argue that conviviality—‘a level of social intercourse characterized by largely “phatic” and “polite” engagement’—is exactly what social media platforms like Facebook are made for. Such engagement is facilitated by the technological affordances of such platforms which enable users to easily send tokens of positive regard (such as ‘likes’) with very low transaction costs (see Chapter 5). Others, however, have pointed out the frequency with which users of social media exhibit less than convivial behaviour, for example, cyberbullying, flaming, trolling, and other forms of anti-social attention seeking. Some, such as computer scientist Jaron Lanier (2018), have gone so far as to claim that social media, by its nature, turns people into ‘assholes’: social media sites reward people for getting attention, and assholes generally get the most attention. Lanier calls social media ‘Asshole Amplification Technology.’ Online hostility has been particularly evident where social media platforms have become sites where political debates are played out, and many have blamed social media for encouraging extreme political polarization and even politically motivated violence in some countries.
‘Social media,’ say social psychologist Jonathan Haidt and designer Tobias Rose-Stockwell (2019), ‘turns many of our most politically engaged citizens into… arsonists who compete to create the most inflammatory posts and images, which they can distribute across the country in an instant.’ Of course, much of this behaviour is provoked by trolls who intentionally release incendiary content into the social media eco-system for political purposes or for economic gains, and it is no doubt exacerbated by filter bubbles that encourage confirmation bias and intolerance (see Chapter 2). But there is also something about the architecture of social media and the incentives of the attention economy that sometimes brings out the ‘inner troll’ in otherwise convivial people. Empirical studies bear this out. A 2017 study by Pew Research Centre, for example, showed that posts on Facebook exhibiting ‘indignant disagreement’ received nearly twice as much engagement (reactions, comments) as other posts (Figure 10.2). Of course, certain social media platforms seem to be particularly notorious for ‘anti-social’ behaviour. Amnesty International, for instance, singles out Twitter as a site that encourages or enables ‘toxic’ discourse, especially against women (Dhrodia, 2018), a fact that has been


Figure 10.2 Disagreement and engagement on Facebook: Pew Research Centre (2017).

regularly confirmed by the tweets of former US President Donald Trump. YouTube is also infamous both for the extremist and incendiary video content that can be found there, and the aggressive nature of the comments users leave about other users’ videos (Murthy & Sharma, 2019). Just as the technological affordances of social media sites encourage conviviality, they can also drive hate speech and toxic communication: it is just as easy to send negative messages as it is to send positive ones, and, in an attention economy, sometimes more rewarding. Digital media scholar Luke Munn (2020) argues that the architectures of platforms like Facebook and YouTube foster stimulus-response loops that promote the expression of anger and outrage, since expressions of anger and outrage are likely to be rewarded with reactions. The observation that social media platforms encourage both conviviality and hostility may not be as much of a contradiction as it seems. In fact, conviviality and hostility are in some ways closely related, two sides of the ‘phatic’ nature of social media communication. Sending messages that are hostile to an ‘outgroup’ also has the effect of encouraging solidarity among members of an ‘ingroup’. The internet scholar José van Dijck (2012: 161) has pointed out that the most important thing about social network sites is that they are social, that they are designed to ‘activate relational impulses.’ Platforms like Facebook, in fact, have all sorts of features that are specially designed to get people to invest more strongly in their relationships such as reminding users to wish their ‘friends’ a happy birthday. In other words, as Haidt and Rose-Stockwell (2019) note, people use social media not so much


to communicate as to display something about themselves, their beliefs, and their relationships to other people. That is, platforms push people to focus on who and what they ‘like’, as well as on who or what they don’t ‘like’. The practice of sharing content on social media sites, then, is usually less about sharing ‘information’ and more about signalling how you relate to others in the network and claiming an affiliation with a particular ‘tribe’, less about ‘What’s on your mind?’ and more about what’s in your heart. More than that, they can even push us to like or dislike people and things that we may not have previously had strong feelings about. Haidt and Rose-Stockwell (2019) observe that social media can encourage us ‘to observe conflicts and pick sides on topics about which we would otherwise have few opinions.’ In contexts where ‘picking sides’ becomes a way of signalling one’s affiliation with like-minded users, hostility can even function as a form of ‘conviviality’, a way of supporting and ‘sticking up for’ friends who hate ‘those other people’ as much as you do. In her book Affective Publics (2015), media scholar Zizi Papacharissi argues that social media have a unique power to amplify ‘affect’ (emotions) through the forms of condensed and intense storytelling that they make available to users (such as images, short videos, and tweets). The condensed and often visual nature of social media content is more likely to depend upon appeals to emotion than longer, more discursive forms of communication. Similarly, Ken Hillis, Susanna Paasonen, and Michael Petit (2015) assert that what drives engagement with social media is what they call ‘networked affect’: it is the affordances of social media platforms to encourage emotional exchanges (whether convivial or hostile) that makes them ‘sticky’, and the emotional exchanges themselves that make the platforms even ‘stickier’.
This affective feedback loop, of course, is further exacerbated by algorithms which promote content based on predictions about how much it will increase user engagement. It is their affective dimension that makes social media platforms so attractive, and in many ways, so beneficial, by making us feel closer to our friends and family members, and making us feel ‘liked’. But they can also make us feel further away from our ‘enemies’ and exacerbate feelings that we are not ‘liked’ enough. The biggest impact of the way social media platforms amplify affect may be on the quality of the information that is shared on these sites. As internet critic Siva Vaidhyanathan (2018) observes:

We share content regardless of its truth value or its educational value because of what it says about each of us. “I am the sort of person who would promote this expression” underlies every publication act on Facebook. Even when we post and share demonstrably false stories and claims we do so to declare our affiliation, to assert that our social bonds


mean more to us than the question of truth.

The reason for the spread of ‘fake news’ through social media sites (see Chapter 7), then, is not just mysterious ‘foreign’ hackers or malicious bots—though they certainly play a role—but also the fact that, in the context of social media, ‘fake news’ can function as a form of ‘phatic’ communication. Indeed, it is this very fact that trolls, hackers, and bots have learned to so effectively exploit.
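The stimulus-response loop described in this section (emotionally charged content is predicted to draw reactions, so ranking systems surface it, and it then draws still more reactions) can be caricatured in a short sketch. The scoring function, weights, and example posts below are all invented for illustration; no platform's actual ranking algorithm is public.

```python
# Toy feed ranking: posts are ordered by predicted engagement, with an
# extra multiplier for emotionally charged ('high-arousal') content.
# All numbers and the scoring formula are invented for illustration.

posts = [
    {"text": "Nice sunset today", "predicted_reactions": 12, "outrage": 0.1},
    {"text": "You won't BELIEVE what they did!!", "predicted_reactions": 20, "outrage": 0.9},
    {"text": "Photo of my cat", "predicted_reactions": 25, "outrage": 0.0},
]

def engagement_score(post):
    # Reward predicted reactions, boosted by emotional intensity.
    return post["predicted_reactions"] * (1 + post["outrage"])

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["text"] for p in feed])
# The inflammatory post tops the feed even though the cat photo was
# predicted to get more raw reactions.
```

Because each reaction to the top-ranked post feeds back into the next round of predictions, even a modest emotional multiplier of this kind can compound into the amplification of anger and outrage that Munn and others describe.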

Conclusion

In this chapter we have focused primarily on how social networking sites are affecting the way we manage our social relationships, particularly their ability to facilitate what we referred to as ‘strong weak ties.’ We have also explored the kinds of ‘expressive equipment’ social media sites make available for the presentation of the self, and the effect of this equipment on the kinds of selves that people present. In particular, we have focused on the ‘attention economy’ that dominates social media and how the constant ranking and evaluation that characterize these platforms have given rise to a culture of self-commodification and microcelebrity. Finally, we have shown how the kinds of social relationships we are able to form on social media are intimately connected to the ways they facilitate the sharing and circulation of certain kinds of content, specifically content designed to elicit emotional reactions from other users. In the next chapter, we will consider the way that people work together online, whether on collaborative projects like Wikipedia or in the spread of memes and other kinds of ‘viral’ content that takes advantage of the network effect.

Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Chapter 11

Collaboration and peer production

As we saw in Chapters 3 and 4, reading and writing using digital media are interactive processes, with readers able to write back to authors and actively take part in text production. In addition, with mashups, remixing, and memes, creating a digital text is often a matter of piecing together the texts of others to come up with a new and original work. Although it might seem as though you are working ‘alone’, you are in a sense entering into ‘collaboration’ with the creators of the texts, images, music, and sound that you are remixing. This is even more the case when it comes to the circulation of content: for content to ‘go viral’, many people connected in networks have to contribute by sharing, commenting on, and promoting it. In this sense, nearly all digital literacy practices involve us in interaction and/or collaboration with others.

At the same time, digital media and communication technologies also make it easier for us to enter into more structured collaborations. Firstly, it is easier for us to establish relationships with potential collaborators: for example, we can identify people with similar interests and interact with them through social media and the kinds of online affinity spaces described in Chapter 8. Secondly, thanks to the cheap availability of networked communication tools and collaboration platforms such as WhatsApp, WeChat, Skype, Zoom, Slack, and Microsoft Teams, it is easier for us to maintain collaborative relationships over a distance. Finally, we now have access to various cloud-based tools and services that can help us to manage different stages in the collaborative writing process: online notetaking tools, wikis, and online office/productivity suites all make it easier to manage collaborative writing tasks.

As well as supporting collaborative writing processes like those traditionally found in the workplace, digital media and communication technologies have also made it possible for a new form of collaborative process to emerge.
This new process is known as commons-based peer production (peer production for short). In peer production, massive numbers of people, who are distributed across the globe and connected to each other by digital networks, work together voluntarily to promote projects that they are interested in. Under the right conditions, these loosely organized groups of peers can work together on projects so effectively that they are able to compete with more traditional organizations, like governments and corporations. In the case of collaborative writing, perhaps the best-known example of this peer production process is the online encyclopaedia, Wikipedia. Although the content of Wikipedia is produced almost exclusively by volunteers, its quality nevertheless rivals professionally produced encyclopaedias like the Encyclopaedia Britannica (Giles, 2005). As we will see, the kind of peer production processes that have resulted in Wikipedia can also be found in other walks of life, including software development, archiving, academic and commercial research, entertainment, and commerce. In fact, once you become aware of the underlying principles of this form of networked, peer-to-peer collaboration, you begin to see instances of it everywhere.

As well as peer-produced content, there is peer-produced filtering and peer-produced reviewing and commentary. For example, the social news aggregation site, Reddit, relies on its users to rate and rank content. The order of posts appearing on the site is determined by the input of users who upvote the content that they find appealing. Similarly, the list of recommendations that Amazon serves you as you browse through its online store is compiled by aggregating the selections of the many customers who bought the book you are interested in. Although you might not think of yourself as working in collaboration when you upvote a post on Reddit or buy a book through Amazon, in reality your input, combined with the input of millions of others, provides a valuable resource for other people.

In this chapter, we will consider the effect of digital media and communication technologies on collaboration. We begin by exploring the potential of digital media to facilitate collaborative writing.
Then we describe peer production, the collaborative information production model that makes use of internet technologies to engage vast numbers of individuals distributed across the globe in collaborative projects. Finally, we consider the collaborative circulation of information by looking at the mechanisms at work in virality, especially the circulation of memes.
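The peer-produced filtering described earlier can be sketched in a few lines of code. This is only an illustration, not Reddit’s actual ranking algorithm (which also weighs factors such as recency); the posts and vote counts below are invented:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Aggregate the many small acts of voting into a single net score.
        return self.upvotes - self.downvotes

def rank(posts):
    # The 'front page' is simply the posts ordered by aggregated user input.
    return sorted(posts, key=lambda p: p.score, reverse=True)

posts = [
    Post("Open source server survey", 45, 2),
    Post("Cat meme", 120, 30),
    Post("Obvious spam", 3, 50),
]
front_page = rank(posts)
print([p.title for p in front_page])
# prints ['Cat meme', 'Open source server survey', 'Obvious spam']
```

The point is simply that an ordering useful to everyone emerges from many individually trivial acts of voting.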

Collaboration in writing

In order to understand the way that digital tools can contribute to collaboration in writing, we need to understand something about the process of collaboration itself. In fact, collaboration is complex and varies from group to group. Different groups adopt different processes and strategies depending on their particular needs and the individual styles of group members. Despite this variation, we can nevertheless identify a number of common issues that collaborative writing teams must face. A first set of issues relates to the group: group formation, group organization and maintenance, and communication about group processes. A second set of issues relates to the writing task: the strategies people use to accomplish the task and how they coordinate their work (see Sharples et al., 1993; Lowry, Curtis, & Lowry, 2004 for more detailed accounts).

An initial issue in collaborative writing is how to form a group and define the writing task. In some cases group membership and task definition are taken as givens, as when groups of knowledge workers from the same company collaborate on routine writing tasks that emerge in the course of business. In other cases, where the writing is of a less routine nature, it may be necessary to review the goals of the collaborative writing project and identify individuals from a range of backgrounds with the necessary knowledge and skills to participate. In the case of academic writing, McGrath (2016) provides an interesting analysis of the Polymath 8(a) open access research blog, set up by mathematicians and available for anyone to access. This blog ‘is innovative in its approach to including non-specialist mathematicians in knowledge construction, writing, and dissemination’ (27). A research blog of this kind can be influential in group formation, allowing a group of like-minded individuals to coalesce and engage collaboratively in the writing of a research article.

This process of group formation can be further facilitated by participation in online affinity spaces, where individuals from different backgrounds interact with one another about a shared interest (see Chapter 8). By participating in such online spaces (for example, email lists, online forums, social networks) people expand their networks and enter into relationships with like-minded others, all of whom are potential collaborators. Such online spaces could conceivably provide useful contacts for professional collaborations as well as for more informal collaborations that arise out of non-professional interests, as is the case with fan fiction writing, for example.
In addition, digital communication tools facilitate the formation of collaborative groups that are geographically distributed over large areas.

A second issue relates to organizing and maintaining the group once it has been formed and the writing task has been clearly defined. This involves negotiating a suitable process (see below), negotiating the roles of various contributors, and resolving conflict as it arises. Again, in routine workplace writing the roles of team members may be fairly stable, with given team members regularly taking on particular social roles (team leader, facilitator) or task-related roles (writer, reviewer). Conflict can arise when team members are not satisfied with each other’s contributions, or when they disagree about aspects of the writing, such as the intended audience, the rhetorical purpose, and so on. Such conflicts can act as either a constructive or a destructive force, depending on how they are managed. Digital media like email, chat, messaging, and video-conferencing tools can be used in order to address these issues of group organization and maintenance, especially in groups that are geographically distributed. However, compared to face-to-face interaction, these tools can create a sense of anonymity and increase the social distance between members of the team. Sharples et al. (1993) call this phenomenon de-individuation and Lojeski (2007) calls it virtual distance, noting that a lack of rapport and trust can sometimes be a by-product of mediated communication. This virtual distance can undermine the social cohesion of the group. Bearing this in mind, where possible it is often desirable for collaborative teams to meet face-to-face during the initial stages of group formation and planning, or when serious conflicts arise. In groups where this is not possible, contact with team members through more informal channels such as social networking sites might help to reduce the de-individuation effect (see Jones & Hafner, 2012).

A third issue relates to the strategies adopted when actually engaging in the writing task. In general, a collaborative writing project can involve both periods of close collaboration (team members meet for writing sessions, discuss and develop drafts together) as well as periods of individual work (team members divide the labour and draft on their own). Sharples et al. (1993) identify three different collaborative writing strategies: sequential writing, parallel writing, and reciprocal writing. In sequential writing (see Figure 11.1), one person works on the document at a time before passing control of it to the next writer. This strategy allows each writer to review and build on the existing text in a coherent


Figure 11.1 Sequential writing (adapted from Sharples et al., 1993).


way. The drawback is that each writer has to ‘wait their turn’ and this leads to some inefficiency.

In parallel writing (see Figure 11.2), the document is divided into different sections, which different writers work on at the same time. This allows the team to make progress on a number of sections of the writing at the same time. However, once these sections are complete it is necessary for an editor to take control of the full draft and ensure that the sections build on each other in a logical way. Some stages of the writing task are particularly well suited to a parallel writing strategy. For example, in academic writing, while one team member is editing a completed draft for formatting and style, another can compile the reference list and appendices.

In reciprocal writing (see Figure 11.3), control of the document is shared between all members of the team, who simultaneously discuss, draft, and respond to each other’s suggestions. This strategy is well suited to the initial brainstorming and outlining stages of a project, but can also be used to produce a full draft. For example, collaborators can meet, discuss, and compose their draft with a ‘scribe’ making changes to the document. Alternatively, collaborators can share the document that they are working on and allow any member of the team to make changes to any section. Wiki platforms (see the case study below) could be used to facilitate such a reciprocal writing process, as could online office/productivity suites which store the document


Figure 11.2 Parallel writing (adapted from Sharples et al., 1993).



Figure 11.3 Reciprocal writing (adapted from Sharples et al., 1993).

on a central server and allow team members to simultaneously view each other’s edits. The three strategies outlined here can all be supported by technological tools to create, share, comment on, and jointly edit artefacts that support different stages in the writing process (for example, mind maps, outlines, drafts). Cloud-based file-sharing services like Dropbox and OneDrive make it easy for members of a team to distribute their notes and drafts to one another. In addition, there is now a wide range of dedicated collaborative writing tools available, including brainstorming tools, word processors, online office suites, and wikis.

In order to support collaboration in writing, these tools must provide detailed commenting and annotation functions for peer review and feedback. After an initial draft has been created, other team members must be able to record comments and suggested changes in a way that preserves the original version of the text, in case suggestions are not accepted. One tool that provides very detailed mark-up of this kind is Microsoft Word. An example, taken from a draft of the first edition of this book, is provided in Figure 11.4. In this example, Rodney has marked up a draft written by Christoph, using Microsoft Word’s ‘track changes’ and ‘comments’ tools. The track changes tool provides a way for collaborators to add and delete text, while keeping track of the original version. Four changes to the text have been


Figure 11.4 Draft with mark-up in Microsoft Word.

made here: additions to the text are displayed in purple, underlined text; deletions are recorded in the bubbles on the right. The edits are tied directly to the text and also record the editor’s name as well as a timestamp. This detailed display makes the suggested edits easy for the original writer to evaluate and either accept or reject: first, the writer can tease out the contributions of different team members; second, the writer can compare the new text with the old version. If a change is not accepted, it is easy to roll back to the previous version. Less detailed suggestions can be made using the comment tool, as shown in the figure, where Rodney asks for an example to be added (also displayed in the bubbles on the right). Once again, this comment is tied directly to the text it refers to (highlighted in purple), and this makes the comment easier to interpret. Such annotation tools are well suited to both sequential and parallel writing strategies, where the text or sections of the text are passed from one writer to another for review. Once a document has been marked up using such tools, it can be shared by email or through a file-sharing service, providing collaborators with the means to discuss their written products in detail, even if they are separated by a great distance.

Online office suites like Google Drive and Microsoft Office 365 allow writers to engage in the reciprocal writing strategy at a distance. With these services, the document is stored on a server online, and any team member with the necessary permissions can edit it, so that a number of team members could work simultaneously on the entire text if desired. In such online office suites, because the document is stored on a server, there is no need to share it to the local hard drives of other team members. This kind of central storage (which is also a feature of wikis, see below) also addresses the problem of ‘versioning’ in collaborative writing. When documents are shared between different computers (for example, by email), there is a risk that collaborators will lose track of who has the latest version, and simultaneously edit different versions of the same document on their local computers. When this happens, the versions stored on different computers get out of sync and have to be reconciled.
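The versioning problem can be made concrete with a small sketch using Python’s standard difflib module. Two collaborators, here hypothetically named Alice and Bob, edit their own copies of the same base text; the diffs show exactly what has to be reconciled (the sentences are invented):

```python
import difflib

# A shared base document and two copies edited independently.
base  = ["Digital media change how we write.", "Collaboration is complex."]
alice = ["Digital media change how we write.", "Collaboration is complex and varied."]
bob   = ["Digital tools change how we write.", "Collaboration is complex."]

def changes(old, new):
    # Keep only added (+) and removed (-) lines, dropping the diff headers.
    return [line for line in difflib.unified_diff(old, new, lineterm="", n=0)
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]

print(changes(base, alice))  # Alice's divergent edit
print(changes(base, bob))    # Bob's divergent edit
```

Storing the document on a central server, as online office suites and wikis do, avoids the problem by ensuring there is only ever one latest version.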

ACTIVITY: YOUR COLLABORATIVE WRITING PRACTICES

Describe and evaluate your own collaborative writing experiences, i.e. any time that you have been involved in some form of group writing or group writing project. Use the following questions to guide your discussion:

1. What was the context for the writing?
   a. Formal context, for example, school, university, training programme?
   b. Informal context, for example, Facebook, blog writing, fan fiction, memes?
2. What motivated you to do the writing? Was your motivation:
   a. intrinsic, i.e. doing it for fun?
   b. extrinsic, i.e. doing it to meet some external pressure or reward?
3. How did you manage the collaborative writing process?
   a. Group: How did you identify group members? What roles did different people adopt and how were these roles allocated? Was there a group leader? Were there any free riders? Was there any conflict? If so, what was the conflict about, how did you talk about it and resolve it?
   b. Task: How closely did you work with your collaborators in: 1) idea formation/brainstorming? 2) drafting? 3) revising/reviewing? Did you follow a parallel, sequential, or reciprocal working strategy, or some mixture? How did you communicate suggestions and revisions to the text? How did you keep track of the most recent version of the text?
   c. Communication: What media did you use to communicate with collaborators? How did those media affect the collaborative writing? Did you notice any ‘de-individuation effect’?
4. What communication tools/writing tools did you use to help you manage the collaborative writing process? How did those tools contribute to the process?


Wikinomics and peer production

The kind of digital media and communication tools that we have described so far make it easier to do collaborative writing in traditional contexts like the workplace. But, as we noted earlier, they also go beyond this to make possible a new kind of collaborative information production model: commons-based peer production (peer production for short). In essence, peer production is collaboration between very large, diverse, loosely organized collections of individuals, who are distributed throughout the world and connected by a digital network.

Peer production differs from traditional collaboration in the workplace in a number of ways. First, it usually involves massive numbers of collaborators from very diverse backgrounds. Second, the relationship between collaborators is one of equal peers working together and there is usually no formal ‘chain-of-command’ or organizational structure. Finally, the collaborators are self-selected, and motivated to contribute to the project out of a sense of enjoyment or fun.

The theory behind commons-based peer production is described in a paper written in 2002 by law professor Yochai Benkler called ‘Coase’s Penguin, or Linux and the Nature of the Firm.’ Benkler focuses on an early example of peer production, the development of the open source software Linux. Unlike proprietary software developed by companies like Apple and Microsoft, open source software is developed and maintained by a global community of volunteers who give their time for free. The source code underlying the software is available for anyone to use, modify, and distribute, both for commercial and non-commercial purposes, following the terms of the General Public License (GPL). One key term of this licence is that all software products that build on open source code must also be distributed under the GPL. In other words, future programming innovations have to be shared back to the developer community.

This kind of peer production is said to be ‘commons-based’ because individuals do not retain the intellectual property rights to their creative work, but return the work to the ‘commons’ so that others can use it as well.

The open source software movement attracted a lot of attention because of its phenomenal success. As an example of this success, consider the popularity of the open source server software Apache, which competes with Microsoft products. For many years, Apache dominated the market share of all websites, far ahead of Microsoft. More recently, another open source server, nginx (pronounced ‘engine X’), has emerged as the top performer, with Apache in second place (see the web server surveys at http://news.netcraft.com). The dominance of these open source options invites the question: how could groups of part-time volunteers organize themselves so efficiently as to outperform one of the biggest software developers in the world?

The answer to this question probably has something to do with the unique characteristics of peer production as an economic model for the production of information (and other economic goods). According to Benkler, two kinds of economic arrangements have traditionally dominated in capitalist societies. The first is the ‘market model’, in which individuals sell things directly to the public, making decisions about their behaviour according to the market (for example, what people want to buy, how much money they have to spend). The second is the ‘firm model’, in which individuals work in a hierarchical firm and make decisions about their behaviour based on what their boss tells them to do. In a free market system, firms will emerge wherever organizing the production of goods and services is cheaper than securing those goods and services by market exchange.

Commons-based peer production, largely made possible by digital media, has arisen as an alternative to these two models. In peer production, the individual works neither alone nor for a boss who tells him or her what to do. Instead, the individual works as part of a loosely organized group of people, connected by networked computers and often distributed over a large geographical area. This system of organization is sometimes referred to as the ‘crowd model’. In these groups, there is no boss to make sure that the project succeeds. Instead, things work themselves out by virtue of the sheer size of the group and everyone in the group contributing according to his or her abilities and talents.

There is an important advantage to peer production, compared to markets and firms. That advantage has to do with the way that peer production opens participation up to everyone, allowing individuals with the necessary interest and skills to self-select and take part in the project. In the firm model or market model, people do work because their boss tells them to, or because there is demand in the market and they wish to profit from it. However, the boss or the market may not always be very good at identifying the right individual for the job.
By contrast, in the crowd model people do work because they are interested and consider themselves to have the necessary skills. This system is sometimes more effective because the selection of individuals is left up to the individuals themselves, and they are the ones with the best information about their own suitability. What’s more, in the kind of knowledge work that peer production involves, selecting individuals with appropriate skills for the job is very important.

But what about the limitations of this model? One possible limitation is motivation. In the market-based system, people are motivated by the desire to earn money. In the firm-based system, people are motivated by the desire to earn money and the desire for security. But peer production relies on people who are willing to forego the opportunity to make money, and work for some other motive. In peer production, people are motivated chiefly by ‘fun’. What we mean by ‘fun’ here is a complex combination of people’s intrinsic desires to do things that interest them, to interact with like-minded people, to ‘show off’ their talents or intelligence, and to be part of something bigger than themselves. Being part of a peer production project can provide people with indirect benefits, such as the opportunity to build ‘social capital’, improving their reputations as they are recognized for their contributions.

Experience shows that the question of motivation is in fact a rather trivial one. It is more important to find ways to reduce the effort of participating than it is to provide motivations to participate (of course, some threshold level of motivation is essential). The model works best if the project is sufficiently ‘modular’ or ‘granular’ to be broken down into small chunks that only take a small amount of effort to process. These chunks then need to be distributed to a very large group of people. The idea is that the many small efforts of a large number of individuals will eventually add up to a meaningful outcome. Finally, once these chunks have been isolated and processed, it must be possible to cheaply re-integrate them into a meaningful whole, some kind of final product.

An early example of a successful peer production project in research and development, which began in the year 2000, was NASA’s Clickworkers project. This project provided public volunteers with the opportunity to get involved in scientific research, specifically the analysis of images from Mars. Volunteers visited a website where they could view images from Mars and either map Martian land features or count craters. The tasks required human judgement, but no specific scientific training was necessary. In this example, the task is clearly of a modular nature: it can be broken down into individual pictures, which can then each be analysed by a number of ‘clickworkers’. The analysis is easy and doesn’t require a lot of effort. The results of the analysis can be aggregated and averaged out using some kind of algorithm in order to integrate the many contributions into a single final analysis.
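That aggregation step can be sketched very simply. The algorithm NASA actually used is not described here, so the median below is just one plausible, robust way of averaging out many volunteers’ judgements; the counts are invented:

```python
from statistics import median

# Hypothetical crater counts for one Mars image, one entry per volunteer.
counts_for_image = [12, 14, 13, 12, 40, 13, 12]  # 40 is a careless outlier

# The median is a robust aggregate: it discounts occasional careless clicks,
# so many small volunteer efforts add up to one reliable analysis.
consensus = median(counts_for_image)
print(consensus)  # prints 13
```

Run over thousands of images, a procedure like this turns many tiny, self-selected contributions into a single usable dataset.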
In all likelihood, people volunteer to analyse a few images partly for the fun of being involved in NASA research, but also because it only takes them a few minutes to do so. NASA has evaluated the research that has been conducted in this way, and found that the combined efforts of a large group of clickworkers are at least as good as the efforts of a single trained scientist working with the same images.

This kind of ‘crowd sourcing’ can also work well in commercial enterprises. One example is Lego’s Mindstorms. In 1998, Lego released a product with programmable pieces to turn its interlocking blocks into programmable robots and machines. Shortly after the release of the product, consumers started hacking the software and creating their own programs to do much more than what Lego engineers had designed. Initially, Lego threatened to sue those who were violating their software licence, but quickly reversed their position when they realized that they had an enormous source of creativity with which to co-develop their toy. Lego then developed a website where users could share software for programming Lego toys. Furthermore, Lego also offered free downloadable development software so that consumers could easily make new creations and share them with other consumers. Today, there are a range of active online communities that provide support for consumers who want to figure out how to get the most out of their Lego Mindstorms product.

As this example shows, in the early 2000s the collaborative practice of peer production ushered in an entirely new economic model for many commercial enterprises. Authors Don Tapscott and Anthony Williams called this model ‘wikinomics’, described in their 2006 book of the same name. In their book they suggest that businesses must adapt to the ways of thinking of this new economic environment in order to remain competitive. Among other things, they highlight the way that peer production is affecting the roles of producers and consumers. As in the example of Lego Mindstorms, many consumers now take a more active role in design and development. In order to reflect these changing relationships, Tapscott and Williams adopt the term ‘prosumer’ to describe this new kind of active consumer.

Wikinomics and peer production also challenge the way that we think about authorship and ownership. Traditional wisdom and current intellectual property laws say that your creative work (including the information you produce) should belong to you alone. However, as mentioned above, commons-based peer production projects like the open source software initiative rely on contributors giving back to the community and returning their creative innovations to the commons. Adopting a traditional approach to authorship and ownership in the open source software movement would be disastrous for the project because of the way that programmers innovate by building on the work of others.

None of what we have discussed here is to say that the crowd model is always going to outperform traditional firm and market models. Jaron Lanier, author of You Are Not a Gadget (2010), questions the potential of the crowd model when compared with the traditional firm and market models.
He points out that, as impressive as they are, the achievements of the open source software movement are in fact limited to relatively straightforward kinds of development: for example, server software like Apache, and office software like OpenOffice. Although the crowd is effective at organizing to produce these ‘software clones’, true innovation, Lanier believes, relies on the vision, creativity, drive, and management of individual leaders typically working in traditional hierarchical organizations. That’s why, he says, the open source software collective was unable to come up with a revolutionary product like the iPhone. Instead, the individual brilliance of Steve Jobs (supported by his creative team) was needed to make this happen.

As early as 2007, the internet critic Andrew Keen suggested that the enthusiasm for user-generated content on the internet had gone too far, destroying professionalism and drowning high-quality content in a deluge of amateur content. With the sheer amount of disinformation circulating on the internet nowadays, it is hard not to reach the conclusion that the collective sometimes works in ways that are counterproductive.


More recently, the idea of crowd sourcing has been taken in new directions by technology companies like Uber and Airbnb, which have applied crowd sourcing to a new economic model known as the ‘sharing economy’. In the sharing economy, tech companies provide platforms that allow individuals to make their services available to others, effectively connecting suppliers with potential customers. Rather than investing in cars or rental properties and employing the necessary drivers and property managers, companies like Uber and Airbnb effectively ‘crowd source’ these elements of their business. This has opened up new economic opportunities for many individuals. However, these companies have also been tremendously disruptive, not only to the taxi and hotel industries but also to related sectors such as real estate. Locals in some popular tourist destinations are now unable to rent property because landlords can get more by renting it out on Airbnb.

CASE STUDY: THE WIKI The word ‘wiki’ comes from the Hawaiian word for ‘quick’ and refers to a kind of website which is designed to allow its users to quickly and easily create and edit web pages, using only a web browser. The frst wiki, Wikiwikiweb, was designed by American computer programmer Ward Cunningham and launched in 1995. It focuses on the topic of software development and can be visited at http://c2.com/cgi/wiki. These days, wikis like those at fandom.com tend to be used to create knowledge bases for fans of pop-cultural products like flm franchises, TV shows, and videogames. In addition, wikis are also used in corporate contexts to support collaborative projects: for example, the Microsoft Teams platform includes a wiki tool that can be used for collaborative writing. Wikis provide a number of affordances that make them interesting collaborative tools. In particular, most wikis now provide their users (potentially anyone) with: 1. The ability to easily create, edit, and hyperlink to web content; 2. A discussion page, where users can talk about the content that they create, posting comments to explain their revisions or challenge the revisions of others; 3. A ‘history’ function that allows users to view previous versions of the page, compare changes between different versions, and roll back to an earlier version if they want to. These affordances can provide a number of benefts to collaborative writers. First of all, a wiki can be set up to invite contributions from

Collaboration and peer production 239

anyone, even anonymous users. This provides the potential to expand the collaborative team and draw upon diverse perspectives, and such diversity is likely to have a positive effect on the project. Secondly, because wikis are web-based, they provide an up-to-date, centralized record of the document that is under creation. This solves one problem of collaborative writing, namely keeping track of different versions of a document that have been separately edited by different authors. Finally, wikis provide ways of discussing the document contents and reverting to earlier versions where necessary. There are, however, some constraints associated with these features as well. For example, the fact that anyone can edit a wiki opens up the possibility of intentional disruption and vandalism of the website. Similarly, the history function can be abused, as when 'edit wars' break out, with different users repeatedly rolling the wiki page back to the version that they prefer. Some wiki platforms provide the ability for a moderator to lock changes until major issues have been resolved. Authors of wikis who are using the discussion function to talk about such issues have to become very good at negotiating through text, a medium that some would argue is relatively impoverished, especially compared to face-to-face interaction. Without a doubt, the most well-known wiki at the time of writing is Wikipedia. Studying contributions to Wikipedia can provide interesting insights into the way massive online collaboration using a wiki works. In particular, we can see that the technological affordances described above must be complemented by a system of community norms to ensure that the group works in a cohesive way. Wikipedia presents itself as 'the free encyclopaedia that anyone can edit.' It originally started in 2001 as a supplementary project to Nupedia, another effort to create a free encyclopaedia.
Nupedia, which relied on experts to write articles, had to be abandoned because of difficulties getting authors to agree to contribute and inefficiencies in article production and editing. The Wikipedia project, in contrast, brings together volunteers from very different backgrounds, many of whom are not recognized experts in the areas on which they write and most of whom have never met before, to collaborate to create what has become the largest reference website available (http://en.wikipedia.org/wiki/Wikipedia:About). Wikipedia is remarkable because of the nature of its mass collaborative authorship. According to Wikipedia, there are roughly 91,000 active contributors to the website, and these people are largely volunteers, not paid for their effort or expertise. Contributors are self-selected rather than vetted by experts, and yet the final product

compares well with professional encyclopaedias like the Encyclopaedia Britannica (Giles, 2005). Furthermore, as a collaboratively written text, contributors understand that individuals cannot take credit for their work, beyond the username information that is recorded in each page's history. With so many people working on the project, managing the group in terms of shared purpose, roles, and conflict resolution becomes a major challenge. Not surprisingly, the Wikipedia community has developed a set of guidelines, policies, and social norms to deal with such issues. Over time, the Wikipedia community has reduced the principles by which it operates to a core set known as 'The Five Pillars'. These are:

• Wikipedia is an online encyclopaedia
• Wikipedia has a neutral point of view
• Wikipedia is free content
• Wikipedians should interact in a respectful and civil manner
• Wikipedia does not have firm rules

(See http://en.wikipedia.org/wiki/Wikipedia:Five_pillars)

Taken together, these Five Pillars help to provide group cohesion by: 1) stating the aim of the wiki, i.e. to construct a free, online encyclopaedia; 2) stating some expectations about how this aim will best be achieved, i.e. by writing articles with a neutral point of view, treating collaborators with respect, and by not slavishly following or enforcing community-generated rules to the letter. Where Wikipedians disagree about the content of a Wikipedia page, it is not surprising to see them referring to one or other of these Five Pillars. The following example (Figure 11.5) comes from a Wikipedia discussion page, on an article about statistical inference (http://en.wikipedia.org/wiki/Statistical_inference). The first participant raises a perceived problem of bias, a failure to meet the standard of neutral point of view (NPOV), in the comments of one editor. The second participant adds a link to the Wikipedia project page that provides detailed guidelines for achieving an NPOV. However, the original editor (the third participant in this discussion) disputes the accusation of bias, and goes on to justify the substance of what he has written in the article. Here, the technological affordance of the discussion page provides a venue for editors to discuss and resolve conflicts about the content of articles. However, the technological affordance alone is not sufficient to ensure that such discussions will be fruitful. Ultimately, the community must establish norms by which contributions can be evaluated,

Neutral tone versus pejorative language [edit]

I have repeatedly edited KW's comments, which imply that randomization is the only defensible approach to inference - see comment on "playing with data sets" above. There are several approaches to inference, all of which merit impartial and neutral-toned discussion. (talk) 19:50, 2 March 2010 (UTC)

Agreed. Basic considerations are at WP:NPOV. (talk) 10:05, 3 March 2010 (UTC)

No statement written by me implies that "randomizaton is the only defensible approach". Please quote one offending text, or (or both)! On the contrary, I clarified the "nonparametric" use of permutation tests, […]

Figure 11.5 Wikipedia discussion page.

such as those relating to the NPOV in the example in Figure 11.5. In addition, appropriate norms of interaction (i.e. netiquette) and procedures of dispute resolution also need to be established and followed. In order to collaborate effectively, a wiki community has to develop social norms that foster that collaboration. Wikis provide a tool which, as in the example of Wikipedia, is capable of facilitating massive collaboration with associated new literacy practices. In order to make meaningful contributions, participants in such a collaborative writing project have to become good at managing relationships with a large number of partners, most of whom they have never met. Among other things, this involves: understanding group norms; understanding the group's writing purpose; adopting appropriate roles; communicating appropriately with the group; and resolving conflict in appropriate ways. In addition to wikis, a number of other tools can also be used to facilitate collaboration in massive, loosely organized groups of like-minded individuals who may not have met their collaborators face to face. We can think of examples of Facebook groups being used to organize online collectives of researchers (e.g. the 'panmemic' collective, see Adami et al., 2020) and Google Docs being used by a range of corporate stakeholders, academics, and media organizations to compile resources to counter fake news (see Design solutions for fake news, n.d.). Such tools change the quality of collaborative practices, both through the technological affordances that they introduce and through the social norms that grow up around established collaborative communities.
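The 'history' affordance described in the case study above, which stores every revision and allows roll-backs, can be sketched as a simple data structure. The sketch below is purely illustrative (class and method names are our own invention, not the implementation of any real wiki engine such as MediaWiki):

```python
class WikiPage:
    """A minimal, hypothetical sketch of a wiki page with a revision
    history. Real wiki engines store revisions in a database and
    record far more metadata (timestamps, edit summaries, etc.)."""

    def __init__(self, title):
        self.title = title
        self.revisions = []   # each entry: (author, text)

    def edit(self, author, text):
        # Every edit appends a new revision; nothing is overwritten,
        # which is what makes 'history' and roll-back possible.
        self.revisions.append((author, text))

    def current(self):
        return self.revisions[-1][1] if self.revisions else ""

    def history(self):
        # Who changed the page, oldest first.
        return [author for author, _ in self.revisions]

    def rollback(self, n):
        # Restore revision n by appending it as a new revision, so
        # the roll-back itself is recorded in the history too.
        author, text = self.revisions[n]
        self.revisions.append(("rollback:" + author, text))


page = WikiPage("Statistical inference")
page.edit("alice", "Inference draws conclusions from data.")
page.edit("bob", "Randomization is the only defensible approach.")
page.rollback(0)   # revert the contested edit
print(page.current())  # → "Inference draws conclusions from data."
```

Note how a roll-back does not delete anything: it simply adds another revision, so 'edit wars' of the kind described above remain fully visible in the page history.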

The wisdom of crowds

At the heart of peer production is the idea of 'collective intelligence', which we introduced in Chapter 2. Essentially, this is the idea that the group is often smarter than the individual. In other words, if you have a large, diverse group of people and you set them a problem to solve, then the collective solution of the whole group is often better than the best solution of any one individual. In this way the collective effort is often greater than the sum of the individual contributions that make it up. In his 2004 book, The Wisdom of Crowds, author James Surowiecki illustrates the idea of collective intelligence with the example of Francis Galton, who in 1906 tested the idea (in a rather informal way) at a livestock show in the West of England. At the show, there was a competition to guess the weight of an ox, with prizes for the best guesses. It attracted 800 guesses from members of the public. Galton calculated the mean value of all the individual guesses and arrived at a collective estimate of 1,197 pounds. This collective estimate turned out to be very good indeed: the correct weight was 1,198 pounds. The idea of collective intelligence challenges our usual instinct that the best solution to a problem can be arrived at by consulting experts. One wonders how a group of people like the one at the livestock show could be so smart. After all, the average intelligence of a crowd like that is probably just average. As it turns out, having a large number of very intelligent people in the collective is not that important to the combined intelligence of the collective. Other factors play a more important role. Surowiecki points out four main characteristics of smart groups: 1) diversity; 2) independence; 3) decentralization; and 4) aggregation. Diversity in a group is good because then the group is able to take into account a wider range of different perspectives on the same problem.
Independence is important to encourage an environment where group members arrive at their own views, which they openly state and debate. Decentralization is helpful because decentralized groups can take into account a wide range of local conditions when they make decisions. Finally, for collective decision-making to work, the group must have some means by which it can aggregate all of the contributions of different members of the group. It follows that dumb groups are those that lack these characteristics. Instead, the following properties can be observed: 1) homogeneity; 2) centralization; 3) division; 4) imitation; 5) emotionality. The process of networked peer production that we describe above often works to encourage the formation of these kinds of smart groups. It does this mainly by opening the process up and allowing anyone to self-select for the project. The distributed groups that result usually involve: 1) a diverse range of individuals, 2) loosely connected in a more or less decentralized

network, 3) working in a more or less independent way, 4) to make a contribution which is aggregated into the collective work. However, a note of caution is in order. The idea of collective intelligence does not imply that a collective decision will always be better than an individual one. As Lanier (2010) points out, problems that can be reduced to discrete, measurable components are better suited to collective decision-making than those that cannot. For example, the collective decision-making that determines prices in a market, with each individual simply paying an 'affordable' price for goods, tends to work fairly well. In this example, individual interests can be reduced to a single variable, namely price, which can be aggregated to arrive at a collective decision. But if the laws of a country were put on a wiki for anyone to edit, the results would be chaotic because the collective would likely fail to reach agreement. In these more complex scenarios, the individual interests of members of the collective need to be regulated by checks and balances, which more often than not place decision-making authority in the hands of individuals. It is also important to remember that a lot has changed about the architecture of the internet since Pierre Lévy coined the term 'collective intelligence' in 1995, the most important change being the rise of 'artificial intelligence'. Much of the collaboration that takes place online nowadays does not involve groups of humans using technologies to interact, but rather humans collaborating with technologies such as algorithms and bots (which can sometimes pretend to be human).
While such technologies can sometimes make 'smart' crowds 'smarter' by filtering and refining the results of human collaboration, they can also wildly pervert the process of crowd sourcing by, for example, promoting certain kinds of ideas over others for reasons more linked to the profit-making motives of companies than the need to find the best solution to a problem, or by flooding the crowd with bots which are programmed to promote a particular solution or point of view.
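Galton's ox-weighing example above comes down to a single aggregation step: taking the mean of many independent guesses. A minimal simulation (with made-up numbers, not Galton's actual data) shows why the aggregate tends to beat almost any individual guesser, provided the errors are independent:

```python
import random

random.seed(42)
TRUE_WEIGHT = 1198  # pounds, as in Galton's example

# 800 independent, noisily informed guesses (simulated: each
# guess is the true weight plus random error).
guesses = [TRUE_WEIGHT + random.gauss(0, 60) for _ in range(800)]

# Aggregation: the crowd's estimate is simply the mean.
crowd_estimate = sum(guesses) / len(guesses)

crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
avg_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd error: {crowd_error:.1f} lb")
print(f"average individual error: {avg_individual_error:.1f} lb")
```

Independent errors largely cancel out when averaged, which is why Surowiecki's conditions of diversity and independence matter: if guessers imitate one another, their errors become correlated and no longer cancel.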

Memes and virality

The creative production and distribution of cultural artefacts is also something that takes place in a collective and so involves collaboration. One example of this kind of collaborative text production is when people create and remix memes, building on and modifying texts created by others. And, when those memes are circulated, it is the collective actions of many people connected in networks that sometimes causes these memes to 'go viral'. In other words, virality is essentially a kind of collaborative distribution of information. The term 'meme' was coined by the British biologist Richard Dawkins in his book The Selfish Gene (2016, originally published in 1976). He used it to describe ideas or cultural units that are copied and imitated and so

spread among people. Back then, he was talking about things like tunes, catchphrases, fashions, styles of architecture, and religious ideas. Nowadays, most people associate the term 'meme' with a particular kind of internet content that spreads very easily between users, who reference and remix each other's works. According to the communication scholar Limor Shifman (2014: 41), internet memes refer to: (a) a group of digital items sharing common characteristics of content, form, and/or stance, which (b) were created with awareness of each other, and (c) were circulated, imitated, and/or transformed via the Internet by many users. In Chapter 4, we looked at the multimodal design of image macro memes, showing how the combination of text, image, and intertextual references creates humorous effects. Such image macro memes share a common format, frequently a multimodal combination of writing and image that lends itself to easy modification. They are collectively created by internet users, who can transform them by creatively modifying either the text, the image, or both. As an example of this collective creative process, let's consider the 'Change my mind' meme.1 The original image was posted to Twitter on 18 February 2018 by conservative podcaster Steven Crowder, publicizing a regular segment of his podcast in which he debates issues with passers-by. The image depicts Crowder seated behind a desk outside the campus of Texas Christian University. Wearing a blue sweater, he is holding a coffee mug and smiling at the camera. On the desk are some papers, a microphone, and a second coffee mug. Fastened to the front of the desk and extending down to the ground is a large sign reading 'Male privilege is a myth: Change my mind.' Internet users edited the image by modifying the text on the sign, with examples like:

• Pineapple goes on pizza: Change my mind;
• Pop tarts are ravioli: Change my mind;
• Australians are just British Texans: Change my mind.

Since then, internet users have sometimes remixed the meme by modifying both text and image, for example ‘Coronavirus is a hoax: Change my mind’ with a photoshopped image that shows Crowder sprawled lifeless on the ground at the side of his desk, his chair tipped over and coffee spilt beside him. Finally, internet users have also created mashup memes, as in Figure 11.6. Here, a drawing of the well-known internet cat ‘Grumpy cat’, the star of many other memes, is photoshopped over Crowder and the sign reads ‘You suck: Don’t even try to change my mind.’ In keeping with the

Figure 11.6 ‘Grumpy cat’ ‘Change my mind’ mashup meme.

‘Grumpy cat’ meme, the words ‘Tears of my enemies’ are shown on a coffee cup photoshopped onto the tabletop. This example shows how meming involves both collective and individual work. Collectively, the ‘Change my mind’ meme draws on both 1) the intertextual cultural reference to the podcast, and 2) intertextual references to the artefacts of others. As Shifman (2014) points out, the texts were created ‘with awareness of each other’. Individually, each new remix contributes an individual innovation, which may or may not take hold in the community and be shared. This combination of collective and individual text production is so much a part of practices of internet content creation that some social media platforms go out of their way to facilitate it. For example, the video-sharing app, TikTok, provides a ‘duetting’ function that allows users to select another user’s video and create a ‘duet’: a new video that shows the performances of both the original user and the new user in a split-screen format. As the ‘Change my mind’ example shows, creating these kinds of memes involves the kind of multimodal literacies that we described in Chapter 4. As well as understanding how people go about creating memes, it is also important to consider why they do so. When people share memes, remix them, and reshare them, they are engaged in a communicative act that allows them to present themselves as a certain kind of person to members of their network (see Chapter 10), engaging in phatic communication that establishes and maintains social bonds (see Chapter 5). A certain amount of cultural

knowledge is necessary in order to successfully produce and interpret certain memes. For example, the popular lolcats memes involve a particular, non-standard form of language (e.g. 'I can has cheezburger?'). As a result, successfully producing this kind of meme allows a person to claim insider status in a light community (see Chapter 8). When the right people share the right content at the right time, memes and other kinds of internet content can go viral. One way of looking at virality is as a particular kind of information flow. This information flow is not unique to digital media—it was seen in pre-digital times as well—but the highly interconnected nature of digital media has certainly made it easier for content to 'go viral' and reach more people in a shorter time frame. The information scientists Karine Nahon and Jeff Hemsley (2013: 16) define 'virality' in the following way: Virality is a social information flow process where many people simultaneously forward a specific information item, over a short period of time, within their social networks, and where the message spreads beyond their own [social] networks to different, often distant networks, resulting in a sharp acceleration in the number of people who are exposed to the message. According to this view, a viral information flow follows an S-shaped curve—a slow-fast-slow pattern of exposure. The spread of information starts slowly as just a few people view and share, picks up speed suddenly with a lot of viewing and sharing, and then tapers off as the viral event comes to a close. For something to be considered viral, three key criteria have to be met: 1) the information must be shared by people in their social networks; 2) it must be shared quickly; and 3) it must have a wide reach that transcends individual networks, hopping from one social group to another. When something goes viral, it is likely to garner attention from a large number of people and can therefore be very influential.
As a result, a lot of people (especially advertisers and marketers) would like to know what it is that makes things go viral. The process is rather unpredictable and difficult to control: there is no 'recipe' that you can follow to create a viral event. Still, if you analyse viral events you can see that there are both top-down and bottom-up factors involved. For Nahon and Hemsley (2013), top-down factors come from opinion leaders or influencers (see Chapter 10). If influential individuals share information then it is more likely to be picked up and spread by others. They call these influential individuals network gatekeepers. Such people usually have a lot of followers and bridge multiple different networks (see Chapter 10), so that when they share interesting information it is likely to spread into these different networks. What is it about these people—apart from their position in the network—that makes them so influential? At any given moment in

time, they are the ones who are able to hold the attention of others, perhaps because they are perceived to be entertaining, insightful, or trustworthy. Bottom-up factors come into play when large numbers of people make the individual decision to share content. Nahon and Hemsley (2013: 63) point out that people are more likely to share content that is remarkable in some way. For example, the content has an emotional impact, or is humorous, surprising, novel, or resonates with people in some way. It probably also helps if the content has high production value, but this is not crucial: there are plenty of examples of low-quality video of shocking terrorist acts or police brutality at protests that have gone viral because of the interest of the subject matter. The notion that information with an emotional impact is more likely to be shared is something that social media algorithms pick up on as well. As we discussed in the last chapter, people tend to share emotionally engaging posts, and for this reason, social media algorithms tend to promote these kinds of posts to them. Internet trolls are also very aware of the network dynamics that can contribute to content spreading in a viral way. The fake accounts that they set up are first maintained for a length of time, posting the kind of harmless, humorous content that many people will engage with. This allows them to build up histories and reputations as entertaining and trustworthy accounts that have a large base of followers (some of these will also be fake, of course). Eventually, these fake accounts are used to promote misleading or polarizing content, but by this stage they have already established themselves for many people as a trustworthy source capable of holding their attention. All of this greatly enhances the likelihood of real people sharing misinformation in real networks.
Simple checks on the originating account—like investigating how long it has been up and running or how many followers it has—are unlikely to reveal anything suspicious. Most people tend not to think about their own practices of sharing as being part of a viral pattern potentially manipulated by algorithms, advertisers, and trolls. But understanding that information flows can be, and regularly are, manipulated in this way ought to make us think quite carefully before we share a post with our networks. If we're not careful, we could easily become part of a collaborative effort to distribute misinformation intended to heighten divisions in society.
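The slow-fast-slow S-curve that Nahon and Hemsley describe can be approximated with a simple logistic growth model, in which exposure grows in proportion both to how many people have already seen an item and to how many have not yet seen it. The sketch below is our own illustration of that general shape, not a formula taken from Nahon and Hemsley, and all names and numbers in it are invented:

```python
def cumulative_exposure(pop, rate, steps, seed_viewers=10):
    """Logistic ('S-shaped') spread: at each step, new exposures
    are proportional to the number of current viewers times the
    share of the population not yet reached."""
    exposed = seed_viewers
    curve = [exposed]
    for _ in range(steps):
        new = rate * exposed * (1 - exposed / pop)
        exposed += new
        curve.append(exposed)
    return curve


curve = cumulative_exposure(pop=1_000_000, rate=0.8, steps=30)
per_step = [b - a for a, b in zip(curve, curve[1:])]

# Spread starts slowly, accelerates sharply, then tapers off as
# the reachable population saturates -- the slow-fast-slow pattern.
peak = per_step.index(max(per_step))
print(f"sharing peaks at step {peak}; total reached {curve[-1]:,.0f}")
```

In this toy model, the 'fast' middle phase corresponds to the moment when content hops across networks via gatekeepers, and the final slowdown reflects the fact that fewer and fewer people remain who have not yet been exposed.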

ACTIVITY: THE CREATION AND CIRCULATION OF MEMES

Memes often arise in response to some kind of cultural event that has particular significance for a group of people. The example of the 'Change my mind' meme that we described above illustrates this well,

demonstrating that a meme can originate in user-generated cultural content (a podcast and a tweet). Memes have also originated in films, TV shows, advertisements, user-generated YouTube videos, stock photos, flashmobs, internet challenges, lip synching, recut movie trailers, and many more sources. Search the internet in order to identify a popular meme and answer these questions:

A. Collective production
   1. What was the original source of the meme?
   2. How did people transform or remix it?
   3. Did they mash it up with other memes?

B. Collective distribution
   1. Why do you think that people shared this meme?
   2. Were there bottom-up factors involved?
      a. Is the meme remarkable somehow?
      b. Does it have an emotional impact?
      c. Is it humorous, surprising, or novel?
      d. Does the meme resonate with people in some way?
      e. Does it have high production values or not? How does that affect sharing?
   3. Were there top-down factors involved?
      a. Was it shared by influential network gatekeepers?

Conclusion

In this chapter we have examined the way that people use digital media and communication technologies to collaborate. On one level, these technologies support collaborative writing projects in traditional contexts like the workplace by providing tools for team members to communicate with each other about their writing. On another level, the same tools have made possible a new collaborative information production model known as peer production. This model is particularly effective for knowledge work because of the way that it opens participation up to diverse groups of individuals, allowing those with the appropriate motivation and skills to self-select for a given project. We've seen examples of how businesses, government organizations like NASA, and non-governmental organizations like Wikipedia have been able to use the peer-production model and advance their interests by looking outside of the organization and drawing on the wisdom of

crowds. We have also seen how the production of popular cultural artefacts online is also a kind of collaborative process, with internet users working together both to create cultural products like memes and to distribute them. Understanding the way that networks are involved in this collective distribution process can help us to understand how something goes viral and how bad actors might seek to take advantage of our own participation in that process. In the next chapter, we go beyond these kinds of networked social interactions and turn our attention to issues of privacy and surveillance, exploring the way that much of what we do online is wrapped up in practices of 'watching' and 'being watched'. Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Note

1 (https://knowyourmeme.com/memes/steven-crowders-change-my-mind-campus-sign)

Chapter 12

Surveillance and privacy

At the beginning of this book, we noted that one of the most important things to remember about digital technologies is that, whenever we use them, we end up creating and disseminating information about ourselves. Sometimes we do this consciously, such as when we post a picture, video, or status update to a social media platform for our friends or followers to see. But more often than not we are creating information about ourselves in ways that we are not conscious of through, for instance, our searches, clicks, swipes, and likes, information that is collected by commercial entities and used to market goods and services to us. Of course, there are also other groups of people who might be using the internet to gather information about us, including law enforcement officials, hackers, scammers, stalkers, and government agents. Usually, when we think of literacies, we think of understanding the affordances media provide for communicating information. Just as important, however, is understanding how to control information and withhold it from others when we want to. New technologies have always been associated with disruptions of norms regarding privacy. In the late nineteenth century, for instance, the development of portable photography was seen as such a threat to people's privacy that it led the American lawyers Samuel Warren and Louis Brandeis (the latter a future US Supreme Court Justice) to assert that individuals have a 'right to privacy', even when they are in public places (1890: 206). Digital media have engendered a similar disruption by allowing people to create, store, and share information more easily than they could in the past. In fact, the digitization of information has changed the very nature of surveillance. In the past, to 'spy' on someone meant primarily to observe their physical actions. Nowadays it is more a matter of observing the 'data trails' they leave as they interact with technologies.
In 1988, the Australian communications scholar Roger Clarke coined the term dataveillance to describe this new form of surveillance involving the ‘systematic use of personal data systems in the monitoring of people’s actions and communications’ (1988: 498). One of the biggest challenges digital media present when it comes to privacy is how easy they make it for people to share information. In fact, the

whole point of many of the digital media we use on a daily basis is to make information about ourselves available to others, either for convenience or as a way to feel socially 'connected'. Social media platforms practically compel people to become objects of surveillance by encouraging the almost constant sharing of information. This, in turn, has brought about changes in social norms and expectations about what kinds of information should be public and what kinds of information should be kept private. In this chapter, we will consider privacy and surveillance as 'literacies', exploring how the new practices of 'watching' and 'being watched' made possible by digital media are affecting the ways we communicate, manage our identities, maintain relationships, and negotiate social norms. The starting point for this discussion is not the assumption that people should avoid disclosing information about themselves, nor that surveillance is necessarily bad, but rather that privacy and surveillance are important ways of managing our social interactions and social relationships which we need to master to successfully engage in social life both online and offline.

What is privacy?

One problem with talking about privacy is that, although most people regard it as something 'good', something that should be 'protected', even an inherent 'right' of individuals, many people find it difficult to articulate exactly what it is. Some confuse privacy with 'modesty', and so make the mistake of thinking that people who like to disclose personal information about themselves online 'don't care about privacy'. Others confuse it with 'secrecy', insisting that people who resist disclosing information about themselves are somehow 'inauthentic' or 'untrustworthy'. The fact is, all social life requires us to disclose information about ourselves to others. Sharing information is an essential part of constructing effective social identities and establishing healthy relationships. At the same time, successful social identities and relationships also depend on concealing certain information from others. What is important when it comes to privacy is not how much or what kind of information we disclose or conceal, but how much control we have over what information we are disclosing and what information we are concealing, when, where, and to whom. For social psychologist Irwin Altman (1975), privacy is not a 'thing' that we either have or don't have, but a process that we need to engage in whenever we interact with other people. In social interactions, we work together with others to manage our relationships through selectively adjusting what we disclose and what we conceal moment by moment. Of course, this is not always as easy as it sounds, because we often don't have complete control over what we disclose. The sociologist Erving Goffman (1959) makes the distinction between information that is 'given' and information that is 'given off'. Even when we don't intend to disclose information, people can

252

Digital practices

sometimes infer it from the way we look or the way we act. We may not want to disclose to others how old we are, for example, but information about our age is almost always ‘given off’ by our physical appearance. Not only do we ‘do’ privacy differently with different people, but we also do it differently in different situations, and so another important aspect of privacy is the degree to which we are able to control the contexts in which we produce information and the contexts into which information about ourselves travels. The philosopher Helen Nissenbaum (2009) argues that our ideas about what information we should reveal and what information we should conceal are always embedded in particular social contexts. She defines contexts as ‘structured social settings characterized by canonical activities, roles, relationships, power structures, norms (or rules), and internal values (goals, ends, purposes)’ (132). From this perspective, privacy is chiefly a matter of maintaining what Nissenbaum calls contextual integrity, that is, controlling how information flows between different contexts.

Based on the above, we can say that privacy is essentially our ability:

1. To control what information about ourselves to reveal or conceal (including, as much as possible, the information that we ‘give off’);
2. To control whom we reveal information to and be able to use the selective disclosure of information to manage our social relationships;
3. To control the social situations in which different kinds of information about ourselves are disclosed and understand the norms of disclosure of these different situations;
4. To control what happens to information about ourselves after we disclose it.

While in many ways digital media decrease our ability to control information, in some ways they actually increase it. We have, in fact, talked a lot in this book about the affordances digital media provide for controlling what aspects of ourselves others have access to.
We can send texts instead of calling or visiting someone in order to avoid using modes of communication (such as our voices or facial expressions) which might ‘give off’ information that we may want to keep secret (see Chapter 5). We can also screen our calls and messages from other people and decide whom we want to interact with and whom we want to ignore. We can touch up, crop, and apply filters to the images of ourselves we post to social media platforms, hiding aspects of our appearance we don’t want others to see (see Chapter 4), and we can choose apps like Snapchat which cause the images we send to disappear after the people we have sent them to have seen them. We can choose where to point our cameras during video calls, and even whether or not to turn our cameras on in the first place. Digital media can also help us to control information flows in the material world. We can engage in

Surveillance and privacy 253

text-based conversations with others without the people around us knowing what we are saying, and we can use our digital devices as ‘involvement screens’ in order to avoid having to talk with other people in public places (see Chapter 6).

At the same time, digital media have also made it more difficult for people to control flows of information. As we mentioned in Chapter 10, online social networks, with their large numbers of strong and weak ties that connect us in complex ways to all sorts of people, make it difficult for us to control who is able to see the messages that we post and to maintain the contextual integrity of the situations in which we communicate. In addition, the collaborative nature of self-presentation on such sites means that other people might disclose or spread information about us in unpredictable ways when they comment on or share our posts or tag us in pictures. Access to such environments is also inherently ‘leaky’, providing myriad ways, both legal and illegal, for unauthorized people to gain access to our information. Most importantly, our online interactions on practically every mainstream platform are constantly monitored by invisible participants, the most important being the owners of these platforms themselves, who meticulously collect and store vast amounts of information about what we do when we use the platforms. As we discussed in Chapter 10, many online environments are intentionally designed to reward the sharing of information; the more information you share, the more rewards you receive (for example, in the form of ‘likes’ and comments from other users). In fact, the main way people are rewarded for sharing information on social media is for their friends and followers to share that information with even more people.
Writing about Facebook, the media scholar Rachel Dubrofsky (2011: 124) notes:

    the main imperative is to upload endless data for others to pay attention to (by commenting on it or clicking the “like” tab) because the more attention the data get, the more the data circulate, and thus the more advertising a person’s Facebook page receives on newsfeeds.

Moreover, digital media give the information that we disclose a degree of permanence that it doesn’t have in the material world. In the material world, the words we utter in conversations disappear into the air, and even when we write things down in letters or notebooks, the paper on which they are written will someday decay. But the words and images that we upload to the internet, even when they are not intended for posterity, live on indefinitely and are almost impossible to delete.

One consistent finding of studies on online privacy is that, while people profess to be concerned about their privacy, they continue to engage in behaviours that open themselves up to surveillance. This phenomenon is known as the privacy paradox (Barnes, 2006).


One reason for this is that many people are not entirely aware of the degree to which their online activities are being monitored, the amount of information about them that is available, and what can be done with it. Many are also not entirely familiar with the steps they can take to protect their privacy, such as adjusting privacy settings on social media platforms and denying websites and apps permission to deposit cookies on their computers or gain access to information stored on their devices. The fact that people don’t normally take steps to protect their privacy, even when they feel uncomfortable with the amount of information about them others have access to, however, can’t be entirely explained by ignorance. What it really shows is how complicated privacy is. As we said above, participating in social life always involves some willingness to disclose personal information, and the need for privacy is only one of many needs people have. Other needs include the need to fit in with one’s social group, the need to impress other people, and the need to save time or conduct online activities in efficient and convenient ways. Often, decisions to disclose information or to allow information about us to be gathered are made based on a kind of cost–benefit analysis whereby people weigh the perceived dangers of disclosing information against the perceived rewards they may get from doing so.

Responses to privacy challenges

People have sought to address the privacy challenges described above in different ways, suggesting technological solutions, legal solutions, and solutions that focus on equipping people with different kinds of ‘literacies’.

Technological solutions usually depend on things like ‘privacy settings’ by which people can control who has access to their information and who doesn’t. While many platforms are making privacy settings easier to access and use, it is still the norm for most platforms to require users to expend effort to access and change such settings. Recall, for example, the number of steps you needed to take to access and change the location settings on your phone in the activity in Chapter 6. Meanwhile, for most platforms, the default settings allow the least amount of privacy, and, as we said in Chapter 7, default settings can have a profound impact on people’s choices. In the words of danah boyd (2010: n.p.), most online platforms are ‘public by default, private through effort.’ Other technological solutions include third-party software such as browser extensions that block internet companies from showing you ads, depositing cookies on your computer, or tracking you as you navigate to other websites. At the most extreme end of these software solutions are applications such as Tor, a free browser that enables anonymous communication through a principle called onion routing, by which information sent over networks (including users’ IP addresses) is hidden in multiple layers of encryption.
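The layering principle behind onion routing can be sketched in a few lines of Python. This is only a toy illustration: real Tor uses public-key cryptography negotiated with a network of volunteer relays, whereas the XOR ‘cipher’ and relay names below are invented purely to show how each relay can peel off exactly one layer without ever seeing the whole route or the plaintext.

```python
# Toy illustration of onion routing: the sender wraps the message in one
# encryption layer per relay; each relay peels off only its own layer.
# (The XOR "cipher" here is a stand-in for real encryption.)
from itertools import cycle

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: applying it twice with the same key undoes it."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Hypothetical relays, each knowing only its own key.
relays = [("relay1", b"key-one"), ("relay2", b"key-two"), ("exit", b"key-three")]

# The sender builds the onion from the inside out: the innermost layer
# uses the exit relay's key, the outermost layer uses the first relay's.
onion = b"destination|hello world"
for _name, key in reversed(relays):
    onion = xor_layer(onion, key)

# Each relay in turn removes one layer with its own key; only after the
# last layer is peeled does the plaintext (and destination) appear.
for name, key in relays:
    onion = xor_layer(onion, key)

print(onion)  # b'destination|hello world'
```

The design point the sketch makes is that no single relay holds enough keys to read the message: compromising one hop reveals only the next hop, which is why layered encryption hides users’ IP addresses from any individual observer.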


The problem with relying on technological solutions to control your privacy is that they usually require a degree of technological know-how that not all users have, or a degree of effort to implement that not all users are willing to expend. They can also give users a false sense of security, making them less vigilant to threats that the software may not be able to protect against (such as those that exploit human error or manipulate cognitive biases). Finally, such software solutions are inevitably ‘behind the curve’ when it comes to technological innovation, with legitimate internet companies and illegitimate scammers working day and night to invent new ways to collect data from people that developers of software solutions have not yet thought of.

Another way people have attempted to help others protect their privacy online is through the passage of legislation which restricts the kinds of data companies can collect from users and requires that they obtain consent for how they use these data. The highest profile example of such legislation is the European Union’s General Data Protection Regulation (GDPR), which requires website owners to obtain users’ affirmative opt-in before cookies and other tracking technologies can be deposited on their devices. There are a number of limitations to such solutions, including the fact that users often do not take the time to read the ‘fine print’ in permission dialogues and privacy policies, and the fact that, when confronted with prompts for consent, most users simply click ‘ok’ in order to get on with what they are doing (see below). Indeed, it is possible that laws such as the GDPR actually make it harder for users to make informed decisions about their privacy by conditioning them to give consent: if every website you visit asks you to click ‘agree’ in order to access its content, it is likely that you’ll just get used to clicking ‘agree’ without thinking about it.
There are many people (including the authors of this book) who think that neither technological nor legal solutions are enough to help people manage their privacy online, and that online privacy is instead a matter of equipping people with certain literacies to help them make complex decisions about what they want to disclose to others, when, where, and how. Unfortunately, most attempts to educate people about online privacy focus on teaching them about the technical and legal solutions we discussed above and take a largely negative approach, emphasizing techniques for concealing information but saying little about effective strategies for revealing information. As we said above, privacy is not the same as secrecy, and being able to manage privacy involves knowing what to show (and how to show it) as well as knowing what to hide. Philip Seargeant and Caroline Tagg (n.d.) argue that understanding what to share and what to hide is not just a technical skill, but also a social skill, one of the primary means through which we manage our relationships and build our identities both online and off. Along with technical literacies associated with ‘data management’, they say, people need to be equipped


with what they call ‘social digital literacies’: an understanding of ‘how our social interactions and relationships play an important part in influencing our decisions regarding what to share’ (n.p.). Most of these kinds of social literacies are not things that we learn about in school, but rather literacies that we acquire through interacting with peers. Much can be learned, in fact, from studying the everyday privacy practices of internet users. In her famous study of the digital practices of American teens, for example, media researcher danah boyd (2014) found that, despite what many adults think, teenagers are extremely concerned about preserving their privacy online; but, rather than limiting how much information they post to social media or making use of technological solutions like privacy settings, they often attempt to manage their privacy using discursive strategies, for example, carefully crafting their posts so that only certain people can understand what they are talking about.

A big part of developing and reflecting upon social digital literacies is understanding and engaging (either positively or negatively) with the norms of communication that grow up in the various communities we are part of and on the various media platforms upon which we stage our social interactions (see Chapters 8 and 10). In general, norms (rather than laws) are the most important guides for people’s behaviour when it comes to establishing and respecting boundaries around personal information. Offline, for instance, in most societies, people refrain from looking into other people’s windows, even when they are left open, and in public spaces such as elevators and train carriages they practise what Goffman (1972) calls ‘civil inattention’, avoiding staring at strangers (Tene & Polonetsky, 2014).
The difficulty with relying on norms is that they sometimes do not keep up with technological developments that make possible new situations that did not exist when the norms were established. While digital technologies themselves don’t necessarily change norms, people have to adapt their norms to the new practices these technologies make possible, and sometimes the companies that develop and market these devices and applications play a role in promoting new norms around their use. Social networking sites like Facebook, for example, have played an important role in promoting an ethos of radical self-disclosure and casting suspicion on privacy-promoting practices, such as pseudonymity, that were common in the early days of the internet. In 2010, in discussing news that Google was developing ‘brain implant’ technologies, Eric Schmidt, the company’s CEO at the time, said that it was ‘Google policy… to get right up to the creepy line—but not to cross it’ (Saint, 2010: n.p.). It is clear, though, that, in many cases, the goal of tech companies is not just to get as close as possible to the ‘creepy line’, but to alter where people draw the line, getting them to start regarding behaviour that before they might have considered creepy as ‘normal’ or even beneficial.


‘Social stalking’ (see below), where people monitor the social media sites of acquaintances in order to learn more about them, for example, is seen by most people as no big deal, whereas the same people would never consider hiding outside of people’s houses to find out who was visiting them. Ten or twenty years ago, if you had asked people whether they would be willing to carry around tracking devices that record every place they go and allow other people (such as their parents) to locate them whenever they want, they probably would not have consented, but that is exactly what today’s mobile phones do. The process through which technologies and the social practices they enable gradually come to be regarded as less and less ‘creepy’ is known as domestication (Silverstone & Hirsch, 1992).

One good example of domestication is the increasing popularity of digital security cameras such as Google’s Nest Cam for monitoring both the area outside of people’s houses and people inside the house (such as children, domestic helpers, and babysitters). One way that Google gets people to accept such technologies is by activating longstanding fears about things like home intrusion and child abuse, but that’s not the only way. Google also markets such cameras by creating associations with modernity, convenience, and even fun. It has even started a scheme in which users of the camera are invited to submit videos of the funniest moments captured on their home surveillance cameras (Burns, 2018).

Sometimes, however, consumers resist technologies that they consider ‘too creepy’ despite companies’ best attempts to promote them. This was the case with Google Glass, a headset worn like a pair of eyeglasses that allows people to access information about their environments via the internet as well as to take pictures and record videos with simple voice or gesture commands.
The product was rolled out for beta testing in 2013, creating an uproar from people worried about the prospect of users being able to secretly video record others in public and possibly, with the right software, to perform facial recognition and gather data about them. Those who were beta testing the product earned the unsavoury moniker ‘glass holes’, and Google ended up stopping the beta testing in 2015 without releasing the product on the market (though newer ‘enterprise’ versions have been developed for specialized professional and military use).

ACTIVITY: THE ‘CREEPY SCALE’

As we said above, many digital technologies introduce new possibilities for sharing information and collecting information from others that require people to readjust their expectations about privacy and to develop new norms and rules of etiquette. When these technologies are first developed, people sometimes find them ‘creepy’, but may later


get used to them. Some technologies, however, are regarded by too many people as too creepy and so are slow to be adopted. The first step in understanding how to deal with the changing social practices around privacy made possible by new technologies is to reflect on where you draw the line when it comes to ‘creepiness’. Consider the scenarios below (all based on actual cases) and rank them on the ‘creepy scale’. Discuss with a partner what factors you took into account when assigning a place on the ‘creepy scale’.

1. Your university has installed digital video cameras in classrooms equipped with facial recognition technology which can ‘take attendance’ by scanning the room and determining which students are there.

   I’m okay with that ______________|______________ That’s totally unacceptable
                           (‘creepy line’)

2. A childcare centre has installed surveillance cameras equipped with technology that can analyze the facial expressions and movements of workers in order to monitor them for signs of depression, anxiety, or other signs of mental or emotional instability.

   I’m okay with that ______________|______________ That’s totally unacceptable
                           (‘creepy line’)

3. An airline loyalty programme systematically monitors members’ social media feeds in order to find out more about them so that it can provide more ‘personalized’ service.

   I’m okay with that ______________|______________ That’s totally unacceptable
                           (‘creepy line’)

4. An app that the supermarket you shop at has asked you to install on your phone monitors your movements through the store and offers you discounts based on the aisle that you are in and the kinds of products you are spending time looking at.

   I’m okay with that ______________|______________ That’s totally unacceptable
                           (‘creepy line’)

5. Someone who has a crush on you monitors your social media accounts to find out about your interests, your friends, and where


you go, and tries to hang out in the same places you do in order to strike up a relationship with you.

   I’m okay with that ______________|______________ That’s totally unacceptable
                           (‘creepy line’)

6. A parent has purchased software that can monitor what is happening on every device connected to the home Wi-Fi network and provides detailed information about who is online, what applications they are using, and what websites they are visiting.

   I’m okay with that ______________|______________ That’s totally unacceptable
                           (‘creepy line’)

Lateral surveillance

Although much public discourse about internet privacy focuses on ‘third parties’ such as hackers and internet companies that syphon up people’s personal data for profit, for most people the most immediate privacy concerns are about controlling what kind of information people they know (parents, colleagues, friends) have access to. This is not surprising, since digital media, especially social media, make available a wealth of affordances for what is known as ‘lateral surveillance’ (Andrejevic, 2002). In fact, many digital tools seem to encourage this kind of peer-to-peer ‘social spying’.

Lateral surveillance, of course, didn’t start with the internet. Humans have always kept tabs on the people around them and commented on their activities through, for example, gossip. It might be argued that lateral surveillance is a necessary part of human social life, a way of ensuring that people don’t misbehave and of enforcing social norms. We want to know as much as possible about the people we interact with, and this almost always involves watching what they do and finding out what other people have to say about them. Lateral surveillance is also a way of building solidarity with others, showing them that you are paying attention and looking out for them.

The most common digital environments where lateral surveillance takes place are social media platforms, where users regularly engage in practices such as ‘stalking’ or ‘creeping’ (monitoring other people’s activities on social media by browsing through their posts, interactions, and friends lists) and ‘lurking’ (hanging out in online environments and reading other people’s contributions but rarely posting yourself). Stalking, legally defined, is repeated unwanted surveillance that makes victims feel threatened or harassed, but this is not the way most social media


users use the term. On social media, some amount of ‘stalking’ is considered perfectly normal. That’s not to say all social media stalking is okay. In fact, the sinister connotations of the word belie a kind of moral ambivalence. Whether stalking is considered ‘creepy’ or not depends on many things, such as who is doing the stalking, why they are doing it, and how they are doing it. People might stalk their friends, co-workers, teachers, or boss in order to figure out how to get along with them better. Or they might stalk someone they would like to be romantically involved with, or someone they used to be romantically involved with to find out if they are involved with someone else. They might be motivated by things like curiosity, attraction, or a desire for revenge, and they might do it casually or obsessively.

Alice Marwick (2012), in her study of social surveillance in everyday life, notes that power has a lot to do with whether or not people think ‘stalking’ is acceptable. People are much less comfortable with the likes of parents, teachers, and bosses stalking them than they are with friends doing so. In another study of the social surveillance practices of teens, sociolinguists Graham Jones, Bambi Schieffelin, and Rachel Smith (2011) note that while most of their participants considered their own stalking behaviours innocent and harmless, discovering that others were involved in online surveillance or voyeurism often made them the targets of gossip. Another important point that Jones and his colleagues make is that social media stalking is not necessarily a solitary activity: friends often engage in stalking people’s social media profiles in pairs or groups, suggesting that social media stalking can function like gossip, helping people to bond with friends. Some people have criticized social media sites for not just facilitating but actually encouraging practices of lateral surveillance, thus ‘normalizing creepy behaviour’ (McCann, 2012).
At the same time, social media users themselves clearly perceive some benefits to such practices. Sociologist Mary Chayko (2008) insists that watching and being watched on social media contribute to feelings of social solidarity and social cohesion in an otherwise fragmented social world. ‘“Watching” one another… and knowing that we are being watched,’ she writes, ‘is an active process of self-expression and other-interpretation’ (176); it helps us to ‘feel the presence of others in our networks and communities and to be present to them in return’ (115).

There is, of course, another side to the coin: the affordances digital media provide for lateral surveillance also make possible the ‘weaponization’ of people’s personal information in practices such as cyberbullying (which often involves the collection and dissemination of information about people in order to harass them or damage their reputations), doxing (publishing personally identifiable information or revealing the identity of people who wish to remain anonymous), and revenge porn (the distribution of sexually intimate videos or photos of people, often by former romantic partners).

One particularly problematic by-product of the increased opportunities for lateral surveillance made possible by digital media is online social


shaming, the practice of exposing the ostensibly inappropriate or illegal behaviour of others. As with all practices involving lateral surveillance, social shaming is a phenomenon with a long history in human societies. But digital tools not only greatly facilitate the acquisition of incriminating information about others; they also facilitate its dissemination and potential for ‘going viral’ (see Chapter 11). One problem with assessing such practices has to do with the fact that they are often motivated by a sense of moral righteousness; that is, people think that what they are doing is ultimately beneficial. The problem is that the victims of such shaming often end up being the targets of a kind of ‘mob justice’ in which social media reward expressions of ‘outrage’ (see Chapter 10) rather than engagement in more impartial or rational discussions about the behaviour in question. Victims also rarely have the chance to tell their side of the story.

Practices of social shaming range from the large-scale dissemination of the private data of corporations and governments by platforms like Wikileaks, to smaller-scale grievances, such as a waitress posting the credit card receipt of a customer who fails to leave her a tip (Zimmerman, 2013). The ubiquity of mobile phones equipped with digital video cameras means that even people’s offline public behaviours are subject to being recorded and posted on the internet. In some cases, as when citizens have circulated videos of police shootings, this practice has proven effective in holding powerful people to account and raising public awareness of serious social problems. The online dissemination of videos documenting the bad behaviour of ordinary people, however, no matter how rude or racist that behaviour may be, raises questions not just about privacy but also about fairness and due process.

Pretexts

A potentially much more consequential form of online surveillance comes not from friends, parents, or potential romantic interests, but from various ‘third parties’, whether they be hackers and scammers using illegal methods to gather data about users or corporate entities using legal means. The most important thing about this form of surveillance is that it often takes place with the passive or active consent of those who are being monitored. In other words, people, rightly or wrongly, decide to relinquish their personal data. To understand why this happens we need to explore the strategies that individuals and companies use to convince us that the benefits of disclosure outweigh the costs. This usually involves the creation of pretexts for disclosure that make us feel like revealing information is a reasonable or beneficial thing to do.

The most basic definition of a pretext is a reason that is given by someone to try to get other people to do something. In the field of social engineering, which studies how to psychologically manipulate people into performing actions such as divulging personal information, the practice of ‘pretexting’


involves a set of tried and tested tactics, which include: 1) creating scenarios in which divulging information seems reasonable; 2) establishing a relationship of trust between the victim and the person who is trying to get their information; and 3) creating a sense of urgency so that victims have less time to consider the consequences of their disclosure.

Perhaps the most obvious example of pretexting can be seen in the scam emails that we sometimes get which try to convince us to divulge our banking information in exchange for the promise of large amounts of money being deposited into our accounts (known as advance fee scams or Nigerian 419 scams because many of them originate in Nigeria). A writer of such an email, for example, might pose as a Nigerian prince attempting to transfer a large sum of money out of the country so that it doesn’t fall into the hands of corrupt government officials and request the recipient’s help, promising some percentage of the total in return. In addition, writers of such emails often try to create a relationship of trust with the recipient with words like ‘I’m writing to you because of your reputation as a man of good character and high moral standing.’ Finally, such emails usually pressure the recipient to act quickly with some claim that ‘time is running out.’

Of course, most of us don’t fall for such schemes: the stories the writers of such emails tell might seem too farfetched, or we might take features of the grammar or writing style as signals that they are fake. But some people do fall for them; in fact, more than ever before (Newman, 2018). Microsoft researcher Cormac Herley (2012) suggests that the seemingly unsophisticated pretexting techniques in such emails may be a feature rather than a bug, filtering out more sophisticated targets and attracting the most gullible, who can then be roped into the scheme for a longer period of time and tricked into making repeated payments to a foreign bank account.
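One small, practical literacy that follows from this discussion is checking whether a link’s visible text and its actual destination agree, since deceptive emails often dress a scam URL in a trusted brand name. The sketch below is a hypothetical helper, not a real spam filter: the function name, example URLs, and domains are invented for illustration, and it only catches the crudest mismatches (attackers also use look-alike domains and URL shorteners).

```python
# Heuristic check: does a link that invokes a trusted brand actually
# point at that brand's domain (or one of its subdomains)?
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str, trusted_domain: str) -> bool:
    """Flag links whose text claims a trusted brand but whose target
    host is not that brand's domain or a subdomain of it."""
    host = urlparse(href).hostname or ""
    brand = trusted_domain.split(".")[0]          # e.g. "amazon"
    claims_brand = brand in display_text.lower()
    on_brand_domain = host == trusted_domain or host.endswith("." + trusted_domain)
    return claims_brand and not on_brand_domain

# A scam link dressed up in a trusted name is flagged...
print(looks_suspicious("Update your Amazon address",
                       "http://amaz0n-refunds.example/login", "amazon.com"))  # True
# ...while a genuine link to the brand's own domain is not.
print(looks_suspicious("Update your Amazon address",
                       "https://www.amazon.com/account", "amazon.com"))       # False
```

The point of the sketch is less the code than the habit it encodes: hover over (or long-press) a link and compare the domain you see with the name the message uses before clicking.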
This form of pretexting is called phishing (because perpetrators are ‘fishing’ for people’s information), and most instances of it are much more effective than emails from Nigerian princes; scammers have become extremely good at impersonating trusted organizations and creating plausible scenarios. Figure 12.1, for instance, is an example of a scammer posing as Amazon.com offering a customer a refund because of a billing error, not an entirely implausible scenario. Features like the ‘reference code’ make the email seem even more authentic, and the information requested (the recipient’s address) does not seem particularly risky to disclose (compared to, for example, credit card details). When the recipient clicks on ‘Click Here to Update Your Address,’ they are taken to a convincing replica of the Amazon sign-in page where they are asked to enter their email and password, and with this information the scammer can access their Amazon account and all of the information (including credit card details) that might be stored there.

So far we have been focusing on examples of pretexting that involve outright deception. Most pretexting, however, is neither blatantly deceptive nor illegal. Many online retailers, software developers, and social media


Figure 12.1 Phishing email.

platforms use versions of the same strategies to get users to disclose personal information or to grant permission for these companies to track them or access information stored on their devices. Sometimes this involves creating scenarios in which users are involved in enjoyable activities, such as playing Candy Crush or taking online quizzes, that distract them from the fact that information about them is being collected. Others do it by framing the collection of data in ways that make it seem beneficial to users. Notices that ask users to give permission for cookies to be deposited on their devices, for example, often say that the purpose of the cookies is ‘so we can provide content that is more useful and relevant to you,’ while their real purpose is to serve you more targeted advertisements. And sometimes companies lull users into a false sense of security, making them think that they have more control over their data than they actually do. The privacy settings on social media platforms allow users to restrict the information they give to other users, but there are no settings that allow them to restrict the information that is collected by the platform owners. Some browsers, like Google Chrome, include an ‘incognito’ setting which prevents users’ browsing history from being recorded on their devices but does little to prevent the browser vendor, internet service providers, or websites from tracking users’ activities. Google keeps track of your searches and your clicks no matter what mode you are browsing in.

By far the most prevalent pretext internet companies use to gather data is the promise of convenience. Apps like Uber and Lyft make it easy for you to find a ride, though you must pay for that convenience by making your

264

Digital practices

location data available to the companies and whatever third parties they sell your data to. Social media platforms provide a convenient way for you to keep track of your friends, but also provide a convenient way for social media companies and the third parties they share your data with (including, in some cases, government agencies) to keep track of you. This is no secret. Although most people are not fully aware of the extent of the data collection that goes on when they use digital media, they are aware that they are disclosing some data and that they might be being tracked. However, they often accept this as a reasonable trade-off for the convenience they get from the apps they use and services they enjoy from internet companies. The problem, though, with seeing this as a ‘rational’ choice based on cost–beneft analysis is what is known as information asymmetry. Internet companies always know more about how much data they collect and what they do with it than users do. What is always most immediately salient to users when they make such decisions are the benefts (the chance to connect with a friend, to control their home thermostat remotely, or fnd their way when they are lost in an unfamiliar city). The consequences of giving away personal information, on the other hand, are usually not so immediate and not so clear. Indeed, people often see targeted ads as a beneft rather than a consequence, and are less conscious of the fact that surrendering data makes them easier to manipulate and control in the long term. Indeed, the whole point of manipulation is that the people who are manipulated are usually not aware that it’s happening.

Entextualization

Perhaps the biggest challenge in raising consciousness among people about digital surveillance by commercial entities is a general lack of understanding about exactly what kind of information is being collected about them and what happens to it. Most people think about the information that they disclose in terms of what is freely 'given' through the texts that they create, for example, in their social media profiles, and the content they upload onto platforms such as Instagram, Facebook, and Twitter. Many feel that the information they disclose online is not particularly private or incriminating, and don't think it's such a big deal if the companies that track them online know what their favourite books are or what they had for breakfast. This attitude is based on a limited understanding of digital information and communication, the idea that what we disclose when we communicate is mainly contained in the message that we send. But even in face-to-face communication, we always end up communicating much more than these messages. We 'give off' information with our facial expressions, our body language, and the seemingly incidental actions that we take in our daily lives. When it comes to digital surveillance, it is this 'given off' information that is much more important than the information that we 'give'. Social media companies are much less interested in the texts that you create consciously through what you 'say' online than they are in the texts that you create unconsciously through what you do.

The process of creating texts is called entextualization. The linguistic anthropologists Richard Bauman and Charles Briggs (1990: 73) define it as 'the process of rendering discourse extractable,' making it into 'a unit—a text—that can be lifted out of its interactional setting.' When you take a picture of a moment in your life to upload onto Instagram, you are engaging in entextualization, turning that moment into a text. Similarly, when you compose a text message to your friend, you are also engaging in entextualization, turning your thoughts or ideas or feelings into a text. Perhaps the biggest difference between online communication and face-to-face communication is the fact that online communication almost always involves entextualization, resulting in more or less permanent records of our communications. But that's not the biggest problem when it comes to entextualization in online environments. The biggest problem is that, online, we don't just create texts from information that we freely 'give', but also from information that we 'give off' through, for example, the websites we visit, the things we click on, and even our physical locations when we are using our mobile phones. In fact, the biggest difference between digital media and other media is not just that they require us to communicate through creating texts, but that practically everything we do when we interact with them is entextualized. Every action, interaction, and transaction generates a record.
The mistake many people make when it comes to digital surveillance is being vigilant about the texts that they voluntarily upload, but not about the myriad tiny actions which are captured by digital systems and come to constitute what Battelle (2005) calls 'a massive click-stream database of desires, needs, wants, and preferences'. One reason for this is that they don't see these tiny actions as particularly 'meaningful'. In other words, they project their own 'human' ideas about making and interpreting meaning onto machines. But digital technologies, powered by algorithms, make and interpret meaning very differently from humans. For digital surveillance systems, the actual content that we upload is not always particularly useful, because it is often difficult to 'read': we often use unconventional spellings and grammar when we communicate online, and much of our meaning is implicit rather than explicit. Pictures, too, are hard for computers to make sense of, though they are getting better at it. For these systems, a much more 'legible' form of information is metadata: data about our communication and the people we are communicating with that is generated through clicks, swipes, choices from menus, location coordinates, and connections within a network. For many people, the collection of metadata seems less intrusive, not nearly as threatening as someone, for example, 'reading' their emails or flipping through their Instagram photos. But in some respects, as Blaze (2013: n.p.) points out, 'metadata can reveal far more about us, both individually and as groups, than the words we speak.' Former head of the CIA, General Michael Hayden, put it more bluntly: 'We kill people based on metadata' (Cole, 2014: n.p.).

As a result, many of the digital media platforms that we use are designed to encourage us not just to share 'content', but to do so in ways that produce as much metadata as possible. Online dating sites, for example, ask us to disclose information about ourselves using dropdown menus with finite choices. Social media platforms encourage us to tag friends in photos and label posts with hashtags. 'Like' buttons are a particularly efficient way to get us to give away our data because what we give ends up being expressed in a simple binary format. Swiping right or left on dating apps similarly creates information that makes our desires and preferences more 'machine readable'.

It might not be immediately clear to some people why they should be so concerned about metadata. After all, when I like a post on social media or swipe right on a photo in a dating app, I don't really feel like I am disclosing a lot of personal information about myself. But digital surveillance systems don't rely on individual likes or swipes to make inferences about people. They rely on aggregates of likes and swipes that build up over time, which can be combined with other datasets about users (such as online purchases, search terms, and connections on social media) as well as datasets gathered from other users, all of which are analyzed by sophisticated algorithms in order to generate (usually very accurate) profiles of people and predictions about their future behaviour.
In Chapter 7 we talked about the importance of inferences in communication: how human–human communication depends on people making inferences about what's going on in each other's heads based on (sometimes limited) information, and how human–computer interaction depends on people making inferences about how the algorithms hidden below the surface of the apps we use or the websites we visit work. We called this algorithmic pragmatics. Perhaps the most important thing to remember about algorithmic pragmatics is that algorithms are able to make inferences much more efficiently than we are. If, for example, we try to figure out what someone is 'really like' by stalking their social media accounts, our inferences will inevitably be formed based on a relatively small amount of information that they have 'given' or other people have 'given' about them: the content they have posted, the interactions they have engaged in with others, the photos that they have been tagged in. When an algorithm sets out to find out what this person is like, perhaps in order to target advertisements to them, it goes about it in a rather different way, relying not on a limited set of information that this person has posted over a limited period of time, but rather on the information they have given off through everything they have ever done on these social media sites: every person they have interacted with, everything they have 'liked', every link they have clicked on, every physical location they have 'checked in' to. This data might also be combined with other data such as information about all of their purchases on online shopping sites, every emoji or animated GIF they have used in a messaging app, and every word they have ever typed into a search engine.

Finally, the way algorithms analyze data is very different from the way humans do. Rather than relying on 'reasoning', algorithms calculate mathematical probabilities based on correlations. In other words, it doesn't matter to them what this person 'really meant' when they used a particular emoji; the focus is on calculating the probability that a person who uses a particular emoji with a particular frequency might have particular demographic characteristics, personality traits, behaviours, or desires. And often algorithms don't even need that much data to make such inferences. Back in 2013, the researchers Michal Kosinski, David Stillwell, and Thore Graepel reported on an algorithm they had developed to see what kinds of private traits and attributes could be inferred from what a person has liked on Facebook. What they found was that from a list of just 68 Facebook likes, the algorithm could predict with a high degree of accuracy someone's age, gender, sexual orientation, ethnicity, religion, political views, intelligence, happiness, use of addictive substances, and whether or not their parents were divorced. It was this kind of algorithm that was later used by the data firm Cambridge Analytica to target messages to voters in the 2016 Brexit referendum in the UK and the 2016 Presidential election in the US.

CASE STUDY: GENRES OF DISCLOSURE

A genre is a type of text that is associated with a particular social practice and a particular community of users. The linguist John Swales (1990) talks about genres as texts that are structured around a staged series of social actions. A job application letter, for example, begins with the applicant indicating their desire to apply for a particular post, then goes on to provide information about the applicant and their qualifications for the job, and ends with a request for an interview. Another important point that Swales makes about genres is that their purpose is not just to perform social actions, but also to help people to show that they belong to a particular social group. Swales calls these groups discourse communities. Different discourse communities have different genres associated with them. Being able to write an academic essay, for example, is a way of showing that you are a competent member of a community of students. And being able to reproduce the genre of a game walkthrough or cheat sheet is a way of showing that you are a competent member of a particular community of gamers. By using genres, people both engage in recognizable social practices and become recognizable kinds of people. Genres are empowering because they give us a way of engaging in these practices and participating in the social groups associated with them. But they are also restrictive because they dictate how these practices ought to be carried out and, to some degree, determine the kinds of people that we are allowed to be.

Computer scientists Leysia Palen and Paul Dourish (2003) apply the notion of genres to the kinds of texts we interact with on the internet that are designed to get us to disclose information, which they refer to as genres of disclosure. Such texts include things like social media profiles, internet quizzes, and permission dialogues that ask people to give consent for information about them to be gathered by internet companies. Central to the concept of genres of disclosure are the structures of staged social actions that they guide users through, and the way they make users 'legible' to others.

Perhaps the most common genre of disclosure that we find online is the profiles people are asked to create on social media platforms and dating sites. It is practically impossible to participate in such sites without giving some kind of personal data. The minimum amount of information required to create a Facebook account, for example, is a first name, last name, valid email address, password, gender, and date of birth. But when you are filling out your profile, Facebook will also ask you for additional optional information such as where you go to school and where you work, as well as past schools and places of employment, all of the different places you have lived, the people who are related to you, your relationship status, and your favourite books, movies, and hobbies.
In such contexts, you are made to feel that the information you are giving is for the benefit of your 'friends' rather than for the benefit of Facebook, and the more information you give, the more you are encouraged to give. Facebook also encourages users to record important 'life events' in their profiles, such as getting married, buying a new car, having children, or competing in a marathon, framing these acts of disclosure as opportunities for users to 'commemorate' important milestones in their lives. All of this information, of course, is used to create a rich consumer profile that advertisers who use Facebook's services can exploit. It also gives Facebook more information that it can use to encourage you and your friends to engage more on the platform by reminding you of one another's birthdays, anniversaries, and so on.

Another popular genre of disclosure is the online quiz in which people answer a series of questions in order to determine, for example, which movie star they should marry, what kind of pasta they are, or how they would die in Game of Thrones. Such quizzes are reminiscent of the personality quizzes that have for years been a staple of entertainment and fashion magazines, the main difference being that when you take such quizzes online your answers are recorded and often used for marketing purposes. Many of these quizzes, in fact, are actually designed to get users to disclose information that is useful for advertisers. The quiz 'How would you die in Game of Thrones?',1 distributed on Buzzfeed, for example, asks questions like 'What is your worst fear?', 'What would be your last meal?', 'What (kind of place) do you consider heaven?', 'Pick your poison (i.e. what kind of alcohol do you like to drink?)', and 'Who would you rather sleep with?' After all of the questions have been answered, the user receives an answer such as:

You'd be decapitated! You're well intentioned and loyal to a fault. All you really want to see is justice in the world, but you find that others don't quite understand you. You work very hard, but unfortunately your efforts never seem to get the attention they deserve. Sadly, this means that you would be killed by a backwards system of justice.

The most important move in this genre, however, comes when users are asked to share their result over social media. Results, like the example above, are usually carefully designed to make people want to share them (who wouldn't want to broadcast the fact that they are 'well intentioned and loyal to a fault' and that their 'efforts never seem to get the attention they deserve'?). In exchange for posting the result to Facebook or some other site, however, the company that has designed the quiz often asks users for permission to access information stored on that site, such as their profile, friends list, and photos, giving the company access to even more valuable personal data.
Finally, by posting their result, users are unwittingly performing the labour of encouraging their friends and followers to disclose their personal data to the company.

Perhaps the most basic online genre of disclosure is the permission dialogue. Permission dialogues usually consist of popup windows that request users to give their consent for something. An app, for example, might ask for permission to access the camera or the microphone on your phone, a website might want permission to deposit cookies on your device so it can track you when you visit other websites, or a quiz company might ask for consent to obtain data from your social media account, as in the example above. There are sometimes legal requirements about the moves that need to be included in such genres. For instance, the European Union requires cookie consent requests to 1) notify users that cookies are being used by a website, 2) inform users what data is being collected, 3) explain how that data is being used and by whom, and 4) elicit the users' opt-in or invite them to partially or fully opt out. As it turns out, though, these staged moves are the least important feature of these genres, since most users never read them, not just because they are sometimes boring and difficult to understand, but because designers deliberately embed them at strategic places in sequences of actions where users are least likely to want to engage with them in a meaningful way. You might be asked by an app for permission to access information on your phone just as you are using the app to do something important like sending a message or taking a photo. Cookie consent notices always appear when you are trying to access information on a website, and the consent notice functions as a barrier to getting on with what you want to do. So people are naturally more likely to click 'Agree' just to get rid of the popup window. Designers further encourage this kind of 'automatic' consent through the way they label choices. Along with 'Agree', they almost never provide an unambiguous way to opt out (such as 'Refuse'); instead, they label the opt-out option with words like 'Learn more' or 'Manage settings', words which make these options even less appealing because they imply even longer delays before the user actually gets to read the website that they have navigated to.
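The four moves listed above can be treated like the stages of any genre. As a sketch (our own toy check, not a real compliance tool or an actual legal test; the move names are our paraphrases), a notice could be audited for whether each required move is present:

```python
# The four moves the EU requires of cookie consent requests (paraphrased):
REQUIRED_MOVES = {
    "notify_cookies_used",         # 1) cookies are in use
    "state_data_collected",        # 2) what data is gathered
    "explain_use_and_recipients",  # 3) how it is used and by whom
    "offer_opt_in_or_out",         # 4) opt in, or partially/fully opt out
}

def missing_moves(notice_moves):
    """Return the required moves that a consent notice omits."""
    return REQUIRED_MOVES - set(notice_moves)

# A notice that only announces cookies and shows an 'Agree' button:
print(sorted(missing_moves({"notify_cookies_used", "offer_opt_in_or_out"})))
```

Of course, as the discussion above suggests, a notice can contain all four moves and still be designed so that nobody reads them.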

ACTIVITY: WHAT DO THEY KNOW ABOUT YOU?

Many social media platforms such as Facebook and Instagram allow users to access the data that they have collected about them including, for example, the content they have posted, the people they have interacted with, the things they have 'liked', the links they have clicked on, and even the places where they have been, along with information about the kinds of things the platform's algorithms have determined that they are interested in. When Rodney downloaded his data from Facebook for example, he found the things Facebook thought he was interested in included:


He also found that the site had a detailed record (expressed in GPS coordinates) of where he had been ever since he had installed the app on his phone. His data for 20 May 2019, for example, included the following entries:


Find out if a social media platform you use allows you to download the data they store about you and how to do it. When we were writing this book, for example, users’ stored data was accessible on Facebook by following the steps below:

Similar services are available for Instagram,2 Snapchat,3 WhatsApp,4 and many other platforms. After you have downloaded and gone through your data, discuss the following questions:

1) Are there any data that you were surprised this social media platform had collected about you?
2) What do you think the purpose is for the platform collecting and archiving the various kinds of data that you found?
3) If the platform has compiled a list of your 'interests' for the purpose of advertising, how close do you think this list is to your actual interests?
4) Does looking at this data help you in any way to understand the kinds of texts you see on this platform (e.g. posts, news stories, advertisements)?
5) Does the platform provide any way for you to delete any of this data or block the platform from collecting it?
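Once you have a downloaded archive, a few lines of code can help you explore it. The sketch below is hypothetical: archives differ by platform and change over time, so the field name `inferred_interests` is a stand-in you would need to replace with whatever your own archive actually contains. Here it is run on a small fake archive so it is self-contained:

```python
import json
import os
import tempfile
from collections import Counter

def top_interests(path, n=5):
    """Count the most frequent advertising 'interests' in a JSON archive file."""
    with open(path) as f:
        data = json.load(f)
    return Counter(data.get("inferred_interests", [])).most_common(n)

# A small fake archive, standing in for a real platform download:
fake = {"inferred_interests": ["Running", "Running", "Cooking"]}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(fake, f)
    path = f.name

print(top_interests(path))
os.remove(path)
```

Pointing a script like this at your real archive (and at files you did not expect to find there) is often the quickest way to answer question 1 above.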


Conclusion

In this chapter we have addressed issues of privacy and surveillance, exploring how complicated these issues are. We have emphasized that surveillance is not always nefarious or abnormal, and that privacy is more a matter of controlling information than of concealing it. We have also talked about the importance of social norms and how the development of new technologies can sometimes create situations where people change or adapt their norms around privacy. We have discussed two kinds of surveillance: surveillance from peers, family members, and other people in your social circle, and surveillance from people and entities outside of your social circle, including scammers and hackers as well as corporations, institutions, and governments, and the various strategies they use to convince you to relinquish your personal data. The important thing to remember is that 'social' surveillance—the practices you and your friends employ to 'watch' and 'be watched' on social media sites—and 'corporate' surveillance—the practices that companies employ to collect data from you for business and marketing purposes—are not separate and independent from each other. In fact, they are closely intertwined. Most social media platforms are designed so that the more we watch one another (and encourage others to watch us), the easier it is for the platform owners to watch us as well. In other words, the attention economy that is promoted in online social spaces inevitably feeds into the larger economic interests of platform owners, each incident of peer surveillance feeding into a massive corporate surveillance system. These two forms of surveillance are also often closely intertwined with government surveillance.
In countries with more centralized internets, such as China, people's activities on social media are closely monitored by the government, and even in places where the internet is less centrally controlled, government agencies rely heavily on social media platforms, telecom companies, and internet service providers to provide them with data about citizens' activities.

At the beginning of this chapter we outlined three different approaches to helping people protect their privacy online: technical solutions, legal solutions, and literacy solutions. It should be clear from our discussion that none of these kinds of solutions alone is adequate. People need easy-to-use technical means to control who can access their data online, robust regulations from governments regarding how their data can be collected and used, and the skills to understand how to manage disclosure and concealment as essential tools for managing their social lives. Most importantly, people need to engage in conversations with friends, family members, colleagues, and classmates about the social norms governing the way we exchange data about ourselves and others, as well as conversations with the companies whose services we use, politicians and elected representatives, and community leaders about how we might reform the current system of 'surveillance capitalism', which appears to be predicated on the relentless erosion of personal privacy.

Be sure to download the e-resources for this chapter to find out more about the ideas discussed, get recommendations about other resources you can consult, do additional activities, and check the definitions of key terms.

Notes

1 https://www.buzzfeed.com/hbogameofthrones/how-would-you-die-in-game-of-thrones
2 https://www.facebook.com/help/instagram/181231772500920?helpref=hc_fnav
3 https://support.snapchat.com/en-US/a/download-my-data
4 https://faq.whatsapp.com/general/account-and-profile/how-to-request-your-account-information/?lang=en

Afterword Mediated Us

It’s all about us Father John Culkin (1967: 54), in an oft-quoted commentary on the work of Marshall McLuhan, said: ‘We become what we behold... We shape our tools and thereafter our tools shape us.’ What Culkin was trying to capture in this statement was the power of media to affect the kinds of meanings we can make, the kinds of relationships we can have with others, the kinds of people we can be, and the kinds of thoughts that we can think. What we have tried to emphasize in this book, however, is that this process is not as one-way as Culkin makes out; that even as our tools shape us, we never stop shaping our tools, creatively adapting them to the different situations we encounter both individually and collectively. While a big part of mastering digital literacies is understanding the affordances and constraints of the technological tools available to us, another big part is understanding ourselves, our particular circumstances, our needs, our limitations, and our capacity for creativity. Literacies are not things we develop just for the sake of developing them. We develop them to do certain things, become certain kinds of people, and create certain kinds of societies. And so the most basic underlying questions governing your development of digital literacies are: What do you want to do with them?, Who do you want to be?, and What kind of society do you want to live in? How you engage with digital literacies has a direct impact on the kind of friend, lover, colleague, and citizen you become. Every time you use digital tools, you create yourself and the society that you inhabit in small and subtle ways through what you click, what you share, and the different connections that you make. We develop technological tools and the social practices that they become part of collectively. All of us play a role in and have a stake in this development. This is especially true given the participatory affordances of digital media. 
We all have the responsibility for shaping our digital futures. Traditional ideas about 'literacy' bring to mind solitary figures with their noses buried in books. The more social understanding of 'literacies' that we have introduced in this book reminds us that nothing we do with digital media is ever really solitary; what we do with technologies is inevitably tied up with broader social practices and the social groups that we belong to. It follows, then, that becoming 'digitally literate' is not something we can do alone. It requires that we work together with others to understand the possibilities and the pitfalls associated with the myriad ways digital technologies connect us to one another.

Digital media open up a staggering range of possibilities for us. They make available new ways to create information and knowledge, to express ourselves, to reach out to others and form relationships, and to explore our potential as humans. They also bring with them a range of questions, some of which we have mentioned in this book: questions about privacy; questions about property and ownership; questions about freedom of speech and our capacity for political action; questions about what to pay attention to and about the extent to which our view of the world is distorted by the tools that mediate our experience and by our own agendas; questions about how to learn and how to educate our children; questions about truth and deception, friendship and love. Equally important, however, are the questions that we have not raised: those that are specific to your own situation, your own needs, or the particular social or cultural circumstances in which you live, or questions that will arise from new technologies developed after this book is published. These are questions that you will have to formulate yourself in collaboration with those around you. The Spanish painter Pablo Picasso famously said, 'Computers are useless. They can only give you answers.' Perhaps the most important digital literacy you can master is learning how to ask the right questions.

References

Adami, E., Al Zidjaly, N., Canale, G., Djonov, E., Ghiasian, M.S., Gualberto, C., … Zhang, Y. (2020). PanMeMic manifesto: Making meaning in the Covid-19 pandemic and the future of social interaction. Working papers in Urban language and literacies no 273. Retrieved from https://f-origin.hypotheses.org/wp-content/blogs.dir/8699/files/2020/08/WP273_Adami_et_al_on_behalf_of_the_PanMe.pdf
Albawardi, A. (2017). Digital literacy practices of Saudi female university students (Unpublished doctoral dissertation). University of Reading, United Kingdom.
Albawardi, A. (2018). The translingual digital practices of Saudi females on WhatsApp. Discourse, Context & Media, 25, 68–77. doi:10.1016/j.dcm.2018.03.009
Albawardi, A., & Jones, R.H. (2019). Vernacular mobile literacies: Multimodality, creativity and cultural identity. Applied Linguistics Review, 11(4), 649–676. doi:10.1515/applirev-2019-0006
Altman, I. (1975). The environment and social behavior: Privacy, personal space, territory, and crowding. Monterey, CA: Brooks/Cole Publishing Company.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. doi:10.1177/1461444816676645
Andrejevic, M. (2002). The work of watching one another: Lateral surveillance, risk, and governance. Surveillance & Society, 2(4), 479–497. doi:10.24908/ss.v2i4.3359
Androutsopoulos, J. (2014). Languaging when contexts collapse: Audience design in social networking. Discourse, Context & Media, 4–5, 62–73. doi:10.1016/j.dcm.2014.08.006
Androutsopoulos, J., & Busch, F. (2021). Digital punctuation as an interactional resource: The message-final period among German adolescents. Linguistics and Education, 62. doi:10.1016/j.linged.2020.100871
Antle, A.N., Tanenbaum, J., Macaranas, A., & Robinson, J. (2014). Games for change: Looking at models of persuasion through the lens of design. In A. Nijholt (Ed.), Playful user interfaces (pp. 163–184). Singapore: Springer. doi:10.1007/978-981-4560-96-2_8
Arjoranta, J., Kari, T., & Salo, M. (2020). Exploring features of the pervasive game Pokémon GO that enable behavior change: Qualitative study. JMIR Serious Games, 8(2), e15967. doi:10.2196/15967
Ashby, W.R. (1956). An introduction to cybernetics. London: Chapman & Hall.


Baehr, C. (2007). Web development: A visual spatial approach. Upper Saddle River, NJ: Pearson.
Bakhtin, M.M. (1986). Speech genres and other late essays. Austin: University of Texas Press.
Barnes, S.B. (2006). A privacy paradox: Social networking in the United States. First Monday, 11(9). doi:10.5210/fm.v11i9.1394
Barnett, T. (2014). Social reading: The Kindle's social highlighting function and emerging reading practices. Australian Humanities Review, 56, 141–162.
Baron, N.S. (2004). See you online: Gender issues in college student use of instant messaging. Journal of Language and Social Psychology, 23(4), 397–423. doi:10.1177/0261927X04269585
Baron, N.S. (2010). Always on: Language in an online and mobile world. New York: Oxford University Press.
Barthes, R. (1977). The death of the author. In R. Barthes (Ed.), Image, music, text: Essays selected and translated by Stephen Heath (pp. 142–148). London: Fontana.
Barton, D. (1994). Literacy: An introduction to the ecology of written language. Oxford: Blackwell.
Battelle, J. (2005). The search: How Google and its rivals rewrote the rules of business and transformed our culture. Boston, MA: Nicholas Brealey Publishing.
Bauman, R., & Briggs, C.L. (1990). Poetics and performance as critical perspectives on language and social life. Annual Review of Anthropology, 19, 59–88. doi:10.1146/annurev.an.19.100190.000423
Bavelas, J.B., Black, A., Lemery, C.R., & Mullett, J. (1986). 'I show how you feel': Motor mimicry as a communicative act. Journal of Personality and Social Psychology, 50(2), 322–329. doi:10.1037/0022-3514.50.2.322
Beavis, C. (2004). "Good game": Text and community in multiplayer computer games. In I. Snyder & C. Beavis (Eds.), Doing literacy online: Teaching, learning, and playing in an electronic world (pp. 187–205). Cresskill, NJ: Hampton Press.
Bell, A. (1984). Language style as audience design. Language in Society, 13(2), 145–204. doi:10.1017/S004740450001037X
Benkler, Y. (2002). Coase's penguin, or, Linux and "The Nature of the Firm". The Yale Law Journal, 112(3), 369–446. doi:10.2307/1562247
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. New York: Oxford University Press.
Bennett, S. (2011, June 18). Defending your privacy: Is Twitter more secure than Facebook? All Twitter. Retrieved from http://www.mediabistro.com/alltwitter/infographic-social-media-security_b10357
Bezemer, J., & Kress, G. (2016). The textbook in a changing multimodal landscape. In N.-M. Klug & H. Stöckl (Eds.), Handbuch Sprache im Multimodalen Kontext (pp. 476–498). Berlin: De Gruyter. doi:10.1515/9783110296099
Bhattarai, A. (2017, January 5). You won't believe how these 9 shocking clickbaits work! (Number 8 is a killer!). The Zerone – Medium. Retrieved from https://medium.com/zerone-magazine/you-wont-believe-how-these-9-shocking-clickbaits-work-number-8-is-a-killer-4cb2ceded8b6
Black, R.W. (2005). Access and affiliation: The literacy and composition practices of English-language learners in an online fanfiction community. Journal of Adolescent and Adult Literacy, 49(2), 118–128. doi:10.1598/JAAL.49.2.4

Black, R.W. (2006). Language, culture, and identity in online fanfiction. E-Learning and Digital Media, 3(2), 170–184. doi:10.2304/elea.2006.3.2.170
Blaze, M. (2013, June 19). Phew, NSA is just collecting metadata. (You should still worry). Wired. Retrieved from https://www.wired.com/2013/06/phew-it-was-just-metadata-not-think-again/
Blom, J.N., & Hansen, K.R. (2015). Clickbait: Forward-reference as lure in online news headlines. Journal of Pragmatics, 76, 87–100. doi:10.1016/j.pragma.2014.11.010
Blommaert, J., & Varis, P. (2015). Enoughness, accent and light communities: Essays on contemporary identities. Tilburg Papers in Culture Studies no. 139. Tilburg: Babylon Center.
Bogost, I. (2007). Persuasive games: The expressive power of videogames. Cambridge, MA: MIT Press.
Bogost, I. (2015, January 15). The cathedral of computation. Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2015/01/the-cathedral-of-computation/384300/
Bolter, J.D., & Grusin, R. (1999). Remediation: Understanding new media. Cambridge, MA: MIT Press.
Botsman, R. (2018). Who can you trust? How technology brought us together and why it might drive us apart. New York: Public Affairs Books.
Bourdieu, P. (1986). The forms of capital. In J.G. Richardson (Ed.), Handbook of theory and research for the sociology of education (pp. 241–258). London: Greenwood Publishing Group.
boyd, d. (2010, March 4). Privacy, publicity, and visibility. Presentation delivered at Microsoft Tech Fest, Redmond. Retrieved from http://www.danah.org/papers/talks/2010/TechFest2010.html
boyd, d. (2014). It's complicated: The social lives of networked teens. New Haven, CT: Yale University Press.
boyd, d., & Ellison, N.B. (2008). Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication, 13(1), 210–230. doi:10.1111/j.1083-6101.2007.00393.x
Brignull, H. (2011, November 1). Dark patterns: Deception vs. honesty in UI design. A List Apart. Retrieved from https://alistapart.com/article/dark-patterns-deception-vs-honesty-in-ui-design/
Brown, M., Coughlan, T., Lawson, G., Goulden, M., Houghton, R.J., & Mortier, R. (2013). Exploring interpretations of data from the internet of things in the home. Interacting with Computers, 25(3), 204–217. doi:10.1093/iwc/iws024
Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. doi:10.1177/1461444812440159
Bucher, T. (2018). If...then: Algorithmic power and politics. New York: Oxford University Press.
Buckingham, D., & Burn, A. (2007). Game literacy in theory and practice. Journal of Educational Multimedia and Hypermedia, 16(3), 323–349.
Burns, W. (2018, March 2). The "Nestie Awards" from Nest is a content idea firing on all cylinders. Forbes. Retrieved from https://www.forbes.com/sites/willburns/2018/03/02/the-nestie-awards-from-nest-is-a-content-idea-firing-on-all-cylinders/

Bush, V. (1945, July). As we may think. Atlantic. Retrieved from https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
Carr, N. (2011). The shallows: What the internet is doing to our brains. New York: W.W. Norton and Company.
Castells, M. (2002). The internet galaxy: Reflections on the internet, business, and society. Oxford: Oxford University Press.
Chayko, M. (2008). Portable communities: The social dynamics of online and mobile connectedness. Albany: SUNY Press.
Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York: New York University Press.
Chik, A., & Vásquez, C. (2017). A comparative multimodal analysis of restaurant reviews from two geographical contexts. Visual Communication, 16(1), 3–26. doi:10.1177/1470357216634005
Chua, A. (2018). Political tribes: Group instinct and the fate of nations. New York: Penguin Press.
Clark, A. (2003). Natural-born cyborgs: Minds, technologies, and the future of human intelligence. Oxford: Oxford University Press.
Clarke, R. (1988). Information, technology and dataveillance. Communications of the ACM, 31(5), 498–512. doi:10.1145/42411.42413
Clickbait. (n.d.). In Oxford English dictionary online. Retrieved from https://www.oed.com/view/Entry/37263110
Clowes, R.W. (2019). Screen reading and the creation of new cognitive ecologies. AI & Society, 34(4), 705–720. doi:10.1007/s00146-017-0785-5
Cole, D. (2014, May 10). We kill people based on metadata. The New York Review. Retrieved from https://www.nybooks.com/daily/2014/05/10/we-kill-people-based-metadata/
Constine, J. (2017, June 21). Snapchat launches location-sharing feature Snap Map. TechCrunch. Retrieved from https://techcrunch.com/2017/06/21/snap-map/?guccounter=1
Consumer Union. (2018, June 27). Letter to the Federal Trade Commission. Consumer Reports. Retrieved from https://advocacy.consumerreports.org/wp-content/uploads/2018/06/CU-to-the-FTC-Facebook-Dark-Patterns-6.27.18-1-1.pdf
Cossell, H. (2019, January 15). Why synthetic authenticity is honest fakery. We Are Social. Retrieved from https://wearesocial.com/blog/2019/01/why-synthetic-authenticity-is-honest-fakery
Crystal, D. (2011). Internet linguistics: A student guide. London: Routledge.
Cserzo, D. (2019). A nexus analysis of domestic video chat: Actions, practices, affordances, and mediational means. (Unpublished doctoral dissertation). University of Cardiff, Cardiff.
Culkin, J.M. (1967, March). A schoolman's guide to Marshall McLuhan. Saturday Review, 51–53, 70–72.
Dawkins, R. (2016). The selfish gene: 40th anniversary edition. Oxford: Oxford University Press.
De Certeau, M. (1988). The practice of everyday life. Berkeley, CA: University of California Press.
De Souza e Silva, A. (2006). From cyber to hybrid: Mobile technologies as interfaces of hybrid spaces. Space and Culture, 9(3), 261–278. doi:10.1177/1206331206289022

De Souza e Silva, A., & Firth, J. (2008). Mobile interfaces in public spaces: Locational privacy, control, and urban sociability. London: Routledge.
Dhrodia, A. (2018). Unsocial media: A toxic place for women. IPPR Progressive Review, 24(4), 380–387. doi:10.1111/newe.12078
Dillon, A., Richardson, J., & McKnight, C. (1989). Human factors of journal usage and design of electronic texts. Interacting with Computers, 1(2), 183–189.
Dockterman, E. (2016). How "textual chemistry" is changing dating. Retrieved from https://time.com/4217474/valentines-day-texting-and-dating/
Dourish, P. (2004). What we talk about when we talk about context. Personal and Ubiquitous Computing, 8(1), 19–30. doi:10.1007/s00779-003-0253-8
Dresner, E., & Herring, S.C. (2010). Functions of the nonverbal in CMC: Emoticons and illocutionary force. Communication Theory, 20(3), 249–268. doi:10.1111/(ISSN)1468-288
Dubrofsky, R.E. (2011). Surveillance on reality television and Facebook: From authenticity to flowing data. Communication Theory, 21(2), 111–129. doi:10.1111/j.1468-2885.2011.01378.x
Dumitrica, D. (2016, June 13). The truth about algorithms. OpenDemocracy. Retrieved from https://www.opendemocracy.net/en/digitaliberties/truth-about-algorithms/
Dunbar, R. (1996). Grooming, gossip and the evolution of language. Cambridge, MA: Harvard University Press.
Dürscheid, C. (2004). Netzsprache – ein neuer mythos [Net language – a new myth]. Osnabrücker Beiträge zur Sprachtheorie: Internetbasierte Kommunikation, 68, 141–157.
Eyal, N. (2014). Hooked: How to build habit-forming products. London: Penguin.
Facebook (n.d.). What names are allowed on Facebook? Retrieved from https://www.facebook.com/help/112146705538576
Faggella, D. (2018, December 7). News organization leverages AI to generate automated narratives from big data. Emerj. Retrieved from https://emerj.com/ai-case-studies/news-organization-leverages-ai-generate-automated-narratives-big-data/
Fink, E.M. (2011). The virtual construction of legality: "Griefing" & normative order in Second Life. Journal of Law, Information, and Science, 21(1), 1–32.
Finnemann, N.O. (1999). Hypertext and the representational capacities of the binary alphabet. Århus: Center for Kulturforskning, Aarhus Universitet.
Finnemann, N.O. (2017). Hypertext configurations: Genres in networked digital media. Journal of the Association for Information Science and Technology, 68(4), 845–854. doi:10.1002/asi.23709
Floridi, L. (2014). The onlife manifesto. New York: Springer.
Foo, C.Y., & Koivisto, E. (2004). Defining grief play in MMORPGs: Player and developer perceptions. Proceedings of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology, 74, 245–250. doi:10.1145/1067343.1067375
Frampton, B. (2015, September 14). Clickbait: The changing face of online journalism. BBC. Retrieved from https://www.bbc.com/news/uk-wales-34213693
Frosh, P. (2015). The gestural image: The selfie, photography theory, and kinesthetic sociability. International Journal of Communications, 9, 1607–1628.
Galloway, A.R. (2006). Gaming: Essays on algorithmic culture. Minneapolis, MN: University of Minnesota Press.

Gamson, J. (2011). The unwatched life is not worth living: The elevation of the ordinary in celebrity culture. PMLA, 126(4), 1061–1069. doi:10.1632/pmla.2011.126.4.1061
Garrod, S., & Pickering, M.J. (2004). Why is conversation so easy? Trends in Cognitive Sciences, 8(1), 8–11.
Gee, J.P. (2003). What video games have to teach us about learning and literacy. New York: Palgrave Macmillan.
Gee, J.P. (2004). Situated language and learning: A critique of traditional schooling. New York: Routledge.
Gee, J.P. (2008). Social linguistics and literacies: Ideology in discourse (3rd ed.). London: Routledge.
Gee, J.P. (2015). Unified discourse analysis: Language, reality, virtual worlds, and video games. Abingdon: Routledge.
Gehl, R.W. (2014). Reverse engineering social media: Software, culture, and political economy in new media capitalism. Philadelphia, PA: Temple University Press.
Georgakopoulou, A. (2007). Small stories, interaction and identities. Amsterdam: John Benjamins.
Gershon, I. (2010). The breakup 2.0: Disconnecting over new media. Ithaca, NY: Cornell University Press.
Gibbs, M., Meese, J., Arnold, M., Nansen, B., & Carter, M. (2015). #Funeral and Instagram: Death, social media, and platform vernacular. Information, Communication & Society, 18(3), 255–268. doi:10.1080/1369118X.2014.987152
Gibson, W. (1984). Neuromancer. New York: Ace Books.
Giles, J. (2005). Internet encyclopaedias go head to head. Nature, 438, 900–901. doi:10.1038/438900a
Gladwell, M. (2000). The tipping point: How little things can make a big difference. New York: Little Brown.
Goffman, E. (1959). The presentation of self in everyday life. New York: Doubleday.
Goffman, E. (1963). Behavior in public places: Notes on the social organization of gatherings. New York: Free Press.
Goffman, E. (1972). Relations in public: Microstudies of the public order. New York: Harper & Row.
Goffman, E. (1981). Forms of talk. Oxford: Blackwell.
Google Design. (2015, May 28). Making material design [Video file]. Retrieved from https://www.youtube.com/watch?v=rrT6v5sOwJg
GPT-3. (2020, September 8). A robot wrote this entire article. Are you scared yet, human? The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
Granovetter, M.S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380.
Gray, C.M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A.L. (2018). The dark (patterns) side of UX design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper 534. doi:10.1145/3173574.3174108
Hafner, C.A. (2015). Co-constructing identity in online virtual worlds for children. In R.H. Jones, A. Chik, & C.A. Hafner (Eds.), Discourse and digital practices: Doing discourse analysis in the digital age (pp. 97–111). Abingdon: Routledge.

Hafner, C.A. (2018). Genre innovation and multimodal expression in scholarly communication: Video methods articles in experimental biology. Ibérica, 36, 15–41.
Haidt, J., & Rose-Stockwell, T. (2019, December). The dark psychology of social networks. Atlantic. Retrieved from https://www.theatlantic.com/magazine/archive/2019/12/social-media-democracy/600763/
Hamari, J., Malik, A., Koski, J., & Johri, A. (2019). Uses and gratifications of Pokémon Go: Why do people play mobile location-based augmented reality games? International Journal of Human–Computer Interaction, 35(9), 804–819. doi:10.1080/10447318.2018.1497115
Haraway, D.J. (1991). Simians, cyborgs, and women: The reinvention of nature. London: Routledge.
Harris, T. (2016, July 27). Smartphone addiction: The slot machine in your pocket. Spiegel International. Retrieved from https://www.spiegel.de/international/zeitgeist/smartphone-addiction-is-part-of-the-design-a-1104237.html
Harvey, D. (1996). Justice, nature and the geography of difference. Oxford: Blackwell.
Hayles, N.K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.
Herbert, S.A. (1971). Designing organizations for an information-rich world. In M. Greenberger (Ed.), Computers, communications, and the public interest (pp. 37–52). Baltimore: Johns Hopkins Press.
Herley, C. (2012). Why do Nigerian scammers say they are from Nigeria? Microsoft Research. Retrieved from https://www.microsoft.com/en-us/research/publication/why-do-nigerian-scammers-say-they-are-from-nigeria/
Highfield, T., & Leaver, T. (2016). Instagrammatics and digital methods: Studying visual social media, from selfies and GIFs to memes and emoji. Communication Research and Practice, 2(1), 47–62. doi:10.1080/22041451.2016.1155332
Highland, M. (2006). As real as your life (director's cut) [Video file]. Retrieved from http://www.youtube.com/watch?v=fxVsWY9wsHk
Hill, J., Ford, W., & Farreras, I.G. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245–250. doi:10.1016/j.chb.2015.02.026
Hillis, K., Paasonen, S., & Petit, M. (2015). Networked affect. Cambridge, MA: MIT Press.
Horton, D., & Wohl, R.R. (1956). Mass communication and para-social interaction. Psychiatry, 19(3), 215–229. doi:10.1080/00332747.1956.11023049
Houghton, K.J., Upadhyay, S.S.N., & Klin, C.M. (2018). Punctuation in text messages may convey abruptness. Period. Computers in Human Behavior, 80, 112–121. doi:10.1016/j.chb.2017.10.044
Innis, H. (1951). The bias of communication. Toronto, ON: University of Toronto Press.
Isaac, W., & Dixon, A. (2017, May 10). Why big-data analysis of police activity is inherently biased. The Conversation. Retrieved from http://theconversation.com/why-big-data-analysis-of-police-activity-is-inherently-biased-72640

Ito, M., & Okabe, D. (2005). Technosocial situations: Emergent structurings of mobile email use. In M. Ito, D. Okabe, & M. Matsuda (Eds.), Personal, portable, pedestrian: Mobile phones in Japanese life (pp. 257–275). Cambridge, MA: MIT Press.
Ito, M., Matsuda, M., & Okabe, D. (2006). Technosocial situations: Emergent structurings of mobile email use. In M. Ito, M. Matsuda, & D. Okabe (Eds.), Personal, portable, pedestrian: Mobile phones in Japanese life (pp. 237–273). Cambridge, MA: MIT Press.
Jamison, L. (2017, December). The digital ruins of a forgotten future. Atlantic. Retrieved from https://www.theatlantic.com/magazine/archive/2017/12/second-life-leslie-jamison/544149/
Jasanoff, S. (2015). Future imperfect: Science, technology, and the imaginations of modernity. In S. Jasanoff & S.H. Kim (Eds.), Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power (pp. 1–33). Chicago, IL: University of Chicago Press.
Johansson, M., & Verhagen, H. (2010). And justice for all – The 10 commandments of online games, and then some.... DiGRA Nordic '10: Proceedings of 2010 International DiGRA Nordic Conference: Experiencing Games: Games, Play, and Players. Retrieved from http://www.digra.org/dl
Johnson, S. (2006). Everything bad is good for you. New York: Riverhead Books.
Jones, G., Schieffelin, B., & Smith, R. (2011). When friends who talk together stalk together: Online gossip as metacommunication. In C. Thurlow & K. Mroczek (Eds.), Digital discourse: Language in the new media (pp. 26–47). Oxford: Oxford University Press.
Jones, R.H. (2005). Sites of engagement as sites of attention: Time, space and culture in electronic discourse. In S. Norris & R.H. Jones (Eds.), Discourse in action: Introducing mediated discourse analysis (pp. 141–154). London: Routledge.
Jones, R.H. (2008a). Technology, democracy and participation in space. In V. Koller & R. Wodak (Eds.), Handbook of applied linguistics Vol. 4: Language and communication in the public sphere (pp. 429–446). New York: De Gruyter Mouton.
Jones, R.H. (2008b). The role of text in televideo cybersex. Text & Talk, 28(4), 453–473.
Jones, R.H. (2011). Discourses of deficit and deficits of discourse: Computers, disability and mediated action. In C.N. Candlin & J. Crichton (Eds.), Discourses of deficit (pp. 275–292). Basingstoke: Palgrave MacMillan.
Jones, R.H. (2012). Discourse analysis: A resource book for students. London: Routledge.
Jones, R. (2016, April 15). Indexicality and emplacement in Facebook, Snapchat and Tinder: Towards a geosemiotics of networked imaging. A plenary address delivered at the Language and Media Workshop: Multimodality in Social Media and Digital Environments, Queen Mary University, London.
Jones, R. (2020a). The rise of the pragmatic web: Implications for rethinking meaning and interaction. In C. Tagg & M. Evans (Eds.), Messaging and medium: English language practices across old and new media (pp. 17–37). Amsterdam: De Gruyter.

Jones, R. (2020b). Towards an embodied visual semiotics: Negotiating the right to look. In C. Thurlow, C. Dürscheid, & F. Diémoz (Eds.), Visualizing digital discourse: Interactional, institutional, and ideological perspectives (pp. 19–42). Boston: De Gruyter.
Jones, R.H. (2021). The text is reading you: Teaching language in the age of the algorithm. Linguistics and Education, 62, 100750. doi:10.1016/j.linged.2019.100750
Jones, R.H., & Hafner, C.A. (2012). Understanding digital literacies: A practical introduction. London: Routledge.
Keen, A. (2007). The cult of the amateur: How today's internet is killing our culture. New York: Doubleday/Currency.
Kellmereit, D., & Obodovski, D. (2013). The silent intelligence: The internet of things. San Francisco, CA: DND Ventures.
Kember, S., & Zylinska, J. (2012). Life after new media: Mediation as a vital process. Cambridge, MA: MIT Press.
Kirkpatrick, D. (2011). The Facebook effect: The inside story of the company that is connecting the world. New York: Simon and Schuster.
Kitchin, R., & Dodge, M. (2011). Code/space: Software and everyday life. Cambridge, MA: MIT Press.
Kittler, F.A. (1997). Literature, media, information systems (J. Johnson, Trans.). London: Routledge.
Klevjer, R. (2012). Enter the avatar: The phenomenology of prosthetic telepresence in computer games. In J.R. Sageng, H. Fossheim, & T. Mandt Larsen (Eds.), The philosophy of computer games (pp. 17–38). Dordrecht: Springer. doi:10.1007/978-94-007-4249-9_3
Knight, W. (2017, April 11). The dark secret at the heart of AI. MIT Technology Review. Retrieved from https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/
Knobloch, S., Patzig, G., Mende, A.-M., & Hastall, M. (2004). Affective news: Effects of discourse structure in narratives on suspense, curiosity, and enjoyment while reading news and novels. Communication Research, 31(3), 259–287. doi:10.1177/0093650203261517
Knorr Cetina, K. (2014). Scopic media and global coordination: The mediatization of face-to-face encounters. In K. Lundby (Ed.), Mediatization of communication (pp. 39–62). Berlin: De Gruyter.
Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 5802–5805. doi:10.1073/pnas.1218772110
Kress, G.R. (2003). Literacy in the new media age. London: Routledge.
Kress, G.R., & Van Leeuwen, T. (2006). Reading images: The grammar of visual design (2nd ed.). London: Routledge.
Lam, W.S.E. (2000). L2 literacy and the design of the self: A case study of a teenager writing on the internet. TESOL Quarterly, 34(3), 457–482.
Lanier, J. (2010). You are not a gadget: A manifesto. New York: Alfred A. Knopf.
Lanier, J. (2013). Who owns the future? New York: Simon & Schuster.

Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. New York: Henry Holt and Company.
Lankshear, C., & Knobel, M. (2011). New literacies: Everyday practices and classroom learning (3rd ed.). Maidenhead: Open University Press.
Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.
Leaver, T., Highfield, T., & Abidin, C. (2020). Instagram: Visual social media cultures. Cambridge: Polity Press.
Lee, C. (2017). Multilingualism online. Abingdon: Routledge.
Lefebvre, H. (1991). The production of space. Oxford: Blackwell.
Lemke, J.L. (1998). Multiplying meaning: Visual and verbal semiotics in scientific text. In J.R. Martin & R. Veel (Eds.), Reading science: Critical and functional perspectives on discourses of science (pp. 87–113). London: Routledge.
Lemke, J.L. (2011). Multimodality, identity, and time. In C. Jewitt (Ed.), Routledge handbook of multimodal analysis (pp. 140–150). London: Routledge.
Lessig, L. (2004). Free culture: How big media uses technology and the law to lock down culture and control creativity. New York: Penguin Press.
Lessig, L. (2008). Remix: Making art and commerce thrive in the hybrid economy. New York: Penguin Press.
Lévy, P. (1997). Collective intelligence: Mankind's emerging world in cyberspace. New York: Plenum.
Levy, S. (2012, April 24). Can an algorithm write a better news story than a human reporter? Wired. Retrieved from https://www.wired.com/2012/04/can-an-algorithm-write-a-better-news-story-than-a-human-reporter/
Licoppe, C. (2016). Mobilities and urban encounters in public places in the age of locative media: Seams, folds, and encounters with "pseudonymous strangers". Mobilities, 11(1), 99–116. doi:10.1080/17450101.2015.1097035
Licoppe, C., & Morel, J. (2012). Video-in-interaction: "Talking heads" and the multimodal organization of mobile and Skype video calls. Research on Language and Social Interaction, 45(4), 399–429. doi:10.1080/08351813.2012.724996
Ling, R.S. (2004). The mobile connection: The cell phone's impact on society. San Francisco, CA: Morgan Kaufmann.
Linvill, D.L., & Warren, P.L. (2020). Troll factories: Manufacturing specialized disinformation on Twitter. Political Communication, 37(4), 447–467. doi:10.1080/10584609.2020.1718257
Loewenstein, G. (1994). The psychology of curiosity: A review and reinterpretation. Psychological Bulletin, 116(1), 75–98. doi:10.1037/0033-2909.116.1.75
Lohr, S. (2011, September 10). In case you wondered, a real human wrote this column. The New York Times. Retrieved from https://www.nytimes.com/2011/09/11/business/computer-generated-articles-are-gaining-traction.html
Lojeski, K.S. (2007). When distance matters: An overview of virtual distance [PDF file]. Virtual Distance International. Retrieved from http://virtualdistance.com/Documents/When%20Distance%20Matters%20-%20An%20Overview%20of%20Virtual%20Distance.pdf

Lorenz, T. (2017, April 14). Teens explain Snapchat streaks, why they're so addictive and important to friendships. Business Insider. Retrieved from https://www.businessinsider.com/teens-explain-snapchat-streaks-why-theyre-so-addictive-and-important-to-friendships-2017-4?r=US&IR=T
Lowry, P.B., Curtis, A., & Lowry, M.R. (2004). Building a taxonomy and nomenclature of collaborative writing to improve interdisciplinary research and practice. Journal of Business Communication, 41(1), 66–99. doi:10.1177/0021943603259363
Lyons, A., & Tagg, C. (2019). The discursive construction of mobile chronotopes in mobile-phone messaging. Language in Society, 48(5), 657–683. doi:10.1017/S004740451900023X
Maheshwari, S. (2016, November 21). How fake news goes viral: A case study. The New York Times. Retrieved from https://www.nytimes.com/2016/11/20/business/media/how-fake-news-spreads.html
Malinowski, B. (1923). The problem of meaning in primitive languages. In C.K. Ogden & I.A. Richards (Eds.), The meaning of meaning (pp. 296–336). London: K. Paul, Trench, Trubner & Co.
Manovich, L. (2001). The language of new media. Cambridge, MA: MIT Press.
Manovich, L. (2007). What comes after remix? Retrieved from http://manovich.net/DOCS/remix_2007_2.doc
Marsh, J. (2011). Young children's literacy practices in a virtual world: Establishing an online interaction order. Reading Research Quarterly, 46(2), 101–118.
Marwick, A.E. (2012). The public domain: Social surveillance in everyday life. Surveillance & Society, 9(4), 378–386. doi:10.24908/ss.v9i4.4342
Marwick, A.E. (2013). Status update: Celebrity, publicity, and branding in the social media age. New Haven, CT: Yale University Press.
Marwick, A.E. (2015). Instafame: Luxury selfies in the attention economy. Public Culture, 27(1), 137–160. doi:10.1215/08992363-2798379
Marwick, A.E., & boyd, d. (2010). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114–133. doi:10.1177/1461444810365313
Marwick, A.E., & Lewis, R. (2017). Media manipulation and disinformation online. Data & Society Research Institute. Retrieved from https://datasociety.net/library/media-manipulation-and-disinfo-online/
Massey, A.J., Elliott, G.L., & Johnson, N.K. (2005, November). Variations in aspects of writing in 16+ English examinations between 1980 and 2004: Vocabulary, spelling, punctuation, sentence structure, non-standard English. Cambridge Assessment. Retrieved from https://www.cambridgeassessment.org.uk/Images/aspects-of-writing-2004-report.pdf
Massey, D. (1995). The conceptualization of place. In D. Massey & P. Jess (Eds.), A place in the world? Places, cultures and globalization (pp. 45–77). Oxford: Oxford University Press.
McCann, A. (2012, June 29). Facebook is normalizing creepy behavior. BuzzFeed. Retrieved from https://www.buzzfeed.com/atmccann/facebook-is-normalizing-creepy-behavior
McGrath, L. (2016). Open-access writing: An investigation into the online drafting and revision of a research article in pure mathematics. English for Specific Purposes, 43, 25–36. doi:10.1016/j.esp.2016.02.003

McLuhan, M. (1964). Understanding media: The extensions of man (1st ed.). New York: McGraw Hill.
Meyer, R. (2015, November 3). Twitter unfaves itself: The validation button gets a new look. Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2015/11/twitter-unfaves-itself-hearts/413917/
Meyrowitz, J. (1985). No sense of place: The impact of electronic media on social behavior. Oxford: Oxford University Press.
Miller, D., & Slater, D. (2000). The internet: An ethnographic approach. Oxford: Berg.
Miltner, K.M. (2016). "The shade of it all": Affect, resistance, and the RuPaul's Drag Race keyboard app. Paper presented at the 66th Annual International Communication Association conference, Fukuoka, Japan.
Miltner, K.M., & Highfield, T. (2017). Never gonna GIF you up: Analyzing the cultural significance of the animated GIF. Social Media + Society, 3(3), 1–11. doi:10.1177/2056305117725223
Mitchell, W.J. (2003). Me++: The cyborg self and the networked city. Cambridge, MA: MIT Press.
Morozov, E. (2011). The net delusion: How not to liberate the world. New York: PublicAffairs.
Morozov, E., & Bria, F. (2018). Rethinking the smart city: Democratizing urban technology. New York: Rosa Luxemburg Stiftung.
Munn, L. (2020). Angry by design: Toxic communication and technical architectures. Humanities and Social Sciences Communications, 7, 53. doi:10.1057/s41599-020-00550-7
Murthy, D., & Sharma, S. (2019). Visualizing YouTube's comment space: Online hostility as a networked phenomena. New Media & Society, 21(1), 191–213. doi:10.1177/1461444818792393
Nahon, K., & Hemsley, J. (2013). Going viral. Cambridge: Polity Press.
Nann, J. (2020, January 31). Snapmaps: Less selfie, more dystopian self-surveillance. University Times. Retrieved from https://www.universitytimes.ie/2020/01/snapmaps-less-selfie-more-dystopian-self-surveillance/
Nelson, T.H. (1992). Literary machines: The report on, and of, Project Xanadu concerning word processing, electronic publishing, hypertext, thinkertoys, tomorrow's intellectual revolution, and certain other topics including knowledge, education and freedom. Sausalito, CA: Mindful Press.
Newman, L.H. (2018, March 5). Nigerian email scammers are more effective than ever. Wired. Retrieved from https://www.wired.com/story/nigerian-email-scammers-more-effective-than-ever/
Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Palo Alto: Stanford University Press.
Ong, W.J. (1996). Orality and literacy: The technologizing of the word. London: Routledge.
Organisation for Economic Co-operation and Development. (2016). The internet of things: Seizing the benefits and addressing the challenges. OECD Digital Economy Papers 252.
Page, R. (2012). The linguistics of self-branding and micro-celebrity in Twitter: The role of hashtags. Discourse & Communication, 6(2), 181–201. doi:10.1177/1750481312437441

Palen, L., & Dourish, P. (2003). Unpacking "privacy" for a networked world. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 129–136). New York: ACM. doi:10.1145/642611.642635
Papacharissi, Z. (2015). Affective publics: Sentiment, technology, and politics. Oxford: Oxford University Press.
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. New York: Penguin Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge, MA: Harvard University Press.
Pew Research Center. (2017). Partisan conflict and congressional outreach. Retrieved from https://www.pewresearch.org/politics/wp-content/uploads/sites/4/2017/02/LabsReport_FINALreport.pdf
Pinker, S. (2010). Mind over mass media. The New York Times. Retrieved from https://www.nytimes.com/2010/06/11/opinion/11Pinker.html
Plester, B., Wood, C., & Bell, V. (2008). Txt Msg n school literacy: Does texting and knowledge of text abbreviations adversely affect children's literacy attainment? Literacy, 42(3), 137–144.
Prensky, M. (2003). Digital game-based learning. Computers in Entertainment, 1(1), 21. doi:10.1145/950566.950596
Putnam, R.D. (1995). Bowling alone: America's declining social capital. Journal of Democracy, 6(1), 65–78. doi:10.1353/jod.1995.0002
Radley, A. (1996). Displays and fragments: Embodiment and the configuration of social worlds. Theory & Psychology, 6(4), 559–576. doi:10.1177/0959354396064002
Recktenwald, D. (2017). Toward a transcription and analysis of live streaming on Twitch. Journal of Pragmatics, 115, 68–81. doi:10.1016/j.pragma.2017.01.013
Rheingold, H. (1993). The virtual community: Homesteading on the electronic frontier. New York: Harper-Perennial.
Richardson, D.C., Dale, R., & Shockley, K. (2008). Synchrony and swing in conversation: Coordination, temporal dynamics, and communication. In I. Wachsmuth, M. Lenzen, & G. Knoblich (Eds.), Embodied communication in humans and machines (pp. 75–94). Oxford: Oxford University Press.
Rigby, D.K. (2014). Digital-physical mashups. Harvard Business Review, 92(9), 84–92.
Rowlands, I., Nicholas, D., Williams, P., Huntington, P., Fieldhouse, M., Gunter, B., … Tenopir, C. (2008). The Google generation: The information behaviour of the researcher of the future. Aslib Proceedings, 60(4), 290–310.
Rushkoff, D. (2010). Program or be programmed: Ten commands for a digital age. New York: OR Books. doi:10.2307/j.ctt207g7rj
Sadowski, J., & Pasquale, F. (2015). The spectrum of control: A social theory of the smart city. First Monday, 20(7). doi:10.5210/fm.v20i7.5903
Saint, N. (2010). Eric Schmidt: Google's policy is to "get right up to the creepy line and not cross it". Business Insider. Retrieved from https://www.businessinsider.com/eric-schmidt-googles-policy-is-to-get-right-up-to-the-creepy-line-and-not-cross-it-2010-10
Santo, R. (2011). Hacker literacies: Synthesizing critical and participatory media literacy frameworks. International Journal of Learning and Media, 3(3), 1–5. doi:10.1162/IJLM_a_00075


References

Schegloff, E.A., & Sacks, H. (1973). Opening up closings. Semiotica, 8(4), 289–327. doi:10.1515/semi.1973.8.4.289
Schivelbusch, W. (1986). The railway journey: The industrialization of time and space in the 19th century. Berkeley, CA: University of California Press.
Schrage, M. (2001). The relationship revolution. Merrill Lynch Forum. Retrieved from http://www.ml.com/woml/forum/index.htm
Scollon, R., & Scollon, S.W. (1981). Narrative, literacy, and face in interethnic communication. Norwood, NJ: Ablex Publishing Corporation.
Scollon, R., Scollon, S.W., & Jones, R.H. (2012). Intercultural communication: A discourse approach (3rd ed.). London: Wiley-Blackwell.
Seargeant, P., & Tagg, C. (n.d.). Seargeant and Tagg on fake news and digital literacy. #SocSciMatters. Retrieved from https://www.palgrave.com/gp/blogs/social-sciences/seargeant-and-tagg-on-fake-news-and-digital-literacy
Senft, T.M. (2008). Camgirls: Celebrity and community in the age of social networks. New York: Peter Lang.
Senft, T.M. (2012). Microcelebrity and the branded self. In J. Hartley, J. Burgess, & A. Burns (Eds.), A companion to new media dynamics (pp. 346–354). Chichester: Blackwell.
Sharples, M., Goodlet, J.S., Beck, E.E., Wood, C.C., Easterbrook, S.M., & Plowman, L. (1993). Research issues in the study of computer supported collaborative writing. In M. Sharples (Ed.), Computer supported collaborative writing (pp. 9–28). London: Springer.
Shifman, L. (2014). Memes in digital culture. Cambridge, MA: MIT Press.
Shirky, C. (2005). Ontology is overrated: Categories, links, and tags. Retrieved from https://oc.ac.ge/file.php/16/_1_Shirky_2005_Ontology_is_Overrated.pdf
Silverstone, R., & Hirsch, E. (1992). Consuming technologies: Media and information in domestic spaces. London: Routledge.
Spina, S. (2019). Role of emoticons as structural markers in Twitter interactions. Discourse Processes, 56(4), 345–362. doi:10.1080/0163853X.2018.1510654
Sproull, L., & Kiesler, S. (1986). Reducing social context cues: Electronic mail in organizational communications. Management Science, 32(11), 1492–1512.
Srnicek, N. (2016). Platform capitalism. Cambridge: Polity Press.
Stalder, F. (2010, February 10). The second index: Search engines, personalization and surveillance (Deep Search). Retrieved 7 March 2021 from http://felix.openflows.com/node/113
Stalder, F., & Mayer, C. (2009). The second index: Search engines, personalization and surveillance. In K. Becker & F. Stalder (Eds.), Deep search: The politics of search beyond Google (pp. 98–115). New Jersey: Transaction Publishing.
Stark, L. (2018). Facial recognition, emotion and race in animated social media. First Monday, 23(9). doi:10.5210/fm.v23i9.9406
Statista (2020). Instagram: Active users worldwide. Retrieved from https://www.statista.com/statistics/253577/number-of-monthly-active-instagram-users/
Sterling, B. (2014). The internet of things is an oligarch power play. Retrieved from https://www.wired.co.uk/article/the-internet-of-things-is-an-oligarch-powerplay
Stone, A.R. (1995). The war of desire and technology at the close of the mechanical age. London: MIT Press.


Street, B. (1984). Literacy in theory and practice. Cambridge: Cambridge University Press.
Sugiyama, S. (2015). Kawaii meiru and Maroyaka neko: Mobile emoji for relationship maintenance and aesthetic expressions among Japanese teens. First Monday, 20(10). doi:10.5210/fm.v20i10.5826
Sunstein, C.R. (2017). #Republic: Divided democracy in the age of social media. Princeton, NJ: Princeton University Press.
Surowiecki, J. (2004). The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. New York: Doubleday.
Swales, J.M. (1990). Genre analysis: English in academic and research settings. Cambridge: Cambridge University Press.
Tagg, C., Seargeant, P., & Brown, A.A. (2017). Taking offence on social media: Conviviality and communication on Facebook. New York: Palgrave Macmillan.
Tannen, D. (2005). New York Jewish conversational style. In S.F. Kiesling & C.B. Paulston (Eds.), Intercultural discourse and communication (pp. 135–149). Oxford: Blackwell Publishing.
Tapscott, D., & Williams, A.D. (2006). Wikinomics: How mass collaboration changes everything. New York: Portfolio.
Tene, O., & Polonetsky, J. (2014). A theory of creepy: Technology, privacy, and shifting social norms. Yale Journal of Law and Technology, 16(1), 59–102.
Thaler, R.H., & Sunstein, C.R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven, CT: Yale University Press.
The Mentor (1986). The hacker manifesto. Phrack, 1(7), 3.
Thorne, S.L. (2003). Artifacts and cultures-of-use in intercultural communication. Language Learning and Technology, 7(2), 38–67.
Thorne, S.L. (2008). Transcultural communication in open internet environments and massively multiplayer online games. In S. Magnan (Ed.), Mediating discourse online (pp. 305–327). Amsterdam: John Benjamins.
Thrift, N. (2005). Knowing capitalism. London: SAGE.
Thurlow, C. (2006). From statistical panic to moral panic: The metadiscursive construction and popular exaggeration of new media language in the print media. Journal of Computer-Mediated Communication, 11(3), 667–701. doi:10.1111/j.1083-6101.2006.00031.x
Tiidenberg, K. (2018). Selfies: Why we love (and hate) them. Bingley: Emerald Publishing.
Tolins, J., & Samermit, P. (2016). GIFs as embodied enactments in text-mediated conversation. Research on Language and Social Interaction, 49(2), 75–91. doi:10.1080/08351813.2016.1164391
Turkle, S. (1995). Life on the screen: Identity in the age of the internet. New York: Simon and Schuster.
Unsworth, L. (2008). Multiliteracies and metalanguage: Describing image/text relations as a resource for negotiating multimodal texts. In J. Coiro, M. Knobel, C. Lankshear, & D.J. Leu (Eds.), Handbook of research on new literacies (pp. 377–405). New York: Lawrence Erlbaum.
Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. New York: Oxford University Press.


van Dijck, J. (2012). Facebook as a tool for producing sociality and connectivity. Television & New Media, 13(2), 160–176. doi:10.1177/1527476411415291
van Gisbergen, M.S., Sensagir, I., & Relouw, J. (2020). How real do you see yourself in VR? The effect of user-avatar resemblance on virtual reality experiences and behaviour. In T. Jung, M.C. tom Dieck, & P.A. Rauschnabel (Eds.), Augmented reality and virtual reality: Changing realities in a dynamic world (pp. 401–409). Cham: Springer.
van Leeuwen, T. (2012). Design, production and creativity. In R. Jones (Ed.), Discourse and creativity (pp. 133–142). Harlow: Pearson Education.
Varis, P., & Blommaert, J. (2015). Conviviality and collectives on social media: Virality, memes, and new social structures. Multilingual Margins: A Journal of Multilingualism from the Periphery, 2(1), 31–45. doi:10.14426/mm.v2i1.55
Virtual online worlds: Living a Second Life (2006, September 28). The Economist. Retrieved from http://www.economist.com/node/7963538
Waddington, P. (1998). Dying for information? A report on the effects of information overload in the UK and worldwide. London: Reuters. Retrieved from http://www.cni.org/regconfs/1997/ukoln-content/repor~13.html
Walther, J. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23(1), 3–43. doi:10.1177/009365096023001001
Wang, V., Tucker, J.V., & Rihll, T.E. (2011). On phatic technologies for creating and maintaining human relationships. Technology in Society, 33(1–2), 44–51. doi:10.1016/j.techsoc.2011.03.017
Wargo, J.M. (2015). Spatial stories with nomadic narrators: Affect, Snapchat, and feeling embodiment in youth mobile composing. Journal of Language and Literacy Education, 11(1), 47–64.
Warren, S.D., & Brandeis, L.D. (1890). The right to privacy. Harvard Law Review, 4(5), 193–220.
Warschauer, M., & Grimes, D. (2007). Audience, authorship, and artifact: The emergent semiotics of Web 2.0. Annual Review of Applied Linguistics, 27, 1–23. doi:10.1017/S0267190508070013
WashPostPR. (2016). The Washington Post experiments with automated storytelling to help power 2016 Rio Olympics coverage. Washington Post PR Blog. Retrieved from https://www.washingtonpost.com/pr/wp/2016/08/05/the-washington-post-experiments-with-automated-storytelling-to-help-power-2016-rio-olympics-coverage/
Weiser, M. (1991, September). The computer for the twenty-first century. Scientific American, 265(3), 94–104.
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco, CA: W. H. Freeman & Co.
Wertsch, J.V. (1993). Voices of the mind: Sociocultural approach to mediated action. Cambridge, MA: Harvard University Press.
Williams, D., Yee, N., & Caplan, S.E. (2008). Who plays, how much, and why? Debunking the stereotypical gamer profile. Journal of Computer-Mediated Communication, 13(4), 993–1018. doi:10.1111/j.1083-6101.2008.00428.x
Williams, J. (2018). Stand out of our light. Cambridge: Cambridge University Press.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.


Wolf, M. (2007). Proust and the squid: The story and science of the reading brain. New York: Harper Collins.
Wylie, D. (2010, October 1). EA removes Taliban option from “Medal of Honour”. National Post. Retrieved from http://arts.nationalpost.com/2010/10/01/ampersand-arcade-ea-removes-taliban-option-from-medal-of-honor/
Xu, Y. (2018). Programmatic dreams: Technographic inquiry into censorship of Chinese chatbots. Social Media + Society, 4(4), 1–12. doi:10.1177/2056305118808780
Zappavigna, M. (2011). Ambient affiliation: A linguistic perspective on Twitter. New Media & Society, 13(5), 788–806. doi:10.1177/1461444810385097
Zhou, R., Hentschel, J., & Kumar, N. (2017). Goodbye text, hello emoji: Mobile communication on WeChat in China. In Proceedings of the 2017 CHI conference on human factors in computing systems (pp. 748–759). New York: Association for Computing Machinery. doi:10.1145/3025453.3025800
Zimmerman, N. (2013). Pastor who left sanctimonious tip gets waitress fired from Applebee’s, claims her reputation was ruined. Retrieved 13 December 2017 from https://gawker.com/5980558/pastor-who-left-sanctimonious-tip-gets-waitress-fired-from-applebees-claims-her-reputation-was-ruined
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.

Index

Page numbers in bold denote tables, those in italic denote figures. abbreviations 90, 92, 178 accuracy 5, 23, 145, 181, 267 acronyms 89–90, 92 activities: affordances, constraints, and social practices 12; analyzing language on different platforms 91; attention in social media 43; boon or bane 203; the creation and circulation of memes 247; the ‘creepy scale’ 257; the cultures of social networking platforms 166; detecting dark patterns 151; finding information 23; folk algorithmics 148; the games people play 184; interrogating search engines 39; locative technologies 118; mapping your social network 211; performance equipment 217; reading critically in hypertext 50; remix culture 63; sign language 69; text/image interaction in image macro memes 85; textual chemistry 96; what do they know about you? 270; your collaborative writing practices 233; your internet of things 133; your media ideologies 176 activity: browsing 147; canonical 252; creative 194; embodied 195; enjoyable 263; everyday 130; individual 157; inter- 45, 51, 56–57, 60, 64, 91, 185, 203; literate 191; online 51, 254; of online reading 47; physical 6, 14, 197; problem-solving 184; of reading and writing 45; social 58, 64; on social media 273; solitary 58, 260; users’ 263

affiliation 32–33, 43, 224; ambient 168–169; professional 164; social 34 affirmative opt-in 255 affordances: communicative 95; different 81, 165, 216; of digital imaging 75; of digital media 62–63, 91, 122, 135, 189, 203, 252, 260, 275; of image 77, 86; key 117; of mobile phone cameras 121; of mobility 124; of multimediality 65; new 5–6, 49; perceived 174; sequential 85; simultaneous 85; of Snapchat 120–121; of social media 42, 205, 224; status 219; of technologies 13, 67, 96, 173, 184, 205, 213, 222–223, 239–241; of the virtual world 190; of writing 86, 88, 93 age of consensus 179 agency 68, 131–132, 156; collective 16; human 134; individual 16; news 50; relationship of 68 aggregation 50, 53, 227, 242 Albawardi, A. 99 algorithms 34–35, 38, 42, 46, 50–51, 60, 64, 138, 144–148, 153, 180, 224, 243, 247, 265–267, 270; artificial intelligence 76; biases of 146, 154; calculations of 26; creators of 154; filtering capacity of 34; invisible 138; machine learning 106; personalizing 39; recommendation 219; search 19; of search engines 38; social media 247; sophisticated 104, 266; use of 34 Altman, I. 251 Amazon 9, 18, 33, 57–58, 155, 227, 262


ambient affiliation 168–169 Amnesty International 222 Androutsopoulos, J. 100 argument 56, 84, 86, 185, 189–191; complex 15, 56; lengthy 47; logical 56; role of 201; verbal 84; visual 84, 88 Arjoranta, J. 197 artificial intelligence 104, 137, 145, 243; development of 27; efficient 108; see also algorithms Ashby, W.R. 144 Ashton, K. 130 Asshole Amplification Technology 222 associations 49–50, 63, 247; networked 29; overlapping 29; unexpected 49 associative: linking 29; network 35 attention: economy 40–41, 212, 218, 222–223, 225, 273; structures 31, 40 audience 11–12, 53, 55, 59, 63, 72, 77, 79–80, 82, 99, 180, 213, 215, 217–220; attention of the 53; design 99; diverse 100; engage an 58, 77, 88; illusion for 11; intended 228; involving the 79–80; members of 219; mixed 213; multiple 99; niche 219; segregation 213, 217 automatic writing 58–60

backfire effect 32 backstory 185, 190 Baidu 18, 39 Bakhtin, M.M. 61 Barnett, T. 57, 93–94 Baron, N.S. 90 Barthes, R. 61 Bauman, R. 265 behaviour: abusive 207; addictive 11; anti-social 205, 222; bad 261; consumer 58; convivial 222; creepy 260; disruptive 172; economists 32, 140; human 146; illegal 261; inappropriate 261; influence 140; information gathering 41; linguistic 91; nonverbal 92; normal 138; normative 140; norms of 168; patterns of 137; polarizing 181; positive 197; search 37; self-improvement related 197; social 128, 197; surfing 34; texting 97; unacceptable 172 Bell, A. 99 Benkler, Y. 179, 234–235

Berlinski, D. 34 Bezemer, J. 68 bias 25, 27, 30, 34, 38, 40–41, 50, 137–138, 148, 240; accusation of 240; built-in 10; cognitive 149, 255; common 32; confirmation 31–32, 222; inherent 44; loss aversion 32; see also algorithms Black Lives Matter 179, 220 Black, R.W. 57, 178 Blaze, M. 266 blogs 9, 50, 57, 100, 228; audio 100, 165, 169; mainstream 180; open access research 228; posts 202; research 228; writing 233 Blommaert, J. 162, 168, 222 Bogost, I. 145, 191 bonding 169, 209, 210 Bourdieu, P. 218 boyd, d. 206, 213, 254, 256 bridging 170, 209–210, 210 Brignull, H. 149 Brown, M. 132, 220 Bucher, T. 146, 212 Buckingham, D. 192 bulletin board systems (BBS) 206 Bush, V. 46

Carr, N. 56 Cartier, L. 5 case studies: algorithms 145–148; chatbots 104–108; Facebook as dynamic hypertext 52–53; genres of disclosure 267–270; influencers and microcelebrities 218–221; massively multi-player online games as online discourse systems 170–173; multimodal design in Instagram 75–77; search engines 35–39; virtual game worlds 193–195; why is Zoom so exhausting? 127–129; the wiki 238–241; the wristwatch 4–6 Castells, M. 211 catfishing 1 centralization 242 cerebral cortex 31 channels of communication 8, 115, 124 chat rooms 9, 206 Chayko, M. 260 Chik, A. 99 Chua, A. 179 civil inattention 256

Clark, A. 4 Clarke, R. 250 clickbait 53–55 Clinton, H. 153 Clowes, R.W. 56 cognitive: abilities 17; biases 149, 255; capacities 31; changes 55; distortion 31; efficiency 32; filters 34, 41; induced deprivation 54; means 41; processes 17, 19, 45, 144; work 25 collaboration 19, 180, 182, 202, 226–229, 231, 234, 239, 241, 243, 276 collective 166, 211, 237, 242–243, 245; action 137, 157, 166, 243; agency 16; creative process 244; decision-making 242–243; distribution 248–249; effort 242; experiences 182; ignorance 30; intelligence 30, 242–243; negotiation 136; online 241; ‘panmemic’ 241; production 248; social activity 58; software 237; solution 242; strategies 154; visions 136; wisdom 30; work 243, 245 communication: computer-mediated (CMC) 2, 99, 111, 123, 173; day-to-day 96; digital 89–92, 95, 99–101, 103, 105, 110, 112–113, 123, 161–162, 228; embodied 127, 128; face-to-face 93, 102, 111, 123–125, 176, 264–265; human–human 266; hyperpersonal 95–96; intercultural 161–162, 175, 177–179, 181–182; interpersonal 10, 95; mediated 229; mobile 89; multimodal 67; norms of 162, 256; online 88, 90, 99, 107–108, 265; phatic 103, 105, 107, 111, 181, 222, 225, 245; platforms 175; real-time 58; researchers 95; social media 60, 223; technology 113, 161, 226–227, 248; text-based 91–96, 98, 100–102, 110–111; tools 226, 233–234; toxic 223; video 101, 124–125; visual 85, 88, 214; voice-based 100; workplace 95; written 90–91 connectors 209 consciousness 56, 128, 264; heightened 218; human 3, 10, 55 conspiracy theories 23, 137, 163 constraints 4, 6–7, 10–19, 23, 25, 40, 43, 45, 62, 67–68, 81, 88, 90–92, 94, 97–98, 100, 110, 116, 120, 124, 131, 135, 137–138, 152, 165, 173–175, 183, 217, 239, 275


Consumer Union 150 context collapse 213 contextualization 80, 93–95 conviviality 181, 217, 221–224 correlations 267 COVID-19 pandemic 124, 127 Creative Commons License 62 creativity 12–13, 18, 61, 90, 201–202, 236–237, 275 creeping 259 critical digital literacies 152, 157 Crystal, D. 89 Cserzo, D. 126 Culkin, J.M. 275 culture: American 162; celebrity 218–219; Chinese 162; development of 61; free 61; French 162; gaming 172; hacker 156; -of-use 162, 173–175, 182; online 162, 164–165, 167, 171, 173, 175, 181; oral 10; permission 61; pop 61, 77, 82, 177; popular 136; remix 60–62, 64; visual 75–76 cyberbullying 222, 260 cyberspace 113 dark patterns 148–150, 150, 151, 154 data: categorizing 25; classifying 26; concepts of 25; evaluating 25; filter 30–32, 35, 43–44; high-quality 59; invisible 33; management 255; meta- 29, 265–266; organizing 26–29; personal 2, 155, 250, 259, 261, 268–269, 273; sensory 31; signals 37; sources 32; -veillance 250; visible 33 Dawkins, R. 243 De Certeau, M. 119 De Souza e Silva, A. 114 decentralization 155, 242 de-individuation 229, 233 design principles 85 Dewey Decimal System 27, 28 Dewey, M. 27, 28 digital: images 75, 88, 118; media 2, 9, 14–15, 17–20, 23, 29, 43, 45–47, 50–53, 56–58, 60, 62–63, 65, 67, 78–79, 88–89, 91–92, 95, 100, 103, 122, 134–135, 137–138, 142, 144, 147, 152–153, 155–156, 161, 165, 169, 173, 175, 179–181, 183, 185, 189, 203–204, 211, 226–228, 234–235, 246, 248, 250–253, 259–260, 264–266, 275–276; photos 1; practices 161, 256; surveillance


133, 264–266; technologies 1–2, 10, 15–18, 23, 29, 33, 46, 65, 111–113, 115, 119–120, 130, 133, 135–140, 143–144, 147–148, 158, 161, 205, 250, 256–257, 265, 276; tools 17–19, 26, 34–35, 45, 79, 142, 227, 259, 261, 275 digitalization 132 Dillon, A. 56 dissemination 135, 155, 228, 260–261 diversity 163–164, 170, 239, 242 Dockterman, E. 97 Dresner, E. 93 Drug Enforcement Administration 69 Dubrofsky, R.E. 253 Dumitrica, D. 51 Dunbar, R. 103 dynamic hypertext 50–53 economy: attention 40–41, 212, 218, 222–223, 225, 273; gig 33; information 41; sharing 238 ELIZA 104–106 Ellis, J. 206 embodiment 120, 122, 134 emoji 8, 77, 89–90, 92–95, 97–99, 101, 103, 105, 109–111, 120, 141, 267 emoticon 89–90, 92–94, 178 encryption 254 engagement 42, 125, 132, 150, 222, 223, 224, 261; electronic 131; political 136; users’ 41, 224 Engelbart, D. 129 English Testing Service 60 entextualization 264–265 epistemology 30 Eyal, N. 52 face systems 164–167, 171, 181–182 Facebook 18, 33, 41, 43, 50, 52–53, 100, 104, 115, 140–141, 150, 153, 155, 163, 165, 169, 174–176, 205– 207, 211, 213, 215–217, 221–223, 223, 224, 233, 241, 253, 256, 264, 267–270, 272; EdgeRank 33, 52, 140; friends 7, 215; Messenger 91, 124; newsfeeds 33, 52; privacy settings 11 Faggella, D. 59 fake news 23, 53, 55, 59, 78, 152–155, 169, 208, 225, 241 Faris, R. 179

fear of missing out (FOMO) 32, 42 filtering 23, 25, 30–34, 38, 40, 43–44, 145, 227, 243, 262 Fink, E.M. 172 Finnemann, N.O. 51 flaming 207, 222 flash mob 26 Floridi, L. 157 forms of discourse 164–167, 169, 171, 181–182 freedom of speech 276 functional specialization 81 Galloway, A.R. 147 Gamson, J. 218 gatekeeping 33, 138, 153 Gee, J.P. 99, 163, 186, 189–190, 192, 199–202 General Data Protection Regulation (GDPR) 255 Gershon, I. 13, 174–175 Gibbs, M. 175 Gibson, W. 113 GIFs 92–95, 99, 101; animated 60, 90, 92, 94, 99, 103, 109, 111, 267; keyboard 94; wars 103 Gladwell, M. 209 Goffman, E. 11, 93, 114, 212–213, 251, 256 Google 9, 18, 26, 37–41, 59, 74, 155, 256–257, 263; Android 143; Chrome 263; Design 71; Docs 37, 241; drive 232; Glass 257; image recognition 146; Maps 25–26, 31, 37, 119; Material Design 70; Nest Cam 257; PageRank 34, 36–37, 140; Search 50 GPT-3 60 gramming 1 Granovetter, M.S. 208–209 Gray, C.M. 149 hacking 143, 156–157, 236 Hafner, C.A. 82, 196 Haidt, J. 222–224 Hamari, J. 197 Hammond, K. 60 haptic sensations 112 Haraway, D.J. 155 Harris, T. 42 hashtags 8, 30, 77–78, 169, 173, 219, 266 hatelinking 1 Hayles, N.K. 130

Heliograf 58 Herbert, S.A. 41 Herley, C. 262 hierarchical: structure 48; taxonomy 27, 35 Highland, M. 203 Hill, J. 108 Hillis, K. 224 homogeneity 242 Horton, D. 220 Houseparty 124 Human-Computer Interaction (HCI) 149, 266 hybrid spaces 113, 115 hyperlinks 46–49, 53–55, 65, 151, 238 hypertext 8, 19, 46–48, 50–53, 55–56, 60, 63–64, 77, 203; configuration 51; dynamic 50–53; linked 50; organization patterns 48; principles 52; structure 48 hypertextuality 45–46


intercultural communication 161–162, 177–179, 181–182 interfaces 51, 70, 72, 148–149, 152– 153, 188; deceptive 154; desktop 72; graphical 88; mobile 72; online 70, 149, 193; visual 67; voice-based 104 intermittent variable rewards 42, 141 Internet of Things (IoT) 113, 129–133, 136, 155, 157 interpersonal 17; communication 10, 95; function 169; meaning 79; relationships 10, 79, 103, 165 intertextual: pop culture 82; reference 82–83, 85–86, 88, 244–245 intertextuality 82, 185 involvement 79–80, 206; audience 80; focal 117; screens 253; shields 114; simultaneous 115 Jasanoff, S. 136 Jobs, S. 237 Johnson, S. 184 Jones, G. 260 Jones, R.H. 87, 102, 114–115, 147 Keen, A. 237 Kellmereit, D. 132 Kember, S. 14 Kitchin, R. 131 Kittler, F.A. 143, 149 Kosinski, M. 267 Kress, G.R. 67–68, 72, 74, 78–80, 88 Lam, W.S.E. 177 Lanier, J. 139, 156, 222, 237, 243 lateral reading 153 Latour, B. 131 learning 2, 17, 31, 48, 55, 58, 108, 142, 144, 147, 153, 157, 162, 166, 176, 184, 201–202, 276; critical 183; deep 145; experiences 177, 201; formal 167; language 173; machine 46, 60, 64, 106, 108; online 48, 142; principles of 201; project 174 Leaver, T. 75 Lefebvre, H. 119 Lessig, L. 61–62 Lévy, P. 30, 243 Levy, S. 59 Licoppe, C. 125–126 Ling, R.S. 102 linguistics 28, 147


LinkedIn 166–167, 215 linking 37, 46–47, 49, 52, 63, 206; associative 29 Linnaeus, C. 27 Linvill, D.L. 154 literacies 16, 18–19, 65, 133, 250–251, 254–256, 275; analogue 18; critical 154, 157; digital 2, 16–18, 23, 56, 89, 112, 129–131, 133, 138, 142, 152, 157, 181, 192, 256, 275; hacker 157; material 133; multimodal 245; new 17, 19; platform 19; social 256; technical 255 live streaming 7 locative technologies 118 Lojeski, K.S. 229 Maheshwari, S. 169 Malinowski, B. 103 Manovich, L. 60, 138 Marsh, J. 198 Marwick, A.E. 180, 213, 219, 260 mashups 60–62, 226 Massey, A.J. 89 Massey, D. 119 massively multi-player online games (MMOGs) 170–172, 177, 193, 194, 200–201 materiality 129–130 mavens 209, 212 McGrath, L. 228 McLuhan, M. 3–4, 161, 179, 275 mediation 2–3, 14, 16, 18, 100, 138, 142, 152; redux 138 memeing 1 memes 60, 77, 163, 180–181, 206, 225–227, 233, 243–249; image macro 82, 85, 244; internet 181, 244; lolcats 246; mashup 244 metadata 29, 265–266 metaphors 70, 136 Meyrowitz, J. 113 microcelebrities 210, 212, 218–221 MIDI format 139 Miller, D. 14 Miltner, K.M. 99 Miquela, L. 220–221 mobility 113, 124, 126, 133 mode-mixing and mode-switching 108 Morozov, E. 136, 157 Moses, R. 137–138 MUD Object Oriented (MOO) 193, 206

multimedia 39, 45–46, 65, 185, 203 multimodal 65, 88, 111, 189; alternatives 110; combination 244; communication 67; complexity 66; content 8, 65; cues 189; design 53, 75–77, 87, 109, 244; dimensions 111; documents 1; ensembles 66, 77, 81–83; texts 65, 68, 78, 111; webpage 66; see also literacies multimodality 64–65, 67, 185 Multi-User Domains (MUD) 193, 206 Munn, L. 223 mutual monitoring 102, 111 Nahon, K. 246–247 Nann, J. 116 neoliberalism 136 Netspeak 89–90 networked individualism 211–212 neural pathways 31 neutral point of view (NPOV) 240–241, 241 Nissenbaum, H. 252 notifications 41–42, 118, 150 Obama, B. 63, 98 Ong, W.J. 10, 55 onion routing 254 online: affinity spaces 163–165, 170, 177, 183, 192, 203, 226, 228; forums 9, 162, 228; journaling services 207; language 89, 91, 98, 104; tribalism 162, 180; see also privacy ontologies 26–27 Organisation for Economic Co-operation and Development 130 organization system 26–27, 30 ownership 61, 237, 276 Page, R. 218–219 Palen, L. 268 Papacharissi, Z. 224 paradox 32, 253 Pariser, E. 34, 140 Pasquale, F. 144 peer production 19, 166, 180, 182, 226–227, 234–237, 242, 248 perceptible phenomena 24 Pew Research Centre 222 phishing 262, 263

Picasso, P. 276 Pinker, S. 56 placemaking 118–121 plagiarism 60–61, 145 Plester, B. 89 polarization 154, 162, 179–182; political 10, 222 pop culture 61, 77, 82, 177 Prensky, M. 201 privacy 6, 12, 14–15, 39, 118, 134, 137, 182, 215, 249–259, 261, 273, 276; challenges 254; illusion of 215; internet 259; minimum level of 215; online 253, 255–256, 273; paradox 253; personal 38, 135–136, 274; right to 250; settings 11, 150, 157, 214–215, 254, 256, 263 problem-solving skills 201 proxemics 109, 123–124 psychology 28; human 148–149 radio frequency identification (RFID) chips 130 Radley, A. 122 read receipt 102 reality: alternate 113, 202; augmented 183, 195–197; concrete 74; distorted 1, 78; hybrid 197; physical 113; study of 27; view of 140; virtual 183, 195–198 recommendation systems 9, 25, 34 remixing 60–62, 226 representation 79–80, 128, 189; conceptual 78–79; digital 93; elements of 192; graphical 194; modes of 14; narrative 78–79; in symbols 15; visual 86, 198 reputation 32, 236, 247, 260, 262 revenge porn 260 Roberts, H. 179 Rowlands, I. 55 Rushkoff, D. 142 Sadowski, J. 133 Santo, R. 157 Santos-Dumont, A. 5 scams 262 Schegloff, E.A. 101 Schivelbusch, W. 113 Schmidt, E. 259 Schrage, M. 9, 26 Scollon, R. 162, 164


search engines 9, 25, 29, 34–36, 38–41, 145 Seargeant, P. 255 self: -awareness 196; -branding 219; coherent 215; -commodification 225; -concealing 143; -disclosure 10, 256; -expression 110, 260; -hood 218; -imposed 180–181; -improvement 197; individual 212; -interests 211; -portraits 219; -presentation 212–214, 216–219, 225, 253; -promotion 212, 219; -reinforcing 180; -selected 234–235, 239, 242, 248 selfies 11, 78–79, 81, 109, 121 semantic 34, 38 Senft, T.M. 218–219 sequentiality 110 Sharples, M. 229–231 Shifman, L. 244–245 Simon, H. 41 simultaneity 110 Skinner, B.F. 42 Skype 101, 123, 125, 126, 226 smart: appliances 131; cities 131, 132, 136, 157; devices 130–131; environments 130, 132; homes 130–131, 136; medical devices 131; objects 130; phones 3, 7, 16–18, 24–25, 45, 67, 70, 72, 73, 75, 115, 136, 183; schools 136; things 130, 132–133; watches 6 SMS 89 Snapchat 2, 8, 43, 65, 78, 91, 97, 101, 109–111, 116–117, 119–121, 121–123, 141, 176, 205, 209, 216–217, 252, 272 social: identities 6, 11–12, 15–17, 91, 98, 117, 205, 213, 251; interaction 16, 114, 116, 128, 131, 149, 152, 174, 220, 249, 251, 256; networking sites (SNSs) 9, 12, 57, 75, 163, 166, 169, 205–208, 212, 214, 225, 229, 256; practices 1, 12, 16, 18, 119, 138, 140–142, 161, 163, 165, 173, 257–258, 268, 275–276; reading 9; relationships 15–17, 19, 117, 120, 131–132, 135, 141, 156, 161–162, 205, 208, 225, 251–252 socialization 162, 164–167, 171, 181–182, 187 Socrates 15 solutionism 136


spam 35–36, 38, 59, 76 Spina, S. 94 Sproull, L. 95 Stalder, F. 38–39 stalking 257, 259–260, 266 Sterling, B. 132 Stone, A.R. 129 storyboard 86–87, 87 Sugiyama, S. 99 summons-response sequence 102 Sunstein, C.R. 180 Surowiecki, J. 242 surveillance 19, 156, 249–251, 253, 261, 273; cameras 257–258; capitalism 156, 273–274; constant 2; corporate 273; digital 133, 264–266; government 273; lateral 117, 259–261; mass 15; objects of 251; online 182, 260–261; peer 273; social 260, 273; unwanted 259 Swales, J.M. 267 Tagg, C. 213, 221 tagging 29–30, 169, 214 Tannen, D. 128 Tapscott, D. 237 technological: advances 183; affordances 67, 96, 173, 184, 222–223, 239–241; barriers 198; challenges 35; determinism 17; developments 62, 256; dystopianism 15; filters 31, 33–34, 40–41; infrastructure 153; innovation 255; know-how 255; solutionism 136; tools 27, 31, 33–34, 41–42, 138, 175, 192, 231, 275; utopianism 15–16 tension 13, 18, 175 Test of English as a Foreign Language (TOEFL) 60 text messages 7, 96–97, 103, 109–111, 175 textual chemistry 97 thalamus 31 Thaler, R.H. 139–140 thinking 1, 3, 10, 13–14, 17, 55–56, 78, 113, 131, 137–138, 143, 164, 184, 237 Thorne, S.L. 173–174, 178–179 Thrift, N. 155 TikTok 65, 86, 123, 175–176, 206, 210, 216–217, 245

timescales 51, 112, 197 Tinder 7, 42, 117, 119, 129 Tolins, J. 92 transparency 132, 142–144 tribalism 179–181; cultural 10; online 162 trolling 1, 207, 222, 252, 259, 273 Trump, D. 43, 98, 153, 169–170, 181, 223 Truscott, T. 206 tweeting 1, 8, 101, 215 Twitter 8, 43, 72, 73, 94, 98, 100, 108, 115, 141, 145, 153, 157, 166, 168–169, 205, 214–217, 219, 222, 244, 264 Unsworth, L. 82 Usenet 206–208 user effects 98, 110 Vaidhyanathan, S. 224 van Dijck, J. 223 van Leeuwen, T. 72, 74, 78–80, 139 Varis, P. 168, 222 video chat 97, 109, 113, 123–126 virality 227, 243, 246 virtual: camera 196; distance 229; elements 197; environments 2, 185–186, 194; existence 195; game world 184, 193–195; identity 199–200; places 163; recreations 194; space 70, 114, 168, 200; tennis 183; workspace 70; world 113, 170–173, 190–191, 193–199, 202–203; see also reality visibility 180, 212 visual: appearance 77; communication 85, 88, 214; culture 75–76; design 71–72, 75, 78, 88, 216; elements 65, 69–70, 86, 88; identity 76; interface 67; mode 75, 81–82, 88, 191; resources 68, 71, 78–79; texts 78 Vygotsky, L. 2–3 Wales, J. 166 walkthroughs 185, 191–192, 203 Walther, J. 95 Wang, V. 103 Wargo, J.M. 122 Warren, S.D. 250 WeChat 14, 19, 99, 108, 124, 226 Weibo 19 Weiser, M. 142

Weizenbaum, J. 104–105 Wertsch, J.V. 13 WhatsApp 2, 16, 91, 99, 102, 109–110, 124, 226 Wikileaks 261 Wikinomics 234, 237 Williams, D. 170 Williams, J. 44, 100 Winner, L. 137 wisdom 30, 237, 242, 248


World Wide Web 10, 29, 35, 46, 56, 60, 65 Xu, Y. 108 Zappavigna, M. 169 Zhou, R. 99 Zoom 123–124, 127, 187, 226 Zuboff, S. 156 Zuckerberg, M. 213