Higher Education Dynamics 59
Juliana E. Raffaghelli Albert Sangrà Editors
Data Cultures in Higher Education Emergent Practices and the Challenge Ahead
Higher Education Dynamics Volume 59
Series Editors Peter Maassen, Department of Education, Faculty of Educational Science, University of Oslo, Blindern, Oslo, Norway Manja Klemenčič, Department of Sociology, Harvard University, Cambridge, MA, USA Editorial Board Members Akira Arimoto, Research Institute for Higher Education, Hyogo University, Kakogawa, Japan Elizabeth Balbachevsky, NUPPs-IEA/USP, Universidade de São Paulo, São Paulo, Brazil Giliberto Capano, Political & Social Sciences, University of Bologna, Bologna, Italy Glen Jones, Ontario Inst for Studies in Education, University of Toronto, Toronto, ON, Canada Marek Kwiek, Center for Public Policy Studies, Adam Mickiewicz University in Poznań, Poznań, Poland Johan Müller, School of Education, University of Cape Town, Rondebosch, South Africa Teboho Moja, Higher Education Program, New York University, New York, NY, USA Jung-Cheol Shin, Department of Education, Seoul National University, Gwanak-Gu, Seoul, Korea (Republic of) Martina Vukasovic, Administration and Organization Theory, University of Bergen, Bergen, Norway
This series is intended to study adaptation processes and their outcomes in higher education at all relevant levels. In addition, it wants to examine the way interactions between these levels affect adaptation processes. It aims at applying general social science concepts and theories as well as testing theories in the field of higher education research. It wants to do so in a manner that is of relevance to all those professionally involved in higher education, be it as ministers, policy-makers, politicians, institutional leaders or administrators, higher education researchers, members of the academic staff of universities and colleges, or students. It will include both mature and developing systems of higher education, covering public as well as private institutions. All volumes published in the ‘Higher Education Dynamics’ series are peer-reviewed (single-blind). The series is included in Scopus.
Editors Juliana E. Raffaghelli Universitat Oberta de Catalunya Barcelona, Spain
Albert Sangrà Universitat Oberta de Catalunya Barcelona, Spain
ISSN 1571-0378 ISSN 2215-1923 (electronic) Higher Education Dynamics ISBN 978-3-031-24192-5 ISBN 978-3-031-24193-2 (eBook) https://doi.org/10.1007/978-3-031-24193-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

1. Data Cultures in Higher Education: Acknowledging Complexity (Juliana E. Raffaghelli and Albert Sangrà)
2. Data, Society and the University: Facets of a Complex Problem (Juliana E. Raffaghelli and Albert Sangrà)

Part I: Exploring Reactive Data Epistemologies in Higher Education

3. Fair Learning Analytics: Design, Participation, and Trans-discipline in the Techno-structure (Regina Motz and Patricia Díaz-Charquero)
4. Beyond Just Metrics: For a Renewed Approach to Assessment in Higher Education (Juliana E. Raffaghelli and Valentina Grion)
5. “We Used to Have Fun But Then Data Came into Play…”: Social Media at the Crossroads Between Big Data and Digital Literacy Issues (Benjamin Gleason and Stefania Manca)

Part II: Exploring Proactive Data Epistemologies in Higher Education

6. Why Does Open Data Get Underused? A Focus on the Role of (Open) Data Literacy (Gema Santos-Hermosa, Alfonso Quarati, Eugenia Loría-Soriano, and Juliana E. Raffaghelli)
7. Responsible Educational Technology Research: From Open Science and Open Data to Ethics and Trustworthy Learning Analytics (Davinia Hernández-Leo, Ishari Amarasinghe, Marc Beardsley, Eyad Hakami, Aurelio Ruiz García, and Patricia Santos)
8. Exploring Possible Worlds: Open and Participatory Tools for Critical Data Literacy and Fairer Data Culture (Caroline Kuhn)

Part III: The Challenge Ahead

9. Toward an Ethics of Classroom Tools: Educating Educators for Data Literacy (Bonnie Stewart)
10. How to Integrate Data Culture in HE: A Teaching Experience in a Digital Competence Course (Montse Guitert, Teresa Romeu, and Marc Romero)
11. Teaching Data That Matters: History and Practice (Rahul Bhargava)
12. Critical Data Literacy in Higher Education: Teaching and Research for Data Ethics and Justice (Javiera Atenas, Leo Havemann, Caroline Kuhn, and Cristian Timmermann)
13. How Stakeholders’ Data Literacy Contributes to Quality in Higher Education: A Goal-Oriented Analysis (Nan Yang and Tong Li)
14. Data Centres in the University: From Tools to Symbols of Power and Transformation (Pablo Rivera-Vargas, Cristóbal Cobo, Judith Jacovkis, and Ezequiel Passerón)
15. Conclusion: Building Fair Data Cultures in Higher Education (Juliana E. Raffaghelli and Albert Sangrà)

Afterword: Data Cultures in Higher Education – Emergent Practices and the Challenge Ahead
Chapter 1
Data Cultures in Higher Education: Acknowledging Complexity
Juliana E. Raffaghelli and Albert Sangrà
Abstract  This chapter introduces the book “Data Cultures in Higher Education: Emergent Practices and the Challenge Ahead”. It is organised into four sections that frame and present the work of the chapters that follow. In the first section, we briefly explain the problem of data and datafication in our contemporary society. To offer conceptual lenses, the idea of complexity is applied to the entropic and chaotic way in which datafication appears in several areas of higher education, triggering fragmented responses, ambiguity, and, in the worst cases, harm. Hence, we offer the idea of a higher education institution’s data culture as a potential apparatus through which to explore and understand this complexity. Data cultures characterise an institution and its traditions, people, narratives, and symbols around data and datafication. We argue here that awareness of their existence is crucial for engaging in transformation towards fairness, equity, and even justice, beyond the subtle manipulation embedded in many of the assumptions behind data-intensive practices. On these bases, we present the twelve central chapters composing this book, highlighting their perspectives and the ways in which they contribute to studying, acting upon, and changing data cultures. Finally, space is left for the book’s conclusions and the afterword by invited scholars as a point of arrival for the reader. Several threads conjoin in a web that will hopefully inspire future research and practice.

Keywords  Data cultures · Challenge · Higher education · Datafication
J. E. Raffaghelli (*) Edul@b Research Group, Universitat Oberta de Catalunya, Barcelona, Spain Department of Philosophy, Sociology, Pedagogy and Applied Psychology, University of Padua, Padua, Italy e-mail: [email protected]; [email protected] A. Sangrà Faculty of Psychology and Education, Universitat Oberta de Catalunya, Barcelona, Spain e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. E. Raffaghelli, A. Sangrà (eds.), Data Cultures in Higher Education, Higher Education Dynamics 59, https://doi.org/10.1007/978-3-031-24193-2_1
Introduction

The present book deals with an emerging phenomenon, datafication: the digital data collected through many interfaces, their uses, and their abuses. This term may well sound odd or even awkward to the reader, so we can imagine their urgent need for a comprehensive definition here. But let us postpone this clarification for a moment, to let the reader experience the same feeling we had as scholars when faced with datafication and the changes it entailed within the digital spaces we used to know and to live in comfortably. Datafication opened more than a gap; it was a sort of precipice for our understanding of the role of the web and of what we used to call the knowledge society. We had to deconstruct our theories, to abandon well-known places, to explore the unknown with enough flexibility and patience to try paths that could lead anywhere, and with tolerance for unanswered questions. Hence, our intention with this book was to make our journey evident, to show our uncertain footprints while exploring a problem that appeared less than 10 years ago, that was enormously amplified by the pandemic 2 years ago, and that will apparently impact our lives in unprecedented ways. It is a phenomenon for which new words are being coined, discussions are being opened, professionalism is being requested, and stories of failure and success are being written. This book was authored by a group of more than thirty scholars who joined a conversation on data practices in Higher Education, with no promises of what we might come across or gain from it – just the start of a journey. We expose here our ongoing research, entangled with our perplexities and our questions about the future. Indeed, while some of us engaged with uncovering the darkest sides of datafication, others centred their work on developing practices around the appropriation and use of data in education and society. The reader might therefore be puzzled by these different perspectives.
We ask the reader to walk in our shoes through the pages of this book with curiosity and flexibility, learning from the cases and concepts provided and treasuring them for their own reflections, practice, and research. If questions remain unanswered, our advice is to consider this an invitation to investigate further. However, to map the territory the reader is about to travel through, this first chapter offers definitions and conceptual lenses to grasp the jargon of datafication. By doing so, we also hope that the reader will become progressively aware of essential facts, and of their study by scholars who anticipated the social hype about data that we are now undergoing. These people’s work has been key to the development of current scholarship on datafication in education overall and in higher education specifically. We also introduce our group and our story of collaboration, to help the reader understand how the several perspectives on data practices and narratives in higher education collected in this book can share a joint space. Indeed, our diverse strands of work converge on a central point: that datafication, behind the continuous technological development and the digital transformation that arrived at – or was perhaps imposed on – the universities, is a phenomenon advancing from different
corners, with varied consequences. In such conditions, generating a coherent and comprehensive landscape for action is still difficult, if not impossible. Therefore, our engagement with the problem must acknowledge its open-endedness, its incommensurability: its complexity. But this fact certainly does not imply immobility, hopelessness, or alienation. Our effort is hence to provide proposals for future action, to keep exploring the contours of this problem and to deepen our understanding of its essence. Naturally, our goal is to open a space for knowledge, development, research, and organisational innovation strategies, which the central actors in higher education could embrace in order to live within the digital and technological transformation brought about by datafication. This chapter also introduces the book’s structure and the cases portrayed, with their analyses of the literature, trends, definitions, and approaches – stemming from each author’s contribution – that will surely help the reader grasp (and perhaps accept) the complexity with which we move. Beyond understanding key terms (and odd terms!), we deemed it relevant to offer some conceptual anchors for reading this book. As a matter of fact, the theory of complexity is applied to understand the entropic and chaotic way datafication appears in several areas of higher education, triggering fragmented responses, ambiguity, and, in the worst cases, harm. Complexity, we argue, is based on the diversified ideas, approaches, dispositions, and mindsets towards data held by several stakeholders. Nonetheless, this relationality is also traversed by power and inequities that put some in the position to extract and use data, and others in the position of being the object of extraction. Moving forward in this chapter, we introduce the concept of data culture as a potential apparatus for exploring and understanding complexity. A data culture is situated and requires the actors’ acknowledgement of their institution’s and communities’ socio-historical context.
Moreover, the actors can transform the data culture, building fairer spaces and practices. Several authors here argue that critical literacies are crucial to understanding data complexity and situatedness. Others introduce experiences and reflections that bring to the fore the significance of engaging all actors and of critically deconstructing data-intensive practices and narratives. On these bases, we contend the following: any higher education institution might embrace diversified data cultures, with visible or invisible habits, traditions, people, narratives, and symbols. Awareness of such elements might not only support an understanding of this situated complexity; it might also enable the transformation necessary to face datafication in higher education. Read in this light, the twelve central chapters composing this book make new sense and build an integrated puzzle, several threads conjoining in a web that will hopefully support future research and practice.
Datafication: A Recent Story…

In our contemporary society, technological evolution is increasingly blurring the lines between our bodies, our minds, and the digital infrastructures we live in. Elevated to a new paradigm for understanding several consequences of life with
technologies, the term “post-digital” started to be adopted in several spaces of the social sciences to refer both to the technological evolution “after the digital” and to a movement critiquing the digital-technical faith in progress (Knox, 2019). Indeed, the visible side of the web had been connected to better information for all in the initial paradigm of the Information Society theorised by Manuel Castells at the dawn of the twenty-first century (Castells, 2000); it was then followed, a decade later, by the enthusiastic idea of a networked society based on the prosocial web (Castells, 2011). But there was another, less visible layer in digital technologies that evolved slowly under the visible web and emerged only recently: massive data extraction and algorithmic processing, to the apparent benefit of a few companies able to implement such data-intensive practices (Boyd & Crawford, 2012; Van Dijck, 2014). The post-digital era was therefore based on two pillars: the first, the increasing use of mobile technologies, with sensors, webcams, and other devices capturing our digital interactions; the second, the growing capacity to transform such interactions into data that could be elaborated and turned into visualisations, recommendations, and the ability for machines to “learn” and predict patterns of human behaviour, with the ultimate goal of steering users’ emotions, cognition, and behaviour (Kitchin, 2014; Zuboff, 2015). This situation did not evolve in a linear way; on the contrary, it unfolded in several directions according to the interests of industry, society, and scholars in the technological arena. To understand how this situation evolved, we will now consider two scenarios and invite the reader to reflect upon them.
The first scenario is built upon many real examples already discussed in pioneering studies on data and society (Poell et al., 2019; Van Dijck, 2014; Zuboff, 2015), namely the way in which we stay informed about recent news from our mobiles. Nowadays, not only do we search for news on the Internet, but we also receive “recommendations” of information of interest from mobile apps. Within such apps, news items are organised by an algorithm based on the data captured from our Internet searches. Moreover, with the possibility of connecting the data extracted from the different apps we prefer, particularly social media, our data becomes more and more specific about our personal profiles as consumers. News can therefore be selected with great accuracy in relation to our interests. But these advantages quickly become critical issues: the information captured from us as platform users probably goes too far in understanding who we are and what we do in our private lives. The pre-selected news can capture our attention and induce us to keep scrolling for more; to buy products and services offered in such an informational context; and to polarise our opinions, exacerbating our emotional reactions not only in digital but also in face-to-face contexts. This approach to information was just one of several so-called data-driven developments and practices. In fact, such an approach quickly pervaded other areas of human activity, with innumerable nuances relating to the problems raised by aspects such as: the type and quantity of data collected; users’ privacy; badly calibrated algorithms leading to unexpected results; and, last but not least, the unethical control of human behaviour. At this point, the reader might recall one of the first scandals produced by such algorithmic procedures, circulated in the mass media under the name
of “Cambridge Analytica” (or the Facebook–Cambridge Analytica data scandal),1 fully disclosed in 2018: a data breach potentially affecting more than 80,000,000 users was the starting point for revealing approaches to data collection without users’ consent, and the later usage of that data to build users’ psychological profiles and hence address specific messages to them. Most importantly, those messages did not relate only to the advertising of commercial products or services, but also sought to persuade users to vote for specific candidates. But let us now imagine another scenario. As all those working in multilingual contexts know, transcription and translation are time-consuming activities requiring high levels of linguistic expertise. Many of us will have experienced the use of automatic translators within digital systems, such as browsers. About 20 years ago, the translations were so imprecise that they were deemed practically useless, requiring more effort to correct the final outputs than a manual translation would have taken. The increasing usage of such digital tools and apps connected to our web search engines slowly fed an impressive database of cases and approaches for connecting words. These datasets (or corpora, for collections of text as data) were progressively handled with algorithms which improved their performance with every trial. And these practices were a matter of debate within the flourishing field of natural language processing. Nowadays, it is easy to adopt automated tools for transcription and translation, which allow us to read web pages or social media posts and videos transcribed and translated in real time, with very good accuracy. Of course, the precision depends on the data the algorithms have been trained on, with minority languages struggling to reach the standards of dominant languages (Olive et al., 2011; Zampieri et al., 2020).
The same approach was applied to image detection and classification in a field of research denominated “computer vision”, which arrived at amazing but contested results, such as the identification of objects at the base of, for example, automatic car driving or military uses of drones (Scheuerman et al., 2021). Image recognition, and particularly facial recognition, raised concerns and brought to light terrible cases, such as the bias in detecting African-American women’s faces; the association of black women with sex work and housekeeping jobs in image searches on the Internet; or the intensive targeting and tracking of poor people, in comparison with other citizens, in the delivery of basic social assistance (Eubanks, 2018; Noble, 2018). After considering these two scenarios, let us carry out a silent exercise of reflection on our personal position regarding data in society. We might possess professional knowledge that endows us with a particular understanding of the phenomena introduced above. And that same knowledge will direct our attention and our disposition towards data in different ways: we might fear the consequences of a datafied society more than we would love technological innovation leading to futuristic intelligent systems interacting with us. Our main questions around such systems are: Who designs and controls them? Whose interests do they uphold?
1 For full coverage, see Wikipedia in English and other languages: https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal
What are the unintended consequences? Or, vice versa, we might be passionate followers of technological evolution, and perhaps in a position to contribute to it, as computer scientists, for example. In that case, more than fear, we wish to assure others that everything “is going to be all right” in the end, for these technologies might solve key problems afflicting our societies. Our main questions will then be: What if we don’t develop intelligent technologies? Will our societies enter a dark era where we merely survive natural events and processes, without any ability to understand or transform them?
What Is Your Position?

Indeed, now we are prepared to explore how data-driven practices have been embraced with boundless enthusiasm in our societies, and later contested. An initial wave envisaged them as dynamic and generative powers for innovating smart technologies such as translation. Data was considered the fuel of the post-digital society’s engines. Data flows and processing could indeed support the development of automatisms and dynamic human-information interactions (Kitchin, 2014), leading to the development of the Artificial Intelligence industry, including the automation of recommendations, tasks, and entire services, not only in basic work but also in areas such as medical diagnosis, mobility in big cities, home automation, and education, to mention but a few (Crawford, 2021; Fry, 2019). Recently, transnational bodies such as the European Commission, the OECD and UNESCO have expressed the compelling need to formulate policies and approaches that promote AI as a powerful driver of economic development (European Commission, 2021; OECD, 2019b; UNESCO, 2020). However, through the examples introduced above, we can observe that such innovations come at a cost, often made invisible by powerful interests. In fact, the pace of technological development in the sector of automation and AI has increased exponentially in the last 10 years. This situation was accompanied by an initial lack of social critique challenging the overly optimistic stance on the power of data (van Es & Schäfer, 2017). The term datafication (https://en.wikipedia.org/wiki/Datafication) was hence coined to point out the dreadful consequences of capturing large amounts of digital data to drive algorithmic decision-making and automated services. This phenomenon was central to several studies in which concepts such as “dataveillance” (Van Dijck, 2014) and “surveillance capitalism” (Zuboff, 2019) attempted to capture the emerging problem.
Furthermore, the study by Shoshana Zuboff (op. cit.) theorised a new socioeconomic paradigm based on free data extraction associated with induced consumerism and control over human attention. Many other studies focused on algorithmic bias, conceptualising the problem as the “automation of inequalities” (Eubanks, 2018), “racist algorithms” (Benjamin, 2019), or “data colonialism” (Couldry & Mejias, 2019). Other approaches promoted resilience and the possibility of resisting the dark sides of datafication, like “data feminism” (D’Ignazio & Klein, 2020), which associated the ideals of feminism with working
with data in vulnerable communities, disentangling the injustices embedded in data fabricated by the élites; or “data sovereignty”, associated with the need to combat domination and achieve individual and community freedom from the power groups extracting and processing data (Hummel et al., 2021). But the forced usage of digital technologies during the COVID-19 pandemic accelerated datafication, exacerbating what had already been theorised as “platformisation”. Platforms came into our lives with the prosocial web over the last two decades, and they were supposed to ease the “sharing economy”. With platform facilities, any user could create content, from blogs to videos, from spaces for conversation or for commenting on books to selling objects and services (Van Dijck et al., 2018). Let us consider some examples again, to dig deeper into the concepts. Most of us use platforms in our daily life: we read the suggested articles on Google News at breakfast; go to a friend’s house hailing a ride with Uber; talk about a possible holiday with a friend renting a house through Airbnb; call Deliveroo to have our lunch delivered; and watch a movie on Netflix. Since we work at a university, we may adopt a learning management system such as Moodle or Canvas, and we might even integrate other platforms to organise video content, such as Zoom or Kaltura. Furthermore, we could be publishing curated video content on a YouTube channel for our students, or using Twitter to promote an open discussion on a research topic. In all these examples, the content and the interactions are supported by schemes, by a digital infrastructure that shapes our response. There are spaces and ways of communicating, such as a button to “Like” content on Facebook, or stars to provide feedback on the services offered by many of the platforms mentioned above.
The platforms are multi-layered and must be read through relational, multi-perspective lenses: they provide graphic interfaces that make the user experience easy and pleasant; they facilitate users’ many activities in creating content; but they also embed hours of code and algorithmic programming to collect and analyse data that improves customers’ interest in the offered services; and they embed an idea of the “user” that is imbued with the interests and values of those creating and leading such platforms. In this regard, these platforms exacerbate the disparities between the élites able to access and manipulate data and the users exposed to data extraction. This is what van Dijck and colleagues termed the “platform society”, in which we should carefully consider “a profound dispute about private gain versus public benefit” (2018, p. 2) as the main issue. The authors highlight how the Big Tech industry particularly benefited from the power accumulated over years of experimental activity with the data provided by the prosocial web. The neologism “platformisation” started to circulate and was defined by van Dijck’s research group as “the penetration of infrastructures, economic processes and governmental frameworks of digital platforms in different economic sectors and spheres of life as well as the reorganisation of cultural practices and imaginations around these platforms” (Poell et al., 2019, p. 1). The authors located this conceptualisation at the juncture of four different disciplinary fields, namely software studies, business studies, critical political economy and cultural studies (op. cit., p. 2). It is relevant to point out
here that the term was rapidly embedded in educational research, given the visibility these dynamics achieved during the pandemic. The uses of Google Classroom and Zoom were examples intensely studied as part of platformisation. In the first case, the focus was put on the penetration of the platform through free teacher training and its rapid adoption as part of national policies to cover emergency remote education during the lockdown. In the second case, it was highlighted how a private videoconferencing system became a key feature of most virtual learning environments offered by public schools and universities to support daily teaching and learning (Cobo & Vargas, 2022; Decuypere et al., 2021; Gleason & Heath, 2021; Saura et al., 2021). Recently, this problem has been considered central in relation to invasions of students’ privacy and the de-professionalisation of teachers using “pre-configured” learning activities, content, and metrics to follow learning progress (Pangrazio et al., 2022). To sum up, the narrative around data in society was initially enthusiastic. Discourses on data abundance emphasised the opportunity to generate new business models, new professional landscapes connected to the science of data, and open practices in science and the public space (Baack, 2015; Kitchin, 2014). Those discourses were called into question only when critical incidents spotlighted the inconspicuous and the banal in data definitions, metaphors, adoption, and social implications. The first visible downside was the “data divide”, connected to the limited abilities of ordinary citizens, professionals and even researchers to handle the richness of open data (Janssen et al., 2012; Zuiderwijk et al., 2020). There was also growing concern and activism around empowerment and agency with respect to public and private data (Kennedy et al., 2015).
Civic movements connected access with the literacy needed to understand the problem of over- and under-representation in the data captured and made visible by several public and private bodies, and to self-develop forms of expression through local data (Bhargava et al., 2015; D'Ignazio & Bhargava, 2015). Lastly, critical events connected to biased algorithms embedding racism and inequalities contested the idea that data could be a driver of more economical and objective social practices (Benjamin, 2019; Taylor, 2017). Hence, the "digital-technical faith in progress" (Ramge, 2020, p. 166) was interrogated by the social injustice it generated, and the need to promote fair data practices came to the fore (Dencik & Sanchez-Monedero, 2022; Taylor, 2017), adding to the debate on technological development an element that would underscore the relevance of considering multiple voices to understand the problem of datafication.
Data and the University

We briefly introduced what we called "datafication" and "platformisation" as emerging socio-technical problems in contemporary society. But what was going on within education and, particularly, higher education? While striving to survive its crisis of credibility over the last 20 years (Carey, 2015) through the intense adoption of digital technologies that mirrored the digital
1 Data Cultures in Higher Education: Acknowledging Complexity
9
transformation in society, the university adopted data-rich practices. A consultation of the World Higher Education Database in 2022 (https://whed.net/home.php) yielded the impressive number of 20,000 Higher Education Institutions (HEIs) across 196 countries, offering at least a postgraduate degree or a four-year professional diploma. The World Bank estimated (World Bank, 2021) 220 million students in higher education, a figure double the number of students in 2000 (100 million). Also, the number of academics within HEIs has increased impressively, with an OECD report referring to growth from around 1,200,000 researchers in 2006 to more than 2,300,000 in 2016 (OECD, 2019a). And even though "higher education is associated with more favourable social outcomes across the OECD" (op. cit., ch. 1), the effective performance of higher education is still contested. Specifically, HEIs are facing the substantial expansion described above in a very short time, coping with large lectures and adjusting curricula and pedagogical practices accordingly (Bozzi et al., 2021). This has entailed increasing costs while, at the same time, underlining the need to deliver quality and equitable education. Also, a changing capitalist society, moving from industrial production to a growing technological market, led to a compelling demand for technological skills in the workforce (Deming & Noray, 2018). Such a gap was allegedly covered faster by the Big Tech industry (as in the example of Google Career Certificates at https://grow.google/intl/europe/google-certificates) or by massive open online course providers like Coursera and Udemy (Calonge & Shah, 2016; Carey, 2015) than by higher education itself. All these factors pushed forward the digital transformation of higher education, which was exacerbated, as noted before, by the arrival of the pandemic.
Overall, the two main missions of higher education, teaching and research, went through a digital transformation frequently connected to discourses about "modernisation", which encompassed data-intensive practices. Entangled with the idea of "world-class universities" (Rider et al., 2020) and their representation in international rankings was the need to produce measurable outcomes, inventing metrics and instruments to quantify teaching and research and hence serve the purposes of a managerial approach. Digital data collection and extraction were just the tip of the iceberg. Let us consider two very common scenarios in our academic life. The first relates to research activity, where publications have quickly moved from paper-based formats to digital ones. This shift has made our work easier to share amongst colleagues through digital repositories and academic social networks (Pearce et al., 2010). Also, since a handful of powerful publishers concentrate a significant share of published research articles, they are developing metrics and systems to visualise research productivity (see, for example, Clarivate's platform services, https://clarivate.com/). Moreover, we are increasingly accustomed to adopting externally contracted digital services and to moving our data as researchers to "the cloud" (as in the case of Amazon Web Services, https://aws.amazon.com/) to support data-rich practices like big data processing. We might have sought such solutions autonomously, but we more probably arrived at them through our university's technical support staff. In our field, following the
example of disciplines like Astrophysics or Genetics, it is probably necessary to collaborate through digital platforms to share data and produce integrated analyses; or, as in the Humanities, we need access to curated digital collections of real ancient objects preserved in libraries or museums far away from our research desk (Borgman, 2017). Finally, as researchers, we will receive indications from funders (particularly transnational bodies like the European Commission) to share our research from the very beginning with interested audiences and communities, adopting social media or other public digital spaces (Owen et al., 2012). The second scenario relates to our teaching activity. In the last 20 years, we have witnessed the creation of technical units supporting e-learning and suggesting that we transform our teaching through the many possibilities offered by educational technologies (O'Neill et al., 2004). Our university has created web pages to share syllabi with prospective students. It has implemented learning management systems like Moodle (https://moodle.org/), relying on an internal IT task force to maintain the platform, or it has subcontracted services from expert external providers like Blackboard (https://www.blackboard.com/). We have also been offered training to create virtual classrooms; to facilitate participation via online discussions and assignments; and to promote engagement through video lectures and interactive technologies such as student response systems (Bates & Sangra, 2011; Salmon, 2013). We also witnessed increasing enthusiasm around new technologies within teaching practice, with approaches such as gamification bringing the logic of video games to the classroom (Kapp, 2012), or flexible credentials and personal learning spaces like the e-portfolio, allowing students to showcase their professional skills and interests (Jafari & Kaufman, 2006; Martí & Ferrer, 2012).
Hence, we started to be criticised as teachers for not being able to integrate technology-enhanced learning in our increasingly crowded classrooms (Meyer, 2014; Singh & Hardaker, 2014). As we faced the problems of making our university more visible and delivering a quality service to the students already enrolled, we were called to produce more and more digital content, and there was increasing debate around the need to make such content open (Scanlon, 2014). We felt, once again, unprepared to embrace the possibilities of an open scholarship (Costa, 2014). To make things worse, the leading US universities were faster and better at producing such digital content, so when they decided to open up entire courses to the world, participation was massive: it was the dawn of "Massive Open Online Courses", or MOOCs. They stemmed from open digital repositories at the universities (like MIT OpenCourseWare) but quickly moved to private platforms (Macleod et al., 2015). With little solid research evidence about impact (Raffaghelli et al., 2015), we were pushed by our universities to produce content that might display institutional excellence in research and teaching, taking part in this nascent MOOC movement. The technical e-learning units within our universities supported us in transforming our chaotic virtual classrooms into glossy video lectures, followed by automated online quizzes. Indeed, with open and massive classrooms, automation became a keyword. And we learnt that, when it came to supporting quality videos, the huge records captured through students' interactions, and the algorithmic work behind automation, our digital infrastructures and the relatively skilled technical
staff behind them were obsolete. Therefore, we needed those private services more and more (Fiebig et al., 2021). In all this tectonic movement, we learnt that quality learning was not the kernel: instead, the intense traffic of data collected from thousands of students as users was rapidly connected to the possibility of developing more automated interactions tailored to users' interests. It was also possible to predict students' patterns of participation and engagement, catering personalised experiences to them. The commodification of students' data started to become part of a "digital ecosystem" composed of the platform services, led by a private, for-profit company or consortium, the universities, and external head-hunters and companies. The platform services could encompass the provision and maintenance of digital spaces, security, communication activities, interface design and, finally, the issuing of certifications. Most importantly, these services could be based on research-led products like visual panels and recommendation systems for course selection according to learner profiles. Universities would be expected to produce content and eventually validate the issued certifications. As for head-hunters and companies, these could be interested in the qualifications and profiles trained within the system. Like us, many academics started to believe that becoming an autonomous learning community jointly with our students was somehow a dying idea in a fragmented or "unbundled" university, as it was called (Swinnerton et al., 2020). These two scenarios relate to the actual development of digitalisation in research and teaching and are based on scientific literature. And as the reader can imagine, data-intensive practices and datafication are embedded in both stories. The first scenario displays the research side, where the debate about digital scholarship rapidly evolved into the idea of an open science.
The kernel of this transformation was the possibility of making knowledge more available to all. The public endeavour to promote access to open knowledge was central for open science; and data was increasingly considered a central piece of such knowledge with the evolution of big data technologies and processes (European Commission – RISE – Research Innovation and Science Policy Experts, 2016; Kitchin, 2014). Public policies in science and technology invited citizens to explore and contribute to data collection through crowd-sourced schemes of collaboration between researchers and society (European Commission, 2018), but also with the aim of promoting virtuous data ecosystems feeding small businesses and social innovation (European Commission, 2016). Such discourses slowly approached the movement of open education by exploring possible bridges between the two "opens" (Czerwonogora & Rodés, 2019; Stracke et al., 2020). In this regard, open data was identified as an effective Open Educational Resource (Atenas & Havemann, 2015), strengthening higher education students' ability to achieve data literacy through authentic learning situations such as civic monitoring and crowd science schemes. Moreover, the practice of collecting and sharing teaching and learning data was seen as a strategic opportunity to build an Open Education Science (van der Zee & Reich, 2018), as the datasets yielded by educational design-based research could be critically reviewed and shared in a broad educational community. However, such a positive panorama encountered factors hindering its development. Indeed, as told in our scenario, both the research cultures and the low awareness of open data policies and
instruments amongst scholars were considered obstacles (Quarati & Raffaghelli, 2020; Raffaghelli & Manca, 2022). Running counter to these laudable efforts to open knowledge to society, the digitalisation of research products, as seen in our scenario, led to the privatisation of metrics, catered through the same platforms adopted to manage research journals and peer review. Tellingly, those same metrics were adopted by many national systems and universities to evaluate research quality and to decide on the advancement of research careers. Open science should provide a response to such forms of datafication and platformisation (Beall, 2013; Hartelius & Mitchell, 2014), but the problem is still…open. From the teaching side, as we observed in our second scenario, the data about learning and learners collected on unprecedented scales gave birth to educational data mining and learning analytics as pioneering efforts to improve, personalise and automate learning (Siemens, 2013). The scholarly literature found clear value in the developments proposed by learning analytics to support teachers' pedagogical practices and learners' self-regulation through recommendation systems or panels visualising relevant information about the learning process (Nunn et al., 2016). However, as the LA movement evolved, studies in the field pointed out the need to strengthen the connections between LA models and pedagogical theories (Knight et al., 2014); the lack of evaluation in authentic contexts; the problematic uptake by teachers and learners (Vuorikari et al., 2016); the assumptions about algorithms' power to predict, support or address authentic, deep learning (Prinsloo, 2017); and the perils of excessive automation preventing students' agentic and transformational practices, or perpetuating inequalities (Perrotta & Williamson, 2018; Prinsloo, 2020).
Moreover, the massive adoption of social media at the crossroads with learning management systems implied new forms of data of which both teachers and students could be completely unaware (Manca et al., 2016). The post-pandemic scenario revealed the unintended consequences of a fast race towards a naïve, enthusiastic digital uptake, which uncritically responded to managerialism. The education system in general, and universities in particular, had been steadily adopting digital platforms supported by external providers, so, unsurprisingly, they also fell into platformisation. The elusive concept of "cloud computing", encompassing the idea of software-as-a-service delivered by external Big Tech companies, was enthusiastically embraced as a panacea of effectiveness, supporting the much-desired digitalisation of higher education. The facilitated know-how embedded in digital platforms rapidly impacted teaching and research, not to mention other university administrative services (Fiebig et al., 2021). Even more seriously, platformisation led to the subtle contamination of higher education as a public space for knowledge by private standards and principles. As Williamson and Hogan (2021) put it, the context of COVID-19 increased the commercialisation and privatisation of higher education, where the EdTech industry is "simply offering up, opportunistically, in response to sudden coronavirus measures" (p. 10), tools for online learning and content, backed up with analytics to "see" the productivity of students, teachers and, ultimately, institutions. In any case, discourses around data were varied and fragmented, as was the case with the rapid research collaboration supported by open data sharing in several
areas of knowledge during the pandemic. As pointed out on the official EU site on "digital solutions during the pandemic", digital tools have been used to "monitor the spread of the coronavirus, research and develop diagnostics, treatments and vaccines and ensure that Europeans can stay connected and safe online" (European Commission, n.d.). In the same vein, the OECD (2020) highlighted the relevance of open science to "combatting COVID-19". Nonetheless, these discourses could be considered just the expression of hegemonic forces catering to neoliberal values, more aligned with the techno-solutionism criticised above. To sum up, as in the broader social sphere, the initial fervent discourses embraced data-driven practices as an opportunity to improve efficiency, objectivity, transparency, and innovation (Daniel, 2017). But this situation rapidly evolved into concerns about the injustices provoked by datafication, and the need to rethink data infrastructures, narratives and practices by making data extraction and commodification visible (Perrotta & Williamson, 2018; Williamson et al., 2022). Also, the need to promote critical data literacy within higher education, through the strategic engagement of the professoriate, was discussed as one (but not the only) way to approach the problem (Raffaghelli et al., 2020).
Data Through the Lens of Complexity

In each of the contexts depicted in the prior paragraphs, it is impossible to deny the relevance assumed by data as an emergent expression of human activity, and the need to move beyond mere tensions between utopic and dystopic visions of data. As Pangrazio and Sefton-Green pointed out, "the challenges brought about by datafication have inspired a range of different responses", which they classify as ranging from "top-down government regulations of tech-companies through to bottom-up grass roots activism" and which have in common a "focus on changing how data is being 'done' to individuals" (Pangrazio & Sefton-Green, 2022, p. 1). Moreover, a simple look at the progress of AI projects led by Big Tech-funded research, such as OpenAI (https://openai.com/), lets us understand both the relevance and the complexity of the phenomenon, beyond a merely critical understanding of technological advancement. In fact, not only are these private initiatives influencing economies, the labour market, and the skill requirements within it; they are also reconfiguring geopolitical tensions (including those around the production of basic components for digital technologies, as well as the human labour required to produce advanced AI). In such a context, the role of universities as knowledge and skill development institutions, which are part of the public sphere, is being called into question. And, as expressed above, this role is very much dependent on digital infrastructures that introduce private interests into the kernel of higher education institutions. A crucial first step, from our perspective, is to move beyond fragmentation and polarisation between techno-enthusiasm and its critique. In our view, this tension appears to have its roots in what Charles P. Snow called "the two cultures" in
relation to an ongoing scientific revolution over the course of the twentieth century (Snow, 1959). There, Snow discussed the problem of "the intellectual life of the whole of Western society [that] is increasingly being split into two polar groups" (p. 3). Between the two poles, which he represents with literary intellectuals on one hand and physical scientists on the other, lies "a gulf of mutual incomprehension – sometimes (particularly among the young) hostility and dislike, but most of all, lack of understanding" (p. 4). The following passage from Snow's 1959 lecture still resonates today: The non-scientists have a rooted impression that the scientists are shallowly optimistic, unaware of man's condition. On the other hand, the scientists believe that the literary intellectuals are totally lacking in foresight, peculiarly unconcerned with their brother men, in a deep sense anti-intellectual, anxious to restrict both art and thought to the existential moment. And so on. (p. 5)
This debate evolved through conversations and exchanges until, in the 1990s, Morin raised crucial awareness of the need for complex thought, invoking the central role of interdisciplinarity. Human problems, indeed, cannot be forced into disciplinary spaces for dissection, since they require ever-deeper understanding and interpretation of their ultimate consequences for human life and history (Klein, 1996). Nonetheless, it appears that the breach between the sciences and the humanities has persisted to our days. Reverberating with Snow's quotation above, technological development as an expression of the scientific paradigm has been accused by the social sciences of catering to the values of neoliberalism and colonialism, and hence of being supported by power. In turn, the social critique has been accused of obscure intellectualism and of little concern for human problems that require science and technology as means of improving human life (Moats & Seaver, 2019). In any case, the evidence collected by several social scientists points to an excessively rapid engagement with data technologies, deeply entrenched with economic interests, provoking the "invisibilisation" of the social, political and ethical consequences of the AI industry (Benjamin, 2019; Eubanks, 2018; O'Neil, 2016; Zuboff, 2019). Although the ethics debate around AI has been considered performative (Green, 2021), the response has insisted on the crucial benefits of AI systems in areas of health, justice, education, and sustainable industry (Fry, 2019), suggesting that criticisms of AI could sometimes be vain, performative, and even fashionable attempts connected to charismatic career development or institutional visibility.
As an example of this tension, in his effort to reconcile discourses around educational data mining and learning analytics (LA), which are forms of data-driven practices in higher education, Shum (2019) created a space for more critical educational researchers to engage in dialogue with colleagues devoting their energies to intense programmes of technological development in the field. Shum pointed out that "LA is now the subject of continuous, critical academic commentary (…) These tensions come with the territory of working in a field that has immediate and direct consequences for huge numbers of learners and educators, and we cannot afford to ignore how our work is (mis)understood by educational policymakers, practitioners, and citizens at large" (p. 6).
In a nutshell, data practices are not just a phenomenon under study by any specific discipline; they are the emergent side of a social, cultural, and political problem. It is an entropic problem with intended and unintended consequences in a historical context where technological advancement seems inexorable. Hence, data practices overall, and particularly within higher education, require overcoming simplistic declarations relating innovation to development and, at the same time, avoiding performative critiques of datafication. To elaborate on this idea, we build on the concept of "data epistemologies", borrowed from Milan and Van der Velden (2016). In their study on the various responses of activism to datafication, they characterise the problem in the following terms: Zooming in on the many ways in which individuals and groups engage with data politics, we identify two main approaches: datafication is interpreted as a challenge to individual rights or as a novel set of opportunities for advocacy and social change. (p. 6)
The result of such an approach to datafication: translates into a varied action repertoire, i.e. the range of tactics (…) positioned along a continuum between two kinds of responses that are not necessarily in contradiction with each other: contentious attitudes such as obfuscating and resisting vs embracing and making the most of datafication. (p. 6)
Milan and Van der Velden define reactive data activism as the use of social or technical means to defend privacy and anonymity and to resist the surveillance tactics adopted by the state and corporations. Under this epistemology, data activists perceive massive data collection as a threat to their values. Other groups might behave in a wholly different way, mobilising, soliciting, appropriating, or crunching data to generate "new narratives of the social reality questioning the truthfulness of other representations, denouncing injustice and advocating for change" (p. 6). This epistemology of data activism is hence labelled proactive. The authors conclude that "reactive" and "proactive" represent two facets of the same phenomenon: both take information as a constitutive force in society capable of shaping social reality (p. 7). Undeniably, Milan and Van der Velden's approach appears to be a potentially good model for characterising institutional and personal dispositions, narratives, and imaginaries around data. We argue that this model might be of relevance for characterising data practices in HEIs. Above all, it can be a powerful way to uncover areas where interdisciplinary collaboration is necessary or even urgent, integrating strategies of intervention. Nonetheless, by considering different perspectives on data, we do not intend to pursue a pragmatic effort of classification but, rather, to embrace the philosophical approach of complexity. Complexity, indeed, has been theorised by many, but in general terms it characterises the behaviour of systems as interactive and entropic. As a matter of fact, the interactions of systems with local contexts express the efforts made (by the system) in the search for an inner equilibrium, including a balanced tension with external forces, which are in turn influenced by the system (Johnson, 2007, p. 4).
More telling for social scientists (and particularly educationists) is the work of the French philosopher Edgar Morin, for whom complexity is part of being and
knowing beyond the classificatory, analytical, reductionist and causal approach to the world (Morin, 2008, p. 12). Therefore, complexity, "at first glance, is a quantitative phenomenon (…) combining a very large number of units" composing a system. But complexity is also "made up of uncertainty, indetermination and random phenomena" based on the interaction of the system with its context. Morin goes on to say that "complexity is therefore linked to a certain mixture of order and disorder, a very intimate mixture, one that is very different from static conceptions of order/disorder" (p. 20). Moreover, for Morin, methodologically speaking, "it becomes difficult to study open systems as entities that can be radically isolated", for the "interaction of the system and the ecosystem (…) can be conceived of as the 'going beyond'" (p. 11). Indeed, he connected the idea of organism and organisation in biology with the social sciences by discussing the principles of cybernetics, expressing that organisation seeks "to find the common principles of organization, the principles of evolution of these principles, the characteristics of their diversification" (p. 11). It is in these terms that data, as a social dynamic, crosses the universities as organisations that cannot be considered in isolation, but in their struggle to configure themselves, absorbing the external forces of datafication and re-connecting them with their historical institutional identity as well as their current data epistemologies.
In addition, datafication is an emergent phenomenon based on data infrastructures, or the physical base supporting the lightness of digital clouds; the conceptions and practices around metrics and quantification; the labour force entering data or producing it unconsciously; the data points generated across several systems; the way these are elaborated through simple statistical processing or algorithmic solutions; and the engagement with such representations and artefacts, along with the emotional and behavioural outcomes they trigger (Crawford, 2021; Decuypere et al., 2021). Nonetheless, the complex units composing universities, namely academics, managerial staff, students and others, bring their own positionalities (in terms of personal stories and dispositions towards technologies) that make the whole even more intricate. Can we separate such elements? As Decuypere (2021) did in his topological view of data practices, or Crawford in her effort to map AI as an atlas of intertwined layers of information, we can select models, frameworks, and conceptual instruments for analytical purposes. We must indeed promote these operations for the sake of interpreting and understanding data practices and, particularly, of uncovering injustice. But once the critical deconstruction has been carried out, there is the need (and even the responsibility) to imagine "alternative data futures", to borrow Neil Selwyn's expression (2021). The complexity of data practices is acknowledged hic et nunc, but we must also imagine the type of (social) fabric we are knitting with the current threads of our human activity. To synthesise what we have set out earlier in this section: on one hand, we consider that data futures in HEIs will be "knitted" very much on the basis of the prevalence of, or balance between, proactive and reactive technological movements.
In the proactive case, the efforts to represent, visualise, and synthesise information could support exploration, facilitating workflows and automated services connected to democratising knowledge rather than controlling human behaviour and cognition
by serving power. Particularly, engaging students in such practices might also lead them to participate in a context of open data production and appropriation in science and society (Mazon et al., 2014; Purwanto et al., 2018; Raffaghelli, 2018). On the other hand, whenever the same data practices are interrogated, expressing the reactive epistemology, they raise concerns about data quality, the data divide, students' and teachers' privacy, potential inequalities, false beliefs about objectivity, and a naïve simplification of processes and outcomes. This continuing search for equilibrium should be framed by an understanding of the role of human responsibility in engaging with complexity as part of data futures. However, we are probably far from such a vision. As professional practices, the industry and the academic disciplinary fields of data science evolve separately from both the social sciences' critique and data activism. It hence becomes evident that the imaginaries and approaches to data are extremely diversified, with some practices and narratives embedding contradictions within the same institutional context. This is particularly true for knowledge organisations such as universities, where academics' independent efforts as intellectuals might be in more or less open conflict with institutional discourses. Therefore, unveiling such contradictions and engaging in interdisciplinary dialogues around data futures is more necessary than ever. It is against this backdrop of complexity that we offer the idea of data cultures as a conceptual apparatus enabling us to rethink data practices as a relational, interdisciplinary and multivoiced phenomenon, and hence to promote understanding and reflection around datafication as situated processes that require new (critical) literacies within and from higher education, in order to question, resist or transform data practices and narratives.
Understanding and Transforming Within Complexity: The Concept of Data Cultures

Let's begin with Wikipedia's definition (https://en.wikipedia.org/wiki/Data_culture):
Data culture is the principle established in the process of social practice in both public and private sectors, which requires all staffs and decision-makers to focus on the information conveyed by the existing data, and to make decisions and changes according to these results instead of leading the development of the company based on experience in the field.
This short definition highlights the values of organisational development theory (Watkins & Golembiewski, 1995) and the idea of culture as a hidden element making the institution work. Tellingly, such “soft” vectors were only considered and studied in terms of “blocking” or “enabling” factors of productivity. In the same vein, we could talk of learning organisations (Argyris, 1977) that are able to be agile in response to external pressures. In this regard, a data culture might be an effective
18
J. E. Raffaghelli and A. Sangrà
programme shared and negotiated towards the strategic goals intended by the power structure. In a different vein, institutions (including HEIs) have also been studied through the conceptual apparatuses of cultural-historical activity theory (Sannino, 2011) and critical studies on culture (Agger, 1992/2014), which place crucial attention on cultural contexts and on human activity as a situated and collective expression with dynamics that can encompass tensions and contradictions leading to the development of professional identities. These areas of study have extensive histories and apparatuses that fall beyond the scope of our work, but they concur in their reference to Marxist theories of culture, the contribution of the Frankfurt School and the debates headed by post-modernists, feminist cultural studies and post-colonial studies, inter alia. In short, there is a vision of culture that goes well beyond the common-sensical idea of artistic, literary or philosophical work, and instead stresses the "communicative constitution of meaning in everyday life" (Agger, 1992/2014, p. 6). Elsewhere (Raffaghelli, 2012), we have theorised on the relevance of the cultural-historical perspective of the Scandinavian school, which built on the contributions of the Russian scholars Vygotskij and Leont'ev and was established by Engeström (1987/2015) as a "child of Marxist scholarship" (p. xxiv) to understand HEIs and overall educational change. Organisations can be a system of human activity or a network of several subordinated systems or "knots" (Engeström, 2008). These activity systems cannot be considered ahistorical or extraneous to a cultural context of reference; indeed:

An activity system is by definition a multi-voiced formation. An expansive cycle is a re-orchestration of those voices, of the different viewpoints and approaches of the various participants. Historicity in this perspective means identifying the past cycles of the activity system.
The re-orchestration of the multiple voices is dramatically facilitated when the different voices are seen against their historical background, as layers in a pool of complementary competencies within the activity system. (Engeström, 1991, pp. 14–15, cited in Engeström, 2015)
In extreme synthesis, this theory facilitates the study of elements that explain how an individual action can be embedded in the collective system and how, vice versa, the historical context has implications for the division of labour, the organisation of human communities around the activity and the rules established, all of which affect the individual. Most importantly, this arrangement moves around an object of activity that ultimately ensures that the system's activity leads to desired outcomes. Engeström nonetheless theorises the existence of contradictions, which form the motors of human activity and learning, producing qualitative changes that lead to small or large social transformations. The contradictions can be associated with any element of the system, and are characterised as follows:
• The primary contradiction, inherent to the constitution of each of the elements of the activity system (the division of labour, the subjects involved, the community, the tools, the rules and the object of activity).
• The secondary contradiction, which relates to the conflicts between the activity system's constituents.
1 Data Cultures in Higher Education: Acknowledging Complexity
19
• The tertiary contradiction, which appears when an object of activity is imposed and a community or specific subjects are obliged to take part, with no possibility of re-arranging the rules, the division of labour or the instruments.
• The quaternary contradiction, which concerns the activities "close" to a system that can be incorporated as sub-systems (a new technological tool, a new rule, a new subject, etc.). Naturally, these activities are subordinated to the main activity and build their own activity subsystems, generating potential hybridisation processes through continuous exchanges.

No human transformation within institutional cultures can be deemed possible if the constituent activity systems are not harmonised. In Engeström's terms, "the concept of learning activity can only be constructed through a historical analysis of the inner contradictions of the presently dominant forms of societally organized human learning" (Engeström, 2015, p. 74). Such an operation is only possible through progressive negotiations. Inspired by Bakhtin's "dialogic imagination", Engeström emphasises the idea that, while negotiating change, heteroglossia, the ambiguous ways of referring to facts and ideas that remain unclear within a human group, requires deep and even conflictive recursive interactions to arrive at a relevant readjustment of the activity system (p. 20). Building on Schön's concept of the "generative metaphor" (p. 226), he later posits that metaphors are indeed signals of positive dynamics that re-balance or advance the grasp of the problem a human group is trying to solve. Engeström calls this effort to "go further" and re-structure the configuration and constituents of an activity system the "learning by expanding" process.
Across a long history of studies applying his theory to organisational contexts, the focus has been placed on professional learning as central to the theory's dynamism, since it provokes the "crossing of the boundaries" of established, visible social structures, such as the school or organigrams (Akkerman & Bakker, 2011). The relevance at the institutional level is well captured by David Nicolini (2012), who emphasises that the "complex and systemic-like nature of activity is one of the central and defining aspects of the theory" (p. 119). More recently, the theory has evolved towards the so-called problem of "runaway objects", as objects of activity that have the potential to escalate and expand up to a global scale of influence:

They are objects that are poorly under anybody's control and have far-reaching, unexpected effects. Such objects are often monsters: They seem to have a life of their own that threatens our security and safety in many ways. (…) in present-day capitalism, disasters and shocks are becoming a dominant object, exploited by the economic and political elites to reorganize societal conditions in line with the neoliberal doctrine. Runaway objects are contested objects that generate opposition and controversy. They can also be powerfully emancipatory objects that open up radically new possibilities of development and well-being. The Linux operating system is a well-known example. There are other, less known but potentially very significant new objects being created. (Engeström, 2008, p. 3)
In this characterisation, we clearly see the problem posed by datafication and most data practices, which function as attempts to capture this elusive, recent object of human activity. Organisations in general – and HEIs in particular – are struggling to capture such runaway objects, generating multiple contradictions that require human commitment, insistent negotiation and a recursive, continuing interpretation
of the ongoing activity. Activity theory thus brings a relevant dynamic perspective to the fore. Let us put these concepts to work. The politics and aesthetics of dashboards, a recent frontier of scholarship and a rising concern as HEIs seek to "personalise" the learning environment and promote more "autonomous, self-regulated activity" (Rienties et al., 2018), have been seen as a main object of activity for organising visualisations of psycho-physio-neuro-pedagogical concepts. These are allegedly objective but always encompass semiosis rooted in science as a discourse of power, which generates a second-level contradiction between the rules imposed by those who create the dashboards and those who are obliged to use them. Data-driven technologies are deeply entangled with political and classist phenomena that define dominant discourses of "normality" in cognitive development and in the social and professional behaviour connected to learning performance, which takes the contradiction to the fourth level. The contradiction is then placed between the central activity of education (learning) and the specific activity of developing and imposing a dashboard that embeds values that might well lie beyond pedagogical interests. Indeed, dominant discourses lie behind the designs of devices, apps and algorithms predicting behaviours. In higher education, forms of surveillance are intertwined with students' freedom to define their own time and learning outcomes, with the risk of entering neo-Taylorist structures controlled by an invisible pacing machine, where the ultimate goal is measuring and displaying the system's productivity (Prinsloo, 2019). This particularly applies to the affordances of learning management systems adopted by HEIs – in many cases, with more interest in performativity and productivity than in genuine agreement with the students – as part of the broader HEI culture in classrooms as activity systems.
As expressed earlier in this chapter, data are entities that operate jointly with the social definitions used to make sense of them and with the forms in which they are collected, aggregated, made visible or circulated. To understand data infrastructures, it is necessary to observe whether they force or ignore consent, which is tightly connected to how the data extracted are used for surveillance and control purposes (e.g., through entities such as dashboards, profiles, reports, rankings, etc.), and hence, how data address the political aspirations of governance within an institution. These entities are mediated by algorithms as actionable mathematical conceptualisations. For instance, after a student obtains a score, or after a number of clicks on resources in an online environment, statistical operations lead to a prediction of learning outcomes, and an algorithm may trigger a web message or an AI tutor. The simple fact of selecting pedagogical support, or just informing students that their learning outcomes will probably be negative at the end of the semester, encompasses strong pedagogical assumptions. On this basis, we can assume that this outcome entails intertwined activity systems led by the computer scientists assembling the material components; the educational technologists bringing technocratic or critical approaches that will (not necessarily) conjoin with the development of the technology; the academics in their role of defining relevant, possibly research-based content; the managers considering the ultimate "productivity" goals; and the statisticians consulting to develop the needed
measures. These systems stem from different values and work under different rules, organisations of labour and communities of reference. In the best of cases, these diversified systems engage in dialogue to avoid a fourth-level contradiction. In the worst case, the differences are ignored, producing a "runaway" object: a datafied object that becomes abhorrent or, worse, unjust towards those under its influence. The assessment fiasco in the UK during the first year of the COVID-19 pandemic speaks eloquently to this (Kolkman, 2020). The decontextualised adoption of technological solutions during the pandemic, during which data monetisation spread at the same fast pace as the adoption of these solutions (Williamson et al., 2020), can be deemed another example. The way such activity systems combine within the institution therefore depends on a historical combination of situated beliefs, dispositions, narratives, and occasions to celebrate heroes and aesthetics around data, which ultimately shape what we can denominate "a data culture". A contextualised approach, in which the historical and social contradictions become visible and negotiated, would imply, as Motz and Diaz (this book) suggest, the participation of academics and students in understanding the type of data captured and processed through algorithms that encompass (a) humans in the loop and (b) open-source code that takes technological sovereignty into account. Such a data culture would imply progressive revisions and space to understand the political implications, as Kuhn underlines (Chap. 6, this book), relating data practices to the social context and the territory in which the university is placed. Our approach to an institutional data culture also debunks data neutrality as an impossible fiction that simply reinforces status quo power relations.
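The mechanism described above, in which clicks and scores feed a prediction that triggers an automated message, can be made concrete with a deliberately minimal sketch. Everything here is our own invention for illustration (the function names, the thresholds, the message text); no real learning analytics platform works from rules this crude, which is precisely the point: even a toy version embeds contestable pedagogical assumptions.

```python
# Purely illustrative: a toy "early warning" rule of the kind discussed
# in the text. Thresholds and names are invented, not from any real system.

def predict_at_risk(clicks: int, score: float) -> bool:
    """Toy 'prediction': flags a student when engagement (clicks) and
    performance (score) both fall below arbitrary cut-offs. The cut-offs
    themselves encode a contestable definition of 'normal' behaviour."""
    return clicks < 20 and score < 60.0

def trigger_intervention(clicks: int, score: float) -> str:
    """Decides whether to send an automated message. Even the wording of
    the message carries pedagogical values."""
    if predict_at_risk(clicks, score):
        return "Automated alert: your learning outcomes may be negative this semester."
    return "No message sent."

print(trigger_intervention(clicks=5, score=40.0))
print(trigger_intervention(clicks=80, score=85.0))
```

Notice that the two numeric cut-offs, and the decision to intervene by message rather than, say, by human contact, are exactly the kind of design choices the chapter argues should be made visible and negotiable within a data culture.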
The idea of situated activity systems and their embedded, contextual and relational constituents rejects the reductionisms of data science connected to general, universal principles guiding automatisation and to AI as a key frontier of knowledge. Under this lens, the networked and "glocal" approach (discussed by Vargas et al., Chap. 12, this book) acquires relevance, because the data practices researched or developed in a Northern context cannot simply be "borrowed" by Global South universities. This is mirrored in what others, particularly Knox, earlier disentangled in the case of data capture in massive open online courses or "MOOCs" (Knox, 2016). Data, in such a "global" and "prolific" digital approach, are captured and used for research purposes and indeed form part of the promotional material (p. 81). Data visualisations in MOOCs, one of the results frequently adopted as part of an "actionable" and "personalised" approach, are not merely the result of data massively captured in a centre-peripheral synergy that retraces geographical colonialism, but rather a form of "data-colonialism" (p. 74). Data should serve the purposes of meaning-making at the local level. Not only data, but also the approaches to how data are adopted, are related to the territories and networks in which universities exist. In their comparative study of two large universities that are rather different in tradition and teaching approach, Raffaghelli et al. (2021) demonstrated that the data practices of HEIs differ, probably as an expression of different data cultures. As Atenas et al. (Chap. 10, this book) put it, Latin American engagement with open education and open government leads
to engagement with open data in the search for transparency and empowerment. Santos-Hermosa et al. (Chap. 4, this book) also clearly exemplify the efforts made by the European Union to provide policy recommendations on open science, followed by concrete instruments (open data portals and spaces to publish data and research funded by the European Union, such as Open Research Europe); however, these instruments may appear as secondary activity systems within an existing, accepted system in the (non) data cultures of HEIs, which establishes the rules of career advancement and success. This generates a first-level contradiction, in which the researcher is puzzled about the right approach to data (because the transnational activity supports open data while the university system does not), but also a fourth-level contradiction between the external EU system and the inner university system. The resolution of the fourth-level contradiction encompasses a similar outcome for the first type, as well as a transformation of the HEI (open) data culture that supports open data practices as a recognised and rewarded activity for the researcher. Only the visibility of, and the negotiations related to, the questions posed above concerning data practices and discourses can generate the spaces to explore complexity. Stakeholders' engagement with spaces to make their dispositions and narratives towards data visible allows them to explore, deconstruct and renegotiate data assemblages and materialities (Raffaghelli et al., 2020). For instance, student and teacher participation in the selection of the metrics expressing the "quality" of teaching might impact the way university rankings are considered and used to address the allocation of funds (Pozzi et al., 2019). Discussion of the ethical principles that learning analytics must respect can lead to understanding data more as a synthesis of values and concepts than as an objective and incontestable source of information (Tsai & Gasevic, 2017).
Understanding the relationship between digital and data infrastructures might lead to the promotion of an open source movement supporting the principles of data sovereignty, as expressed in the Berlin Declaration on Digital Society and Value-Based Digital Government (2020). Datafication is also evolving in a way that represents several diversified situations across the globe, with geopolitical areas that are more at risk of delivering data than of extracting value from it, and vice versa, as new forms of data colonialism (Knox, 2016; Prinsloo, 2020; Ricaurte, 2019). In shaping its own data culture, the university must recall the continuous tension between the goals of an educational neo-humanistic perspective based on intellectual autonomy and the technocracy's requirements highlighting scholars' attention to evolving contexts of social and economic innovation. The debate about data as wealth versus data as part of surveillance brings such tensions to the fore (Bayne et al., 2020, p. 173). In short: fair data cultures are spaces of meaning-making where the knots of complex data in society can be disentangled, particularly by relating the local effects to the global dynamics introduced by digital platforms. The concept of data culture also has heuristic potential in the sense of making space for questions and of understanding that many data practices can be in contradiction – under an ongoing process of transformation rather than being perfectly aligned. We build on Selwyn's set of questions (2015) for educational contexts to start exploring HEIs' data culture tensions and contradictions:
What organisational cultures have formed around the use of data within (higher) education settings, and with what outcomes? Where does data work mirror existing institutional structures and hierarchies? Where is data disrupting, changing or leading to new arrangements, relationships and understandings? Where is data leading to a refocusing in practice/understandings towards the measurable and 'visible'? What ethical, legal, managerial and organisational issues are shaping the use of data within (higher) education settings? (p. 77)
Unlike the technocratic vision of a data culture portrayed at the beginning of this section, the cultural-historical approach guides actors to engage in activities that trigger awareness and visibility of data practices. Importance is also given to an engagement that liaises with the participants' agency and ethical values, so that there is no dissonance between institutional choices, the overall direction and the subjectivities taking part in such processes. Indeed, the awareness – among learners, the professoriate in both teaching and research activities, the staff, HEI management and even the families supporting students – of the contextual and material characteristics of data imaginaries provides the basis for uncovering power issues, misrepresentation and inequities, and thereby paves the way to building fairer data practices. To wrap up, we purposely used the word "culture" to refer to the differences in the discourses, practices and narratives that any university might embrace while building and transforming its institutional identity. These processes are reciprocal: from the institution, as an imposition or a context of opportunity for the participants, to the participants themselves, who shape success stories, attached values and celebrated or rejected strategies. A data culture might be completely focused on rankings, metrics, measurement and the idea of objectivity, whereas others might reject the imposition of national or global metrics and might be concerned about participants' agency and privacy related to the use of their data. We support the idea of exploring institution-specific configurations through cultural-historical activity theory. Nonetheless, we acknowledge that the specific layers of technology, regulations and economic and social factors encompass technicalities which our general approach cannot (and does not) intend to cover.
The relevance of the developmental aspects of universities as institutions, encompassing support for professionalism and for the literacies needed to deal with data, should also be highlighted. Engeström's model has limitations too, because the poietic elements that lead institutions to perform and improvise uniquely in response to compelling situations cannot be universally captured by a schematic activity system.
Building (Fair) Data Cultures in Higher Education

Why do we refer to the idea of a "fair data culture"? The idea of a fair data culture stands in contrast to datafication as a pre-existing phenomenon. As discussed above, a data culture is formed by the situated conjunction of values, beliefs, narratives and practices about data. A fair data culture, however, refers to the possibility of making the elements of the data culture visible to all involved, creating spaces for them to engage and to transform the existing data culture. It is at this point that the idea of
fairness can be nurtured by the pre-existing debate around data justice (Dencik & Sanchez-Monedero, 2022; Taylor, 2017). In Taylor's terms, "an idea of data justice is necessary to determine ethical paths through a datafying world" (Taylor, 2017, p. 2). In her proposed framework, based on an extensive review of the literature around social justice and datafication, she considers (a) visibility as one of the key areas of justice in data practices, connected to access to being represented and to informational privacy; (b) engagement with technology as the possibility of sharing data's benefits and promoting autonomy in technology choices; and (c) non-discrimination as the ability to challenge bias and to prevent discrimination (Taylor, p. 9). Therefore, the visibility of a data culture alone would not be a sufficient driver of transformation to embrace data justice, particularly in cases of oppression and injustice, such as students' data extraction, unethical research data practices, and private interests entering an institution through learning platforms. Overall, our approach to understanding a data culture is the first step towards challenging it. Taylor's framework underpins, in more concrete ways, the idea of understanding and transforming, through expansive learning movements, the tensions and contradictions produced by data infrastructures and the connected practices as part of a data culture. And, while we believe that such structural elements also require political engagement, regulations, and the continuing negotiation of private and public interests, they could be modulated through appropriate literacies. We must not forget, at this point, that HEIs have been characterised by their commitment to advancing knowledge in society and, more recently, to promoting the development of capacities for thriving as creative and responsible citizens (Fikkema, 2016; McAleese et al., 2013).
In the case of data practices, the complex tension between the goals of a neo-humanistic perspective and the requirements of technocracy (a matter of discussion since the beginning of the university) emphasises the commitment of HEIs to their role in society. Specifically, universities are equipped culturally and materially to blend advanced interdisciplinary theoretical reflection with empirical research and practice on datafication. In such a space, as envisioned early on by Wilhelm von Humboldt in the nineteenth century, academics and students engage in a conversation that ultimately pushes the latter to take an active part in addressing problematic data practices and cultures as reflective citizens and professionals (Pritchard, 2004). On these foundations, the university is called on to mediate meaning-making around the emerging data practices through the development of literacies. We refer to data literacy, specifically, as the knowledge and skills needed to understand, explore and work with data and within datafied systems (Raffaghelli et al., 2020). But the relevant work contributed by the authors in this book also points to the need to develop critical data literacies, in the sense of supporting the knowledge, understanding and skills required to see the injustices embedded in datafication and to act against them. What are the appropriate spaces to cultivate fair data cultures in higher education? We anticipate that this book offers the reader relevant examples and projects connected to the development of awareness of datafication and platformisation, and the connected actions to develop critical data literacy.
In a nutshell, the reader will find discussions and resources about curriculum design, pedagogical practices, and strategies for faculty development aimed at intervening in institutional strategies, rather than merely following directives or integrating "recommendations" into academic practice. As for curriculum design and pedagogical practices, several authors support the thesis that teachers' and students' engagement in projects and activities triggering specific literacies about datafication and data practices might make a data culture visible in society, in local communities and in HEIs themselves. Making a data culture visible could avoid effects pushed from the outside and hence lead to transformation, contestation or resistance regarding data-intensive technologies. But several contributions also highlight the need to promote spaces beyond the classroom to rethink a data culture through informal, professional learning. They propose collaboratories, workshops, professional development and quality evaluation exercises, in addition to actual research activities, as spaces ensuring that the conceptualisation and problematisation of datafication are kept at the forefront of the agenda, both within and beyond the university. The challenge for us has been to collect several contributions showing how data practices and narratives are evolving in higher education, particularly in connection with the literacies needed to develop a (fair) data culture.
Why This Book?

The contribution of this book stems from the idea that a complex gaze over data cultures can conceive more balanced and constructive data futures for higher education. It also builds on the challenge ahead: to understand how to advance a critique of the intensive use of data as an expression of positivistic and naïve accounts of innovative learning ecosystems in higher education, without excluding the importance of technological innovation for human activity. By exploring cases, scholarly research and reflections that spot the criticalities raised by the choices around technological infrastructures, as well as the meaning assigned to data within HEIs and society, we attempt to shed light on the complexity, but also on possible ways to engage with data, without any intention of reducing or circumscribing it; rather the contrary. We hence offer the idea of "data cultures" as a conceptual approach of heuristic value for those doing research on datafication and platformisation, but also for HEI leaders and technical staff working within the digital transformation and, particularly, for academics dealing with curriculum reforms and technology uptake in their classrooms. By no means can this collective work offer a comprehensive global understanding of data practices in higher education. Our approach is limited in scope, covering mostly Western universities' cases, although it also includes a Global South perspective brought by Latin America and one Asian voice from China. Moreover, the book pays particular attention to academic practice and professionalism, and to data literacy as the central driver of institutional and societal transformation. The
description of policy making and of the social, institutional and technological driving forces of higher education is intended as a contextualisation for our particular interest in the professoriate. Nonetheless, by introducing essential experiences, research and conceptualisation, this book collects elements in response to an essential question: how can data epistemologies be intertwined with a more holistic vision to enhance the potential of data in society while also understanding and rethinking their downsides? This book has its roots in the Webinar Series "Building Fair Data Cultures in Higher Education" (Raffaghelli, 2019). The series explored a topic that was seen as fragmented and complex, requiring study as well as interdisciplinary dialogue. The webinars were structured upon open-ended questions that several guest speakers analysed and discussed on the basis of their own research activity. Nonetheless, the final goal was never to achieve clear responses but to problematise. The questions posed worked as icebreakers, pushing the participants to delve into their research and their conceptions about an evolving field. It was an invitation to spot the unknown, more than to convey certainties. However, a concurrent, transversal point was to consider what role would be played by education as an evolving discipline, and the space for academics' ethos and praxis. The initial questions were:
• What type of data is collected in specific institutional cases, and what are the subsequent conceptual and pedagogical foundations required to process these data?
• What problems of usability of data and data visualisations (e.g. learning analytics dashboards) have been observed within one or more authentic evaluation cycles?
• How are teachers addressing pedagogical practices based on the available data? How do they surf the data-abundance across institutional and social contexts of digital learning?
• How are students addressing their learning processes across data-driven devices and resources? How do they surf the data-abundance across institutional and social contexts of digital learning?
• Is there critical awareness of the visibility and usage of standardised and social data?
• What are the skills and abilities required to work with data, beyond mere technique, embracing the ethics, the politics and the aesthetics of data?
• Can data and data interpretations be shared? Will this provide more democratic access to knowledge?

These initial questions set the basis for foreseeing the challenges faced by researchers and educators, which were discussed with the authors. Each of their contributions made clear the need to rethink digital infrastructures as an element circumscribing HEI academics' and students' freedom, imposing practices and generating injustices such as privacy violations or teachers' de-professionalisation. We all embraced the idea of cultivating critical data literacies and discussed the ways in which they could be developed for all HEI stakeholders, so that they might engage in productive dialogue, or at least in negotiating spaces of conflict. We finally considered the need to elevate the internal, situated concerns, once made visible, to a political level with implications
1 Data Cultures in Higher Education: Acknowledging Complexity
to regulatory reforms, funding for alternative technologies, curriculum development, and so on. Still, an open question is to what extent the focus on exploring and transforming a data culture might be a crucial concern in institutions that are dealing with critical situations such as adapting education to the post-pandemic scenario, or surviving a funding crisis at the dusk of neoliberalism. The foci of attention are many, but we must not forget that a reshaped, critically reconsidered digital transformation might also be linked to strengthening higher education communities (Czerniewicz, 2022), taking care of academics and students as persons suffering the inequities generated by the pandemic, war and climate change. In this scenario, understanding the ways in which the digital world enters our universities implies understanding data-intensive practices and interests. Nonetheless, the feeling of helplessness, of being unable to grasp the bigger picture around datafication and platformisation, can be overwhelming for the small, initial groups attempting to explore and transform sides of a data culture. Frustration might stem from the very dynamics whereby platforms offer glittering products and spaces for free, against the near-impossible effort of maintaining local and even national digital infrastructures (Fiebig et al., 2021). The present book revolves around these questions. It certainly attempts to take a step beyond the webinars, for two main reasons. Firstly, the social landscape is changing rapidly, as the pandemic has undoubtedly pushed some human activity in unthinkable directions. Emergency Remote Education, the response that educational institutions, particularly HEIs, were pushed to deliver, put all educational stakeholders under colossal pressure to rethink their practices and roles. Secondly, alongside the pandemic, the authors’ ideas and work evolved.
As committed actors in HEIs and researchers in lifelong learning, distance education, open science, open education, digital scholarship, media education, educational quality, faculty development, and sustainable assessment, they could not but attempt to contribute to a post-COVID-19 scenario. One of the topics that certainly generated important debate and concern was data generation, usage and eventual monetisation in education (Williamson & Hogan, 2021). Therefore, furthering the original topics, the book collates the authors’ and editors’ efforts towards transformative, fair, and agentic practices within higher education.
The Book’s Structure
Beyond this introductory chapter and a following piece by the editors expanding on the policy-making context of data practices in higher education, the book is based on twelve central chapters addressing different areas of activity within HEIs’ data cultures. Each chapter deals with convergent and specific elements around datafication, data practices and data literacies, underlining the inherent complexity of the topic. In order to provide the reader with a scheme in which to situate the various
J. E. Raffaghelli and A. Sangrà
contributions, the book is divided into three thematic parts. The first two parts refer to the idea of reactive or proactive approaches (or epistemologies) to data, and introduce discussions and examples of practice that will help the reader to build a scheme in which to place the diversified approaches to data and to understand their complexity. The third encompasses responses to the challenges posed by data in the postdigital society, particularly emphasising the role of developing critical data literacies in higher education. Across all chapters, the reader will grasp the importance of such literacies in supporting actors’ agency and self-advocacy in their efforts to explore and transform situated data cultures. The first part introduces the discourses of critique and reaction concerning data usage, abuse, and higher education metrics. The chapters in this section examine social justice, advocacy, and agentic practices around datafication as problems that seem to alienate the final user. The second part delves into the most enthusiastic perspectives on open data as a building block of open science and open access to knowledge in society; nonetheless, this section also points out the hill that citizens, students, educators and professionals must climb to make good use of the open data treasure. The third part, in turn, explores the possible responses that education might provide to datafication, to naïve approaches to data, and to the data divide. By no means does any chapter or section aspire to complete coverage of the phenomenology that might be attached to each data epistemology, or to an overall complex approach to data. However, these chapters portray clear examples and a good sampling of the debates and scholarly literature around reactive and proactive data epistemologies. Moreover, effort is focused on connecting the three policy-making agendas introduced in this chapter towards an integrated vision of the academic profession.
Opening the first part, we introduce the problem of the extraction and mining of students’ data and the conversion of data into actionable dashboards or recommender systems for teachers and learners. The chapter “Fair learning analytics: design, participation and transdisciplinarity in the techno-structure” by Motz (computer scientist) and Diaz (lawyer) offers an interdisciplinary perspective analysing the pillars of fairness in data-driven automated systems entering the life of a university. Based on their work in Uruguay, Latin America, the authors argue that interdisciplinarity and openness are crucial elements to be considered from the design stage of any data-driven practice adopted by institutions, while the debate around data sovereignty must be considered as the ultimate context of development. Continuing this first part, another reactive perspective is presented by Raffaghelli and Grion, who elaborate on the use of data for assessment and the significant problems threatening assessment “for” learning, rather than “of” learning. In their chapter, “Beyond just metrics: for a renewed approach to assessment in higher education”, the authors scrutinise the evolution of recent discourses on assessment in higher education to uncover the fallacies of quantification, later transformed into data-driven practices connected to assessment. They problematise academics’ willingness and ability to analyse and understand data fairly under the pressures of metrics and quantification within neoliberal universities. They boldly ask: How
much, and when, do numerical data give us information on the essential elements of training and learning, not just on some isolated or less significant components? Moreover, borrowing Robert F. Kennedy’s famous speech, they warn of the risk of measuring “air pollution and cigarette advertising, and ambulances to clear our highways of carnage” without vital information about “that which makes life worthwhile”. A third problem is considered by Manca and Gleason, who move to a quite different universe, though one very present in HEIs: that of social media for formal and informal learning. Their starting point is the threat posed to democratic participation and online civic engagement by algorithms that manipulate user behaviour and by misinformation pushed in disparate ways. The authors therefore address the main potential and pitfalls of social media use in higher education and how the growing challenges posed by big data and datafication demand new literacy practices. They also present data literacy issues applied to social media and indications for professional development, problems in the use of social media data, ethics in social media research, and the sustainable archiving and retrieval of digital data. As anticipated, the second part reveals a totally different set of problems, relating to the proactive data epistemologies of HE and society. Though the ideals of the democratisation of knowledge and data sovereignty are present, the authors’ attention is driven by data production and appropriation rather than by reacting to data extraction and usage by third parties. Moving from open educational resources and open software as drivers of reflection and change around datafication, the following chapter by Quarati, Santos, Loria and Raffaghelli focuses on the existing problems of the “(open) data divide”. Thus, they reveal the gap between open data publication, quality standards, and actual quality and usage.
The authors uncover problems such as data attrition, insufficient awareness of the relevance of open data policies and practices, and unwillingness to move towards a field that is still inappropriately framed within academics’ career development. They claim that while public data is there for usage and enhancement, scholars, students, and citizens are driven by motivations, abilities and knowledge that still keep them far from actual engagement with the emancipatory principles of open data. Building on the previous topic, Davinia Hernández-Leo et al. present elements of responsible research in the educational technology field, stemming from the rationale of open science. Data collection is presented as a space for building transparency through reproducible, accurate and verifiable research, bringing benefits to individual researchers, the research community, and society. Accordingly, the chapter discusses perspectives on ethics within educational technology research, which matter when collecting and sharing data and when designing and developing technologies, especially those based on data analytics or artificial intelligence techniques. The latter aspects relate to educational software systems’ capacity to support human agency and preserve human well-being. Closing this second part, Kuhn invites us to reflect on the interplay between structure, culture, and students’ agency in the context of open educational practices in HE from a critical and realistic perspective. She claims that the current situation
of datafication might be approached by looking into the deeper levels of social reality in which young people are embedded, particularly the students’ relationship with open and participatory tools in HE. In this regard, educators might provide pedagogical opportunities for open educational practices that enable an explorative and critical mindset. As a result, students might be put in the position of moving beyond the mere acceptance of socio-political structures. Indeed, they are called on to uncover and question the apparatuses and structures that perpetuate the mechanisms of surveillance capitalism, to put it in Zuboff’s (2019) terms. The third part foregrounds educational interventionist perspectives vis-à-vis datafication and its entailed data epistemologies. The part opens with Stewart’s thought-provoking question: Does higher education have a responsibility to approach digital datafication with policies and practices that centre equity? Her chapter outlines the data practices and perspectives reported by university educators in an international survey conducted in summer 2020 and reveals a landscape in which knowledge workers’ understanding of the systems that structure learning and knowledge is profoundly limited. Ultimately, Stewart posits that higher education needs ethics-focused policies and communications that foster data literacies and a plain-language understanding of the risks and implications of datafied platforms among educators and learners. Next, Guitert, Romeu and Romero focus on the digital attitude as a key element of digital competence to support data practices in HEIs. Drawing on their extensive experience as educators, they portray the evolution of digital competence from a conception of mere technological development to a holistic vision of its application from a personal and collective/community point of view. The chapter shows the importance of developing new attitudes consistent with a rapidly evolving digital world.
Being digital citizens implies incorporating a dynamic understanding of technology. In recent developments, the skills for taking part in data cultures have become crucial elements of the digital attitude. Delving into the problem of developing data literacy programmes relevant to the ongoing situation, Bhargava introduces the educational movement “data that matter” in the following chapter. Such programmes introduce students to the social structures and processes that produce data and through which data can have the most impact. Bhargava offers case studies of some of these efforts and summarises four guiding principles to support them. These examples encourage creating playgrounds in which to learn, connecting students to real data and communities, balancing learning goals with student interests, and letting learners take risks. He closes with a “call to arms”, supporting data educators in challenging the historical structures of power embedded in data, diving into the ethical complexities of real work, and teaching how to use data for the overall social good. The next chapter introduces the results of a yearly project aimed at cultivating critical data literacies in higher education. Atenas, Havemann, Kuhn and Timmerman explain their educational approach to supporting academics in teaching data literacies through a critical approach, bridging research with real-life problems. By using open data as Open Educational Resources (OER), the authors show how learners and educators can be supported to co-create knowledge in an
interdisciplinary manner through research-based learning activities, as a catalyst for the appropriation of datafied public spaces. The curriculum created by the authors provides academics with a data ethics framework and a solid theoretical background, alongside tools and activities to develop lifelong learning that can transcend the classroom into society. The authors’ ultimate goal is to shape informed and transformative democratic practices and dialogue, empowering citizens to address social justice concerns. Shifting the perspective from academic educators and students to all stakeholders in HEIs, Yang and Li explore the contribution of data literacy to quality in education. The authors argue that in the era of big data, most higher education data have not been transformed into actionable insights, unlike in other fields such as business intelligence in companies. Based on a polysemic notion of quality, the chapter discusses the relationships between different conceptions of quality, the key stakeholders’ role in quality enhancement or quality assurance, and how their data literacy might impact the quality of higher education. Finally, Rivera, Cobo, Jacovkis and Passeron focus on the compulsory (and compulsive) use of digital platforms in higher education. In the post-digital context we depicted in this introduction, the authors’ analysis of 31 data centres calls into question the role of universities in dealing with data generation in their day-to-day activities. Although the centres analysed in this chapter have different profiles and expertise, they all seek to better prepare higher education institutions to cope with the datafication of society as it manifests in different ways (e.g. digital inclusion, artificial intelligence, privacy, ethical use of data, etc.).
The authors adopted a co-design and virtual ethnography approach, structuring their work in two phases: (1) identification and analysis of the university datafication centres, and (2) selection and deepening of four core dimensions of work connected to these centres. Their research adds to the chapter a relevant analysis that goes beyond data literacy, aligning with the complex approach to data practices in higher education as the result of layers of intervention and context-dependent approaches. Their results indeed highlight global trends, research agendas, and priorities. The authors go further by illustrating the need to understand data not only as “tools” but also as “subjects” with increasing economic and symbolic power. To conclude, we introduce a case study and a discussion of its implications to illustrate the idea of data cultures in higher education. Moreover, we discuss the limits of mere faculty development approaches for acknowledging a data culture, understanding its complexity and acting upon it. We posit that a lack of awareness of the fragmentation that frequently characterises a data culture prevents academics and HEIs from intervening to set policies or implement a professional praxis that moves beyond a narrow, externally driven focus on data towards a contextualised, fair data culture. Against this panorama, the chapter introduces the university’s role in advancing interdisciplinary theoretical reflection, blending it with empirical research and practice around datafication as a phenomenon flowing from the classroom to the institution, from the local to the global society. Finally, the chapter questions the frequent adoption of “faculty development” as a strategy to introduce change in HEIs. An approach to cultivating the literacies needed to explore or transform data cultures
which might enmesh oppression and inequities requires more than just professional learning. Instead, academics’ activism in engaging with the data culture, with an eye to care for the self and others as well as a sense of agency and emancipation, might be the way ahead. Beyond the book’s three announced parts, a chapter brings the perspective of discussants to this book’s ideas and apparatuses. Jen Ross and Jeremy Knox, researchers and leaders at the Centre for Research in Digital Education and Education Futures fellows at the Edinburgh Futures Institute, bring crucial expertise on an advanced agenda around data in education and society. Given their leadership in digital education research, open education, critical posthumanism and new materialism, their perspective will reinforce the book’s message by deepening the problems and conceptual implications introduced by several authors. On the whole, the chapters portraying the discussants’ viewpoints will complete the book by expanding the horizon towards a global scenario of research and practice. To conclude, our editorial perspective highlights the situatedness that characterises data cultures and the need to democratise the digital transformation of HEIs. Therefore, participants’ engagement in data practices and narratives that represent their unique way of envisioning the HEIs they are part of is what we call a fair data culture. We hope that this concept is actionable, empowering HEI stakeholders to explore forms of transforming data practices within their situated contexts of work. We are convinced that the book’s perspectives support important reflections that instantiate most of the recommendations and good practices envisioned in the political agenda at the crossover between the modernisation of higher education, open science, data privacy and ethics. Most importantly, we expect that this volume will be of interest to those new to data practices in society and higher education.
We are optimistic that established scholars in the field will also find interesting ideas to frame the debate on the most effective interventions to face datafication in higher education, beyond the dystopia.
References
Agger, B. (2014). Cultural studies as critical theory. Routledge.
Akkerman, S. F., & Bakker, A. (2011). Boundary crossing and boundary objects. Review of Educational Research, 81(2), 132–169. https://doi.org/10.3102/0034654311404435
Argyris, C. (1977). Double loop learning in organizations. Harvard Business Review. Online.
Atenas, J., & Havemann, L. (2015). Open data as open educational resources: Case studies of emerging practice. https://doi.org/10.6084/m9.figshare.1590031.v1
Baack, S. (2015). Datafication and empowerment: How the open data movement re-articulates notions of democracy, participation, and journalism. Big Data & Society, 2(2), 205395171559463. https://doi.org/10.1177/2053951715594634
Bates, A. W. (Tony), & Sangrà, A. (2011). Managing technology in higher education: Strategies for transforming teaching and learning. Wiley.
Bayne, S., Evans, P., Ewins, R., Knox, J., Lamb, J., Macleod, H., O’Shea, C., Ross, J., Sheail, P., & Sinclair, C. (2020). The manifesto for teaching online. MIT Press.
Beall, J. (2013). Article-level metrics: An ill-conceived and meretricious idea. In Blog: Scholarly open access. Critical analysis of scholarly open-access publishing. http://scholarlyoa.com/2013/08/01/article-level-metrics/
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Wiley.
Bhargava, R., Deahl, E., Letouzé, E., Noonan, A., Sangokoya, D., & Shoup, N. (2015). Beyond data literacy: Reinventing community engagement and empowerment in the age of data (DataPop Alliance white paper series). Data Pop Alliance. https://datapopalliance.org/item/beyond-data-literacy-reinventing-community-engagement-and-empowerment-in-the-age-of-data/
Borgman, C. L. (2017). Big data, little data, no data: Scholarship in the networked world. MIT Press.
Boyd, D., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878
Bozzi, M., Raffaghelli, J. E., & Zani, M. (2021). Peer learning as a key component of an integrated teaching method: Overcoming the complexities of physics teaching in large size classes. Education in Science, 11(2), 67. https://doi.org/10.3390/educsci11020067
Calonge, D. S., & Shah, M. A. (2016). MOOCs, graduate skills gaps, and employability: A qualitative systematic review of the literature. The International Review of Research in Open and Distributed Learning, 17(5). https://doi.org/10.19173/irrodl.v17i5.2675
Carey, K. (2015). The end of college: Creating the future of learning and the university of everywhere. Penguin Publishing Group. https://books.google.com/books?id=FCh-BAAAQBAJ&pgis=1
Castells, M. (2000). End of millennium, volume III: The information age: Economy, society and culture. Wiley.
Castells, M. (2011). The rise of the network society. Wiley.
Cobo, C., & Vargas, P. R. (2022). Turn off your camera and turn on your privacy: A case study about Zoom and digital education in South American countries. In Learning to live with datafication. Routledge.
Costa, C. (2014). Outcasts on the inside: Academics reinventing themselves online. International Journal of Lifelong Education, 34(2), 194–210. https://doi.org/10.1080/02601370.2014.985752
Couldry, N., & Mejias, U. A. (2019). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television and New Media, 20(4), 336–349. https://doi.org/10.1177/1527476418796632
Crawford, K. (2021). Atlas of AI. Yale University Press.
Czerniewicz, L. (2022). Multi-layered digital inequalities in HEIs: The paradox of the post-digital society. In New visions for higher education towards 2030 (Part 2: Transitions: Key topics, key voices). https://www.guninetwork.org/files/guni_heiw_8_complete_-_new_visions_for_higher_education_towards_2030_1.pdf#page=124
Czerwonogora, A., & Rodés, V. (2019). PRAXIS: Open educational practices and open science to face the challenges of critical educational action research. Open Praxis, 11(4), 381–396. https://doi.org/10.5944/openpraxis.11.4.1024
D’Ignazio, C., & Bhargava, R. (2015). Approaches to building big data literacy. Bloomberg Data for Good Exchange. Online. https://dam-prod.media.mit.edu/x/2016/10/20/Edu_D’Ignazio_52.pdf
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press. https://doi.org/10.7551/mitpress/11805.001.0001
Daniel, B. K. (2017). Big data in higher education: The big picture. In Big data and learning analytics in higher education (pp. 19–28). Springer. https://doi.org/10.1007/978-3-319-06520-5_3
Decuypere, M. (2021). The topologies of data practices: A methodological introduction. Journal of New Approaches in Educational Research, 10(1), 67–84. https://doi.org/10.7821/naer.2021.1.650
Decuypere, M., Grimaldi, E., & Landri, P. (2021). Introduction: Critical studies of digital education platforms. Critical Studies in Education, 62(1), 1–16. https://doi.org/10.1080/17508487.2020.1866050
Deming, D. J., & Noray, K. L. (2018). STEM careers and the changing skill requirements of work (Working Paper No. 25065). National Bureau of Economic Research. https://doi.org/10.3386/w25065
Dencik, L., & Sanchez-Monedero, J. (2022). Data justice. Internet Policy Review, 11(1). https://policyreview.info/articles/analysis/data-justice
Engeström, Y. (2008). The future of activity theory: A rough draft [Keynote lecture]. http://lchc.ucsd.edu/mca/Paper/ISCARkeyEngestrom.pdf
Engeström, Y. (2008). From teams to knots: Activity-theoretical studies of collaboration and learning at work. Cambridge University Press.
Engeström, Y. (2015). Learning by expanding. Cambridge University Press.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor (1st ed.). St. Martin’s Press.
European Commission. (2016). Open innovation, open science, open to the world – A vision for Europe. European Commission, Publications Office of the European Union. https://doi.org/10.2777/061652
European Commission. (2018). Facts and case studies related to accessing and reusing the data produced in the course of scientific production. https://ec.europa.eu/info/research-and-innovation/strategy/goals-research-and-innovation-policy/open-science/open-science-monitor/facts-and-figures-open-research-data_en
European Commission. (2021). A European approach to artificial intelligence. Shaping Europe’s digital future. EU Official Website. https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
European Commission. (n.d.). Digital solutions during the pandemic [Text]. European Commission, Coronavirus Response. https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response/digital-solutions-during-pandemic_en
European Commission – RISE – Research Innovation and Science Policy Experts. (2016). Mallorca Declaration on open science: Achieving open science. European Commission. https://ec.europa.eu/research/openvision/pdf/rise/mallorca_declaration_2017.pdf
Fiebig, T., Gürses, S., Gañán, C. H., Kotkamp, E., Kuipers, F., Lindorfer, M., Prisse, M., & Sari, T. (2021). Heads in the clouds: Measuring the implications of universities migrating to public clouds. ArXiv:2104.09462 [Cs]. http://arxiv.org/abs/2104.09462
Fikkema, M. (2016). Sense of serving: Reconsidering the role of universities now. VU University Press.
Fry, H. (2019). Hello world: Being human in the age of algorithms. W.W. Norton.
Germany’s Presidency of the Council of the EU. (2020). Berlin declaration on digital society and value-based digital government. In Declaration (pp. 1–16). Council of Europe. https://www.bmi.bund.de/SharedDocs/pressemitteilungen/EN/2020/12/berlin-declaration-digitalization.html
Gleason, B., & Heath, M. K. (2021). Injustice embedded in Google Classroom and Google Meet: A techno-ethical audit of remote educational technologies. Italian Journal of Educational Technology. Online first. https://doi.org/10.17471/2499-4324/1209
Green, B. (2021). The contestation of tech ethics: A sociotechnical approach to ethics and technology in action. http://arxiv.org/abs/2106.01784
Hartelius, E. J., & Mitchell, G. R. (2014). Big data and new metrics of scholarly expertise. Review of Communication, 14(3–4), 288–313. https://doi.org/10.1080/15358593.2014.979432
Hummel, P., Braun, M., Tretter, M., & Dabrock, P. (2021). Data sovereignty: A review. Big Data & Society, 8(1), 2053951720982012. https://doi.org/10.1177/2053951720982012
Jafari, A., & Kaufman, C. (2006). Handbook of research on ePortfolios. Idea Group Inc (IGI).
Janssen, M., Charalabidis, Y., & Zuiderwijk, A. (2012). Benefits, adoption barriers and myths of open data and open government. Information Systems Management, 29(4), 258–268. https://doi.org/10.1080/10580530.2012.716740
Johnson, N. (2007). Two’s company, three is complexity. In Simply complexity: A clear guide to complexity theory. Oneworld Publications.
Kapp, K. M. (2012). The gamification of learning and instruction: Game-based methods and strategies for training and education. Wiley.
Kennedy, H., Poell, T., & van Dijck, J. (2015). Data and agency. Big Data & Society, 2(2), 2053951715621569. https://doi.org/10.1177/2053951715621569
Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures & their consequences. SAGE.
Klein, J. T. (1996). Crossing boundaries: Knowledge, disciplinarities, and interdisciplinarities. University of Virginia Press. https://books.google.com/books?id=bNJvYf3ROPAC&pgis=1
Knight, S., Shum, S. B., & Littleton, K. (2014). Epistemology, assessment, pedagogy: Where learning meets analytics in the middle space. Journal of Learning Analytics, 1(2), 23–47. https://doi.org/10.18608/jla.2014.12.3
Knox, J. (2016). Posthumanism and the massive open online course: Contaminating the subject of global education. Routledge.
Knox, J. (2019). What does the ‘Postdigital’ mean for education? Three critical perspectives on the digital, with implications for educational research and practice. Postdigital Science and Education, 1(2), 357–370. https://doi.org/10.1007/s42438-019-00045-y
Kolkman, D. (2020, August 26). ‘F**k the algorithm?’: What the world can learn from the UK’s A-level grading fiasco [Blog post]. Impact of Social Sciences – Blog of the LSE. https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/
Macleod, H., Haywood, J., Woodgate, A., & Alkhatnai, M. (2015). Emerging patterns in MOOCs: Learners, course designs and directions. TechTrends, 59(1), 56–63. https://doi.org/10.1007/s11528-014-0821-y
Manca, S., Caviglione, L., & Raffaghelli, J. E. (2016). Big data for social media learning analytics: Potentials and challenges. Journal of E-Learning and Knowledge Society, 12(2). https://doi.org/10.20368/1971-8829/1139
Martí, M. C., & Ferrer, G. T. (2012). Exploring learners’ practices and perceptions on the use of mobile portfolios as methodological tool to assess learning in both formal and informal contexts. Procedia – Social and Behavioral Sciences, 46, 3182–3186. https://doi.org/10.1016/j.sbspro.2012.06.033
Mazon, J. N., Lloret, E., Gomez, E., Aguilar, A., Mingot, I., Perez, E., & Quereda, L. (2014). Reusing open data for learning database design. In 2014 International Symposium on Computers in Education, SIIE 2014 (pp. 59–64). https://doi.org/10.1109/SIIE.2014.7017705
McAleese, M., Bladh, A., Berger, V., Bode, C., Muelhfeit, J., Petrin, T., Schiesaro, A., & Tsoukalis, L. (2013). Report to the European Commission on ‘Improving the quality of teaching and learning in Europe’s higher education institutions’.
Meyer, K. A. (2014). An analysis of the cost and cost-effectiveness of faculty development for online teaching. Journal of Asynchronous Learning Networks, 17(4), 93–113.
Milan, S., & van der Velden, L. (2016). The alternative epistemologies of data activism. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2850470
Moats, D., & Seaver, N. (2019). “You social scientists love mind games”: Experimenting in the “divide” between data science and critical algorithm studies. Big Data & Society, 6(1), 205395171983340. https://doi.org/10.1177/2053951719833404
Morin, E. (2008). On complexity. Hampton Press.
Nicolini, D. (2012). Practice theory, work, and organization: An introduction. OUP Oxford.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
J. E. Raffaghelli and A. Sangrà
Nunn, S., Avella, J. T., Kanai, T., & Kebritchi, M. (2016). Learning analytics methods, benefits, and challenges in higher education: A systematic literature review. Online Learning, 20(2). https://doi.org/10.24059/olj.v20i2.790
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Penguin.
O'Neill, K., Singh, G., & O'Donoghue, J. (2004). Implementing eLearning programmes for higher education: A review of the literature. Journal of Information Technology Education, 3, 313–323.
OECD. (2019a). Benchmarking higher education system performance. Organisation for Economic Co-operation and Development. https://www.oecd-ilibrary.org/education/benchmarking-higher-education-system-performance_be5514d7-en
OECD. (2019b). Forty-two countries adopt new OECD principles on artificial intelligence. In Going digital. https://www.oecd.org/going-digital/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm
OECD. (2020). Why open science is critical to combatting COVID-19. OECD. https://www.oecd.org/coronavirus/policy-responses/why-open-science-is-critical-to-combatting-covid-19-cd6ab2f9/
Olive, J., Christianson, C., & McCary, J. (2011). Handbook of natural language processing and machine translation: DARPA global autonomous language exploitation. Springer.
Owen, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760. https://doi.org/10.1093/scipol/scs093
Pangrazio, L., & Sefton-Green, J. (2022). Learning to live well with data: Concepts and challenges. In L. Pangrazio & J. Sefton-Green (Eds.), Learning to live with datafication: Educational case studies and initiatives from across the world. Routledge. https://www.routledge.com/Learning-to-Live-with-Datafication-Educational-Case-Studies-and-Initiatives/Pangrazio-Sefton-Green/p/book/9780367683078
Pangrazio, L., Stornaiuolo, A., Nichols, T. P., Garcia, A., & Philip, T. M. (2022). Datafication meets platformization: Materializing data processes in teaching and learning. Harvard Educational Review, 92(2), 257–283. https://doi.org/10.17763/1943-5045-92.2.257
Pearce, N., Weller, M., Scanlon, E., & Kinsley, S. (2010). Digital scholarship considered: How new technologies could transform academic work. In Education, 16(1). http://ineducation.ca/ineducation/article/view/44/508
Perrotta, C., & Williamson, B. (2018). The social life of learning analytics: Cluster analysis and the 'performance' of algorithmic education. Learning, Media and Technology, 43(1), 3–16. https://doi.org/10.1080/17439884.2016.1182927
Poell, T., Nieborg, D., & van Dijck, J. (2019). Platformisation. Internet Policy Review, 8(4). https://policyreview.info/concepts/platformisation
Pozzi, F., Manganello, F., Passarelli, M., Persico, D., Brasher, A., Holmes, W., Whitelock, D., & Sangrà, A. (2019). Ranking meets distance education: Defining relevant criteria and indicators for online universities. International Review of Research in Open and Distance Learning, 20(5), 42–63. https://doi.org/10.19173/irrodl.v20i5.4391
Prinsloo, P. (2017). Fleeing from Frankenstein's monster and meeting Kafka on the way: Algorithmic decision-making in higher education. E-Learning and Digital Media, 14(3), 138–163. https://doi.org/10.1177/2042753017731355
Prinsloo, P. (2019). A social cartography of analytics in education as performative politics. British Journal of Educational Technology, 50(6), 2810–2823. https://doi.org/10.1111/bjet.12872
Prinsloo, P. (2020). Data frontiers and frontiers of power in (higher) education: A view of/from the Global South. Teaching in Higher Education, 25(4), 366–383. https://doi.org/10.1080/13562517.2020.1723537
Pritchard, R. (2004). Humboldtian values in a changing world: Staff and students in German universities. Oxford Review of Education, 30(4), 509–528.
1 Data Cultures in Higher Education: Acknowledging Complexity
Purwanto, A., Zuiderwijk, A., & Janssen, M. (2018). Group development stages in open government data engagement initiatives: A comparative case studies analysis (pp. 48–59). Springer. https://doi.org/10.1007/978-3-319-98690-6_5
Quarati, A., & Raffaghelli, J. E. (2020). Do researchers use open research data? Exploring the relationships between usage trends and metadata quality across scientific disciplines from the Figshare case. Journal of Information Science. https://doi.org/10.1177/0165551520961048
Raffaghelli, J. E. (2012). Apprendere in contesti culturali allargati. Formazione e globalizzazione [Learning in broadened cultural contexts. Education and globalisation]. In Le Scienze dell'apprendimento: Cognizione e Formazione. FrancoAngeli. http://www.francoangeli.it/Ricerca/Scheda_libro.aspx?CodiceLibro=1361.1.1
Raffaghelli, J. E. (2018). Open data for learning: A case study in higher education. In A. Volungeviciene & A. Szűcs (Eds.), Exploring the micro, meso and macro: Navigating between dimensions in the digital learning landscape. Proceedings of the EDEN annual conference 2018 (pp. 178–190). European Distance and E-Learning Network. ISBN 978-615-5511-23-3.
Raffaghelli, J. E. (2019). Webinar series «Building Fair Data Cultures in Higher Education: Emerging practices, professionalism and the challenge of social justice». In Research project Professional Learning Ecologies for Digital Scholarship: Modernizing Higher Education by Supporting Professionalism. https://bfairdata.net/perspectivas/
Raffaghelli, J. E., & Manca, S. (2022). Exploring the social activity of open research data on ResearchGate: Implications for the data literacy of researchers. Online Information Review. Ahead-of-print. https://doi.org/10.1108/OIR-05-2021-0255
Raffaghelli, J. E., Cucchiara, S., & Persico, D. (2015). Methodological approaches in MOOC research: Retracing the myth of Proteus. British Journal of Educational Technology, 46(3), 488–509. https://doi.org/10.1111/bjet.12279
Raffaghelli, J. E., Manca, S., Stewart, B., Prinsloo, P., & Sangrà, A. (2020). Supporting the development of critical data literacies in higher education: Building blocks for fair data cultures in society. International Journal of Educational Technology in Higher Education, 17(1), 58. https://doi.org/10.1186/s41239-020-00235-w
Raffaghelli, J. E., Grion, V., & de Rossi, M. (2021). Data practices in quality evaluation and assessment: Two universities at a glance. Higher Education Quarterly. Online first. https://doi.org/10.1111/hequ.12361
Ramge, T. (2020). Postdigital: Using AI to fight coronavirus, foster wealth and fuel democracy. Murmann Publishers GmbH.
Ricaurte, P. (2019). Data epistemologies, the coloniality of power, and resistance. Television and New Media, 20(4), 350–365. https://doi.org/10.1177/1527476419831640
Rider, S., Peters, M. A., Hyvönen, M., & Besley, T. (2020). Welcome to the world class university: Introduction. In S. Rider, M. A. Peters, M. Hyvönen, & T. Besley (Eds.), World class universities: A contested concept (pp. 1–8). Springer. https://doi.org/10.1007/978-981-15-7598-3_1
Rienties, B., Herodotou, C., Olney, T., Schencks, M., & Boroowa, A. (2018). Making sense of learning analytics dashboards: A technology acceptance perspective of 95 teachers. The International Review of Research in Open and Distributed Learning, 19(5). https://doi.org/10.19173/irrodl.v19i5.3493
Salmon, G. (2013). E-tivities: The key to active online learning. Routledge.
Sannino, A. (2011). Activity theory as an activist and interventionist theory. Theory & Psychology, 21(5), 571–597. https://doi.org/10.1177/0959354311417485
Saura, G., Gutiérrez, E. J. D., & Vargas, P. R. (2021). Innovación Tecno-Educativa "Google". Plataformas Digitales, Datos y Formación Docente ["Google" techno-educational innovation. Digital platforms, data and teacher education]. REICE. Revista Iberoamericana sobre Calidad, Eficacia y Cambio en Educación, 19(4). https://doi.org/10.15366/reice2021.19.4.007
Scanlon, E. (2014). Scholarship in the digital age: Open educational resources, publication and public engagement. British Journal of Educational Technology, 45(1), 12–23. https://doi.org/10.1111/bjet.12010
Scheuerman, M. K., Hanna, A., & Denton, E. (2021). Do datasets have politics? Disciplinary values in computer vision dataset development. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), Article 317. https://doi.org/10.1145/3476058
Selwyn, N. (2015). Data entry: Towards the critical study of digital data and education. Learning, Media and Technology, 40(1), 64–82. https://doi.org/10.1080/17439884.2014.921628
Selwyn, N. (2021). Critical data futures [Pre-print]. In W. Housley, A. Edwards, R. Montagut, & R. Fitzgerald (Eds.), The Sage handbook of digital society. https://doi.org/10.26180/15122448.v1
Shum, S. J. B. (2019). Critical data studies, abstraction and learning analytics: Editorial to Selwyn's LAK keynote and invited commentaries. Journal of Learning Analytics, 6(3), 5–10. https://doi.org/10.18608/jla.2019.63.2
Siemens, G. (2013). Learning analytics. American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851
Singh, G., & Hardaker, G. (2014). Barriers and enablers to adoption and diffusion of eLearning: A systematic review of the literature – A need for an integrative approach. Education and Training, 56(2), 105–121. https://doi.org/10.1108/ET-11-2012-0123
Snow, C. P. (1959). The two cultures and the scientific revolution. Cambridge University Press.
Stracke, C., Bozkurt, A., Conole, G., Nascimbeni, F., Ossiannilsson, E., Sharma, R. C., Burgos, D., Cangialosi, K., Fox, G., Mason, J., Nerantzi, C., Obiageli Agbu, J. F., Ramirez Montaya, M. S., Santos-Hermosa, G., Sgouropoulou, C., & Shon, J. G. (2020, November). Open education and open science for our global society during and after the COVID-19 outbreak. In Open education global conference 2020. https://doi.org/10.5281/ZENODO.4275632
Swinnerton, B., Coop, T., Ivancheva, M., Czerniewicz, L., Morris, N. P., Swartz, R., Walji, S., & Cliff, A. (2020). The unbundled university: Researching emerging models in an unequal landscape. In N. B. Dohn, P. Jandrić, T. Ryberg, & M. de Laat (Eds.), Mobility, data and learner agency in networked learning (pp. 19–34). Springer. https://doi.org/10.1007/978-3-030-36911-8_2
Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2), 1–14. https://doi.org/10.1177/2053951717736335
Tsai, Y.-S., & Gasevic, D. (2017). Learning analytics in higher education – Challenges and policies. In Proceedings of the seventh international learning analytics & knowledge conference (LAK '17) (pp. 233–242). https://doi.org/10.1145/3027385.3027400
UNESCO. (2020). Virtual discussion of the Ad Hoc Expert Group (AHEG) for the preparation of a draft text of a recommendation on the ethics of artificial intelligence (SHS/BIO/AHEG-AI/2020/3 REV). https://unesdoc.unesco.org/ark:/48223/pf0000373199
van der Zee, T., & Reich, J. (2018). Open education science. AERA Open, 4(3). https://doi.org/10.1177/2332858418787466
Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance and Society, 12(2), 197–208. https://doi.org/10.24908/ss.v12i2.4776
Van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world (1st ed.). Oxford University Press.
van Es, K., & Schäfer, M. T. (Eds.). (2017). The datafied society: Studying culture through data. Amsterdam University Press. https://doi.org/10.5117/9789462981362
Vuorikari, R., Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., & Rienties, B. (2016). Research evidence on the use of learning analytics. Joint Research Centre – Publications Office of the European Union. https://doi.org/10.2791/955210
Watkins, K. E., & Golembiewski, R. T. (1995). Rethinking organization development for the learning organization. The International Journal of Organizational Analysis, 3(1), 86–101. https://doi.org/10.1108/eb028825
Williamson, B., & Hogan, A. (2021). Pandemic privatisation in higher education: Edtech & university reform. Education International.
Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45(2), 107–114. https://doi.org/10.1080/17439884.2020.1761641
Williamson, B., Gulson, K., Perrotta, C., & Witzenberger, K. (2022). Amazon and the new global connective architectures of education governance. Harvard Educational Review, 92(2), 231–256. https://doi.org/10.17763/1943-5045-92.2.231
World Bank. (2021). Tertiary education. World Bank. https://www.worldbank.org/en/topic/tertiaryeducation
Zampieri, M., Nakov, P., & Scherrer, Y. (2020). Natural language processing for similar languages, varieties, and dialects: A survey. Natural Language Engineering, 26(6), 595–612. https://doi.org/10.1017/S1351324920000492
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Zuiderwijk, A., Shinde, R., & Jeng, W. (2020). What drives and inhibits researchers to share and use open research data? A systematic literature review to analyze factors influencing open research data adoption. PLoS One, 15(9), e0239283. https://doi.org/10.1371/journal.pone.0239283

Juliana E. Raffaghelli is Research Professor at the University of Padua and Associate Researcher of the Edul@b Research Group at the Universitat Oberta de Catalunya. Over the last 15 years, she has coordinated international research units, networks, and projects in Latin America, the Balkans, Turkey, and Western Europe in the field of educational technologies. Her work has covered professional development for technological uptake in international/global contexts of collaboration through a socio-technical and post-colonial lens. More recently, her research has explored emergent manifestations of data practices and artificial intelligence through critical, open, and emancipatory pedagogies. She has coordinated six special issues for international journals and contributed to the field with two books and several research articles and chapters in English, Spanish, Italian, and Portuguese.
Albert Sangrà is Director of the UNESCO Chair in Education and Technology for Social Change, and Professor and Researcher at the Open University of Catalonia, Department of Psychology and Education. A member of the founding team of this university (1994–95), he also served as Director of its eLearn Center. He has worked as a consultant and trainer in several online and blended learning projects in Europe, America, Asia, and Australia, focusing on implementation strategies for the use of technology in teaching and learning and its quality. He is a former Vice-President of the European Foundation for Quality in E-Learning (EFQUEL) and a former member of the Executive Committee of EDEN. He contributes to several academic journals as an editorial committee member and reviewer, and has published several books on the integration of ICT in higher education with publishers such as Jossey-Bass, Springer, Octaedro and Gedisa. He received the 2015 Award for Excellence in eLearning from the World Education Congress and is an EDEN Senior Fellow.
Chapter 2
Data, Society and the University: Facets of a Complex Problem

Juliana E. Raffaghelli and Albert Sangrà
Abstract This chapter complements the introduction to the book "Data Cultures in Higher Education: Emergent Practices and the Challenge Ahead". It explores policy-making areas that impact higher education directly or indirectly: (a) the transformation of higher education (from discourses of modernisation to the problem of managerialism); (b) open science and the data practices connected to research; and (c) the evolution of Artificial Intelligence (AI). In our view, these areas support the initial theoretical assumption that data practices rest on several perspectives on how data are produced and used; hence, they encompass complexity. Moreover, this complexity sets the basis for different reactions from Higher Education Institutions (HEIs), which shape their situated institutional data cultures. By tracing the concrete evolution of policy-making around data in society and in education, our goal is to provide a frame for understanding the relevance of the cases and proposals presented in each of the following chapters.

Keywords Policy making · Data practices · Higher education · Datafication · Managerialism · Platformisation
J. E. Raffaghelli (*)
Edul@b Research Group, Universitat Oberta de Catalunya, Barcelona, Spain
Department of Philosophy, Sociology, Pedagogy and Applied Psychology, University of Padua, Padua, Italy
e-mail: [email protected]; [email protected]

A. Sangrà
Faculty of Psychology and Education, Universitat Oberta de Catalunya, Barcelona, Spain
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. E. Raffaghelli, A. Sangrà (eds.), Data Cultures in Higher Education, Higher Education Dynamics 59, https://doi.org/10.1007/978-3-031-24193-2_2
Characterising Data Practices' Complexity

In the previous chapter, we offered readers an initial conceptual lens on the context in which the problem of datafication in higher education emerges and evolves. In what follows, we introduce the evolving policy context that frames this book's ideas, apparatuses and recommendations for practice. We consider three areas of policy-making that have evolved in parallel, with different impacts on the development of Higher Education Institutions (HEIs): (a) the so-called modernisation of higher education through digital transformation, a trend which encompassed a debate on the problem of managerialism, that is, excessive forms of bureaucratic control over the two main academic activities, teaching and research; (b) the democratisation of academic knowledge through open science and the connected practices around open data; and (c) the development of the AI industry and its consequences for HEIs. These three areas show how the policy context has impacted HEIs and academics' professionalism, shaping concrete spaces of intervention around data. They also highlight areas where tensions and contradictions between stakeholders around data discourses and practices generate complexity. It would be difficult, if not impossible, to establish causal relationships behind the increasing production of policy documents on data practices in HEIs over the last ten years.

Given that the editors belong to a fully online university located in the EU, but one open to an international context by identity, service and strategy since its inception, our excursus on policies moves from the editors' position towards a proposal addressed to a global audience. Moreover, the international profiles of the book's authors ensure a balanced representation of perspectives of interest to that audience.
Therefore, we base our analysis in this section on the EU situation around data practices, in dialogue with the global context. Considering the key driving forces in the EU higher education sector, with no pretension of covering the global situation, is partial and limited, but it offers readers the possibility of understanding a specific context. As a result, readers will be able to make connections and compare the cases we present with local practices closer to them. By no means do we intend ours as a "global" or "international" contribution; rather, we see it as a constellation of cases from Western countries (the EU, South and North America) with a guest perspective from China.

As we will observe below, policy documentation tends to engage in enthusiastic discourses about digitalisation and, in close connection to this, the possibility of harnessing Open and Big Data. Nevertheless, in recent years there has been growing concern about the potentially harmful effects of AI developments and their connected data practices, a problem that has in any case triggered a prompt policy response. Still, technological development at a global scale has been a crucial factor in the formulation of compelling discourses around the modernisation of higher education in the EU through digital transformation. This context has had clear implications for
the professoriate, calling on academics to become "digital scholars" and later to practise an "open and networked scholarship" (Goodfellow, 2014). This was the initial context for data-driven practices, which can also be connected with the policy-making area of open science. The open science agenda recently came to include regulations, standards and instruments supporting the opening up of research data (European Commission – RISE – Research Innovation and Science Policy Experts, 2016). As might be discerned, ideal scenarios of open data production and sharing put pressure on the professoriate to develop new skills and practices, eventually conflicting with the requirements for career advancement. Open data was progressively linked to data-driven practices, namely, to the possibilities of re-using data for automated training in several AI research and innovation areas.

Therefore, we will discuss how the gradual transformation of digital technologies into so-called "smart technologies", with the introduction of AI, led to policy recommendations and regulations attempting to redress so-called algorithmic bias and other harmful effects of automation in several areas of life. Smart technologies applied to teaching and learning in higher education evolved rapidly, generating expectations as well as tensions. However, what is considered a promising futuristic scenario needs further reflection in order to move beyond the data practices required for technological development towards fairer data practices.

Overall, in the three mentioned areas (modernisation of higher education through digital transformation; opening science and education through open data; data-driven practices as a means to integrate smart technologies in HEIs), data discourses and metaphors entered progressively and evolved rapidly over the last decade.
Still, this evolution also encompassed tensions and contradictions connected with the way several actors, particularly the professoriate, experienced technological change. In the following section, we introduce the three areas through a succinct account of their recent history. We also attempt to connect the overall societal change with its impact on HEIs and the professoriate's activity, and we highlight some open debates around the feasibility (and desirability) of integrating data-driven practices into HEIs.
Modernisation or Managerialism in HEIs? The Role of Digital Technologies and Infrastructures

Certainly, the prominence given to entering a digital society has permeated all areas of policy-making, involving the university as a central institution and scholars as key actors. The EU Digital Agenda originated in 2010 and, building on earlier discussions in 2005–2009 (European Parliament, 2010), pushed towards the transformation of the economy through digitisation. Crucial areas of intervention were establishing a fully functioning digital single market, improving system interoperability and compatibility for ICT services, strengthening online security and
confidence in the Internet, promoting high-speed broadband access and considering ICT responses to societal challenges such as climate change, health and an ageing society. This discourse has evolved in recent times. The development of ICT skills for all was deemed central, and it shaped several discourses and recommendations from the EU to the member states around education and training at all levels. Nonetheless, such an agenda was rooted in the EU's need to "catch up" with an international scenario of US dominance and China's rapid technological evolution, as many of the discourses and recommendations were tightly linked to economic growth and professional skill requirements. More recently, the agenda has evolved further, giving rise to the European Declaration on Digital Rights and Principles for a Digital Decade 2020–2030 (European Parliament, 2022), which lays the basis for examining the social impact of digital technologies. This shift is mainly rooted in the pandemic's intensification of the adoption of digital technologies and the critical effects observed at all levels, which require more balanced interventions in terms of privacy, social inclusion and mental health care (Williamson et al., 2020).

In this context, the Modernisation Agenda for Higher Education (HE) provided an overarching policy framework for national and EU policies to lead institutional changes towards the past EU2020 benchmark for the sector (40% of young people in the EU holding a university-level qualification by 2020). Amongst the innovations required, a decade of policy documents mentioned competency-based approaches; flexible, personalised, diversified and inclusive learning pathways; better-informed evaluation; tight relationships with society and the labour market; and the global visibility of the learning offer (High-Level Group on the Modernisation of Higher Education, 2014).
The connection between the EU Digital Agenda and the modernisation of HEIs was thus established through the role assigned to digital technologies and competences. Digital technologies and environments were considered a catalyst of the innovations mentioned above, operating in the following ways:

(a) More open, transparent and visible results. Technologies might enable institutions to implement open and participatory scientific and educational cultures, leading to global visibility and access to the educational offers of new business/institutional models (Kukulska-Hulme et al., 2020).

(b) More networked, flexible and collaborative research and teaching environments. The adoption of formal digital spaces and tools, in connection with informal social/professional media, would lead to new forms of general and professional learning and of reputation recognition; and the circulation of scientific research, as well as teaching resources, could completely change the evaluation of scientific production and students' experiences with authentic and open datasets across frontiers (Czerwonogora & Rodés, 2019; Manca & Ranieri, 2017; Ossiannilsson & Creelman, 2012; Stracke et al., 2020).

(c) Digital and data-driven learning and teaching. The promising field of learning analytics could support a leap forward in tracking and visualising learning/teaching processes and outcomes, helping learners and educators to make
better decisions about learning and teaching towards quality and effectiveness (Daniel, 2017; Ferguson, 2012; Siemens, 2013).

(d) After the pandemic, these discourses evolved to address the compelling need to consider the well-being of students and teachers in digital spaces, including approaches where social interaction and movement are embedded into the digital (Kukulska-Hulme et al., 2022), but also pointing to the critical consequences of adopting digital platforms and tools, taking into account students' access to digital spaces as well as the epistemic imbalances in delivering content from the HEI, as a source of power, to students in their diverse cultural situations (Czerniewicz, 2022).

The above-mentioned changes always encompassed a reflection on digital literacy gaps and requirements. For example, the EU Digital Competence Framework initially emphasised the need to cultivate the skills, attitudes and knowledge required to understand and manage information, and to participate, engage, communicate and create knowledge in digital, open and networked contexts (Carretero et al., 2017). A more recent update highlighted the need to learn to deal with technologies through more critical approaches, to take care of others and oneself, and, last but not least, to understand the presence of AI in daily life (Vuorikari et al., 2022).

In close relation to the scenario depicted above, early in the 2010s policies emphasised the need to modernise teaching and learning in HEIs (European Commission, 2011; High-Level Group on the Modernisation of Higher Education, 2014). In such documents, academics were mostly blamed for being ill-prepared for a digital transformation (Bacow et al., 2012; Mapuva, 2009). At the same time, most training programmes were critiqued for their inability to boost academics' digital literacy (Entwistle, 2007; Meyer, 2014).
Poor design, a lack of active professional learning with guided experimentation in authentic settings, a lack of alignment with the institutional context and a lack of recognition of the skills achieved were some of the causes (High-Level Group on the Modernisation of Higher Education, 2014; Stes et al., 2010). It was therefore suggested that scholars' adoption of technologies should be approached by contextualising their endeavour within institutional quality strategies alongside the process of modernising higher education, considering the real training needs of academic staff at the several stages of their careers (Crosier et al., 2019).

One specific message was strongly stressed by the mid-2010s: opening up education. The EU Commission funded numerous projects, and national agendas showed increasing concern for the creation of open educational resources as part of quality education (European Commission, 2013b; Joint Research Centre (European Commission) et al., 2017). Above all, open education was seen as a driver of social inclusion and equity, a fact that became particularly clear during the pandemic (Bali et al., 2020; Bozkurt et al., 2020). The pandemic led the EU to issue a communication on "a European Strategy for Universities" (European Commission, 2022), which highlighted the relevance of (a) European cooperation and student mobility, including increased financial support; (b) a focus on the quality of programmes, with an eye on how they impact the
development of skills, improve diversity, inclusiveness and gender equality, and promote democratic values; (c) putting Europe in a central position to become an actor of change in the green and digital transitions, which are seen as intertwined; (d) strategies to give the EU a global role and leadership; and (e) monitoring and governance. Evidently, a whole subsection is devoted to the digital transition, particularly considering "hybrid solutions representing a good balance between physical presence and digital tools", while at the same time intending to promote a "structured dialogue with Member States on digital education and skills" (p. 17). Although the digital capacity of universities is mentioned, there is no consideration of the critical problem of data sovereignty or of the lack of data literacy in this regard (Hummel et al., 2021; Raffaghelli et al., 2020; Williamson, 2018). The problem of data is addressed by considering that "leading universities in co-developing guidelines and principles for (…) enabling seamless knowledge and data exchange, reflect the need for interoperability and openness" (ibidem); the strategy therefore focuses only on a proactive vision of data in higher education.

Up to this point, the reader will have grasped the increasing attention paid to faculty development programmes around teaching. The discussion about the modernisation of higher education, however, showed a disconnect with the other area of academic endeavour: research. That area was more evidently connected with the Digital Agenda, research being the fuel of the technological innovations required to support it. In this regard, and across the globe, research, and particularly the way academics communicated science as public knowledge, had started to attract increasing attention to the problem of Open Access (Chan et al., 2002).
Consequently, policy recommendations stressed the need for training researchers: opportunities for formal learning on issues like Open Access, opening up science, open peer review, and open data management and publication. As a matter of fact, a complete picture of what was conceived as digital science was presented in the concept paper on digital science at Horizon 2020 by DG Connect (European Commission, 2013a), which offered a vision of digital science, a conceptual framework and some operational dimensions also guiding researchers’ training. Moreover, the communication of the European Commission COM 2016 178 final (European Commission, 2016a) on European cyberinfrastructures supporting science highlighted that it was important to raise awareness and change incentive structures for academics, industry and public services to share their data and improve data management training, literacy and data stewardship skills (p. 6) – thus improving participants’ competence to adopt cyberinfrastructures. Next, the High-Level Expert Group on the European Open Science Cloud was created. In the first report produced by this group, training was recommended as part of a strategy to promote awareness of open science and researchers’ engagement with it (Ayris et al., 2016). By March 2016, at DG Connect’s Futurium space for public consultation and debate, a working document on Open Scholarship for the adoption of infrastructures introduced a summary of the European Commission’s endeavour on the matter, calling for actions to cooperate with the New Skills and Professions group to design an action plan for training a new generation of scholars and shaping model policies for
2 Data, Society and the University: Facets of a Complex Problem
career development in Open Scholarship (Matt, 2016). Also, in October 2015, the Horizon 2020 work programme on science with and for society launched a call to fund projects training scholars for open science (European Commission, 2016b), which closed in 2020. A new generation of training activities was promoted through this call, connecting new forms of research that bridged open science with open education. Such an approach received full attention with the Covid-19 outbreak (Stracke et al., 2020). The issue of how academics deal with their digital practices appeared in the scholarly literature under the term “digital scholarship” (Pearce et al., 2010, p. 20), which was closely connected to the much earlier debate launched by Boyer’s “new priorities for the professoriate” in the 1990s (Boyer et al., 2015; Crow et al., 2018). The perspective of digital scholarship (Weller, 2012) built on the acceleration and transformation of scholarly work through (a) openness in science, research and teaching; and (b) networking as the new professional way to collaborate across geographical and institutional frontiers, based on the affordances provided by social networks and the Web 2.0. In 2018, though, Weller himself emphasised how policies had supported openness (particularly in education) much more extensively than other skills such as networking and becoming a social scholar (Weller, 2018). Despite the interconnections between digitality, openness and networking, scholars’ digital practices kept following rather traditional schemes, caught between tradition and the external evaluation of academics’ practice for career advancement (Costa, 2014; Jamali et al., 2016). However, the construct of digital scholarship still referred to separate worlds of practice and research, and there were apparent gaps in addressing professionalism for a holistic approach to the modernisation of higher education (Raffaghelli, 2017; Raffaghelli et al., 2016).
Data practices might inherit this situation, evolving in the same silos and posing the same conflicting questions to scholars (Raffaghelli & Stewart, 2020). Concern about data practices, however, initially entered through the agenda of open science, since policies targeted this area of practice more directly than that of data in teaching.
Managerialism and Platformisation in Higher Education

The previous paragraphs showed the evolution of digital and data-driven practices in relation to the EU policy context on higher education and academics’ professionalism, under the broader context of the EU digital agenda. Nonetheless, such a situation was also evolving in an international context, where higher education institutions were under the pressure of managerialism (Peters et al., 2012). Corporate managerialism thrived between the eighties and nineties, boosted by the movement named New Public Management. This movement was built on several theories which considered public administration a body that could (and should) embrace the corporate values of success, productivity, effectiveness and return on investment. In this way, it legitimated budget cuts, downsizing, goal planning and the assessment of the productivity of public institutions like
health care, social assistance, education and justice. Not only was this approach embraced in OECD countries with competitive economies, but it was also delivered through the International Monetary Fund and other transnational bodies supporting international cooperation in the public administration of peripheral countries of the Global South (Stiglitz, 2002). Such an approach put pressure on universities from the Global South, which had been politically active in defending the ideals of autonomy and intellectualism, as well as access to knowledge, in many “developing” societies. It also produced perverse effects on their networking with powerful institutions in order to become part of the so-called “World Class Universities” (Lee & Naidoo, 2020). The decentralisation of management control, contracting external know-how for technical tasks (as we will see further in relation to digital platforms), was part of this approach. It was supported by a doctrine of self-management oriented towards an idea of “quality” closely tied to competitiveness (Harvey & Williams, 2010). A particular side of managerialism took the form of an intense search for quantifiable output measures and performance targets, which ended up focusing on short-term performance through project management approaches. In the same vein, the increasing pressure on academics’ “productivity” in research and teaching became a key driver of HEIs’ culture. In the field of research, bibliometric and scientometric models were adopted for career advancement, with metrics and scores provided by powerful private publishing companies like Elsevier and Thomson Reuters (Rider et al., 2020). Other measures were connected to the “system’s productivity” in teaching, namely, the number of graduates and graduates’ employment rates.
This drew universities’ attention towards degree programmes with the highest numbers of graduates, as well as those with rapid pathways to job placement, to the detriment of philosophy, the arts and the humanities (Barshay, 2021; Bradburn et al., 2021). In particular, it has led to a culture of externally controlled performance, concealing a profound alienation amongst scholars (Lynch, 2010). Two decades of intense pressure on HEIs to tap into the values of managerialism have led to an obsession with metrics. The evidence-based education approach has also contributed to the idea of a measurable and accountable academic practice in the field of teaching (Biesta, 2007, 2015). In this regard, data-driven technologies of governance, inserted into the digital agenda of modernisation, provided a crucial asset facilitating measurement and the automatisation of connections between teaching and research metrics, academic performance and the elaboration of international rankings (Goglio, 2016; Pozzi et al., 2019; Sangrà et al., 2019). Similarly, in the European context, the Bologna process drew increasing attention to the generation of “benchmarks” and indicators, allowing the highly diversified HEIs to “harmonise” their performance (Grek & Ozga, 2009). A perverse effect of this emphasis on measurement was the increasing effort institutions made to reach “international standards”. Beyond instrumentalism and performativity, a damaging effect was the sterilisation of local creative characteristics deeply connected with the territories where the universities played a relevant role. Instead, the universities became concerned with producing an image that fitted the ideals of a world-class university: competitive and global (Gibbs, 2020), in an increasing techno-rational movement.
However, data-driven practices in education “are concerned inherently with attempts to make sense of the social world and understand the ‘way things are’” (Selwyn, 2015, p. 69). And, as Piattoeva puts it, numbers cannot be detached from “a complex living texture of the world” (Piattoeva, 2021, p. 512). In analysing the actors involved in quantified accountability, Piattoeva displays a picture of increasing interest in the materiality of accountability (the numerical side of policy-making) and, more importantly, she insists on the de-contextualisation and detrimental effects that such attention can have on specific educational contexts. In fact, the alleged neutrality of measurements, later followed by digital automatisms, is “fraught with problems and compromises, biases and omissions” (Selwyn, ibidem). Moreover, the continuous production of digitalised “data” generated effects of surveillance, which intensified feelings of control, helplessness and de-professionalisation amongst scholars (Selwyn & Gašević, 2020). Another extremely relevant side of managerialism, however, was the subcontracting of technical services to accelerate the digital/data-driven transformation. Through this operation, HEIs could integrate the logic of private know-how into their bureaucratic structures, which was particularly relevant for the digitisation of activities and services. In a recursive movement of demand (by the universities) and supply (by private companies), the “EdTech” industry has grown into a multi-billion-dollar business, which expanded exponentially with the pandemic (Williamson & Hogan, 2021). The study conducted by Fiebig et al. (2021) is particularly eloquent in this regard.
They performed a longitudinal study of the migration to the cloud in a sample of institutions from the US, the UK and the EU (Netherlands, Germany and Austria) retrieved from a higher education ranking, extracting information on the usage of several types of digital platforms in the areas of administration, research, and teaching and learning. They observed trends of increasing migration to the cloud (from 30% to 50% increased adoption) through the subcontracting of private platforms: Learning Management Systems (e.g. Blackboard) and videoconferencing systems such as Zoom; office suites (including email) like Microsoft Office or the Google Suite; Amazon Mechanical Turk to support supercomputing capabilities and data storage; and other research software supporting quantitative and qualitative methods. A relevant finding was that HEIs in the US, the UK and the Netherlands showed an established trend towards the privatisation of digital platforms supporting several areas of activity. In contrast, Mitteleuropean institutions preferred to keep regions of technological independence, particularly for teaching and learning. This study strongly suggests that the increasing trend of platformisation will not stop and, as we expressed above, the recent EU policy documents do little to foster “data infrastructure” awareness amongst HEIs. Williamson and Hogan’s investigation of the digital services subcontracted by universities during the pandemic confirms this trend at an international level (Williamson & Hogan, 2021). Nonetheless, a perverse effect of such digitisation is the possibility of students’ data commodification (Williamson, 2018; Williamson et al., 2020). Students’ data is used for alleged “educational innovations” that bring the promise of more effectiveness and better university performance at the cost of students’
agency, expression and representation (Broughan & Prinsloo, 2020), as well as the de-professionalisation of academic teaching (Selwyn & Gašević, 2020). As Castañeda and Selwyn put it (2018, p. 6), when referring to the myriad of big and small companies commercialising their educational products, “[a]ltogether, this industry activity continues to generate substantial pressure to reshape and redirect university education” towards “modernisation” or “innovation”. HEIs are being measured, pressured and criticised for their “low scores”, which pushes them to compete with each other and also with the emerging “rapid” learning offers from Silicon Valley and its promise of high-standard jobs. The “End of College” is the preferred metaphor (Carey, 2015). By no means should this critique lead to immobility or a lack of dialogue between universities and industry. What appears absolutely crucial is to distribute the “epistemic agency” limited by the platformisation and datafication of HEIs’ activities (Decuypere & Landri, 2021). Concretely, this means promoting forms of engagement and participation in the digital infrastructures which students, academics and other HEI stakeholders live by.
Open Data: False Promises and Broken Dreams?

In the prior paragraphs, we considered the overall context of policymaking and the evolving economic and cultural landscape as factors of crisis and change within HEIs. We will now turn our gaze to a more specific phenomenon which moves hand in hand with the discourses of the digital agenda: open science and open data.
The Universities as Generators of Open Data

Increasing attention was drawn to the debate on the importance of Open Data [OD] as “data that anyone can access, use and share” (European Data Portal – https://www.europeandataportal.eu/elearning/en/module1/#/id/co-01), due to the crucial role of this component within the panorama of open, public and accessible science (Molloy, 2011). In 2015, Christine Borgman, a prestigious scholar in information science, had already pointed out the importance of exploring data as the most basic research component, despite format differences across scientific disciplines (Borgman et al., 2015). Several international organisations increasingly covered data sharing in their funded projects as the debate achieved increasing prominence in the global and European policy contexts (European Commission, 2016c). Cases such as the following attested to the attention given to the issue in the international policymaking agenda: the OpenAIRE portal as a base for the visibility of open data coming from the European research framework Horizon 2020 (European Commission, 2016c); the Wellcome Trust’s policy statements (Wellcome Trust, 2016); the Netherlands
Organization for Scientific Research (NWO, n.d.); CERN’s policies (CERN, 2018); and the Bill & Melinda Gates Foundation as a private case (Bill & Melinda Gates Foundation, 2017). For the European Commission, open data’s centrality became a reality through the Mallorca Declaration of 2016 (European Commission, 2016a, b, c, d, e). Such strong public and private efforts to publish open data would allegedly enable researchers to work more transparently. For example, replicating scientific work by reusing open data, or working under an economy of shared research resources, would be beneficial strategies for the social fabric of science and for the researchers themselves (McKiernan et al., 2016). The pressure to open data in science also moved in parallel with the call for open data in all public activities, particularly in the context of e-government, since these activities are maintained through public funding (Zuiderwijk & Janssen, 2014). In this context, research on the production, use and quality of open government data was deemed to cross-fertilise, with careful consideration, open data practices in research. As Dai et al. point out, “A culture of open government data needs to be nurtured in both the government and across the entire open data ecosystem of users, including researchers” (Dai et al., 2018, p. 14). Along with the increasing importance given to open government data and open research data policies, investigations in the latter field advanced in understanding researchers’ engagement with open data publication in data repositories and portals, with studies pointing out the difficulties encountered in organising data, understanding copyright and deciding which data portal to select (R. Kessler, 2018). In the policy context, the emphasis was put on transparency and a more rapid circulation of scientific knowledge (European Commission, 2013a; McKiernan et al., 2016).
As a result, open science gained a full area within the EU digital agenda (https://www.europarl.europa.eu/factsheets/en/sheet/64/digital-agenda-for-europe), and digital infrastructures were supported to evolve and offer several affordances for researchers to publish open research data (Matt, 2016). At a global level, open data repositories connected to international or national research infrastructures, such as FIGSHARE (www.figshare.com), the Open Science Framework platform (https://osf.io/) and Harvard Dataverse (https://dataverse.harvard.edu/), adopted international standards, facilitating open data publishing and reuse. The EU repository ZENODO (www.zenodo.org) offered a secure base for opening Europe-funded research. Alongside this development, the most widely used academic social networks, like ResearchGate and Mendeley, included functions relating open research data to the main publication. However, there was intense debate about the competing or complementary nature of academic social networks and the digital repositories supporting open science policies (Lovett et al., 2017). Indeed, the two digital environments were deemed different in several features, such as their sources of funding and the technological infrastructure adopted, and particularly by the focus of their interfaces: on digital content or on the user, respectively (Borrego, 2017; Lovett & Rathemacher, 2016). Nonetheless, the academic social networks raised critical concerns about the way research data (both publications and datasets) were used for private purposes (Lovett et al., 2017).
The initial exploration of open data practices by both international funders and the specialised open data platforms mentioned above reported an increasing uptake of open data publication (Digital Science et al., 2019; European Commission, 2016d; Lämmerhirt, 2016). This phenomenon was likely triggered by the compulsory open data policies promoted by research funding bodies like the EU (European Commission, 2016d) and by the increasing requests from scientific journals to publish open datasets as complementary resources to publications. Indeed, the COVID-19 pandemic increased the need for collaboration amongst researchers in the health sector, but the same was also reported for computer science (Gewin, 2020). Most open data portals rushed to generate pools of resources and sites for data sharing, as in the case of the World Health Organization (https://covid19.who.int/), the Harvard Covid-19 Data collection (https://dataverse.harvard.edu/dataverse/covid19) and the dedicated EU portal (https://data.europa.eu/euodp/en/data/dataset/covid-19-coronavirus-data).
The Universities as Users of Open Data

Notwithstanding the pressure towards open data publication, the practices of sharing and reusing seem to be less common and less valued for career development (Jamali et al., 2016; Quarati & Raffaghelli, 2020; Raffaghelli & Manca, 2019). The importance given to data re-use and sharing is particularly evident in the recent efforts to establish quality standards for open research data, such as the FAIR standards adopted at a global level. This acronym stands for Findable, Accessible, Interoperable and Re-usable as essential elements of open research data quality (Wilkinson et al., 2016). On the whole, gathering, producing, elaborating and visualising data has been considered a central piece of researchers’ professional activity and identity (Koltay, 2017; Pouchard & Bracke, 2016; Schneider, 2013). With the increasing digitalisation of data practices, researchers were pushed to extend their skills, including data-driven methods (working on data extracted from digital platforms), data management planning (Verhaar et al., 2017; Wiorogórska et al., 2018) and forms of intensive data sharing and collaboration amongst inter-institutional and global research groups (Quigley et al., 2013). Therefore, open data sharing and re-use were deemed an important part of academic practice, embracing the philosophies of openness and social activity on the net as part of scholarly activities (Nielsen, 2012; Veletsianos & Kimmons, 2012, 2016). However, such developments would also imply personal and institutional investments in researchers’ data literacy. The positive discourses on the availability of data and the feasibility of its appropriation by both civil society (open government data) and researchers (open research data) immediately encountered obstacles relating to advanced data practices, crowd science, quality in research and second-hand data usage for industry or research purposes. As shown early on by Edwards et al.
(2011), sharing data could be heavily influenced by “science friction”, namely the “difficulties encountered when two
scientific disciplines working on related problems try to interoperate” (Edwards et al., 2011, p. 669). According to these authors, a result of the fourth scientific paradigm is the blurring of disciplinary data practices and the generation of grey zones. As a result, “every movement of data across an interface comes at some cost in time, energy and human attention” (ibidem), which relies heavily on scientific communities and their efforts to “ground” communication with each other. This conceptualisation went beyond researchers’ data literacy to identify the problem of data cultures. Data cultures in research have never been indifferent to researchers’ epistemological and methodological approaches to what data is and can potentially be, as the molecular unit of investigation (Borgman, 2015). In fact, along with the processes of publication, research funding, publication sharing and citation, scientific communities give value to specific forms of data processing, requiring professional learning leading to those practices (Koltay, 2017). Digital scholarship, in which data practices are embedded, encompasses mechanisms of research visibility, reputation and initial career pathways, strengthening opportunities for career advancement (Hildebrandt & Couros, 2016). This process could not be considered transparent or linear, since forms of gatekeeping shape communities’ and individuals’ efforts to stay and progress within the system of science. Geopolitical conditions, gender, age and experience have been considered crucial factors when determining access, stability and practices within the academic community (Cheryan et al., 2015; Wagner, 2008). Therefore, even if open research data are considered a driving force for transparency and research effectiveness (Lyon, 2016; Molloy, 2011), the forms of publication, and particularly of consultation and sharing as social activities around data, have been troubled in several ways (Wouters & Haak, 2017).
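The FAIR principles discussed above (Wilkinson et al., 2016) can be made concrete with a minimal sketch: a hypothetical check of a dataset's metadata record against the four principles. The field names (`doi`, `access_url`, `license`, and so on) and the checks themselves are illustrative assumptions, not part of the official FAIR specification, which defines each principle through far richer sub-criteria.

```python
# Illustrative sketch only: a rough, per-principle check of a dataset
# metadata record against FAIR (Findable, Accessible, Interoperable,
# Re-usable). Field names below are hypothetical, not a standard schema.

def fair_report(record: dict) -> dict:
    """Return a rough verdict for each FAIR principle."""
    return {
        # Findable: a persistent identifier and descriptive metadata
        "findable": bool(record.get("doi")) and bool(record.get("title")),
        # Accessible: a resolvable access URL
        "accessible": bool(record.get("access_url")),
        # Interoperable: an open, standard data format
        "interoperable": record.get("format") in {"CSV", "JSON", "XML", "RDF"},
        # Re-usable: an explicit licence plus provenance information
        "reusable": bool(record.get("license")) and bool(record.get("provenance")),
    }

# A hypothetical record, as a repository might expose it
example = {
    "doi": "10.5281/zenodo.0000000",          # invented identifier
    "title": "Survey on data literacy practices",
    "access_url": "https://example.org/dataset",
    "format": "CSV",
    "license": "CC-BY-4.0",
    "provenance": "Collected 2020; anonymised before deposit.",
}

print(fair_report(example))  # all four principles satisfied for this record
```

The point of the sketch is that FAIR quality is a property of the metadata and infrastructure surrounding a dataset, not of the data values themselves, which is precisely why researchers' data literacy and institutional investment matter.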
Indeed, open science has never been straightforward. As Scheliga and Friesike pointed out early on (2014), considerable discrepancies between scholarly theory and reality emerge when it comes to implementing open data practices. The result of this situation is poor data flow and adoption for various research purposes, as well as for the development of robust data infrastructures, from which data-driven practices and AI stem (European Commission, 2018a, b). While the agenda of open data in open science pursued specific values and interests, scholars were called on, more and more, to understand what data can and should be circulated as public knowledge; which digital infrastructures research data can be stored in; and what data could eventually be adopted to train AI systems within research and teaching in fair ways. Nonetheless, it is also necessary to scrutinise a relevant critique of the open data movement, which lies in the fact that it might be “just an expression of a culture of technological neutrality and radical individualism” (Johnson, 2018, p. 469). The scenario we have depicted so far suggests the inevitability of open data as a frontier of democratic access to knowledge and, hence, as good practice in research (and in public administration overall). Nonetheless, open data might cater to social privilege in terms of who is represented, and how either human groups or phenomena from the natural world relating to specific collectives are represented. On these bases, open data, viewed through the lens of “information (in)justice” as posed by Johnson (ibidem), “cannot be expected to
universally promote justice” (p. 467). The underrepresentation of collectives in open data has been mainly acknowledged by authors committed to data activism (Gutiérrez & Milan, 2019). Moreover, in Johnson’s terms, the institutions promoting open data are, in any case, fostering their own values and techniques, which particularly applies to the universities and to researchers’ or students’ data usage. As he points out, “opening the data (for instance, by allowing students to understand how the recommendations are made) does not change that in the slightest (…)” (p. 269). Conversely, Johnson calls for countervailing the data structures and the tradition of data collection shaped by the “privileged point of view of the academic, the programmer, the bureaucrat, or the activist” (p. 272). For this author, information justice (in which open data plays a crucial role) depends on the symbolic conception and the infrastructure generated to collect, label and share open data. This focus leads us to what can be done with data (open or closed) in our contemporary, postdigital societies: we refer not only to statistical representations or to dynamic, sharable and reusable data representations, but also to programming actions through data. And that leads us to the field of artificial intelligence.
Artificial Intelligence: What Are the Implications for Higher Education?

The connection between data and artificial intelligence (AI) is essential: their reciprocal relationship is built on the way AI systems are fed by data (OECD, 2019b, p. 7). More precisely, while the initial development of AI depended heavily on lab design and programming, the way AI was later developed and adapted to everyday scenarios relied on the possibilities opened up by Big Data technologies. At present, AI systems are contingent on users’ captured and modelled data. This capture of trace data occurs while users interact with digital and intelligent systems (sensors, webcams, touchscreens integrated into machines, wearables, voice assistants and so on). The more technological interfaces and devices are integrated into our lives, the more data can be captured and processed, and the more the paradigm of Big Data is supported. Moreover, technological advancement increases the fluidity and dynamicity with which data flows are arranged and processed to support immediate visualisations or reactions from the systems. As a result, the data needed for AI moves from the labs, research and the public space to private companies, with profound impacts. In fact, such impressive data construction supports machine learning, namely, the possibility of algorithms’ self-adaptation, which ends up adapting services and making them more accurate in their relationship with the human user. On these bases, AI development might appear wild and relentless (Zhang et al., 2021, 2022). With faster and more exact responses in human-machine interaction, the AI systems’ possibilities seemed to align with fictional tales (S. K. Kessler & Martin, 2017). As a result, the chimaera of automation and personalisation of services, heralded as part
of technological progress, appeared to come true. However, after some critical events reported in the press and in social research, concerns were raised about the safety of AI, and modes of global cooperation, standards and norms started to be seen as critical components in the advancement of beneficial AI (Kerr et al., 2020). Countries worldwide started to establish governmental and intergovernmental strategies and initiatives to control this driving force of technological advancement. One important chapter was written by the General Data Protection Regulation, an advanced regulatory proposal by the European Commission to protect EU end users’ privacy and the right to be forgotten (European Commission, 2016e). The most crucial element of AI policies worldwide has been to promote trustworthiness, namely, transparent systems and support for awareness amongst end users. AI and artificial general intelligence (AGI) may be destabilising governments, societies and economies. As AI technologies develop further, there will be an ever-greater need for coordination and cooperation between all states and global powers. A glance at the map created by the OECD’s AI Policy Observatory (https://oecd.ai/dashboards) shows a situation where some countries are promoting more initiatives than others. The US is leading with 47 initiatives, followed by the UK (39), Germany (34), Belgium (22) and Australia (22). This is evidently connected to these countries’ leadership in the sector. Notably, China (9) and Japan (11) have few initiatives in relation to their contribution to the development of the AI industry. The situation looks more balanced for other states that are less influential in production and in debate/policy documents, as is the case of other EU countries like Italy, France, Spain and Portugal (6-9 initiatives); Denmark, Sweden, Finland and Norway (6-9 initiatives); the Russian Federation, Turkey, etc. (8-12 initiatives); and Canada (13 initiatives).
It is also worth considering the debate in developing countries, which might be consumers rather than producers of AI. As a matter of fact, Colombia (14) and Argentina (11) take the lead in AI initiatives amongst Latin American countries. In terms of budget, the Russian Federation, France and the UK are devoting the most resources to AI initiatives. This map only illustrates countries’ overall efforts to develop and control AI, not sectors or specific resources for the area of education and higher education. Another visualisation, of “policy instruments”, indeed highlights the prevalence of instruments for “Governance” (635 initiatives), compared to Guidance (162), Infrastructures (175) and Direct Financial Support (225). Of the governance initiatives, 52 focus on regulatory oversight and ethical advice bodies. When analysing the policies’ target groups, 220 are devoted to developing HEI initiatives (out of 695 initiatives for Research and Educational organisations). This situation shows the increasing interest in an ethical debate and the importance given to higher education in this scenario. The Future of Life Institute, an independent observatory for research and activism relating to vital human activities like sustainability and AI, has produced a similar map visualisation (https://futureoflife.org/ai-policy/) with interesting qualitative information. In terms of policymaking, the states engaged with AI activities coincide with those in the OECD map. However, when examining types of policies, this instrument highlights differences in their approaches, with some creating national plans or strategies
(US, Japan, Germany, France, Finland, Russian Federation, Mexico, Argentina, China) and others generating task forces, expert groups or laboratories (Chile, Brazil, Saudi Arabia, Kenya, Italy). Of these, it is important to highlight that the only country embracing an explicit discourse on the human side of AI is France, with its national strategy “AI for Humanity”. The strategy is based on the “Villani Report” (Villani et al., 2018), which builds on a combination of topics but mainly emphasises the need for continuous revision of the ethical implications of algorithms and automation in human activity. Overall, the OECD, UNESCO and EU recommendations (European Commission, 2018a, 2020c; OECD, 2019b; UNESCO, 2020) deal with the inexorable advance of AI. For the EU 2018 document, the focus was on integrating AI into industrial innovation. Later in 2018, a high-level expert group was created, and in April 2019, ethical guidelines were released (High-Level Expert Group on AI, 2019). In 2020, the EU delivered a White Paper on Artificial Intelligence in which the emphasis was put on trust and excellence as the two sides of AI (European Commission, 2020c). On the whole, the three transnational bodies acknowledge the potential costs of an uncontrolled and unethical application of AI while supporting the need to continue developing the field. As a matter of fact, all three transnational institutions promote policies emphasising the need for investment in AI research and development, which also fosters the digital ecosystem (which provides the data) for AI. Still, they also point out the need to enact policies controlling AI implementation, based on international cooperation for trustworthy AI. And they call for building the skills needed to operate in a society and labour market transformed by AI. For example, the five OECD principles adopted by 42 countries (OECD, 2019a) are “value-based”, promoting “responsible stewardship of trustworthy AI” (p. 3).
Trust and trustworthiness are also keywords in the EU documents: indeed, the White Paper’s subtitle is A European approach to excellence and trust. For the two transnational bodies, trust is based on transparency and responsible disclosure of AI systems; robust technological infrastructures supporting secure and safe functioning; participatory approaches to the design and monitoring of the impact of AI systems; and the usage of AI to promote human rights, the rule of law, cultural diversity, and a fair and just society. However, to embrace such principles, governments are expected to collaborate to connect and control data infrastructures. This is currently a huge concern for Europe, whose White Paper highlights that AI is a “market currently dominated by non-EU players” (p. 4). In fact, not only is high-quality big data necessary to support AI; there is also a need for legal clarity in AI-based applications, especially on personal data, to achieve trust. This is where the legal instruments pioneered by the EU (like the General Data Protection Regulation) come into play and are crucial (European Commission, 2020b, p. 21). The High-Level Expert Group created by the European Commission has produced an Assessment List for Trustworthy AI (European Commission, 2020a) comprising seven areas: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
2 Data, Society and the University: Facets of a Complex Problem
Finally, UNESCO’s documentation is more focused on universal peace, sustainability and human rights, seeking to “counter the risk of growing digital and knowledge divides that could leave behind those who are relatively disadvantaged or excluded, such as people in the least developed countries, women and girls” (UNESCO, 2020, p. 7). Having considered the global situation and the recommendations of three international bodies on AI development, two elements can be regarded as central. Firstly, the intrinsic relationship of AI development with quality and fair data cannot be neglected. Since the initial innovations were based on a “wild” usage of private companies’ available data, the risks were acknowledged sooner rather than later. Therefore, the discussion on data quality remains open, but the alignment between development and the critical events detected through human oversight must improve. This also entails users’ awareness of the systems, their procedures, their failures and the spaces where human oversight can be improved. The other important point is the tension between industry development and innovation and the need to protect human values. As emerges from the European documents, the initial concern was taking part in the growing AI business as a driver of economic development. Later, in 2019, the ethical debate gained attention and pushed the political agenda. However, this tension will not disappear, and it will require continuing interdisciplinary engagement from the scientific community, supported by national and international regulations. It is impossible to put aside the problem of data justice (Taylor, 2017) or the right to preserve data sovereignty (including privacy, representation and access) when dealing with data practices in higher education (Aoun, 2017).
Universities will surely make an essential contribution by revisiting and developing curricula that support interdisciplinary reflection, namely, efforts to bring the technological mindset to the humanities and social sciences, and to introduce ethics and social justice into engineering and computer science. In this regard, we are witnessing the birth of worldwide initiatives promoting a more balanced vision of AI and the data science behind it. Data ethics programmes and their specialist areas are surely growing, with several funded projects across the globe (Raffaghelli, 2020). But interdisciplinary reflection has never been an easy endeavour (Moran, 2010). A field with such evident tensions between ethics and economic development will put interdisciplinary efforts to the test. A specific area of concern connecting AI developments to higher education is that of learning analytics (LA). The field was defined at the 1st International Conference on Learning Analytics and Knowledge as the “measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs” (Long & Siemens, 2011, p. 34). It adopts concepts from computer science, statistics, behavioural science, instructional theory and educational psychology to support teaching practices, foster learners’ independent learning abilities and, last but not least, inform the institution. The field has evolved rapidly in terms of development and testing, but LA’s inclusion as part of daily teaching and university practices is not mainstream (Vuorikari et al., 2016). Moreover, LA’s ethics debate slowly evolved into its current
central importance (Ferguson, 2019; Slade & Prinsloo, 2013; Slade & Tait, 2019). An initial critique targeted the excess of metrics and the quantitative, positivistic and behaviourist paradigm applied to complex objects such as the pedagogical relationship and the learning process. Modelling such objects would instead require stochastic and complex models, for which the crucial data are not always available, assuming the data could capture complex pedagogical phenomena at all (Prinsloo & Slade, 2017; Selwyn & Gašević, 2020). More recently, the problem of students’ data usage has been called into question (Broughan & Prinsloo, 2020). As in the general field of AI developments, institutions have been called on to strike a balance between tracking more and more student data and requests to respect privacy and protect personal data. The GDPR’s principle of proportionality in data collection and usage (Regulation 2016/679 on the Protection of Natural Persons with Regard to the Processing of Personal Data and the Free Movement of Such Data, 2016) supports this reflection. Learners’ rights to restrict processing, opt out or be forgotten (Arnold & Sclater, 2017; Hoel & Chen, 2018) are part of an emerging area of work for institutions (Tsai & Gasevic, 2017), namely, proportionality and transparency in handling students’ data to foster their agency. Moreover, the applicability and transferability of solutions developed in the Global North to the Global South is an evident concern: such solutions entail the commodification of students’ data, made easily accessible by institutions’ need for support and by the free services offered by huge multinationals (Prinsloo, 2020; Williamson, 2018; Williamson & Hogan, 2021). LA models cannot simply be borrowed, since the algorithms on which they are based rely on data extracted from specific contexts. Nonetheless, several platforms offer services supporting dashboards and other analytical tools of doubtful utility for university educators and students.
Accordingly, the development of LA solutions and their integration into learning platforms must be accompanied by institutional reflection on the values and proportionality of data collection; on the openness, transparency and fairness of the algorithms applied to support teaching and learning; and on the continuing analysis and application of national and international regulations to protect personal data and resolve data ownership issues. Moreover, when educational software or a platform is offered free of charge, it must be accompanied by simple and clear information on the type of data usage and on the eventual commodification of students’ and teachers’ data (Stewart, 2020). At this point, it is essential to mention that after the first six months of the pandemic, the EU Digital Education Action Plan was released (European Commission, 2020b), which considers two main areas of action: (a) fostering the development of a high-performing digital education ecosystem; and (b) enhancing digital skills and competences for the digital transformation. Indeed, the COVID-19 crisis triggered clear awareness of the weak coverage of digital access, the incompleteness of technological infrastructures supporting data sovereignty in the EU, the vast dependence on non-EU platforms and apps, and the skill gaps hindering the proper adoption of digital technologies, including data awareness. The plan paves the way for a focus on new digital literacies and calls on education systems to implement actions that strike a new balance between technology usage and human activity.
So, data practices connected to AI will enter teaching activity through the curriculum (training in basic and advanced skills for living in a world of intelligent technologies) and through concerns about the involvement of HEIs in the automation and personalisation of services through AI systems. And while many projects and initiatives point to increasing interest in the problem (Raffaghelli, 2020), this is an area where policy-making and institutional strategies have yet to be written.
Conclusions

The present chapter has introduced three areas of policy-making around data uses and practices that have impacted Higher Education Institutions directly or indirectly. We considered areas where policy efforts have gone in the direction of producing regulations and frameworks of practice to address the issues raised by an emergent problem: data-intensive practices leading to either socially desirable or adverse effects. We also pointed out the relevant implications that the aforementioned socio-political debate has had on Higher Education Institutions’ (HEIs) strategies and agendas in the context of a digital transformation. Moreover, we emphasised how such dynamics also encompassed a profound reflection on academics’ digital (and data) literacies to understand their contexts of practice, make informed decisions and protect themselves and their students from the unintended consequences of data abuse. As the first area, we introduced the modernisation of higher education through its digital transformation, bringing to the fore the problem of managerialism as the hidden effect of overly enthusiastic discourses about the possibilities opened by higher education digitalisation and digitisation. Our brief analysis attempted to unveil how forms of bureaucratic control over academic work could increase, given the possibilities of counting, calculating, observing and representing the professoriate’s productivity. We connected this enthusiastic effort with the processes of academic knowledge democratisation through open science and open data. Though the initial philosophy of open science was not to control but to make scholarly work more transparent and accessible, this movement has had critical implications for scholars’ digital presence, positionalities and productivity.
Specifically, pressure has been put on scholars to publish and share data despite problems of data justice, particularly relating to the public effort to finance the production of scientific data and the advantageous position of big private companies in using such data. Indeed, the issues raised by data usage provided a liaison with the third area, relating to the AI industry as a growing sector facing ambitious (and somewhat unrealistic) expectations from society. The AI industry requires vast amounts of available data and intensive computing. Therefore, data extraction operations are the norm, as captured by Shoshana Zuboff (2019) in her concept of “surveillance capitalism”. In that sense, we introduced several policy-making efforts to promote AI as a new source of socioeconomic development, considering the relevant role HEIs should have in such research and innovation. But we also brought to the fore several transnational efforts (particularly at the EU level) to regulate the rights of data protection and access
in favour of those primarily producing the data. We considered several policy recommendations and reports in our brief analysis to demonstrate how the regulatory frameworks in OECD countries are advancing to achieve data ethics and develop fair AI systems. In a nutshell, the situation depicted is challenging and, as we have argued, has clear implications for HEIs and academics’ professionalism. Against this panorama, our point of view is that a better understanding of data infrastructures and instruments will be necessary, along with occasional engagement in active groups to contest externally established data practices leading to managerialism and surveillance. This relates to HEI stakeholders’ reflective practice and a critical positioning regarding the ongoing digital transformation. In this context, every digital instrument is loaded with connotations for its users. Whether it is a platform adopted by the administration, a Learning Management System capturing students’ learning processes, or a digital tool or approach for the dissemination of scholarly work, each shapes concrete spaces of intervention around data. Each of these tools embeds data infrastructures that are more or less visible to the professoriate as key workers in the system, but also to administrators, managers and, particularly, students. But we also conjecture at this point that HEIs, being organisations with a central role in knowledge creation and circulation, can become creative spaces in which to search for new approaches to engaging with technology as both a societal problem and an opportunity. Therefore, the three policy-making areas we briefly examined here should highlight the inherent complexity of data practices and discourses. But we hope our analysis sets the basis for understanding the relevance of the chapters that follow in this book, which undoubtedly contribute to a social and political scenario in the making.
References

Aoun, J. E. (2017). Robot-proof: Higher education in the age of artificial intelligence. MIT Press.
Arnold, K. E., & Sclater, N. (2017). Student perceptions of their privacy in learning analytics applications. In ACM international conference proceeding series (pp. 66–69). https://doi.org/10.1145/3027385.3027392
Ayris, P., Berthou, J. Y., Bruce, R., Lindstaedt, R., Monreale, A., Mons, B., Murayama, Y., Södergard, C., Tochtermann, K., & Wilkinson, R. (2016). Realising the European open science cloud: First report and recommendations of the Commission High Level Expert Group on the European open science cloud. Publications Office of the European Union. https://data.europa.eu/doi/10.2777/940154
Bacow, L. S., Bowen, W. G., Guthrie, K. M., Lack, K. A., & Long, M. P. (2012). Barriers to adoption of online learning systems in U.S. higher education (p. 34). Ithaka S+R. http://www.sr.ithaka.org/research-publications/barriers-adoption-online-learning-systems-us-higher-education
Bali, M., Cronin, C., & Jhangiani, R. S. (2020). Framing open educational practices from a social justice perspective. Journal of Interactive Media in Education, 2020(1), Article 1. https://doi.org/10.5334/jime.565
Barshay, J. (2021). The number of college graduates in the humanities drops for the eighth consecutive year. American Academy of Arts & Sciences News. https://www.amacad.org/news/college-graduates-humanities-drops-eighth-consecutive-year
Biesta, G. (2007). Why ‘what works’ won’t work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1–22. Wiley/Blackwell. https://doi.org/10.1111/j.1741-5446.2006.00241.x
Biesta, G. (2015). Good education in an age of measurement: Ethics, politics, democracy. Routledge.
Bill & Melinda Gates Foundation. (2017). Gates Open Research. https://gatesopenresearch.org/about/policies#dataavail https://gatesopenresearch.org/about
Borgman, C. L., Darch, P. T., Sands, A. E., Pasquetto, I. V., Golshan, M. S., Wallis, J. C., & Traweek, S. (2015). Knowledge infrastructures in science: Data, diversity, and digital libraries. International Journal on Digital Libraries, 16(3–4), 207–227. https://doi.org/10.1007/s00799-015-0157-z
Borrego, Á. (2017). Institutional repositories versus ResearchGate: The depositing habits of Spanish researchers. Learned Publishing, 30(3), 185–192. https://doi.org/10.1002/leap.1099
Boyer, E. L., Moser, D., Ream, T. C., & Braxton, J. M. (2015). Scholarship reconsidered: Priorities of the professoriate. Wiley.
Bozkurt, A., Jung, I., Xiao, J., Vladimirschi, V., Schuwer, R., Egorov, G., Lambert, S. R., Al-Freih, M., Pete, J., Olcott, D., Rodes, V., Aranciaga, I., Bali, M., Alvarez, A. V., Roberts, J., Pazurek, A., Raffaghelli, J. E., Panagiotou, N., De Coëtlogon, P., et al. (2020). A global outlook to the interruption of education due to COVID-19 pandemic: Navigating in a time of uncertainty and crisis. Asian Journal of Distance Education, 15(1), 1–126. https://doi.org/10.5281/zenodo.3878572
Bradburn, N., Townsend, R., Fuqua, C., & Taylor, J. (2021). State of the humanities 2021: Workforce & beyond. American Academy of Arts & Sciences. https://www.amacad.org/publication/humanities-workforce-beyond
Broughan, C., & Prinsloo, P. (2020). (Re)centring students in learning analytics: In conversation with Paulo Freire. Assessment and Evaluation in Higher Education, 45(4), 617–628. https://doi.org/10.1080/02602938.2019.1679716
Carey, K. (2015). The end of college: Creating the future of learning and the university of everywhere. Penguin Publishing Group. https://books.google.com/books?id=FCh-BAAAQBAJ&pgis=1
Carretero, S., Vuorikari, R., & Punie, Y. (2017). The digital competence framework for citizens with eight proficiency levels and examples of use. European Commission. https://doi.org/10.2760/38842
Castañeda, L., & Selwyn, N. (2018). More than tools? Making sense of the ongoing digitizations of higher education. International Journal of Educational Technology in Higher Education, 15(1), 22. https://doi.org/10.1186/s41239-018-0109-y
CERN. (2018). CMS data preservation, re-use and open access policy. CERN Open Data Portal. https://doi.org/10.7483/OPENDATA.CMS.7347.JDWH
Chan, L., Cuplinskas, D., Eisen, M. J., Friend, F., Genova, Y., Guèdon, J. C., Hagemann, M., Harnad, S., Kupryte, R., Johnson, R., Manna, M., Rév, I., Segbert, M., Souza, S., Suber, P., & Velterop, J. (2002). Budapest Open Access Initiative. https://www.budapestopenaccessinitiative.org/
Cheryan, S., Master, A., & Meltzoff, A. N. (2015). Cultural stereotypes as gatekeepers: Increasing girls’ interest in computer science and engineering by diversifying stereotypes. Frontiers in Psychology, 6, 49. https://doi.org/10.3389/fpsyg.2015.00049
Costa, C. (2014). Outcasts on the inside: Academics reinventing themselves online. International Journal of Lifelong Education, 34(2), 194–210. https://doi.org/10.1080/02601370.2014.985752
Crosier, D., Kocanova, D., Birch, P., Davykovskaia, O., & Parveva, T. (2019). Modernisation of higher education in Europe. In Eurydice Report (pp. 1–28). Eurydice (Education, Audiovisual and Culture Executive Agency). https://doi.org/10.2797/806308
Crow, R., Cruz, L., Ellern, J., Ford, G., Moss, H., & White, B. J. (2018). Boyer in the middle: Second generation challenges to emerging scholarship. Innovative Higher Education, 43(2), 107–123. https://doi.org/10.1007/s10755-017-9409-8
Czerniewicz, L. (2022). Multi-layered digital inequalities in HEIs: The paradox of the post-digital society. In New visions for higher education towards 2030 – Part 2: Transitions: Key topics, key voices. https://www.guninetwork.org/files/guni_heiw_8_complete_-_new_visions_for_higher_education_towards_2030_1.pdf#page=124
Czerwonogora, A., & Rodés, V. (2019). PRAXIS: Open educational practices and open science to face the challenges of critical educational action research. Open Praxis, 11(4), Article 4. https://doi.org/10.5944/openpraxis.11.4.1024
Dai, Q., Shin, E., & Smith, C. (2018). Open and inclusive collaboration in science. OECD Science, Technology and Industry Policy Papers, 7, 1–29. https://doi.org/10.1787/2dbff737-en
Daniel, B. K. (2017). Big data in higher education: The big picture. In Big data and learning analytics in higher education (pp. 19–28). Springer International Publishing. https://doi.org/10.1007/978-3-319-06520-5_3
Decuypere, M., & Landri, P. (2021). Governing by visual shapes: University rankings, digital education platforms and cosmologies of higher education. Critical Studies in Education, 62(1), 17–33. https://doi.org/10.1080/17508487.2020.1720760
Digital Science, Fane, B., Ayris, P., Hahnel, M., Hrynaszkiewicz, I., Baynes, G., & Farrell, E. (2019). The state of open data report 2019: A selection of analyses and articles about open data, curated by Figshare. Digital Science. https://doi.org/10.6084/M9.FIGSHARE.9980783.V2
Edwards, P. N., Mayernik, M. S., Batcheller, A. L., Bowker, G. C., & Borgman, C. L. (2011). Science friction: Data, metadata, and collaboration. Social Studies of Science, 41(5), 667–690. https://doi.org/10.1177/0306312711413314
Entwistle, N. (2007). Research into student learning and university teaching. British Journal of Educational Psychology, BJEP Monograph Series II, 1–18. https://doi.org/10.1348/000709906X166772
European Commission. (2011). Europe 2020 flagship initiative Innovation Union. SEC(2010) 1161, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, 1, 1–48.
European Commission. (2013a). Digital science in Horizon 2020 (pp. 1–30). European Commission. https://ec.europa.eu/information_society/newsroom/cf/dae/document.cfm?doc_id=2124
European Commission. (2013b). Opening up education: Innovative teaching and learning for all through new technologies and open educational resources. CEDEFOP. https://www.cedefop.europa.eu/en/news/opening-education-innovative-teaching-and-learning-all-through-new-technologies
European Commission. (2016a). European cloud initiative—Building a competitive data and knowledge economy in Europe COM(2016)178. Communication from the Commission to the Institutions. https://ec.europa.eu/transparency/documents-register/detail?ref=COM(2016)178
European Commission. (2016b). Horizon 2020 work programme 2016–2017. Science with and for Society, 2017(16), 1–72.
European Commission. (2016c). Open innovation, open science, open to the world—A vision for Europe | Digital Single Market. Publications Office of the European Union. https://doi.org/10.2777/061652
European Commission. (2016d). Open science monitor. http://ec.europa.eu/research/openscience/index.cfm?pg=home&section=monitor
European Commission. (2016e). General Data Protection Regulation (GDPR), EUR-Lex 32016R0679 (pp. 1–88). European Parliament. https://eur-lex.europa.eu/eli/reg/2016/679/oj
European Commission. (2018a). Artificial intelligence for Europe [Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions]. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN
European Commission. (2018b). Facts and case studies related to accessing and reusing the data produced in the course of scientific production. https://ec.europa.eu/info/research-and-innovation/strategy/goals-research-and-innovation-policy/open-science/open-science-monitor/facts-and-figures-open-research-data_en
European Commission. (2020a). Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
European Commission. (2020b). Digital education action plan (2021–2027) | European education area. https://education.ec.europa.eu/node/1518
European Commission. (2020c). White Paper on artificial intelligence: A European approach to excellence and trust. European Commission. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
European Commission. (2022). Commission communication on a European strategy for universities. European Commission. https://education.ec.europa.eu/document/commission-communication-on-a-european-strategy-for-universities
European Commission – RISE – Research Innovation and Science Policy Experts. (2016). Mallorca declaration on open science: Achieving open science. European Commission. https://ec.europa.eu/research/openvision/pdf/rise/mallorca_declaration_2017.pdf
European Parliament. (2010). A new digital agenda for Europe: 2015.eu (2011/C 81 E/08). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52010IP0133&qid=1643543113403
European Parliament. (2022). Establishing a European Declaration on digital rights and principles for the digital decade. https://digital-strategy.ec.europa.eu/en/library/declaration-european-digital-rights-and-principles
Ferguson, R. (2012). Learning analytics: Drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5–6), 304–317. https://doi.org/10.1504/IJTEL.2012.051816
Ferguson, R. (2019). Ethical challenges for learning analytics. Journal of Learning Analytics, 6(3), 25–30. https://doi.org/10.18608/jla.2019.63.5
Fiebig, T., Gürses, S., Gañán, C. H., Kotkamp, E., Kuipers, F., Lindorfer, M., Prisse, M., & Sari, T. (2021). Heads in the clouds: Measuring the implications of universities migrating to public clouds. arXiv:2104.09462 [cs]. http://arxiv.org/abs/2104.09462
Gewin, V. (2020). Six tips for data sharing in the age of the coronavirus. Nature. https://doi.org/10.1038/d41586-020-01516-0
Gibbs, P. (2020). The marketingisation of higher education. In S. Rider, M. A. Peters, M. Hyvönen, & T. Besley (Eds.), World class universities: A contested concept (pp. 221–233). Springer. https://doi.org/10.1007/978-981-15-7598-3_13
Goglio, V. (2016). One size fits all? A different perspective on university rankings. Journal of Higher Education Policy and Management, 38(2), 212–226. https://doi.org/10.1080/1360080X.2016.1150553
Goodfellow, R. (2014). Scholarly, digital, open: An impossible triangle? Research in Learning Technology, 21. https://doi.org/10.3402/rlt.v21.21366
Grek, S., & Ozga, J. (2009). Governing education through data: Scotland, England and the European education policy space. British Educational Research Journal, 36(6), 937–952. https://doi.org/10.1080/01411920903275865
Gutiérrez, M., & Milan, S. (2019). Playing with data and its consequences. First Monday. https://doi.org/10.5210/fm.v24i1.9554
Harvey, L., & Williams, J. (2010). Fifteen years of quality in higher education. Quality in Higher Education, 16(1), 3–36. https://doi.org/10.1080/13538321003679457
High Level Group on the Modernisation of Higher Education. (2014). Report to the European Commission on new modes of learning and teaching in higher education (pp. 1–68). Publications Office of the European Union. https://doi.org/10.2766/81897
High-Level Expert Group on AI. (2019). Ethics guidelines for trustworthy AI. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
Hildebrandt, K., & Couros, A. (2016). Digital selves, digital scholars: Theorising academic identity in online spaces. Journal of Applied Social Theory, 1(1), 87–100.
Hoel, T., & Chen, W. (2018). Privacy and data protection in learning analytics should be motivated by an educational maxim—Towards a proposal. Research and Practice in Technology Enhanced Learning, 13(1). https://doi.org/10.1186/s41039-018-0086-8
Hummel, P., Braun, M., Tretter, M., & Dabrock, P. (2021). Data sovereignty: A review. Big Data & Society, 8(1), 2053951720982012. https://doi.org/10.1177/2053951720982012
Jamali, H. R., Nicholas, D., & Herman, E. (2016). Scholarly reputation in the digital age and the role of emerging platforms and mechanisms. Research Evaluation, 25(1), 37–49. https://doi.org/10.1093/reseval/rvv032
Johnson, J. A. (2018). Toward information justice: Technology, politics, and policy for data in higher education administration. Springer.
Joint Research Centre (European Commission), Bacsich, P., Punie, Y., Inamorato dos Santos, A., Atenas, J., Aceto, S., Burgos, D., & Nascimbeni, F. (2017). Policy approaches to open education: Case studies from 28 EU member states (OpenEdu Policies). Publications Office of the European Union. https://doi.org/10.2760/283135
Kerr, A., Barry, M., & Kelleher, J. D. (2020). Expectations of artificial intelligence and the performativity of ethics: Implications for communication governance. Big Data & Society, 7(1), 205395172091593. https://doi.org/10.1177/2053951720915939
Kessler, R. (2018). Whitepaper: Practical challenges for researchers in data sharing: Review. Learned Publishing, 31(4), 417–419. https://doi.org/10.1002/leap.1184
Kessler, S. K., & Martin, M. (2017). How do potential users perceive the adoption of new technologies within the field of Artificial Intelligence and Internet-of-Things? – A revision of the UTAUT 2 model using Voice Assistants. http://lup.lub.lu.se/student-papers/record/8909840
Koltay, T. (2017). Data literacy for researchers and data librarians. Journal of Librarianship and Information Science, 49(1), 3–14. https://doi.org/10.1177/0961000615616450
Kukulska-Hulme, A., Beirne, E., Conole, G., Costello, E., Coughlan, T., Ferguson, R., FitzGerald, E., Gaved, M., Herodotou, C., Holmes, W., Mac Lochlainn, C., Nic Giolla Mhichíl, M., Rienties, B., Sargent, J., Scanlon, E., Sharples, M., & Whitelock, D. (2020). Innovating pedagogy 2020: Open University Innovation Report 8. https://core.ac.uk/display/338194918
Kukulska-Hulme, A., Bossu, C., Charitonos, K., Coughlan, T., Ferguson, R., FitzGerald, E., Gaved, M., Guitert, M., Herodotou, C., Maina, M., Prieto-Blázquez, J., Rienties, B., Sangrà, A., Sargent, J., Scanlon, E., & Whitelock, D. (2022). Innovating pedagogy 2022: Open University Innovation Report 10. https://core.ac.uk/display/525628448
Lämmerhirt, D. (2016). Briefing paper: Disciplinary differences in opening research data. Pasteur4OA, June 2013, 1–8.
Lee, J. T., & Naidoo, R. (2020). Complicit reproductions in the global south: Courting world class universities and global rankings. In S. Rider, M. A. Peters, M. Hyvönen, & T. Besley (Eds.), World class universities: A contested concept (pp. 77–91). Springer. https://doi.org/10.1007/978-981-15-7598-3_6
Long, P., & Siemens, G. (2011). Penetrating the fog: Analytics in learning and education. EDUCAUSE Review, 31–40.
Lovett, J., & Rathemacher, A. (2016). A comparison of research sharing tools: The institutional repository vs. academic social networking among University of Rhode Island faculty. Technical Services Faculty Presentations. http://digitalcommons.uri.edu/lib_ts_presentations/47
Lovett, J., Rathemacher, A., Boukari, D., & Lang, C. (2017). Institutional repositories and academic social networks: Competition or complement? A study of open access policy compliance vs. ResearchGate participation. Journal of Librarianship and Scholarly Communication, 5(General Issue). https://doi.org/10.7710/2162-3309.2183
2 Data, Society and the University: Facets of a Complex Problem
Lynch, K. (2010). Carelessness: A hidden doxa of higher education. Arts and Humanities in Higher Education, 9(1), 54–67. https://doi.org/10.1177/1474022209350104
Lyon, L. (2016). Transparency: The emerging third dimension of open science and open data. LIBER Quarterly, 25(4), 153–171. https://doi.org/10.18352/lq.10113
Manca, S., & Ranieri, M. (2017). Exploring digital scholarship: A study on use of social media for scholarly communication among Italian academics. In A. Esposito (Ed.), Research 2.0 and the impact of digital technologies on scholarly inquiry (pp. 117–142). IGI Global. https://doi.org/10.4018/978-1-5225-0830-4.ch007
Mapuva, J. (2009). Conquering the barriers to learning in HEIs through e-learning. International Journal of Teaching and Learning in Higher Education, 21(2), 221–227.
Matt, S. (2016). E-infrastructures to facilitate open scholarship. FUTURIUM | European Commission. https://ec.europa.eu/futurium/en/content/e-infrastructures-facilitate-openscholarship
McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., McDougall, D., Nosek, B. A., Ram, K., Soderberg, C. K., Spies, J. R., Thaney, K., Updegrove, A., Woo, K. H., & Yarkoni, T. (2016). How open science helps researchers succeed. eLife, 5. https://doi.org/10.7554/eLife.16800
Meyer, K. A. (2014). An analysis of the cost and cost-effectiveness of faculty development for online teaching. Journal of Asynchronous Learning Networks, 17(4), 93–113.
Molloy, J. C. (2011). The Open Knowledge Foundation: Open data means better science. PLoS Biology, 9(12). https://doi.org/10.1371/journal.pbio.1001195
Moran, J. (2010). Interdisciplinarity. Routledge. https://books.google.com/books?id=y8yOAgAAQBAJ&pgis=1
Nielsen, M. A. (2012). Reinventing discovery: The new era of networked science. Princeton University Press.
NWO. (n.d.). Open science. Retrieved 2 November 2018, from https://www.nwo.nl/en/policies/open+science
OECD. (2019a). Forty-two countries adopt new OECD Principles on Artificial Intelligence. https://www.oecd.org/science/forty-two-countries-adopt-new-oecd-principles-on-artificial-intelligence.htm
OECD. (2019b). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449). OECD. https://doi.org/10.1787/eedfee77-en
Ossiannilsson, E., & Creelman, A. (2012). From proprietary to personalized higher education—How OER takes universities outside the comfort zone. Journal of E-Learning and Knowledge Society, 8(1), 9–22.
Pearce, N., Weller, M., Scanlon, E., & Kinsley, S. (2010). Digital scholarship considered: How new technologies could transform academic work. In Education, 16(1). http://ineducation.ca/ineducation/article/view/44/508
Peters, M. A., Liu, T.-C., & Ondercin, D. J. (2012). Managerialism and the neoliberal university: Prospects for new forms of ‘open management’ in higher education (pp. 91–104). Brill. https://brill.com/view/book/9789460919671/BP000008.xml
Piattoeva, N. (2021). Numbers and their contexts: How quantified actors narrate numbers and decontextualization. Educational Assessment, Evaluation and Accountability, 33(3), 511–533. https://doi.org/10.1007/s11092-021-09363-x
Pouchard, L., & Bracke, M. S. (2016). An analysis of selected data practices: A case study of the Purdue College of Agriculture. Issues in Science and Technology Librarianship, 2016(85). https://doi.org/10.5062/F4057CX4
Pozzi, F., Manganello, F., Passarelli, M., Persico, D., Brasher, A., Holmes, W., Whitelock, D., & Sangrà, A. (2019). Ranking meets distance education: Defining relevant criteria and indicators for online universities. International Review of Research in Open and Distance Learning, 20(5), 42–63. https://doi.org/10.19173/irrodl.v20i5.4391
J. E. Raffaghelli and A. Sangrà
Prinsloo, P. (2020). Data frontiers and frontiers of power in (higher) education: A view of/from the Global South. Teaching in Higher Education, 25(4), 366–383. https://doi.org/10.1080/13562517.2020.1723537
Prinsloo, P., & Slade, S. (2017). An elephant in the learning analytics room—The obligation to act. ACM International Conference Proceeding Series, 46–55. https://doi.org/10.1145/3027385.3027406
Quarati, A., & Raffaghelli, J. E. (2020). Do researchers use open research data? Exploring the relationships between usage trends and metadata quality across scientific disciplines from the Figshare case. Journal of Information Science. https://doi.org/10.1177/0165551520961048
Quigley, D. S., Neely, E., Parkolap, A., & Groom, G. (2013). Scholarship and digital publications: Where research meets innovative technology. Visual Resources, 29(1–2), 97–106. https://doi.org/10.1080/01973762.2013.761122
Raffaghelli, J. E. (2017). Exploring the (missed) connections between digital scholarship and faculty development: A conceptual analysis. International Journal of Educational Technology in Higher Education, 14(1), 20. https://doi.org/10.1186/s41239-017-0058-x
Raffaghelli, J. E. (2020). Is data literacy a catalyst of social justice? A response from nine data literacy initiatives in higher education. Education Sciences, 10(9), 233. https://doi.org/10.3390/educsci10090233
Raffaghelli, J. E., Cucchiara, S., Manganello, F., & Persico, D. (2016). Different views on digital scholarship: Separate worlds or cohesive research field? Research in Learning Technology, 24, 1–17. https://doi.org/10.3402/rlt.v24.32036
Raffaghelli, J. E., & Manca, S. (2019). Is there a social life in open data? The case of open data practices in educational technology research. Publications, 7(1), 9. https://doi.org/10.3390/publications7010009
Raffaghelli, J. E., Manca, S., Stewart, B., Prinsloo, P., & Sangrà, A. (2020). Supporting the development of critical data literacies in higher education: Building blocks for fair data cultures in society. International Journal of Educational Technology in Higher Education, 17(1), 58. https://doi.org/10.1186/s41239-020-00235-w
Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in ‘educators’ data literacy’ to support future practices in faculty development: A systematic review of the literature. Teaching in Higher Education, 25(4), 435–455. https://doi.org/10.1080/13562517.2019.1696301
Rider, S., Peters, M. A., Hyvönen, M., & Besley, T. (2020). Welcome to the world class university: Introduction. In S. Rider, M. A. Peters, M. Hyvönen, & T. Besley (Eds.), World class universities: A contested concept (pp. 1–8). Springer. https://doi.org/10.1007/978-981-15-7598-3_1
Sangrà, A., Guitert, M., Cabrera-Lanzo, N., Taulats, M., Toda, L., & Carrillo, A. (2019). Collecting data for feeding the online dimension of university rankings: A feasibility test. Italian Journal of Educational Technology, 27(3), 241–256. https://doi.org/10.17471/2499-4324/1114
Scheliga, K., & Friesike, S. (2014). Putting open science into practice: A social dilemma? First Monday, 19(9). https://doi.org/10.5210/fm.v19i9.5381
Schneider, R. (2013). Research data literacy. Communications in Computer and Information Science, 397, 134–140. https://doi.org/10.1007/978-3-319-03919-0_16
Selwyn, N. (2015). Data entry: Towards the critical study of digital data and education. Learning, Media and Technology, 40(1), 64–82. https://doi.org/10.1080/17439884.2014.921628
Selwyn, N., & Gašević, D. (2020). The datafication of higher education: Discussing the promises and problems. Teaching in Higher Education, 25(4), 527–540. https://doi.org/10.1080/13562517.2019.1689388
Siemens, G. (2013). Learning analytics: The emergence of a discipline. American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851
Slade, S., & Prinsloo, P. (2013). Learning analytics, ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. https://doi.org/10.1177/0002764213479366
Slade, S., & Tait, A. (2019). Global guidelines: Ethics in learning analytics. ICDE. https://www.icde.org/knowledge-hub/the-aim-of-the-guidelines-is-to-identify-which-core-principles-relating-to-ethics-are-core-to-all-and-where-there-is-legitimate-differentiation-due-to-separate-legal-or-more-broadly-cultural-env-5mppk
Stes, A., Min-Leliveld, M., Gijbels, D., & Van Petegem, P. (2010). The impact of instructional development in higher education: The state-of-the-art of the research. Educational Research Review, 5(1), 25–49. https://doi.org/10.1016/j.edurev.2009.07.001
Stewart, B. (2020). The open page project. Journal of Teaching and Learning, 14(1), 59–70. https://doi.org/10.22329/jtl.v14i1.6265
Stiglitz, J. E. (2002). The roaring nineties. W. W. Norton & Company. http://archive.org/details/roaringninetiesn00stig
Stracke, C., Bozkurt, A., Conole, G., Nascimbeni, F., Ossiannilsson, E., Sharma, R. C., Burgos, D., Cangialosi, K., Fox, G., Mason, J., Nerantzi, C., Obiageli Agbu, J. F., Ramirez Montaya, M. S., Santos-Hermosa, G., Sgouropoulou, C., & Shon, J. G. (2020). Open education and open science for our global society during and after the COVID-19 outbreak. In Open Education Global Conference 2020. https://doi.org/10.5281/ZENODO.4275632
Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, 4(2), 1–14. https://doi.org/10.1177/2053951717736335
Tsai, Y.-S., & Gasevic, D. (2017). Learning analytics in higher education—Challenges and policies. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (LAK ’17) (pp. 233–242). https://doi.org/10.1145/3027385.3027400
UNESCO. (2020). Virtual discussion of the Ad Hoc Expert Group (AHEG) for the preparation of a draft text of a recommendation on the ethics of artificial intelligence (SHS/BIO/AHEG-AI/2020/3 REV). https://unesdoc.unesco.org/ark:/48223/pf0000373199
Veletsianos, G., & Kimmons, R. (2012). Assumptions and challenges of open scholarship. International Review of Research in Open and Distance Learning, 13(4), 166–189.
Veletsianos, G., & Kimmons, R. (2016). Scholars in an increasingly open and digital world: How do education professors and students use Twitter? The Internet and Higher Education, 30, 1–10. https://doi.org/10.1016/j.iheduc.2016.02.002
Verhaar, P., Schoots, F., Sesink, L., & Frederiks, F. (2017). Fostering effective data management practices at Leiden University. LIBER Quarterly, 27(1), 1–22. https://doi.org/10.18352/lq.10185
Villani, C., Bonnet, Y., Schoenauer, M., Berthet, C., Levin, F., Cornut, A. C., & Rondepierre, B. (2018). For a meaningful artificial intelligence: Towards a French and European strategy. Conseil national du numérique. https://onwork.edu.au/bibitem/2018-Villani,C%C3%A9dric-Bonnet,Yann-etal-For+a+meaningful+artificial+intelligence+towards+a+french+and+european+strategy/
Vuorikari, R., Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., & Rienties, B. (2016). Research evidence on the use of learning analytics. Joint Research Centre—Publications Office of the European Union. https://doi.org/10.2791/955210
Vuorikari, R., Kluzer, S., & Punie, Y. (2022). DigComp 2.2: The digital competence framework for citizens—With new examples of knowledge, skills and attitudes. JRC Publications Repository. https://doi.org/10.2760/115376
Wagner, C. S. (2008). The new invisible college: Science for development. Brookings Institution Press.
Wellcome Trust. (2016). Wellcome signs open data concordat. Wellcome Trust Blog. https://wellcome.ac.uk/news/wellcome-signs-open-data-concordat
Weller, M. (2012). Digital scholarship and the tenure process as an indicator of change in universities. RUSC. Revista de Universidad y Sociedad del Conocimiento, 9(2), 347–360. https://doi.org/10.7238/rusc.v9i2.1398
Weller, M. (2018). The digital scholar revisited. The Digital Scholar: Philosopher’s Lab, 1(2), 52–71. https://doi.org/10.5840/dspl20181218
Wilkinson, M. D., Dumontier, M., Aalbersberg, I., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., et al. (2016). The FAIR guiding principles for scientific data management and stewardship. Scientific Data, 3(1), Article 1. https://doi.org/10.1038/sdata.2016.18
Williamson, B. (2018). The hidden architecture of higher education: Building a big data infrastructure for the ‘smarter university’. International Journal of Educational Technology in Higher Education, 15(1), 12. https://doi.org/10.1186/s41239-018-0094-1
Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45(2), 107–114. https://doi.org/10.1080/17439884.2020.1761641
Williamson, B., & Hogan, A. (2021). Pandemic privatisation in higher education: Edtech & university reform. Education International Research.
Wiorogórska, Z., Leśniewski, J., & Rozkosz, E. (2018). Data literacy and research data management in two top universities in Poland: Raising awareness. Communications in Computer and Information Science, 810, 205–214. https://doi.org/10.1007/978-3-319-74334-9_22
Wouters, P., & Haak, W. (2017). Open data: The researcher perspective. Elsevier—Open Science. https://doi.org/10.17632/bwrnfb4bvh.1
Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J., & Perrault, R. (2022). The AI Index 2022 Annual Report (arXiv:2205.03468). arXiv. https://doi.org/10.48550/arXiv.2205.03468
Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., Lyons, T., Manyika, J., Niebles, J. C., Sellitto, M., Shoham, Y., Clark, J., & Perrault, R. (2021). The AI Index 2021 Annual Report (arXiv:2103.06312). arXiv. https://doi.org/10.48550/arXiv.2103.06312
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Zuiderwijk, A., & Janssen, M. (2014). Open data policies, their implementation and impact: A framework for comparison. Government Information Quarterly, 31(1), 17–29. https://doi.org/10.1016/j.giq.2013.04.003

Juliana E. Raffaghelli is Research Professor at the University of Padua and Associate Researcher of the Edul@b Research Group at the Universitat Oberta de Catalunya. Over the last 15 years, she has coordinated international research units, networks, and projects in Latin America, the Balkans, Turkey, and Western Europe in the field of educational technologies. Her work has covered professional development for technological uptake in international and global contexts of collaboration through a socio-technical and post-colonial lens. Recently, her research has explored the emergent manifestations of data practices and artificial intelligence through critical, open, and emancipatory pedagogies. She has coordinated six special issues of international journals and has contributed to the field with two books and several research articles and chapters in English, Spanish, Italian, and Portuguese.
Albert Sangrà is Director of the UNESCO Chair in Education and Technology for Social Change, and Professor and Researcher at the Open University of Catalonia, Department of Psychology and Education. He is a member of the founding team of this university (1994–95), where he also served as Director of the eLearn Center. He has worked as a consultant and trainer on several online and blended learning projects in Europe, America, Asia, and Australia, focusing on implementation strategies for the use of technology in teaching and learning and on its quality. He is a former Vice-President of the European Foundation for Quality in E-Learning (EFQUEL) and a former member of the Executive Committee of EDEN. He contributes to different academic journals as a member of editorial committees and as a reviewer, and has published several books on the integration of ICT in higher education with publishers such as Jossey-Bass, Springer, Octaedro, and Gedisa. He is the recipient of the 2015 Award for Excellence in eLearning awarded by the World Education Congress and is an EDEN Senior Fellow.
Part I
Exploring Reactive Data Epistemologies in Higher Education
Chapter 3
Fair Learning Analytics: Design, Participation, and Trans-discipline in the Techno-structure

Regina Motz and Patricia Díaz-Charquero
Abstract The digitalisation of education increasingly embraces intensive data collection practices. Sentiment analysis, the semantic analysis of social networks, and human-computer interaction models, among other data-driven practices, help us to understand human behaviour. However, they also pose a serious dilemma regarding the invasion of privacy and trigger a reflection on the ethics of using data to improve learning. Consequently, the developers of artificial intelligence must engage in active dialogue with educators, sociologists, psychologists, educational technologists, pedagogues, communicators, and experts in data privacy to understand how their solutions affect educational practice. In this context, an approach based on human rights and ethics must be considered. This chapter presents interdisciplinary work within an institutional project focused on the adoption of learning analytics. The experience was carried out by the Group of Open and Accessible Educational Resources of the University of the Republic of Uruguay. The two main pillars of fair learning analytics are privacy-by-design and the relationship between learning analytics and open and inclusive education. These phenomena are discussed in an attempt to generate recommendations for this practice in the regional context and, on that basis, to contribute to the international debate.

Keywords Interdisciplinary · Open education · Learning analytics · Privacy-by-design
R. Motz (*)
Instituto de Computación, Universidad de la República, Montevideo, Uruguay
e-mail: [email protected]

P. Díaz-Charquero
Technological University of Uruguay, Montevideo, Uruguay
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. E. Raffaghelli, A. Sangrà (eds.), Data Cultures in Higher Education, Higher Education Dynamics 59, https://doi.org/10.1007/978-3-031-24193-2_3
R. Motz and P. Díaz-Charquero
The Cornerstones of Fair Learning Analytics: Interdisciplinarity and Openness

In current times, it is inevitable to consider education within the context of the digital transformation of our society – an accelerated digital transformation driven by the development of technologies such as artificial intelligence, big data, and cloud computing. In its broadest sense, the imperative of open education includes open access to educational opportunities without barriers related to the economic condition, gender, race, accessibility, or territory of a student; open access to educational resources for their use, reuse, adaptation, retention, and redistribution; open learning with flexibility of time, place, and pace of learning; and open access to education based on diversity and equity as guiding principles (Czerniewicz, 2020; Atenas et al., 2020; Weller, 2020; Rodés et al., 2018). Open education needs synergies, specifically in university education, due to its relationship with open data and open science and the impacts on its application and social appropriation. All these characteristics are grounded on an open and sovereign technological infrastructure that allows empowerment for the development of digital citizenship. As Kahle (2008) explains, openness “is measured by the degree to which it empowers users to act, making technology [and content] their own, rather than imposing their own foreign requirements and restrictions as inflexible”. Working at the Interdisciplinary Group of Open and Accessible Educational Resources (in Spanish: Núcleo Interdisciplinario de Recursos Educativos Abiertos y Accesibles, NúcleoREAA, https://www.nucleorea.ei.udelar.edu.uy), we address learning analytics from the perspective of openness and interdisciplinarity. The NúcleoREAA brings together academics from the Universidad de la República (University of the Republic, Udelar), the main public university in Uruguay, contributing to the subject with different viewpoints.
The existence of the NúcleoREAA allows capitalisation on the efforts of these groups, enabling an interdisciplinary approach that is also expressed through the heterogeneity of the nucleus of more than 30 researchers. This staff comes from both vocational paths and academic careers in disciplinary fields such as education, technology, information, communication, sociology, anthropology, and law, among others. In collaboration with academics from foreign universities, and in conjunction with the Council for Educational Training, the organisations within the National Administration of Public Education, and civil society organisations – such as the Uruguayan Association of Librarians, Creative Commons, and the National Union of the Blind of Uruguay – they combined their experiences and developments. The commitment to collaboration, added to the technical, pedagogical, and research contributions, resulted in the quality of the task and had a direct impact on the conception, design, and process of knowledge construction in this interdisciplinary field. Interdisciplinarity is the outcome of disciplinary elements (ideas, methods, theories, and practices), which are interactive, not additive, and
subject to negotiation and creative adaptations based on the needs of the particular context (Graff, 2015). This form of interdisciplinary work was adopted by most of the nucleus’ members through their direct participation in the projects and actions that were carried out. The aim of this research is to contribute to the theoretical-methodological construction by working on articulated projects in a more trans-disciplinary way – a goal certainly difficult to achieve. We started by working on basic problems, such as the different languages specific to each sector, which led us to understand that each discipline has its own views of reality and that it is essential to overcome the boundaries between disciplines to grasp the issue. We treat open and inclusive learning analytics as a whole, which is not just the sum of the parts that each discipline can contribute. In addition, there must always be interaction with the user, that is, the person interested in the problem, because he/she is the one who has the problem. This is another significant feature of openness, one which transforms social justice into authentic social inclusion. These projects are not targeted only at some actors; rather, they are projects made by the actors, with those actors, who participate from the beginning – from the design of the research – so as to appropriate that research and its results in a transparent way. Therefore, they become active participants in the project and not mere spectators who just receive the results. This relationship aligns with the convergent perspective of the open movement: working with data privacy and with ethical issues, such as how to use data coming from education, means that everything collected from the data must actively involve the persons concerned.
Furthermore, learning analytics (LA) refers to the use of data and artificial intelligence integrated into education, including the application of ontologies, machine learning, sentiment analysis, and semantic analysis of social networks, as well as the implementation of natural language processing techniques, human-computer interaction models, and natural interface design, which help to better understand human behaviour (Nunn et al., 2016). To create new educational possibilities, the developers of LA applications must participate in an active dialogue with educators, sociologists, educational technologists, pedagogues, communicators, and experts in data privacy. In this way, they can be aware of how their solutions benefit educational practice from the perspective of human rights and within an ethical framework (Shum, 2019). In this sense, an important challenge is to prepare decision-makers and teachers for artificial intelligence integrated into education and, at the same time, to prepare artificial intelligence to understand education, as mentioned by UNESCO in its working document (UNESCO, 2019), so that it contributes to responding to pedagogical problems rather than exacerbating them.
Learning Analytics in a Nutshell

According to Long and Siemens (2011), learning analytics is the measurement, collection, analysis, and reporting of data on learners and their contexts, with the purpose of understanding and optimising learning and the environment in which it takes place.
In this sense, it facilitates the processes of monitoring and analysis, prediction and intervention, tutoring, evaluation and feedback, adaptation, personalisation, recommendation, and reflection (Chatti et al., 2012). As mentioned by Ferguson (2012) and Siemens (2013), learning analytics can be considered at various levels: at the micro-level, where it addresses both teachers and students, and at the macro-level, where it is directed at decision making, either at the level of the institution or of the country (e.g. the Ministry of Education and Culture). Regardless of the level, for data owners to participate and be included in these projects, it is essential to consider their interests. There are several questions that we, as teachers, always ask ourselves: “How are my courses going?”, “Are my students motivated?”, “Are my strategies working?”, “Can these reviews help me to understand how this is working?” And when it comes to our role as learners, the questions often raised are: “Am I studying hard enough?”, “What am I doing right?”, “What am I doing wrong?”, “How do I improve?”, “Where am I failing?” Providing that quick feedback at the right time, with the right amount of information, is something that learning analytics can offer. Users, whether teachers or learners, care about these issues since they are immersed in activities based on human interaction. Indeed, the way users interact with colleagues and with the institution, in a teacher-student role, can be captured. There are also other interactions, such as human-material or human-machine activities, which can be measured and analysed to provide feedback. This kind of response can be the one that enables change. It can also allow users to notice information that would otherwise go unnoticed.
With the recommendations provided by intelligent systems, teachers can detect students with problems in a timely manner and carry out a meaningful intervention to ensure their inclusion and stimulate the improvement of their performance, considering their possibilities, limitations, and preferences. At the micro-level, the focus is not always on big data. For instance, a teacher interacting with four groups of students (quite frequent in these latitudes), with at least 25 students per group, already interacts with 100 people making ten weekly publications. Although this may seem scarce, on a human level the teacher is already dealing with a substantial and difficult daily volume, and is expected to stay up to date while paying attention to all the details. In this scenario, LA helps teachers by providing them with elements aimed at the personalisation needed for each student. LA tools allow us to collect all the data we are generating, which, at large, we tend to consider as big data for our environment. It is indeed big data for those who need it and those who are going to analyse it, and sometimes also because of its complexity, since some interactions are captured only through the system. Furthermore, LA tools allow us to obtain patterns through data mining and to present the results with far more user-friendly and usable visualisations for decision making. The problem is not only collecting the data and delivering it, but also making sure that it is cleaned, recognised as reliable, transparent, and aesthetically pleasing. There are certainly many more topics besides technology, so interdisciplinarity is a necessity.
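As a concrete illustration of the micro-level scenario above, the following sketch shows how a teacher-facing LA tool might summarise interaction logs and flag students with low activity for a possible intervention. The log schema, the action names, and the activity threshold are invented for this example; a real VLE export and a real intervention criterion would look different.

```python
from collections import Counter

# Hypothetical interaction log, as a VLE might export it:
# (student_id, action) pairs. Schema and threshold are illustrative only.
log = [
    ("s01", "post"), ("s01", "view"), ("s01", "reply"),
    ("s02", "view"),
    ("s03", "post"), ("s03", "view"),
]

def interaction_counts(log):
    """Count interactions per student: a simple micro-level LA signal."""
    return Counter(student for student, _ in log)

def flag_low_activity(log, threshold=2):
    """Students whose interaction count falls below the threshold,
    so a teacher can consider a timely, meaningful intervention."""
    return sorted(s for s, n in interaction_counts(log).items() if n < threshold)

print(flag_low_activity(log))  # ['s02']
```

Even a toy summary like this raises the chapter's central questions: which data may be collected, who may see the flags, and how the criterion (here an arbitrary threshold) is explained to those it affects.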
LA needs to be addressed not just as the collection and supply of data, or as the pattern analysis that can be obtained from data mining through algorithms; it also needs to be conceived as something achieved by placing algorithms within a global perspective that considers students as persons with their own rights, needs, fears, and privacy. How is it possible to address all this? Therein lies the importance of tackling these issues in, at the very least, interdisciplinary teams. Just as the interaction of LA should not be considered only in the context of privacy, ethics should not be relegated to jurisprudence alone. This is a problem that needs to be faced with a transdisciplinary approach, especially if what we want to do is open education: the question is not only to be able to access the data, but also to make the whole process transparent and inclusive with respect to the actors, who need to know why an algorithm is being applied, what that algorithm does, and how it is used. Sometimes, it is necessary to have a global view of the whole problem rather than focusing on the outcome. Considering the macro level of LA, when processing the data, only the researcher who created the initial data collection will be aware of the participants’ identities or other sensitive information. However, in an educational context where the focus is on the learners’ own environment and their recognition as human beings, it is important to envision the problem from a holistic perspective. We cannot treat just one part of the problem, as there are several critical aspects to be considered: it is not only a question of extracting the learner’s data; it is also crucial to understand the context. What we gather is much more than the learner’s data, because each learner behaves differently depending on the context in which he or she is.
In this sense, global learning analytics becomes social learning analytics put in context, and it should be aimed at the extraction of data from both the whole group and the institution in which that learner is embedded. It is essential to know what norms the institution has, how it is linked to the informal elements of the relationship with the students, and how it implements educational policies, because all these aspects have an impact on the students and, consequently, on the results we obtain. Therefore, analytics can have great potential if used consciously and carefully within an open education context, and if designed and implemented by interdisciplinary teams. As an example, some of the aspects of LA in which transdisciplinarity comes into play are how to show the transparency of the algorithms and how to ensure everyone’s informed participation in their roles. Conscious participation does not only mean clicking the “I agree to the terms” button; it also requires awareness of how that consent is communicated. It is essential for participants to understand what is happening to them as students, teachers, or users of LA, in a close relationship between data science, pedagogy, psychology, sociology, privacy, and ethics. Based on the definition of the Society for Learning Analytics Research (SoLAR), which frames LA as a systematic and interdisciplinary field, on the synergies generated across academic and non-academic disciplines and spaces, and on LA’s potential to contribute to the theoretical-methodological construction of a field of theories and practices, we formulate a new paradigm for LA. Following Miguel Martínez’s concept of transdisciplinarity (Martínez, 2007),
R. Motz and P. Díaz-Charquero
we affirm that LA is a space for transdisciplinary articulation of fields which are still emerging as disruptive in their disciplines of origin and are distinguished precisely by the multiplicity of links, relationships, and interconnections that constitute them.
Applying LA in a Fair Way

For LA to be applied fairly, its application must offer guarantees of non-discrimination and of the preservation of the privacy of personal data. To ensure that the application of LA is non-discriminatory, the knowledge, decisions, and actions that can have a real impact on the process, as well as on the results of the analysis and/or data visualisation, must be explainable and transparent for end-users. Explainability is the property that describes the processes carried out by the algorithms used in LA. It is related to the notion of argument or explanation, in which there is an interface between the user and the decision-maker. If a machine learning model is used, explainability means determining the correlations among the training data, which ultimately contributes to discovering causality and allows, for example, identifying whether algorithms are biased. Regarding the privacy of personal data, when educational institutions decide to use LA, whether the data analysis is incorporated as a continuous-improvement activity or for research purposes, the informed consent of those involved is necessary, so this activity must be clearly reflected in the institution's data use policies (Díaz et al., 2015). Although students, upon entering the institution, consent to the institutional privacy policies related to the use of their data, the amount of data collected and the access to it must be limited to what is strictly necessary to fulfil the institutional purposes and tasks. Educational institutions must be sufficiently clear about what they hope to learn from data analysis and what they will be able to do with that data.
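As an illustration of the kind of bias check that explainable processing makes possible, the following sketch (our own example, not taken from any particular LA system; all field names are hypothetical) compares a model's positive-prediction rates across two groups of a protected attribute:

```python
# Illustrative sketch: a minimal bias check for a hypothetical pass/fail
# predictor. We compare positive-prediction rates across the two values of
# a protected attribute (the "demographic parity" gap).

def demographic_parity_gap(records, protected_key, prediction_key):
    """Return |P(pred=1 | group A) - P(pred=1 | group B)|.

    Assumes exactly two groups are present in the data.
    """
    groups = {}
    for r in records:
        groups.setdefault(r[protected_key], []).append(r[prediction_key])
    rates = {g: sum(v) / len(v) for g, v in groups.items()}
    a, b = rates.values()
    return abs(a - b)

# Hypothetical predictions produced by an LA model:
records = [
    {"gender": "F", "predicted_pass": 1},
    {"gender": "F", "predicted_pass": 0},
    {"gender": "M", "predicted_pass": 1},
    {"gender": "M", "predicted_pass": 1},
]
gap = demographic_parity_gap(records, "gender", "predicted_pass")
print(f"demographic parity gap: {gap:.2f}")  # 0.50 here: worth investigating
```

A large gap is not proof of discrimination, but it is exactly the kind of signal that should trigger the interdisciplinary scrutiny discussed above.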
In turn, institutions must apply "safeguard measures" to reduce risks: confidentiality contracts, procedures for the dissociation of data in research processes when the data cannot be fully anonymised, and institutional measures that prohibit certain data crossings in order to avoid re-identification. Data resulting from LA processes that is used for personalised interventions (i.e. to provide personalised support or guidance to students based on their progress or their individual or comparative trajectory) cannot be justified merely by the fulfilment of institutional tasks or by the legitimate interest of the institution. An intervention based on the student's informed consent is therefore valid only when there is no risk of negative consequences if the student does not consent, and only if the student is informed in detail about the scope, purpose, and exact implications of the personalised treatment of their data, and about the characteristics of the intervention. Institutions must be sufficiently explicit in describing the personal data that will be collected, explaining in detail for what purpose they will be used
3 Fair Learning Analytics: Design, Participation, and Trans-discipline…
and for how long they will be kept. It is important to specify clearly who in the institution is responsible for conducting interventions guided by learning analytics and how the student is informed about the intervention strategy the institution applies. It should also be made explicit that LA models are simply tools that serve to prioritise and detect in advance those students who are likely to need additional help, and that they are used in context and in conjunction with other, non-automated information. Furthermore, from a personal data protection perspective, only personnel with such explicit responsibility should have access to analytical information intended to guide the intervention. Another essential aspect of ensuring that institutions' LA strategies are compatible with personal data protection regulations, in terms of data security policies, is that students' data is kept safe and is only available to staff with an explicit need for access. This is critical to the trust that underlies the relationship between students and institutions. It is fundamental that educational institutions have a training plan for teaching and support staff. The personnel in charge of carrying out the interventions must be trained and informed about the limits of their individual responsibilities and must know the steps to take when these limits have been reached. It should also be noted that there may be different requirements for underage students. Regardless of the educational institution's decisions, it is important to provide a process through which students can request that the institution not carry out interventions based on their analytical footprint. Carmel (2016) raises the idea of thinking beyond informed consent, given that it is frequently impossible in practice for students (or their parents) to refuse, or to withdraw, consent to the policies on the use of their data a posteriori.
This is due to the possible consequences they could face if they decide not to go along with the educational institution's decision to use learning analytics tools for its monitoring, support, and intervention policy, which leads to a second major problem: not all families have the possibility of changing educational institutions. One suggested approach for developing LA systems that follow the data privacy and ethical criteria presented here, such as the learners' ability to opt out of having their data tracked and analysed, is "Privacy by Design" (Le Métayer, 2010). This methodology includes all data privacy issues among the initial system requirements, and it also includes lawyers in the interdisciplinary LA system team (Antignac & Le Métayer, 2014). In essence, privacy by design adopts a pre-emptive logic when designing LA tools rather than dealing with privacy issues a posteriori. The protection of personal data and privacy is included from the beginning, not as a corrective measure but as a design requirement. Another serious problem is the responsibility for the actions taken (or not taken) on the basis of the information obtained from the collected private data. One of the ethical guidelines presented in Slade and Tait's study (2019) focuses on the importance of institutional responsibility and the obligation to act: access to a thorough understanding of students' learning processes generates a moral obligation to act. Unfortunately, the recording of interventions is not yet a typical feature of LA tools. However, the European General Data
Protection Regulation (EU GDPR) has recently modified the principle of responsibility for automated systems in general, which applies to LA systems. The current approach includes the concept of proactive responsibility in everything that has to do with the design of automated systems. This change directly impacts the privacy by design principles, integrating a form of reasoning that must be internalised by the teams that process personal data. The design principles thus acquire legal standing, and their application is incorporated through the mandatory implementation of impact analyses. Article 35 of the GDPR establishes the mandatory implementation of an impact analysis in high-risk cases and indicates that it must be documented in writing. In Uruguay, only as of February 2020 has it been mandatory to produce this impact analysis document when working with data from minors, with large volumes of data, or for profiling and the analysis of interests or performance, all activities closely related to LA. The way to operationalise the privacy by design principles, together with impact analysis, is to examine at different points in time what decisions need to be made at both the collection stage and the processing stage. System designers must be aware that privacy should not be considered only at the data collection level; it should be thought through for the entire data processing cycle, including possible updates and consents at later stages. All this leads us to focus on the different actions that LA tools can perform. For instance, one such action is to minimise data collection. In this regard, why should we ask permission to access data that we will not use later? What are the exact permissions that we need?
Based on these questions, we arrive at two preventive operational guidelines: (i) stick to what is strictly needed and do not ask for superfluous authorisations, and (ii) apply privacy alerts when students, teachers, institutions, or authorities are using private data. Privacy and personal data protection nowadays have regulations, mechanisms, and even dedicated authorities to ensure compliance. All European countries have national authorities, and in Latin America their number is steadily increasing. In Uruguay, there is the Regulatory Unit for the Control of Personal Data, which even has sanctioning, monitoring, and auditing powers. A separate question is whether it will have enough budget and autonomy to engage with technology companies without becoming subject to them. This is an issue that must be studied in the context of geopolitics and techno-politics, but despite its complexity, mechanisms exist and can be used to guarantee privacy and data protection. In March 2019, the International Council for Open and Distance Education (ICDE) published a report with guidelines for the application of ethically informed learning analytics (Slade & Tait, 2019). This report highlights ten principles: (i) Transparency, (ii) Data ownership and control, (iii) Accessibility of the data, (iv) Validity and reliability of the data, (v) Institutional responsibility and obligation to act, (vi) Communications, (vii) Cultural values, (viii) Inclusion, (ix) Consent, and (x) Student responsibility. In addition to data privacy, these ten guidelines emphasise other ethical issues and rights, such as fairness, transparency, and interpretability, as well as the consequences that decisions made through LA systems have on people's lives, and how those consequences are measured.
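Guideline (i), collecting only what is strictly needed, can be operationalised as a field whitelist applied before any record enters the LA pipeline. The sketch below is our own hypothetical illustration; the purposes and field names are invented:

```python
# Illustrative sketch of data minimisation at collection time: only fields
# declared strictly necessary for a stated purpose are kept; everything else
# is discarded before the record reaches the analytics store.
# The purposes and field names below are hypothetical examples.

NECESSARY_FIELDS = {
    "dropout_risk": {"student_id", "logins_last_30d", "assignments_submitted"},
    "course_feedback": {"course_id", "rating"},
}

def minimise(record: dict, purpose: str) -> dict:
    allowed = NECESSARY_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"student_id": "s42", "logins_last_30d": 7,
       "assignments_submitted": 3, "home_address": "...", "health_notes": "..."}
print(minimise(raw, "dropout_risk"))
# home_address and health_notes never reach the analytics store
```

Making the purpose an explicit parameter also documents, in code, why each field is held, which is precisely the kind of accountability the impact analysis asks for.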
Unfortunately, we still do not have adequate guarantees or sufficient means of protection in relation to these issues, neither in Uruguay, nor in the rest of Latin America, nor in Europe. The problem of discriminatory bias generated by algorithms is clearly captured by Cathy O'Neil's expression: "algorithms are opinions embedded in code." In her book Weapons of Math Destruction (2016), O'Neil examines the direction taken by data science from a very critical perspective. She discusses the example of rankings used to evaluate teachers in American schools. These rankings use algorithms that rate the teacher's added value by comparing students' grades with those of the previous term, among other factors. Because of this ranking system, teacher Sarah Wysocki was fired along with 205 other teachers in 2011 for not reaching the score that the institution had set as the minimum for that year in the automated grading system used by the schools. Paradoxically, Wysocki was recognised as a teacher of good practice: she was well evaluated by her colleagues and even by the students' parents. After her dismissal, Wysocki contacted Cathy O'Neil, who in turn asked for access to the algorithm to understand how it worked, but was denied it. Later, she learned that no one in the school district had access to the mathematical formula or understood it: decisions were being made on the basis of an algorithm that the authorities themselves did not understand. Another instance is the proliferation of automated admission systems in universities, with companies providing artificial intelligence services to recruit students. On their websites, the central promise is to avoid biases in the admission procedure, making the process fairer and giving students greater guarantees. Ironically, as presented in Osoba and Welser (2017), there is no simple, effective method to avoid bias, since it is embedded in the training data.
In other words, the algorithms used by recruiters are fed with historical data, which is itself biased. For instance, if a gender or racial bias exists in past college admissions and the machine learning model is trained on that biased data, the result will obviously be a biased tool. The main issue is that those who use the system cannot easily discover the bias, because the evidence has been removed. Simply put, bias is hidden under a cloak of technical legitimacy that is incomprehensible to most people. Algorithms are rules that not everyone can understand, and even those who could are not allowed to see how they work. At present, there is no clear appeal or opposition mechanism through which it is possible to request access to an algorithm and an explanation of its functioning. This holds both for tools that only suggest or inform, with people ultimately making the decisions, and for tools that execute or decide directly. In short, the pretext that a person eventually decides does not justify the use of a tool that suggests clearly biased options. When it comes to artificial intelligence, we must consider not only the effectiveness of the tool we are using in terms of analytics or problem solving; we also need to focus on legitimacy, that is, we must be assured that the processes and the results are legitimate. A question that is always valid is: what incentives do researchers, and what incentives do companies, have to invest in legitimacy? The issues of privacy,
transparency, and fairness involve forming interdisciplinary teams, submitting to audits, and generating and publishing documentation, and this often involves costs. Hence, several barriers must be broken down before the importance of investing in legitimacy is understood, and the most troubling ones are conceptual. In the case of transparency, for instance, this requirement is often interpreted as an obligation to publish code. Publishing the source code can certainly help as a transparency tool, but it is sometimes unnecessary and often insufficient. Along with the publication of the code, transparency about the data is required as well (i.e. what data was used for training and validation? Was the data used in context or not?). It is important to note, however, that transparency is not synonymous with delivering or showing all the data, which is often not even legal. It is not always possible to reveal the data, but it should be documented how the data was collected, how it was processed, where it came from, whether its quality was analysed, and whether known sources of bias were inspected. When working with personal data, and when the data that served to train an algorithm cannot be disclosed, it is crucial to provide statistical summaries that preserve the privacy of the data, so as to promote interpretability and transparency, and to explain the assumptions and their technical effects: not only the details of the operation, but also the assumptions under which the LA tool was built. The data owners, namely the students, should in turn always have access to their collected data, in addition to knowing exactly how it is used, how their analytical indicators have been calculated, and, at a basic level, how the algorithms and models work.
Access to this information, together with the transparency of the system, provides students with data that allows them to develop a critical perspective and more beneficial learning approaches. To achieve interdisciplinarity and transdisciplinarity, it is essential to involve every party in the process; thus, stakeholders, technical developers, and educational, psychological, and social teams are all key to this end. Ultimately, adherence to these ethical guidelines is equally important for building the concept of "openness", which consists of keeping users informed and generating transparency tools so that each person is continuously informed about the purpose of the use of their data. Reflecting on the legal tools that ensure LA's fairness: while on the privacy side there are personal data protection regulations, on the fairness and transparency side there is still no legal tool that guarantees users a right to the explainability of the systems applied to them. The academic community in LA has been paying attention to these issues. Works such as those by Buckingham (2017), Prinsloo and Slade (2018), and Chen and Zhu (2019), as well as the recent workshop "Responsible learning analytics: creating just, ethical, and caring LA systems" held during the LAK21 conference, show a concern for achieving responsible LA. However, we still need a legal tool that, in the style of "habeas data", which allows users to request their personal data through judicial action, provides a "habeas algorithm" guaranteeing access to the decision-making processes of the artificial intelligence systems applied to a person's life.
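The privacy-preserving statistical summaries suggested above can take many forms; one minimal sketch (our own, with an arbitrary suppression threshold) publishes per-group aggregates only when the group is large enough that no individual student can be singled out:

```python
from statistics import mean

# Illustrative sketch: publishing a statistical summary of training data
# instead of the data itself. Groups smaller than a threshold are suppressed
# so the summary cannot single out individual students (a k-anonymity-style
# rule; the threshold of 5 is an arbitrary example).

K_MIN = 5

def summarise_by_group(rows, group_key, value_key, k=K_MIN):
    groups = {}
    for r in rows:
        groups.setdefault(r[group_key], []).append(r[value_key])
    return {g: {"n": len(v), "mean": round(mean(v), 1)}
            for g, v in groups.items() if len(v) >= k}

rows = ([{"cohort": "2020", "score": s} for s in (60, 70, 80, 90, 75)] +
        [{"cohort": "2021", "score": s} for s in (55, 85)])
print(summarise_by_group(rows, "cohort", "score"))
# only the 2020 cohort appears; the 2-student 2021 cohort is suppressed
```

Published alongside a statement of the modelling assumptions, such summaries give outsiders something to audit without any raw data leaving the institution.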
Some Experiences Towards an Open and Inclusive LA

In this section, we present the experience gained within the NúcleoREAA through three projects on the protection of personal data, openness, and inclusion in LA: DIIA, Predictive Models, and SELI. DIIA, Discovery of Interactions that Impact on Learning (in Spanish, 'Descubrimiento de Interacciones que Impactan en el Aprendizaje'), was a project funded by the National Agency of Innovation and Research and conducted between 2017 and 2018. Its aim was to discover interactions that impact students' academic performance and to provide teachers with useful information for designing pedagogical strategies and making timely decisions that contribute to improving students' learning (Cervantes et al., 2018; Motz et al., 2018). During its development, protocols and guidelines were produced to establish how and when the educational institution could obtain students' informed consent, differentiating between participation in the research project and participation in the implementation of the system (Díaz, 2018). Such protocols did not exist in Uruguay at the beginning of the DIIA project; it was not until the end of 2018, when the Ethics Committee for the Use of Data in Education was created, coordinated by the Ceibal Foundation, that protocols for the execution of LA projects were published. An unexpected aspect of the project was the problem of access to the data registered in the learning management systems (LMS). The DIIA project aimed to collect data from the activities carried out in two LMS widely used by the educational community in Uruguay: Schoology and Moodle. The University of the Republic uses the open-source Moodle platform to offer a virtual learning space, which allows students to have multiple interactions with teachers, other students, and materials.
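Directly querying Moodle's relational database, as DIIA could do, looks roughly like the sketch below. The table and column names follow Moodle's standard log store but should be checked against the installed version; an in-memory SQLite database stands in here for Moodle's real MySQL/PostgreSQL backend, and all rows are invented:

```python
import sqlite3

# Illustrative sketch of the kind of direct query DIIA could run against
# Moodle's database. The table/column names follow Moodle's standard log
# store (mdl_logstore_standard_log) but should be verified against the
# installed Moodle version; an in-memory SQLite stand-in is used here.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE mdl_logstore_standard_log
                (userid INTEGER, courseid INTEGER, action TEXT,
                 timecreated INTEGER)""")
conn.executemany(
    "INSERT INTO mdl_logstore_standard_log VALUES (?, ?, ?, ?)",
    [(1, 10, "viewed", 1600000000), (1, 10, "viewed", 1600003600),
     (2, 10, "submitted", 1600007200)])

# Interactions per student in a course: raw material for LA indicators.
rows = conn.execute("""SELECT userid, COUNT(*) AS interactions
                       FROM mdl_logstore_standard_log
                       WHERE courseid = 10
                       GROUP BY userid
                       ORDER BY userid""").fetchall()
print(rows)  # [(1, 2), (2, 1)]
```

This freedom to query any stored data is exactly the advantage, and the responsibility, discussed below.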
At the time the DIIA project was developed, Moodle did not yet include an LA tool like the one it currently has, but Moodle's interoperable design enabled developers to create plugins and integrate external applications to achieve specific functionalities (Moodle). One of the main advantages of using Moodle is that it is possible to access its data by querying its database directly and to obtain any data stored by the platform, maximising the possibilities of information analysis without any problems. This was not the situation with access to Schoology data. The Ceibal Plan, in charge of implementing ICT in first- and second-level Uruguayan public education, works with CREA2, a virtual learning platform built on Schoology. Schoology is a proprietary educational platform, hosted in the cloud and used for managing learning and the interaction between students and teachers. The platform has a tool embedded in the application that allows teachers to visualise data. Its implementation, as available during the DIIA project, allowed teachers to see the following quantitative data about their courses: number of visits (page reloads) and the last visit of each student to the course and to each task, discussion topic, and link; number of detailed visits per month; number of comments and responses published; for each student, last access to the platform, total time in the
course, and number of posts in the course; and for each task, discussion topic, and link, the number of times a student accessed it. The available implementation of the platform did not allow access to the content of messages or publications. Nevertheless, Schoology exposes an API for data extraction, described in the official documentation (Schoology), where numerous examples of its use are provided. This API makes it possible not only to obtain information related to a specific student or course, but also to see the student as an independent entity who relates to other teachers and students in different courses. However, as Schoology is a closed-source application managed by a company, direct access to the database storing the information could not be obtained, nor was the creation of backup files allowed, since these activities would have required a modification of the signed contract, and any modification entailed extra costs. The Predictive Models project was funded by the National Agency of Innovation and Research and conducted between 2017 and 2019. It aimed at developing a countrywide learning analytics tool focused on tracking the trajectories of Uruguayan students during their first three years of secondary education. The project consortium consisted of Escuela Superior Politécnica del Litoral (ESPOL), Ecuador, Universidade Federal de Santa Catarina (UFSC), Brazil, and Universidad de la República (UDELAR), Uruguay (Macarini et al., 2019). The National Public Education Administration (ANEP) authorised the project to access the databases of all students of public educational institutions in Uruguay, but we imposed the condition that the databases had to be delivered without the students' personal identifiers. We therefore agreed with ANEP that its technical staff would anonymise the data. Even though this decision had a significant impact on the time at which we were able to access the data, we consider that it was ethically necessary to proceed in this way.
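The anonymisation performed by ANEP's staff is not described in detail here; one common dissociation technique, sketched below under our own assumptions, is keyed pseudonymisation, in which identifiers are replaced by an HMAC whose secret key never reaches the analysts:

```python
import hashlib
import hmac
import secrets

# Illustrative sketch: pseudonymising student identifiers before analysis.
# A keyed hash (HMAC) replaces the raw ID; the secret key is held separately
# (e.g. by the data-providing institution), so analysts cannot re-identify
# students on their own. The record fields are hypothetical.

SECRET_KEY = secrets.token_bytes(32)  # kept outside the analytics team

def pseudonymise(student_id: str) -> str:
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()

record = {"student_id": "u1234567", "grade": 88}
safe_record = {"student_id": pseudonymise(record["student_id"]),
               "grade": record["grade"]}

# The same input always maps to the same pseudonym, so joins across tables
# still work, while the raw identifier never enters the analysis dataset:
assert safe_record["student_id"] == pseudonymise("u1234567")
```

Note that pseudonymisation is weaker than full anonymisation: combined quasi-identifiers can still allow re-identification, which is why the safeguard measures against data crossings discussed earlier remain necessary.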
Unfortunately, due to the lack of adequate protocols at the time the project started, this situation had not been contemplated in the initial project schedule. Another decision we had to make during the project, working in an international team with developers from Ecuador and Brazil, concerned the international transfer of data. Ecuador was not considered a country that met national and European standards for international data transfer, so we had to impose the requirement that the databases would not leave the country. We therefore decided against any international data transfer: a Brazilian developer moved to work in Uruguay, at extra cost to the project, and we set up a virtual server so that the Ecuadorian developers could run sample queries for the work they had to do. We have since aligned ourselves with the European standards and now have guidelines that take us step by step through this type of project; our hope for the next project is to avoid such time-consuming tasks by following the methodology proposed by our own data protection unit. The SELI (Smart Ecosystem for Learning and Inclusion) project was funded by ERANet-LAC (2018–2020), which aims to strengthen international cooperation in research and innovation between European Union (EU) agents and those from Latin
America and the Caribbean (LAC). The National Research and Innovation Agency (ANII) is in charge of the financing in Uruguay. The project addresses the issue of digital inclusion and the accessibility of education for disadvantaged groups, improving the digital skills of teachers in the regions involved. Its three focus areas are: (1) new pedagogies and methods; (2) new learning environments; and (3) digital training for educators (Tomczyk et al., 2020; Porta et al., 2020). Focusing on the learning environments, the infrastructure provided by the SELI project is based on a blockchain architecture and on a set of microservices that can be incrementally coupled to offer different functionalities, one of which is learning analytics. Supporting the LA service on a blockchain has the advantages of interoperability and immutability, as pointed out in Hillman and Ganesh (2019) and Ocheja et al. (2019). On the other hand, the works of Amo et al. (2019), Forment et al. (2018), and Hillman and Ganesh (2019) show the limitations of blockchain in managing data privacy. Blockchain alone does not solve the data privacy problem because, by design, any node connected to the network can see the information uploaded by users. However, blockchain, together with smart contracts and data encryption, offers a way to guarantee access to data in a secure and reliable manner. From the point of view of inclusion, research has been sparse on how to use learning analytics methods to support inclusive education. As Foster and Siddle (2019) show, students with learning disabilities lack access to information on their learning progress, while teachers cannot obtain enough data to make decisions that improve the accessibility of their courses. Further research is required to establish an inclusive LA that enables an understanding of the different types of learning and teaching in an inclusive learning environment.
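One way to combine a blockchain's immutability with off-chain privacy, sketched below under our own assumptions rather than as SELI's actual implementation, is to keep the (encrypted) learner record off-chain and anchor only its hash on-chain, so tampering is detectable without exposing the data:

```python
import hashlib
import json

# Illustrative sketch (not the SELI implementation): learner records are
# kept off-chain, encrypted and under the institution's control; only a
# hash of each record is written to the blockchain. The chain then
# guarantees immutability -- anyone can verify that a record has not been
# altered -- without the record itself being visible to every node.

def anchor(record: dict) -> str:
    """Digest that would be written to the blockchain for this record."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

record = {"student": "pseudonym-7f3a", "course": "math101", "progress": 0.8}
on_chain_hash = anchor(record)

# Later verification against the immutable on-chain value:
assert anchor(record) == on_chain_hash             # untampered
tampered = dict(record, progress=1.0)
assert anchor(tampered) != on_chain_hash           # tampering is detectable
```

In a full design, smart contracts would additionally mediate who may obtain the decryption key for the off-chain record, which is the role encryption and smart contracts play in the argument above.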
A first step in this direction is taken by the SELI project, which provides a descriptive LA dashboard built by retrieving data on the accessibility relationships between what the courses offer and what the students need (Costas et al., 2021). During the elicitation process with teachers and students, the following indicators were identified: (1) inclusion indicators for the teacher (number of courses with support for special requirements; ratio of resources with accessibility support to the total number of resources; ratio, within a course with accessibility requirements, of supported resources and activities; match between each accessibility support and the corresponding accessibility requirement; students' use of accessibility support; students' performance relative to their use of accessibility support); and (2) inclusion indicators for the student (course progress against the student's plan; number of activities completed against the student's plan; ratio of activities completed successfully to those failed; health goals related to the study plan; student preferences regarding resource accessibility, to be implemented in the next version). These indicators are computed from the data available on the SELI platform as well as from an external service providing health data for students. An example of video accessibility analysis is provided in Costas et al. (2021), where the authors analyse how teachers on the SELI platform provide accessibility features in video-recorded classes, such as sign language, captions, and alternative text, for inclusive learning resources. Based on these features, the descriptive inclusive dashboard can show a visualisation of the video accessibility features commonly
used by students in a course. This data helps to identify accessibility features that are useful for future courses and to improve the availability of resources supporting inclusive needs. The SELI platform also provides a chart that compares the use of accessibility features between students who require accessibility options, most likely due to a disability, and students who do not. This analysis allows us to identify how useful the accessibility features are for students with disabilities. The SELI platform is still in beta and evolving. The LA components still need improved visualisations and conformance with the principles of universal design for learning (Rose & Meyer, 2002), but the project succeeds in introducing LA as a means to strengthen the learning process in inclusive education. Recent interest in this topic was also visible at the LAK2021 conference, with the event 'Accessible learning, accessible analytics: a virtual evidence café' (Papathoma et al., 2021).
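One of the teacher-side inclusion indicators listed above, the ratio of resources with accessibility support to the total number of resources, can be sketched as follows (the resource records and field names are hypothetical, not SELI's actual schema):

```python
# Illustrative sketch of one teacher-side inclusion indicator: the ratio of
# resources offering at least one accessibility feature to the total number
# of resources in a course. Field names are hypothetical.

def accessibility_support_ratio(resources):
    supported = sum(1 for r in resources if r["accessibility_features"])
    return supported / len(resources)

course_resources = [
    {"name": "lecture1.mp4", "accessibility_features": ["captions", "sign_language"]},
    {"name": "notes.pdf", "accessibility_features": ["alt_text"]},
    {"name": "quiz1", "accessibility_features": []},
    {"name": "lecture2.mp4", "accessibility_features": ["captions"]},
]
ratio = accessibility_support_ratio(course_resources)
print(f"{ratio:.0%} of resources offer accessibility support")  # 75%
```

Tracked over time, such a ratio gives teachers a concrete target for improving the accessibility of their course materials.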
The Way Forward

As we pointed out in the previous sections, the guides on the ethical principles of LA (UNESCO, 2019; Hillaire et al., 2018) are undoubtedly useful and necessary. It is necessary to open access to education with diversity and equity as guiding principles, and LA is a tool that, if designed and used according to ethical principles, can contribute greatly to meeting this challenge. Fortunately, more and more interdisciplinary dialogues are being established around the ethics of learning analytics. Learning analytics is no longer considered only in the context of data privacy, but also in that of social research (Selwyn & Gasevic, 2020; Cerratto Pargman et al., 2021). Developers of LA applications must engage in an active dialogue with educators, sociologists, educational technologists, pedagogues, communicators, and experts in data privacy to understand how their solutions benefit educational practices from a human rights and ethical framework perspective. Furthermore, it is also necessary to promote national policies that explore how the use of LA can contribute to more effective, better-informed, ethical, and inclusive development. From the NúcleoREAA, we are working towards a fair learning analytics that aims to make important values such as openness and inclusion a reality. Fair learning analytics is a one-way journey: we welcome you to join the challenge.
References

Amo, D., Fonseca, D., Alier, M., García-Peñalvo, F. J., & Casaña, M. J. (2019). Personal data broker instead of blockchain for students' data privacy assurance. In Á. Rocha, H. Adeli, L. Reis, & S. Costanzo (Eds.), New knowledge in information systems and technologies (WorldCIST'19 2019. Advances in intelligent systems and computing, Vol. 932). Springer. https://doi.org/10.1007/978-3-030-16187-3_36
Antignac, T., & Le Métayer, D. (2014). Privacy by design: From technologies to architectures. In B. Preneel & D. Ikonomou (Eds.), Privacy technologies and policy. APF 2014 (Lecture notes in computer science, Vol. 8450). Springer. https://doi.org/10.1007/978-3-319-06749-0_1
Atenas, J., Havemann, L., Neumann, J., & Stefanelli, C. (2020). Open education policies: Guidelines for co-creation. Open Education Policy Lab. https://doi.org/10.5281/zenodo.4032993
Buckingham, S. (2017). Black box learning analytics? Beyond algorithmic transparency. Keynote presented at the Learning Analytics Summer Institute.
Carmel, Y. (2016). Regulating "Big Data Education" in Europe: Lessons learned from the US (SSRN Scholarly Paper No. ID 2772755). Social Science Research Network. https://papers.ssrn.com/abstract=2772755
Cerratto Pargman, T., McGrath, C., Viberg, O., Kitto, K., Knight, S., & Ferguson, R. (2021). Responsible learning analytics: Creating just, ethical, and caring. In Companion proceedings of the 11th international conference on learning analytics & knowledge (LAK21).
Cervantes, O., Motz, R., Castillo, E., & Velázquez, J. (2018). Uso de métricas sociales para descubrir patrones de interacción que impactan el aprendizaje. In Proceedings of the 1st Latin American workshop on learning analytics, Guayaquil, Ecuador. CEUR Workshop Vol-2231. http://ceur-ws.org/Vol-2231/LALA_2018_paper_18.pdf
Chatti, M. A., Dyckhoff, A. L., Schroeder, U., & Thüs, H. (2012). A reference model for learning analytics. International Journal of Technology Enhanced Learning (IJTEL), 4, 318–331. https://doi.org/10.1504/IJTEL.2012.05181
Chen, B., & Zhu, H. (2019). Towards value-sensitive learning analytics design. In Proceedings of the 9th international conference on learning analytics & knowledge, pp. 343–352.
Costas, V., Solomon, S., Caussin-Torrez, L., Barros, G., Agbo, F., Toivonen, T., Motz, R., & Tenesaca, J. (2021). Descriptive analytics dashboard for an inclusive learning environment.
IEEE Frontiers in Education Conference (FIE), 2021, 1–9. https://doi.org/10.1109/ FIE49875.2021.9637388 Czerniewicz, L. (2020). 6 ecologies of (open) access: Toward a knowledge society. Making Open Development Inclusive: Lessons from IDRC Research. Díaz, P. (2018). Recomendaciones para el uso ético de herramientas de analíticas de aprendizaje en Instituciones Educativas. http://www.diia.edu.uy/docs/DIIA_recomendaciones_para_instituciones_educativas.pdf Díaz, P., Jackson, M., & Motz, R. (2015). Learning Analytics y protección de datos personales. Recomendaciones. Anais dos Workshops do Congresso Brasileiro de Informática na Educação, 4(1), 981. https://doi.org/10.5753/cbie.wcbie.2015.981 Ferguson, R. (2012). Learning analytics: Drivers, developments, and challenges. International Journal of Technology Enhanced Learning, 4(5–6), 304–317. https://doi.org/10.1504/ IJTEL.2012.051816 Forment, M. A., Filvà, D. A., García-Peñalvo, F. J., Escudero, D. F., & Casaña, M. J. (2018). Learning analytics’ privacy on the blockchain. In Proceedings of the sixth international conference on technological ecosystems for enhancing multiculturality (pp. 294–298). ACM. Foster, E., & Siddle, R. (2019). The effectiveness of learning analytics for identifying at-risk students in higher education. Assessment & Evaluation in Higher Education. https://doi.org/1 0.1080/02602938.2019.1682118 Graff, H. J. (2015). Undisciplining knowledge: Interdisciplinarity in the twentieth century. JHU Press. Hillaire, G., Ferguson, R., Rienties, B., Ullmann, T., Brasher, A., Mittelmeier, J., Vuorikari, R., Castaño Muñoz, J., Cooper, A., Clow, D. (2018) Research evidence on the use of learning analytics: Implications for education policy (R. Vuorikari, & J. Castaño Muñoz (Eds.)). Publications Office. https://data.europa.eu/doi/10.2791/955210 Hillman, V., & Ganesh, V. (2019). Kratos: A secure, authenticated and publicly verifiable system for educational data using the blockchain. 
In IEEE international conference on big data (Big data), pp. 5754–5762.
R. Motz and P. Díaz-Charquero
Kahle, D. (2008). Designing open education technology. In T. Iiyoshi & M. S. V. Kumar (Eds.), Opening up education (pp. 27–45). MIT Press.
Le Métayer, D. (2010). Privacy by design: A matter of choice. In S. Gutwirth, Y. Poullet, & P. De Hert (Eds.), Data protection in a profiled world (pp. 323–334). Springer. https://doi.org/10.1007/978-90-481-8865-9_20
Long, P., Siemens, G., Conole, G., & Gasevic, D. (2011). Proceedings of the 1st international conference on learning analytics and knowledge (LAK11), Banff, AB, Canada, Feb 27–Mar 01, 2011. ACM.
Macarini, L. A., Cechinel, C., Santos, H. L. dos, Ochoa, X., Rodés, V., Alonso, G. E., Casas, A. P., & Díaz, P. (2019). Challenges on implementing Learning Analytics over countrywide K-12 data. In Proceedings of the 9th international conference on learning analytics & knowledge (pp. 441–445). https://doi.org/10.1145/3303772.3303819
Martínez Miguélez, M. (2007). Conceptualización de la transdisciplinariedad. Polis. Revista Latinoamericana, Número 16.
Moodle. Managing a Moodle course – Tracking progress – Analytics. https://docs.moodle.org/35/en/Analytics
Motz, R., Cervantes, O., & Echenique, P. (2018). Sentiments in social context of student modelling. In 2018 XIII Latin American Conference on Learning Technologies (LACLO) (pp. 484–491). IEEE. https://doi.org/10.1109/LACLO.2018.00086
Nunn, S., Avella, J. T., Kanai, T., & Kebritchi, M. (2016). Learning analytics methods, benefits, and challenges in higher education: A systematic literature review. Online Learning, 20(2). https://doi.org/10.24059/olj.v20i2.790
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Ocheja, P., Flanagan, B., Ueda, H., & Ogata, H. (2019). Managing lifelong learning records through blockchain. Research and Practice in Technology Enhanced Learning, 14(1), 1–19.
Osoba, O. A., & Welser, W., IV. (2017). An intelligence in our image: The risks of bias and errors in artificial intelligence. Rand Corporation.
Papathoma, T., Ferguson, R., & Vogiatzis, D. (2021). Accessible learning, accessible analytics: A virtual evidence café. In Companion proceedings of the 11th international conference on learning analytics & knowledge (LAK21).
Porta, M., Motz, R., Tomczyk, L., Oyelere, S., Eliseo, M. A., Viola, M., Farinazzo, V., Costas, V., & Yasar, O. (2020). SELI: Ecosistemas inteligentes para el aprendizaje y la inclusión. Tópos, para un debate de lo educativo, (12). ISSN 1688-8200. http://ojs.cfe.edu.uy/index.php/rev_topos
Prinsloo, P., & Slade, S. (2018). Mapping responsible learning analytics: A critical proposal. In B. H. Khan, J. R. Corbeil, & M. E. Corbeil (Eds.), Responsible analytics & data mining in education: Global perspectives on quality, support, and decision-making. Routledge.
Rodés, V., Motz, R., Diaz, P., Czerwonogora, A., Suárez, A., & Cabrera, M. (2018). Connecting the dots: Linking Open Access and Open Educational Practices to enhance Open Educational Resources and Repositories adoption among Higher Education Institutions. In OE global 2018 conference. https://repository.tudelft.nl/islandora/object/uuid:6c361f43-1879-463b-884d-e2a951b40ef8
Rose, D. H., & Meyer, A. (2002). Teaching every student in the digital age: Universal design for learning. Association for Supervision and Curriculum Development.
Schoology. Documentation Schoology. https://developers.schoology.com/api
Selwyn, N., & Gasevic, D. (2020). The datafication of higher education: Discussing the promises and problems. Teaching in Higher Education, 25(4), 527–540. https://doi.org/10.1080/13562517.2019.1689388
Shum, S. J. B. (2019). Critical data studies, abstraction and learning analytics: Editorial to Selwyn's LAK keynote and invited commentaries. Journal of Learning Analytics, 6(3), 5–10. https://doi.org/10.18608/jla.2019.63.2
Siemens, G. (2013). Learning analytics. The American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851
Slade, S., & Tait, A. (2019). Global guidelines: Ethics in learning analytics. https://www.learntechlib.org/p/208251/
Tomczyk, L., Solomon, S., Amato, C., Farinazzo, V., Motz, R., Barros, G., Yasar, O., & Muñoz, D. (2020). Smart ecosystem for learning and inclusion – assumptions, actions and challenges in the implementation of an international educational project. In Adult education 2019 – in the context of professional development and social capital: Proceedings of the 9th international adult education conference. Czech Andragogy Society.
UNESCO (United Nations Educational, Scientific and Cultural Organization). (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development (Working papers in policy, 7). https://backend.educ.ar/refactor_resource/get-attachment/1097
Weller, M. (2020). Open and free access to education for all. In Radical solutions and open science (pp. 1–15). Springer.

Regina Motz is Full Professor at the Instituto de Computación, Universidad de la República. She is Coordinator of the Semantic Information System Research Group and Co-Responsible for the Interdisciplinary Group on Open and Accessible Educational Resources, Universidad de la República. She obtained her PhD in Computer Science from Darmstadt Technology University, Germany, her MSc in Computer Science from the Universidade Federal de Pernambuco, Brazil, and her Bachelor's degree in Computer Science from the Universidad de la República, Uruguay.
Patricia Díaz-Charquero is a Lawyer and holds an MSc in International Relations. She is Assistant Professor at the Education Training Council and in the 'Ethics, Technology and Society' Program, Technological University of Uruguay. She is Public Lead for Creative Commons Uruguay and a Member of the Interdisciplinary Group on Open and Accessible Educational Resources, Universidad de la República, Uruguay.
Chapter 4
Beyond Just Metrics: For a Renewed Approach to Assessment in Higher Education

Juliana E. Raffaghelli and Valentina Grion
Abstract Assessment and evaluation in education became a subject of quantification early in the history of educational systems, given their allegedly crucial role in supporting transparency, accountability and effectiveness. The aim of this chapter is to scrutinise the evolution of recent discourses relating to assessment in higher education, in order to uncover the fallacies of quantification, later transformed into data-driven practices, connected to assessment. The chapter is divided into four sections, in which we seek connections between the evolution of assessment, metrics and quantification in higher education; the technological, data-driven transformation of assessment practices; and, hence, the compelling need for critical data literacy in liaison with assessment literacy. Overall, we conclude that (a) assessment should be strengthened as a pedagogical practice that may or may not adopt data, its main purpose being to support learners' development rather than merely to produce quantifiable representations of the educational process; (b) data-driven practices within digitally mediated assessment are just another approach to producing representations, which might or might not support the so-called "assessment for learning"; and (c) there is a need to embrace a critical "pedagogical-data" literacy, understood as the ability to read data as a complex assemblage of practices and instruments. Overall, this approach could lead to a renewed assessment literacy.
J. E. Raffaghelli (*)
Edul@b Research Group, Universitat Oberta de Catalunya, Barcelona, Spain
Department of Philosophy, Sociology, Pedagogy and Applied Psychology, University of Padua, Padua, Italy
e-mail: [email protected]; [email protected]

V. Grion
Department of Philosophy, Sociology, Pedagogy and Applied Psychology, University of Padua, Padua, Italy
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. E. Raffaghelli, A. Sangrà (eds.), Data Cultures in Higher Education, Higher Education Dynamics 59, https://doi.org/10.1007/978-3-031-24193-2_4
Keywords Higher education · Quantification · Metrics · Assessment · Assessment literacy · Data literacy
…Gross National Product counts air pollution and cigarette advertising, and ambulances to clear our highways of carnage. It counts special locks for our doors and the jails for the people who break them (…) Yet, Gross National Product does not allow for the health of our children, the quality of their education or the joy of their play. It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials. It does not measure our wit or our courage, our wisdom or our learning, our compassion or our devotion to our country; it measures everything, in short, except that which makes life worthwhile.
Robert F. Kennedy, Remarks at the University of Kansas (Kennedy, 1968)
Introduction

Assessment and evaluation in education became a subject of quantification early in the history of educational systems, given their allegedly crucial role in supporting transparency, accountability and effectiveness. With the rising interest in quantitative data and metrics promoted by positivism in science, no scientific activity could escape the need for measurement and hypothesis testing through statistical methods. Pedagogy, like any other discipline, was under this influence, and early in the twentieth century measurement entered the education system to analyse teachers' and students' behaviour.

During the 1950s and 1960s, the international community became interested in the contribution of education to national economies. Indeed, the relationship between education and economic growth had been theorised early on by Adam Smith in The Wealth of Nations (Smith, 1776, p. 137). In the aftermath of World War II, attention was placed on Western countries' economic development, and investment in any factor contributing to the economy was under the spotlight. The focus of policy makers and governments quickly moved to literacy as a relevant factor contributing to the quality of the labour force and as the basis for living in democratic societies. The debate evolved in the following decades, covering the need for more and better skills for the capitalist Western economies in a situation of cyclical crisis after the 1980s. By the end of that decade, though, criticism of the education systems grew, particularly over their inability to respond to industry's skills demand (Mitch, 2005). In such a context, the discourse enclosed in the White Paper coordinated by Jacques Delors from UNESCO for the European Commission (Delors et al., 1996) very much emphasised the compelling need to modernise education, pointing to the metaphor of learning as "the treasure within".
Though the reader might see in such a metaphor an emphasis on more accountable systems in relation to economic growth, the group coordinated by Delors attempted to go beyond that idea, in search of a more complex vision of education. Nonetheless,
the report was used to set the basis for European educational and developmental policies, in which clear "benchmarks" of development were set, together with a number of indicators to measure the progress of the European education systems, updated through two programmes to establish actions and evaluation (European Commission, 2011). This context of enthusiasm for the measurement of progress was also associated with the US "No Child Left Behind" policies from the early 2000s. Supported by solid scholarship and an elaborate approach to the selection of evidence (mainly quantitative), the efforts were all directed at determining which teaching practices "work, under which circumstances" (Slavin, 2002). For formal education, but also for all forms of learning recognition relating to informal and non-formal learning, assessment and evaluation became the beginning of a crucial pathway connecting the performance of the individual with the performance of the systems.

As a matter of fact, a recent effort to analyse the education system based on the assessment of a specific set of skills and knowledge has been the OECD PISA programme (Programme for International Student Assessment, https://www.oecd.org/pisa/). It "measures 15-year-olds' ability to use their reading, mathematics and science knowledge and skills to meet real-life challenges" (PISA's presentation on the webpage). The effort to measure and compare the performance of national systems reached 79 countries in 2018 and some 600,000 students, representing around 32 million 15-year-olds (OECD, 2018). The PISA measurement has become so relevant that there are national efforts aimed at preparing students to participate in the tests, and the results are widely cited and used in policy making (Biesta, 2015). The critique of evidence-based education and of PISA has shown the shortcomings of quantification.
In the first case, although the approach encompassed rigorous research work and the generation of evidence relating to specific programmes on basic literacies (maths and reading), the scholarship was critical of the lack of consideration of social and cultural factors contributing to a broader picture of educational outcomes (Biesta, 2007). As for the PISA tests, despite their careful design, many concerns have been raised relating to cultural differences, not only between the "nations" (a concept loaded with ideology) but also within the same territories being compared. A review of two decades of literature on this international testing highlighted three fundamental deficiencies: its underlying view of education, its implementation, and its interpretation and impact on education globally (Zhao, 2020). Indeed, PISA's refined tests, like any other national testing system whose outcomes are a metric of some sort, have had implications for the whole educational process. As Gert Biesta pointed out:

Quantitative measures that can easily be transformed into league tables and into clear statements about gain and loss between different data-trawls which, in turn, provide a clear basis for policy-makers to set targets for 'improvement' (see below) – such as gaining a higher place in the league table than apparent competitors, increasing national performance by at least a certain number of points or articulating the ambition to score 'at least above average' – give PISA a simplicity that is absent in complicated discussions about what counts as good education. (Biesta, 2015, p. 350)
Policy makers' reliance on such international efforts to measure educational outcomes can be seen, in any case, as the result of decades of assessment practice for measurement, in which the concern about the system's performance has overridden the sense and direction of a pedagogical practice. This point was sharply captured in Brown's words: "We may not like it, but students can and do ignore our teaching; however, if they want to get a qualification, they have to participate in the assessment processes we design and implement" (Brown, 2005, p. 81). These words may seem like a simple expression of a desire to focus on the pedagogical design of assessment. But they make visible the liaison between the metrics used in assessment as part of teaching and learning and the metrics used to engineer the educational system's accountability. Producing grades is just the basic operation; the later steps involve aggregating, summarising and comparing grades to set the basis for displaying and discussing educational quality at the institutional or country level. The ultimate consequence, as we pointed out above through Biesta's ideas, is the way such metrics become actionable instruments of policy making.

Therefore, the educators' careful focus on designing assessment as a meaningful activity for students might involve many implications for educational institutions and the system. One particular effect is the deconstruction of the anxiety over the grade as the single relevant element demonstrating the student's skills and knowledge to take part in society. In fact, the perverse effect of using grading to support the system's analysis has been its impact on students' and teachers' perceptions of assessment practice as a bureaucratic operation, not part of the learning process.
As Donald Campbell theorised early on, in the 1970s, there is a distortion in the quantitative representation of performance, whereby a metric shifts from being a tool to analyse a phenomenon to becoming a driver of the actors' behaviour (Vasquez Heilig & Nichols, 2013). The phenomenon is so frequent that a colloquial expression has been coined for it: "teaching to the test". This stands for any pedagogical method heavily focused on preparing students for standardised tests (Styron & Styron, 2011). It pinpoints the depletion of meaning, the de-contextualisation and the lack of attention to students' diversity in educators' attempts to demonstrate the "quality of the system". Moreover, those educators who defend their space of pedagogical practice and do the opposite, in order to support diversity and equity, often fail to reach the expected benchmarks of quality.

In his book The Tyranny of Metrics, Jerry Muller (2018) clearly depicts the problem of doing more harm than good through measurement within the context of the US system, and the anxiety to measure "to see" the return on investments made in "closing the gap" in basic literacies. In his analysis, the author arrives at a disarming conclusion:

…the self-congratulations of those who insist upon rewarding measured educational performance in order to close achievement gaps come at the expense of those actually engaged in
trying to educate children. Not everything that can be measured can be improved – at least, not by measurement. (Muller, 2018, p. 114)
To go beyond measurement, experts have already pointed out that embracing assessment as a complex and participatory process, in which students play a crucial role in designing and applying assessment activities, might lead to assessment literacy as a final outcome beyond the grade (Boud, 1988; Grion & Serbati, 2018). This last concept stands for a set of abilities to assess one's self-determined learning and others' learning in contexts other than the classroom, abilities that are extremely relevant for our democratic societies. Needless to say, there is a long way to go in this sense (Medland, 2019).

The beginning of the digital era only expanded the problem. Data was easy to generate and collect, leading to enthusiastic claims relating to educational transparency and accountability, hand in hand with discourses about new requirements for educators' professional development relating to data practices (Mandinach, 2012; Vanhoof & Schildkamp, 2014). Educators' data literacy entered into the equation. In Mandinach and Gummer's (2016) terms, in a context of evidence-based education, not only educational researchers but also teachers might produce relevant data. As a result, data use in education would become an imperative when it comes to teacher preparation. Being "data literate", for a teacher, means having "the ability to transform information into actionable instructional knowledge and practices by collecting, analysing, and interpreting all types of data (assessment, school, climate, behavioural, snapshot, longitudinal, moment-to-moment, etc.)" (Mandinach & Gummer, 2016, p. 367).

In the case of higher education, the increasing development of data extraction techniques and of digital, quantified representations of the activity in the virtual classroom has led to a new field of research and practice, namely learning analytics (Ferguson, 2012), which has had plenty of implications for the educational community (Raffaghelli, 2018a).
Indeed, the enthusiastic adoption of learning analytics was connected early on with the assumption that more data-driven practices in teaching and learning would lead to educational quality and productivity (Siemens et al., 2014). Such an enthusiastic vision has driven the development and testing of increasingly complex learning analytics interfaces intended to prevent student drop-out, inform teachers' decisions on teaching effectiveness and drive the redesign of courses (Viberg et al., 2018). The enthusiasm was also permeated by the possibility of automatising and scaling learning solutions, supporting the learner's independence and self-regulation across digital learning environments (Winne, 2017). The research within the hyperbolic context of MOOC development over the decade 2010–2020 generated effervescence and the appropriate experimental contexts supporting such developments (Gasevic et al., 2014).

Moreover, such a data-driven approach to analysing and assessing learning processes, including participants' own ability to engage, self-test, peer-evaluate and analyse their own learning achievements, has underpinned the discourses on the need for educator and student data literacy. Some authors connected data literacy
with reading learning analytics to support learning design (Persico & Pozzi, 2015), while others called for teacher and student pedagogical data literacy as a means to participate in digital, data-intensive and informed learning environments (Wasson et al., 2016). Nonetheless, for other scholars, the discourses around data literacy required deeper reflection, also covering the crucial problem of data ethics connected to the use and impact of students' data (Willis et al., 2016). This included data monetisation for the development of further digital services and products, bias and harm to vulnerable students (including gender, race and poverty as vectors of bias), lack or inappropriateness of informed consent, etc. (Raffaghelli, 2020; Stewart & Lyons, 2021). The pandemic only intensified the debate, with the exacerbated adoption of private platforms collecting data without clear consent or understanding from end users or institutions (Perrotta et al., 2020; Williamson et al., 2020).

In fact, the emergence of platforms generated an entirely new phenomenon: the monetisation of metrics and quantification through the adoption of data as a resource to develop appealing learning analytics. As some argue (Williamson et al., 2020; Williamson & Hogan, 2021), while the metrics adopted by policy makers and governments have been used to "check up" on the attainment of educational individuals, institutions and systems against public benchmarks, the sole purpose of private platforms has been profit – a profit based on data extracted from engagement, participation and assessment to generate alluring data visualisations and recommender systems promising to personalise the pedagogical response: more "teacher presence" for less "teacher workload". Reading between the lines, there is a trade-off between quantification and automation, on the one hand, and the human pedagogical relationship, on the other – a relationship which Muller already considered damaged by the testing excesses of the era of "the measured educational performance".
So, we face a "new wine in old wineskins" problem. Digital data production, entailing data gathering and manipulation to steer the actors' and the system's response and direction, can be seen as a new approach (new wine). But the actual base (old wineskins) is the long-standing interest in metrics for assessment, evaluation and quality. We also pointed out a problematic new direction, namely, cultivating data literacy as part of educators' professionalism and students' skills within datafied learning environments. But, while the landscape of educational datafication seems inexorable for higher education, a compelling question remains: Can data literacy fix the problem of dealing with increasingly datafied assessment and evaluation in higher education and in education systems overall?

The aim of this chapter is to build upon the existing literature in order to open a window towards possible answer(s) to this question. We posit here that a more complex approach to assessment (for learning), and the resulting assessment literacy as a desirable outcome, now requires the integration of data literacy. Nonetheless, we recall that data literacy is also a multi-perspective and polysemous concept. Therefore, there is a risk of embracing it as the plain acceptance of quantification and metrics aligned with a restricted idea of assessment (for grading). Instead, a holistic, critical approach to data practices and literacies in education
(Raffaghelli et al., 2020) could support, in a more plausible way, a perspective of assessment for learning.

We will begin our journey with a section connecting assessment for learning and data literacy, by reviewing the debate around assessment as a central pedagogical practice leading to deep learning and to assessment literacy. Although authentic evaluation and assessment practices can drive critical awareness, appropriate and timely judgement and active participation, our first question here is why there is so much resistance from teachers and students to changing assessment practices. The answer to this question comes in the following section, where we will explore the so-called crisis in the practice of assessment for grading and generating educational credentials, in a scenario pervaded by quantification and metrics. We will then move on, in the third section, to explore the factor putting pressure on the pedagogical practice: the strong connections between educational data collection and rankings as "the quantification of educational quality" in a context of increasing competition amongst universities. Our attempt here will be to show the numerous difficulties connected to building indicators, and the excessive insistence on some indicators at the expense of relevant (and never collected) information, including complex approaches to assessment. In the following section, we will link this situation to the effect that digital technologies and data-driven practices have had on data collection. We will wrap up our concerns about assessment in the context of a datafied university, not only by exploring the role of data literacy but mainly by exploring the potential of a "critical" and "pedagogical" data literacy in higher education.
Overall, we focus on (a) strengthening the idea of assessment as a pedagogical practice which may or may not adopt data as an instrument to synthesise outcomes, and whose aim is not to produce a quantifiable representation of the educational process; (b) the idea that data-driven practices within digitally mediated assessment are just another approach to producing representations, which might or might not support so-called "assessment for learning"; and (c) the need to embrace a critical "pedagogical-data" literacy, as the ability to read data as a complex assemblage of practices and instruments. Overall, this approach could lead to a renewed assessment literacy.
Assessment for Learning: Beyond Grading

Assessment methods and activities have a crucial impact on student learning (Boud, 1988; Grion & Serbati, 2018). As a result, assessment and evaluation should be a relevant part of the learning process (Black & Wiliam, 2009), not a technical and administrative operation aimed at producing the data required by the bureaucratic system. Despite this relevance, only in the last few decades have assessment and evaluation in higher education been considered a "hot" topic, at both the research and policy levels. At the international level, in fact, the problem received little attention in higher education research before the 1990s (Boud, 1995), probably alongside the plea to take teaching as a central priority for professors (Boyer et al., 2015).
The Australian professor David Boud was one of the pioneers in focusing the attention of the educational research community on assessment in the university context, and he remains one of the most authoritative voices, internationally, in this field of research. Early in 1988, he stated that "assessment methods and tasks have a profound influence on how and what students learn, probably more than any other factor involved. This influence may play a more decisive role than that played by the materials" (Boud, 1988, pp. 39–40). Further discussing the "weight" of assessment in academia, in a later article the same author stated that "students can, albeit with difficulty, overcome the effects of bad teaching, but they do not have the chance (by definition, if they want to graduate) of escaping the effects of 'poor' assessment. Evaluative acts represent mechanisms of control over students that are far more pervasive and insidious than most teachers are able to recognise" (Boud, 1995, p. 35).

A profuse literature followed Boud's pioneering perspective, showing how the forms and instruments of assessment used by teachers often affect student behaviour in relation to their commitment to study, their focus on certain aspects of teaching and the actual skills developed as a result of attending a course (Boud & Soler, 2016). Along these lines, Bloxham and Boyd (2007) affirmed that, at university, assessment activity corresponds to learning activity. Although students may take notes during lectures, follow seminars, underline parts of texts while the lecturer deals with the relevant topic or carry out work assignments proposed in laboratories, preparation for assessment has traditionally been the time when students engage with the study material seriously and, perhaps, effectively. Therefore, the scholarly literature supported the idea that assessment tasks should be carefully planned to develop contexts of reflection and activity which lead to relevant learning (Brown, 2005).
Indeed, the work of Biggs has been foundational in this trend. This scholar developed the concept of alignment between teaching and assessment as an approach to quality in higher education (Biggs & Tang, 2011). For Biggs, a worrying aspect requiring critical attention related to the negative effect on learning produced by superficially designed assessment practices, even in the case of pedagogical innovations. For example, a course based on project-based learning or collaborative learning might end up with low grades if the final test is designed as a multiple-choice exam requiring just a mnemonic exercise (Ghislandi & Raffaghelli, 2015). In a more practical vein, Bloxham and Boyd (2007) highlighted how the assessment strategies adopted and communicated to students influenced the approach to studying, the amount of time students devote to preparation, the depth of content acquisition, the more or less effective ways in which they identify key concepts and so on. Cinque (2016) pointed out that assessment conveys what is important to learn; has a powerful effect on what and how students learn; consolidates the development of learning strategies; influences the value the subject places on training, as well as the sense of personal fulfilment and willingness to complete certain learning tasks; and contributes to defining what students associate, in general, with the experience of assessment in the university environment.
4 Beyond Just Metrics: For a Renewed Approach to Assessment in Higher Education
97
In underlining the need to rethink the assessment of learning in the academic environment, Brown referred to the observations made by the Higher Education Academy (Ball et al., 2012), the European Commission (McAleese et al., 2013) and the Yerevan Communiqué (European Higher Education Area EHEA, 2015) about the inadequacy of assessment and evaluation practices in our contemporary universities, within an overall context of concern about the quality of teaching in higher education. According to the aforementioned reports, assessment and evaluation practices had failed to keep pace with the profound organisational, structural and functional changes in the universities themselves. They were unsuitable for identifying and capturing the results that society expected from university education, particularly in relation to the skills and knowledge required in several professional fields and in society overall. Moreover, the HEA has recently reiterated the important role of assessment and evaluation for the modernisation of higher education. This UK institution highlighted areas of assessment practice where innovation is recommended in order to support modernisation. Amongst them, it included authentic and relevant work and a variety of assessment approaches, feedback practices as a dialogic approach to the learning process and the reinforcement of more reflexive and participatory approaches to assessment such as self-assessment and peer assessment. In fact, appropriate, timely and recursive feedback (Carless, 2019), even in large-size lectures (Ludvigsen et al., 2015), and strategically designed self-assessment (Doria & Grion, 2020) and peer assessment (Serbati & Grion, 2019) have been supported by empirical evidence as key approaches to triggering reflection and self-determination as a learner.
In the same vein, the “Framework for transforming assessment in Higher Education” (Higher Education Academy, 2016) has become a clear tool to support innovation in assessment. While the document paid substantial attention to measurement, it also called for a focus on designing for learning and transferring learning: Assessment plays a vital role in HE. It is essential for measuring the extent of student learning (assessment of learning) as well as for promoting student learning (assessment for learning). Assessment should be designed in ways that promote student learning; whether learning the subject or professional domain or competencies, literacies and skills at a subject or broader level. (The Higher Education Academy, 2016, p. 1)
In her book aimed at proposing an international perspective on university teaching and assessment, Sally Brown went a step further (Brown, 2014). She claimed that assessment is a key piece in the modernisation of teaching practice, provided that the latter moves beyond just measuring learning and grading. In the same vein, Boud (2020) highlighted that assessment makes an impact whether the educator seeks it or not. The premises supporting this relevant assumption are that (a) the practice of assessment communicates to students what the teacher (and the system) value; (b) hence, it sets the priorities for the students’ academic work (study); (c) from its inception, assessment over-emphasises some things over others; and (d) depending on the approach adopted, assessment might inhibit cooperation and collaboration between students by focusing them on the individual knowledge and skills outcomes it requires.
In Boud’s words: Assessment is seen not as a measurement or judgement that leaves the ‘subject’ (the student) unaffected, but an act that profoundly influences students’ study and their priorities in many ways. Poor choice of assessment activities leads to poor learning and distorts what students end up being able to do. (Boud, 2020, p. 5)
The prior reflections let us conclude that assessment at university level can no longer be considered merely a bureaucratic instrument at the end of the learning and teaching activity, but rather a complex and fundamental process that requires teacher and student interaction and engagement, in view of young people’s future professionalism and their insertion into society. Careful and significant assessment, as a practice embedded in the whole pedagogical relationship, leads to a critical literacy enabling students to become evaluators. Assessment literacy, as it has been called, is the base of many other cross-cutting skills required in many contexts, such as knowing how to make decisions, solve problems and think critically (C. D. Smith et al., 2013). Some authors (Boud, 1999; Boud & Soler, 2016) highlighted that the ability to become lifelong learners is actually based on becoming lifelong assessors. In short, this means being prepared to face analysis, evaluation and judgement applied to personal and professional life. A learner will in fact be called, in all life situations, to judge their own and others’ performances in specific and diversified contexts, provide and receive feedback on the problematic situations to be faced and critically evaluate the quality of products and processes in order to make the consequent decisions (De Rossi, 2017). However, being able to assess in a relevant and balanced way is not a competence that arises spontaneously; it must be intentionally trained and considered an indispensable training objective of every discipline (Boud et al., 2013). Graduates can engage in such an evaluative approach with autonomy and responsibility only if they are offered the opportunity to take an active part in evaluation and feedback processes. These processes can be implemented on a small scale within the classroom, where the student can participate in self-assessment and peer assessment exercises (Serbati et al., 2019).
However, scholars also highlighted how little attention has been paid to a broader, more complex vision for building such literacy (Grion & Serbati, 2018; Medland, 2019). Beyond its pedagogical relevance, and beyond the skills teachers require to implement such activities, there is a further issue preventing the spread of a complex perspective and practice of assessment. Indeed, assessment for learning entails tasks that are not only difficult to design, develop and deploy in real educational settings, but are even more difficult to account for in quantitative terms. As a result, grading, a central element of certification that responds more to the system’s need to account for its own performance than to the needs of teachers and students, becomes difficult and costly to produce from such tasks. In a context of the massification of higher education, the need to show effectiveness, to compete with other institutions and to do so through cost-efficient approaches becomes central. Therefore, most assessment tasks are linked to testing degrees of conceptual knowledge which can be easily transformed into scores. The evident downside of such a situation is the rather superficial learning produced
given the premises mentioned above relating to student engagement with learning activities and resources (Ghislandi, 2005, p. 218). In fact, a common concern amongst educators is that any invitation to students to deepen their knowledge of a matter beyond the topics covered in an exam is ineffective. However, can students be blamed for understanding the system’s requirements and what leads them most effectively to the credentials required by the labour market, and hence to income in a capitalist society? The analysis of the literature clearly draws our attention to three points. Firstly, despite the many recommendations, assessment is still the “ugly duckling” of teaching in higher education, often being little or badly used for the development of learning processes (Grion et al., 2017). It can be argued that educators very often forget the strong influence that assessment exerts on learning (Binetti & Cinque, 2016). Secondly, and deeply connected with the former issue and key to our reasoning in this chapter, badly designed assessment is based on metrics and yields data that are more a requirement of the system than a picture of significant learning outcomes (Biesta, 2015). Thirdly, scores and data have a perverse effect on students’ understanding of what assessment and deep learning are and what they need to do to “achieve” results within the educational system (Pastore & Pentassuglia, 2015). But why is the practice of superficial assessment so frequent and resistant to change? And why is it so connected to the oversimplification of learning outcomes into grades, as quantitative representations that then shape the students’ perception of learning?
Assessment, Metrics, Credentials and the Crisis of Higher Education
Our prior analysis led us to spot the distortions and imbalances produced by assessment in higher education. As we highlighted, assessment practice has embraced predominantly inauthentic, summative methods, to the detriment of its formative function. In a personal communication with one of this chapter’s authors, Kay Sambell concluded that “the paradox is that the more the universities are pushed to produce degrees, the more the teaching and particularly assessment are impoverished. As a result, those assessment practices whose final aim is to make students autonomous, capable of self-expression and balanced judgement, critical thinking and collaborative working, as highly required skills in the lifelong learning society, become improbable, when not non-existent” (Kay Sambell, Newcastle, UK, 04.03.2016, 2pm). We posit in this section that the roots of such a problem can be connected to the need for faster and easier approaches to “grading” as an outcome of both the teaching and the learning process, in a context of massification of higher education.
The “myth of measurement” (Broadfoot, 1996) spread through university education in the 1970s, hand in hand with the emphasis on measuring social interventions. In fact, so-called educational measurement (Ebel, 1972) identified evaluation, tout court, with the practice of measurement and its rules. The US policy-making context was particularly sensitive to measurement and scientific approaches to social action, deeply connected with military interventions abroad. In those years, the American psychologist Donald Campbell became a resounding voice in the debate around the shortcomings of quantification in social interventions overall and in education specifically, though his own position inclined towards supporting the use of metrics. Campbell’s work became well known for the formulation of a “law” concerning tests (as operations that yield scores): Achievement tests may well be valuable indicators of general school achievement under conditions of normal teaching aimed at general competence. But when test scores become the goal of the teaching process, they not only lose their value as indicators of educational status but also distort the educational process in undesirable ways (Campbell, 1979, p. 85). The distortions of quantification were well studied in his foundational reflection on “the corrupting effect of quantitative indicators” (p. 84), which the author related to the controversy between the value of qualitative and quantitative modes of knowing, itself recalling the antagonism between the “humanistic” and “scientific” approaches to research (p. 69). The debate on the antagonism of the quantitative and qualitative paradigms went on for almost three decades (Kincheloe & Berry, 2004), with profound impacts on the conception of evaluation, particularly high-stakes evaluation, and its connections with policy making (Biesta, 2007).
Grasping the complexity of social interventions and the superficiality of applying metrics to them, the discourse on applied evaluation evolved into more participatory approaches. A clear landmark of this debate was Fourth Generation Evaluation by Guba and Lincoln (1989). The educational research community was unable to strike a balance until mixed-methods and design-based research pushed the debate a step forward, in a desperate attempt to capture the complexity of educational practices (Anderson & Shattuck, 2012; Creswell & Garrett, 2008). This evolving landscape had a deep impact on the re-conceptualisation of assessment and evaluation, in connection with the analysis of educational quality (Ghislandi et al., 2013). One of the outcomes was the experts’ plea to align teaching with assessment practice, in the so-called assessment crisis (Stiggins, 2002). This crisis seems to have two connected roots. Firstly, it related to the pedagogical critique of assessment instruments and of how well they actually represented what was valued as the final educational outcome. It can be noted, in this sense, that within the more recent documents regarding the UK Quality Code for Higher Education, produced by the Quality Assurance Agency for Higher Education (QAA), the word “measurement”, which was used almost as a synonym of assessment in the first version of 2000, disappears (Boud,
2014). Moreover, the word was no longer used to define assessment as early as the second version of 2006. It is also indicative that in the latest document on quality assurance in learning assessment, it is mentioned only once (QAA, 2018). How did this word disappear from expert discourses while remaining embedded in common sense? The story of the American educational researcher Rick Stiggins is telling. He worked for about 40 years not only constructing standardised tests but, above all, founding and directing an important research centre dedicated to the elaboration of such tests. His attention was clearly focused on the technicalities ensuring ever-greater validity and reliability for the tests used to “measure” student learning. At a certain point, he realised he had been on the wrong track for a long time, so that “after a decade or so of traditional measurement work, his attention shifted from large-scale testing to studying the instructional dividends of classroom assessment” (Popham, 2017, p. xi). Moreover, Stiggins emphasised that the measurement community had missed an essential point: For decades, our priorities have manifested the belief that our job is to discover ever more sophisticated and efficient ways of generating valid and reliable test scores. Again, to be sure, accurate scores are essential. But a previous question remains unasked: How can we maximise the positive impact of our scores on learners? Put another way, how can we be sure that our assessment instruments, procedures, and scores serve to help learners want to learn and feel able to learn? (Stiggins, 2002, p. 759)
Stiggins’ story unveils a particular problem: the focus of educators and the research community was placed entirely on the technicalities of testing and scores, maintaining a quantitative epistemology of productivity. Nobody questioned the need for quantified results or moved towards a new, complex understanding of learning. Secondly, the problem of quantification was deeply entangled with the productivity requirements of the system, moving far beyond assessment as a teaching practice towards the arena of policy making and the whole capitalist system. Indeed, so-called credentialism and the entrenched effect of “degree inflation” are the other side of the coin. First observed for the USA by Randall Collins in 1979, credentialism stands for an extreme reliance on formal qualifications or certifications to determine whether someone is permitted to undertake a task and speak as an expert (Collins, 1979). Degree inflation, in turn, was the subtle effect of credentialism through which degrees started to lose their value as they were produced on a massive scale (Open Education Sociology Dictionary, 2013). Indeed, the massification of higher education and the rigid attachment to academic skills far from the technical and soft skills required by the labour market were extremely deceptive both for unemployed or underemployed graduates and for industry, frustrating talent recruitment operations (Carey, 2015). Later on, the problem of credentialism prompted the debate on the modernisation of the curriculum and of teaching practice, though it also uncovered the complexity of factors which could not be controlled entirely by educational institutions and which effectively influenced students’ achievements. Amongst these, class and the social codes shared by élites, as well as discriminatory practices of gender and race in accessing higher education, informed some of the critiques of the inherent value
of university degrees (Andersen & Taylor, 2012, p. 348). Therefore, despite the circulation within university imaginaries of the idea of metrics and quantification as expressions of objectivity and equity connected to the final credentials achieved, the idea of “measurement” was the object of strong critique alongside the critiques of credentialism and degree inflation. Additionally, both the massification of higher education as a requirement of the job market and the difficulties faced by universities in responding to the fast challenges of a changing knowledge economy pushed towards a deep institutional crisis (Carey, 2015). Universities were put under pressure to develop more attractive programmes, introducing digital technologies, renewing contents, personalising learning, networking with industry to ensure job placement, etc. (Crosier et al., 2019). Nevertheless, such activity also entailed increasing costs and the need to become “performative”, in the sense of competing and showing teaching and research achievements. Tellingly, metrics ensured easier representations of activity and related outcomes. The more the universities produced grades (fast, good grades) amongst other indicators of productivity, the better their chances of becoming visible and appealing to potential students. But the more superficial the certified skills were, the less attractive or even effective these skills were for the labour market and society overall. Our extremely brief excursus on the system’s pressure to produce metrics of success sheds light on the possible causes of the reluctance to adopt more complex assessment and evaluation approaches, as Stiggins pleaded for.
But the problem is far from solved: instead, it appears that faster and better technologies for producing assessment tools are having a seductive effect on the return of metrics to higher education, as was clearly pointed out with regard to PISA earlier in this chapter. Despite the experts’ recommendations and the careful avoidance of “measurement” as the kernel of assessment, higher education institutions are nowadays, more than ever, pushed towards competitive national and international benchmarks, where assessment is somehow a key piece: moreover, a piece that must work efficiently in a datafied university. This scenario invites a new player: digital technologies and their key role in faster and more efficient data capture in relation to assessment and educational quality.
Collecting Data for “Success” in Higher Education: From Techniques, Technologies and Professionalism to Critical Audiences
Data collection in higher education has a long-standing tradition, but it has mostly been connected to a specific set of university activities, with attention focused on research, patents and contracts, and hence on teaching as a (paradoxically) minor activity (Fanghanel et al., 2016). Within this activity, student graduation and retention were quality indicators deemed central (Barefoot, 2004). More recently, the
discussion has highlighted “teaching quality” as a vector of student retention and success. As a matter of fact, the European Commission pushed the debate on higher education teaching quality (McAleese et al., 2013), building on an increasing debate at the OECD and UNESCO (Ghislandi & Raffaghelli, 2014b). In spite of this relevance, the measurement of higher education quality has represented a conundrum, given the various interests and expectations placed on the role of higher education, spanning research, knowledge transfer, student satisfaction and, not least, the alignment of student skills and knowledge with labour market requirements (Hazelkorn, 2016). Research and innovation indicators are both part of the debate, but teaching performance leading to learning effectiveness proved more elusive and entered the quality debate later. Indeed, the only possible approach to evaluating “teaching quality” at a large scale (the whole institution, the national and regional university systems) can be based only on “proxies”, ranging from self-reported measures (student responses to end-of-course surveys) to student retention and graduation. Moreover, data collection appears to be difficult and costly (Harvey & Williams, 2010). Despite the difficulties in establishing approaches to measure teaching and learning, technology entered the equation as a game changer. The literature reported data-driven practices adopted massively to understand student engagement within online learning environments and with digital resources and tools (Beer et al., 2010). Nevertheless, the real leap in the hopes placed on data-driven practices connected to learning and quality measurement came with the learning analytics movement (Ferguson, 2012; Siemens, 2013; Siemens et al., 2014).
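The limits of such data-driven engagement measurement can be made concrete with a small sketch. The following Python fragment is purely hypothetical (the event types and weights are invented for illustration and are not drawn from any learning analytics system discussed here): it aggregates clickstream events into a single per-student “engagement” score, showing how such a proxy collapses heterogeneous behaviour into one number whose meaning depends entirely on the analyst’s chosen weights.

```python
from collections import defaultdict

# Hypothetical LMS clickstream events: (student_id, event_type).
events = [
    ("s1", "page_view"), ("s1", "page_view"), ("s1", "forum_post"),
    ("s2", "page_view"),
    ("s3", "quiz_attempt"), ("s3", "forum_post"),
]

# Arbitrary weights: the analyst's values are baked into the metric.
WEIGHTS = {"page_view": 1, "forum_post": 3, "quiz_attempt": 2}

def engagement_scores(events):
    """Collapse heterogeneous activity into one number per student."""
    scores = defaultdict(int)
    for student, event_type in events:
        scores[student] += WEIGHTS.get(event_type, 0)
    return dict(scores)

# A student reading a printed text or discussing in person scores 0:
# the metric only "sees" platform-mediated behaviour.
print(engagement_scores(events))
```

The point of the sketch is not the code but what it makes visible: the score reflects the weighting scheme as much as any learning, which is precisely why behavioural proxies remain contested.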
The main supposition was that the intelligent integration of data, leveraging the automatic measurement of learning assessment and engagement, would overcome the problem of collecting self-reported measures (Scheffel et al., 2015). Most initial works in this strand emphasised the potential of immediate data tracking and collection; the capacity to process data to produce dynamic, automatic and personalised representations used to support feedback; and a more reliable approach to data collection based on behaviours rather than opinions (Viberg et al., 2018; Yang et al., 2019). Nonetheless, the problems of data collection and their relationship with the “visibility” of learning were far from solved. The digitisation of activities in higher education brought only apparent solutions, as the data collected have been shown to be sparse, incomplete and gathered far from real classroom contexts (Vuorikari et al., 2016). It has also been extremely difficult to generate meaningful assemblages which could actually inform the many complex indicators required to understand teaching quality and its connections with student learning (Knight et al., 2014). On top of this facilitated data collection, in any case, university rankings have proliferated in recent years as a form of “objective” support for understanding which universities deliver the best-quality education (Sangrà et al., 2019). Rankings, by definition, put the emphasis on the competitive dimension. Nonetheless, they have been tailored in such a way that the dimensions analysed favour some institutions to the detriment of others, despite different socio-cultural contexts and societal needs with regard to the role of the university (Hazelkorn, 2016). Comparative studies have also shown that research
indicators prevail over those relating to teaching, and that teaching is under-informed even with basic data, in what some have called a one-dimensional approach to institutional evaluation (Moed, 2017). As a matter of fact, Hou and Jacob (2017), analysing three of the most influential university world rankings, showed that the latter adopted data that were loosely connected to the phenomenon analysed overall, and to teaching and learning in particular. In the same vein, Spooren and Christiaens demonstrated that university ranking indicators were mostly composed of data such as the number of papers published in prestigious journals such as Nature and Science, the values of the Science and Social Sciences Citation Indexes and the number of members who won Nobel Prizes and Fields Medals, to the detriment of, to mention just one, students’ evaluations of teaching (Spooren & Christiaens, 2017). Only recently have multidimensional rankings such as U-Multirank (https://www.umultirank.org/) attempted to address the problem, focusing on understanding the diversified expectations several ranking audiences might have with regard to the university (Goglio, 2016). Despite these efforts, the substantial problem is the poor conceptualisation of rankings, based on a naïve or “commonsensical” idea of quality, mainly referring to the famous US Ivy League, with poor statistical support (Soh, 2017). Nonetheless, university web pages are frequently populated by enthusiastic announcements of recent positions in international rankings, as a source of motivation for professors as well as existing and prospective students. The problem remains, and it lies in the material structures and assemblages needed to generate oversimplified expressions of “success” aimed at competing in the marketisation of higher education (Hazelkorn, 2016). Aligning with our analysis of the role of assessment in such a context of emphasis on measurement and competition, Sangrà et al.
(2019) pointed out that the data for evaluating quality in higher education can be related to different levels of analysis. These are micro (indicators that might inform the teaching and learning process), meso (institutional scorecards that display specific indicators of institutional performance) and macro (rankings that are used to compare institutions). The authors assert that the three levels have to be analysed separately, because the problems of data collection and the types of related action are entirely different. Nonetheless, each of these measures has to be understood as intertwined, from the moment of collection to representation, and as relating to ongoing societal and institutional values. When referring to data-driven practices, such levels could be integrated into what has been denominated a “data culture”. Moreover, actively participating in a “fair data culture” requires new educator literacies, as the sources of information change and evolve to include data-driven practices (Raffaghelli et al., 2020). Yet this strand of research brings evidence that the most visible type of data relates particularly to macro-level indicators as the only high-stakes measures (Gibson & Lang, 2019). Indeed, there is a juncture concerning data-driven practices at the three levels. As explored by Selwyn and Gašević (2020), stemming from Selwyn’s critique in his research on data practices within schools (Selwyn, 2020, 2021), data is very often adopted just at a performative level. The “perfect scenario” for the
development of data-based teaching quality really pushes for data visualisations, narratives and metaphors which barely relate to the actual needs of pedagogical reflection and a deep debate on the quality of education. The massive use of online systems by universities and even schools has brought to the fore the shortcomings of online education provision. While Pozzi et al. (2019) claimed that data analysis could become an interesting ally for collecting evidence on the quality of teaching performance in online environments, the actual situation could be that data “for good” is still far from being achieved. This relates not only to the lack of technological and data infrastructures, as Pozzi et al. point out, but also to the fact that there are no “critical audiences” (Kemper & Kolkman, 2019) who can revise, control and halt potentially harmful data practices. The concept is in line with the idea of building a “fair data culture”. Indeed, a critical audience, according to Kemper and Kolkman (2019), is an informed collective of users who engage in several phases of the data materialities and assemblages. Moreover, Kemper and Kolkman underline the relevance of such engagement in recognising the contextual nature of data and algorithms as representations of the values and biases of system leaders and data scientists. The prevalence of approaches to data at the macro level means that most educational stakeholders engage only superficially with data practices and cultures. In the current scenario, the old practice of the superficial assessment of learning for grading and credentialing is part of a performative act leading to data visualisation and hence to comparing universities and courses using rankings. As a result, the critical engagement required to build fair data cultures, embracing a contextualised sense of deep learning in connection with the development of transformative institutional processes, appears impossible.
Supporting this claim, a recent review of the literature by Raffaghelli and Stewart (2020) unveiled that educators’ skills requirements are focused much more on technical than on critical data literacy, leaving educators unable to become “critical audiences”.
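As a purely illustrative aside on the macro-level indicators discussed in this section, the following toy composite (all institution names, indicators and weights are invented and are not taken from any actual ranking) shows how a weighting scheme dominated by research indicators can place a teaching-strong institution below a research-strong one, regardless of teaching quality.

```python
# Hypothetical normalised indicators (0-1) for two invented institutions.
institutions = {
    "Research-heavy U": {"citations": 0.9, "nobel_laureates": 0.8, "teaching_eval": 0.3},
    "Teaching-strong U": {"citations": 0.4, "nobel_laureates": 0.1, "teaching_eval": 0.9},
}

# Invented weights echoing the critique: research indicators dominate.
WEIGHTS = {"citations": 0.5, "nobel_laureates": 0.4, "teaching_eval": 0.1}

def composite(scores):
    """Weighted sum of indicators, rounded for display."""
    return round(sum(WEIGHTS[k] * v for k, v in scores.items()), 2)

ranking = sorted(institutions, key=lambda name: composite(institutions[name]), reverse=True)
for name in ranking:
    print(name, composite(institutions[name]))
```

Changing the weights reorders the ranking, which is precisely the critics’ point: the apparently “objective” league table encodes the ranker’s values.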
(Critical) Data Literacy and Assessment Literacy at the Juncture in the “Datafied” University
More Technology Will Not Change Assessment Nor the Quality of Education
Could technological development provide relevant tools for improving assessment, and hence data collection, in order to make sense of educational practices? Research in the field is enthusiastic about the evidence supporting a positive answer. In Re-imagining University Assessment in a Digital World (Bearman et al., 2020b), a recent compendium of advances in technology-enhanced assessment methods and practices, a number of possible scenarios are identified. However, the authors are also well aware that data is not the only source to inform teaching
practice. Across the several contributions, a critique emerges of instrumentalist approaches that use technologies to make assessment more efficient at standardising, grading and recording data, set against the idea that technologies should be adopted to develop more imaginative approaches to eliciting and sharing students’ achievements. The questions at the cutting edge here seem to be as follows: How much, and when, do digitalised data give us information on the essential elements of teaching and learning, and not just on some isolated or less significant components? Are educators able to promote combined assessment and data literacies in their students, in order to let them make productive use of assessment results beyond those represented as numerical or digital data? Are universities able to promote combined assessment and data literacy policies which could generate sustainable assessment structures? As a matter of fact, one of the opening chapters (Bearman et al., 2020a) highlights that technologies could support a more complex perspective on assessment through:
1. Re-imagining governance structures. For example, the authors point out that the role of grades could be overcome through e-portfolios as a holistic and more integrated form of data collection and representation (pp. 12–13).
2. Re-imagining assessment. Technologies might support student engagement with new forms of expression, such as expressing themselves in the several digital spaces in which they can participate. The results of their activities could be captured through digital classroom tools and spaces and used for assessment. Such authentic approaches could trigger deeper self-awareness, as well as the ability to explore their digital professional identity and networks in connection with their professional futures.
3. Re-imagining knowledge on assessment.
Co-constructing knowledge throughout the learning activities, in a process of continuing evaluation based on teacher and automated personalised feedback, places the teacher as an expert in the matter but not the absolute holder of knowledge. Through the use of digital spaces to deliver more complex tasks and outputs from their learning activity, students can take the lead in bringing new information, their stories and ideas, to the classroom workspace. In addition, self-evaluation and peer evaluation can be facilitated by several digital tools, for example by co-constructing an assessment rubric later implemented for feedback in the online classroom. All these activities not only entail building professional skills, but they are also connected with judgemental capacities over the student's own and others' work (see p. 16). On the whole, the authors see that technology-enhanced assessment could improve through completely or partially automating feedback practices so that they become more immediate, timely and personalised; analysing grading policies through longitudinal representations, which might support alignment amongst teaching, assessment criteria and grading across courses and along students' careers, towards fairer practices of qualification; supporting better observation of learning processes; and exploring emotional and relational phenomena in students' communication and in the artefacts they produce.
4 Beyond Just Metrics: For a Renewed Approach to Assessment in Higher Education
107
The authors conclude that "It is increasingly imperative that we reimagine how we design and administer assessment. Without doing this, our assessment will become (…) irrelevant in a digital world" (p. 17). However, in another relevant chapter, Bearman and colleagues (Bearman et al., 2020c) explain that "the intersection of assessment (powerful) and technology (omnipresent) (…) deserves close scrutiny" (p. 24). The excessive hope placed in technology alone as a game changer in education is not new, having its roots in "techno-solutionism": the idea that more digital practices, and ultimately data-driven practices, could support transparency, innovation, professionalism, etc. (Ranieri, 2011; Selwyn, 2013). Digital systems overall promised personalisation in parallel with improved productivity and efficiency, particularly through the automation and expansion of teacher functions. This applied especially to the most burdensome teacher task and the requirement most expected by the system: feedback and assessment. Automated and more precise actions, as well as student engagement, would be enhanced by increasing the collection of data by digital systems. However, as we expressed earlier through the voice of Bearman, Boud, and Tai (2020a), this could never encompass more than an oversimplified understanding of what digitally mediated assessment could be. The pandemic taught us how hard it is to overcome traditional assessment practices in spite of a forced use of technologies (Hodges & Barbour, 2021). Aligning with the socio-technical debate on the overall usage of technologies in education, a vision of digitally mediated assessment starts from the conception of assessment itself. In fact, the paraphernalia of techniques developed to analyse student interaction activities, digital feedback, automated scoring for assignments and visual dashboards might only scratch the surface.
Despite very interesting developments connected with better teacher insights for the assessment, monitoring and evaluation of collaborative processes and self-regulated learning (Cerro Martínez et al., 2020), and the potential envisaged in such developments (Essa, 2019; Rose, 2019), the problem remains open. The distance between educators as critical audiences, empowered by data, and the actual data infrastructures designed and developed by data scientists is still too great. Moreover, we claim that the reflection about metrics and the use of extractive data techniques is still tied to basic quantification and quantitative epistemologies. As Ghislandi and Raffaghelli (2014a) pointed out, the evaluation of quality depends on a series of particular adjustments of interests, the negotiation of meaning and institutional development objectives which, rather than evaluating an "objective" phenomenology, are concerned with building a culture of quality. Mediation is therefore of particular importance, i.e. the construction of tools and spaces for the negotiation of meaning connected with professional practices and institutional processes. In a recent research project, Ghislandi et al. (2020) underlined the risks of an excessive reliance on the quantitative assessment of teaching quality by students. Although quantitative approaches are more concise and faster than qualitative investigations, they do a poor job of capturing the impact of innovative pedagogical practices and the overall perception of
educational quality as an experience. The process of quantification, applied to education as to any social phenomenon, is always a reductionism, a technical and mathematical synthesis of complexity, as we observed in the case of university rankings (Pozzi et al., 2019). Therefore, facilitating the conditions for digital data extraction and the production of related representations should go beyond mere technical possibility. Any technical operation should be accompanied by a debate on the validity of, and the trust that society can place in, such artefacts, for we might otherwise support a society where surveillance is normalised and monetised (Zuboff, 2019). We come back here to the literacies needed to act as a "critical audience" capable of understanding and questioning data infrastructures and practices. As a practical implication for faculty development and institutional strategies, the focus should be placed on understanding the implications of using assessment to measure, and hence move towards a critique of a technological mediation which only serves "old wine in new vessels". Hence, the starting point must be to uncover the myths which support scores as the very basic input for educational data infrastructures, before making technology enter the equation. An expected result of these strategies would be a deeper and more complex culture of assessment deployed by the key players (educators and learners), with significant and authentic assessment practices independent of superficial data visualisation, aggregation or algorithmic developments applied to teaching and learning. Moreover, institutional leaders and policy makers would not oversimplify the contextual nature of quality by using rankings and data representations as mere performative and marketing operations.
They would engage with data coming out of assessment to generate spaces for conversation, to discuss educational quality, to contextualise achievements and to find plausible (glocal) answers to the question posed by Gert Biesta: "What constitutes good education?" (Biesta, 2020, p. 1023). These strategies stem from a critical perspective on data practices and embrace a socio-technical approach to the technological mediation of assessment, overcoming the mere translation of old values into digital environments. Instead, they explore technological support to enhance its affordances as a means of transformation. Specifically, these strategies work against technological solutions which may only automate and standardise grading and credentialing, as the new technologies could also entail new risks, worth scrutinising when considering the literacies required within the datafied university.
New Systems, New Risks: The Bayesian Trap

The new technological tools bring new risks. We are referring in this sense to what we would like to call "the Bayesian trap". Bayes' probabilistic approach (or posterior probability, namely, prediction based on the past results of a system) is the kernel of machine learning techniques adopted to make predictions about users' future behaviour (Malik, 2020). Though it can be useful in a number of fields and usages
and applied to high-stakes domains of human activity, Bayesian statistics can "trap" the user of a system in the past. Indeed, the prediction is made on the basis of past human responses, such as the number of successes and the scores obtained in a test. Applied to the field of learning analytics, the automation of student recommendations is based on past grades or scores from continuous assessments, combined with other parameters such as socio-demographic information, success in prior higher education courses and even grades in high school and, last but not least, engagement measured through clicks and downloads. But what if any of these parameters, based on the past, were to prevent an "outlier" student from reaching their own, unpredictable achievements? What if such outliers are students reaching the higher education system for the first time after overcoming injustices and discrimination connected to race, diversity or gender? What if a whole generation of students, assessed and used as the reference population for a specific task, shared a particular condition which is not that of the current cohort? The posterior probability would create a perennial loop of repetition linked to past actions. What was meant as a mechanism to ease teacher workload would limit students' freedom to be a whole new and unprecedented phenomenon in a particular moment. This is exactly the type of critical issue currently debated around the embrace of artificial intelligence overall, as introduced, for example, in the European White Paper driving services and the whole industry (European Commission, 2020). In this regard, the ethical concerns of using learning analytics have been carefully explored in the recent literature, and they encompass all the possible perils of any algorithmic decision-making system, as clearly expressed by Tzimas and Demetriadis (2021).
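The loop described above can be made concrete with a deliberately minimal sketch, using entirely hypothetical data, group labels and function names (none of which come from the chapter or from any real learning analytics system): a conjugate Beta-Binomial update, whose posterior mean is (α + passes) / (α + β + passes + fails), scores every new student by the pass rate of their group in past cohorts, so an "outlier" student is flagged before producing any work of their own.

```python
# Minimal sketch of the "Bayesian trap" with hypothetical data:
# a Beta-Binomial posterior built only from past cohorts drives
# the recommendation made for every new student in a group.

def posterior_pass_probability(passes, fails, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta(alpha, beta) prior updated with
    past cohort outcomes (conjugate Beta-Binomial update)."""
    return (alpha + passes) / (alpha + beta + passes + fails)

# Past cohorts (hypothetical): group_a passed often, group_b rarely.
history = {"group_a": (90, 10), "group_b": (15, 85)}

def flagged_at_risk(group, threshold=0.5):
    """The system recommends remediation purely from group history."""
    return posterior_pass_probability(*history[group]) < threshold

# A first-time student in group_b is flagged "at risk" before
# submitting any work of their own: the perennial loop of the past.
print(flagged_at_risk("group_a"))  # False
print(flagged_at_risk("group_b"))  # True
```

The point is not that the arithmetic is wrong; it is that the prediction is conditioned entirely on earlier students' records, which is exactly the trap the text describes.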
In their comprehensive review of the literature, the authors list areas of concern in learning analytics such as student privacy, transparency in data collection operations, the problem of students being stigmatised for their past mistakes in approaching learning and studying, algorithmic fairness, and the impossibility of controlling and intervening when a critical malfunction appears in an automated system. More recently, the massive adoption of online technology to support education during the pandemic has led to a further concern, relating to the "platformisation" of education: the entrance of private platforms as "rescuers" in a context of pandemic chaos, reorganising educational services without reflection on the lack of adequate regulations preventing the monetisation of student data. In the case of higher education, this issue has been clearly depicted in the report by Williamson and Hogan (2021).
Critical Data Literacy as Part of Assessment Literacy: The Need for Renewed Digital Scholarship

The two prior sections emphasise the need to cultivate stakeholder awareness of metrics, assessment, digitally mediated systems and the emerging data-driven practices.
More than ever, we see here the connection to a complex and critical ability to read data in their old and new shapes. In the prior sections, we carefully listed the fallacies and tricks of data usage, from assessment to the performance of the educational system, and depicted the perils of data-driven practices in technology-enhanced environments: old numerical and quantitative forms of data representation, in grades and other metrics, here encounter the data-driven practices adopted to produce automatisms and system recommendations. Reading data in such diversified shapes in higher education cannot refer only to teacher skills; it also deeply concerns student data literacy, at the crossover with their assessment literacy (Boud & Soler, 2016), in a datafied society. Building on Crawford's lucid discussion of the perils of AI and data (Crawford, 2021), we could affirm that teachers' and students' new competences should stem from an understanding of data as a semiotic representation connected to conceptions and ideas about "what is good education" within higher education. These conceptions can be connected with professors, but also with institutional management and even the data scientists working on the implementation of learning analytics and data representations overall. Teachers and students should also be aware of, and intervene from the design stage in, the pedagogical ideals represented through the data integrated into assessment practices. Moreover, they should be able to read, interpret and use the information provided by data visualisation, automatic feedback and system recommendations. In time, these capacities to deal with data within the educational, and specifically the assessment, landscape might become interchangeable with media literacy applied to controlling personal data usage, as well as with critical approaches to data across social media platforms (Pangrazio & Selwyn, 2019).
Without a doubt, this emerging panorama is calling for faculty development to take action in datafied higher education institutions, starting from the still-uncommon concept of educator critical data literacy (Raffaghelli & Stewart, 2020). Overall, this would entail an effort to renew the idea of digital scholarship to include critical approaches to data in higher education (Raffaghelli, 2021). Nevertheless, we are witnessing only initial, very shy efforts in such directions (Stewart & Lyons, 2021). Data literacy, indeed, might be seen as an abstract, interesting but rather "futuristic" set of skills which is difficult to relate to the actual and more pressing problems experienced by educators in their online classrooms. In this regard, if educators experience data in their professional activity as a sort of "small" data, handled at the institutional level and through digital infrastructures that are often far from the integrated, complex settings required to run algorithmic operations, they will consider data a problem "in the ether", still the domain of an abstract critique of educational "datafication" and "dataveillance" (Selwyn, 2020). Issues such as the increased platformisation and datafication of contemporary schools over the past decade, connected to "tendencies towards pervasive data extraction and surveillance" (Perrotta et al., 2020), can still be deemed a problem of data infrastructure development, an item on the political agenda or a space for activism, rather than a specific professional concern for teachers. This is well depicted by the very limited use of open-source software as key to maintaining data sovereignty
(Hummel et al., 2021). Though educators might work to raise awareness, showing young people and families the problem of data tracing and reuse for commercial purposes, the social media platforms, as well as the educational platforms used (and maybe imposed) at institutional level, leave little space for teacher agency. As Pangrazio and Selwyn (2020) reported, it is also challenging to step away from data surveillance when the social life of young people happens within social media, as it is difficult for an educator to make choices that align with ethical concerns as a "solo player". In a similar vein, Buckingham (2018) suggests that, when dealing with media literacy and the phenomenon of misinformation and fake news, regulations must accompany educational efforts. We must recall, at this point, that data discourses have entered higher education through several channels, potentially producing educator disorientation and misunderstandings. The very enthusiastic approaches relating data management to educational quality (Mandinach, 2012; Vanhoof & Schildkamp, 2014) have become a matter of critique pointing to the perils of an external, private governance of education (Williamson, 2016). Moreover, after a decade of emphasis on the active engagement of students with a range of digital media, we must come to terms with their vulnerabilities in relation to personal data collection and profiling (Pangrazio & Selwyn, 2020). In the specific case of education, while learning analytics is a field with nearly 10 years of development, the critical discussion on data use, the needed transparency in algorithms, participatory designs of technology, etc. is part of a recent social and political debate (Eubanks, 2018; Kennedy et al., 2015; Noble, 2018; Zuboff, 2019).
Current research in the field of critical studies is uncovering forms of discrimination, abuse and monetisation of student data, as well as making proposals to address the problem from the point of view of critical pedagogy (Markham, 2018; Tygel & Kirsch, 2016). Also considered is the need to rethink the concept of data literacy and to prepare educators to deal with a broader picture of the knowledge and abilities needed to work in a datafied university and live in a datafied society (Raffaghelli & Stewart, 2020). Against this complex panorama, data literacy appears to be a relevant skill. Approaches such as that of Catherine D'Ignazio and Rahul Bhargava (2015) show the explorations made in education (in this case, adult education for civic participation) to generate agentic practices around datafication. Pangrazio and Selwyn (2020) also investigated the ways higher education and school students engage with personal data collected via social media and personal apps. Their design-based research encompassed a number of interventions aimed at building understandings of the lack of transparency and the monetisation of data, but it also uncovered passive attitudes by the students towards the trade-off between data extraction and the usage of the digital environments they live by. In the case of HE, important reflections about the way students and professors should engage with academic and learning analytics systems have yielded relevant insights on the need for privacy by design and usability, and for engagement and transparency in student data usage (Jivet et al., 2020; Tsai & Gasevic, 2017). However, starting the pathway of data literacy from the same operations of data usage within assessment might become a clear initial approach to uncovering the data
infrastructures and conceptual assemblages. Assessment is a daily practice, to which students are socialised from the early stages of the schooling system. Data tracking, representations, privacy and ethics could be discussed in light of assessment practice, as a formative approach embedded into a strategy of assessment for learning. Data collection could also be presented in light of the positive connotations supported by the open data movement embedded in higher education (Coughlan, 2019). In fact, the data gathered with and by the students within a course, beyond serving their direct assessment, could support the generation of worked examples for other teachers and students to use, as well as nurture educational research (Raffaghelli, 2018b). On the whole, in the context of the datafied university, the "priorities of the professoriate", to paraphrase Ernest Boyer as a cornerstone of a renewed scholarship, require careful reflection, more than ever, on the digital, data-driven systems that professors and their students live by. Professors are being called to redesign pedagogical and assessment practice in a meaningful but also ethical way, drawing on what is known about the impact of assessment for learning, combined with an understanding of the technologies adopted to support assessment tasks, particularly those extracting and reusing student data in several ways.
Conclusions

By this point, the reader will have grasped the converging phenomena of metrics, quantification and the technological promise and its traps, connected to the need for new literacies to live in datafied systems. Unquestionably, an increasing number of research projects and studies in the social sciences address a critical perspective on the problem of data practices in general, and in HE as one of the key institutions of our contemporary society. The discussion over the literacies required is also becoming an evident matter of concern. But the connection between data literacy and assessment literacy is not so evident, in spite of the profuse literature on metrics, quantification and the problem of traditional assessment practices, which technology has only refurbished. As we argued in this chapter, the lack of awareness of assessment as a complex practice entails a limited understanding of what assessment means for student learning. In addition, the weak understanding of the relevance of assessment is also deeply connected to the pressure for performance in the higher education system, leading to the oversimplification of performance and quality. Last but not least, technology has arrived with a false promise: that of improving assessment through more and better data collection. The literature considered in this chapter, from before and after the pandemic, clearly debunks such a promise. In light of the observations made, we have proposed two relevant areas of intervention requiring a renewed (digital) scholarship. The first area relates to assessment for learning, as part of a pedagogical practice which aims at cultivating student agency and judgemental capacity to
become critical citizens and committed lifelong learners. Rethinking assessment is not a new debate, but it is still closely connected to the problem of conceiving learning as a technical outcome beyond, or above, the fundamental role of education in society. This approach, as largely demonstrated in the literature, is the driving force for rethinking the role of metrics and standards in higher education, bridging assessment for learning with quality cultures connected to deeper educational values. The words of Gert Biesta on the role of OECD measurements are resounding and, we believe, applicable to the whole measurement movement, from the research community engaged in the positivistic and technocratic view of learning analytics to the evidence-based education narrative and, last but not least, the industry of metrics and rankings:

By suggesting, as is implied in (OECD) measurements, that education is 'all about learning,' without ever asking the question of what such learning actually is, what educational learning is supposed to be about and supposed to be for, and who should have a say in answering these questions, the global educational measurement industry has actually promoted a very specific definition of education's good, without ever articulating this definition explicitly, let alone providing a justification for it. Moreover, it has managed to do so in a very seductive way (for politicians, policy makers and the wider public) – by tapping into populist ideas about the alleged 'basics' of education and the suggestion that large-scale measurement can relieve us from difficult normative and political questions about the content, form and direction of what happens in schools, colleges and universities. (Biesta, 2020, p. 1023)
A second area of change is based on the assumption that digitally mediated assessment can only become a game changer if (and only if) the premises of a renewed conception of assessment practice are set. This second area therefore compels us to seek an understanding of the technological assets and infrastructures before engaging with the glossy tools and tales of automation, personalisation and efficiency. In this regard, this area of work intertwines the concept and practice of critical data literacy with that of assessment literacy:

• By interpreting the role of metrics and the assemblages behind educational measurements, and their downsides

• By reading data representations as semiotic objects, part of meaning-making processes within higher education as a power system

• By understanding the critical side of algorithmic decision-making processes providing feedback and automated assessment

• By empowering the key stakeholders in the higher education system to engage in the design, control and halting of data-driven practices as soon as these cease to respond to intrinsic educational values, including when they harm educators and learners

The awareness of actors (learners, professors in both their teaching and research endeavours, staff, HEI management, even the families supporting students) of the contextual and material characteristics of data imaginaries (including the value of metrics and the role of the technological mediation of quantified systems) can
potentially set the basis for uncovering power issues, misrepresentation and inequities, paving the way to fair assessment and data practices. It is clear that the role of the university is to blend advanced, interdisciplinary theoretical reflection with empirical research and practice on teaching and learning. As Humboldt envisioned early on, academics and students engage in a conversation which ends up empowering the latter to take an active part in transforming society as reflective citizens and professionals (Pritchard, 2004). On this basis, university actions to promote new quality, assessment and data cultures should be based on open spaces where the connected pedagogical and technological practices are discussed. As assessment literacy can be a driver of better judgemental approaches to situations requiring learning and innovation, a critical data literacy embedded in assessment might lead to the problematisation of quantification, metrics and data-driven techniques, central to our datafied society, within and beyond the university.
References

Andersen, M. L., & Taylor, H. F. (2012). Sociology: The essentials. Cengage Learning.
Anderson, T., & Shattuck, J. (2012). Design-based research: A decade of progress in education research? Educational Researcher, 41(1), 16–25. https://doi.org/10.3102/0013189X11428813
Ball, S., Bew, C., Bloxham, S., Brown, S., Kleiman, P., May, H., Morris, E., Orr, S., Payne, E., Price, M., Rust, C., Smith, B., & Waterfield, J. (2012). A marked improvement. Transforming assessment in higher education (No. 978-1-907207-65-5) (p. 58). Higher Education Academy. https://www.heacademy.ac.uk/system/files/A_Marked_Improvement.pdf
Barefoot, B. (2004). Higher education's revolving door: Confronting the problem of student drop out in US colleges and universities. Open Learning, 19(1), 9–18. https://doi.org/10.1080/0268051042000177818
Bearman, M., Boud, D., & Tai, J. (2020a). New directions for assessment in a digital world. In Re-imagining university assessment in a digital world. Springer. https://doi.org/10.1007/978-3-030-41956-1_2
Bearman, M., Dawson, P., Ajjawi, R., Tai, J., & Boud, D. (2020b). Re-imagining university assessment in a digital world. Springer.
Bearman, M., Dawson, P., & Tai, J. (2020c). Digitally mediated assessment in higher education: Ethical and social impacts. In M. Bearman, P. Dawson, R. Ajjawi, J. Tai, & D. Boud (Eds.), Re-imagining university assessment in a digital world (pp. 23–36). Springer. https://doi.org/10.1007/978-3-030-41956-1_3
Beer, C., Clark, K., & Jones, D. (2010). Indicators of engagement. Proceedings Ascilite Sydney, 75–85. http://ascilite.org/conferences/sydney10/procs/Beer-full.pdf
Biesta, G. (2007). Why 'what works' won't work: Evidence-based practice and the democratic deficit in educational research. Educational Theory, 57(1), 1–22. https://doi.org/10.1111/j.1741-5446.2006.00241.x
Biesta, G. (2015). Resisting the seduction of the global education measurement industry: Notes on the social psychology of PISA. Ethics and Education, 10(3), 348–360. https://doi.org/10.1080/17449642.2015.1106030
Biesta, G. (2020). What constitutes the good of education? Reflections on the possibility of educational critique. Educational Philosophy and Theory, 52(10), 1023–1027. https://doi.org/10.1080/00131857.2020.1723468
Biggs, J., & Tang, C. (2011). Teaching for quality learning at university (4th ed.). McGraw-Hill International.
Binetti, P., & Cinque, M. (2016). Valutare l'Università & Valutare in Università. Per una 'cultura della valutazione'. FrancoAngeli.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5. https://doi.org/10.1007/s11092-008-9068-5
Bloxham, S., & Boyd, P. (2007). Developing effective assessment in higher education: A practical guide. Open University Press.
Boud, D. (1988). Developing student autonomy in learning (1st ed.). Taylor and Francis.
Boud, D. (1995). Enhancing learning through self-assessment. Routledge.
Boud, D. (1999). Situating academic development in professional work: Using peer learning. International Journal for Academic Development, 4(1), 3–10. https://doi.org/10.1080/1360144990040102
Boud, D. (2014). Shifting views of assessment: From secret teachers' business to sustaining learning. In C. Kreber, C. Anderson, J. McArthur, & N. Entwistle (Eds.), Advances and innovations in university assessment and feedback (pp. 13–31). Edinburgh University Press. https://www.cambridge.org/core/books/advances-and-innovations-in-university-assessment-and-feedback/shifting-views-of-assessment-from-secret-teachers-business-to-sustaining-learning/4FEDC881D1EE52292B58E0D44A2D6CEE
Boud, D. (2020). Challenges in reforming higher education assessment: A perspective from afar. Revista Electrónica de Investigación y Evaluación Educativa, 26(1). https://doi.org/10.7203/relieve.26.1.17088
Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment and Evaluation in Higher Education, 41(3), 400–413. https://doi.org/10.1080/02602938.2015.1018133
Boud, D., Lawson, R., & Thompson, D. G. (2013). Does student engagement in self-assessment calibrate their judgement over time? Assessment and Evaluation in Higher Education, 38(8), 941–956. https://doi.org/10.1080/02602938.2013.769198
Boyer, E. L., Moser, D., Ream, T. C., & Braxton, J. M. (2015). Scholarship reconsidered: Priorities of the professoriate. Wiley.
Broadfoot, P. M. (1996). The myth of measurement. Contemporary Issues in Teaching and Learning, 203–231.
Brown, S. (2005). Assessment for learning. Learning and Teaching in Higher Education, 1(2004–05), 81–89.
Brown, S. (2014). Learning, teaching and assessment in higher education: Global perspectives. Macmillan International Higher Education.
Buckingham, D. (2018). Taking charge: Media regulation, digital democracy and education. David Buckingham Blog. https://davidbuckingham.net/2018/10/03/taking-charge-media-regulation-digital-democracy-and-education/
Campbell, D. T. (1979). Assessing the impact of planned social change. Evaluation and Program Planning, 2(1), 67–90. https://doi.org/10.1016/0149-7189(79)90048-X
Carey, K. (2015). The end of college: Creating the future of learning and the university of everywhere. Penguin Publishing Group. https://books.google.com/books?id=FCh-BAAAQBAJ&pgis=1
Carless, D. (2019). Feedback loops and the longer-term: Towards feedback spirals. Assessment & Evaluation in Higher Education, 44(5), 705–714. https://doi.org/10.1080/02602938.2018.1531108
Cerro Martínez, J. P., Guitert Catasús, M., & Romeu Fontanillas, T. (2020). Impact of using learning analytics in asynchronous online discussions in higher education. International Journal of Educational Technology in Higher Education, 17(1), 39. https://doi.org/10.1186/s41239-020-00217-y
Cinque, M. (2016). Valutare per valorizzare. In Valutare l'Università & Valutare in Università. Per una cultura della valutazione (pp. 71–102). FrancoAngeli.
116
J. E. Raffaghelli and V. Grion
Collins, R. (1979). The credential society: An historical sociology of education and stratification. Academic Press.
Coughlan, T. (2019). The use of open data as a material for learning. Educational Technology Research and Development, 1–28. https://doi.org/10.1007/s11423-019-09706-y
Crawford, K. (2021). Atlas of AI. Yale University Press.
Creswell, J. W., & Garrett, A. L. (2008). The "movement" of mixed methods research and the role of educators. South African Journal of Education, 28(3), 321–333.
Crosier, D., Kocanova, D., Birch, P., Davykovskaia, O., & Parveva, T. (2019). Modernisation of higher education in Europe. In Eurydice report (pp. 1–28). Eurydice (Education, Audiovisual and Culture Executive Agency). https://doi.org/10.2797/806308
D'Ignazio, C., & Bhargava, R. (2015). Approaches to building big data literacy. Bloomberg Data for Good Exchange.
De Rossi, M. (2017). Methodological demands, soft skill and ICT integration. Formazione & Insegnamento, XV(1), 193–204. https://ojs.pensamultimedia.it/index.php/siref/article/view/2174
Delors, J., Al Mufti, I., Amagi, I., Carneiro, R., Ching, F., Geremek, B., Gorham, W., Kornhauser, A., Manely, M., Padrón-Quero, M., Savané, M.-A., Singh, K., Stavenhagen, R., Won Suhr, M., & Zhou, N. (1996). Learning: The treasure within (pp. 1–46). UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000109590
Doria, B., & Grion, V. (2020). Self-assessment in the university context: A systematic review. Form@re – Open Journal per La Formazione in Rete, 20(1), 78–92. https://doi.org/10.13128/form-8247
Ebel, R. L. (1972). Essentials of educational measurement. Prentice-Hall.
Essa, A. (2019). Is data dark? Lessons from Borges's "Funes the Memorious". Journal of Learning Analytics, 6(3), 35–42. https://doi.org/10.18608/jla.2019.63.7
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor (1st ed.). St. Martin's Press.
European Commission. (2011). Europe 2020 flagship initiative Innovation Union. SEC(2010) 1161, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, 1(0), 1–48.
European Commission. (2020). White Paper on Artificial Intelligence: A European approach to excellence and trust. European Commission. https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en
European Higher Education Area EHEA. (2015). Yerevan Communiqué. Bologna Process.
Fanghanel, J., Pritchard, J., Potter, J., & Wisker, G. (2016). Defining and supporting the Scholarship of Teaching and Learning (SoTL): A sector wide study. Higher Education Academy.
Ferguson, R. (2012). Learning analytics: Drivers, developments and challenges. International Journal of Technology Enhanced Learning, 4(5–6), 304–317. https://doi.org/10.1504/IJTEL.2012.051816
Gasevic, D., Kovanovic, V., Joksimovic, S., & Siemens, G. (2014). Where is research on massive open online courses headed? A data analysis of the MOOC Research Initiative. The International Review of Research in Open and Distance Learning, 15(5).
Ghislandi, P. M. M. (2005). Didattiche per l'università (1st ed.). Trento University Press.
Ghislandi, P. M. M., & Raffaghelli, J. E. (2014a). Il maharaja, l'elefante e la qualità dell'(e)Learning. ECPS – Educational, Cultural and Psychological Studies, 10, 49–81. https://doi.org/10.7358/ecps-2014-010-ghis
Ghislandi, P. M. M., & Raffaghelli, J. E. (2014b). Quality teaching matters: Perspectives on quality teaching for the modernization of higher education. A position paper. Formazione & Insegnamento, European Journal of Research on Education and Teaching, 1(XII), 57–88. https://ojs.pensamultimedia.it/index.php/siref/article/view/372
4 Beyond Just Metrics: For a Renewed Approach to Assessment in Higher Education
Ghislandi, P. M. M., & Raffaghelli, J. E. (2015). Forward-oriented designing for learning as a means to achieve educational quality. British Journal of Educational Technology, 46(2), 280–299. https://doi.org/10.1111/bjet.12257
Ghislandi, P. M. M., Raffaghelli, J. E., & Yang, N. (2013). Mediated quality. International Journal of Digital Literacy and Digital Competence, 4(1), 56–73. https://doi.org/10.4018/jdldc.2013010106
Ghislandi, P. M. M., Raffaghelli, J. E., Sangrà, A., & Ritella, G. (2020). The Street Lamp Paradox: Analysing students' evaluation of teaching through qualitative and quantitative approaches. ECPS – Educational Cultural and Psychological Studies, 0(21), 65–85. https://doi.org/10.7358/ecps-2020-021-ghis
Gibson, A., & Lang, C. (2019). Quality indicators through learning analytics. In M. A. Peters (Ed.), Encyclopedia of teacher education (pp. 1–6). Springer. https://eprints.qut.edu.au/200971/
Goglio, V. (2016). One size fits all? A different perspective on university rankings. Journal of Higher Education Policy and Management, 38(2), 212–226. https://doi.org/10.1080/1360080X.2016.1150553
Grion, V., & Serbati, A. (2018). Assessment of learning or assessment for learning? Towards a culture of sustainable assessment in higher education. Pensa Multimedia.
Grion, V., Serbati, A., Tino, C., & Nicol, D. (2017). Ripensare la teoria della valutazione e dell'apprendimento all'università: Un modello per implementare pratiche di peer review. Italian Journal of Educational Research, 19, 209–226.
Guba, E., & Lincoln, Y. S. (1989). Fourth generation evaluation. Sage.
Harvey, L., & Williams, J. (2010). Fifteen years of quality in higher education. Quality in Higher Education, 16(1), 3–36. https://doi.org/10.1080/13538321003679457
Hazelkorn, E. (2016). Global rankings and the geopolitics of higher education: Understanding the influence and impact of rankings on higher education, policy and society. Routledge. https://doi.org/10.4324/9781315738550
Higher Education Academy. (2016). Framework for transforming assessment in higher education. Higher Education Academy. https://www.heacademy.ac.uk/sites/default/files/downloads/transforming-assessment-in-he.pdf
Hodges, C. B., & Barbour, M. K. (2021). Assessing learning during emergency remote education. Italian Journal of Educational Technology, 29(2), 85–98. https://doi.org/10.17471/2499-4324/1208
Hou, Y.-W., & Jacob, W. J. (2017). What contributes more to the ranking of higher education institutions? A comparison of three world university rankings. International Education Journal: Comparative Perspectives, 16(4), 29–46.
Hummel, P., Braun, M., Tretter, M., & Dabrock, P. (2021). Data sovereignty: A review. Big Data & Society, 8(1), 205395172098201. https://doi.org/10.1177/2053951720982012
Jivet, I., Scheffel, M., Schmitz, M., Robbers, S., Specht, M., & Drachsler, H. (2020). From students with love: An empirical study on learner goals, self-regulated learning and sense-making of learning analytics in higher education. The Internet and Higher Education, 100758. https://doi.org/10.1016/j.iheduc.2020.100758
Kemper, J., & Kolkman, D. (2019). Transparent to whom? No algorithmic accountability without a critical audience. Information, Communication & Society, 22(14), 2081–2096. https://doi.org/10.1080/1369118X.2018.1477967
Kennedy, R. F. (1968). Remarks at the University of Kansas, March 18, 1968. John F. Kennedy Presidential Library and Museum; Robert Kennedy Speeches. https://www.jfklibrary.org/learn/about-jfk/the-kennedy-family/robert-f-kennedy/robert-f-kennedy-speeches/remarks-at-the-university-of-kansas-march-18-1968
Kennedy, H., Poell, T., & Van Dijck, J. (2015). Data and agency. Big Data & Society, 2(2), 205395171562156. https://doi.org/10.1177/2053951715621569
Kincheloe, J., & Berry, K. (2004). Rigour and complexity in educational research. McGraw-Hill International.
Knight, S., Buckingham Shum, S., & Littleton, K. (2014). Epistemology, assessment, pedagogy: Where learning meets analytics in the middle space. Journal of Learning Analytics, 1(2), 23–47.
Ludvigsen, K., Krumsvik, R., & Furnes, B. (2015). Creating formative feedback spaces in large lectures. Computers and Education, 88, 48–63. https://doi.org/10.1016/j.compedu.2015.04.002
Malik, M. M. (2020). A hierarchy of limitations in machine learning. http://arxiv.org/abs/2002.05193
Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform practice. Educational Psychologist, 47(2), 71–85. https://doi.org/10.1080/00461520.2012.667064
Mandinach, E. B., & Gummer, E. S. (2016). What does it mean for teachers to be data literate: Laying out the skills, knowledge, and dispositions. Teaching and Teacher Education, 60, 366–376. https://doi.org/10.1016/j.tate.2016.07.011
Markham, A. N. (2018). Critical pedagogy as a response to datafication. Qualitative Inquiry, 107780041880947. https://doi.org/10.1177/1077800418809470
McAleese, M., Bladh, A., Berger, V., Bode, C., Muelhfeit, J., Petrin, T., Schiesaro, A., & Tsoukalis, L. (2013). Report to the European Commission on 'Improving the quality of teaching and learning in Europe's higher education institutions'.
Medland, E. (2019). 'I'm an assessment illiterate': Towards a shared discourse of assessment literacy for external examiners. Assessment and Evaluation in Higher Education, 44(4), 565–580. https://doi.org/10.1080/02602938.2018.1523363
Mitch, D. (2005). Education and economic growth in historical perspective. In EH.Net encyclopedia. Economic History Association. https://eh.net/encyclopedia/education-and-economic-growth-in-historical-perspective/
Moed, H. F. (2017). A critical comparative analysis of five world university rankings. Scientometrics, 110(2), 967–990. https://doi.org/10.1007/s11192-016-2212-y
Muller, J. (2018). The tyranny of metrics. Princeton University Press.
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
OECD. (2018). PISA 2018 results (Volume I): What students know and can do. OECD iLibrary. https://www.oecd-ilibrary.org/sites/609870a0-en/index.html?itemId=/content/component/609870a0-en
Open Education Sociology Dictionary. (2013). Credentialism. In K. Bell (Ed.), Open education sociology dictionary. University of Wollongong. https://sociologydictionary.org/credentialism/
Pangrazio, L., & Selwyn, N. (2019). 'Personal data literacies': A critical literacies approach to enhancing understandings of personal digital data. New Media and Society, 21(2), 419–437. https://doi.org/10.1177/1461444818799523
Pangrazio, L., & Selwyn, N. (2020). Towards a school-based 'critical data education'. Pedagogy, Culture and Society. https://doi.org/10.1080/14681366.2020.1747527
Pastore, S., & Pentassuglia, M. (2015). What university students think about assessment: A case study from Italy. European Journal of Higher Education, 5(4), 407–424. https://doi.org/10.1080/21568235.2015.1070277
Perrotta, C., Gulson, K. N., Williamson, B., & Witzenberger, K. (2020). Automation, APIs and the distributed labour of platform pedagogies in Google Classroom. Critical Studies in Education, 1–17. https://doi.org/10.1080/17508487.2020.1855597
Persico, D., & Pozzi, F. (2015). Informing learning design with learning analytics to improve teacher inquiry. British Journal of Educational Technology, 46(2), 230–248. https://doi.org/10.1111/bjet.12207
Popham, J. (2017). Foreword. In The perfect assessment system (pp. ix–xii). ASCD.
Pozzi, F., Manganello, F., Passarelli, M., Persico, D., Brasher, A., Holmes, W., Whitelock, D., & Sangrà, A. (2019). Ranking meets distance education: Defining relevant criteria and indicators for online universities. International Review of Research in Open and Distance Learning, 20(5), 42–63. https://doi.org/10.19173/irrodl.v20i5.4391
Pritchard, R. (2004). Humboldtian values in a changing world: Staff and students in German universities. Oxford Review of Education, 30(4), 509–528.
QAA. (2018). UK quality code for higher education. Advice and guidance: Assessment. The Quality Assurance Agency for Higher Education. https://www.qaa.ac.uk//en/quality-code/advice-and-guidance/assessment
Raffaghelli, J. E. (2018a). Educators' data literacy: Supporting critical perspectives in the context of a "datafied" education. In M. Ranieri, L. Menichetti, & M. Kashny-Borges (Eds.), Teacher education & training on ICT between Europe and Latin America (pp. 91–109). Aracné. https://doi.org/10.4399/97888255210238
Raffaghelli, J. E. (2018b). Pathways to openness in networked learning research: The case of open data—Resources and notes from the field. Universitat Oberta de Catalunya. https://zenodo.org/record/4446013#.YR4sf4j7SUk
Raffaghelli, J. E. (2020). Is data literacy a catalyst of social justice? A response from nine data literacy initiatives in higher education. Education Sciences, 10(9), 233. https://doi.org/10.3390/educsci10090233
Raffaghelli, J. E. (2021). "Datificazione" e istruzione superiore: Verso la costruzione di un quadro competenziale per una rinnovata Digital Scholarship. Excellence and Innovation in Learning and Teaching – Open Access, 0(0), 128–147. https://doi.org/10.3280/exioa0-2021oa11132
Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in 'educators' data literacy' to support future practices in faculty development: A systematic review of the literature. Teaching in Higher Education, 25(4), 435–455. https://doi.org/10.1080/13562517.2019.1696301
Raffaghelli, J. E., Manca, S., Stewart, B., Prinsloo, P., & Sangrà, A. (2020). Supporting the development of critical data literacies in higher education: Building blocks for fair data cultures in society. International Journal of Educational Technology in Higher Education, 17(1), 58. https://doi.org/10.1186/s41239-020-00235-w
Ranieri, M. (2011). Le insidie dell'ovvio: Tecnologie educative e critica della retorica tecnocentrica. ETS.
Rose, C. P. (2019). Monolith, multiplicity, or multivocality: What do we stand for and where do we go from here? Journal of Learning Analytics, 6(3), 31–34. https://doi.org/10.18608/jla.2019.63.6
Sangrà, A., Guitert, M., Cabrera-Lanzo, N., Taulats, M., Toda, L., & Carrillo, A. (2019). Collecting data for feeding the online dimension of university rankings: A feasibility test. Italian Journal of Educational Technology, 27(3), 241–256. https://doi.org/10.17471/2499-4324/1114
Scheffel, M., Drachsler, H., & Specht, M. (2015). Developing an evaluation framework of quality indicators for learning analytics. In ACM international conference proceeding series, 16–20 March 2015. https://doi.org/10.1145/2723576.2723629
Selwyn, N. (2013). Education in a digital world: Global perspectives on technology and education. Routledge.
Selwyn, N. (2020). 'Just playing around with Excel and pivot tables' – The realities of data-driven schooling. Research Papers in Education. https://doi.org/10.1080/02671522.2020.1812107
Selwyn, N. (2021). "There is a danger we get too robotic": An investigation of institutional data logics within secondary schools. Educational Review, 1–17. https://doi.org/10.1080/00131911.2021.1931039
Selwyn, N., & Gašević, D. (2020). The datafication of higher education: Discussing the promises and problems. Teaching in Higher Education, 25(4), 527–540. https://doi.org/10.1080/13562517.2019.1689388
Serbati, A., & Grion, V. (2019). IMPROVe: Six research-based principles to realise peer assessment in educational contexts. Form@re – Open Journal per La Formazione in Rete, 19(3), 89–105. https://doi.org/10.13128/form-7707
Serbati, A., Grion, V., & Fanti, M. (2019). Caratteristiche del peer feedback e giudizio valutativo in un corso universitario blended. Giornale Italiano della Ricerca Educativa, XII(Special Issue), 115–137.
Siemens, G. (2013). Learning analytics. American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851
Siemens, G., Dawson, S., & Lynch, G. (2014). Improving the quality and productivity of the higher education sector. White Paper for the Australian Government Office for Learning and Teaching.
Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15–21. https://doi.org/10.2307/3594400
Smith, A. (1776). An inquiry into the nature and causes of the wealth of nations. Creech, Mundell, Doig, Stevenson. https://play.google.com/books/reader?id=xTpFAAAAYAAJ&pg=GBS.PP8
Smith, C. D., Worsfold, K., Davies, L., Fisher, R., & McPhail, R. (2013). Assessment literacy and student learning: The case for explicitly developing students' 'assessment literacy'. Assessment and Evaluation in Higher Education, 38(1), 44–60. https://doi.org/10.1080/02602938.2011.598636
Soh, K. (2017). The seven deadly sins of world university ranking: A summary from several papers. Journal of Higher Education Policy and Management, 39(1), 104–115. https://doi.org/10.1080/1360080X.2016.1254431
Spooren, P., & Christiaens, W. (2017). I liked your course because I believe in (the power of) student evaluations of teaching (SET). Students' perceptions of a teaching evaluation process and their relationships with SET scores. Studies in Educational Evaluation, 54, 43–49.
Stewart, B. E., & Lyons, E. (2021). When the classroom becomes datafied: A baseline for building data ethics policy and data literacies across higher education. Italian Journal of Educational Technology. https://doi.org/10.17471/2499-4324/1203
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758–765. https://doi.org/10.1177/003172170208301010
Styron, J., & Styron, R. A. (2011). Teaching to the test: A controversial issue in measurement. IMSCI 2011 – 5th International Multi-Conference on Society, Cybernetics and Informatics, Proceedings, 2, 161–163.
Tsai, Y.-S., & Gasevic, D. (2017). Learning analytics in higher education – Challenges and policies. Proceedings of the Seventh International Learning Analytics & Knowledge Conference – LAK '17, 233–242. https://doi.org/10.1145/3027385.3027400
Tygel, A. F., & Kirsch, R. (2016). Contributions of Paulo Freire for a critical data literacy: A popular education approach. The Journal of Community Informatics, 12(3).
Tzimas, D., & Demetriadis, S. (2021). Ethical issues in learning analytics: A review of the field. Educational Technology Research and Development, 69(2), 1101–1133. https://doi.org/10.1007/s11423-021-09977-4
Vanhoof, J., & Schildkamp, K. (2014). From 'professional development for data use' to 'data use for professional development'. Studies in Educational Evaluation, 42, 1–4. https://doi.org/10.1016/j.stueduc.2014.05.001
Vasquez Heilig, J., & Nichols, S. L. (2013). A quandary for school leaders: Equity, high-stakes testing and accountability. In L. Tillman & J. J. Scheurich (Eds.), Handbook of research on educational leadership for equity and diversity. Routledge.
Viberg, O., Hatakka, M., Bälter, O., & Mavroudi, A. (2018). The current landscape of learning analytics in higher education. Computers in Human Behavior, 89, 98–110. https://doi.org/10.1016/j.chb.2018.07.027
Vuorikari, R., Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., & Rienties, B. (2016). Research evidence on the use of learning analytics (p. 148). Joint Research Centre – Publications Office of the European Union. https://doi.org/10.2791/955210
Wasson, B., Hansen, C., & Netteland, G. (2016). Data literacy and use for learning when using learning analytics for learners. In S. Bull, B. M. Ginon, J. Kay, M. D. Kickmeier-Rust, & M. D. Johnson (Eds.), Learning analytics for learners, 2016 workshops at LAK (pp. 38–41). CEUR.
Williamson, B. (2016). Digital education governance: Data visualization, predictive analytics, and 'real-time' policy instruments. Journal of Education Policy, 31(2), 123–141. https://doi.org/10.1080/02680939.2015.1035758
Williamson, B., & Hogan, A. (2021). Pandemic privatisation in higher education: Edtech & university reform. Education International.
Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogies and practices: Digital technologies and distance education during the coronavirus emergency. Learning, Media and Technology, 45(2), 107–114. https://doi.org/10.1080/17439884.2020.1761641
Willis, J. E., Slade, S., & Prinsloo, P. (2016). Ethical oversight of student data in learning analytics: A typology derived from a cross-continental, cross-institutional perspective. Educational Technology Research and Development, 64(5), 881–901. https://doi.org/10.1007/s11423-016-9463-4
Winne, P. H. (2017). Learning analytics for self-regulated learning. In C. Lang, G. Siemens, A. Wise, & D. Gašević (Eds.), Handbook of learning analytics (pp. 241–249). https://doi.org/10.18608/hla17.021
Yang, N., Ghislandi, P. M. M., Raffaghelli, J., & Ritella, G. (2019). Data-driven modeling of engagement analytics for quality blended learning. Journal of E-Learning and Knowledge Society, 15(3), 211–225. https://doi.org/10.20368/1971-8829/1135027
Zhao, Y. (2020). Two decades of havoc: A synthesis of criticism against PISA. Journal of Educational Change, 21(2), 245–266. https://doi.org/10.1007/S10833-019-09367-X
Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.

Juliana E. Raffaghelli is Research Professor at the University of Padua and Associate Researcher of the Edul@b Research Group at the Universitat Oberta de Catalunya. In the last 15 years, she has coordinated international research units, networks, and projects in Latin America, the Balkans, Turkey, and Western Europe in the field of educational technologies.
Her work has covered professional development for technological uptake in international and global contexts of collaboration, through a socio-technical and post-colonial lens. More recently, her research has explored the emergent manifestations of data practices and artificial intelligence through critical, open, and emancipatory pedagogies. She has coordinated six special issues for international journals and has contributed to the field with two books and several research articles and chapters in English, Spanish, Italian, and Portuguese.
Valentina Grion is Associate Professor in the Department of Philosophy, Sociology, Education and Applied Psychology at the University of Padova, Italy. Her main research interests include assessment and learning in school and university contexts; educational technologies; and teacher education. She is also known as the national advocate of the Student Voice approach. With Anna Serbati, she founded and currently leads the PAFIR (Peer Assessment and Feedback International Research group) promoting scientific initiatives on the topic of assessment and feedback. In these areas, she has published numerous articles and book chapters. Together with Anna Serbati, she won the National “Aldo Visalberghi” award offered by the Italian Educational Research Society for the best article in the year 2018 on assessment and technologies.
Chapter 5
“We Used to Have Fun But Then Data Came into Play…”: Social Media at the Crossroads Between Big Data and Digital Literacy Issues

Benjamin Gleason and Stefania Manca
Abstract With the significant growth of social media use for formal and informal learning, research scholarship has increasingly focused on the threats that the algorithmic manipulation of user behavior and the spread of misinformation pose to democratic participation and online civic engagement. In this chapter, we address the main potential and pitfalls of social media use in higher education and examine how the challenges posed by big data and growing datafication demand new literacy practices. Whether the aim is to analyze teaching and learning practices in university lecture halls or academic staff's professional learning, it has become imperative to be aware of the inner mechanisms that may hinder the deployment of social media, both for educational aims and for social media research. In this light, we adopt a learning ecologies perspective to explore issues of data literacy applied to social media, and we provide indications for professional development and for addressing problems in the use of social media data.

Keywords Social media · Digital literacy · Data literacy · Higher education · Informal learning
B. Gleason
School of Education, Iowa State University, Ames, IA, USA
e-mail: [email protected]

S. Manca (*)
Institute of Educational Technology, Italian National Research Council, Genova, Italy
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. E. Raffaghelli, A. Sangrà (eds.), Data Cultures in Higher Education, Higher Education Dynamics 59, https://doi.org/10.1007/978-3-031-24193-2_5
Introduction

While social media use continues to grow steadily, with 4.20 billion active users and half a billion new social media users in the last year (We Are Social, 2021), the literature on the educational use of social media also continues to expand (e.g., Barrot, 2021). The recently published Faculty Social Media Use and Communications report (Johnson & Veletsianos, 2021), part of a series of publications resulting from the 2020 Digital Faculty survey conducted by Bay View Analytics (formerly Babson Survey Research Group), presents the results of a survey conducted in the USA to investigate the social media habits of faculty members, particularly patterns of prevalence and personal/professional use. Results show that faculty use Facebook (75%), LinkedIn (65%), and Instagram (45%), with more than half of them using these platforms daily or every few days, while research-focused platforms like Academia.edu and ResearchGate are used far less frequently. Those who have a personal website, such as a blog or portfolio site (around 25%), also tend to use Twitter more and are more likely to use Facebook, Instagram, Reddit, and Twitter for a mix of professional and personal purposes. Overall, however, faculty social media use is mostly passive, as respondents report seldom (or never) posting content on any platform. Respondents also report mixed feelings about social media and a generally low use of social media to communicate with students (39%), stating a preference for videoconferencing and a variety of low-tech options (such as text messages, phones, or listservs).
While they acknowledge some positive features of social media, such as the ability to interact easily with others across distances and with a broad spectrum of people through idea exchange, and its value as a tool for professional and personal development, faculty also report negative opinions: viewing social media as a distraction, with the potential for obsessive misuse; spreading disinformation and fake news; and sowing political discord. Factors that may hinder faculty use of social media include cultural resistance, pedagogical issues, or institutional constraints (Manca & Ranieri, 2016), along with a lack of general digital literacy skills and of data literacy specifically. For instance, a study of data privacy among pre-service teachers taking education courses at three universities (Marín et al., 2021) found that participants recognized both the educational and the distracting potential of social media but lacked knowledge of relevant policies and regulations, which reflects trends in the broader population. In another study, Stewart and Lyons (2021) posit a need for faculty development around data and online classroom tools, and for data ethics to be addressed as part of institutions' emergency remote education transition online. Within the field of educational research, scholars have begun to suggest that social media may be reshaping the way that scholarship is conducted (Greenhow et al., 2019a; Manca & Ranieri, 2016). Inspired by scholars' increased use of social media as a personal and professional communications space, as well as by opportunities created through social media (e.g., large-scale collaboratives to address entrenched global problems like the climate crisis) and innovative research
practices (e.g., opportunities to submit pre-print work to peer review), the field of higher education is beginning to take seriously the promise of social scholarship. In our work together (Manca et al., 2021), we highlight the promise that social media can support new ways of scholarship while also centering the role that literacy plays in this process. Here, we conceptualize literacy in social media spaces as a "combination of technological, cognitive, social, and ethical skills needed for critical evaluation of social media" (Manca et al., 2021, p. 2) and articulate the need for social media literacy to be linked to data literacy in ways that facilitate user agency, civic participation, and engaged communities. Likewise, Pangrazio and Sefton-Green (2020) suggested that data literacy "might protect individuals from being manipulated by datafication processes, meaning the conceptualisation of data literacy seems central to democratic processes in the way that learning to read and write has been central to definitions of democratic franchise" (p. 218). However, the problems of misinformation and fake news, which are often used to demonize social media and prevent many teachers from using them in education, are quite complex, as the COVID-19 pandemic has made clear. A recent MIT study of the use of scientific data indicates that contested understandings of the pandemic have turned data visualizations into a battleground (Lee et al., 2021). The study analyzed conversations about COVID-19 data on Twitter and Facebook, showing that political differences lead people to draw drastically different inferences from similar data, reflecting a deeper socio-political rift over the place of science in public life more than an inability to interpret data correctly.
This chapter argues that social media use in higher education is never quite that simple. First, digital literacy is, as we have suggested before, not a checklist of digital skills to be "mastered" by the individual, but a "repertoire of practices" (Gutiérrez & Rogoff, 2003) demonstrated through participation. Second, digital platforms like Facebook and Google are motivated by corporate interests, which requires us to adopt critical practices that challenge their ordinary operations (Krutka et al., 2019; Vaidhyanathan, 2018). Third, digital literacy is more complex than teaching secondary and post-secondary students how to tell fact from fiction by evaluating sources. Fourth, understanding digital literacy means understanding meaning-making more broadly as epistemological and ideological. For the purposes of this chapter, we introduce a case study approach (Flyvbjerg, 2006) that provides educators, researchers, parents, community members, and academic administrators a series of narratives to illustrate the complex challenges and opportunities that arise from the integration of social media into the educational context. Through the use of case study vignettes, we present the stories of college professors who aim to introduce social media as a tool and space for learning. These educators, like many of us, aim to leverage the affordances of social media (e.g., to make connections between students and instructor; to support real-life interactions with experts in real time; to legitimize students' social lives as valuable resources; to promote a way for learners to create the multimedia content that is the currency of a digitally literate world) (see
126
B. Gleason and S. Manca
Manca & Ranieri, 2013). However, these educators also realize that the integration of social media into a formal learning space (i.e., their classes) raises a number of significant issues that must be negotiated as students face competing challenges – around the limits of free speech, around data privacy in a commercial digital platform, and even around our notions of what a “classroom” looks like in the metaphoric “ubiquitous learning” world. Through the case studies, we will attempt to provide a “real-life” context for some of the many challenges that arise through social media and shed light on how to make use of these challenges as opportunities for students to develop as competent, literate users in a complex digital world. Though we acknowledge the particularity of learning contexts around the globe, we attempt to use this narrative form to raise points and perspectives that apply to educators in a range of learning spaces.
Conceptual Framework

In this chapter, we adopt a socio-ecological perspective to provide a conceptual framework within which to investigate more complex interactions between persons and social media environments (Steinberg, 2001). An ecological perspective offers a way to simultaneously emphasize both individual and contextual systems and their interdependent relations. Such a perspective relies on a number of interrelated types of environmental systems – the micro-, meso-, exo-, and macro-systems – which range from smaller, proximal settings in which individuals directly interact to larger, distal settings that indirectly influence development (Bronfenbrenner, 1979; Bronfenbrenner & Morris, 1998). According to this model, proximal settings are connected to each other, and events taking place in one setting have ramifications for individual behavior and development in another. This approach has recently evolved into a networked approach that highlights the intersecting nature of social circles and “views ecological systems as an overlapping arrangement of structures, each directly or indirectly connected to the others by the direct and indirect social interactions of their participants” (Neal & Neal, 2013, p. 722). A networked socio-ecological approach to social media directs the gaze towards the relationships between individuals and the socio-technical systems implemented by social media platforms as an ecological environment, where diverse structures overlap each other directly or indirectly through the social interactions of the participants. The socio-ecological perspective has also gained momentum in the field of education. The concept of learning ecologies has been used to describe new conceptualizations of learning environments where personalized and self-initiated learning may be intertwined with formal learning through multiple and non-linear paths of mutual relationship and influence.
Although the term is used with nuances and different meanings (Sangrà et al., 2019), it is a construct that may explain the phenomena of learning in and with the digital (Barron, 2006). If the original (and widely adopted) definition conceives of learning ecology “as the set of contexts found in physical or virtual spaces that provide opportunities for learning,” where “each
5 “We Used to Have Fun But Then Data Came into Play…”: Social Media…
127
context is comprised of a unique configuration of activities, material resources, relationships, and the interactions that emerge from them” (Barron, 2006, p. 195), other theoretical approaches, such as sociocultural and situated approaches to learning (Lave, 2019; Lave & Wenger, 1991) or socio-constructivism (Dron & Anderson, 2014), have been used to complement the original idea of ecological development (Bronfenbrenner, 1979; Bronfenbrenner & Morris, 1998). These ideas are particularly relevant when learning situations such as lifelong learning (Peters & Romero, 2019) and, more generally, the combination of formal and informal learning through participatory digital cultures (Greenhow & Lewin, 2016) come into play. In this light, social media can be seen, for example, as learning environments for professional learning and academic development, where human practices and professional and personal identities are shaped and new professional learning ecologies are configured in the interplay between diverse networks or systems (Carpenter et al., 2022; Greenhow et al., 2019a, b; Ranieri et al., 2012; Staudt Willet & Carpenter, 2021; Veletsianos et al., 2019). In this chapter, we focus on the need for an ecological social media literacy framework to conceptualize digital literacy in social media, conceived as socio-technical systems in which to develop learning practices and participatory ecologies. Since understanding and managing datafication is one of the main features of data literacy, we will focus on the need to revisit the literacies required to fully embrace the potential of social media in higher education, investigating, in particular, what it means to be digitally literate in the world of social media.
Digital Literacy in a Social Media World

Conceptualizing digital literacy in social media spaces from an ecological perspective means expanding our theoretical focus from the individual as the “unit of analysis” (Gerring, 2004) to seeing social media interaction as the activity to be analyzed. This means, simply, that rather than seeing digital literacy through a theoretical lens of individual competence or mastery, we broaden our scope to consider the individual in context, where the individual user’s active participation is always linked to social, cultural, political, economic, and affective forces (Veletsianos et al., 2019). In a social media world, this may appear obvious. There is no individual activity, as participating on social media is always social. As the Russian literary theorist Bakhtin pointed out, we, as intimately social beings with a dialogic imagination, are always responding to a conversation that is already in progress (Bakhtin, 1929). In fact, the technical design of social media platforms provides a visual demonstration of this (i.e., threaded conversations on Twitter or Facebook that visualize the “flow” of conversation, indicating the chain of responses). On social media, it is literally impossible to participate as an individual. Even our most articulate, seemingly original articulations of personal philosophy are influenced by the socio-technical dimensions of the space. In these spaces, our words are strongly influenced by those around us, as their words become the raw material for our own. Writing almost a
hundred years ago, the literary critic Bakhtin (1929) encouraged us to see the interconnected nature of human speech: “I live in the world of others’ words” (p. 143), a perspective that has only been heightened and amplified in the social media space. Seeing digital literacy from a social, as opposed to an individual and psychological, perspective means considering literacy to be a co-constructed process that occurs through interaction not just with other individuals, but as activity in “real-world” places that have their own history. We respond not just as individuals but as people who are shaped by forces broader than ourselves – by culture, history, politics, economics, etc. (Holland & Lave, 2009; Lave, 2019). This is not to dismiss individual agency as delusion, but rather to broaden our theoretical perspective to understand activity as both more horizontally expansive (i.e., involving interactions between people across multiple contexts) and vertically grounded (i.e., as occurring in systems of activity that have histories longer than an individual tweet or post, conversation, or even an individual’s lifespan) (Engeström, 1999). In this perspective, individual activity is always part of an ecology of interaction that is broader than just the individual. It is possible to analyze just the individual, but we are likely to miss the complex interactions that allow for deeper, more real-life, more interdisciplinary connections that better represent reality or lead to richer theoretical perspectives. On social media, literacy becomes more than just a cognitive achievement completed by an individual; rather, it can be seen as a co-constructed, cultural activity that requires participation in real-world social spaces. Recent reviews of the literature have suggested that a significant dimension of digital literacy, especially in a social media ecology, is data literacy (Manca et al., 2021).
For example, both the UNESCO Digital Literacy Global Framework and the European Commission’s Digital Competence Framework position data literacy as an important skill that facilitates participation in a global society. As people learn to locate and retrieve data, evaluate it, and then manage it through the use of digital platforms, they may be developing important digital literacy skills. As people become more skillful at negotiating information in digital contexts, they have more opportunities to participate in broader social, political, cultural, and economic activities. Participation in these activities often requires a diverse set of digital literacy practices, including the ability to understand, interrogate, and adapt user behavior in order to challenge personalized algorithmic systems (Hobbs, 2020). Literacy today is an ever-evolving range of specific digital skills, constructed in relationship with others and in a context of powerful platforms with an all-consuming appetite for data. Invariably, these opportunities require interactions with people who hold a range of different perspectives, worldviews, and ways of being in the world. It is against this backdrop that we offer the following case studies, which present, through narrative, examples of some of the challenges and opportunities for students, instructors, and others to develop digital and data literacy. The three case studies below suggest a complicated world where people’s social, emotional, cultural, and psychological experiences, interests, and backgrounds influence the development of data literacy.
Case Study 1: Teaching with Twitter – The Tension Between “Keeping It Real” and “Keeping Students Safe”

Into this world enters the college professor, Sam, a mid-career professor at a public institution who is looking for a way to keep his students engaged with their discipline. Sam understands that there is a certain amount of background or foundational knowledge that students must leave the course with, and so class often involves a moderate amount of lecturing on topics that are critical to the discipline – without this knowledge, students will be unprepared for the next class and for more difficult concepts. Sam is eager for his students to be more engaged in class and to make sense of course content beyond merely regurgitating material on formative assessments. Sam envisions social media as a way for students to communicate with each other and to demonstrate their knowledge (or areas of challenge). As a mid-career academic, Sam is both eager and a bit traditional in his digital media savvy. He wants to try social media, but also wants to use something that has been around for a while. Sam has done background reading on social media use as a tool for teaching and learning (Gleason & Manca, 2020), and has colleagues who have integrated social media, and so decides to use a stable (i.e., not the newest or flashiest!) social media platform – Twitter. Sam has read about how enterprising instructors are using Twitter in their courses, especially as a way to support the development of digital literacy. In a history course, for example, learners begin to understand how complex a particular issue is by attempting to understand different perspectives on it (Krutka & Carano, 2016). In a communications course, learners may interact with a number of real-life experts on the topic, facilitating conversations that would be difficult to have without social media (Dunlap & Lowenthal, 2009; Manca & Ranieri, 2013).
In his field of English, Sam realized that one introductory step into the social media waters would be to assign students to tweet from the point of view of particular characters. This assignment goes over well, with one student telling him that they were able to understand Atticus Finch as a result of tweeting the character’s thoughts. Students reported that they were more engaged with To Kill a Mockingbird and were more interested in listening to the opinions and perspectives of their peers than if they were using a discussion board or completing a reading log. After this recent success, Sam decides to continue to use Twitter in the classroom and encourages his students to use the full range of affordances of the popular social media platform. For the next assignment, students will work in groups to develop curricula about some of the most important books in their lives and how these books tackle important social issues of their time. One group of students, who all share a love of great literature that addresses historical challenges, tweeted their opinion that Beloved, by Toni Morrison, moved them to reconsider the profound trauma resulting from enslavement. One student who was particularly enthusiastic about the book, Nautika, wondered if Beloved, published in 1987, was one of the best books of the past century. Nautika noted that the book doesn’t shy away from depicting the horrors of slavery, as well as the enduring bonds that arise in response
to this terrible institution. Students in the other groups share their own opinions about other books, and at the end of class, discussion turns to the next assignment. The next time class meets, 2 days later, however, Nautika is distraught. She notes how her simple tweet expressing admiration for Beloved subjected her to hate mail and trolling. In class, she notes how casually her detractors used “the n-word” in their tweets. Nautika notes that it seemed obvious that her group, made up of the four Black students in the class, received aggressive harassment that they considered “racist,” while the rest of the class did not. During the discussion, Sam explains that he did not expect this to happen, noting that he is still learning how to use Twitter and that he has never faced this sort of behavior from other users. Sam notes how sorry he is that Nautika and the other students had such an unpleasant and potentially harmful experience as a result of this assignment. Nautika responds that she appreciates the apology and accepts it; at the same time, she wonders aloud what steps he will take in the future to protect marginalized students. “Just because Black students are less than 10 percent of the student population at Midwest State does not mean that we should be an afterthought for you. Just because you did not face harassment and racism on Twitter does not mean that it won’t happen. I need you to think about this, and to think about how you can create safety for all your students, not just the white ones. If you are asking students to do something that subjects them to harm, then that is not good. We won’t do it.” This point causes Sam to stop and think: what would he do differently in the future to protect students?
Sam wonders how he can balance the tension between wanting students to participate in “real-world” activities that facilitate interaction with peers, experts, and discourse communities and understanding that these interactions might subject marginalized students to harassment, racism, sexism, and other harms. He wonders how other instructors balance this tension and what steps they take to protect student safety in a world that is routinely hostile, if not deadly, for people of color.
Platform Perspectives: Online Speech

Recently, social media research has begun to suggest that social media can support a number of beneficial educational outcomes related to civic engagement and democracy in general (i.e., becoming informed about current events through exposure to multiple viewpoints and perspectives and engaging in meaningful dialogue on social media). For example, Bakshy et al. (2015) found that social media such as Facebook can promote exposure to diverse perspectives through their unique design features, and social media can facilitate political participation in global social movements (Tufekci, 2017). One important facet of this participation is how social media seem to enable conversations on important topics, such as climate change, politics, racial justice, and others. Facebook, for example, has been an early proponent of linking
algorithmic design with content promoted on users’ newsfeeds (Cooper, 2021), which prioritizes posts that encourage engagement and social interaction. Whereas Facebook seems to encourage conversation between close friends and well-known connections (“strong ties”), researchers have found that Twitter seems to facilitate interaction with more distant connections (“weak ties”) (Valenzuela et al., 2018). This research suggested that Twitter was particularly effective in mobilizing participation among “young citizens” (Valenzuela et al., 2018, p. 129), which can support broader goals of digital citizenship and civic engagement. In other examples, Twitter has been found to be an important social space for young people to articulate their opinions on important social issues of the day and to become informed through sharing relevant perspectives (Gleason & von Gillern, 2018). Perhaps most importantly, Twitter bridges online interaction and offline activity by serving as a “connected” space that facilitates constant, mobile communication – in short, conversation. And while some researchers have argued that social media participation may be linked to significant outcomes such as a greater trust in science (Huber et al., 2019), a number of concerns remain about its use in educational contexts. Integrating social media into formal and informal teaching and learning spaces requires, as Nautika pointed out, more than mere good intentions. It also requires the desire to understand social media platforms as supporting complex interactions between individual users, networked structures, and algorithms (Starbird, 2021). In this model, the network structure is both influenced by human actions (i.e., Sam’s decision to follow particular users) and influences other users (i.e., shaping the following structure of other users).
At the same time, algorithms play an increasingly important role in influencing human actions on social media sites, as they determine which feeds are seen and recommended to others (van Dijck, 2013a). Throughout this process, novice Twitter users, such as Sam, are often unaware that their behavior is being shaped by algorithms and that, in real time, their behavior continues to fine-tune those same algorithms. Recently, Internet researchers have begun to articulate compelling evidence that the algorithms that drive social media platforms such as YouTube, and also Twitter, require additional scholarly scrutiny. At worst, they have argued that these algorithms are programmed to recommend harmful content that can lead to radicalization and extremism (Kaiser & Rauchfleisch, 2019). These recommendation engines prioritize engagement above all else, and, as research has suggested, engagement on social media is facilitated by powerful emotions. At the very least, there is a clear need for educators to understand how algorithmic design can cause harm to users, potentially far beyond their actions “on” social media platforms themselves. Citing Benjamin (2019) and O’Neil (2016), Rospigliosi (2021) argued that harm from algorithmic design comes as artificial intelligence (i.e., machine learning) leverages user behavior to make predictions that are then compared with actual results. In educational settings, according to Rospigliosi (2021), users would benefit from algorithms that are transparent and support user autonomy. That is, algorithms should be clear and easy to understand so that users are able to make intelligent decisions to exert some degree of control on the site. This argument aligns with an overall push from popular
scholars such as Baratunde Thurston (2018) who issued a “manifesto” to Big Tech that argued for transparency, autonomy, diversity, and policies to encourage regulation. Both Rospigliosi and Thurston, along with a host of others concerned with techno-ethics, have pointed out the challenges of commercial platforms that link near-constant surveillance with widespread data collection, storage, and sales. While these practices are being challenged and regulated in the European Union (i.e., the General Data Protection Regulation), the USA continues to lag behind (Sherman, 2021). For formal educational institutions such as schools, this lack of regulation for student data suggests a troubling lack of protection for minor students. Not only are students at risk from commercial platforms (i.e., through data collection on their platforms), but this data can then be integrated into a range of other data collection systems (i.e., school systems) that may carry their own risks. As Boyd (2014) so presciently noted, one constant challenge of social media is the decontextualization of student activity; once something is “out there,” it loses all sense of the original context.
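The engagement-first ranking logic critiqued in this section can be made concrete with a toy sketch. This is entirely invented for illustration – it is not any platform's actual algorithm, and the post fields and weights are assumptions – but it shows how ranking purely by predicted engagement systematically surfaces emotionally charged content over measured content.

```python
# Toy sketch (NOT any platform's real algorithm): rank a feed purely by
# predicted engagement. Post fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    shares: int
    emotional_charge: float  # 0..1, e.g., from a sentiment-intensity model

def engagement_score(post: Post) -> float:
    # Shares and replies weighted above likes; emotionally charged posts
    # get a multiplier because they tend to provoke interaction.
    base = post.likes + 3 * post.replies + 5 * post.shares
    return base * (1 + post.emotional_charge)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Engagement-first ranking: no notion of accuracy, safety, or context.
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("Measured summary of the new study", likes=40, replies=2, shares=1,
         emotional_charge=0.1),
    Post("Outrageous hot take on the same study", likes=25, replies=30, shares=12,
         emotional_charge=0.9),
]

for post in rank_feed(feed):
    print(round(engagement_score(post)), post.text)
```

Even though the measured post has more likes, the provocative post dominates the ranking because replies, shares, and emotional charge are what the score rewards – a small-scale version of the dynamic the radicalization literature describes.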
Case Study 2: Going Viral – The Challenges of Content and Context in a Digital World

An experienced high school teacher who has been teaching for 15 years, Em, is interested in integrating social media into their English class. One of Em’s guiding philosophies as a teacher is to build relationships with students in order to motivate them to connect with course content. Over 15 years at Central High School, Em has proven to be a committed teacher who’s been the “go to” teacher for hundreds of students, helping them with their college applications, attending their band concerts and sporting events, and being an eager listener for their tales of adolescent drama. More than a decade ago, students at Central introduced Em to social media. First on Twitter, then on Snapchat, and now on Instagram, Em has watched their students navigate the intricate social worlds of high school, attending to academics amidst numerous intra- and interpersonal challenges. While Em would never use the term “early adopter,” it is clear that Em has an active social media presence and finds a number of benefits in its use, mostly related to building lasting relationships with students. When administrators at Central ask for the pedagogical purpose of social media, Em has responded: “At Central, we talk a lot about helping our students to develop their voice in order to participate fully in the world. With social media, I can see, and hear, students’ real voices as they go through their daily activities. It’s like an unfiltered look at their everyday lives, and it lets me get to know my students even better than before.” When colleagues wonder about how this translates into helping students learn more about English, or History, Em has repeatedly articulated the necessity of building relationships with students as “the first step to getting students engaged in content. It’s everything.”
After attending a professional development session about how to integrate technology into classes as a way to support student engagement, a lightbulb goes off in Em’s head: “I remember thinking that it might be possible to bring all of this stuff into my classroom in a more formal way. I love using technology to get to know students, and over the years, I’ve begun to see that students are more engaged with me, and the class, as a result of our interactions on social media.” Em begins to plan, starting small. It is spring, and the students in Em’s 11th grade English class are getting restless. For their unit on The Great Gatsby, Em assigns students to record videos about the “Roaring Twenties,” and students are thrilled to get the chance to dress up as Daisy, Tom, and Gatsby. In groups, students recreate their favorite scenes from the novel, speculating on the symbolism of the “green light” and its relationship to greed and American exceptionalism. Another group creates a video where immaculately dressed juniors throw an elaborate party; in their commitment to novelistic reality, they act inebriated, and their glasses of (pretend) alcohol fall to the ground. When students meet for class the next day, they make it clear that they were pretending to be drunk and that the drinks were actually juice and water. However, one student records the re-enactment and takes it to the Assistant Principal, who calls in Em for a lengthy conversation about the assignment. The Assistant Principal begins, “Em, you know that I’ve had nothing but respect for you in the two years that I’ve been in my current role. Your students score off the charts and they refer to you as one of their favorite teachers. However, students recording themselves in a drunken state and then circulating the video is not a good look for Central.
From now on, I will ask you to refrain from any interaction with students on social media, as the content of this video, as an official assignment for your class, gives the appearance that Central condones underage drinking, especially since this group was given the highest grade.” Em responds that students completed this assignment on their own time, off school property, and that they were acting, but the Assistant Principal isn’t moved by this argument. “Let’s take some time to reflect on this conversation and we can circle back to it when our emotions have cooled. But now, I have to run to another meeting.”
Data Literacy in a Social Media World

At this point, the issue of data literacy deserves a specific focus. As we have illustrated above in the social media literacy framework (Manca et al., 2021), “information and data literacy” is one of the competence areas of an ecological digital literacy framework. This is especially relevant when dealing with the use of social media, given the complexity of data management involved on these platforms. Dealing with data in social media entails much more than being able to browse, search, filter, evaluate, and manage data. In social media, data issues may refer to privacy and security, data processed by algorithms to orient users’ behaviors, user agreement policies, and ethics in the (re)use of data. The tasks of educators may range from helping students understand the landscape of social media privacy and data collection
laws and policies to identifying possible ways to take action on these issues (Krutka et al., 2019). In order to understand how personal data originate, circulate, and are managed, according to Pangrazio and Selwyn (2019), educating around personal data literacies includes at least five domains: data identification, data understandings, data reflexivity, data uses, and data tactics (see also Crooks & Currie, 2021). As already highlighted several times in this chapter, “efforts to cultivate digital literacies also need to encompass the social and ethical aspects of social media, as technical skills alone are not sufficient in preparing young people for the complex situations and decisions they must navigate as part of use” (Pangrazio & Selwyn, 2019, p. 7). The tensions between the personal, curative, or performative character of social media practices and the commercial usefulness of platforms and apps require developing new data literacies to deal with increasing forms of datafication (McCosker, 2017). The paradox of datafication (van Dijck, 2013b) – according to which social media are both stages for communication, self-expression, and presentation and sites of conflict between users and platform owners over the control of online identities and digital data – requires a specific focus on the development of data literacy skills. Putting information distortion and proactive citizens at the center is the focus of educational programs aimed at developing skills that go beyond the individual and foster critical thinking about the online ecosystem (Carmi et al., 2020). Other programs focus on “critical data education” to reimagine, and move beyond, the dominant forms of digital “cybersafety” education in schools. In one of these programs, learning objectives were “materialising data and data processing; understanding the implications of data processing; and trialing strategies and tactics to manage and protect personal data” (Pangrazio & Selwyn, 2020, p. 1).
Supporting social justice against datafication is another recent conceptual approach that has been elaborated to inform pedagogical reflection and practice (Raffaghelli, 2020). This approach integrates four dimensions of data literacy in connection with a vision of social justice: (1) a critical data literacy, which integrates a post-colonial perspective on the Global North-South relationship; (2) a personal/ethical data literacy, which accommodates the use of data in public and private platforms; (3) a civic data literacy, which includes the use of open data for good; and (4) a pedagogical data literacy, which orients the ethical use of educational data and supports pedagogical and academic institutional development. However, despite the plethora of academic programs aimed at developing critical data literacy skills, there is still little attention to how higher education staff manage these programs. A recent review of the literature (Raffaghelli & Stewart, 2020) shows, for instance, that approaches to educators’ data literacy mostly rely on management and technical abilities, while less emphasis is put on critical, ethical, and personal approaches to datafication in education. Other elaborations have also pointed out the need for a critical data literacy that considers students’ vulnerability and agency in instructors’ and educators’ use of learning analytics (Raffaghelli et al., 2020).
Case Study 3: Developing Data Literacy Through Strategies of Resistance and Creativity

This time, our college professor, Sam, who is investigating new approaches to data literacy, wants his students to become aware of personal data as part of the cultural and material practices of their daily lives. Following Pangrazio and Selwyn’s recommendations (2019), he proposes to his students an activity aimed at identifying their personal data within some platforms they use daily, i.e., Facebook and Instagram. Students are asked, for instance, to identify data such as pictures, personal information, geolocation, and other data that may be surreptitiously extracted from them without their knowledge. The next activity focuses on answering the question “What are the origins, circulations, and uses of these different types of your personal data?”. Here, students are asked to think about how and where their personal data were generated and the ways in which they are likely to be processed and used by other parties. Data understanding implies awareness of possible (re)uses of one’s data, which can be developed by using practical tools to analyze data trails and traces and see how one’s data travel across the Internet. Understanding and interpreting the sources, and the visual representations based on these sources, is a key competence at this stage. Next, Sam encourages his students to reflect individually and collectively on the implications of the different types of personal data for themselves and others by using techniques of processing personal data, such as sentiment analysis or tools for natural language processing, to analyze and evaluate the resulting profiles. In addition, qualitative methods such as auto-ethnographic approaches may be used to complement computational approaches. An auto-ethnographic approach centers reflection on, and relational and contextual analysis of, one’s own practices in a particular cultural space.
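The data-identification step described above can be sketched in a few lines of code. The record structure below is invented for illustration – real platform exports (such as the JSON archives Facebook and Instagram let users download) use their own schemas – but it shows how even a small sample of one's own posts can be audited for the kinds of personal data it leaks.

```python
# Minimal sketch of the "data identification" activity: scan a hypothetical
# export of one's own posts for fields that carry personal data.
# The field names and sample records are invented for illustration.
from collections import Counter

sample_export = [
    {"caption": "Beach day!", "location": "Santa Cruz, CA",
     "tagged_users": ["amira", "jo"], "device": "iPhone 12"},
    {"caption": "New semester, new bullet journal",
     "tagged_users": [], "device": "iPhone 12"},
    {"caption": "Family reunion", "location": "Des Moines, IA",
     "tagged_users": ["mom_official"], "device": "iPhone 12"},
]

def audit(posts):
    # Count how often each kind of personal data appears across the posts.
    findings = Counter()
    for post in posts:
        if post.get("location"):
            findings["geolocation"] += 1
        findings["tagged friends"] += len(post.get("tagged_users", []))
        if post.get("device"):
            findings["device fingerprint"] += 1
    return findings

for kind, count in audit(sample_export).items():
    print(f"{kind}: {count}")
```

Three innocuous posts already yield two geotags, three tagged friends, and a consistent device fingerprint – a concrete starting point for the reflection Sam asks of his students.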
In his class, Sam has his students analyze their Facebook use before reflecting on the ways that commenting on a friend's post influences what they will see in their newsfeed, or on the implications of their actions on Facebook for their subsequent experiences on other platforms. For Sam and his students, the next phase of the activity is to build technical skills and interpretive competencies, such as reading the terms and conditions, adjusting privacy settings, implementing ad-blocking technologies, or setting performance targets, and to apply the information that is represented by processed data. The aim of this activity is to engage in data management strategies in order to influence the feedback of data. Finally, students are asked to develop tactics of resistance and obfuscation and to repurpose data for personal and social ends through creative applications. A number of oppositional approaches, or "data disobedience" strategies, may be taken at this point to mitigate, evade, or sabotage dominant structures of data reuse and recirculation: the deliberate use of false information, such as entering an erroneous birthdate or gender identity; the setting up of an independently assessed VPN to protect privacy; or repurposing personal data to create visualizations and representations for particular purposes.
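The final step, repurposing one's own data, can likewise be made concrete with a small sketch. The export structure below is hypothetical (real exports, such as Instagram's "Download Your Data" archive, use different field names and nesting); the point is only to show how a student might summarise how often location and friend tags travel together in their own archive.

```python
# Minimal sketch of "repurposing personal data": summarising how often a
# (hypothetical) platform export attaches a location to one's posts.
# Field names and structure are invented for illustration only.
import json

export = json.loads("""
[
  {"caption": "beach day", "place": "Barcelona", "tagged_friends": 2},
  {"caption": "study night", "place": null, "tagged_friends": 0},
  {"caption": "concert!", "place": "Madrid", "tagged_friends": 3}
]
""")

geotagged = [p for p in export if p["place"] is not None]
share = len(geotagged) / len(export)
print(f"{len(geotagged)}/{len(export)} posts geotagged ({share:.0%})")
print("friends exposed alongside a location:",
      sum(p["tagged_friends"] for p in geotagged))
```

A summary like this, turned into a simple chart, is exactly the sort of creative re-representation of one's own data trail that the activity asks for: it makes visible how often a student's friends are exposed along with their whereabouts.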
B. Gleason and S. Manca
For Sam and his students, the process of developing data literacy through analysis of their own data is a productive one. One student tells him, "I've never considered my social media posts or activity to contain so much data. For me, it was fascinating to see that my Instagram posts where I tag my friends and include my location were so full of information. While I don't have the sophisticated computational skills to do natural language processing, I was able to complete an auto-ethnography of my social media activity. From this auto-ethnography, I was able to analyze how my friends' commentary on my posts 'gave away' so much data." Another student mentioned how their introductory analysis allowed them to see differences in the kinds of messages communicated across various social media platforms: on Instagram, their posts constructed a profile of someone who travels to noteworthy, even "hip" locations, while on Twitter they mostly posted about hyperlocal events and happenings. A third student, who self-identified as a queer activist, described how they used a number of fake accounts (i.e., with fake names and other information) as a way to represent themselves differently than on their official accounts. On these "Finsta" ("fake Instagram") accounts, for example, they posted content for a select group of followers, namely close friends, who were likely to "get" what they were trying to do. For these students, and for Sam, the process of developing data literacy begins with self-study and moves outward to the ways that user activity is part of broader socio-political trends, tactics, and strategies.
Implications for Practice

In this chapter, we have attempted to introduce both the challenges and the opportunities that may arise as educators integrate social media into their pedagogical repertoires. Though social media may be seen as synonymous with social networks such as Facebook, Instagram, Snapchat, and Twitter, this chapter argues that being digitally literate in these spaces involves more than acquiring a compendium of technical skills (i.e., how to craft an engaging post, interact with followers, or integrate captivating imagery). Being digitally literate in social media spaces means developing complex knowledge about how, and why, participation is never a singular task but rather a co-constructed activity in broader socio-ecological contexts that have histories of oppression. For example, while educators such as Sam and Em attempted to integrate social media as tools for communication, creativity, and real-life interaction, they perhaps did not imagine these academic tasks to be part of networked activity. Not only do these tasks travel far beyond their intended environment (i.e., the classroom), not only do they persist beyond the assignment's due date, but they can also have a range of unforeseen negative consequences for students, educators, families, communities, and countless others. Social media's affordances for connectivity, collective authorship, and permanence, and the capacity of social media activity to "go viral", mean that context collapse (Boyd, 2014) will invariably ensure that different audiences interpret a story, image, or video differently. For educators, this means that we have a responsibility to deepen our theoretical and practical understanding
of digital literacy, situating it not as an individual, cognitive accomplishment but as a co-created sociocultural task dependent on networked activity. Further still, taking a critical approach suggests centering this activity as grounded in systems of oppression, in which people with marginalized identities and experiences have both been subjected to injustice and found the agency to survive (Sabzalian, 2019). As suggested by Marín et al. (2021), there is a need for more up-to-date policies and regulations related to educators' social media use, and for teacher education programs to help pre-service teachers develop data literacy skills related to social media. From the students' perspective, efforts to develop media literacy in a "post-truth" world have also been made in light of the infodemic during the COVID-19 pandemic (Scheibenzuber et al., 2021). One identified solution is to invest in problem-based learning as an instrument for combating fake news illiteracy, because it is rooted in authentic materials, including interactive media elements (links to webpages, videos, or images), that work to contextualize the problem. This is not to critique the efforts of educators, such as the fictionalized Sam and Em, who incorporated social media as learning spaces in their courses, but rather to call attention to the complex, pernicious, and inequitable dynamics at play in social media. It is essential that educators do not imagine social media platforms to be neutral spaces for intellectual play. They are not. They are "walled gardens" with a commercial imperative to facilitate communication and engagement through affective interaction (Krutka et al., 2019).
Whether we imagine the assigned academic tasks as existing in multiple nested systems (i.e., social, cultural, educational, platformed) or as networked activity (i.e., where social media activity pushes up against educational policies around protected speech), we would do well to understand the multiple, competing influences that arise from educational activity in social media spaces. These tensions are not mere adolescent social media "drama" (Marwick & Boyd, 2011) but reflect broader ideological tensions about the purposes of digital media, how it relates to broader civic participation, and so on. Here, we note that while emerging research aligned with critical educational scholarship suggests that social/digital media can be a space for powerful intellectual growth for individuals and society (O'Byrne, 2019), this runs up against recent legislation that aims to limit opportunities for meaningful civic engagement (see Texas, for one). Taken together, these case studies suggest a range of understandings, tactics, and approaches to data literacy and datafication, and point to the need for educators, students, and communities to develop nuanced understandings, practices, and responses to datafication: responses that go beyond individual mastery of technical skills by students, beyond educators acting as advocates for marginalized students and communities, and beyond student understandings of how intersectional approaches to data literacy affect students, families, and networks. A great deal of scholarship from educational researchers advocates the need for increased literacy skills, for the development of critical literacy, and for students to develop data literacy; all of these focus on the (obvious) need to develop student competence and skills to navigate the increasingly complex global media and information
ecology of the twenty-first century. These approaches are necessary but not sufficient. What might be equally useful would be to adopt an approach that centers the asymmetry of power between individual users, on the one hand, and, on the other, the powerful organizations, institutions, and commercial platforms that seek to consolidate power, promote particular (i.e., conservative and/or neoliberal/capitalist) ideologies, and punish those who wish to educate, inform, or resist the power of these perspectives. Crooks and Currie (2021) and others (Costanza-Chock, 2020) argue for an approach that recognizes the asymmetry of power, intersectionality, and the harmful consequences that arise when powerful actors create policies, practices, and methods of living that purportedly aim for freedom without acknowledging the wide disparities in power and the possibilities of marginalization, inequity, and injustice. Crooks and Currie's (2021) "agonistic" approach deliberately raises confrontation and conflict as a constructive method for consciousness-raising, for political activism, and for reimagining current and future political practices that may result in a more egalitarian society. In this vision, educators who wish to teach students about data literacy and datafication may heed the blunt words of Brandi Levy, the high school student suspended for her Snapchat story "fuck school, fuck softball, fuck cheer, fuck everything." This seemingly "vulgar" adolescent opinion may, counterintuitively, reflect a profound political orientation to the powerful institutional and organizational practices that govern life in an information and media ecology.
Through its deliberate appeal to affect, Brandi's middle finger to the powerful elite strikingly illustrates the importance of viewing data literacy not just as a set of discrete technical skills but as an embodied practice, in which being literate is an ongoing activity in a world where groups of highly powerful actors (i.e., commercial platforms, educational institutions) have political, cultural, economic, and social incentives to surveil, monitor, and punish those who do not conform to dominant norms.
Conclusion and Recommendations for Further Research

In this chapter, we have addressed a number of topics revolving around social media, such as digital literacy applied to social media, how online conversations are driven by the algorithms that steer the platforms, and data literacy from a critical perspective. Our aim was to highlight some of the critical issues that arise when social media platforms are used for educational purposes in higher education. Obviously, this is an ongoing debate in which very different sociological, educational, socio-technical, and cultural positions converge. As recently pointed out, "reclaiming conversations" (Turkle, 2015) that are meaningful and capable of creating "authentic connection in a time of uncharted challenges" (Turkle, 2021) is no easy task. If, over time, the fears of the ed-tech community have mainly focused on the dangers of the increasing robotization of our societies (e.g., Selwyn, 2019; Turkle, 2011), today, the risks inherent in the growing
"connectivity" of social media mechanisms (van Dijck, 2013a) provide a warning to citizens and educators. If human actions are influenced by network structure and algorithmic design, which are motivated by profit, we need to develop user behaviors that can overcome the growing pessimism in part of the scientific community. In this sense, alongside the development of critical digital literacy programs, it is also important to build research methods that are ecological and rooted in the experiences of users and learners. From this point of view, there is a growing need to develop mixed-method research approaches that combine computational and data-driven methods with narrative approaches based on netnographic and auto-ethnographic observations, content analysis, and other methods of qualitative research (Kozinets, 2020). If social media research is committed to providing increasingly sophisticated tools to overcome the challenges inherent in managing an increasingly pervasive "big and broad data" society (Sloan & Quan-Haase, 2018), there is also a need to embrace social media research not only as a sector of educational technology but as "the" key to understanding contemporary public life and its techno-ethical implications. In this sense, scholars are required to make a special effort to compose a research framework that is as rich and articulate as possible, but which today still remains fragmentary and partial.
References

Bakhtin, M. M. (1929). Problems of Dostoevsky's art (Russian). Priboj. [Emerson, C. (Trans.), Problems of Dostoevsky's poetics. University of Minnesota Press, 1984].
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecology perspective. Human Development. Karger.
Barrot, J. S. (2021). Scientific mapping of social media in education: A decade of exponential growth. Journal of Educational Computing Research, 59(4), 645–668.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Wiley.
Boyd, D. (2014). It's complicated: The social lives of networked teens. Yale University Press.
Bronfenbrenner, U. (1979). The ecology of human development: Experiments by nature and design. Harvard University Press.
Bronfenbrenner, U., & Morris, P. A. (1998). The ecology of developmental processes. In W. Damon & R. Lerner (Eds.), Handbook of child psychology: Theories of development (Vol. 4, pp. 999–1058). Wiley.
Carmi, E., Yates, S. J., Lockley, E., & Pawluczuk, A. (2020). Data citizenship: Rethinking data literacy in the age of disinformation, misinformation, and malinformation. Internet Policy Review, 9(2), 1–22.
Carpenter, J. P., Krutka, D. G., & Trust, T. (2022). Continuity and change in educators' professional learning networks. Journal of Educational Change, 23, 85–113. https://doi.org/10.1007/s10833-020-09411-1
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.
Cooper, P. (2021). How the Facebook algorithm works in 2021 and how to make it work for you (blog post). https://blog.hootsuite.com/facebook-algorithm/
Crooks, R., & Currie, M. (2021). Numbers will not save us: Agonistic data practices. The Information Society, 37(4), 201–213. https://doi.org/10.1080/01972243.2021.1920081
Dron, J., & Anderson, T. (2014). Teaching crowds: Learning and social media. Athabasca University Press.
Dunlap, J. C., & Lowenthal, P. R. (2009). Tweeting the night away: Using Twitter to enhance social presence. Journal of Information Systems Education, 20(2), 129–135.
Engeström, Y. (1999). Activity theory and individual and social transformation. In Y. Engeström, R. Miettinen, & R. Punamäki (Eds.), Perspectives on activity theory (Learning in Doing: Social, Cognitive and Computational Perspectives) (pp. 19–38). Cambridge University Press. https://doi.org/10.1017/CBO9780511812774.003
Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219–245.
Gerring, J. (2004). What is a case study and what is it good for? American Political Science Review, 98(2), 341–354.
Gleason, B., & Manca, S. (2020). Curriculum and instruction: Pedagogical approaches to teaching and learning with Twitter in higher education. On the Horizon, 28(1), 1–8.
Gleason, B., & Von Gillern, S. (2018). Digital citizenship with social media: Participatory practices of teaching and learning in secondary education. Journal of Educational Technology & Society, 21(1), 200–212.
Greenhow, C., & Lewin, C. (2016). Social media and education: Reconceptualizing the boundaries of formal and informal learning. Learning, Media and Technology, 41(1), 6–30.
Greenhow, C., Gleason, B., & Staudt Willet, K. B. (2019a). Social scholarship revisited: Changing scholarly practices in the age of social media. British Journal of Educational Technology, 50(3), 987–1004.
Greenhow, C., Li, J., & Mai, M. (2019b). From tweeting to meeting: Expansive professional learning and the academic conference backchannel. British Journal of Educational Technology, 50(4), 1656–1672.
Gutiérrez, K. D., & Rogoff, B. (2003). Cultural ways of learning: Individual traits or repertoires of practice. Educational Researcher, 32(5), 19–25.
Hobbs, R. (2020). Propaganda in an age of algorithmic personalization: Expanding literacy research and practice. Reading Research Quarterly, 55(3), 521–533.
Holland, D., & Lave, J. (2009). Social practice theory and the historical production of persons. Actio: An International Journal of Human Activity Theory, 2(1), 1–15.
Huber, B., Barnidge, M., Gil de Zúñiga, H., & Liu, J. (2019). Fostering public trust in science: The role of social media. Public Understanding of Science, 28(7), 759–777.
Johnson, N., & Veletsianos, G. (2021). Digital faculty: Faculty social media use and communications. Bay View Analytics.
Kaiser, J., & Rauchfleisch, A. (2019, June 27). The implications of venturing down the rabbit hole. Internet Policy Review. https://policyreview.info/articles/news/implications-venturing-down-rabbit-hole/1406
Kozinets, R. V. (2020). Netnography: The essential guide to qualitative social media research (3rd ed.). Sage.
Krutka, D. G., & Carano, K. T. (2016). "As long as I see you on Facebook I know you are safe": Social media experiences as humanizing pedagogy. In Rethinking social studies teacher education in the twenty-first century (pp. 207–222). Springer.
Krutka, D. G., Manca, S., Galvin, S. M., Greenhow, C., Koehler, M. J., & Askari, E. (2019). Teaching "against" social media: Confronting problems of profit in the curriculum. Teachers College Record, 121(14), 140310.
Lave, J. (2019). Learning and everyday life: Access, participation, and changing practice. Cambridge University Press.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge University Press.
Lee, C., Yang, T., Inchoco, G., Jones, G. M., & Satyanarayan, A. (2021). Viral visualizations: How coronavirus skeptics use orthodox data practices to promote unorthodox science online. In Proceedings of CHI '21, May 8–13, 2021, Yokohama, Japan.
Manca, S., & Ranieri, M. (2013). Is it a tool suitable for learning? A critical review of the literature on Facebook as a technology-enhanced learning environment. Journal of Computer Assisted Learning, 29(6), 487–504.
Manca, S., & Ranieri, M. (2016). Facebook and the others: Potentials and obstacles of social media for teaching in higher education. Computers & Education, 95, 216–230.
Manca, S., Bocconi, S., & Gleason, B. (2021). "Think globally, act locally": A glocal approach to the development of social media literacy. Computers & Education, 160, 104025.
Marín, V. M., Carpenter, J. P., & Tur, G. (2021). Pre-service teachers' perceptions of social media data privacy policies. British Journal of Educational Technology, 52(2), 519–535.
Marwick, A. E., & Boyd, D. (2011). I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society, 13(1), 114–133.
McCosker, A. (2017). Data literacies for the postdemographic social media self. First Monday, 22(10).
Neal, J. W., & Neal, Z. P. (2013). Nested or networked? Future directions for ecological systems theory. Social Development, 22(4), 722–737.
O'Byrne, W. I. (2019). Educate, empower, advocate: Amplifying marginalized voices in a digital society. Contemporary Issues in Technology and Teacher Education, 19(4), 640–669.
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Pangrazio, L., & Sefton-Green, J. (2020). The social utility of 'data literacy'. Learning, Media and Technology, 45(2), 208–220.
Pangrazio, L., & Selwyn, N. (2019). 'Personal data literacies': A critical literacies approach to enhancing understandings of personal digital data. New Media & Society, 21(2), 419–437.
Pangrazio, L., & Selwyn, N. (2021). Towards a school-based 'critical data education'. Pedagogy, Culture & Society, 29(3), 431–448. https://doi.org/10.1080/14681366.2020.1747527
Peters, M., & Romero, M. (2019). Lifelong learning ecologies in online higher education: Students' engagement in the continuum between formal and informal learning. British Journal of Educational Technology, 50(4), 1729–1743.
Raffaghelli, J. E. (2020). Is data literacy a catalyst of social justice? A response from nine data literacy initiatives in higher education. Education Sciences, 10(233), 1–20.
Raffaghelli, J. E., & Stewart, B. (2020). Centering complexity in 'educators' data literacy' to support future practices in faculty development: A systematic review of the literature. Teaching in Higher Education, 25(4), 435–455.
Raffaghelli, J., Manca, S., Stewart, B., Prinsloo, P., & Sangrà, A. (2020). Supporting the development of critical data literacies in higher education: Building blocks for fair data cultures in society. International Journal of Educational Technology in Higher Education, 17(58), 1–22.
Ranieri, M., Manca, S., & Fini, A. (2012). Why (and how) do teachers engage in social networks? An exploratory study of professional use of Facebook and its implications for lifelong learning. British Journal of Educational Technology, 43, 754–769.
Rospigliosi, P. A. (2021). The risk of algorithmic injustice for interactive learning environments. Interactive Learning Environments, 29(4), 523–526.
Sabzalian, L. (2019). Indigenous children's survivance in public schools. Routledge.
Sangrà, A., Raffaghelli, J. E., & Guitert-Catasús, M. (2019). Learning ecologies through a lens: Ontological, methodological and applicative issues. A systematic review of the literature. British Journal of Educational Technology, 50(4), 1619–1638.
Scheibenzuber, C., Hofer, S., & Nistor, N. (2021). Designing for fake news literacy training: A problem-based undergraduate online-course. Computers in Human Behavior, 121, 106796.
Selwyn, N. (2019). Should robots replace teachers?: AI and the future of education. Polity Press.
Sherman, J. (2021, July 20). Weak US privacy law hurts America's global standing. Wired. https://www.wired.com/story/weak-us-privacy-law-hurts-americas-global-standing/
Sloan, L., & Quan-Haase, A. (2018). The Sage handbook of social media research methods. Sage.
Starbird, K. (2021). A conceptual model for understanding how network structure, algorithms, and human actions are mutually shaping. https://twitter.com/katestarbird/status/1388207965765701633/photo/1
Staudt Willet, K. B., & Carpenter, J. P. (2021). A tale of two subreddits: Change and continuity in teaching-related online spaces. British Journal of Educational Technology, 52(2), 714–733.
Steinberg, L. (2001). Contextual studies: Methodology. International Encyclopedia of the Social & Behavioral Sciences, 2705–2709.
Stewart, B. E., & Lyons, E. (2021). When the classroom becomes datafied: A baseline for building data ethics policy and data literacies across higher education. Italian Journal of Educational Technology, 29(2), 54–68.
Thurston, B. (2018). A new tech manifesto. One Zero (blog). https://onezero.medium.com/a-new-tech-manifesto-21d251058af3
Tufekci, Z. (2017). Twitter and tear gas. Yale University Press.
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
Turkle, S. (2015). Reclaiming conversation: The power of talk in a digital age. Penguin.
Turkle, S. (2021). The empathy diaries: A memoir. Penguin.
Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press.
Valenzuela, S., Correa, T., & Gil de Zúñiga, H. (2018). Ties, likes, and tweets: Using strong and weak ties to explain differences in protest participation across Facebook and Twitter use. Political Communication, 35(1), 117–134.
van Dijck, J. (2013a). The culture of connectivity: A critical history of social media. Oxford University Press.
van Dijck, J. (2013b). 'You have one identity': Performing the self on Facebook and LinkedIn. Media, Culture & Society, 35(2), 199–215.
Veletsianos, G., Johnson, N., & Belikov, O. (2019). Academics' social media use over time is associated with individual, relational, cultural and political factors. British Journal of Educational Technology, 50(4), 1713–1728.
We Are Social. (2021). Digital 2021: Global overview report. https://wearesocial.com/digital-2021

Benjamin Gleason is Associate Professor of Educational Technology in the School of Education at Iowa State University. He obtained his PhD in Educational Psychology and Educational Technology from Michigan State University, with a focus on social media for teaching and learning. Prior to academia, he worked as a youth development specialist and high school teacher in Richmond, California; as an instructor of English and Social Studies in Guatemala; and as a graduate student instructor in Michigan. He is interested in researching people's everyday practices on social media, particularly as people make meaning, develop identities, and participate in civic engagement in highly contextualized and contentious spaces.
Stefania Manca is Research Director at the Institute of Educational Technology of the Italian National Research Council. She has a Master's Degree in Education and is a PhD student in Education and ICT (e-learning) at Universitat Oberta de Catalunya, Spain. She has been active in the fields of educational technology, technology-based learning, distance education, and e-learning since 1995. Her research interests are social media and social network sites in formal and informal learning, teacher education, professional development, and digital scholarship. She has been the project coordinator of the IHRA grant no. 2020-792 "Countering Holocaust distortion on Social Media. Promoting the positive use of Internet social technologies for teaching and learning about the Holocaust". Her recent work includes a contribution to the journal Computers & Education, "'Think globally, act locally': A glocal approach to the development of social media literacy", co-authored with Stefania Bocconi and Benjamin Gleason.
Part II
Exploring Proactive Data Epistemologies in Higher Education
Chapter 6
Why Does Open Data Get Underused? A Focus on the Role of (Open) Data Literacy

Gema Santos-Hermosa, Alfonso Quarati, Eugenia Loría-Soriano, and Juliana E. Raffaghelli

Abstract Open data has been conceptualised as a strategic form of public knowledge. Tightly connected with developments in open government and open science, the main claim is that access to open data (OD) might be a catalyser of social innovation and citizen empowerment. Nevertheless, the so-called (open) data divide, a problem connected to the current state of OD usage and engagement, remains a concern. In this chapter, we introduce OD usage trends, focusing on the role played by (open) data literacy amongst both users and producers: citizens, professionals, and researchers. Indeed, we attempt to cover the problem of OD through a holistic approach spanning two areas of research and practice: open government data (OGD) and open research data (ORD). After uncovering several factors blocking OD consumption, we point out that more OD is being published (albeit with low usage), and we review the research on data literacy. While stakeholders' intentions are driven by many motivations, the abilities that would put them in a position to make the most of OD may require further attention. Finally, we focus on several lifelong learning activities supporting open data literacy, uncovering the challenges ahead to unleash the power of OD in society.

G. Santos-Hermosa (*)
Universitat de Barcelona, Barcelona, Spain
e-mail: [email protected]

A. Quarati
Institute of Applied Mathematics and Information Technologies, National Research Council of Italy, Genoa, Italy
e-mail: [email protected]

E. Loría-Soriano
Universitat Oberta de Catalunya, Barcelona, Spain
e-mail: [email protected]

J. E. Raffaghelli
Edul@b Research Group, Universitat Oberta de Catalunya, Barcelona, Spain
Department of Philosophy, Sociology, Pedagogy and Applied Psychology, University of Padua, Padua, Italy
e-mail: [email protected]; [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
J. E. Raffaghelli, A. Sangrà (eds.), Data Cultures in Higher Education, Higher Education Dynamics 59, https://doi.org/10.1007/978-3-031-24193-2_6
Keywords Open data · Usage · Data literacy · Lifelong learning
Introduction

Open, public, and accessible knowledge has been a matter of concern for societies since the Universal Declaration of Human Rights (UDHR) expressed that:

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.1 (United Nations, 1998, Art. 19)
This right emphasises the need for universal access to information as a springboard of knowledge building for social, cultural, and economic purposes. The advancement of technologies has reframed the debate around free access to human knowledge, emphasising the potential of the technological medium, particularly the Internet, for such a purpose. Indeed, the Internet's ubiquity and the cultural change it brought were outlined in what Castells (2001) called the network society. Nonetheless, the utopia had to be deconstructed: the socio-economic and cultural issues preventing people from taking advantage of the Internet's affordances led to what was characterised as the "digital divide" (Hargittai, 2003). Despite such tensions, the digital phenomenon kept advancing at a fast pace, leading to the emerging phenomenon of "datafication". This refers to the massive ways in which digitised data have permeated our lives, with several effects and unpredicted impacts. However, the dynamics of knowledge generation, access, and circulation in digital space and through digital instruments have seemingly been replicated, leading to a sort of data divide. The debate on the relevance of open data as "data that anyone can access, use and share"2 was quickly embraced by several transnational bodies as a central piece of policy making, considering the crucial role of this component within the panorama of open, public, and accessible knowledge. Open data assumes several facets and forms of existence, with highly specific debates connecting particularly to the data generated in the public sphere of government, on the one hand, and in the science and technology system, on the other. Having in common the public funding which supports their outcomes, these two areas of human activity are expected to be accountable and transparent. Moreover, they should unleash public knowledge to support civic empowerment and social innovation (Baack, 2015).
As argued by Zuiderwijk and Janssen (2014), civic monitoring and appropriation of the results of open data as part of public knowledge are imperative.
1. https://www.un.org/en/about-us/universal-declaration-of-human-rights
2. According to the European Data Portal definition: https://www.europeandataportal.eu/elearning/en/module1/#/id/co-01
6 Why Does Open Data Get Underused? A Focus on the Role of (Open) Data Literacy
147
In such a context, research on the production, use, and quality of open government data (OGD) was an initial concern that quickly spread to open research data (ORD). As Dai, Shin, and Smith point out, "A culture of OGD needs to be nurtured in both the government and across the entire OGD ecosystem of users, including researchers" (Dai et al., 2018, p. 14). Along with the increasing importance given to OGD and ORD in policies, research advanced in understanding how public bodies and researchers publish open data in data repositories and portals, with studies pointing out, for instance, the difficulties encountered when organising data infrastructures that are usable by citizens (Davies, 2010; Zuiderwijk et al., 2012), as well as how researchers understand copyright and select data portals (Kessler, 2018). Relevant reports have also documented the increasing publication of open data, both in the case of ORD (Braunschweig et al., 2012; Digital Science et al., 2019) and in that of OGD (OECD, 2018; Open Data for Development Network & Omidyar Network, 2018; UKtransparency & CabinetofficeUK, 2012). The impact of open data, like that of any source of information, is limited if the data are not used (Máchová & Lnenicka, 2017) or transformed into knowledge by society, despite being considered a driving force for the transparency of (e-)government policies (Lourenço, 2015), for economic growth (Janssen et al., 2012), and for the effectiveness, productivity, and reproducibility of scientific results (Gregory et al., 2020). Indeed, the numbers reported around the publication of open data say nothing about their actual reuse (Mergel et al., 2018; Janssen et al., 2012). Moreover, as noted by Lassinantti et al. (2019), the use of open data is "still considered problematic, and knowledge about open data use is scarce", while, as pointed out by Wirtz et al. (2019), research in the field "lacks empirical evidence".
As a result, open data as a source of information may be engaged with only superficially or may not be aligned with potential users, which leads to the need to uncover the factors preventing usage, as well as approaches to promote it. In this chapter, we claim that (open) data literacies, and the actions taken to develop them, could be a game-changer for the problem of open data usage. To support this claim, we start by analysing usage trends and uncovering blocking factors, such as data attrition and insufficient awareness of the relevance of open data policies and practices. The unwillingness to move towards a field that is still inappropriately framed for open data stakeholders (i.e. producers and/or consumers) is another hurdle. Within this landscape, we also pay particular attention to open data quality, given the remarkable connections between quality as a driver and facilitator for the end user. Nevertheless, producing quality also requires advanced knowledge and skills, embedded in open data literacy. Therefore, we introduce some of the most relevant data literacy concerns relating to the usage of open data. Finally, we cover several actions supporting open data literacy, as interventions and recommendations to unleash the power of open data in society.
148
G. Santos-Hermosa et al.
The Value of Open Data for Open Knowledge

The pressure to open data in science has moved in parallel with the call for open data in all public activities, particularly in the context of e-government, these activities being maintained through public funding (Zuiderwijk & Janssen, 2014). In the case of open government data, the open data movement is an emerging political and socio-economic phenomenon that promises to promote civic engagement and drive public sector innovations in various areas of public life (Kassen, 2020). The Open Data Handbook3 describes open data as data that can be freely used, reused, and redistributed by anyone, subject only, at most, to requirements of attribution and share-alike. Along the same lines, the Open Knowledge Foundation, established in 2004, is recognised for its mission of a fair, free, and open future, where all non-personal information is open and free for all to use. Open data are inherently a nonexcludable resource: no one can exclude others from using them (Jetzek et al., 2019, p. 704). In 2012, the Open Data Institute (ODI) (https://theodi.org/) was founded to showcase the value of open data and to encourage its innovative use to bring positive change around the world. Several conferences have been held over the past 10 years, along with the creation of initiatives and networks of organisations worldwide. As examples, we can mention the International Open Data Conference (IODC), networks committed to the use of open data such as Open Data for Development (OD4D), and global initiatives such as the Open Data Barometer (ODB) and the Open Data Charter (Davies et al., 2019). Also significant are organisations such as the Open Contracting Partnership and, at the regional level, the Latin American Open Data Initiative (ILDA).
Indeed, there was an expectation of several benefits connected to the opening of government data, such as improving transparency and reliability in administration, promoting public participation and public-private collaboration, economic renewal, and the recognition of public data as an asset of the people (Sumitomo & Koshizuka, 2018). It was also pointed out that governments could benefit from the skills and knowledge of data-versed users to identify high-priority datasets usable for the creation of new products and services (Cruz & Lee, 2019), and the private sector could benefit from innovation spurred by the release of government data (Magalhaes & Roseira, 2020). As for open research data, it has been asserted that the concept of open data could concretise the ideals of openness in science. As an object of socialisation, exchange, and public appropriation of scientific knowledge, open data could become an instantiation of the open science movement (Fecher & Friesike, 2013; Greco, 2014). Several international organisations have increasingly covered data sharing in their funded projects, as the debate achieved increasing relevance in the international and European policy context (European Commission, 2016). Cases like the following make evident the attention given to the issue in the international agenda of policy making: the OpenAIRE portal as a base for the visibility of open data arising from the European research framework Horizon 2020 (European Commission, 2016); Wellcome Trust's policy statements (Wellcome Trust, 2016); the Netherlands Organisation for Scientific Research (NWO, n.d.); CERN's policies (CERN, 2018); or the Bill & Melinda Gates Foundation as a private case (Bill & Melinda Gates Foundation, 2017). For the European Commission, the centrality of open data became a reality through the Mallorca Declaration of 2016 (European Commission – RISE – Research Innovation and Science Policy Experts, 2016). Such strong public and private efforts to publish open data are said to enable researchers to replicate scientific work and to have their data reused, in an economy of research resources that also benefits the researchers themselves (McKiernan et al., 2016). But enunciating these hopeful principles is one thing; observing concrete practices around open data is another. Open data publication can be considered only a first action connected to the real applications of the concept. Even more important, as it highlights the appropriation and circulation of open data as part of public knowledge, is the situation of their usage, to which we turn in the following paragraphs.
3. https://opendatahandbook.org/
Issues in Open Data Usage

Open Data Usage

At present, a (limited) number of studies have analysed the use of government and research open data from two perspectives: a qualitative one, mainly focused on investigating the motivations, practices, and experiences of users when they approach OD (Safarov et al., 2017; Wirtz et al., 2019; Lassinantti et al., 2019; Ruijer et al., 2020), and a quantitative one, based on parameters of use of the datasets, such as the numbers of views and downloads provided by OD portals and repositories, which reveal some usage trends (Barbosa et al., 2014; Quarati & De Martino, 2019; Quarati & Raffaghelli, 2020; Raffaghelli & Manca, 2019; Quarati, 2021, in press). Safarov et al. (2017) carried out a systematic academic literature review on OGD, analysing studies which discussed at least one of four factors of OGD utilisation: the types, the conditions, the effects, and the users. A framework analysing the relations between these four factors has been defined, highlighting that most of the examined studies focus on aspects related to the release of OGD, taking for granted, but not empirically testing, its use in its various forms. The authors also point out that "More rigorous empirical research is needed to assess if the estimated effects of OGD are actually measurable". Focusing on the users, Wirtz et al. (2019) aimed to investigate antecedents of OGD usage, empirically testing a structural equation model on survey data collected from 210 German citizens. They found that citizens' intention to use OGD is
primarily driven by perceived usefulness, that is, the "extent to which citizens expect that using OGD improves their personal performance". Also, the belief that OGD can enhance transparency, participation, and collaboration is a usage-motivating factor, along with the ease of use of OGD-enabling platforms (i.e. the effort required to consume OGD). The motivations urging people to use OGD were also explored by Lassinantti et al. (2019), who performed a document analysis to clarify what motives urge people to use OGD and what important user types exist. Relevant social groups theory was applied to a set of 60 topic reports published on the EU knowledge portal for public sector information.4 These reports provide guidelines and best practices related to a broad range of OGD topics. Five main relevant social groups were identified according to the motives that drive people towards open data use: (i) exploring for creativity, (ii) creating business value, (iii) enabling local citizen value, (iv) addressing global societal challenges, and (v) advocating the open data agenda. Although they remark that the lack of direct access to data may limit the extent of their results, they hope their analysis will be a spur to more in-depth research aimed at supporting wider use of OGD. Up to this point, we have considered the specific user interaction with OGD infrastructures, although another relevant component is organisations, as human collectives embedding technical infrastructures. In this regard, Ruijer et al. (2020, p. 6) state that "in order to better understand the complexity of OGD usage", it is necessary to examine the interactions between governments and citizens on OGD platforms and what "people actually do with OGD" in their daily lives.
To empirically validate their assumption, they followed an action research approach in which 23 participants (civil servants and stakeholders) from a rural province in the Netherlands collaborated by using the provincial OGD portal, containing around 70 datasets. A discrepancy emerged between users' needs and expectations and what is offered by the existing datasets. As the authors point out, "A more participatory approach to the development of OGD platforms is needed to prevent a situation where there are great technological opportunities but no usage". The previous works deal with the use of open data theoretically or through empirical research based on surveys, arriving at profiles of user types, their motivations, and the relationships between them. Thus, they provide supporting guidelines for open data providers and platform designers when implementing strategies and practices that are more focused on the needs and expectations of users of the open data ecosystem. This type of investigation, however, does not help data publishers and consumers understand to what extent open datasets are actually used. Barbosa et al. (2014) provide a quantitative analysis of the use of approximately 9000 datasets of 20 Socrata-based city portals in the United States. Amongst general statistics on the content, size, and nature of the examined sample, the study derives information on the popularity of the datasets based on the numbers of views and downloads. The results relating to the number of views show that almost 60% of datasets are seen no more than 100 times and another 30% of datasets are viewed
4. www.europeandataportal.eu
up to 1000 times. The number of downloads is considerably lower than the number of views, with just 13% of the datasets downloaded more than 100 times. According to the authors, the difference between views and downloads could be explained by the fact that most users do not feel the need to download the datasets, since their content is displayed directly in tabular form, as is typical of Socrata-based portals. Quarati and De Martino (2019) supply an evaluation of the usage trends of OGD national portals from 98 countries, investigating the presence of usage metadata directly visible to users. This check led to a sample of six portals that also allow programmatic access via API to two usage indicators, i.e. the numbers of views and downloads, for each portal dataset. Their analysis revealed two outcomes. Firstly, most of the portals examined lack any information on their datasets' usage. Secondly, for the six OGD portals publishing programmatically retrievable usage data, the results show that most of the datasets are largely unused, with median view counts that do not exceed 30 (the value obtained for the UK portal's datasets). Raffaghelli and Manca (2019) explore practices of ORD publication and sharing in the field of educational technology, based on a small sample of open datasets published in three open data repositories (Figshare, Zenodo, and Mendeley Data) and two open data portals (OpenAIRE and LearnSphere), as well as on the academic social network site ResearchGate. The authors analyse usage information such as numbers of downloads, views, and citations of ORD in connection with ORD quality. The results show very low usage activity on the platforms and very few correspondences on ResearchGate, highlighting a limited social life surrounding open datasets. Quarati and Raffaghelli (2020) investigate the usage trends of more than 6,000,000 open research datasets published on the Figshare research repository through two metrics: numbers of views and downloads.
The view and download frequency distribution curves show heavy-tailed distributions, with very few datasets having a high frequency of use and most having very low frequencies of usage, denoting that the majority of the published datasets, almost independently of their scientific discipline, are far from being used.
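The heavy-tailed usage pattern described above can be illustrated with a short, self-contained sketch. The view counts below are synthetic stand-ins, not actual Figshare or portal figures, and the function name is ours.

```python
from statistics import median

def usage_profile(view_counts, thresholds=(10, 100, 1000)):
    """Summarise how a collection of dataset view counts is distributed.

    Returns the median and, for each threshold, the share of datasets
    viewed at most that many times -- a quick check for heavy tails.
    """
    n = len(view_counts)
    shares = {t: sum(1 for v in view_counts if v <= t) / n for t in thresholds}
    return {"median": median(view_counts), "shares": shares}

# Synthetic example: a handful of popular datasets, many rarely viewed ones.
counts = [5] * 700 + [50] * 250 + [800] * 45 + [20000] * 5
profile = usage_profile(counts)
print(profile["median"])       # the typical dataset sits far below the mean
print(profile["shares"][100])  # share viewed no more than 100 times
```

With such a distribution, mean views are dominated by a few outliers, which is why the studies cited above report medians and threshold shares rather than averages.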
Open Data: Metadata Quality

The (poor) quality of published datasets has been deemed one of the main barriers and impediments hindering the success of data opening policies (Conradie & Choenni, 2014; Barry & Bannister, 2014; Martin, 2014; Janssen et al., 2012; Detlor et al., 2013; Stagars, 2016). The focus in this case has moved specifically to the associated metadata (Zuiderwijk et al., 2018; Neumaier et al., 2016). As data describing data, the quality of metadata is crucial for searching and consulting datasets' descriptions, potentially improving their access and reuse. In fact, the W3C Data on the Web Best Practices Working Group (W3C Working Group, 2017) recommends (Best Practice 1: Provide metadata) that "Providing metadata is a fundamental requirement when publishing data on the Web" and points out that metadata should
help "human users and computer applications" to understand "the data as well as other important aspects that describe a dataset or a distribution". Similarly, the FAIR principles recommend that OD publishers create rich and adequate metadata to make datasets fully findable, accessible, interoperable, and reusable for both humans and machines (Wilkinson et al., 2016). As a result, several scholars in the field (Braunschweig et al., 2012; Zuiderwijk et al., 2018; Neumaier et al., 2016; Reiche & Höfig, 2013; Wilkinson et al., 2016) have emphasised the importance of providing proper metadata to improve the use of OD datasets. For instance, Braunschweig et al. (2012, p. 2) affirm that "Without sufficient metadata … neither manual nor automatic search can find the dataset and it will not be helpful for any user". Zuiderwijk et al. (2018, p. 120) observe that "it is essential for the correct interpretation and use of Open Data to offer sufficient metadata simultaneously to data". Reiche and Höfig (2013, p. 236) argue that "The discoverability is bound to the quality of the metadata" and, consequently, "it is desirable to have high quality metadata". In Figshare's annual report, "The State of Open Data 2018", Wilkinson claims that "Increasingly funders of research are requiring verifiable quality", adding that "The quality of the data has to be able to be assessed" (Digital Science et al., 2018, p. 2). It is also well known, from the first database management systems up to the current Web platforms, that poor data quality can be an obstacle to the full dissemination and reuse of information (Ouzzani et al., 2013).
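As an illustration of the kind of metric such frameworks compute, the following sketch scores a DCAT-like record for completeness. The field list and weights are illustrative assumptions of ours, not the actual metrics of Open Data Portal Watch or the W3C working group.

```python
# A minimal completeness metric over a DCAT-like metadata record.
# Field names and weights are illustrative, not a standard.
EXPECTED_FIELDS = {
    "title": 2.0, "description": 2.0, "license": 2.0,
    "publisher": 1.0, "issued": 1.0, "keyword": 1.0, "format": 1.0,
}

def completeness(record):
    """Weighted share of expected fields that are present and non-empty."""
    total = sum(EXPECTED_FIELDS.values())
    score = sum(w for f, w in EXPECTED_FIELDS.items() if record.get(f))
    return round(score / total, 2)

rich = {"title": "Air quality 2022", "description": "Hourly NO2 readings",
        "license": "CC-BY-4.0", "publisher": "City of X",
        "issued": "2023-01-05", "keyword": ["air", "NO2"], "format": "CSV"}
poor = {"title": "data_final_v2", "description": ""}

print(completeness(rich))  # 1.0
print(completeness(poor))  # 0.2
```

Real frameworks add many further dimensions (accuracy, conformance, timeliness), but even a presence check of this kind separates well-described datasets from ones that are effectively undiscoverable.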
Being recognised as a "multifaceted concept" that includes different dimensions (Batini & Scannapieco, 2016), data quality has been analysed according to different methodologies and measured by technological frameworks through multiple metrics (Wang & Strong, 1996; Bizer & Cyganiak, 2009; Zaveri et al., 2016; Batini et al., 2009). Recently, some studies have addressed the evaluation and monitoring of open data portals and repositories, examining the quality of the datasets and their metadata. These works build on the existing literature on data quality; define their own, more or less complex, assessment frameworks; and test their effectiveness on samples of public portals at different levels of detail (Reiche & Höfig, 2013; Neumaier et al., 2016; Vetró et al., 2016; Oliveira et al., 2016; Máchová & Lnenicka, 2017; Kubler et al., 2018; Zhu & Freeman, 2019; Wilkinson et al., 2019). Reiche and Höfig (2013), suggesting that good-quality metadata is the key to dataset discoverability, defined and implemented five quality metrics, applying them to three CKAN-based5 portals, GovData.de (Germany), data.gov.uk (United Kingdom), and publicdata.eu (Europe), and then comparing their quality performance. On this basis, the authors recommend the creation of automatic evaluation platforms to constantly monitor OGD metadata quality.
5. CKAN is an open-source platform (a DMS, or data management system) developed by the Open Knowledge Foundation (OKF), which has become a worldwide benchmark in terms of data opening programs.
Following Reiche and Höfig's suggestion, Neumaier et al. (2016) have also designed and implemented a metadata quality framework able to be applied to OGD portals based on different software platforms (i.e. CKAN, Socrata, and OpenDataSoft). Their solution maps the different platforms' metadata profiles to the DCAT W3C metadata schema in order to apply a set of quality metrics on the same metadata structure, thus automatically calculating the quality of the OGD at the dataset level. They also deployed a Web platform, "Open Data Portal Watch",6 which periodically assesses the quality of 278 open data catalogues. Vetró et al. (2016) validated their framework on two OGD samples, applying respectively a decentralised (i.e. no common data structure) and a centralised (i.e. with standardised data structures) disclosure strategy, thus highlighting possible quality differences in the published data. The former sample involves the datasets of three Italian municipality portals; the latter, a dataset of the portal OpenCoesione,7 managed by the Italian Department for Cohesion Policy at the Presidency of the Council of Ministers. A set of 14 metrics was applied, both automatically and manually, at the portal, dataset, and cell level (i.e. considering datasets in tabular form). Their findings, although not generalisable due to the small number of datasets evaluated, confirmed that centralised data disclosure yields better-quality data and metadata. Oliveira et al. (2016) analysed the quality of 13 Brazilian OGD portals at the federal, regional, and municipal administrative levels, after applying automatic and manual metadata extraction. From their analysis, it emerged that most of the evaluated datasets lacked metadata or presented only a rudimentary version of it. Máchová and Lnenicka (2017) elaborated a benchmarking framework for the evaluation of various quality dimensions of the portals, applied to 67 national portals around the world. The evaluation of 28 quality criteria was carried out through a paper questionnaire completed by 10 students trained by 2 professors.
For each portal, aggregated measures were collected, resulting in an overall ranking of the 67 portals, in which portals powered by CKAN achieved better scores than the others. The authors suggest that the results of the quality assessments be used to improve the performance of OGD portals, to make them more usable, and therefore to increase their adoption. Kubler et al. (2018) present an "Open Data Portal Quality" framework aimed at comparing OGD portals based on their quality assessment. Assuming that quality is a multidimensional concept (Batini & Scannapieco, 2016) and that different users may have different preferences concerning which quality constructs are more relevant in a given operating context (Quarati et al., 2017), the framework adopts a decision support approach based on the Analytic Hierarchy Process (AHP) (Saaty, 1980). Quality assessment is carried out through the "Open Data Portal Watch" platform (Neumaier et al., 2016). The integration of the latter with an online module (http://mcdm.jeremy-robert.fr) for the application of AHP to a set of portals allows them to be compared based on the preferences expressed by the user. Based on a
6. https://data.wu.ac.at/portalwatch
7. https://opencoesione.gov.it/it/
comparative analysis of the quality of over 250 monitored portals, the authors highlight the usefulness of this system in identifying problems related to the adoption and use of OGD portals. Zhu and Freeman (2019) present a "User Interaction Framework" with 5 quality dimensions, made operational by mapping 30 quality criteria onto 1 or more yes-or-no questions each, for a total of 42 measurement items. The data collection, based on a coding book, and the evaluation of 34 US municipal open data portals (mainly based on the Socrata platform) were carried out by 2 coders in December 2016, producing a city ranking based on an overall performance score. The results suggest that portals should make better use of their platforms to foster greater user participation. The authors conclude that "more research is needed to understand who uses the portals and the data, and for what purposes". Wilkinson et al. (2019) have designed a framework and developed a series of metrics to measure the compliance of a Web resource with the FAIR principles. The "FAIR Evaluation Services" tool8 implements 22 Maturity Indicator Tests, grouped by the 4 principles: 8 (Findable), 5 (Accessible), 7 (Interoperable), and 2 (Reusable). The first group checks to what extent a Web resource is findable via Web engines by virtue of the presence, persistence, or uniqueness of its identifier. The five accessibility metrics check whether data and metadata may be retrieved through a free and open resolution protocol that implements authentication and authorisation. Six of the seven interoperability metrics implement either a "Strong" or a "Weak" version of the same indicator, checking whether data and metadata adopt formal languages broadly applicable for knowledge representation and use FAIR vocabularies or other linked resources.
Finally, the two reusability metrics implement the "Weak" and "Strong" versions of the "Metadata Includes License" indicator, with the latter testing whether "the linked data metadata contains an explicit pointer to the license". The "Weak" metric loosens the stringency of the "Strong" one, looking just at the metadata content. The "FAIR Evaluation Services" tool allows users to evaluate the quality of a given Web resource by providing its Global Unique Identifier, according to all 22 FAIR metrics or to just 1 of the 4 subgroups. Once the evaluation process is complete, the tool summarises the successes and failures of the resource for the selected metrics in an evaluation report. Users can interact with the evaluator directly via the Web interface or via API. Although most of the works on OD quality are based on the assumption that quality is (at least) a necessary condition to facilitate data reuse, they do not provide empirical evidence of a relationship between quality and OD datasets' usage. Nor is this relationship considered in papers that have focused on factors promoting, or barriers preventing, the release of OGD (Barry & Bannister, 2014). At present, this relationship has been studied by two recent works. Quarati and Raffaghelli (2020) provide a contribution in this research direction by empirically exploring the relationship between the usage of open research datasets published on Figshare and the quality of their metadata. After collecting
8. https://fairsharing.github.io/FAIR-Evaluator-FrontEnd
programmatically via API the metadata associated with around 6,000,000 open research resources published by Figshare, they extracted their usage information, gauged by the numbers of views and downloads, and assessed the quality of their metadata by means of the "Open Data Portal Watch" tool (Neumaier et al., 2016). In order to observe relationships between the variables, they computed Spearman's rho correlations between views, downloads, and quality. Their outcomes do not support the idea that higher quality attracts more attention (views) and eventual reuse (downloads). In some cases, they reported negative rho values, underlining a situation where certain ORDs with low-quality metadata are preferred to better-quality ORDs. The authors point out mostly random behaviour and conclude that users might be pushed to adopt ORD by factors other than clear metadata, open licences to reuse objects, etc. Similar conclusions are reported in Quarati (2021), which explores the relationship between usage trends, metadata quality, and compliance with the FAIR principles for around 400,000 datasets of 28 national, municipal, and international OGD portals. The findings show that regardless of their size, the software platform adopted, and their administrative and territorial coverage, most OGD datasets are underutilised. Furthermore, OGD portals pay varying attention to the quality of their datasets' metadata (measured by means of the "Open Data Portal Watch" tool (Neumaier et al., 2016)) as well as to compliance with FAIR (measured by means of the "FAIR Evaluator" tool), partly depending on the software platform adopted. Finally, the statistical analysis carried out did not reveal a clear positive correlation between datasets' usage (measured through two metrics: numbers of views and downloads) and their metadata quality or FAIR compliance.
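The rank correlation used in these studies can be sketched in pure Python as follows. The quality scores and download counts are toy values chosen only to mimic the kind of weak association reported, not data from the studies themselves.

```python
def _ranks(xs):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks.

    Assumes non-constant inputs of equal length.
    """
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy data: metadata quality scores vs. download counts for six datasets.
quality = [0.9, 0.8, 0.7, 0.5, 0.4, 0.2]
downloads = [3, 120, 7, 45, 2, 60]  # deliberately unrelated to quality
print(round(spearman_rho(quality, downloads), 2))  # -0.03
```

A rho near zero, as in this toy run, is the situation the studies describe: dataset quality and dataset usage vary almost independently of each other.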
The Relevance of Open Data Literacy

Engaging with Open Data in the Public Space: Open Data Literacy for a Critical Citizenship

The situation of low data usage portrayed above highlights the contours of a problem. Digging into the motivations, the quality of published open data has also been explored, leading to the conclusion that high quality is not a clear driver of usage. In the literature, quality is not the only barrier mentioned: others include the lack of updates to published open data (Degbelo et al., 2018), citizens' lack of familiarity with technology design and use (Jarke, 2019, p. 2), the acceptance of technology (Zuiderwijk et al., 2015), and the absence of data links (Kim, 2018), since linking datasets to related datasets, or simply making their semantics more specific through a conversion into the Resource Description Framework (RDF), reduces ambiguities regarding their interpretation (Degbelo et al., 2018). Other authors point out a mismatch between the needs of users and the possibilities offered
by the available datasets (Ruijer et al., 2020), or note that open data are not relevant to the problems that users want to solve (Bonina & Eaton, 2020). Nevertheless, all such barriers point to specific contexts of knowledge which require awareness and skills: to relate to open data, or to search for and demand the publication, improvement, or technological facilitation of open data use. This leaves room to claim that the lack of awareness, knowledge, and skills regarding open data might be a factor impeding open data appropriation and fruitful usage by citizens, professionals, and researchers. In this section, we explore the literature on open data literacy for the two types of open data identified (ORD and OGD). In the case of open government, open data literacy has been an object of attention. Open data is part of the information that citizens need, but it is not, by itself, necessarily easy to understand, use, or work with (Robinson & Johnson, 2016). Van Veenstra et al. (2020, p. 11) affirm that "the use of public sector data analytics requires the development of organisational capabilities to ensure effective use, foster collaboration and scale up, as well as legal and ethical capabilities to carefully balance these concerns with goals, such as increased efficiency or operational effectiveness". Research in Nigeria shows that making open data work in the agriculture sector requires building capacity and educating extension agents and stakeholders to appreciate open data benefits in agricultural communication, by organising workshops and seminars (Ifeanyi-obi & Ibiso, 2020). Another study with adults, carried out by Jarke (2019), showed that older citizens' lack of technological skills marginalises them from the focus groups of users consulted for the development of civic technology applications.
Information technologies can serve as a cross-cutting dimension that may improve or hinder any of the other dimensions of government open data use (Gil-Garcia et al., 2020). In this sense, it is worth considering that various literacies may be needed to engage with open data. Open data technical literacy refers to the competencies, knowledge, and skills necessary to download, clean, order, visualise, analyse, and interpret open data. Technical skills cover database management, statistical analysis, and the interpretation of results. In this regard, Zuiderwijk et al. (2015) delimit the use of open data as the activity that a person or organisation performs to view, understand, analyse, and visualise a set of data that a government organisation has published. A broader conception, on the other hand, is so-called critical open data literacy, which refers to the skills, knowledge, and attitudes needed to review the meaning of concepts and how they are operationalised, as well as the visualisations and procedures carried out with data, which can put specific groups of users at risk of inequity or raise ethical concerns. The need for a critical reading of data has been considered from several points of view, such as agency (Baack, 2015), data justice (Taylor, 2017), and the contextualisation in which the analysis is developed (D'Ignazio & Klein, 2020). Indeed, decisions about who is literate and who is not have usually been entangled with the construction and perpetuation of power structures within societies, rather than being a "necessarily empowering and enlightening force" (Data-Pop Alliance and Internews,
6 Why Does Open Data Get Underused? A Focus on the Role of (Open) Data Literacy
157
2015). As part of this critical open data literacy, then, equity must be placed at the centre of data analysis, and practitioners must drive a reflection on data inclusion gaps and the harm they can bring to certain populations (Montes & Slater, 2019). As more people become literate in open data, citizen participation in the political and social sphere could be increased, since the same data that creates a higher level of transparency for the expert offers less transparency for someone with less access and knowledge of how to use open data (Matheus & Janssen, 2020). The technical skills might encompass the discovery and correct interaction with new valuable, open information and even trigger innovation, problem-solving, or products and services improvement by analysing “clean”‘open government data, in a specific context. However, a critical and holistic approach should take from the discovery of the underlying or hidden information to the creation of knowledge that is respectful with vulnerable collectives or can unveil embedded injustice. Nevertheless, the initial stage of interaction with data requires technical abilities, which hence encompasses understanding, insights, fair innovation, and ethical principles. Therefore, it is important to embrace a complex view of the sets of skills required for open data usage.
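As a minimal sketch of the technical side of this literacy, the following Python snippet loads and cleans a small, invented extract of the kind an open data portal might publish; the dataset, column names, and values are assumptions for illustration only:

```python
import csv
import io
from statistics import mean

# Hypothetical extract from an open government data portal: air-quality
# readings per district, with typical quality problems (missing values,
# non-numeric placeholders such as "n/a").
RAW = """district,pm10
Centre,21.5
North,
South,18.0
East,n/a
West,25.5
"""

def load_clean(raw_csv):
    """Parse the CSV and keep only rows with a usable pm10 reading."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        try:
            rows.append({"district": row["district"],
                         "pm10": float(row["pm10"])})
        except (ValueError, TypeError):
            continue  # drop unusable rows rather than guessing values
    return rows

readings = load_clean(RAW)
print(len(readings))                                   # 3 usable rows out of 5
print(round(mean(r["pm10"] for r in readings), 2))     # 21.67
```

Even this toy example shows why literacy matters: deciding whether to drop, impute, or query incomplete records is an interpretive choice, not a purely mechanical one.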
Critical Data Literacy in Researchers' Data Practices

A crucial factor for ORD usage is researchers' data literacy, which still seems to be a concern. As early as 2013, Schneider (2013) considered the need for a framework to address research data literacy. Several studies have also referred to the relevance of support and coaching for researchers in order to develop a more sophisticated understanding of data platforms and practices, as in the case of the Purdue College of Agriculture (Pouchard & Bracke, 2016). In the latter study, the results showed rather basic data usages, with no consideration of technical support from the libraries. Wiorogórska et al. (2018) investigated data practices through a quantitative study in Poland led by the Information Literacy Association (InLitAs) in the context of an international research project named ReDaM. The results revealed that a significant number of respondents knew some basic concepts around research data management (RDM) but had not used the institutional solutions elaborated in their parent institutions. In another EU case, Vilar and Zabukovec (2019) studied researchers' information behaviour, across all research disciplines and in relation to selected demographic variables, through an online survey delivered to a random sample drawn from the central registry of all active researchers in Slovenia. Age, discipline, and in a few cases also sex were noticeable factors influencing the researchers' information behaviour, including data management, curation, and publishing within digital environments. McKiernan et al. (2016), reviewing the literature up to 2016, showed several benefits of sharing data in applied sciences, life sciences, maths, physical sciences, and social sciences, where the advantages relate to the visibility of research in terms of relative citation rates, pointing out the need to support researchers on pathways to open data practices.
G. Santos-Hermosa et al.

Detecting the skills gap has not been
the only matter of concern in the literature. Professional development, mostly carried out by university libraries, has also taken an active part, with programs and activities to develop researchers' data literacy. In fact, libraries have been traditional stakeholders in open access and, more recently, in providing and planning Data and Research Services (DRS), which include the management of institutional data repositories, guidance on metadata for datasets, help in the creation of data management plans, assistance with intellectual property, training, and other issues around OD and openness (Tenopir et al., 2017; Santos-Hermosa, 2019). According to Mani et al. (2021, p. 282), libraries have placed ORD amongst their strategic priorities for the coming years to address new skills, expertise, and partnerships. As a result, they actively contribute to the development of frameworks of competences relating to data in research and academic tasks (Koltay, 2017). In determining data information literacy needs, Carlson et al. (2011) noticed early on that researchers need to integrate the disposition, management, and curation of data along their research activities. These authors carried out a number of interviews and analysed advanced students' performance in geoinformatics activities, within the context of what they called the data information literacy (DIL) program, which prepares learners to achieve such needed skills. Research reproducibility based on open data has also been considered a relevant skill supporting transparency in science. For example, Teal et al. (2015) developed an introductory 2-day intensive workshop on "Data Carpentry", designed to teach basic concepts, skills, and tools for working more effectively and reproducibly with data.
The ability to reuse open data both for future research and for teaching was highlighted by Raffaghelli (2018), through workshops to discuss, reflect on, and design open data activities in the specific field of online, networked learning. It is not documented whether the mentioned activities integrated the topic of academic social networks (ASNs). Rather, as discussed above, ASNs have largely been considered a space for informal professional learning, and their usage is, indeed, intuitive. Researchers move within such platforms, particularly ResearchGate and Academia, using the provided affordances and learning from each other (Kuo et al., 2017; Manca, 2018; Thelwall & Kousha, 2015). Therefore, one might ask to what extent researchers' skills cover the effective adoption of ASNs for the social purpose of sharing and reusing ORD, not to mention the effective publication of such items. To strike a balance between the literacy requirements around institutional repositories and infrastructures for open science and the informal knowledge connected to ASNs, researchers should improve their knowledge of the digital infrastructures they adopt through not only a technical but also a critical lens. Indeed, open and social scholarship can bring many informal benefits for career advancement, researcher reputation, and the social impact of research (Greenhow et al., 2019; Veletsianos & Kimmons, 2012). However, in today's context of datafication, the misuse of scientific results through social media is just one example of the need to disentangle the socio-technical implications of using social media platforms (Manca & Raffaghelli, 2017). Finally, the new participatory approaches to science, including so-called crowd science and responsible research and innovation (Owen et al., 2012),
generate a pushing effect on the need to bridge open data in research with and for society. In this sense, the circle closes: the new paradigms in science rely increasingly on citizens' engagement, therefore requesting from them awareness and certain levels of knowledge and skill to take part in data collection, analysis, or visualisation. The Wikipedia article "List of citizen science projects"9 lists nearly 200 projects, relating particularly to climate change, ecology, biology, computer science, astronomy, cognitive science, and biomedical research, highlighting the advance of an approach in which the borders between academic activity and citizens' participation in the making of science are blurring. As can be imagined, scholars' interventions under such a scheme require the development of new competences to deal with research design and data collection (Taebi et al., 2014). Through this endeavour, ORD crosses the line of the academic space to become a central matter of reflection and citizen activity. Moreover, OGD and ORD can enter academics' activity through teaching, when embracing new approaches to student inquiry. As several scholars have pointed out, open data can be considered open educational resources for all purposes (Atenas et al., 2015; Coughlan, 2019). In this regard, data are starting to be included in some teaching collections at institutional repositories, but because of the weakness of pedagogical metadata schemas (which are closely linked to the software used by each institution), it is not easy to monitor their presence; for instance, they could be included under the "Other" metadata type (Santos-Hermosa et al., 2020). It is also a challenge for scholars to embrace approaches to teaching and learning based on authentic resources that might encompass social innovation, critical thinking, and a sense of connection with the communities from which open data originate.
Considering open data as an educational topic is in line with the concept of "open science education", which refers to education that introduces open science as a subject (Stracke et al., 2020). Therefore, open data, as part of open science, might have a potential impact on education. This situation entails increasing challenges for higher education as a system capable of connecting several forms of open data: data from research, opened to society, and data from the public sphere, opened to research, academic teaching, and learning as creative and generative spaces.
9 https://en.wikipedia.org/wiki/List_of_citizen_science_projects

Actions Supporting Literacies to Unleash the Power of Open Data

The paragraphs above have stressed the need to develop open data literacy in order to approach open data in a more creative and constructive way. Nevertheless, such a vision should be accompanied by actions addressing these learning needs, which is clearly connected, wrapping up our rationale, to societies' ability to generate
lifelong learning spaces, beyond specific or structured educational interventions within the formal schooling system. Indeed, a well-directed lifelong learning strategy around the value of open data can be considered an effective approach to address the non-use of open government and open research data. Several reasons support this statement. Firstly, open data is one of the challenges of implementing the open science paradigm, which is accompanied by the recommendation to provide researchers with the open science skills development they need to publish, share, analyse, and reuse open data (European Commission, 2016). This also includes the application of the FAIR principles, namely, generating findable, accessible, interoperable, and reusable data, along with the creation of the European Open Science Cloud (EOSC) as an environment for hosting and processing research data to support EU science. In addition, open data is included in the major European guidelines and frameworks concerning researchers' skills and career development that bring open science into the research cycle, such as the European Framework for Research Careers (European Commission, 2011a), the Human Resources Strategy for Researchers (HRS4R),10 and the Innovative Doctoral Training Principles (IDTP) (European Commission, 2011b). Some national plans also emphasise the importance of skills development in open science and open data, such as the National Framework on the Transition to an Open Research Environment11 in Ireland. Last, but not least, lifelong learning is a solution to close the open data skills gap which exists between the demand for skilled data workers and the supply of such employees (Carrar et al., 2020). At the recruitment level, such competences are becoming a crucial requirement for specific ICT jobs. In this regard, the European Data Science Academy (EDSA) is working on the issue by analysing industry requirements for skilled data scientists and data workers.
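To make the FAIR principles slightly more tangible, the following sketch checks a dataset description for the fields most often associated with them. The field names and the record are invented for illustration and do not follow any official metadata schema:

```python
# Fields loosely mapped to the four FAIR principles; these names are
# illustrative assumptions, not a standard vocabulary.
REQUIRED = {
    "identifier",   # Findable: a persistent identifier (e.g. a DOI)
    "access_url",   # Accessible: where the data can be retrieved
    "format",       # Interoperable: a standard, open format
    "licence",      # Reusable: explicit terms of reuse
}

def missing_fair_fields(record):
    """Return the FAIR-related fields absent from a dataset description."""
    return sorted(REQUIRED - record.keys())

# A hypothetical dataset description missing its reuse terms.
record = {
    "identifier": "doi:10.1234/example",
    "access_url": "https://data.example.eu/dataset/42",
    "format": "text/csv",
}
print(missing_fair_fields(record))  # ['licence']
```

Real repositories rely on much richer schemas, but the basic idea, that each principle corresponds to checkable properties of a published dataset, is the same.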
As a matter of fact, the skills shortage regarding open data is a problem that involves not only researchers but also many other users, including workers with different intended uses, such as government employees, innovators, data journalists, activists, and citizens. Thus, while government employees may require publishing, management, and leadership skills, innovators could need some management skills (related to community building, business, and analysis), and citizens mainly require reading and understanding levels of literacy. Researchers, data journalists, and activists primarily demand analysis skills and some leadership skills to drive change by using OGD (Gascó-Hernández et al., 2018). In a nutshell, open data skills and training are becoming essential as information management plays a critical part in our everyday roles. Below, we focus on lifelong learning approaches supporting engagement with the two main types of open data (OGD and ORD). In our view, it is worth
10 https://euraxess.ec.europa.eu/jobs/hrs4r
11 https://repository.dri.ie/catalog/0287dj04d
keeping the two types of approaches separate, since they involve quite different strategies and activities, and encompass different institutional contexts.
Lifelong Learning Strategies to Support Open Government Data Usage

Here, we report a few interesting initiatives that are trying to close the skills gap through lifelong learning approaches. We use Cingolani's (2021) model of OGD platforms as a basis to classify these projects and training actions into three main forms, depending on who produces information and who its main recipient is: citizen-to-government (C2G), government-to-citizen (G2C), and citizen-to-citizen (C2C). C2G involves collective inputs from the public to governments to leverage collective intelligence, create policy-related awareness, or even contribute to policy execution (Nam, 2012; Linders, 2012). The Barcelona Open Data Initiative12 (Spain) and FixMyStreet13 (United Kingdom) are two examples. The first aims to educate and empower citizens in the use of open data through numerous training activities, ranging from public awareness activities and short training courses of a few hours to intensive workshops tailored to specific NGOs; good examples of this approach are the Data Citizen School program and the World Data Viz Challenge 2018 Barcelona-Kobe. The second is an NGO-led initiative that uses OGD to allow registered users to post geolocalised pictures of public space irregularities and engage local councils in their follow-up. A training manual explains to citizens how to use the app, how to include GIS data showing assets, and how to import data automatically. The G2C form is used by governments to share specific digital resources (in this case, OGD) so that citizens have the chance to improve the available data and create new products out of them. Examples include the open data portals launched by governments, such as the French Agency for Public Data portal14 and the Portal Brasileiro de Dados Abertos15 (by the INDA National Open Data Infrastructure).
In the first case, the Etalab16 unit organises different training events (hackathons, open data camps, conferences), and the Ministerial Data Administrator, Algorithms, and Source Codes17 (AMDEC) also systematises trainings, with data protection officers, through the Data Value portal (Bothorel et al., 2020). The second initiative is a mechanism of the Brazilian government to guide data initiation projects, by establishing technical standards, to make this data available in a standard, machine-readable format, in addition to providing training (Bittencourt et al., 2019). An example is the workshop "Capacitação: Conceitos, Processos e Boas práticas da abertura de dados do MP",18 which explains concepts, processes, and best practices to the employees of the Ministry of Planning. A last example of this category (G2C) is "Future Public Managers" (United States) (Gascó-Hernández et al., 2018), in which OGD training is embedded in a Master of Public Administration and a related graduate certificate program in Albany, New York. All the data for the in-class activities and homework assignments are based on data from Health Data NY, the state's open health data portal, and HealthData.gov. The third classification, citizen-to-citizen (C2C), is the "do-it-yourself" government form (Linders, 2012), where citizens produce and receive information that is relevant for their self-organisation and that relates to the public sphere, including governmental aspects. The "Monithon initiative" (Italy) is a civil society initiative, created in 2013 by a group of open data activists, seeking to engage the public in using data from the Italian open data portal OpenCoesione.gov.it to verify how Italy spends Structural and Investment Funds from the European Union. Its "monitoring marathon" training for the oversight of public spending has been organised for interested citizens, NGOs, and also high school students. Finally, at a more global level, there are some awareness-raising and skills development actions from the Open Government Partnership (OGP) and the European Commission. Open Gov Week19 is a global call to action to transform the way governments worldwide serve their citizens. The EU Datathon20 is an annual open data competition organised by the Publications Office of the EU since 2017.

12 https://opendata-ajuntament.barcelona.cat/en
13 https://www.fixmystreet.com
14 https://www.data.gouv.fr/fr/
15 https://dados.gov.br/
16 A service reporting to the Prime Minister, available at https://www.etalab.gouv.fr/
17 https://www.datavalue.fr/formation-amdec
It is organised to create new value for citizens through innovation, interaction, and the promotion of the use of open data available on the EU open data portal. Datathons gather teams of young people from all over Europe to stimulate the reuse of data, bringing to the fore the idea that free and creative engagement with data is fundamental to enacting the usage of data artefacts in the information era. Another similar event designed to generate value through reuse is the EU Open Data Days,21 where NGOs, CSOs, governments, and international organisations participate by establishing and sponsoring events, and external people create products or services with companies' and the public sector's data. Having introduced the several types of learning opportunities around OGD, let us also consider the challenges posed by the different approaches. Although all these OGD initiatives are promising in terms of skills development at various levels, there is limited empirical research on how to train citizens and professionals to actively
18 https://wiki.dados.gov.br/Capacitacao-Conceitos-Processos-e-Boas-praticas-da-abertura-de-dados-do-MP.ashx
19 https://www.opengovweek.org/
20 https://op.europa.eu/es/web/eudatathon
21 https://data.europa.eu/en/news/eu-open-data-days
encourage their use of OGD. Problems such as which methods are effective, or which kinds of datasets are more successful according to users' age, professional profiles, citizens' social activities, cultural groups, etc., require further attention. According to Gascó-Hernández et al. (2018, p. 241), "training is a sine qua non condition to increase OGD usage… but alone may not necessarily result in increasing OGD use". Awareness of OGD (introductory skills, creating value from OGD, etc.) is not enough either, since real use depends on additional capacities, such as analysis skills. In this respect, OGD training seems to be more powerful when complemented with knowledge about the users and their context, since users have different needs and interests, and open data uses might require local adaptations. It is also crucial to focus training on government employees because, as OGD custodians, they should develop their knowledge and help the public navigate the same provisions (Kimball, 2011). Therefore, some challenges for OGD reuse are tailoring the training to users' motivations and to the service required of government professionals. Adopting an appropriate metadata schema to describe the data and enable their retrieval (Santos-Hermosa et al., 2017) is also a challenge, and another driver is emerging OGD citizen-led engagement (Purwanto et al., 2020). Finally, it is also crucial to develop social capital, which means having the skills to interact with diverse actors (such as governments, policy makers, or the media).
Lifelong Learning Strategies to Support Open Research Data Usage

Currently, skills development around ORD is mainly carried out through three strategies: as part of a general training program about open access or open science, focused on the practice of research data management (RDM), or concentrated on the data themselves. Multiple examples are out there but scattered across different countries, institutions, and portals. Within the first category, we can start with the FOSTER22 portal, which offers broad training on all aspects of open science through face-to-face training, online courses,23 training materials, an open science handbook (Bezjak et al., 2018), and a multiplier event (an open science trainer Bootcamp). Other courses about open access and open science, created by higher education institutions, often include OD modules as well, such as the online open access course (Santos-Hermosa & Boixadera, 2020) for PhD students at the Universitat Oberta de Catalunya (UOC).24 The Open Science MOOC, launched by the Delft University of Technology, incorporates open data25 (FAIR data, data storage, DMPs, reuse, etc.) as one of the principles for applying open science to academic work. Training based on gaming is another trend; one example is the Open Science Quest,26 presented during the week of the Luxembourg Open Science Forum27 and consisting of a travel pack for learning about best practices in finding and sharing data, amongst other important topics. "Datapolis" is also a board game about building things – services, websites, devices, apps, and research – using closed and open data, launched by the ODI to show the value of OD and to advocate for its innovative use. The usage of open data as an open educational resource in higher education has also been emphasised: the course "Open Research Data in Teaching", developed within the aforementioned FOSTER, has applied the concept particularly to open research data (Andreassen et al., 2019). The second strategy, related to research data management (RDM), contains programs, courses, and materials for self-paced learning in different countries. As examples, the University of Tartu (Estonia) offers the online course Research Data Management and Publishing,28 which covers how to create a data management plan (with their own DMPonline tool) and how to find OD for reuse. Within the framework of the National Forum for Research Data Management, Danish universities have developed an e-learning course on RDM29 (with modules on the FAIR principles and DMPs).

22 https://www.fosteropenscience.eu/
23 Amongst them, the ones about "Data Protection and Ethics" and "Managing and Sharing Research Data."
24 Also available (web version) at http://materials.cv.uoc.edu/cdocent/PID_00266068/index.html
The University of Warsaw (Poland),30 with support from the National Open Science Desk,31 offers researchers a series of workshops and lectures covering both the practical and the legal dimensions of OD, and Leiden University's Centre for Digital Scholarship offers specialised courses focused on data management and the use of digital data.32 The third strategy, concentrated on the data themselves, ranges from introducing what OD are and their entire research cycle (creation, collection, use, processing, sharing, and management) to a more complex issue: data protection and ethics. The European Data Portal, for instance, provides an e-learning program with 16 online training modules for discovering more about open data. The "Data Conversations" events33 (which originated in Lancaster University Library and evolved into the "Luxembourg Data Conversations") are about researchers sharing their data stories with other researchers: whether about the key concept of personal data and the General Data Protection Regulation (GDPR) or the importance of
25 https://www.edx.org/course/open-science-sharing-your-research-with-the-world
26 Materials can be found on https://doi.org/10.5281/zenodo.2646121
27 https://www.openaire.eu/blogs/luxembourg-open-science-forum
28 https://sisu.ut.ee/andmekursus/home0
29 https://medarbejdere.au.dk/en/research-data-management/e-learning-course/
30 Materials can be found on https://doi.org/10.5281/zenodo.2646121
31 https://akademia.icm.edu.pl/szkolenia/warsztaty-z-zarzadzania-danymi-badawczymi-w-lublinie/
32 https://www.library.universiteitleiden.nl/research-and-publishing/centre-for-digital-scholarship
33 https://www.lancaster.ac.uk/library/research-data-management/data-conversations/
setting appropriate safeguards, including data minimisation and pseudonymisation, as well as the question of transferring data to third countries. Finally, we should not overlook the train-the-trainer programs, which aim to keep trainers skilled. Some are offered by FOSTER Plus, as in the case of the Carpentry Instructor training, and by OpenAIRE. Essentials 4 Data Support,34 created by Research Data Netherlands, is a course for people supporting researchers in storing, managing, archiving, and sharing their research data, as part of a citizen and crowd science approach. Despite these three lifelong learning strategies to cover the literacies needed to engage with open research data, researchers consider that training opportunities are not yet widely offered. As highlighted earlier in this chapter, reuse is not being satisfactorily achieved. One of the main challenges, given the heterogeneity of data and research practices, is the provision of open science skills specific to disciplines (Swiatek, 2019). In this sense, an appropriate balance of training programs based on the needs of research communities should be ensured. Another essential challenge concerns not just what training is delivered (contents, skills, etc.) but how it is delivered (practice being preferred over courses) and when (introducing specific skills in educational stages before higher education) (O'Carroll et al., 2017). A last challenge involves incentives for sharing and reuse: as sharing data often sits apart from researcher evaluation, this dimension covers both internal (academic reward system) and external (policies) factors associated with data sharing (Meijer et al., 2017). In relation to these challenges, certain good practices should be considered for improving each situation.
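The pseudonymisation safeguard mentioned above can be sketched in a few lines of Python. The salt, field names, and values here are invented for illustration; a real GDPR-compliant workflow would also require documented key management and a re-identification policy:

```python
import hashlib

# A project-specific secret prepended before hashing, so the same
# identifier always maps to the same pseudonym within this project but
# cannot be looked up from a public rainbow table. Illustrative value only.
SALT = b"project-specific-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted, truncated SHA-256 digest."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:12]

# A hypothetical record prepared for sharing: the direct identifier is
# replaced, while the research variable is kept.
record = {"participant": "jane.doe@example.org", "score": 42}
shared = {"participant": pseudonymise(record["participant"]),
          "score": record["score"]}
print(shared["participant"] != record["participant"])  # True
```

Note that pseudonymised data are still personal data under the GDPR, which is precisely why such events stress safeguards rather than presenting hashing as anonymisation.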
Domain-based data management training for liaison librarians, adopted at the Indiana University Libraries, was effective in understanding concrete needs from different disciplines (Wittenberg et al., 2018). Some "learning by doing" case studies (Clare et al., 2019) have demonstrated how to practically engage the entire research community with RDM. The Bratislava Declaration of Young Researchers addressed the need to engage with open science from high school onwards, as part of the learning process. Finally, the Open Science Framework (OSF) service, from the Centre for Open Science, is encouraging journals to reward researchers with badges in publications for sharing data.35 All these training experiences, good practices, and challenges must be taken into account in the future to promote reuse and unleash the power of open data.
34 https://www.openaire.eu/blogs/data-conversations-gdpr-and-your-research-data
35 https://www.cos.io/initiatives/badges
Discussion and Conclusions

In the last decade, the open and public knowledge movement has made huge efforts to further the agenda of open data, considering it a crucial factor. On the one hand, governments around the world have created open government data (OGD) portals and repositories to make data more accessible and usable by the public, motivated by values such as transparency and citizen participation (Harrison et al., 2012). On the other hand, several data repositories and portals have specialised in open research data (ORD) as the kernel of the open science movement (Molloy, 2011). In this chapter, we explored usage trends, actual impacts, and lifelong learning strategies around open data. Our aim was to deepen our understanding of the connections between open data usage and data literacy, both in the simpler forms of engagement with open data, namely citizen use, and in the more advanced requirements of usage by professionals and researchers. By analysing the various issues preventing open data usage, with particular attention to open data quality, we set the basis for understanding the role of data literacy. Hence, by analysing the literature on open data literacy in the cases of OGD and ORD, and by carefully exploring the various lifelong learning approaches to developing awareness, knowledge, skills, and attitudes around open data, we demonstrated the relevance of being (open) data literate to unleash the embedded potential of both OGD and ORD. Indeed, in these two strands of practice, we uncovered a situation of growing concern around the publication and usage of open data. Overall, many institutions still measure the success of their data opening policies almost exclusively by the number of published datasets (Mergel et al., 2018). Nonetheless, despite OGD and ORD's potential value after publication, usage remains scarce (Quarati, 2021, in press; Quarati & Raffaghelli, 2020).
In the case of OGD, even if there are huge numbers of open datasets and public portals, only a limited number have been intensely used (Zuiderwijk et al., 2015). Even where usage occurs, as in the incipient artificial intelligence industry embedded in other public actions like smart cities, or in the development of advanced commercial products based on open data such as self-driving cars, it is linked to highly specific sectors and technological elites (Kim, 2018, p. 27). Some of the problems analysed related to the low quality of open data and the type of data published, which do not meet citizens' needs, but we also demonstrated the role of literacy. In this regard, the political will to support open data policies as a significant instrument of participatory government is "what makes or breaks the success of open data" (Open Data for Development Network & Omidyar Network, 2018). In the case of ORD, the situation also seems critical. According to Edwards et al. (2011, p. 669), a result of the fourth scientific paradigm is the blurring effect between disciplinary data practices and the generation of grey zones. A resulting phenomenon is that of "data friction", or the cost of time, energy, and human attention needed to generate specific data, which becomes incompatible when crossing disciplinary lines. Research data appear to be contextual, relying heavily on the different
scientific communities and traditions (Borgman, 2015). Not only data but, most importantly, metadata fall within this highly context-dependent activity. Therefore, the factors influencing data practices are extremely relevant, and the creation of spaces of reflection and informal or non-formal learning amongst researchers is a source of grounded communication over data, moving beyond the sole expression of interest in ORD (Zuiderwijk et al., 2020). We hence noted that users, whether private-sector professionals, citizens, or even researchers, have limited awareness of what they can do with open data and lack the skills required to assess its quality and use, for both OGD (Martin & Begany, 2018; Safarov et al., 2017) and ORD (Koltay, 2017). For example, in the case of citizens and professionals willing to adopt OGD, participatory approaches require preparing instructions, support, coaching, tutorials, and, last but not least, full courses through which stakeholders understand how the open data is delivered, to what extent it can be applied to community problems, or which types of innovations could be developed. In more advanced cases, the relevant problem is how to dialogue with the institutional infrastructures producing open data, so as to have a "push effect" on the quality and the release of high-quality open data. This might also be the case for researchers around ORD. Researchers' data literacy is crucial to developing quality ORD, which in time might support effective replication or reuse in further scientific developments. Therefore, lifelong learning strategies are particularly concerned with helping researchers understand the value of the FAIR principles and their application. In a more conceptual vein, we also distinguished two relevant approaches to data, namely technical and critical, and highlighted the cross-cutting and concurrent presence of these two approaches in both the OGD and ORD cases.
In fact, there are technological and structural issues which demand several levels of technical skill to search, find, extract, and process open data, and effective integration into products and services is only possible through coding and data visualisation. By contrast, dealing with an organisational culture which may or may not be close to the principles of data openness, understanding the contexts of application, or engaging in the design of open data-driven services and products requires understanding the political and socio-cultural contexts in which the data are used. These two sides of data literacy around open data should lead to specific lifelong learning strategies. The situation has nonetheless evolved from a central focus on technical approaches to data towards a deeper understanding of data in society. Indeed, the debate has started to uncover the problem of "no data": how the data collected underrepresent certain human collectives and social problems, and which data are completely absent and never gathered at all, rendering the human problems behind them invisible (D'Ignazio & Klein, 2020). Another relevant problem is how the learning through which citizens, professionals, or researchers achieve the literacies supporting open data usage is being implemented. Approaches based mainly on traditional courses might help at the level of the schooling system or lifelong learning. Adult and professional learning, however, requires complex, self-directed pathways including all sorts of
engagement with resources, activities, and networks in pursuit of personal developmental goals, which in our case relate to the value and role of open data in human activity. In this regard, higher education institutions (and libraries, as potential stakeholders) might assume a key role: supporting the dynamics of innovation emerging from crowd and citizen science; adopting OGD to develop students' critical capacity to engage and participate in democracy; entering a fruitful dialogue that supports the technical skills the private sector needs to build new businesses on ORD or OGD; and responding to social accountability. Government organisations can likewise support lifelong learning by engaging in dialogue with citizens and other stakeholders through professional activities (Ruijer et al., 2020). While much professional and adult learning will occur this way, the involvement of universities could lead to more advanced forms of awareness and recognition of competences. The concept of lifelong learning ecologies might be useful in this regard (Sangrá et al., 2019). It considers learning as something that happens at the crossover of engaging with resources, participating in activities, and cultivating relationships, triggered by contexts of learning and hence connected and chosen by the individual. Open data can be expected to become an important node in such learning ecologies for citizens, professionals, and scholars. Indeed, open data behave as complex cultural artefacts which afford various levels of interaction and demand different levels of literacy, from digital and coding skills to statistics and a critical attitude towards the contextualisation of datasets and the types of human groups they represent.
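The coding-skills end of this spectrum can be made concrete with a minimal sketch: extracting a published open dataset and computing a simple per-group summary. The dataset, column names, and values below are invented for illustration; in practice the CSV would be downloaded from an open data portal rather than defined inline.

```python
# Illustrative sketch of the "technical" side of open data literacy:
# parsing a (hypothetical) open dataset and summarising it per group.
# An inline sample stands in for a portal download so the example runs
# on its own.
import csv
import io
from collections import defaultdict

SAMPLE_CSV = """district,year,pm25
Centre,2022,12.1
Centre,2023,10.4
North,2022,18.9
North,2023,17.2
"""

def mean_by_district(csv_text: str) -> dict[str, float]:
    """Average the pm25 readings per district."""
    totals, counts = defaultdict(float), defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["district"]] += float(row["pm25"])
        counts[row["district"]] += 1
    return {d: totals[d] / counts[d] for d in totals}

print(mean_by_district(SAMPLE_CSV))
```

The critical side of data literacy begins exactly where such a script ends: asking which districts, years, or populations are missing from the file in the first place.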
Nonetheless, lifelong learning ecologies will require forms of support, coaching, and learning recognition that connect knowledge and abilities with effective competences (Jackson, 2013). The very use of open data could trigger key skills, but such engagement would be more potent if guided or organised so as to lead to visible learning and literacies. This is a challenging pathway in which universities have to engage jointly with governments and with expert institutions and bodies devoted to the development of open data. Moreover, lifelong learning strategies are also required for the professoriate, not only because academics may be producers of ORD but also because they are responsible for the open data literacy of their students, who will become the professionals and citizens contributing to open data ecosystems. This cycle could open up to citizenship through participatory science approaches in which students are engaged, or through learning that uses open data as open educational resources. Academics could thus catalyse, through research and teaching, a methodological and deontological endeavour towards open knowledge. Our opening question was clearly rhetorical, and the exploration of the literature and ongoing projects carried out here points to a single, simple answer: open data are underused because users lack not only the basic skills but also the critical awareness and understanding that lead to a clear motivation to actively engage with open data as
a catalyst of social innovation and a more participatory approach to public knowledge. The publication of items and the creation of data infrastructures for sharing public knowledge therefore represent a considerable effort which requires accompanying measures, so that end users can engage at several levels according to their creative interests and motivations. We claimed the value of open data literacy, carefully setting out several approaches to developing the skills and knowledge that pave the way to open data usage. In this regard, institutional and community agendas might put pressure on professionals, citizens, and researchers to focus on specific forms of literacy. Scholars in particular may resist activities such as adopting open data in teaching or publishing and reusing open data, for these are still not rewarded institutionally, or at least not to a high degree. The case of professionals may differ since, as seen, they can take immediate advantage by building products and services on open data. Finally, citizens could be in either position: not rewarded by their political systems or communities, and thus not interested in engaging with open data, or able to take advantage of their social and cultural contexts of life. In any of these cases, the lack of skills and understanding, and the lack of opportunities to develop them, will be a barrier, as our overview of the literature and practice demonstrates. Activism and/or civil disobedience, as part of a political position, might also put pressure on the system to provide the appropriate means to become "open data" literate.
It goes without saying that, while formal training imposes an explicit institutional agenda tracing the types of desired literacies, a critical approach to self-determined data literacies for citizens, professionals, and researchers needs to be connected to more informal spaces, such as collaboratories, hackathons, campaigns, manifestos, and round tables with policy makers, around emerging dilemmas like the ethical implications of using data and the need to keep building valuable public knowledge. Nonetheless, open data literacy, which we put forward as a key to solving the problem, cannot be the only vector of success in open data usage. As we reported only briefly in this chapter, other relevant areas of research and development relate to digital data infrastructures, metadata quality, and the particular cultures of data publishing and sharing, including the systems adopted to reward those willing to share their data. However, gaining confidence and becoming aware of the potential benefits of open data could change cultural mindsets that see data sharing as irrelevant, too time-consuming, or disconnected from the direct benefits of open, public knowledge sharing. We believe the evidence reported here strongly supports the claim that lifelong learning, namely formal, non-formal, and informal learning opportunities across the several socio-technical spaces where data can be shared, is crucial to feeding synergies between actors' literacies and the technological development of key data infrastructures, standards, and quality in the near future. This is also the way to promote truly emancipatory and fair (open) data cultures.
G. Santos-Hermosa et al.
References

Andreassen, H. N., Låg, T., Schoutsen, M., & van der Meer, H. (2019). Can research data improve how we live, learn, and act? The use of open data in teaching and the role of the library. Symposium, LILAC, April 24–26.
Atenas, J., Havemann, L., & Priego, E. (2015). Open data as open educational resources: Towards transversal skills and global citizenship. Open Praxis, 7(4), 377–389. https://doi.org/10.5944/openpraxis.7.4.233
Baack, S. (2015). Datafication and empowerment: How the open data movement re-articulates notions of democracy, participation, and journalism. Big Data & Society, 2(2). https://doi.org/10.1177/2053951715594634
Barbosa, L. S., Pham, K., Silva, C. T., Vieira, M. R., & Freire, J. M. (2014). Structured open urban data: Understanding the landscape. Big Data, 2(3), 144–154. https://doi.org/10.1089/big.2014.0020
Barry, E., & Bannister, F. (2014). Barriers to open data release: A view from the top. Information Polity, 19, 129–152. https://doi.org/10.3233/IP-140327
Batini, C., & Scannapieco, M. (2016). Data and information quality: Dimensions, principles and techniques. Springer. https://doi.org/10.1007/978-3-319-24106-7
Batini, C., Cappiello, C., Francalanci, C., & Maurino, A. (2009). Methodologies for data quality assessment and improvement. ACM Computing Surveys, 41, 1. https://doi.org/10.1145/1541880.1541883
Bezjak, S., Clyburne-Sherin, A., Conzett, P., Fernandes, P., Görögh, E., Helbig, K., Kramer, B., Labastida, I., Niemeyer, K., Psomopoulos, F., Ross-Hellauer, T., Schneider, R., Tennant, J., Verbakel, E., Brinken, H., & Heller, L. (2018). Open Science training handbook. Zenodo. https://doi.org/10.5281/zenodo.1212496
Bill & Melinda Gates Foundation. (2017). Gates open research. Retrieved November 2, 2018, from https://gatesopenresearch.org/about/policies#dataavail
Bittencourt, C. J., Estima, G., & Pestana, G. (2019). Open data initiatives in Brazil (pp. 1–4). 14th Iberian Conference on Information Systems and Technologies (CISTI). https://doi.org/10.23919/CISTI.2019.8760592
Bizer, C., & Cyganiak, R. (2009). Quality-driven information filtering using the WIQA policy framework. Web Semantics, 7, 1–10. https://doi.org/10.1016/j.websem.2008.02.005
Bonina, C., & Eaton, B. (2020). Cultivating open government data platform ecosystems through governance: Lessons from Buenos Aires, Mexico City and Montevideo. Government Information Quarterly, 37(3), 101479. https://doi.org/10.1016/j.giq.2020.101479
Borgman, C. L. (2015). Big data, little data, no data: Scholarship in the networked world. MIT Press.
Bothorel, E., Combes, S., & Vedel, R. (2020). Mission Bothorel. Pour une politique publique de la donnée. https://www.gouvernement.fr/sites/default/files/contenu/piece-jointe/2020/12/rapport_-_pour_une_politique_publique_de_la_donnee_-_23.12.2020__0.pdf
Braunschweig, K., Eberius, J., Thiele, M., & Lehner, W. (2012). The state of open data: Limits of current open data platforms.
Carlson, J., Fosmire, M., Miller, C. C., & Nelson, M. S. (2011). Determining data information literacy needs: A study of students and research faculty. portal: Libraries and the Academy, 11(2), 629–657. https://doi.org/10.1353/pla.2011.0022
Carrar, W., Fisher, S., & van Steenberg, E. (2020). Analytical report 2: E-skills and open data. Publications Office of the European Union. https://doi.org/10.2830/429131
Castells, M. (2001). La era de la Información: Economía, sociedad y cultura. Vasa.
CERN. (2018). CMS data preservation, re-use and open access policy. CERN Open Data Portal. https://doi.org/10.7483/OPENDATA.CMS.7347.JDWH
Cingolani, L. (2021). The survival of open government platforms: Empirical insights from a global sample. Government Information Quarterly, 38(1), 101522. https://doi.org/10.1016/j.giq.2020.101522
Clare, C., Cruz, M., Papadopoulou, E., Savage, J., Teperek, M., Yan Wang, W., & Yeomans, J. (2019). Engaging researchers with data management: The cookbook. https://doi.org/10.11647/OBP.0185
Conradie, P., & Choenni, S. (2014). On the barriers for local government releasing open data. Government Information Quarterly, 31, S10–S17. https://doi.org/10.1016/j.giq.2014.01.003
Coughlan, T. (2019). The use of open data as a material for learning. Educational Technology Research and Development, 68, 1–28. https://doi.org/10.1007/s11423-019-09706-y
Cruz, R. A. B., & Lee, H. J. (2019). Open governance and duality of technology: The open data designer-user disconnect in the Philippines. eJournal of eDemocracy and Open Government, 11(2), 94–118. https://doi.org/10.29379/jedem.v11i2.545
D'Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press. https://doi.org/10.7551/mitpress/11805.001.0001
Dai, Q., Shin, E., & Smith, C. (2018). Open and inclusive collaboration in science. OECD Science, Technology and Industry Policy Papers, 7, 1–29. https://doi.org/10.1787/2dbff737-en
Data-Pop Alliance and Internews. (2015). Beyond data literacy: Reinventing community engagement and empowerment in the age of data. Working paper for discussion.
Davies, T. (2010). Open data, democracy and public sector. Interface, 1–47. Retrieved from http://practicalparticipation.co.uk/odi/report/wp-content/uploads/2010/08/How-is-open-government-data-being-used-in-practice.pdf
Davies, T., Walker, S., Rubinstein, M., & Perini, F. (Eds.). (2019). The state of open data: Histories and horizons. African Minds and International Development Research Centre.
Degbelo, A., Wissing, J., & Kauppinen, T. (2018). A comparison of geovisualizations and data tables for transparency enablement in the open government data landscape. International Journal of Electronic Government Research, 14(4), 39–64. https://doi.org/10.4018/IJEGR.2018100104
Detlor, B., Hupfer, M. E., Ruhi, U., & Zhao, L. (2013). Information quality and community municipal portal use. Government Information Quarterly, 30, 23–32. https://doi.org/10.1016/j.giq.2012.08.004
Digital Science, Fane, B., Ayris, P., Hahnel, M., Hrynaszkiewicz, I., Baynes, G., & Farrell, E. (2019). The state of open data report 2019: A selection of analyses and articles about open data, curated by Figshare. Digital Science. https://doi.org/10.6084/M9.FIGSHARE.9980783.V2
Edwards, P. N., Mayernik, M. S., Batcheller, A. L., Bowker, G. C., & Borgman, C. L. (2011). Science friction: Data, metadata, and collaboration. Social Studies of Science, 41(5), 667–690. https://doi.org/10.1177/0306312711413314
European Commission. (2011a). European framework for research careers. https://cdn5.euraxess.org/sites/default/files/policy_library/towards_a_european_framework_for_research_careers_final.pdf
European Commission. (2011b). Using the principles for innovative doctoral training as a tool for guiding reforms of doctoral education in Europe. Report of the ERA Steering Group Human Resources and Mobility (ERA SGHRM). https://cdn5.euraxess.org/sites/default/files/principles_for_innovative_doctoral_training.pdf
European Commission. (2016). Open innovation, open science, open to the world: A vision for Europe. Digital Single Market. https://doi.org/10.2777/061652
European Commission – RISE – Research Innovation and Science Policy Experts. (2016). Mallorca declaration on open science: Achieving open science. Retrieved from https://ec.europa.eu/research/openvision/pdf/rise/mallorca_declaration_2017.pdf
Fecher, B., & Friesike, S. (2013). Open Science: One term, five schools of thought. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2272036
Gascó-Hernández, M., Martin, E. G., Reggi, L., Pyo, S., & Luna-Reyes, L. F. (2018). Promoting the use of open government data: Cases of training and engagement. Government Information Quarterly, 35(2), 233–242. https://doi.org/10.1016/j.giq.2018.01.003
Gil-Garcia, J. R., Gasco-Hernandez, M., & Pardo, T. A. (2020). Beyond transparency, participation, and collaboration? A reflection on the dimensions of open government. Public Performance & Management Review, 43(3), 483–502. https://doi.org/10.1080/15309576.2020.1734726
Greco, P. (Ed.). (2014). Open Science, Open Data: La scienza trasparente. Egea.
Greenhow, C., Gleason, B., & Staudt Willet, K. B. (2019). Social scholarship revisited: Changing scholarly practices in the age of social media. British Journal of Educational Technology, 50(3), 987–1004. https://doi.org/10.1111/bjet.12772
Gregory, K. M., Cousijn, H., Groth, P., Scharnhorst, A., & Wyatt, S. (2020). Understanding data search as a socio-technical practice. Journal of Information Science, 46, 459–475. https://doi.org/10.1177/0165551519837182
Kim, H. (2018). Interlinking open government data in Korea using administrative district knowledge graph. Journal of Information Science Theory and Practice, 6, 18–30. https://doi.org/10.1633/JISTaP.2018.6.1.2
Hargittai, E. (2003). The digital divide and what to do about it. In D. C. Jones (Ed.), New economy handbook (pp. 822–841). Academic Press.
Harrison, T., Pardo, T., & Cook, M. (2012). Creating open government ecosystems: A research and development agenda. Future Internet, 4(4), 900–928. https://doi.org/10.3390/fi4040900
Ifeanyi-obi, C., & Ibiso, H. (2020). Extension agents' perception of open data usage in agricultural communication in Abia State. Journal of Agricultural Extension, 24(4), 91–99. https://doi.org/10.4314/jae.v24i4.10
Jackson, N. J. (2013). The concept of learning ecologies. In Lifewide learning, education & personal development e-book. http://www.lifewideebook.co.uk/uploads/1/0/8/4/10842717/chapter_a5.pdf
Janssen, M., Charalabidis, Y., & Zuiderwijk, A. (2012). Benefits, adoption barriers and myths of open data and open government. Information Systems Management, 29, 258–268. https://doi.org/10.1080/10580530.2012.716740
Jarke, J. (2019). Open government for all? Co-creating digital public services for older adults through data walks. Online Information Review, 43(6), 1003–1020. https://doi.org/10.1108/OIR-02-2018-0059
Jetzek, T., Avital, M., & Bjorn-Andersen, N. (2019). The sustainable value of open government data. Journal of the Association for Information Systems, 20, 702–734. https://doi.org/10.17705/1jais.00549
Kassen, M. (2020). Understanding motivations of citizens to reuse open data: Open government data as a philanthropic movement. Innovations, 23(1), 1–27. https://doi.org/10.1080/14479338.2020.1738940
Kessler, R. (2018). Whitepaper: Practical challenges for researchers in data sharing: Review. Learned Publishing, 31(4), 417–419. https://doi.org/10.1002/leap.1184
Kimball, M. B. (2011). Mandated state-level open government training programs. Government Information Quarterly, 28(4), 474–483. https://doi.org/10.1016/j.giq.2011.04.003
Koltay, T. (2017). Data literacy for researchers and data librarians. Journal of Librarianship and Information Science, 49(1), 3–14. https://doi.org/10.1177/0961000615616450
Kubler, S., Robert, J., Neumaier, S., Umbrich, J., & Traon, Y. L. (2018). Comparison of metadata quality in open data portals using the Analytic Hierarchy Process. Government Information Quarterly, 35, 13–29. https://doi.org/10.1016/j.giq.2017.11.003
Kuo, T., Tsai, G. Y., Jim Wu, Y.-C., & Alhalabi, W. (2017). From sociability to creditability for academics. Computers in Human Behavior, 75, 975–984. https://doi.org/10.1016/j.chb.2016.07.044
Lassinantti, J., Ståhlbröst, A., & Runardotter, M. (2019). Relevant social groups for open data use and engagement. Government Information Quarterly, 36, 98–111. https://doi.org/10.1016/j.giq.2018.11.001
Linders, D. (2012). From e-government to we-government: Defining a typology for citizen coproduction in the age of social media. Government Information Quarterly, 29(4), 446–454. https://doi.org/10.1016/j.giq.2012.06.003
Lourenço, R. P. (2015). An analysis of open government portals: A perspective of transparency for accountability. Government Information Quarterly, 32, 323–332. https://doi.org/10.1016/j.giq.2015.05.006
Máchová, R., & Lnenicka, M. (2017). Evaluating the quality of open data portals on the national level. Journal of Theoretical and Applied Electronic Commerce Research, 12, 21–41. https://doi.org/10.4067/S0718-18762017000100003
Magalhaes, G., & Roseira, C. (2020). Open government data and the private sector: An empirical view on business models and value creation. Government Information Quarterly, 37(3), 101248. https://doi.org/10.1016/j.giq.2017.08.004
Manca, S. (2018). ResearchGate and Academia.edu as networked socio-technical systems for scholarly communication: A literature review. Research in Learning Technology, 26, 1–16. https://doi.org/10.25304/rlt.v26.2008
Manca, S., & Raffaghelli, J. E. (2017). Towards a multilevel framework for analysing academic social network sites: A networked socio-technical perspective (pp. 193–201). Proceedings of the 4th European Conference on Social Media, ECSM 2017. https://www.scopus.com/inward/record.uri?eid=2-s2.0-85028564003&partnerID=40&md5=61bd59fdd42ef107e4c209412c4ecb6d
Mani, N. S., Cawley, M., Henley, A., Triumph, T., & Williams, J. M. (2021). Creating a data science framework: A model for academic research libraries. Journal of Library Administration, 61(3), 281–300. https://doi.org/10.1080/01930826.2021.1883366
Martin, C. (2014). Barriers to the open government data agenda: Taking a multi-level perspective. Policy & Internet, 6, 217–240.
Martin, E. G., & Begany, G. M. (2018). Transforming government health data into all-star open data: Benchmarking data quality. Journal of Public Health Management and Practice, 24(6), E23–E25. https://doi.org/10.1097/PHH.0000000000000799
Matheus, R., & Janssen, M. (2020). A systematic literature study to unravel transparency enabled by open government data: The window theory. Public Performance & Management Review, 43(3), 503–534. https://doi.org/10.1080/15309576.2019.1691025
McKiernan, E. C., Bourne, P. E., Brown, C. T., Buck, S., Kenall, A., Lin, J., et al. (2016). How open science helps researchers succeed. eLife, 5. https://doi.org/10.7554/eLife.16800
Meijer, I., Berghmans, S., Cousijn, H., Tatum, C., Deakin, G., Plume, A., Rushforth, A., et al. (2017). Open data: The researcher perspective. University of Leiden.
Mergel, I., Kleibrink, A., & Sorvik, J. (2018). Open data outcomes: U.S. cities between product and process innovation. Government Information Quarterly, 35, 622–632. https://doi.org/10.1016/j.giq.2018.09.004
Molloy, J. C. (2011). The open knowledge foundation: Open data means better science. PLoS Biology, 9(12). https://doi.org/10.1371/journal.pbio.1001195
Montes, M. G., & Slater, D. (2019). Issues in open data: Data literacy. In T. Davies, S. Walker, M. Rubinstein, & F. Perini (Eds.), The state of open data: Histories and horizons (pp. 274–286). African Minds and International Development Research Centre. http://stateofopendata.od4d.net
Nam, T. (2012). Suggesting frameworks of citizen-sourcing via government 2.0. Government Information Quarterly, 29(1), 12–20. https://doi.org/10.1016/j.giq.2011.07.005
Neumaier, S., Umbrich, J., & Polleres, A. (2016). Automated quality assessment of metadata across open data portals. Journal of Data and Information Quality, 8(2), 1–29. https://doi.org/10.1145/2964909
NWO. (n.d.). Open science. Retrieved November 2, 2018, from https://www.nwo.nl/en/policies/open+science
O'Carroll, C., Hyllseth, B., van den Berg, R., Kohl, U., Kamerlin, C., Brennan, N., & O'Neill, G. (2017). Providing researchers with the skills and competencies they need to practise open science. European Commission, Directorate-General for Research and Innovation. https://data.europa.eu/doi/10.2777/121253
OECD. (2018). Open government data report: Enhancing policy maturity for sustainable impact. OECD. https://doi.org/10.1787/9789264305847-en
Oliveira, M. I., de Oliveira, H. R., Oliveira, L. A., & Lóscio, B. F. (2016). Open government data portals analysis: The Brazilian case. In Proceedings of the 17th International Digital Government Research Conference on Digital Government Research (pp. 415–424). ACM. https://doi.org/10.1145/2912160.2912163
Open Data for Development Network & Omidyar Network. (2018). Open Data Barometer, 4th edition. https://opendatabarometer.org/doc/4thEdition/ODB-4thEdition-GlobalReport.pdf
Ouzzani, M., Papotti, P., & Rahm, E. (2013). Introduction to the special issue on data quality. Information Systems, 38, 885–886. https://doi.org/10.1016/j.is.2013.03.001
Owen, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760. https://doi.org/10.1093/scipol/scs093
Pouchard, L., & Bracke, M. S. (2016). An analysis of selected data practices: A case study of the Purdue College of Agriculture. Issues in Science and Technology Librarianship, 2016(85). https://doi.org/10.5062/F4057CX4
Purwanto, A., Zuiderwijk, A., & Janssen, M. (2020). Citizen engagement with open government data: Lessons learned from Indonesia's presidential election. Transforming Government: People, Process and Policy, 14(1), 1–30. https://doi.org/10.1108/TG-06-2019-0051
Quarati, A. (2021). Open government data: Usage trends and metadata quality. Journal of Information Science. https://doi.org/10.1177/01655515211027775
Quarati, A., & De Martino, M. (2019). Open government data usage: A brief overview. In Proceedings of the 23rd International Database Applications & Engineering Symposium (IDEAS 2019), June 10–12, 2019. https://doi.org/10.1145/3331076.3331115
Quarati, A., & Raffaghelli, J. E. (2020). Do researchers use open research data? Exploring the relationships between usage trends and metadata quality across scientific disciplines from the Figshare case. Journal of Information Science, 48, 423. https://doi.org/10.1177/0165551520961048
Quarati, A., Albertoni, R., & De Martino, M. (2017). Overall quality assessment of SKOS thesauri: An AHP-based approach. Journal of Information Science, 43, 816–834. https://doi.org/10.1177/0165551516671079
Raffaghelli, J. E. (2018, May 15). Pathways to openness in networked learning research: The case of open data. Retrieved August 30, 2020, from https://www.networkedlearningconference.org.uk/abstracts/ws_raffaghelli.htm
Raffaghelli, J. E., & Manca, S. (2019). Is there a social life in open data? The case of open data practices in educational technology research. Publications, 7(1), 9. https://doi.org/10.3390/PUBLICATIONS7010009
Reiche, K. J., & Höfig, E. (2013). Implementation of metadata quality metrics and application on public government data (pp. 236–241). 2013 IEEE 37th Annual Computer Software and Applications Conference Workshops. https://doi.org/10.1109/COMPSACW.2013.32
Robinson, P. J., & Johnson, P. A. (2016). Civic hackathons: New terrain for local government-citizen interaction? Urban Planning, 1(2), 65–74. https://doi.org/10.17645/up.v1i2.627
Ruijer, E., Grimmelikhuijsen, S., van den Berg, J., & Meijer, A. (2020). Open data work: Understanding open data usage from a practice lens. International Review of Administrative Sciences, 86(1), 3. https://doi.org/10.1177/0020852317753068
Saaty, T. L. (1980). The analytic hierarchy process: Planning, priority setting, resource allocation. McGraw-Hill. Retrieved from https://books.google.it/books?id=Xxi7AAAAIAAJ
Safarov, I., Meijer, A. J., & Grimmelikhuijsen, S. (2017). Utilization of open government data: A systematic literature review of types, conditions, effects and users. Information Polity, 22, 1–24.
Sangrá, A., Raffaghelli, J. E., & Guitert-Catasús, M. (2019). Learning ecologies through a lens: Ontological, methodological and applicative issues. A systematic review of the literature. British Journal of Educational Technology, 50(4), 1619–1638. https://doi.org/10.1111/bjet.12795
Santos-Hermosa, G. (2019). L'educació oberta a Europa: avenços, integració amb la ciència oberta i rol bibliotecari. BiD: textos universitaris de biblioteconomia i documentació, 43. https://doi.org/10.1344/BiD2019.43.14
Santos-Hermosa, G., & Boixadera, M. (2020). Open access [learning material online]. Universitat Oberta de Catalunya. http://hdl.handle.net/10609/101366
Santos-Hermosa, G., Ferran-Ferrer, N., & Abadal, E. (2017). Repositories of open educational resources: An assessment of reuse and educational aspects. The International Review of Research in Open and Distributed Learning, 18(5). https://doi.org/10.19173/irrodl.v18i5.3063
Santos-Hermosa, G., Estupinyà, E., Nonó-Rius, B., Paris-Folch, L., & Prats-Prat, J. (2020). Open educational resources (OER) in the Spanish universities. Profesional De La Información, 29(6). https://doi.org/10.3145/epi.2020.nov.37
Schneider, R. (2013). Research data literacy. In Communications in computer and information science (Vol. 397, pp. 134–140). Springer. https://doi.org/10.1007/978-3-319-03919-0_16
Digital Science, Hahnel, M., Fane, B., Treadway, J., Baynes, G., Wilkinson, R., et al. (2018). The state of open data report 2018. figshare. https://doi.org/10.6084/m9.figshare.7195058.v2
Stagars, M. (2016). Promises, barriers, and success stories of open data. In Open data in Southeast Asia: Towards economic prosperity, government transparency, and citizen participation in the ASEAN (pp. 13–28). Springer International Publishing. https://doi.org/10.1007/978-3-319-32170-7_2
Stracke, C. M., Bozkurt, A., Conole, G., Nascimbeni, F., Ossiannilsson, E., Sharma, R. C., Burgos, D., Cangialosi, K., Cox, G., Mason, J., Nerantzi, C., Obiageli, A., Jane, F., Ramírez Montoya, M. S., Santos-Hermosa, G., Sgouropoulou, C., & Shon, J. G. (2020). Open education and open science for our global society during and after the COVID-19 outbreak. https://doi.org/10.5281/zenodo.4275669
Sumitomo, T., & Koshizuka, N. (2018). Progress and initiatives for open data policy in Japan. Computer, 51(12), 14–23. https://doi.org/10.1109/MC.2018.2879993
Swiatek, C. (2019). LIBER digital skills working group: Case studies on open science skilling and training initiatives in Europe. Zenodo. https://doi.org/10.5281/zenodo.3901485
Taebi, B., Correljé, A., Cuppen, E., Dignum, M., & Pesch, U. (2014). Responsible innovation as an endorsement of public values: The need for interdisciplinary research. Journal of Responsible Innovation, 1(1), 118–124. https://doi.org/10.1080/23299460.2014.882072
Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big Data & Society, July–December 2017. https://doi.org/10.1177/2053951717736335
Teal, T. K., Cranston, K. A., Lapp, H., White, E., Wilson, G., Ram, K., & Pawlik, A. (2015). Data Carpentry: Workshops to increase data literacy for researchers. International Journal of Digital Curation, 10(1), 135–143. https://doi.org/10.2218/ijdc.v10i1.351
Tenopir, C., Talja, S., Horstmann, W., Late, E., Hughes, D., Pollock, D., Schmidt, B., Baird, L., Sandusky, R., & Allard, S. (2017). Research data services in European academic research libraries. Liber Quarterly: The Journal of European Research Libraries, 27(1), 23–44. https://doi.org/10.18352/lq.10180
Thelwall, M., & Kousha, K. (2015). ResearchGate: Disseminating, communicating, and measuring scholarship? Journal of the Association for Information Science and Technology, 66(5), 876–889. https://doi.org/10.1002/asi.23236
UK Transparency & Cabinet Office UK. (2012). Open data white paper: Unleashing the potential. The Stationery Office. https://data.gov.uk/library/open-data-white-paper
United Nations. (1998). The universal declaration of human rights, 1948–1998. United Nations Dept. of Public Information. http://www.un.org/en/universal-declaration-human-rights/
van Veenstra, A. F., Grommé, F., & Djafari, S. (2020). The use of public sector data analytics in the Netherlands. Transforming Government: People, Process and Policy. https://doi.org/10.1108/TG-09-2019-0095
176
G. Santos-Hermosa et al.
6 Why Does Open Data Get Underused? A Focus on the Role of (Open) Data Literacy
Gema Santos-Hermosa is a Lecturer at the Faculty of Information and Audiovisual Media and Co-director of the Postgraduate Studies Program ‘Open Science: Promotion, Support and Assessment’ at the University of Barcelona. She holds a PhD in Information Science and Communication; her doctoral thesis examines the development and reuse of open educational resources in higher education. She has published extensively in the fields of open education and open science, connecting both worlds and proposing strategies to develop the skills needed to engage in both movements.
Alfonso Quarati is a Researcher at the National Research Council of Italy. He has worked on grid, cloud, and distributed computing. Recently, he has been studying approaches to automating the analysis of open data and metadata quality. He has authored several papers connecting open data quality and usage, exploring the relationship between technological infrastructures and users’ actual engagement with open data.
Eugenia Loria-Soriano is a PhD candidate in Educational Technologies at the Universitat Oberta de Catalunya. Her thesis concerns the development of a competency framework to support open data literacy in the context of Open Government Data. She is interested in professional learning that develops data literacy as a key competence for interacting with the wealth of open data in society.
Juliana E. Raffaghelli is a Research Professor at the University of Padua and an Associate Researcher of the Edul@b Research Group at the Universitat Oberta de Catalunya. Over the last 15 years, she has coordinated international research units, networks, and projects in Latin America, the Balkans, Turkey, and Western Europe in the field of educational technologies. Her work has covered professional development for technology uptake in international/global contexts of collaboration, through a socio-technical and post-colonial lens. Recently, her research has explored emergent manifestations of data practices and artificial intelligence through critical, open, and emancipatory pedagogies. She has coordinated six special issues of international journals and has contributed to the field with two books and numerous research articles and chapters in English, Spanish, Italian, and Portuguese.
Chapter 7
Responsible Educational Technology Research: From Open Science and Open Data to Ethics and Trustworthy Learning Analytics Davinia Hernández-Leo, Ishari Amarasinghe, Marc Beardsley, Eyad Hakami, Aurelio Ruiz García, and Patricia Santos
Abstract This chapter unfolds some elements of responsible research in the educational technology field and provides examples about how these elements have been considered in initiatives by the Interactive and Distributed Technologies for Education (TIDE) research group at Universitat Pompeu Fabra in Barcelona. First, it focuses on open science, an ongoing movement that promotes, on the one hand, transparent and frequent open-access updates of the research progress and the collected data and, on the other hand, reproducible, accurate, and verifiable research, bringing benefits for the individual researchers, the research community, and the society. Second, the chapter discusses ethics perspectives in educational technology research, relevant when collecting and sharing data and also in the design and development of technologies, especially when they are based on data analytics or artificial intelligence techniques. The latter aspects relate to the capacity of educational software systems to support human agency and preserve human well-being.

Keywords Responsible Research and Innovation (RRI) · Open science · Ethics · Educational technologies · Learning analytics
D. Hernández-Leo (*) · I. Amarasinghe · M. Beardsley · E. Hakami · A. R. García · P. Santos TIDE, ICT Department, Universitat Pompeu Fabra, Barcelona, Spain e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 J. E. Raffaghelli, A. Sangrà (eds.), Data Cultures in Higher Education, Higher Education Dynamics 59, https://doi.org/10.1007/978-3-031-24193-2_7
Introduction

The “scientific ethos” comprises the guiding norms, beliefs, and ideas that provide the foundations on which science operates as a social institution. In 1942, Merton provided one of the first and most influential descriptions of the contemporary scientific ethos, based on four interrelated norms: universalism (conformance to previously accepted knowledge and evaluation subject to pre-established impersonal criteria), communalism (a sense of common ownership of goods, in return for recognition and esteem), disinterestedness (acting out of passion for knowledge, ensuring integrity), and organised scepticism (detached community scrutiny of beliefs in terms of empirical and logical criteria) (Merton, 1942).

The “scientific ethos” provides shared foundations for those self-identified and recognised as part of the scientific community, as well as a source of legitimacy granted to them by society. The perceived social credibility of the practices corresponding to these norms (such as certification and ruling organised through peer review) has traditionally given the scientific community great autonomy in functioning as a social institution, including the selection of research questions, the internal recognition mechanisms for individuals and organisations, internal governance mechanisms, and even how public funding is distributed. Openness was intimately linked to the concept of good science, in terms of the stimulation of discovery, its impact on society, and the guarantee of systemic integrity; the openness of these practices was an implicit requirement for keeping this legitimacy. Although the Mertonian view has evolved over time (with the contributions by Kuhn and Latour among the most popular evolutions and controversies), it still represents a tacitly accepted representation for the scientific community and its contract with society.
But the pace and impact of the scientific and social transformations of recent decades have, at a minimum, transformed the best practices associated with those beliefs, if not created pressure to redefine them for new and future contexts (Krishna, 2020). Like any other aspect of society, information and communications technology (ICT) has affected the way science is conducted and has generated new areas of research (such as educational technologies). It has also transformed how we communicate, share, and access results (and has even generated a new range of research products, such as research data and software). The globalisation of the economy has greatly expanded the number of actors in the scientific community, both geographically and with respect to the increased participation of private, profit-oriented research organisations (such as large multinationals, most notably in the ICT and biomedical sectors). The rise of democracy across the world, together with a massive increase in access to education in several regions as a necessary condition for participating in the “knowledge society”, has promoted openness in new terms, such as citizen participation (as in “crowd science”, Franzoni & Sauermann, 2014) but also criticism (such as the rise of denialism and questioning in society, combined with the so-called reproducibility crisis), which is even
questioning the role of science as the provider of general, unquestionable knowledge. Finally, the acknowledgement of grand global challenges (such as climate change or the COVID-19 pandemic), in which science is perceived not only as a key driver for tackling them but as a relevant actor in policy-making, has fostered new roles for science that rest on standards different from those of academic science.

As a result of this evolution, a “renaissance” of the discussions around the scientific ethos has emerged. On the one hand, it has resulted in a widening of the understanding of the concepts of universalism, communalism, and disinterestedness, represented in Europe by the rise of the concept of Responsible Research and Innovation (RRI; EC, 2015). RRI has been extensively used in recent years to describe an approach to governing science, including its processes and results, but also encompassing an assessment of its implications and of societal expectations. RRI expands the traditional evolution of open science, often focused on technical organisation, with an extra emphasis on normative concerns and democratic deficits.

RRI is built on multiple evolving pillars. The first relates to public engagement and considers the ways in which research can be shared with the public. The second has to do with gender and ranges from gender equality in research to the incorporation of the gender dimension into the conduct of research. The third is about promoting science education, with a strong emphasis on the inclusiveness of this education. Ethics is the fourth pillar and covers topics such as ethical research methods and professional ethics. Open science is a pillar that includes open access to research results, publications, data, software, etc.; open processes, such as open peer review; and infrastructures, such as the open science cloud.
Other pillars have to do with governance, social justice, inclusion, and sustainability, which, together with open science, aim at updating the scientific ethos and its associated best practices to the current and anticipated evolution of society and of the role of science and its community in it.

The relevance of data to the RRI pillars is clear. Indeed, data practices are intrinsically connected with several of these pillars, most markedly ethics and open science. Research in education and in educational technologies is no exception. Technologies are increasingly transforming the practice of education in general, the type of research conducted on education, and the way it is conducted. All dimensions of RRI are of special relevance here, as education is a complex human phenomenon and is increasingly considered a global common good (UNESCO, 2015). Researchers in this field are increasingly recognising these challenges and adopting strategies towards what we could call “responsible educational technology research”. This chapter aims to contribute to this community effort by sharing practices and meta-research studies (Ioannidis, 2018) that align with elements of responsible research (Fig. 7.1) in the educational technology field. The cases summarise how these elements have been considered in initiatives by the Interactive and Distributed Technologies for Education research group at Universitat Pompeu Fabra in Barcelona (TIDE-UPF, www.upf.edu/web/tide).
Fig. 7.1 Elements of responsible educational technology research addressed in the chapter
The first element of responsible educational technology research tackled in the chapter is directly connected to the notion of open science, in particular its reproducibility and open data facets. Like other fields, educational technology research suffers from a reproducibility crisis. There are even voices in the community warning that educational research has been found to have lower replication rates than other academic fields and calling for “conducting replications on important findings” as “essential to moving toward a more reliable and trustworthy understanding of educational environments” (Makel, 2014). Several initiatives aim to change closed and nontransparent approaches to research. van der Zee and Reich (2018) argue that it is possible to engage in “Open Educational Science” by making each stage of the scientific cycle (research design, data collection, analysis, and publication) more transparent and accessible through open approaches such as preregistration, data sharing, transparent analyses, and open-access publication. There are now several specialised Web platforms (e.g. Zenodo, Figshare, LearnSphere, GitHub) where researchers can share the datasets and software used in studies so that other researchers can reproduce or extend the analysis. However, despite their relevance, these initiatives have not been broadly adopted (Raffaghelli & Manca, 2019).

The second and third elements discussed in this chapter relate to ethics in educational technology research. The second element deals with the perspective of ethical data collection. The presented perspective recognises that standard ethics procedures in educational technology research already take key aspects into consideration, but it also questions the relevance of the details of how those procedures are implemented, such as participants’ prior knowledge and the articulation of consent forms.
Finally, the third element tackles the perspective of the ethical design of data-driven educational technology interventions. This perspective also connects with the responsible dimensions of social justice, inclusion, and sustainability mentioned above and aligns with emerging discussions about the ethical considerations desired in the design of trustworthy artificial intelligence (AI) systems. Given the expectation that AI and data-driven systems (including those used to support educational scenarios) should clearly
prevent and minimise their risks while still exploiting their benefits (HLEG-AI, 2019), a focus on human-centred and well-being-driven design approaches offers a path for data-driven systems to respect human autonomy and the common good. The following sections elaborate on each of these responsible educational technology research elements, with examples of how each has been considered or meta-researched.
Efforts Towards Open Science in Educational Technology Research

Reproducibility and open data are key facets of the open science movement, which promotes, on the one hand, transparent and frequent open-access updates of the research progress and the collected data and, on the other hand, reproducible, accurate, and verifiable research, bringing benefits for the individual researchers, the research community, and the society. This section presents two examples of efforts carried out by TIDE-UPF towards achieving these principles in educational technology research.
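To make the open data facet concrete, the snippet below sketches the kind of metadata record a repository such as Zenodo expects when a dataset is deposited through its REST deposit API. The field names follow Zenodo’s documented deposit metadata, but the title, creator, and concrete values are illustrative placeholders, not an actual deposit from the work described here.

```python
import json

# Illustrative metadata for depositing a study dataset on a repository
# such as Zenodo. Field names follow Zenodo's deposit API; the values
# below are placeholders, not a real deposit.
deposition_metadata = {
    "metadata": {
        "title": "Example: anonymised behavioural data from a wordlist study",
        "upload_type": "dataset",
        "description": "Trial-level recall scores; see the linked preregistration.",
        "creators": [{"name": "Doe, Jane", "affiliation": "Example University"}],
        "keywords": ["open data", "reproducibility", "educational technology"],
        "access_right": "open",
        "license": "cc-by",
    }
}

payload = json.dumps(deposition_metadata, indent=2)
# In a real workflow this payload would be POSTed (with an access token)
# to the repository's deposit endpoint, and the data files attached to
# the record the repository creates in response.
print(payload)
```

Because such a record is machine-readable, other researchers (and harvesters) can discover, cite, and reuse the dataset independently of the original publication.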
Seeking Reproducibility

Multimodal learning analytics (MMLA) is a research domain within the educational technologies field that aims to better understand and measure learning processes by making use of data from an array of sources, including “recorded video and audio, pen strokes, position tracking devices, biosensors” (Ochoa et al., 2017). Educational technology researchers are increasingly exploring MMLA, as there is a rising need to model across physical and digital worlds to gain a more holistic view of student learning, and the supporting technologies and techniques are becoming more accessible. However, the complexity and exploratory nature of MMLA can make it difficult to adhere to open science standards. Standards such as the Transparency and Openness Promotion (TOP) Guidelines call for greater transparency in relation to research designs, materials, data, and analysis to support reproducible research (Nosek et al., 2016). Thus, to explore the challenges of incorporating multimodal data on learning into research in a reproducible manner, a direct replication of a multimodal wordlist experiment on the forward effect of testing (Pastötter et al., 2011) was conducted. The study made use of both behavioural and physiological data (electrophysiological measures of brain activity) but differed from the original study in that more accessible, low-cost equipment and open-source software were used. Further, two rounds of the experiment were run, with the second round focused on increasing the power of the study and improving its reproducibility. A summary of the study and its findings follows.
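To illustrate the kind of data integration MMLA requires, the sketch below aligns a sparse behavioural event log with a dense physiological sample stream by nearest timestamp, one of the basic operations in multimodal pipelines. The sampling rate, event names, and timings are invented for illustration; this is not code from the studies discussed in this chapter.

```python
from bisect import bisect_left

def nearest_sample(timestamps, t):
    """Index of the sample whose timestamp is closest to event time t."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Compare the neighbours on either side of the insertion point.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Hypothetical streams: EEG-like samples at 128 Hz and sparse behavioural events.
sample_times = [i / 128.0 for i in range(1280)]        # 10 s of samples
events = [("list1_shown", 0.50), ("recall_start", 4.03), ("recall_end", 9.97)]

# Map each behavioural event to the index of its nearest physiological sample.
aligned = {label: nearest_sample(sample_times, t) for label, t in events}
print(aligned)
```

Real pipelines add complications (clock drift between devices, dropped samples, windowing around events), but timestamp alignment of this kind is the common first step.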
The replication study, Seeking reproducibility: Assessing a multimodal study of the testing effect (Beardsley et al., 2018), served two purposes. The first was to become more familiar with, and contribute an empirical study on, retrieval learning (also referred to as test-enhanced learning), as it is an underutilised teaching and learning strategy (McDaniel & Fisher, 1991; Roediger & Karpicke, 2006). The second was to validate a multimodal setup upon which future conceptual replications of the wordlist experiment and the testing effect could be conducted (e.g. replacing the wordlists with more authentic learning materials).

In the original study, Pastötter et al. (2011) found behavioural and physiological evidence that retrieval during learning facilitates the encoding of subsequent learning. Participants studied five distinct wordlists. The group that did a retrieval activity (free recall of words) rather than a restudy activity (reviewing the list of words) prior to learning the last wordlist was able to recall more words from that list. Moreover, alpha wave oscillations, as measured by an electroencephalogram (EEG), differed between the groups: the alpha power of the group performing the restudy activities increased across wordlists, whereas the retrieval activity group showed no such increase. Increases in alpha power corresponded with poorer recall of words from the target wordlists.

The original study was conducted in a laboratory setting and made use of costly equipment and proprietary software. The replication study used a low-cost EEG device and open-source software and was conducted in a classroom setting. University students (n = 46) took part in the replication study, which was conducted across two rounds. A low-cost Emotiv EPOC® was used to acquire electrophysiological data. PsychoPy (Peirce, 2009) was used to present the wordlists and collect behavioural data.
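The alpha-power measure discussed above can be illustrated with a toy computation: estimating spectral power in the 8–12 Hz (alpha) band of a short signal. The sketch below applies a naive discrete Fourier transform to synthetic data; it is purely illustrative and not the acquisition or processing pipeline used in the replication study.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi] Hz band via a naive DFT (illustrative, O(n^2))."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# Synthetic 1 s "EEG" trace sampled at 128 Hz: a 10 Hz (alpha) component
# plus a weaker 40 Hz component.
fs = 128
trace = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 40 * t / fs)
         for t in range(fs)]

alpha = band_power(trace, fs, 8, 12)    # dominated by the 10 Hz component
gamma = band_power(trace, fs, 35, 45)   # picks up the weaker 40 Hz component
print(alpha, gamma)
```

Production EEG analysis would instead use an FFT with windowing and artefact rejection, but the band-power idea is the same: comparing how much signal energy falls in the alpha range across conditions.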
OpenViBE was used to acquire and process the EEG signal (Renard et al., 2010), and RStudio was used to perform the statistical analysis (RStudio Team, 2015). However, the replication attempt had participants study three rather than five distinct wordlists, due to the limited battery life of the low-cost EEG device. Behavioural results of the wordlist experiment were replicated, but physiological results were not. The retrieval group (M = 6.32, SD = 1.84) performed significantly better than the restudy group (M = 2.33, SD = 1.40) in recalling words from the third and final wordlist (p
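As a hedged illustration, the behavioural group comparison reported above can be approximately reconstructed from the summary statistics alone. The chapter reports only the total sample (n = 46); the equal split of 23 participants per group assumed below is a placeholder for illustration, so the resulting t value, degrees of freedom, and effect size are indicative only.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    v1, v2 = s1 * s1 / n1, s2 * s2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 * v1 / (n1 - 1) + v2 * v2 / (n2 - 1))
    return t, df

# Summary statistics reported above; the per-group n of 23 is an assumption
# made for illustration (only the total n = 46 is reported).
t, df = welch_t(6.32, 1.84, 23, 2.33, 1.40, 23)
d = (6.32 - 2.33) / math.sqrt((1.84 ** 2 + 1.40 ** 2) / 2)  # Cohen's d (pooled SD)
print(round(t, 2), round(df, 1), round(d, 2))
```

Even under this assumed split, the standardised mean difference is very large, which is consistent with the chapter’s report that the behavioural effect replicated clearly.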