Quantifying Human Resources: Uses and Analyses (ISBN 9781119721758, 111972175X, 9781119721765, 1119721768)

Quantifying Human Resources

To Ariane

Technological Changes and Human Resources Set
coordinated by Patrick Gilbert

Volume 2

Quantifying Human Resources Uses and Analyses

Clotilde Coron

First published 2020 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2020
The rights of Clotilde Coron to be identified as the author of this work have been asserted by her in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2019957535
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-446-9

Contents

Acknowledgments  ix

Introduction  xi

Chapter 1. From the Statisticalization of Labor to Human Resources Algorithms: The Different Uses of Quantification  1
1.1. Quantifying reality: quantifying individuals or positions  2
1.1.1. The statisticalization of individuals and work  2
1.1.2. Informing and justifying decisions concerning individuals  11
1.2. From reporting to HR data analysis  16
1.2.1. HR reports and dashboards: definitions and examples  16
1.2.2. HR analytics and statistical studies  26
1.3. Big Data and the use of HR algorithms  32
1.3.1. Big Data in HR: definitions and examples  32
1.3.2. The breaks introduced by Big Data in HR  40

Chapter 2. Quantification and Decision-making  45
2.1. In search of objectivity  46
2.1.1. The myth of objective quantification  46
2.1.2. Limited objectivity  52
2.1.3. Objectivity, a central issue in HR  58
2.2. In search of personalization  63
2.2.1. Are we reaching the end of the positioning of statistics as a science of large numbers?  64
2.2.2. Personalization: a challenge for the HR function  72
2.3. In search of predictability  75
2.3.1. Are we heading toward a rise in predictability at the expense of understanding?  75
2.3.2. The predictive approach: an issue for the HR function  82

Chapter 3. How are Quantified HR Management Tools Appropriated by Different Agents?  87
3.1. The different avatars of the link between managerial rationalization and quantification  88
3.1.1. Bureaucracy  88
3.1.2. New Public Management  91
3.1.3. Algorithmic management  96
3.2. Distrust of data collection and processing  99
3.2.1. Providing data, not such a harmless approach for employees  99
3.2.2. Can numbers be made to reflect whatever we like?  106
3.3. Distrust of a disembodied decision  109
3.3.1. Decisions made solely on the basis of figures  109
3.3.2. Decisions made solely by algorithms  114

Chapter 4. What are the Effects of Quantification on the Human Resources Function?  119
4.1. Quantification for HR policy evaluation?  119
4.1.1. Measuring the implementation of HR policies  120
4.1.2. Measuring the effects of HR policies  126
4.2. Quantifying in order to legitimize the HR function?  129
4.2.1. Measuring the performance of the HR function  130
4.2.2. Measuring the link between HR function performance and organizational performance  133
4.3. Quantification and the risk of HR business automation  139
4.3.1. HR professions with a high risk of automation  139
4.3.2. Support for the employees concerned  143

Chapter 5. The Ethical Issues of Quantification  147
5.1. Protection of personal data  148
5.1.1. Risks relating to personal data  149
5.1.2. Obligations and actions of companies with regard to the protection of personal data  152
5.2. Quantification and discrimination(s)  155
5.2.1. Quantification as a shield against discrimination  156
5.2.2. The risks of discrimination related to the use of quantification  162
5.3. Opening the “black box” of quantification  165
5.3.1. Training HR actors, employees and their representatives as well as data experts on HR quantification  166
5.3.2. Mobilizing organizational leverage  172

Conclusion  177

References  187

Index  201

Acknowledgments

I would like to warmly thank all the people working at IAE Paris, the administrative staff and the teacher-researchers, for the stimulating working atmosphere and exchanges. In particular, I would like to thank Patrick Gilbert for his trust, support and wise advice. My gratitude also goes to Pascal Braun for his attentive review and enriching remarks. Finally, I would like to thank the team at ISTE, without whom this book would not have been possible.

Introduction

This book arises from an initial observation: quantification has gradually invaded all modern Western societies, and organizations and companies are not exempt from this trend. As a result, the human resources (HR) function is increasingly using quantification tools. However, quantification raises specific questions when it concerns human beings. Consequently, HR quantification gives rise to a variety of approaches, in particular: an approach that values the use of quantification as a guarantee of objectivity, of scientific rigor and, ultimately, of the improvement of the HR function; and a more critical approach that highlights the social foundations of the practice of quantification and thus challenges the myth of totally neutral or objective quantification. These two main approaches make it possible to clarify the aim of this book, which seeks to take advantage of their respective contributions in order to maintain a broad vision of the challenges of HR quantification.

I.1. The omnipresence of quantification in Western societies

In The Measure of Reality, Crosby (1998) describes the turning point in Medieval and Renaissance Europe that led to the supremacy of quantitative over qualitative thinking. Crosby gives several examples illustrating how widespread this phenomenon was in various fields: the invention and diffusion of the mechanical clock, double-entry accounting and perspective painting, for example. Even music could not escape this movement of “metrologization” (Vatin 2013): it became “measured” and rhythmic, obeying quantified rules. Crosby goes so far as to link the rise of
quantification to the supremacy that Europeans enjoyed in the following centuries. The author reminds us that the transition to measurement and the quantitative method has been part of a very important change in mentality, and that the deeply rooted habits of a society dominated by quantification today make us partly blind to the implications of this upheaval.

Crosby gives several reasons for this upheaval. First, he evokes the development of trade and the State, which manifested itself in two emblematic places, the market square and the university, and then the renewal of science. But above all, he underlines the importance attached to visualization in the Middle Ages. According to him, the transition from oral to written transmission, whether in literature, music or account books, and the appearance of geometry and perspective in painting, accompanied and catalyzed the transition to quantification, which became necessary for these different activities: tempo and pitch measurement to write music, double-entry accounting to keep account books and the calculation of perspective are all ways of introducing quantification into areas that had not previously used it.

Supiot (2015, p. 104, author’s translation) also notes the growing importance of numbers, particularly in the Western world: “It is in the Western world that expectations of them have constantly expanded: initially objects of contemplation, they became a means of knowledge and then of forecasting, before being endowed with a strictly legal force with the contemporary practice of governance by numbers.” Supiot thus insists on the normative use of quantification, particularly in law and in international treaties and conventions, among others.
More precisely, he identifies four normative functions conferred on quantification: accountability (an illustration being the account books that link numbers and the law), administration (knowing the resources of a population in order to be able to act on them), judging (the judge having to weigh up each testimony to determine the probability that the accused is guilty) and legislating (using statistics to decide laws in the field of public health: for example, the preventive inoculation against smallpox in the 18th Century, which could reduce the disease as a whole but be fatal for some of those inoculated).

I.2. The specific challenges of human resources quantification: quantifying the human being

Ultimately, these authors agree on the central role of quantification in our history and in our societies today. More recently, the rise in the amount of available data has further increased the importance of this role, and has raised new questions, leading to new uses and even new sciences: the use of algorithms in different fields (Cardon 2015; O’Neil 2016), the rise of social physics that uses data on human behavior to model it (Pentland 2014), the study of social networks, etc.

Organizations are no exception to this rule: quantification is a central practice in organizations. Many areas of the company are affected: finance, audit, marketing, HR (human resources), etc. This book focuses on the HR function. This function groups together all the activities that enable an organization to have the human resources (staff, skills, etc.) necessary for it to operate properly (Cadin et al. 2012). Thus, it brings together recruitment, training, mobility, career management, dialog with trade unions, promotion, staff appraisal, etc. In other words, it is a function that manages the “human”, insofar as the majority of these missions are related to human beings (candidates during recruitment, employees, trade unionists, managers, etc.).

HR quantification actually covers a variety of practices and situations, which we will elaborate on throughout the book:
– quantification of individuals: measurement of individual performance, individual skills, etc. This practice, the stakes of which are specified in Chapters 1 and 2, can be identified during decisions regarding recruitment, salary raises and promotion, for example;
– work quantification: job classification, workload quantification, etc. This measure does not concern human beings directly, but rather the work they must do. Chapters 1 and 2 will examine this practice at length;
– quantification of the activity of the HR function: evaluation of the performance of the HR function, the effects of HR policies on the organization, etc. This practice, which is discussed in detail in Chapter 4, becomes all the more important as the HR function is required to prove its legitimacy.

These uses may seem disparate, but it seemed important to us to deal with them jointly, as they overlap on a number of issues. Thus, their usefulness for the HR function, or their appropriation by various agents, constitutes
transversal challenges. In addition, in these three types of practices, quantification refers to the human being and/or their activities. However, the possibility of quantifying the human and human activities has given rise to numerous methodological and ethical debates in the literature. Two main positions can be identified. The first, which is the basis of the psychotechnical approach, seeks to broaden the scope of what is measurable in human beings: skills, behaviors, motivations, etc. The second, resulting from different theoretical frameworks, criticizes the postulates of the psychotechnical approach and considers on the contrary that the human being is never reducible to what can be measured.

The psychotechnical approach was developed at the beginning of the 20th Century. It is based on the idea that people’s skills, behaviors and motivations can be measured objectively. As a result, the majority of psychotechnicians’ research focuses on measuring instruments. They highlight four qualities necessary for a good measuring instrument: standardization, ranking, fidelity and validity (Huteau and Lautrey 2006).

Standardization refers to the fact that all subjects must pass exactly the same test (hence the importance of formalizing the conditions for taking the test, for example). Similarly, the correction of the test must leave as little margin as possible to the corrector. The stated objective of formalization is to make the assessment as objective as possible, by trying to avoid having the test results influenced by the test conditions or the assessor’s subjectivity.

Ranking means that the test must make it possible to differentiate individuals, in other words to rank them, usually on a scale (e.g. a rating scale). This characteristic implies having items whose difficulty is known in advance, and with a variation in the levels of difficulty. Indeed, items passed by the vast majority of individuals discriminate as poorly as items passed by very few individuals. As a result, psychotechnicians recommend that items of varying levels of difficulty be mixed in the same test in order to achieve a more differentiated ranking of individuals.

Fidelity refers to the fact that test results must be stable over time. Individual test results are influenced by random factors such as the fitness level of individuals, and the objective is to minimize this hazard.

Finally, validity refers to the fact that the test must contribute to an accurate diagnosis or prognosis, one that is close to reality. This is called the “predictive value” of the test. This predictive value can be assessed by comparing the results obtained on a test with the actual situation that follows: for example, comparing a ranking of applications received for a position based on a test with the scores obtained on individual assessments by successful
candidates, so as to infer the match between the test used for recruitment and the skills of candidates in real situations.

Two typical examples of this approach are the measurement of the intelligence quotient (IQ) and the measurement of the g factor (Box I.1). The psychotechnical approach is therefore very explicitly part of an approach aimed at measuring the human being and demonstrating the advantages of such a measurement. Thus, psychotechnical work emphasizes that measurement allows for greater objectivity and better decision-making if it follows three assumptions (McCourt 1999). First of all, a good evaluation is universal and impersonal. Second, it must follow a specific procedure (the psychotechnical procedure). The last assumption is that organizational performance is the sum of individual performances.

IQ tests are probably the most widely known tests of human intellectual ability for the general public. There are actually two definitions of IQ: an index of the speed of intellectual development (IQ-Stern) or an index of positioning within a group (IQ-Wechsler). IQ-Stern depends on the age of the individual and measures the intellectual development of children. The IQ-Wechsler, defined in the late 1930s, is not a quotient, as its name suggests, but a device for calibrating individuals’ scores on an intellectual test. For example, an IQ of 130 corresponds to the 98th percentile (98% of the population scores below 130), while an IQ of 115 corresponds to the third quartile (75% of the population scores below 115). There are many debates about IQ tests. In particular, their opponents point out that the tests measure only one form of intelligence, or that test results may depend to a large extent on educational inequalities, which makes them of little use in formulating educational policies.
Less well known to the general public, Spearman’s theory of the g factor is based on the observation that the results of the same individual on different intelligence tests are strongly correlated with each other, and infers from this that there is a common factor of cognitive ability. The challenge is therefore to measure this common factor. Multiple models were proposed to this end during the 20th Century.

Box I.1. Two incarnations of the psychotechnical approach: the IQ test and the theory of the g factor (sources: Gould 1997; Huteau and Lautrey 2006)
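The percentile calibration described in Box I.1 can be reproduced with the normal curve conventionally used for Wechsler-style scores. A minimal sketch, assuming the usual calibration of mean 100 and standard deviation 15 (this calibration is our assumption, not stated in the box):

```python
from statistics import NormalDist

# Conventional Wechsler-style calibration (assumed): mean 100, SD 15.
iq_scale = NormalDist(mu=100, sigma=15)

def iq_percentile(iq: float) -> float:
    """Share of the population scoring below `iq` under this calibration."""
    return iq_scale.cdf(iq)

print(round(iq_percentile(130) * 100))  # ~98: an IQ of 130 sits near the 98th percentile
```

Calibrating a raw test score thus amounts to reading off its position in an assumed population distribution, which is exactly the “positioning index” logic described above.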

The second stance opposes the first by demonstrating its limits. Several arguments are put forward to this effect. The first challenges the notion of objectivity by highlighting the many
evaluation biases faced by the psychotechnical approach (Gould 1997). These evaluation biases constitute a form of indirect discrimination: an apparently neutral test actually disadvantages certain populations (women and ethnic minorities, for example). For example, intelligence tests conducted in the United States at the beginning of the 20th Century produced higher average scores for whites than for blacks (Huteau and Lautrey 2006). These differences could be interpreted as hereditary differences, and could have contributed to racist theories and discourse, whereas in fact they illustrated the importance of environmental factors (such as school attendance) for test success, and thus showed that the test did not measure intelligence independently of a social context, but rather an intelligence largely acquired in a social context (Marchal 2015). Moreover, this type of test, like craniometry, is based on the idea that human intelligence can be reduced to a measurement, subsequently allowing us to classify individuals on a one-dimensional scale, which is an unproven assumption (Gould 1997).

The second argument criticizes the decontextualization of psychotechnical measures, whereas many individual behaviors and motivations are closely linked to their context (e.g. work). This argument can be found in several theoretical currents. Thus, sociologists, ergonomists and some occupational psychologists argue that the measurement of intelligence is all the more impossible to decontextualize since intelligence is also distributed outside the limits of the individual: it depends strongly on the people surrounding the individual and the tools the individual uses (Marchal 2015). Moreover, as Marchal (2015) points out, work activities are “situated”, i.e. it is difficult to extract an activity from the context (professional, relational) in which it is embedded. This criticism is all the more valid for tests aimed at measuring a form of generic intelligence or performance that is supposed to guarantee superior performance in specific areas. The g factor theory (Box I.1) is an instructive example of this decontextualized generalization, since it claims to measure a generic ability that would guarantee better performance in specific work activities. In practice, the same person, and therefore the same measure of the g factor, may prove to be highly efficient or, on the contrary, not very efficient depending on the work context in which he or she is placed.

The third argument questions the ethical legitimacy of the measurement of the individual and highlights in particular the possible excesses of this approach. Thus, the racist or sexist abuses to which craniometry and intelligence tests have given rise are pointed out to illustrate the dangers of measuring intelligence (Gould 1997). In a more precise field of evaluation,
many studies have highlighted the harms of the quantified, standardized evaluation of individuals. In particular, Vidaillet (2013) denounces three of them. The first harm of quantified evaluation is that it contributes to changing people’s behavior, and not always in the desired direction. A known example of such a perverse effect is that of teachers who, being scored on the basis of their students’ results on a multiple-choice test, are encouraged either to concentrate all their teaching on the skills necessary to succeed on the test, to the detriment of other, often fundamental skills, or to cheat to help their students when taking the test (Levitt and Dubner 2005). The second harm is that quantified evaluation may damage the working environment by accentuating individual differences in treatment, thus increasing competition and envy. The third harm is that it substitutes an extrinsic motivation (“I do my job well because I want a positive evaluation”) for an intrinsic motivation (“I do my job well because I like it and I am interested”). However, extrinsic motivation may reduce the interest of the work for the person and therefore the intrinsic motivation: the two motivations are substitutable, not complementary.

Finally, the fourth argument emphasizes that, unlike objects and things, human beings can react and interact with the quantification applied to them. Thus, Hacking (2001, 2005) studies classification processes and more particularly human classifications, i.e. those that concern human beings: obesity, autism, poverty, etc. He refers to “interactive classification”, in the sense that the human being can be affected and even transformed by being classified in a category, which can sometimes lead to changes of category. Thus, a person who enters the “obese” category after gaining weight may, because of this simple classification, want to lose weight and may therefore leave the category. This is what Hacking (2001, p. 9) calls the “loop effect of human specifications”. He recommends that the four elements underlying human classification processes (Hacking 2005) be studied together: classification and its criteria, classified people and behaviors, institutions that create or use classifications, and knowledge about classes and classified people (science, popular belief, etc.). The possibility of quantifying human beings in a neutral way therefore comes up against these interaction effects.

Finally, the confrontation between these two stances clearly shows the questions raised by the use of quantification when it comes to humans, and in HR notably: is it possible to measure everything when it comes to human
beings? At what price? What are the implications, risks and benefits of quantification? Can we do without quantification?

I.3. HR quantification: effective solution or myth? Two lines of research

In response to these questions on the specificities of human quantification, two theoretical currents can be identified on the use of HR quantification. One, generally normative, tends to consider quantification as an effective solution to improve HR decision-making, whether in recruitment or other areas. This approach thus supports evidence-based management (EBM), in other words management based on evidence, which most often consists of figures and measurements. In the EBM approach, quantification is therefore proof, and it can cover a multiplicity of objects: quantifying to better evaluate individuals (in line with the psychotechnical approach), or to know them better, or to better understand global HR phenomena (absenteeism, gender equality), all in order to make better decisions.

The EBM approach thus considers that quantification improves decision-making, processes and policies, including in HR. Lawler et al. (2010) believe that the use of figures and the EBM approach have become central to making the HR function a strategic function of the company. For example, they identify three types of metrics of interest in an EBM approach: the efficiency of the HR function, its effectiveness, and the impact of HR policies and practices on variables such as organizational performance.

More generally, according to the work resulting from this approach, quantification makes it possible to meet several HR challenges. The first challenge is to make the right human resources management decisions: recruitment, promotion and salary increases, for example. The psychotechnical approach already mentioned seems to provide an answer to this first challenge: by measuring individuals’ skills, motivations and abilities in an objective way, it seems to guarantee greater objectivity and rigor in HR decision-making.

The second challenge is to define the right HR policies. Rasmussen and Ulrich (2015) give an example where an offshore drilling company uses quantification to define a policy linking management quality, operational performance and customer satisfaction (Box I.2). This example illustrates how quantification can help identify problems and links between
different factors in order to define more appropriate and effective HR policies.

An offshore drilling company commissioned a quantitative study that demonstrated several links and relationships of influence between different factors. First, the study shows that the quality of management (measured through an annual internal survey) influences turnover, on the one hand, and customer satisfaction, on the other (measured through the company’s customer relationship management tool). Staff turnover influences the competence of teams (measured according to industry standards) and their safety and maintenance performance (measured using internal company software, for example on falling objects), which also has an impact on customer satisfaction and is also strongly linked to the team’s operational performance. This study therefore provided the company with evidence of the links between these various factors, which made it possible to define a precise plan of action: improving the quality of management through training and a better selection of managers, and improving team competence through training and increased control, among other things.

Box I.2. Quantification as a source of improvement in the definition of HR policies (source: Rasmussen and Ulrich 2015)

Finally, the third challenge is to prove the contribution of the HR function to the company’s performance. As Lawler et al. (2010) point out, the HR function suffers from the lack of an analytical model to measure the link between HR practices and policies and organizational performance, unlike the finance and marketing functions, for example. To fill this gap, they suggest collecting data on the implementation of HR practices and policies aimed at improving employee performance, well-being or commitment, but also on organizational performance trends (such as increasing production speed or the more frequent development of innovations). This first trend therefore values quantification as a tool to improve the HR function via several factors: more objective decision-making, the definition of more appropriate and effective HR policies, and proof of the link between HR practices and organizational performance, which can encourage the company to allocate more financial resources to HR departments.

The other, more critical trend is part of a sociological approach and takes a more analytical look at the challenges of quantification. Desrosières’ work (1993, 2008a, 2008b) founded the sociology of quantification, which focuses on quantification practices and shows how they are socially constructed
(Diaz-Bone 2016). This analytical framework is based, among other things, on the concept of conventions, which are interpretative frameworks produced and used by actors to assess situations and decide how to act (Diaz-Bone and Thévenot 2010). The economics of conventions focuses on the coordination that allows institutions and values to emerge, and shows how this coordination is based on conventions, which make it possible to share a framework for interpreting and valuing objects, acts and persons, and thus to act in situations of uncertainty (Eymard-Duvernay 1989). The originality of Desrosières’ work lies in mobilizing this concept of convention to analyze quantification operations, which amounts to studying “quantification conventions” (Desrosières 2008a), namely a set of representations of quantification that make it possible to coordinate behaviors and representations (Chiapello and Gilbert 2013).

Desrosières thus seeks to deconstruct the assumptions that accompany the myths surrounding quantification (the myth of statistics as a transparent and neutral reflection of the world, for example, and as a guarantee of objectivity, rigor and impartiality), in particular by emphasizing the extent to which quantification is based on social constructions, and not on physical or natural quantities. He suggests that statistical indicators should be considered as social conventions rather than as measures in the sense of the natural sciences (e.g. air temperature) (Desrosières 2008a). Gould (1997), without claiming to be part of the sociology of quantification, also provides very illuminating illustrations of how quantification can be influenced by social prejudices, making objectivity impossible. In one of his books, Desrosières (2008a) also highlights the extent to which statistics, far from being merely a transparent reflection of the world, create a new way of thinking about it, representing it, measuring it and, ultimately, acting on it.
However, his work also focuses on the history of statistics and the dissemination of new methods in the field. Thus, Desrosières (1993) highlights the link between the State and statistics. The latter, historically confined to population counting, has gradually been enriched by new methods and theories (probabilities with the law of large numbers, then econometrics with regression methods, to cite only two examples), which have partially loosened its ties with the State, and have brought it closer to other sciences, such as biology, physics and sociology. In another book, Desrosières (2008b) highlights the developments in modern statistics after the Second World War (reorganization and unification of official statistics, willingness to act on indicators such as the unemployment
rate, etc.). These founding works have since been widely taken up by many authors. Chiapello and Walter (2016), for example, are interested in the dissemination of the calculation conventions used in finance. They show that, contrary to a rationalist ideology according to which the algorithms mobilized in finance are used because they are the most effective and rigorous, this dissemination is sometimes entangled in power games between different functions or professions in the world of finance. Similarly, Juven (2016) shows that the activity-based pricing policy introduced in French hospitals does not always respond solely to the rational logic of improving hospital performance, but stems from choices and from trial and error that can only be understood by looking at the sociological foundations of the decisions taken (Box I.3). Finally, Espeland and Stevens (1998) focus on the social and sociological processes underlying “commensuration” operations, which make it possible to compare different entities (individuals and positions, for example) according to a common metric.

The introduction of activity-based pricing in French hospitals is a long-term process spanning several years. It required, among other things, a quantification of medical procedures and patients: how much a particular medical procedure, or the management of a particular type of patient, costs and should be remunerated. However, this statisticalization has been the subject of many controversies between doctors, health authorities and patient associations. These different actors obviously have divergent interests, ranging from reducing hospital costs to improving the management of a specific pathology. This case therefore illustrates the way in which the quantification of reality, far from being merely a neutral reflection of reality, proceeds from choices, negotiations and controversies that reveal its sociologically constructed dimension.

Box I.3. Example of the introduction of activity-based pricing in French hospitals (source: Juven 2016)

Finally, this second trend takes a more critical approach to quantification. While the first trend is based in particular on the idea of quantification that can supposedly provide objectivity, transparency, neutrality and rationalization, the second trend questions this vision and these assumptions, thus questioning more generally the contributions of quantification to management.

I.4. The positioning of this work

Our book seeks to provide a nuanced and didactic perspective on the use of HR quantification. It therefore draws on these two currents in order to reflect as faithfully as possible both the advantages and the limitations of quantification. More precisely, we examine the use that companies can make of HR quantification, but also the evolutions that the rise of quantification may represent for HR, and the appropriation of these new devices by the various agents involved. In parallel, this book pays attention to the different theoretical and disciplinary currents that allow us to better understand the challenges of HR quantification.

To do this, this book mobilizes several types of sources and examples. Some of the information used comes from academic work. Another part is based on empirical surveys carried out within companies. These empirical materials are of several kinds: interviews with HR professionals, employees and trade union representatives; participant observation as part of a professional experience as a Big Data HR project manager; company documents on the use of HR quantification; and quantitative analyses conducted on personnel data. Thus, this book aims to provide both theoretical and empirical knowledge on HR quantification.

Finally, a few semantic clarifications must be added. The concepts of quantification, statistics and measurement are used frequently throughout this book. Quantification corresponds to a very broad set: all the tools and uses producing figures (or quantified data), and the figures thus produced. It therefore encompasses the concepts of statistics and measurement. The term “statistics” is employed when referring to the scientific and epistemological dimension of quantification, as Desrosières does, for example. Finally, the term “measurement” is used when discussing the specific activity of quantifying a phenomenon, an object or a reality.

I.5. Structure of the book

The book is divided into five chapters of equal importance. Chapter 1 seeks to delineate the subject by providing definitions and examples of the three major uses of HR quantification: the statisticalization of individuals and labor, reporting and analysis, and Big Data/algorithms. The next three chapters take up elements of this introductory chapter by analyzing them
each from a different angle, and can therefore be read independently of each other, in the order desired by the reader.

Chapter 2 deals with the issue of decision-making. Indeed, as we have seen, the “EBM” approach sees the benefits of quantification as coming mainly from improved decision-making. Chapter 2 therefore examines the paradigms and beliefs that underpin this link between quantification and decision-making.

Chapter 3 focuses on the appropriation of the different uses of quantification by the multiple actors involved in HR – managers, employees and trade unions, in particular.

Chapter 4 starts from the potential changes introduced by the increasing use of HR quantification, and questions the consequences of these changes for the HR function.

Finally, Chapter 5 deals with the ethical issues of quantification, particularly with regard to the protection of personal data and questions of discrimination.

1 From the Statisticalization of Labor to Human Resources Algorithms: The Different Uses of Quantification

Quantification can be used in many HR processes, such as recruitment, evaluation and remuneration (with job classification, for example). In fact, human resources management gives rise to a very wide variety of uses of figures. The first use refers to decision-making concerning individuals (section 1.1), i.e. using quantified information to inform or justify decisions concerning specific individuals: candidates in recruitment, or employees in career management or remuneration, for example. The second use corresponds to a broader adoption of figures at the collective level, no longer at the individual level (section 1.2). Historically, this use involved legal reporting and dashboards. It is a question of defining relatively basic indicators and metrics aimed at monitoring or steering a situation (e.g. number of employees) or a phenomenon (e.g. absenteeism). However, these basic indicators are not always sufficient, particularly because of the complexity of certain HR phenomena. The phenomenon of absenteeism can certainly be measured and monitored with basic bivariate indicators, but these will not be sufficient to identify the determinants of absenteeism, and therefore to define appropriate policies to reduce it. As a result, more sophisticated statistical methods have gradually been introduced in the HR field, both on the research side and on the business side: this approach is regularly referred to as “HR analytics”. More recently, the emergence of Big Data and the mobilization of algorithms in different sectors of society have gradually spread to the HR
Quantifying Human Resources: Uses and Analyses, First Edition. Clotilde Coron. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.
sphere, even if the notion of “Big Data HR” remains vague (section 1.3). This new horizon raises new questions and challenges for the HR function. It should be stressed that the boundaries between these different uses are tenuous and shifting, and this distinction therefore remains partly arbitrary and personal. Thus, a dashboard can mobilize figures initially constructed with a view to decision-making about individuals. In addition, traditional reporting, when particularly rich in cross-referencing, can be the beginning of a more sophisticated quantitative analysis and produce similar results. Similarly, prediction and customization algorithms such as job or training recommendations, which we will classify under the category of Big Data and algorithms, are essentially based on statistical analysis tools (correlation, linear or logistic regression, etc.). However, this chapter will focus on defining the outlines of these three types of uses, using definitions and examples.

1.1. Quantifying reality: quantifying individuals or positions

The HR function is regularly confronted with the need to make decisions about individuals: recruitment, promotion, remuneration, etc. However, under the joint pressure of ethical and legal issues, particularly around non-discrimination, it is also motivated to back up these decisions as much as possible in order to justify their legitimacy. One response to this search for justification is to mobilize quantified assessments of individuals or work (Bruno 2015). These operations of statisticalization of the concrete world (Juven 2016), or of commensuration (Espeland and Stevens 1998), aim both to inform decisions and to justify them.

1.1.1. The statisticalization of individuals and work

To report on these operations, the focus here is on two types of activities. The first concerns the quantification of individuals and refers to, among other things, tools proposed by the psychotechnical approach briefly described in the introduction.
The second refers to the quantification of work, necessary, for example, to classify jobs and thus make decisions related to remuneration, but which raises just as many questions because of the particular nature of the “work commodity” (Vatin 2013).

1.1.1.1. Different tools for the quantified assessment of individuals

Faced with the need to make decisions at an individual level (which candidate to recruit, which employee to promote, etc.), the HR function has come to rely on different types of quantified evaluation tools (Boussard 2009). Some tools are, in fact, partly the result of psychotechnical work, but HR agents do not necessarily master the epistemology of this approach: the tools are often used without real awareness of the underlying methodological assumptions. The use of quantified HR assessment tools has been relatively gradual, and two main factors have promoted it (Dujarier 2010). First of all, the transition to a market economy was accompanied by a division of labor and a generalization of wage employment, which required reflection on the formation and justification of pay levels, and of differences in pay levels within the same company. Secondly, the practices of selecting and assigning individuals within this division of labor have stimulated the quantified assessment of individuals. Several examples are given here, highlighting the uses made by the HR function but also the criticisms they have attracted. However, in this chapter we do not dwell on possible biases, and therefore on the questioning of the notion of objectivity, because this will be the subject of section 2.1.

Psychological testing is a first example of a quantified assessment tool. Its use is frequent in recruitment, and it can have several objectives. First, it can aim to match a candidate’s values with the company’s values. In this case, the test is based on the values and behaviors of the individual. Then, it may aim to match the personality of a candidate with what is generally sought by the company. In this case, the test includes questions that focus on behavior in the event of stress, uncertainty and conflict, for example.
Finally, it may aim to match the personality of a candidate with the psychological composition of the team in which a position is to be filled. This variety of uses underlines the fact that the implementation and use of this type of test require upstream reflection in order to provide answers to the following questions: What are we trying to measure? What is the purpose of this measurement? Once these questions have been answered, the second step is to answer the question: how do we measure what we are trying to measure? To this end, the academic and managerial literature provides many scales for measuring different characteristics and different attributes of individuals.

Finally, once the test has been taken, a final reflection must be carried out on how to use it: to classify individuals, as a support point for the recruitment interview, or as a decision-making aid. A characteristic of these tests is that they can lead to a classification of individuals into different profiles that are not necessarily ranked hierarchically. Thus, a test on one’s relationship to authority may lead to a classification into different types of relationship (submission, rebellion, negotiation, etc.) without any one of these relationships necessarily being unanimously considered preferable to the others. The preference for one type of profile over the others may depend, for example, on the sector of activity or type of company: recruitment in the army will probably place a higher value on an obedient profile, unlike recruitment in a start-up or in a company with a flatter hierarchy, for example. Psychological tests are still widely used in recruitment today, although their format and administration methods may have changed (Box 1.1).

Psychological tests have been used since the second half of the 20th Century for recruitment purposes. However, they evolved at the beginning of the 21st Century, mainly due to the increasing use of the Internet and of tests measuring the fit between a person and a profession (person-job fit tests). Among the companies in the American Fortune 500 index, in 2006, 20% used personality tests as part of their recruitment, and 9% used online tests as a pre-recruitment tool. However, these tests are criticized for their lack of standardization and for the doubts that remain about their predictive validity. Personality tests are now generating renewed interest due to the development of “affinity recruitment”, based on the matching model operated by dating networks such as Meetic or Tinder.

Box 1.1. Psychological tests and recruitment (source: Piotrowski and Armstrong 2006)
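The “matching” logic behind affinity recruitment can be made concrete with a toy score comparing a candidate’s trait profile to a target profile. This is a minimal sketch under invented assumptions: the trait names, values and the choice of cosine similarity are illustrative, not the method of any particular test provider.

```python
import math

def fit_score(candidate: dict[str, float], target: dict[str, float]) -> float:
    """Cosine similarity between a candidate's trait scores and a target
    profile, computed on the traits the target profile defines.
    With non-negative scores the result lies in [0, 1]."""
    keys = sorted(target)
    c = [candidate.get(k, 0.0) for k in keys]
    t = [target[k] for k in keys]
    dot = sum(a * b for a, b in zip(c, t))
    norms = math.hypot(*c) * math.hypot(*t)
    return dot / norms if norms else 0.0

# Invented example: a sales-role profile valuing negotiation over obedience.
target = {"negotiation": 9.0, "autonomy": 7.0, "obedience": 3.0}
alice = {"negotiation": 8.0, "autonomy": 8.0, "obedience": 2.0}
bob = {"negotiation": 3.0, "autonomy": 4.0, "obedience": 9.0}

print(f"Alice: {fit_score(alice, target):.2f}")  # closer to the target profile
print(f"Bob:   {fit_score(bob, target):.2f}")
```

Note that such a score only reproduces the profile it is given: as section 2.1 will discuss, the choice of traits and of their weights is itself a convention.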

The aptitude, competence or intelligence test is a second tool that is often used, in the context of recruitment, for example. Although the distinction between aptitude, competence and intelligence remains relevant, it is necessary to place these tests in the same category here, because they are used to measure a characteristic of the individual considered useful and relevant for success in a given position. In addition, unlike psychological tests, aptitude, competence or intelligence tests are most often used to rank individuals on a one-dimensional scale. However, as with psychological
tests, aptitude, competence or intelligence tests require upstream reflection, in this case on the competencies or skills required for successful performance in the role (Marchal 2015). Although theories such as the g factor, or measures such as IQ, outlined in the introduction, assume that a single measure can predict or evaluate a set of cross-disciplinary competencies, most aptitude and competency tests are designed to correspond to a specific position. However, the division of a position into skills or aptitudes is not without its difficulties (Box 1.2).

As early as the 20th Century, some psychotechnicians recommended making an inventory of the skills and abilities needed to hold a position, and suggested conducting extensive in situ analyses. However, this procedure represents significant costs, especially since a rigorous approach requires reproducing the analysis each time there is a change, even a minor one, in the organization or in working or employment conditions. In addition, in situ analyses were initially very focused on the physical actions performed (by typists, for example), which lost its relevance with the tertiarization of employment. Under these two combined effects, the analyses evolved, focusing on behaviors rather than actions, and on the identification of behaviors specific to a group of jobs rather than to a specific job. In doing so, however, the tests produced from this type of analysis lose their specificity, their accuracy and, ultimately, their predictive validity. In addition, there are many criticisms of these analyses and tests. The first type of criticism highlights the many biases that job analyses can face, particularly because of the importance of the person observing the situation.
The second type of criticism highlights the fact that the same job does not correspond to the same reality according to the organizational context in which it is practiced: being an engineer or nurse does not require the same skills or competences in every different organization. More precisely, exercising a trade does not only require skills intrinsically linked to the trade, but also cognitive and relational skills linked to the organization. As a result, it becomes illusory to hope to isolate skills needed by an employment group, regardless of organizational contexts. The third type of criticism comes from the fact that the observation of work situations does not make it possible to observe skills directly, but rather manifestations of skills. The transition from the manifestation of competence to competence requires a translation that is not obvious. These criticisms have led to the implementation of new aptitude tests based on the simulation of work activities. This type of test is used in assessment centers sometimes used in recruitment. The aim is to put candidates in a situation close to the working situation. However, once again, the limitations of these
methods are regularly highlighted. In particular, they involve identifying the most common or important work situations to be tested, which can be difficult depending on the position concerned. They also represent a significant cost, since they require specific simulations to be defined for each workstation.

Box 1.2. The difficult division of a position into skills (source: Marchal 2015)

A third tool, used in particular to decide on promotions, is the quantified evaluation by the manager or other stakeholders using a grid of criteria. This tool is therefore based on a direct assessment by a third party, but the definition of fairly precise criteria generally seeks to limit the intervention of this third party and the intrusion of their subjectivity into what is supposed to constitute an objective and fair assessment (Erdogan 2002; Cropanzano et al. 2007). Two scenarios can be discerned according to the number and status of the people who assess: a situation where the workers are assessed by their manager, and a situation where they are assessed by all the clients with whom they come into contact.

Evaluation by the manager is an extremely common situation in organizations (Gilbert and Yalenios 2017). However, this situation also varies greatly depending on the organizational context: the degree of formalization, the frequency, the criteria and the use may differ. In terms of formalization, there are companies where the manager conducts an assessment interview with their subordinate without a prior grid, and others where the manager must complete an extremely precise grid on their subordinate, sometimes without this giving rise to an exchange with the person being assessed. In terms of frequency, some companies request annual assessments, others semiannual ones. In terms of criteria, situations where the criteria focus on the achievement of objectives should be distinguished from situations where they concern the implementation of behaviors. Finally, in terms of use, some companies may take managerial evaluation into account in the remuneration process, others in promotion, others in development, etc. (Boswell and Boudreau 2000). It should also be recalled that evaluation methods have evolved over time (Gilbert and Yalenios 2017).
Thus, the Taylorism of the first half of the 20th Century gave rise to a desire to rate workers on precise criteria relating to their activity and the achievement of objectives, while the school of human relations in the second half of the 20th Century valued dialogue, and thus the implementation of appraisal interviews aimed both at evaluating and at establishing an exchange between managers and subordinates (Cropanzano
et al. 2007). Evaluation by third parties, and in particular clients, is a very different but increasingly common situation (Havard 2008), particularly in professions involving contact with third parties (Box 1.3).

The measurement of the achievement of objectives is a fourth tool, used in particular for remuneration decisions (e.g. the allocation of raises or variable portions). This approach, which is part of management by objectives, popularized by the American consultant Peter Drucker, does not focus on the resources deployed by workers, but on the results achieved (Gilbert and Yalenios 2017). It requires defining, for each individual, the objectives they must achieve and the criteria for measuring their achievement. These two operations are less obvious than they seem at first glance. Thus, for a sales profession, the first reflex would be to evaluate the salesperson according to turnover or the number of sales made. However, a criterion of this type would strongly reward a seller who made very large sales but had a significant number of returns from customers dissatisfied with their purchase, even though this situation would be more damaging to the company than one where a seller made a smaller number of sales with fewer customer returns. Moreover, depending on the positions considered, measuring the achievement of objectives can be an easy or a more difficult operation. How can one measure the achievement of objectives such as “carrying out a particular project” or “organizing a particular event in a satisfactory manner”? Finally, this results-based evaluation method does not take into account the hazards of work, nor the fact that, in many professions, achieving an objective requires the cooperation of several people or company functions.

Third-party rating systems are developing considerably under the combined effect of the rise of rating sites (TripAdvisor, for example) and the rise of the platform economy.
Indeed, a platform that connects a customer and a service provider has very little information on the respective qualities of its customers and suppliers. In this context, having the client evaluated by the service provider and vice versa seems to be a privileged way of guaranteeing a certain quality of relationship and service. Both the average and the number of scores are then used as quality indices. For example, the drivers and customers of the Uber platform are evaluated after each trip. It might seem surprising to mobilize third-party rating in more traditional organizations where the quality of an employee’s work is supposed to be known by their manager. However, parts of activity and work, and therefore of performance, can escape the manager’s attention: the quality of the relationship
with the client, for example. It is in order to reduce this blind spot that rating systems by third parties, and in particular customers, have been developed. They can take different forms (customer satisfaction survey, mystery shoppers, etc.) but always have the aim of evaluating the parts of the activity that escape the manager. Box 1.3. The development of rating systems by a third party
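The remark that both the average and the number of scores serve as quality indices can be illustrated with a damped (smoothed) mean, a common way for rating systems to avoid over-trusting a provider with very few ratings. The prior and weight below are invented for illustration, not the parameters of any actual platform:

```python
def damped_mean(ratings: list[float], prior: float = 4.0, weight: int = 10) -> float:
    """Shrink the raw average toward a prior score; the fewer the
    ratings, the closer the result stays to the prior."""
    return (prior * weight + sum(ratings)) / (weight + len(ratings))

veteran = [5, 4, 5, 5, 4] * 20   # 100 trips, raw mean 4.6
newcomer = [5, 5]                # 2 trips, raw mean 5.0

# With many ratings the data dominates; with few, the prior does.
print(f"veteran:  {damped_mean(veteran):.2f}")
print(f"newcomer: {damped_mean(newcomer):.2f}")
```

The newcomer’s perfect raw mean is pulled down toward the prior, so a provider only earns a high displayed score after accumulating many ratings — which is why platforms show the count alongside the average.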

Thus, the quantified evaluation of individuals is both a very common and a variable practice within organizations. We have given four examples, in some cases presenting the criticisms they may have given rise to, but without dwelling on the question of potential bias, which will be dealt with in Chapter 2. However, this does not exhaust all the quantification operations carried out by the HR function. In particular, the quantified evaluation of positions is another important aspect of this type of operation.

1.1.1.2. Quantification of work and positions

The quantification of work and positions was an essential foundation of Taylorism, and then continued as an essential element of job classification operations. From the end of the 19th Century, in the context of the rise of industrialization and mass production, reflections were launched with a view to maximizing the productivity of companies. An American engineer, Taylor, developed a system of “scientific organization of work”, which he believed would provide maximum performance for a given set of resources (Taylor 1919). This system is based on three conditions. First of all, a detailed and rigorous analysis of work methods (workers’ actions, pace, cadence, etc.) makes it possible to identify the causes of lost productivity. The second step consists of defining very precisely the actions and tasks to be performed by each worker in order to achieve maximum productivity (what is termed the one best way, or best practice). Finally, remuneration is set so as to ensure greater objectivity and stronger motivation for employees (Box 1.4).

Taylorism is based in particular on the measurement of work. More precisely, through careful observation of the work, each position is broken down into work processes, which in turn are broken down into tasks.
During this observation phase, the workers’ actions are timed, which makes it possible to measure how much time is dedicated to the task in the current organization of work and to
determine from the observation of the most productive workers the minimum time required for each task. In addition, breaking down work processes aims to eliminate unnecessary operations, and to select the best way to proceed for each operation. Thus, the observation phase is followed by a prescription phase: each worker must carry out a precise task, according to an imposed and detailed operating procedure. This reflection and these changes on work are accompanied by specific HR practices: recruitment that insists on the necessary fit between the individual and the position, adapted training to promote the acquisition of prescribed operating methods, close supervision of management on workers to limit room for maneuver and thus uncertainties, and a salary partly indexed to the achievement of time objectives. Taylor gave several examples of how the approach was applied. The first and simplest example concerns the loading and unloading of pig iron: in the company concerned, each of the 75 workers must collect a piece of cast iron from a pile, walk to a truck, climb up a ramp that leads into the truck and unload their piece of cast iron inside the truck. After observing the workers, Taylor notes that the average loading rate is 12.5 tonnes per man per day. However, he also shows through a precise study of the tasks performed that the most productive workers should be able to produce up to 47 tonnes per day. Without going into the details of a tedious calculation, the reasoning is as follows: observation shows that a worker can carry 42% of the time, but must carry nothing 58% of the time to let his muscles rest. However, the walking time spent returning from the truck to the cast iron piles is a time when workers do not carry anything. They should therefore use this time to let their muscles rest, except that in reality they walk fast on the way back, and then find themselves in need of extra breaks. 
Taylor explains that these breaks can be eliminated as long as workers take advantage of the return walk from the truck to the piles for muscle recovery. Taylor's goal is therefore to ensure that each worker can load 47 tonnes per day. To this end, he selects from among the average workers one whom he considers physically fit enough to load up to 47 tonnes per day, in return for an increase in his remuneration. He explains that the worker will be able to increase his pay if he follows exactly the instructions of one of the experts: "When he tells you to pick up a pig and walk, you pick it up and you walk, and when he tells you to sit down and rest, you sit down. You do that right straight through the day" (Taylor 1919, p. 46). The experiment proved conclusive: the worker in question managed 47 tonnes per day without apparent additional fatigue and received the planned pay increase.

Box 1.4. Taylorism, or measuring work to improve productivity (source: Taylor 1919)
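The arithmetic behind Taylor's claim can be sketched in a few lines. This is a back-of-the-envelope reconstruction: the weight of a pig (about 92 lb) and the 10-hour working day are assumptions drawn from common accounts of the experiment, not from this chapter, and Taylor's tons are treated as long tons.

```python
# Back-of-the-envelope reconstruction of Taylor's pig-iron figures.
# Assumed, not stated in this chapter: 92 lb per pig, 10-hour day, long tons.
LONG_TON_LBS = 2240
PIG_LBS = 92
WORKDAY_MIN = 10 * 60

observed_tons = 12.5   # average loading rate Taylor observed
target_tons = 47.0     # rate he claims a first-class worker can sustain

pigs_per_day = target_tons * LONG_TON_LBS / PIG_LBS   # roughly 1,140 pigs
cycle_min = WORKDAY_MIN / pigs_per_day                # about half a minute per round trip

# Taylor's rest rule: under load 42% of the day, resting (or walking back) 58%
load_fraction = 0.42
minutes_under_load = WORKDAY_MIN * load_fraction      # about 252 minutes of carrying
speedup = target_tons / observed_tons                 # a 3.76-fold increase
```

Even under these rough assumptions, the target implies more than a thousand round trips a day, which makes clear why the prescription of rest periods, rather than extra effort, was the lever Taylor emphasized.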


Quantifying Human Resources

It is interesting to return to the term "scientific organization of work". The adjective "scientific" is justified, among other things, by the use of the measurement of work: "In most trades, the science is developed through a comparatively simple analysis and time study of the movements required by the workmen to do some small part of his work, and this study is usually made by a man equipped merely with a stop-watch and a properly ruled notebook" (Taylor 1919, p. 117). Measurement is indeed at the heart of Taylorism: measurement of the time required to perform each task, of productivity gains, of average and maximum worker productivity, of the pay increases that can be proposed, etc.

Taylorism was widely implemented at the beginning of the 20th Century, but it has given rise to many criticisms, not all of which will be discussed here. Weil (2002) and Linhart (1980) experienced factory work and described both its very high physical difficulty and its alienating dimensions. In another vein, the sociologists Crozier and Friedberg (1996) show that it is illusory to try to remove all individual margins of maneuver: individuals will always find spaces of freedom, thus recreating forms of uncertainty. Finally, the evolution of work in developed countries, in particular tertiarization and the decline of factory work, has reduced the relevance of Taylorism, which seems best suited to low-skilled jobs involving repetitive tasks.

These criticisms and limitations have led to a gradual decline in the use of Taylorism as the main method of management and work organization, but the measurement of work has not been abandoned. In the second half of the 20th Century, the desire of the State to set pay in order to avoid inflation, followed by the need to justify pay hierarchies and therefore to define appropriate pay for each position, led to large-scale job classification operations in many countries.
The classification, or weighing, of positions is likewise based on the quantification of each position. Whatever the method used (see Box 1.5 on the Hay method, probably one of the best known), this consists of evaluating each position against a list of criteria and aggregating these criteria according to an ad hoc formula. This makes it possible to associate an index with each position, and thus to rank positions and then match each index to a salary level.

The Hay method was created in the United States in the 1940s by Edward N. Hay and spread to European countries in the second half of the 20th Century. It consists of evaluating positions on three main factors: purpose (the effect of the position on the company's results), competence (the requirements for holding the position) and creative initiative (the degree of initiative and reflection that the position implies). Each factor is broken down into subfactors. For example, the purpose factor can be broken down into three aspects: latitude of action (constraints, degree of control, decision-making power), scope of action (monetary scope of the field of activity) and impact (direct or indirect). A group must then rate each position on each of these criteria. A formula aggregates the scores on each criterion into a global index, which in turn can be transformed into an indicative salary level.

Box 1.5. The Hay method, or measuring work to rank positions (source: Lemière and Silvera 2010)
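The aggregation logic described in the box can be sketched as follows. This is a hypothetical illustration: the subfactor names follow the text, but the weights, the rating scale and the salary mapping are invented, since the actual Hay point tables are proprietary.

```python
# Hypothetical Hay-style aggregation: rate a position on subfactors,
# combine them into a global index, then map the index to a salary level.
# Weights, scales and the salary formula are invented for illustration.

FACTOR_WEIGHTS = {
    "purpose": 0.35,              # effect of the position on results
    "competence": 0.40,           # requirements to hold the position
    "creative_initiative": 0.25,  # initiative and reflection implied
}

def hay_index(ratings):
    """Aggregate subfactor ratings (1-10) into a global index (0-100)."""
    total = 0.0
    for factor, weight in FACTOR_WEIGHTS.items():
        subscores = list(ratings[factor].values())
        total += weight * sum(subscores) / len(subscores)
    return round(total * 10, 1)

def indicative_salary(index, base=25_000, step=400):
    """Map an index to an indicative salary level (illustrative scale)."""
    return base + index * step

analyst = {
    "purpose": {"latitude": 4, "scope": 3, "impact": 4},
    "competence": {"technical": 6, "managerial": 3},
    "creative_initiative": {"reflection": 5},
}
index = hay_index(analyst)
```

Ranking several positions then simply amounts to sorting them by this index.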

Once again, the use of quantification corresponds to an objective of rigor and objectivity. Yet, as will be discussed in Chapter 2, research has highlighted the many potential biases of these job classification operations (Acker 1989; Lemière and Silvera 2010). In short, Taylorism and job classification are largely based on operations of work quantification. They are used to support HR decision-making, particularly with regard to work organization and remuneration. Even if these quantification operations concern positions rather than individuals, they ultimately lead, like the operations mentioned in the previous section, to decisions that can have an impact on individuals.

1.1.2. Informing and justifying decisions concerning individuals

The operations of quantifying individuals or work described in section 1.1.1 serve several objectives. This section returns to the three main purposes: making individuals or objects (positions, activities) comparable, classifying them, and justifying the decisions made.

1.1.2.1. Enabling comparison through quantification: commensurability and classification

Whether measuring individuals or measuring activities and positions, the aim is to make them comparable. Thus, the tests used in recruitment aim to make individuals comparable so as to select a few of them from a larger pool, and the weighing of positions aims to make jobs comparable so as to rank them and thus define an appropriate salary scale. Ultimately, these operations consist of reducing the wide variety of information available on an individual or a position in order to represent them all on a single classification dimension, a single scale. Espeland and Stevens (1998) refer to this process as "commensuration". According to them, commensuration corresponds to the comparison of different entities using a common metric (in our examples, a test score or a job classification index).

Commensuration therefore has several characteristics. First, it transforms qualities into quantities, and thus reduces information, which tends to simplify information processing and ultimately decision-making. Commensuration also corresponds to a particular form of standardization, since it seeks to bring objects into a common format (the common metric). Unlike other forms of quantification, commensuration is fundamentally relative rather than absolute: it allows comparison between entities, and has little value outside this comparative purpose.

However, the authors also point out that commensuration processes can take a variety of forms. A first factor of variation is the level of technical development. Thus, the cost-benefit analysis developed by governments and economists relies on a particularly technical tool. At the other end of the spectrum, they give the example of the more empirical estimates of feminists seeking to measure the time women spend on domestic tasks.
A second factor of variation is the degree of explicitness and, ultimately, of institutionalization of the commensuration. Some commensuration operations are so institutionalized that they help to define what they measure (e.g. the unemployment rate and the poverty rate) and influence the behavior of agents, even when they are criticized. Espeland and Stevens give the example of academic institutions that encourage their researchers to comply with international ranking criteria while regularly questioning those criteria. Other commensuration operations remain poorly disseminated and therefore have little effect on the definition of the objects they measure and on the
behavior of the actors. Finally, the third factor of variation concerns the agents of commensuration, ranging from quantification experts to ordinary individuals, including, for example, interest groups. These three factors of variation can be used to characterize the commensuration operations discussed in the previous section (Table 1.1).

Factor of variation        Application to HR commensuration
Technological complexity   High degree of technological complexity
Institutionalization       High degree of institutionalization
Stakeholders involved      Experts; managers; trade unions and collectives

Table 1.1. The characteristics of HR commensuration

In HR, the level of technical development is high in most of the examples given. Psychological or aptitude tests, the measurement of work in Taylorism and job classification are based on complex tools and are sometimes even backed by substantial theoretical corpuses. Rating by a manager or client can also be highly instrumented when it relies on a grid of precise criteria designed to reduce managerial arbitrariness. The institutionalization of commensuration operations is also high: in all the cases studied, commensuration is explicitly used to act on reality, since it is mobilized to make decisions. Finally, the actors involved are more variable, from experts (occupational psychologists for psychological tests, for example) to trade unions or employee collectives (who are involved, for example, in the implementation of the Hay method).

Espeland and Stevens also point out the consequences of these processes. Commensuration can make certain aspects invisible by selecting the information to be included in the comparison. In HR, this is the case for aptitude tests that measure certain skills or competencies to the detriment of others, or for job classification operations that only take into account the aspects of work that meet predefined criteria. Conversely, commensuration can also make certain aspects visible. The two authors give the example of feminist movements that have sought to measure the value of domestic work in order to integrate into the gender pay gap the inequalities related to the gendered distribution of unpaid domestic work. In HR, to give just one example, the implementation of Taylorism relies on,
among other things, making the sources of lost productivity visible (for example, workers who walk fast after unloading their pig iron but must then take additional breaks for muscular recovery). Finally, Espeland and Stevens are also interested in commensuration as a social process: how to build agreement on the common metric, how to make it accepted and how to use it in decision-making. In particular, they show the role of institutions and experts in this process. In HR, in the same way, it is crucial to be able to mobilize metrics that are acceptable to all stakeholders, including managers, employees and employee representatives. To promote this acceptability, HR can mobilize the work of experts or rely on participatory approaches involving the various stakeholders (employees and managers, for example) in order to limit the possibilities of contestation.

Commensuration can sometimes take the form of a classification. In this case, it is a human classification, in the sense that it refers to human beings or their activities (Hacking 2005). Classification processes confront a realist point of view, which considers that classes exist independently of the human beings who define them, and a nominalist point of view, which considers that human beings alone are responsible for grouping entities into classes. The nominalist point of view raises the question of how classes are constructed and then used. Hacking (2001) highlights the elements necessary for this analysis. First, it is necessary to analyze the criteria used to define the classes and who belongs to which class: for example, weight and height to calculate the body mass index used to define obesity. In HR, the level of diploma or the position held are the criteria used to define who is an executive and who is not. Second, the human beings and behaviors that are the subject of the classification may vary.
Thus, in HR, classifications can relate to positions (professional categories), individuals ("talents", "high potentials", etc.), behaviors (such as "committed employees"), and so on. Classification is also carried out by institutions. Hacking gives the example of diseases, whose classification is institutionalized by doctors, health insurance systems and professional journals, among others. In HR, the institutions that contribute to the definition and durability of a classification include the social partners, managers, and management and payroll systems. Finally, a classification also gives rise to (and is in return sustained by) knowledge and beliefs: in HR, for example, knowledge and beliefs about the behavior of managers as opposed to non-managers, or of committed employees as opposed to less committed ones.
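Hacking's first element, the criteria defining class membership, can be made concrete with the body mass index example mentioned above; the thresholds below are the standard WHO adult cut-offs.

```python
# BMI illustrates how explicit criteria turn a continuous measure into classes.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def bmi_class(value: float) -> str:
    # Standard WHO adult thresholds
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

example = bmi_class(bmi(95, 1.75))
```

An HR classification such as executive / non-executive works the same way, except that the criteria (diploma, position held) and the thresholds are set by institutions rather than by a medical convention.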

1.1.2.2. Justifying decisions

The use of quantification in the cases mentioned also responds to the challenge of justifying the decisions taken: quantification is seen as providing guarantees of neutrality and objectivity (Bruno 2015). Chapter 2 will return to the link between objectivity and quantification and to the factors that call this link into question. Here, the aim is simply to highlight the existence of strong incentives to mobilize quantification wherever neutrality requirements are formulated. In the United States, the Civil Rights Act of 1964 and the Equal Employment Opportunity Commission, established in 1965, have strongly encouraged companies to standardize their individual assessment systems, both in recruitment and in career management (Dobbin 2009). In many cases, this standardization has involved, among other things, the use of quantification. This has several advantages, as Dobbin points out. First, it seems to reduce bias by reducing managerial arbitrariness. Second, it offers the possibility of building a body of evidence to support decision-making in the event of litigation and legal challenges (see Box 1.6 on legal challenges related to the use of tests). It also facilitates the production of the reports requested by the Equal Employment Opportunity Commission. Finally, it contributes to strengthening the legitimacy of the HR function's activity.

In 1971, the US Supreme Court ruled that the selection tests used by Duke Power were discriminatory: black candidates had much lower success rates than white candidates. This decision caused a stir in the United States. Since the early 1960s, many companies had been using standardized, quantified tests to protect themselves against accusations of discrimination. The Supreme Court's decision, however, acknowledged that in practice certain testing methods could be used or misused to justify discrimination.
As early as 1966, experts had defined two conditions for tests to remain effective safeguards against discrimination: ensuring that the content of the test corresponds to the requirements of the job, and ensuring the scientific validity of the tests used. The 1971 judgment thus in no way marked the end of standardized, quantified tests in staff selection and evaluation; rather, it supported the search for greater scientific validity in testing.

Box 1.6. Use of selection tests and legal challenges in the United States (source: Dobbin 2009)

In sum, the use of the quantification of individuals or positions to support decisions that affect individuals is common in HR, and corresponds to the aims of commensuration and of justifying the decisions made (using the argument of neutrality).

1.2. From reporting to HR data analysis

Beyond this individual dimension, the HR function also has to make decisions at the collective level on a regular basis: the definition of HR policies, decisions concerning collective raises, strategic HR decisions, etc. This explains why, in addition to this first use of quantification, there are other uses that allow for greater generality at the organizational level. Reporting and dashboards illustrate this approach well, and, more recently, the emergence of HR analytics has brought new dimensions to it.

1.2.1. HR reports and dashboards: definitions and examples

Since the second half of the 20th Century, companies in most Western countries have had to publish figures on their workforce, practices and characteristics. However, this legal reporting, which may in some cases be supplemented by reports resulting from negotiations with unions, is not always used by companies. Several obstacles to its use can be identified, in particular the fact that the figures required by the legal framework are not always those that would be most relevant in the context of the companies in question. Some companies therefore voluntarily produce additional indicators or metrics, defined according to a given situation and need. For example, a company that identifies a gradual increase in turnover and considers it a problem could define figures to quantify and monitor this phenomenon over time. This is what is meant here by an "HR dashboard" approach. In both cases, the approach is descriptive: measuring phenomena that fall within the field of HR.

1.2.1.1. Legal reporting

The legal obligations to produce HR indicators in France and other European countries have multiplied since the 1970s (see Box 1.7 for the example of France).

Large French companies have a number of legal obligations with regard to the production of human resources reporting. These obligations have been accumulating since the 1970s; some of them are mentioned here.

1977: The law of July 12, 1977 requires large companies (with 300 or more employees) to produce and present annually to the unions a document called the "social report", containing the main figures on the company's social situation. The content of the document is strongly framed by the law, which precisely defines all the indicators that must be included. The social report covers the following areas: employment (number of employees, external workers, hiring, departures, promotions, unemployment, disability, absenteeism), remuneration (amount of remuneration, hierarchy, method of calculation, ancillary charges, overall pay costs, profit-sharing), health and safety at work (workplace and commuting accidents, distribution of these accidents by cause, occupational diseases, activity of the hygiene, health and safety committee, expenditure in this area), working conditions (duration and organization of working time, work organization and content, physical working conditions, transformation of work organization, expenditure on improving working conditions, occupational medicine, unfit workers), training (continuing vocational training, training leave, apprenticeship), industrial relations (staff representation, information and communication, disputes and litigation) and other conditions of life in the company (social activities, other social charges).

1983: The Roudy law of July 13, 1983 requires large companies to produce and present annually to the unions a report called the comparative situation report, containing quantified indicators on the respective situations of women and men in the company. The indicators, defined by the law, cover the following areas: general conditions of employment (number of employees, working time and organization, leave, recruitment and departures, positioning, promotion, seniority), remuneration (range, average and median remuneration, ten highest salaries), training (number of hours, type of activity), working conditions (exposure to occupational risks, arduousness), occupational safety and health (accidents, occupational diseases), parental leave (additional pay, paternity leave) and working time organization (part-time work, local services).

2001: The NRE (New Economic Regulations) law of May 15, 2001 requires French listed companies to publish information on the social consequences of their activities in their annual management report. A decree published in 2012 increased the required information to 42 subjects, divided into three themes, including social issues (employment, labor relations, health and safety, etc.). The list includes some of the main international non-financial reporting standards.

2013: The Employment Security Act of June 14, 2013 requires large companies to integrate a certain number of figures (including the social report and the comparative situation report) into a single database provided to the social partners, the "economic and social database".

Box 1.7. Legal reporting obligations in France

The importance of social reporting obligations in France is no exception. The European Union, for example, adopted a directive on non-financial reporting in 2014. This directive requires large companies to include non-financial information in their annual management reports, particularly with regard to personnel management and the diversity of governance bodies, thereby creating social reporting obligations.

This legal reporting serves several purposes. First, it encourages companies to produce figures on phenomena in the HR field, and thus to become aware of them. For example, one of the stated objectives of the 1983 French law on the comparative situation report was to formalize and quantify inequalities between women and men in order to have their existence recognized by employers and unions. Similarly, the obligation imposed by the European directive to provide detailed information on diversity within governance bodies is intended to highlight the importance of this subject. Second, this reporting requires the company to provide information to its social partners (e.g. trade unions) in the HR field: most of the above-mentioned obligations concern not only the publication of figures but also their transmission to unions, or even the establishment of a dialogue with the unions based on those figures. This reflects both the role of trade unions in policy-making and decision-making on collective HR issues, and the importance of indicators as a first element in diagnosing a situation. Finally, reporting allows comparison between companies, at the national and sometimes international level, by stabilizing the definition and calculation of indicators. Thus, the creation of the Global Reporting Initiative (GRI) in 1997 made it possible to establish a complete set of indicators on a wide range of subjects, particularly on social and HR-related themes (Box 1.8). The publication of a single standard for calculating indicators thus ensures the reliability of international comparisons.

Since its creation, the GRI has brought together a group of international actors (companies, NGOs, consulting firms, universities) in order to define precise quantitative indicators on various subjects related to corporate social responsibility. Some of these indicators (notably those of the 400 series) relate directly to the HR domain, as shown in the list of standards and the examples of quantified indicators below (2016 list).

GRI 401: Employment – examples of indicators
– Number and rate of hires by age, gender and region.
– Turnover rates by age, gender and region.

GRI 403: Occupational health and safety – examples of indicators
– Types and rates of accidents, incidence rate of occupational diseases, rate of days of absence, by region and gender.

GRI 405: Diversity and equal opportunities – examples of indicators
– Percentage of individuals in the governance bodies in each of the diversity categories: gender, age, any other relevant diversity category (minorities, vulnerable groups).
– Percentage of individuals by employee category for each of the diversity categories outlined above.

Box 1.8. GRI indicators as a means of ensuring the reliability of international comparisons (source: GRI official website)
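As an illustration, a GRI 401-style turnover rate by gender can be computed from individual records as below. The records are invented, and the convention used (departures during the period divided by headcount over the period) is only one possible definition among several.

```python
# Illustrative computation of a turnover rate by gender (GRI 401 style).
# Records are invented; the denominator convention is an assumption.
from collections import defaultdict

records = [
    # (gender, left_during_year)
    ("F", False), ("F", True), ("F", False), ("F", False),
    ("M", True), ("M", False), ("M", False), ("M", True), ("M", False),
]

headcount = defaultdict(int)
departures = defaultdict(int)
for gender, left in records:
    headcount[gender] += 1
    departures[gender] += int(left)

turnover_rate = {g: departures[g] / headcount[g] for g in headcount}
```

The same grouping logic extends directly to age bands or regions, which is all the bivariate cross-tabulation required by the standard amounts to.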

In other cases, it is with a view to an audit or obtaining a label that a company calculates and provides quantified indicators in the HR field. For example, obtaining the GEEIS label (Gender Equality European & International Standard) requires providing figures on different dimensions of gender equality within the company. Similarly, the international certification standards related to working conditions, regularly used during social audits, are partly based on quantified information (Barraud de Lagerie 2013).

1.2.1.2. HR dashboards

Beyond these obligations, which depend on the legal contexts specific to each country, HR actors have strong incentives to produce indicators on the different themes that concern them, particularly for management purposes. These statistical measurements usually lead to results in the form of cross-tabulated statistics. Le Louarn (2008) associates this approach with the production of dashboards, which he sees as steering tools that can be used to guide decisions or actions. Several examples can be given: absenteeism monitoring, social climate surveys, recruitment process monitoring, etc.

These examples can first be analyzed following Desrosières' (2008b) distinction between the survey and the administrative register. Desrosières applies this distinction in the context of official statistics, but it is also enlightening for the HR field. It makes it possible to distinguish between administrative data, produced by administrative forms, for example, and survey data, collected by questionnaires sent to all or a subset of the population. Most administrative data are accessible in the HRIS (HR Information System), which has gradually grown considerably in companies (Bazin 2010; Laval and Guilloux 2010). Historically, the payroll process was the first to be computerized, requiring and enabling the computerized collection of individual employee data. Gradually, other processes have been computerized as well (Cercle SIRH 2017): time and activity management, recruitment, training, career management, etc. These data have the advantage of covering the entire employee population of a company exhaustively. They can be used, for example, to draw up a statistical portrait or a dashboard of absenteeism within a company (absence data are usually computerized, in particular because absences can affect remuneration) or to build the comparative situation report mentioned in the section on legal reporting above.

However, on some HR topics, HRIS data may be insufficient. For example, variables that could be useful in addressing a phenomenon such as employee engagement are rarely available in the HRIS. As a result, companies that wish to measure this phenomenon most often use employee surveys.
These surveys generally take the form of online or face-to-face questionnaires, to which an anonymized sample of employees responds. The company has two options: use a standard survey whose questions are predefined by the organization selling it (such as the Gallup survey, see Box 1.9), or construct a specific questionnaire. The first option has the advantage of facilitating comparison with other companies, while the second allows better consideration of the context of the company concerned. However, the second also requires in-depth reflection on what the company is trying to measure, given the variety of concepts related to engagement: job satisfaction, organizational commitment, etc. In
addition, companies must also define the temporality and frequency of their engagement survey. Should it be an annual or biannual survey, or much shorter and more frequent surveys, sent out weekly, for example, on specific topics, so as to take the pulse of the workforce (hence the name of these surveys: pulse surveys)? Recently, startups (Supermood and Jubiwee) have developed offers dedicated to measuring engagement or quality of life at work based on very short questionnaires, called "micro-surveys", sent regularly, at a weekly rate, for example (Barabel et al. 2018).

Gallup is an American company specializing in surveys in the field of management and HR. Gallup offers a 12-question engagement measurement scale, with items such as: "I know what is expected of me at work"; "In the last seven days, I have received recognition or praise for doing good work"; "My company's mission or objectives make me feel that my work is important". Gallup regularly publishes a report using this measure of engagement to make international comparisons. In its 2017 report, the company states that employees who are truly engaged at work represent only 15% of full-time employees. The report also indicates that this percentage varies by region, with a particularly low rate of engaged employees in Western Europe and East Asia. This measure has the advantage of allowing comparison between companies and between countries. However, it is regularly criticized by academic research for the lack of a conceptual definition of what employee engagement is, and therefore for the insufficient justification of the questions used to measure it. In particular, Gallup does not define whether engagement is an attitude or a behavior, and leaves uncertainty about the relevant level of analysis: is it a purely individual phenomenon, or does it incorporate collective elements?

Box 1.9. The Gallup engagement measurement scale (source: Gallup official website; Little and Little 2006)

However, the distinction made by Desrosières (2008b) is insufficient in the HR field because it does not cover all available data sources. In addition to administrative data (HRIS) and survey data, process performance data are also available, whose collection is now made possible by the increasing computerization of these processes. For example, most companies have now computerized their recruitment process, in the sense that they use software or a platform dedicated to this process. Yet this software produces a considerable amount of data, for example, on candidates, but also on the performance of the process itself. The company can thus collect information on the conversion rate between the number of clicks on a job offer and the number of applications, the duration of each recruitment stage, the time required to fill openings, etc. All this information can be valuable in measuring, for example, the attractiveness of the company or the performance of the recruitment process.

Proponents of the evidence-based management (EBM) approach recommend defining three types of indicators related to the performance of HR processes (Cossette et al. 2014; Marler and Boudreau 2017): efficiency indicators (for recruitment, for example: number of candidates, time required to recruit them, recruitment costs), effectiveness indicators (measuring the quality of successful candidates and their fit with the organization's needs) and impact indicators (measuring the effect of successful recruitment on the organization's performance). The purpose of these indicators is to improve the performance of HR processes.

In summary, HR dashboards gather data defined by the company in order to monitor an HR process or phenomenon. Le Louarn (2008) suggests classifying them into four types: operational dashboards, related to HR processes (recruitment, remuneration, training, evaluation, for example); HR results dashboards, related to employees (workforce, attitudes, behavior); strategic HR dashboards, related to strategic HR management tools (recognition, skills); and cost and revenue dashboards, related to HR expenditure and added value. This typology gives a good idea of the variety of HR topics that dashboards can cover.

1.2.1.3. Reporting and dashboards, characterized by a bivariate vision and an objective of compliance

Unlike the quantification supporting decision-making at the individual level, reporting and dashboards aim to inform decision-making at the collective level.
Thus, the figures in a report are most often indicators aggregated at the level of the organization or its entities. Most organizations also define rules ensuring that no figures are provided for groups of fewer than five people, in order to guarantee anonymity. However, two phenomena limit this role of reporting and dashboards: bivariate indicators generally remain insufficient to account for the complexity of certain HR phenomena, and it is relatively frequent that significant efforts are devoted to producing reports or dashboards that are subsequently rarely used, if at all.


First of all, both reporting and dashboards most often adopt a univariate or bivariate view of the phenomena they measure, in the sense that they present their results in the form of cross-tabulated statistics: absenteeism crossed with gender, profession, level of responsibility, etc. The example of the comparative situation report given above is particularly illustrative of this approach, since companies are required to systematically produce gendered indicators (i.e. indicators cross-tabulated with gender). Similarly, the standard GRI indicators are also bivariate. Being able to cross two variables is valuable, but it can prove insufficient, particularly for understanding complex, multifactorial phenomena, i.e. those driven by several factors, as is the case with many HR phenomena. The example of equal pay for women and men sheds particular light on this limitation (Box 1.10).

The pay gap between women and men has many causes: differences in initial qualifications, levels of responsibility and working time, but also direct discrimination (the fact that women are paid less than men with equivalent profiles). As a result, the measurement of the average aggregate pay gap is of limited use, given the very wide range of possible explanations. Economists and statisticians agree on using a decomposition of the pay gap that dates back to the 1970s: the Blinder-Oaxaca decomposition (Blinder 1973, Oaxaca 1973). This decomposition distinguishes the part of the pay gap explained by differences in diplomas, levels of responsibility and occupations between the female and male populations from the unexplained part. Because several variables explaining the pay gap (level of qualification, years of experience, working time, etc.) are considered simultaneously, it also becomes possible to quantify the effect of each of these variables on the overall gap. This is therefore a valuable tool for policies aimed at reducing the pay gap: the actions to be taken differ according to the source of the gap (efforts to promote women if it comes from a difference in levels of responsibility, salary adjustments if the gap between identical profiles is significant, etc.). As a result, bivariate pay gap indicators are not necessarily the most relevant measure for defining actions to reduce pay inequality.

In a large French company, the average pay gap (estimated for full-time pay) at the end of 2013 was 6.44%. The gap can also be calculated by classification level, resulting in a gap of 4.14% at the lowest classification level and 0.42% at the highest. However, these two ways of calculating the pay gap provide little information on its sources. The Blinder-Oaxaca decomposition shows that the pay gap not explained by differences in responsibility levels and professions is less than 1%. Conversely, much of the gap is due to differences in hierarchical levels between women and men. This leads to the idea that reducing the overall pay gap will necessarily require an effort to promote women, whereas the company's policy, based on bivariate indicators, focuses more on achieving equal pay for equal jobs.

Box 1.10. Bivariate indicators of equal pay (source: actual data from a large French company)
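The twofold Blinder-Oaxaca decomposition described in Box 1.10 can be sketched in a few lines. The example below uses synthetic data (the figures, variable names and effect sizes are invented, not the company's data) and the male wage equation as the reference structure; a real analysis would add more covariates and standard errors.

```python
import numpy as np

def ols(X, y):
    """OLS with intercept; returns the coefficient vector (intercept first)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

def oaxaca_blinder(X_m, y_m, X_f, y_f):
    """Twofold Blinder-Oaxaca decomposition of the mean wage gap.
    Returns (explained, unexplained), with the male wage structure
    as the reference (Oaxaca 1973)."""
    b_m, b_f = ols(X_m, y_m), ols(X_f, y_f)
    xbar_m = np.concatenate([[1.0], X_m.mean(axis=0)])
    xbar_f = np.concatenate([[1.0], X_f.mean(axis=0)])
    explained = (xbar_m - xbar_f) @ b_m   # gap due to different characteristics
    unexplained = xbar_f @ (b_m - b_f)    # gap remaining at equal characteristics
    return explained, unexplained

# Synthetic illustration: wages driven by hierarchical level, with women
# under-represented at the top (as in the company described in Box 1.10).
rng = np.random.default_rng(0)
level_m = rng.integers(1, 5, 300)          # men: levels 1-4
level_f = rng.integers(1, 4, 300)          # women: levels 1-3
y_m = 30 + 5 * level_m + rng.normal(0, 2, 300)
y_f = 30 + 5 * level_f + rng.normal(0, 2, 300)

explained, unexplained = oaxaca_blinder(
    level_m.reshape(-1, 1), y_m, level_f.reshape(-1, 1), y_f)
total_gap = y_m.mean() - y_f.mean()
# The two components sum exactly to the raw mean gap.
```

Because each OLS fit includes an intercept, the two components add up exactly to the raw mean gap; with several covariates, the explained part can further be broken down variable by variable, which is what makes the decomposition useful for targeting actions.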

In addition, the production of indicators, particularly for reporting purposes, is often carried out under pressure from legal obligations or, more generally, for compliance purposes, as in the case of figures produced for audits or certifications. These figures do not always lead to awareness or concrete action. They are produced for a specific purpose (obtaining a label, complying with the law) and are not necessarily used beyond it. The same phenomenon is found at the local level: the management of a department may ask an entity to produce figures, and the entity may produce them, but this does not guarantee that it will use them to improve its understanding or decision-making. This is what we observed within the French division of a large multinational company (Box 1.11).

As we have seen, French law requires the publication of indicators on gender equality. In the French division concerned, the legal indicators were supplemented by indicators defined during negotiations with the unions, and the company undertook to provide these figures not annually but semiannually. The result is a half-yearly report containing more than 80 indicators. In addition, the report must be produced at the division level, but also at the level of each of its 20 entities. However, while the report is indeed consulted and used at the divisional level, particularly during dialogue with the unions, it is little mobilized locally. The local HR actors who have to produce the report complain that it comes out too frequently and contains too many indicators, and explain that they do not use it because they find it unnecessarily complicated, a position also found among local union representatives. Thus, they produce the report to comply with the division's request, but do not use it.

Box 1.11. Indicators produced to ensure compliance but not used (source: Coron 2018a)


On the other hand, in the case of dashboards, the objective stated by those who request their production is often to improve and inform decision-making. Le Louarn (2008) explains that HR dashboards should help HR management make better decisions and steer their actions, and thus contribute to the achievement of objectives. However, this normative purpose overlooks the fact that there may be gaps within organizations between discourse and practice, and between the design of a system and its use at the local level. Like any management tool, a dashboard is updated through use (Chiapello and Gilbert 2013), and it is in this updating that a gap can appear between the objectives stated by the tool's designers and its concrete use. Box 1.11 provides an example of such a gap. Chiapello and Gilbert (2013) allow us to analyze it by recalling the limits of the rational approaches that underlie beliefs in the power of management tools. The sociotechnical approach, in particular, affirms the need for joint optimization of management tools and social systems, and emphasizes that one dimension cannot be changed without acting on the other. As a result, a new dashboard, for example, cannot be introduced without taking into account the resistance, or the deviations from the intended use, that will inevitably arise.

Finally, reporting and dashboards have two main limitations: their bivariate dimension, given that HR phenomena are often too complex to be understood by simply crossing two variables, and their often incomplete use, at both central and local levels. The dashboard approach is nevertheless explicitly part of an EBM approach: the goal set by the designers of this type of tool remains to improve decision-making and thus human resources management. Le Louarn's (2008) position is exemplary of this approach. He advocates a "staircase model" linking HRM practices, HR results, organizational results and long-term business success. According to him, each of these dimensions can give rise to one or more dashboards, and the links between the dimensions can be estimated through correlations between indicators in these dashboards. Other authors support this vision: Lawler et al. (2010) explain that HR must develop its data collection and publication activity if it wants to make the HR function a strategic player in the organization. Boudreau and Ramstad (2004), however, make a clear distinction between producing or publishing data and integrating these data into an analytical model that makes it possible to derive value from them and, above all, to demonstrate the value of the HR function's activity.
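The staircase model mentioned above lends itself to a simple numerical sketch: each "step" is an indicator drawn from a dashboard, and each link between steps is estimated by a correlation. All figures below are invented for illustration; real dashboards would supply the unit-level indicators.

```python
import numpy as np

# Hypothetical unit-level dashboard indicators (one row per business unit).
training_hours = np.array([10, 14, 8, 20, 16, 12])          # HRM practice
engagement     = np.array([62, 70, 58, 80, 74, 66])         # HR result (survey score)
revenue_growth = np.array([1.2, 2.0, 0.8, 3.1, 2.4, 1.6])   # organizational result

# Each "step" of the staircase is estimated by a pairwise correlation.
r_practice_result = np.corrcoef(training_hours, engagement)[0, 1]
r_result_perf     = np.corrcoef(engagement, revenue_growth)[0, 1]
```

A high correlation at each step is what the business-case argument relies on; it remains a correlation, however, and says nothing by itself about the direction or existence of a causal link.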


1.2.2. HR analytics and statistical studies

HR analytics is part of this second approach. The literature review conducted by Marler and Boudreau (2017) allows us to define precisely what it is. While the reporting and dashboard approach aims to produce HR metrics, HR analytics uses statistical techniques to integrate these data into analytical models. The literature on the subject is still relatively young, but it allows us to identify some examples of the implementation of HR analytics and to determine its main characteristics.

1.2.2.1. HR analytics: definitions and examples

Marler and Boudreau (2017) draw on the various articles published on HR analytics to define its main characteristics. The first characteristic, identified in particular by Lawler et al. (2010), is the use of statistical and analytical methods on HR data. This first characteristic emphasizes a methodological distinction between reporting and dashboards, on the one hand, and analytics, on the other. It can be linked to the limitations of reporting and dashboards highlighted above, in particular the fact that these are bivariate approaches, whereas many HR phenomena are multivariate. The second characteristic, presented in particular by Rasmussen and Ulrich (2015), is the identification and measurement of causal relationships between different phenomena, HR and non-HR. Identifying these cause-and-effect relationships generally requires relatively sophisticated statistical methods (e.g. "all other things being equal" reasoning) and, in any case, a multivariate approach. The third characteristic links HR analytics to decision-making, in line with the EBM approach already mentioned. The idea put forward by the authors reviewed by Marler and Boudreau (2017) is that HR data collection, combined with sophisticated quantitative methods, provides evidence with which to improve management. Marler and Boudreau conclude by proposing a definition of HR analytics:


“An HR practice enabled by information technology that uses descriptive, visual, and statistical analyses of data related to HR processes, human capital, organizational performance, and external economic benchmarks to establish business impact and enable data-driven decision-making” (Marler and Boudreau 2017, p. 15).

This definition remains abstract; several examples can show what it covers in practice. The first, cited by Garvin et al. (2013) in a case study, comes from Google, which has created a People Analytics team dedicated to analyzing HR data in order to model certain HR phenomena. Google mainly employs engineers and has a strong culture of data analysis. These two characteristics create strong incentives to adopt an EBM approach, in an attempt to demonstrate to employees the contribution of the HR function to the company's daily organization and performance. The People Analytics team examined, among other topics, the role of the manager: is the manager really essential? And, if so, what makes a good manager? To answer these two questions, it began by collecting information on the reasons for employees' departures in exit interviews. As this information was not sufficient, the team then focused on assessing the link between team performance and satisfaction with the manager. Since the relationship was measured as significantly positive (the teams of the highest-rated managers have lower turnover, a higher well-being index and higher performance, for example), the next step was to define what a "good" manager is. To this end, interviews were conducted with the managers who received the highest and lowest ratings. A semantic analysis of these interviews made it possible to define eight managerial "good practices". Finally, employees were asked to rate their managers on these eight practices in order to target the training needs of each manager individually.
This first example uses relatively simple statistical methods, with the exception of semantic analysis, but it differs from a reporting or dashboard approach in its ability to integrate a heterogeneous set of data into a meaningful approach and model.

The second example concerns the modeling of workplace accidents (Pertinant et al. 2017). Occupational injury is a highly predictable phenomenon, in the sense that it is determined by a number of variables that can be readily identified. The authors use regression techniques based on "all other things being equal" reasoning. The principle of this reasoning is to compare profiles that are identical except on one point, which makes it possible to isolate the effect of that factor on the variable of interest (in this case, whether or not an accident occurs). Methodologically, this means studying the effect of a single characteristic while controlling for the effect of the other characteristics. This method has the great advantage of isolating the effect of one or more variables on another, especially in the case of multifactorial phenomena. On the other hand, it is sometimes criticized for not adequately reflecting actual mechanisms: the "all other things being equal" situation is an artifice that does not exist in reality. The authors are thus able to isolate the effect of different types of exposure on injury (Box 1.12). This example illustrates a case where a complex and multivariate phenomenon, such as workplace accidents, can be analyzed in detail, taking into account all the explanatory variables.

The authors conduct a study on the determinants of work accidents. They begin by selecting the variables that could influence accident rates, and identify five types: working conditions and work rhythm; work organization (steady work, versatility, etc.); company context (size, sector, etc.); employee profile (age, gender, etc.); and employer actions related to prevention (training, for example). The use of a regression model with "all other things being equal" reasoning shows that working conditions are in fact the first determinant of accident rates. The authors test the effect of different types of exposure to harsh conditions, including wind, toxic products, carrying heavy loads, noise, prolonged standing and risk of injury. They compare accident rates in cases of exposure with accident rates without exposure, and conclude that two types of exposure increase the risk. First, exposure to physical constraints (e.g. handling sharp objects) is the main antecedent of accidents. However, psychological constraints also play a role, since activities where employees feel "always in a hurry" also record a significant increase in accidents. The authors then look at the effectiveness of the employer's actions to reduce workplace accidents.

Box 1.12. Analyzing the determinants of occupational accidents (source: Pertinant et al. 2017)
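The "all other things being equal" reasoning of Box 1.12 is typically implemented with a logistic regression of accident occurrence on exposure variables, controlling for other characteristics. The sketch below fits such a model on synthetic data; the dataset, variable names and effect sizes are invented for illustration and are not those of Pertinant et al.

```python
import numpy as np

def fit_logit(X, y, n_iter=25):
    """Minimal logistic regression fitted by Newton-Raphson (IRLS).
    Returns coefficients: [intercept, b_exposure, b_hurry, b_age]."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-Xd @ beta))
        W = p * (1 - p)                        # IRLS weights
        H = Xd.T @ (Xd * W[:, None])           # observed information matrix
        beta += np.linalg.solve(H, Xd.T @ (y - p))
    return beta

# Synthetic workforce: physical exposure and time pressure raise accident risk.
rng = np.random.default_rng(42)
n = 400
exposure = rng.integers(0, 2, n)     # handles sharp objects (0/1)
hurry = rng.integers(0, 2, n)        # feels "always in a hurry" (0/1)
age = rng.normal(40, 10, n)
logit = -2.0 + 1.5 * exposure + 0.8 * hurry    # age has no true effect here
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([exposure, hurry, (age - 40) / 10])
beta = fit_logit(X, y)
# beta[1] estimates the effect of physical exposure with time pressure and
# age held constant: the "all other things being equal" comparison.
```

The coefficient on exposure is positive while the coefficient on age stays near zero, which is exactly the kind of conclusion the regression approach is designed to isolate; it also illustrates the criticism quoted above, since no employee actually differs from another on exposure alone.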

The third example comes from a research study on well-being at work (Salanova et al. 2014). The researchers collected data on employees from different companies and sectors, and arrived at a typology of four types of employees: "relaxed", "enthusiastic", "workaholic" and "burned-out" (Box 1.13). Beyond its intrinsic interest for HR, this typology has the advantage of underlining the importance of the "pleasure" factor at work, since it is the dimension that most structures the typology. The typology method can prove valuable in HR, as it has long been in marketing, since it makes it possible to segment a population (that of employees, for example). It thus offers the opportunity to consider the employee population not as a homogeneous whole, but to identify different groups of employees that are not predetermined in advance. Once again, this method is based on taking a large number of variables into account simultaneously, and therefore goes beyond the limits of bivariate reasoning.

The authors focus on workplace well-being and have two objectives: to identify groups of employees who are homogeneous in terms of well-being, and to determine the main characteristics that separate these groups. To do this, they start from several theoretical models (affective, cognitive and affective-cognitive approaches) and integrate them into a multifactorial model including pleasure and excitement, depression and enthusiasm, challenges and skills, energy and identification. They use data from an online questionnaire administered to more than 750 Spanish employees from different sectors and age groups. The questionnaire covers several dimensions: well-being, work-related constraints, work-related resources, personal resources and the psychological effects of work (interest in work, intention to leave the company, etc.). A typological analysis then identifies homogeneous groups of employees: "relaxed", "enthusiastic", "workaholic" and "burned-out". The authors then compare the groups with working time and show that the four groups are characterized by very different working hours: enthusiastic and workaholic employees work longer hours on average than relaxed and burned-out employees. Finally, they point out that the "pleasure at work" dimension is the one that best differentiates the groups. This study was carried out as part of a research project and concerns a sample of employees from different companies. The authors nevertheless make suggestions for HR practitioners: the importance of the "pleasure at work" dimension can constitute a course of action for the HR function (setting up actions to increase pleasure at work in order to reduce the risk of burnout), and HR actors can take advantage of this typology to propose targeted interventions aimed at preventing burnout or work addiction, but also at promoting commitment.

Box 1.13. Analyzing the main lines of well-being at work (source: Salanova et al. 2014)
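A typological analysis like the one in Box 1.13 usually relies on a clustering algorithm such as k-means. The sketch below runs a minimal k-means on synthetic two-dimensional scores (pleasure and energy, invented here) and recovers four groups reminiscent of the typology; the actual study uses many more dimensions and a validated questionnaire.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal k-means (Lloyd's algorithm) with deterministic
    farthest-point initialization. Returns (centroids, labels)."""
    centroids = [X[0]]
    for _ in range(k - 1):                     # farthest-point init
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)              # nearest centroid
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Synthetic survey scores: (pleasure at work, energy/activation), one row per
# employee, grouped to mimic the four profiles of the typology.
rng = np.random.default_rng(3)
profiles = {"relaxed": (8, 2), "enthusiastic": (8, 8),
            "workaholic": (2, 8), "burned-out": (2, 2)}
X = np.vstack([rng.normal(center, 0.4, (25, 2))
               for center in profiles.values()])
centroids, labels = kmeans(X, 4)
# Employees generated from the same profile end up in the same cluster.
```

Note that k-means only returns anonymous groups; naming them "relaxed" or "burned-out", as the authors do, is an interpretive step based on the position of each centroid.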

1.2.2.2. A multivariate approach for analytical, decision-making and argumentative purposes

The examples given above make it possible to characterize more precisely the objectives put forward for HR analytics. We focus here on three of them: the analysis of complex HR phenomena, decision-making and argumentation.

As we have pointed out, many HR phenomena are multifactorial, and therefore too complex to be understood by simply cross-tabulating variables. HR analytics is seen as a way to overcome this problem by providing statistical methods capable of integrating several factors or variables simultaneously, as in the example of the determinants of work accidents (Box 1.12).

The examples given in the previous section also show the importance of the decision-making purpose of HR analytics. In several of the cases presented, the purpose of data analysis is to guide or inform decision-making. This inscription in the EBM approach is present in managerial discourse, but also in some research on the subject (Lawler et al. 2010). The decision-making purpose is linked to the analytical purpose: the very name of the EBM approach emphasizes the notion of "evidence". In the context of HR analytics, it is both the sophistication of the methods used and their recognition by the scientific community as "scientific" methods that provide this guarantee of proof. The underlying idea is that a better analysis of reality provides better evidence and ultimately informs decision-making. This argument may seem difficult to refute at first sight. In the example of the gender pay gap, the use of a scientifically validated decomposition provides a more accurate picture of the causes of pay gaps, and thus allows more appropriate measures to be defined (Box 1.10). However, it is also possible to argue that this decomposition tends to justify the part of the pay gap explained by differences in the characteristics of the female and male populations, and thus clears companies of any responsibility for this part of the gap and its reduction (Meulders et al. 2005).

Finally, HR analytics also has an argumentative purpose: the results of data analysis can be used to support statements or theses. For example, in the project conducted by Google, the results confirm the hypothesis that managers influence the performance of their team and therefore, more generally, that of the company.


More specifically, data analysis is sometimes used to demonstrate the importance of the HR function itself, in particular by establishing measurable links between variables related to HR activity and performance variables. Le Louarn's (2008) staircase model was a first step in this approach, which is often described as a business case, and which has greatly benefited from the contributions of HR analytics, seen as providing more rigorous evidence of the existence of these links. Two examples of business cases can be given here. The first is the business case for engagement, developed in particular by Gallup (see Box 1.9). In parallel with the construction of an engagement scale, Gallup conducts quantitative studies on the link between employee engagement and company performance. The firm is thus able to show in its 2017 report that the units in the highest quartile of employee engagement are 17% more productive and 21% more profitable than those in the lowest quartile. The second example concerns gender diversity in companies. A relatively large body of research has developed around measuring the link between gender diversity (of staff, boards of directors, executive committees) and the economic and financial performance of companies. At the same time, a managerial discourse has spread on the subject, under the impetus of diversity departments wishing to enhance and legitimize their action (Box 1.14). Once again, sophisticated methods of data analysis have supported and partially legitimized these discourses. These two examples highlight the argumentative purpose of HR analytics, which validates but, above all, supports managerial theses and discourses, particularly on the importance of the HR function in the company.
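A Gallup-style quartile comparison can be reproduced in a few lines on unit-level data. Everything below is synthetic: the engagement scores, productivity indices and resulting lift are invented for illustration and are not Gallup's figures.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical business-unit indicators: an engagement survey score and a
# productivity index per unit (200 units, purely synthetic).
engagement = rng.normal(60, 10, 200)
productivity = 50 + 0.4 * engagement + rng.normal(0, 5, 200)

# Compare the top and bottom engagement quartiles, as in the business case.
q1, q3 = np.quantile(engagement, [0.25, 0.75])
low = productivity[engagement <= q1].mean()
high = productivity[engagement >= q3].mean()
lift_pct = (high - low) / low * 100
# A positive lift supports the argument, but remains a correlation:
# it does not by itself establish that engagement causes productivity.
```

The simplicity of the calculation is part of its rhetorical force: a single percentage summarizes the business case, while the underlying causal question remains open.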
Finally, HR analytics addresses some of the limitations of reporting and dashboards, while remaining part of a relatively similar approach: using data to analyze phenomena at the organizational level and to improve decision-making. Like reporting and dashboards, HR analytics does not focus on the individual level, since it only provides aggregate indicators. Nevertheless, it is more explicitly part of an analysis, decision-making and argumentation process.


The business case for gender diversity was developed by "women entrepreneurs" (Blanchard et al. 2013) to promote gender equality within companies. This discourse is based on several types of arguments (Amintas and Junter 2009): women would bring specific skills and behaviors that would benefit companies; a greater variety of profiles would allow a better match between the company's population and society; taking gender equality into account would correspond to a societal trend; fighting gender stereotypes would prevent the company from depriving itself of potential "talents"; and a feeling of equality in the company could have a positive effect on employee productivity. This business case has given rise to numerous criticisms, concerning the subordination of gender equality to the notion of performance (Sénac 2015), the depoliticization of the principle of equality, and the essentialization of female and male characteristics that it implies. Nevertheless, a great deal of quantitative research has been carried out on this business case, aimed at measuring the link between gender diversity and performance. This work mobilizes different gender indicators (share of women among employees, on management committees, on executive committees, etc.), different performance indicators (Tobin's Q, profitability, etc.) and different methods (correlations, instrumental variables, regressions). However, the results of these studies contradict each other on the link between gender diversity and performance, leaving room for ambiguity (Smith et al. 2006).

Box 1.14. The business case for gender diversity

1.3. Big Data and the use of HR algorithms

More recently, the emergence of Big Data and the rapid spread of the notions of algorithms and artificial intelligence have created new uses for HR quantification. While these uses are still in their infancy and, to date, represent a horizon more than a concrete reality, they introduce relatively significant breaks with reporting, dashboards and HR analytics.

1.3.1. Big Data in HR: definitions and examples

Several terms have emerged in the wake of the notion of "big data"1. "Big data" refers to very large data sets produced partly as a result of digitization: data from social networks, biometric sensors and geolocation, for example (Pentland 2014); "Big Data" refers to the new uses, methods and objectives related to these data; "algorithms" refers to one of these new uses, perhaps the one that causes the most important changes in everyday life; and, finally, "artificial intelligence" refers to a particular class of learning algorithms capable of performing tasks previously reserved for human beings (image recognition, for example). In any case, it seems necessary to define these different terms. Moreover, the transposition of these terms and concepts into the HR field is not without difficulties.

1.3.1.1. Big Data: generic definitions

The term Big Data remains poorly defined (Ollion and Boelaert 2015). In 2001, Gartner defined Big Data using three "Vs" (Raguseo 2018). First of all, the Volume of data must be large. Even if few definitions specify what a "large volume" is, such a volume may, for example, require working on dedicated servers and storage platforms rather than on traditional computers. These data are also characterized by their Variety, in the sense that heterogeneous data sources (internal and external) can be used, and that the data can be structured or unstructured (unstructured data, such as text or images, cannot be stored in a traditional spreadsheet, unlike structured data). Finally, these data are dynamic, i.e. updated in real time, which is called Velocity. Two other "Vs" have been added more recently (Bello-Orgaz et al. 2016, Erevelles et al. 2016): Veracity, referring to the issue of data quality, and Value, referring to the idea of deriving benefit from these data. This definition therefore emphasizes the technical characteristics of the data mobilized (volume, variety, velocity, quality), and leaves in the background the question of the methods used to process these data and the purposes of such processing.

1 In the rest of the book and for the sake of clarity, we write "big data" in lower case when we refer to the data themselves (big data, plural), and "Big Data" in upper case and singular when we refer to all the new uses related to these data.
Yet Kitchin and McArdle (2016) show that very few data sets usually considered as "big data" (Internet searches, image sharing on social networks, data produced by mobile applications, etc.) actually display all the characteristics identified above. It then becomes necessary to revise this definition by focusing on other dimensions: the methods used to exploit the data, their uses, etc.

Other works focus more on methods. Mayer-Schönberger and Cukier (2014) evoke, in addition to the technical characteristics of the data, the passage from a paradigm of causality to a paradigm of correlation. More specifically, they argue that, while correlation analysis does not provide information on the nature of a relationship between two variables, it is sufficient for conducting so-called predictive analyses, since it identifies observable data that provide information on the behavior of unobservable data (Box 1.15). In a similar line of thought, Kitchin (2014) questions the need to review our epistemological research paradigms in light of the emergence of such large volumes of data, which could lead us from a knowledge-driven science to a data-driven science, i.e. a totally inductive science in which all lessons are drawn from the data, or a deductive science whose assumptions are generated from the data rather than from theory. The same trend can be observed within organizations, with the advent of data-driven management (Raguseo 2018).

Correlation analysis is regularly criticized because it does not provide information on the rationale for a phenomenon or how it works. As a result, it rarely leads to a better understanding of reality. On the other hand, it can be very effective in predicting a phenomenon: if two variables A and B are correlated, a change in A can predict a change in B, even if there is no causal relationship between A and B. In other words, correlations allow us to identify surrogate variables (A in our example) for unobservable variables (B). In addition, within a very large volume of data, a correlation, even of small magnitude, may be relevant. This notion of a surrogate variable is essential to understand how some Big Data programs work. For example, to predict breakdowns, the parcel delivery company UPS now uses many sensors affixed to its vehicles as surrogates for the imminent risk of failure (which cannot be observed directly). In the health field, the analysis of data flows from various sensors (an electrocardiogram alone records 1,000 values per second) makes it possible to predict the onset of an infection before symptoms appear. Again, this predictive analysis is not based on the search for causes, but on the identification of observable variables (data from sensors) that can substitute for unobservable variables (risk of infection).

Box 1.15. Big Data and the renewed interest in correlation (source: Mayer-Schönberger and Cukier 2014)
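The surrogate-variable logic of Box 1.15 can be sketched numerically: an observable signal correlated with an unobservable outcome is enough to predict it, without any causal model. The sensor-and-failure framing below is invented for illustration (it is not UPS data).

```python
import numpy as np

rng = np.random.default_rng(1)
# Historical records: an observable sensor reading and an outcome that is
# only known after the fact (hypothetical numbers).
vibration = rng.normal(5.0, 1.0, 200)                    # observable surrogate
days_to_failure = 60 - 8.0 * vibration + rng.normal(0, 2.0, 200)

# A strong correlation is all the "model" we need for prediction.
r = np.corrcoef(vibration, days_to_failure)[0, 1]
slope, intercept = np.polyfit(vibration, days_to_failure, 1)

# For a vehicle in service, only the sensor is observable; the fitted line
# predicts the unobservable outcome without explaining its mechanism.
predicted = slope * 7.0 + intercept
```

This is exactly the shift described in the text: the fit says nothing about why vibration and failure are related, yet it supports an operational decision (schedule maintenance when the sensor value is high).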

Finally, some researchers are interested in the uses of Big Data. In particular, they highlight the notion of algorithms: algorithms for suggesting content, ranking Internet pages, predicting for the insurance or justice sector (Cardon 2015, O’Neil 2016), etc. While the notion of an algorithm is very old and simply refers to a finite sequence of instructions, an important evolution has recently been introduced by the emergence of learning

From the Statisticalization of Labor to Human Resources Algorithms

35

algorithms, which evolve according to input data, and which have allowed significant progress in the field of artificial intelligence (CNIL 2017). Cardon points out that this type of algorithm is particularly useful for processing large volumes of data, prioritizing them and selecting the information to present to the user. He insists on the fact that few aspects of contemporary society escape the presence of algorithms: Internet research, consumption of cultural products, decision-making, etc. This omnipresence of algorithms, linked to the explosion in the volume of data produced by individuals leads to what he calls the “computing society”. In particular, he defines four main types of algorithms that structure our relationship to the Internet and therefore to information: – audience and popularity measurements: counting the number of clicks (e.g. Médiamétrie); – authority measures: ranking of sites based on the fact that they can be referenced on other sites (e.g. PageRank); – reputation measures: counting the number of exchanges a content gives rise to (e.g. number of retweets on Twitter); – predictive measures: customization of the content offered to the user according to the traces they leave on the Internet (e.g. Amazon product recommendations). According to Cardon, these different types of algorithms profoundly structure more and more aspects of our lives, beyond our relationship to information. O’Neil (2016) goes further, pointing out that algorithms can have a very strong influence on our lives, since they have been used to make decisions in many areas (health, insurance, police, justice, etc.), and that they present a high risk of increasing inequality (Box 1.16). O’Neil mentions many examples of the use of algorithms to make decisions in the United States: granting a loan, calculating an insurance rate, recruiting a person, etc. 
She examines these examples through the lens of inequality, and defends the following thesis throughout her book: algorithms tend to increase inequality. Thus, pricing algorithms for automobile insurance are indexed mainly not on driving style or fine history, but on the credit score, a score calculated from an individual’s ability to manage their budget and repay their loans. For example, in Florida, drivers with a good driving history but a poor credit score pay on average about $1,500 more than drivers with a good credit score but prior drunk driving fines. The growing importance attached to the credit score as an indicator of the wider behavior of American citizens, and as a decision-making tool, is creating an economy that disadvantages the poorest. She points out another phenomenon: algorithms are built from past data, and therefore risk reproducing past biases. For example, a recruitment algorithm could construct the typical profile of a good engineer from recruitment history data. However, if the company has never recruited a female engineer, the algorithm may infer that a good engineer is necessarily a man. Finally, these two examples highlight an important point: algorithms are built by human beings, who have the ability to ask themselves the question of inequality and discrimination, and the power to build algorithms that produce less inequality. The challenge is therefore to spread a culture of algorithm ethics throughout the entire chain of algorithm production and use, from the data scientists who produce them to the professionals who use them to make decisions. This point will be discussed further in Chapter 5.

Box 1.16. The omnipresence of algorithms, a risk for equality? (source: O’Neil 2016)
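O’Neil’s engineer example can be made concrete with a deliberately naive sketch (the data, field names and scoring rule here are all invented for illustration): a model fit on a hiring history that contains no women ends up treating gender as predictive of being a “good engineer”.

```python
# Naive illustration (invented data): a scoring model fit on a biased
# hiring history learns that "male" co-occurs with every past hire, and
# therefore scores otherwise identical candidates differently by gender.

def fit_feature_rates(history):
    """For each (field, value) pair, compute its share among past hires."""
    counts = {}
    for candidate in history:
        for feature in candidate.items():  # e.g. ("gender", "male")
            counts[feature] = counts.get(feature, 0) + 1
    return {f: c / len(history) for f, c in counts.items()}

def score(candidate, rates):
    """Score = average historical hire rate of the candidate's features."""
    vals = [rates.get(f, 0.0) for f in candidate.items()]
    return sum(vals) / len(vals)

# A hiring history in which every recruited engineer happens to be a man.
history = [
    {"degree": "engineering", "gender": "male"},
    {"degree": "engineering", "gender": "male"},
    {"degree": "engineering", "gender": "male"},
]
rates = fit_feature_rates(history)

a = {"degree": "engineering", "gender": "male"}
b = {"degree": "engineering", "gender": "female"}
print(score(a, rates))  # 1.0 — matches the historical profile perfectly
print(score(b, rates))  # 0.5 — penalized solely because no woman was ever hired
```

The bias is entirely inherited from the training data, which is exactly O’Neil’s point: nothing in the code mentions discrimination, yet the output discriminates.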

These two authors thus show the new possibilities created by the use of algorithms and, among other things, the risks associated with them. Other authors are more positive, recalling the many advances made possible by the use of algorithms in the field of medicine: better quality diagnostics, for example (Mayer-Schönberger and Cukier 2014). Thus, the Big Data phenomenon corresponds to a combination of the technical characteristics of the data, the methodologies mobilized to process them and the emergence of new uses.

1.3.1.2. Big Data and the use of HR algorithms

However, the transposition of the concepts of Big Data and algorithms into HR is not so obvious. First, in terms of the technical characteristics of the data, the various “Vs” mentioned above are in fact rarely found in the HR field. For example, the volume of data contained in an HRIS rarely exceeds a single computer’s storage capacity. The most voluminous data of interest to the HR function are undoubtedly those related to emails (email exchange flows, or even exchange content), but the use of email data remains underdeveloped.

Second, HR data are rarely updated in real time: for example, annual reports containing HR figures are usually produced several months after the end of the year, reflecting the difficulty of producing the necessary data and making them reliable. On the other hand, HR has a certain variety of data sources, from HRIS data to data contained in candidates’ CVs, as well as data produced by employees on the internal social network, for example. However, so far, it is HRIS data that have been most mobilized, for example in the context of reporting, dashboards or HR analytics (Angrave et al. 2016). Social network data, like CV data, have emerged only more recently as statistically mobilizable, probably because they are unstructured. Thus, as we have seen, many companies commission time- and energy-consuming engagement surveys, and the idea that content from internal social networks could serve as an indicator of employee opinion and social climate has only recently emerged. Similarly, the possibility of using biometric sensors to measure cooperation or exchanges between or within teams is still relatively recent and little explored (Pentland 2014). The second obstacle to the transposition of Big Data and algorithms into HR comes from the difficulty of identifying potential new uses. For example, among the algorithms identified by Cardon (2015) and described above, which could be relevant in HR? With our limited hindsight and the few examples available today, it seems that prediction algorithms are the most transposable to the HR field, whether in recruitment, internal mobility, training or HR risk prediction (Box 1.17). O’Neil’s book and content from specialized HR sites allow us to identify some examples of the use of Big Data and HR algorithms. In the field of recruitment, algorithms for the (pre-)selection of applications have already been developed and implemented in some companies.
These algorithms may be based on highly variable statistical methods and principles. Thus, the most basic algorithms transform CVs (unstructured data) into structured data, to which human-defined sorting criteria can easily be applied. A second approach is to construct aptitude or personality tests and use the results of these tests to perform an initial screening. A third type of algorithm can look for the variables that best explain the performance of the company’s current employees (level of qualification, international experience, personality test, etc.) and identify which candidates most closely match these characteristics. A fourth type of algorithm can work on the principle of matching keywords contained in CVs and job offers. Finally, a fifth type of algorithm can retrieve and analyze data from millions of LinkedIn profiles to identify the most frequent routes leading to a given position, and thus determine which candidates have the routes closest to it. In the area of internal mobility, perhaps the most common use case is customized job suggestions (modeled on Amazon’s customized suggestions to its customers). The most powerful custom suggestion algorithms, known as “collaborative filtering” models, are in fact relatively simple: they bring individuals closer together through their histories and suggest to one the additional content that the other has accessed. Thus, if A and B have relatively similar histories (of product consumption or work experience, for example), but B has purchased a product or held an additional position, this product or position will be suggested to A. Today, LinkedIn is already starting to suggest jobs to its users. As a result, companies may have an interest in developing their own internal suggestion tool, to avoid handing their mobility management over to an external actor of this type. In the field of training, several use cases of Big Data in HR can be identified. The first is to provide employees with personalized training suggestions, modeled on what can be done for internal mobility. The second, more complex case, the use of what is called adaptive learning, is beginning to develop, particularly for e-learning in the United States. The principle of adaptive learning is to adapt training content and materials to the learner based on an analysis of the traces they leave online when taking the training. For example, an algorithm can identify that an employee is less successful on a quiz after reading a text than after watching a video on the same subject, suggesting that videos are a better source of learning for them than texts.
In the same way, it can identify that a given individual is more effective when they perform several short sessions a day rather than a single longer one. Adaptive learning promises, among other things, gains in training effectiveness, but its development currently faces high costs (since it implies, among other things, producing the same training content on different media). Finally, Big Data can also be used to predict HR risks, such as absenteeism or resignations. Companies thus offer tools to predict with high reliability the probability of absenteeism within a team in the coming weeks (based on epidemiological data, for example) or individual resignation probabilities (based on variables such as home-to-work travel time or traces left on social networks, for example).

Box 1.17. Examples of how to use Big Data in HR (source: O’Neil 2016; specialized HR websites)
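The “collaborative filtering” principle described in Box 1.17 (suggest to A what a similar B has accessed) really is relatively simple, as the box notes. A minimal sketch, with invented employees and position histories:

```python
# Minimal user-based collaborative filtering, as described in Box 1.17:
# measure the similarity between two employees' histories (here, Jaccard
# similarity between sets of positions held), then suggest to the target
# the positions that the most similar colleague has held and they have not.

def jaccard(a, b):
    """Similarity between two sets: size of intersection over size of union."""
    return len(a & b) / len(a | b)

def suggest(target, histories):
    """Suggest positions from the most similar other employee's history."""
    others = {k: v for k, v in histories.items() if k != target}
    nearest = max(others, key=lambda k: jaccard(histories[target], histories[k]))
    return histories[nearest] - histories[target]

histories = {
    "A": {"analyst", "project manager"},
    "B": {"analyst", "project manager", "HR business partner"},
    "C": {"accountant"},
}
print(suggest("A", histories))  # {'HR business partner'}: B is closest to A
```

Real systems weight many neighbors rather than one, but the core logic (similar history, therefore similar next step) is exactly this.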

Beyond these cases of Big Data use in HR, another notion has emerged in connection with the rise of the platform economy (Beer 2019): management by algorithms (or “algorithmic management”). This notion refers to an increasingly frequent reality: on platforms such as Uber or Amazon Mechanical Turk, a worker receives their work through an algorithm, not from a human being, and the pay for this work is also determined by the algorithm (Box 1.18). The notion goes beyond the strict framework of HR, since it concerns management in general, but it raises many questions for the HR function. In particular, major HR processes such as recruitment, training, mobility and career management can be totally disrupted by this operating model. In addition, it requires rethinking the balance of the HR–manager–employee triptych, of which the HR function is often the guarantor.

When they want to start working, Uber drivers or Deliveroo couriers connect to the application, which geolocates them. They then quickly receive instructions for errands to be carried out. A trip consists of an address where they must pick up the customer or the delivery and an address where they must drop them off, but also an indicated travel time and a route to follow. In other words, almost all of the work is prescribed by the algorithm (down to the route to be followed). On a regular basis, couriers receive an evaluation report containing metrics such as the average time taken to accept an errand or average travel times. In addition, particularly on the Uber application, customers are strongly encouraged to rate drivers. The algorithm therefore plays the role of the manager in almost all respects, from work planning to evaluation. In other cases, the work is entrusted to a human being, but the individuals receive their work instructions without any human interaction.
For example, Google raters or Turkers2 work from home, with their own computers, and receive their work instructions via an online platform.

Box 1.18. Examples of management by algorithms (source: Lee et al. 2015; Paye 2017)

2 Google raters complement the work done by Google’s algorithm by manually evaluating the quality of web pages referenced by Google, but most often work for intermediary companies that sell their work to Google. Turkers work on Amazon’s digital work platform (Amazon Mechanical Turk) and provide tasks to complement the work of Internet algorithms (e.g. rating audio or video files, content moderation).
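The dispatching logic described in Box 1.18 can be caricatured in a few lines: an “algorithmic manager” that assigns each incoming errand to the nearest available worker, with no human intervention. The names and coordinates below are invented for the sketch:

```python
import math

# Toy "algorithmic manager" in the spirit of Box 1.18: each incoming
# errand is assigned to the nearest available courier, automatically.
# Positions are invented (x, y) coordinates; None means offline.

def nearest_courier(pickup, couriers):
    available = {name: pos for name, pos in couriers.items() if pos is not None}
    return min(available, key=lambda n: math.dist(available[n], pickup))

couriers = {"Ana": (0.0, 0.0), "Ben": (5.0, 5.0), "Carl": None}
errand = {"pickup": (4.0, 4.0), "dropoff": (9.0, 1.0)}

assignee = nearest_courier(errand["pickup"], couriers)
print(assignee)  # Ben — closest to the pickup point
```

Actual platform dispatchers optimize over many more variables (predicted demand, acceptance rates, ratings), but the structural point stands: the assignment decision is fully prescribed by code.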

Finally, even if the transposability of the notion of Big Data into the HR field raises certain questions, it is indeed possible to identify uses that are similar to this notion. However, how do these uses differ from the other quantification uses mentioned in the previous sections?

1.3.2. The breaks introduced by Big Data in HR

The different uses identified in the previous section make it possible to identify three major breaks introduced by Big Data in HR: automation, prediction and customization. Algorithms also present potentialities and dangers for the HR function. The term “break” may seem strong, but it corresponds to fundamental changes, both in the HR stance and in the definition of quantification. The next chapter will discuss the meaning and implications of these fundamental changes; the main purpose of this section is first to describe them.

1.3.2.1. Automation

Algorithms and Big Data in HR are positioned in relation to an automation horizon (whether getting closer to it or further away from it). Thus, two discourses coexist on the subject. The first explicitly aims at automation. For example, CV pre-selection algorithms are often presented as effective substitutes for recruitment managers: faster, more efficient, less expensive, etc. However, this discourse may run up against resistance to change and concerns about the future of the HR function. Moreover, in some countries, leaving decision-making on a subject as important as recruitment entirely to an algorithm remains legally and socially unacceptable. Another discourse therefore coexists with the first: it insists that these algorithms aim to be decision aids and in no way to replace human decision-making. Thus, IBM Watson’s recruitment algorithm (Watson Recruitment) is presented as a decision-making aid that structures and selects the information to be presented to the recruitment manager, who no longer needs to read all the CVs since a summary is made3.

3 See, for example, https://www.youtube.com/watch?v=ZSX75SIySiE (accessed October 2019).

1.3.2.2. Prediction

Algorithms and Big Data also provide a predictive dimension to the HR function. Unlike HR reports, dashboards and analytics, which focus on understanding past phenomena to make decisions in the present, Big Data HR uses past data to predict behaviors or wishes. Thus, constructing an algorithm for job or training suggestions amounts to trying to predict which job or training will be of interest to, or most suitable for, which employee. The example of predicting HR risks such as resignations or absenteeism was also given. In this respect, this approach is, to some extent, similar to the quantification of the individual mobilized in the context of recruitment and promotion and presented in section 1.1: quantification is mobilized for predictive purposes. This predictive dimension breaks with two other dimensions present in reporting, dashboards and analytics: the descriptive dimension and the explanatory dimension. However, this rupture is not as strong as it first seems. Indeed, from a methodological point of view, the methods that make it possible to explain (linear or logistic regression methods, for example) are generally the same as those that make it possible to predict, since they identify the determinants of the variable one is trying to explain (or predict). The disruption instead takes place on two levels: first, it implies a change in the positioning of the HR function; second, it raises new ethical questions in HR. Chapters 2 and 5 will come back to this.

1.3.2.3. Customization

Finally, several examples were given where algorithms and Big Data allow for some form of customization: sending customized suggestions for positions and training, for example. The idea that mobilizing large volumes of data can allow for some form of customization may seem counterintuitive. However, the underlying principle is that the multiplication of data on individuals makes it possible to gain in accuracy and thus return to the individual.
This goes beyond the simple idea of segmentation, which is based on connecting individuals to large “groups” and has had some success, both in HR and in marketing. Indeed, a collaborative filtering algorithm could theoretically result in a set of suggestions unique to each individual. In practice, this case is rarely observed, but the interindividual variety of the sets of suggestions potentially sent is well in line with a customization logic. Once again, this raises relatively new HR questions and issues, which we will come back to in the next chapter.

1.3.2.4. HR algorithms: potential, dangers and issues

As we have seen, algorithms are increasingly present and have a growing influence on different aspects of our daily and professional lives.

They offer many possibilities for the HR function, the organization and employees alike. For the HR function, algorithms can be a source of productivity gains if they save time on tasks of little added value, as shown in the Watson Recruitment video. Productivity gains can also come from the prediction of HR risks: an HR function capable of predicting absenteeism or resignations can take the necessary measures upstream to avoid, or at least limit, the associated losses, for example by building schedules that factor in predicted absenteeism, or by defining more appropriate retention programs. Algorithms also represent an opportunity for the HR function to reflect on its own positioning (particularly in relation to the transition from description or explanation to prediction). Finally, they provide the opportunity to offer new services to employees (training suggestions, for example) and perhaps to gain legitimacy with them. For the organization, as we have seen, management by algorithms is sometimes presented as a tool for better allocating tasks and resources, thus aiming at productivity gains. Finally, employees may find it beneficial to access services such as customized training suggestions. However, algorithms also present dangers, some of which we have already mentioned and some of which we will return to. We have thus referred to the risks of discrimination and inequality highlighted, for example, by O’Neil (2016). Added to these risks is the lack of transparency of some algorithms, which remain poorly accessible to most of the individuals concerned and whose operating modes are rarely explained (Christin 2017). Finally, these algorithms raise the question of the hegemony of the machine over the human being and of responsibility for decision-making: who is responsible, and who is accountable, for the decisions made by an algorithm? Its designers? Its users?
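The point made in section 1.3.2.2 (the same methods serve explanation and prediction) and the resignation-risk use case mentioned above can be illustrated together: a logistic regression of resignation on commute time yields a coefficient to interpret (explanation) and a probability to act on (prediction). The data below are invented and deliberately tiny:

```python
import math

# Section 1.3.2.2 notes that the same methods serve explanation and
# prediction. A tiny logistic regression of resignation (0/1) on commute
# time, fit by plain gradient descent (no libraries; data invented).

def fit_logistic(xs, ys, lr=0.1, steps=5000):
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

commute_hours = [0.2, 0.3, 0.5, 1.0, 1.5, 2.0]  # invented observations
resigned      = [0,   0,   0,   1,   1,   1]

w, b = fit_logistic(commute_hours, resigned)

# Explanation: the sign of w is read as "longer commutes go with
# resignation". Prediction: the same fitted model gives a resignation
# probability for a new employee with a 1.8-hour commute.
p_new = 1 / (1 + math.exp(-(w * 1.8 + b)))
print(w > 0, p_new > 0.5)  # True True
```

The model, the data and the fitting procedure are identical in both readings; only the use made of the output differs, which is exactly the rupture the section describes.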
Finally, one of the major challenges also lies in the possibilities of studying and analyzing these algorithms. A current of research is beginning to develop around this question and highlights the need for an “ethnography of algorithms”, i.e. a science that would study the different actors involved in the construction and use of algorithms, and that would highlight not only the technical aspects but also the political and social choices underlying these two stages (Lowrie 2017; Seaver 2017). This first chapter has thus allowed us to delineate different types of use of HR quantification: quantification of the individual and of work to inform and support decision-making, reporting and dashboards to describe situations and HR analytics to analyze them, and algorithms and Big Data HR as emerging trends embodying the use of data for automation, prediction and customization purposes. The following chapters each focus on a dimension outlined in this chapter: the link between quantification and decision-making, the appropriation of these tools by different agents, the subsequent effects on the HR function, and the ethical and legal dimensions.

2 Quantification and Decision-making

Chapter 1 highlighted the links between quantification and HR decision-making. Thus, quantification is used not only to inform and illuminate decision-making ex ante, but also to justify ex post the decisions taken. However, this information/justification dyad rests on what Desrosières (2008a) calls the “myth of objectivity”, i.e. the idea that quantification is a neutral reflection of reality. This myth has its origin in the positivist stance of the natural sciences. Although it has been strongly challenged by many studies, both in epistemology and in sociology, the myth of objective quantification still holds. It is all the more important in HR because this function needs arguments to support decisions that are often crucial for employees (section 2.1). More recently, as we have seen, the emergence of Big Data and algorithms has brought a new dimension to HR, based on the notion of personalization, i.e. decision-making tailored to the individual. This notion marks a change both for quantification and statistical science, historically positioned as a science of large numbers far removed from individuals, and for the HR function, historically positioned as a function managing collectives of individuals rather than individualities (section 2.2). Finally, the rise of predictive models is further reformulating the links between quantification and decision-making, focusing on decisions related to the future. Once again, this creates a break with the historical positioning of statistics and of the HR function. In addition, it invites discussion of the notion of performativity and questioning of the effect of quantification on individuals’ behavior, even outside the cases in which it explicitly predicts such behavior (section 2.3).

Quantifying Human Resources: Uses and Analyses, First Edition. Clotilde Coron. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

2.1. In search of objectivity

Using quantification to inform but also to justify decision-making means attributing to it characteristics and qualities that are difficult to find elsewhere, which can be described as a “data imaginary” (Beer 2019). These qualities can in fact be summed up in one key notion with many ramifications: objectivity (Bruno 2015; Beer 2019). The myth of objective quantification is not new, and is deeply rooted in the history of Western societies (Gould 1997; Crosby 2003; Supiot 2015). Yet much research has debunked this myth, for example by showing the extent to which quantification operations stem from socially or politically constructed choices, but also by highlighting the biases and perverse effects of quantification. It therefore seems legitimate to ask what explains the persistence of this myth, particularly within the HR function, or even in HR research, which often attributes this characteristic of objectivity to quantification. In fact, the HR function must regularly make decisions that can be crucial for individuals: recruitment, promotion, salary increases, etc. Having an apparently solid argument to justify these decisions probably makes them more acceptable and less open to challenge by the social body.

2.1.1. The myth of objective quantification

The myth of objective quantification dates back to the Middle Ages in Western civilization (Crosby 2003). To be able to claim objectivity, the quantification assumed by this myth must have several characteristics: neutrality, precision, timeliness, coherence and transparency.

2.1.1.1. The origins of the myth

Crosby (2003) studied the origins of the supremacy of quantification in Western society. The use of quantification goes back to ancient times.
Thus, many philosophers of Greek antiquity were interested in these questions, and Plato considered operations of commensuration essential to making better decisions and rendering human values less vulnerable to passions and fate (Espeland and Stevens 1998). It was not until the 13th Century, however, that quantification took on the importance it has today. Crosby even dates this turning point very precisely: he believes it took place between 1275 and 1325. During these 50 years, several major developments based on quantification occurred: the mechanical clock, geographical maps, double-entry accounting. This rise in quantification, made possible in part by the scientific developments of the same period, extended to the arts, with perspective painting and measured music (Box 2.1).

Between 1275 and 1325, Western civilization experienced several major inventions or developments involving the measurement of time, space, commercial transactions, music and painting. Before the end of the 13th Century, time was measured in relatively imprecise hours punctuated by church bells. The length of the hours varied according to the seasons and the time of day, to ensure that both day and night contained 12 hours, in summer as in winter. The invention of mechanical clocks, which can be traced back to the end of the 13th Century, changed this mode of operation, and the principle of equal hours replaced that of unequal hours from the beginning of the 14th Century. At the same time, cartography made considerable progress when cartographers began to consider the Earth’s surface as an area on which they could superimpose a grid of coordinates (latitudes and longitudes). This change implied adopting a much more measured vision of space than had prevailed until then. At the same time, the growth of trade meant that Western merchants had to manage a considerable number of transactions, a task complicated by the practice of credit and fluctuations in the value of currencies. Keeping a careful account book therefore became necessary to avoid payment or cash management errors. Double-entry bookkeeping was invented at that time in Italy and quickly spread throughout Europe as a practical solution for carefully tracking individual transactions. The various arts did not escape this trend toward a more measured world.
Thus, music shifted from unmeasured Gregorian chant (in the sense that the duration of each note was not predefined), monophonic and transmitted orally, to music that was written (on a staff) and measured, a necessary condition for making polyphony possible. In painting, the contributions of optics and geometry enabled the advent of perspective painting, of which the painter Giotto was one of the first masters.

Box 2.1. From 1275 to 1325, the incarnations of the rise of quantification (source: Crosby 2003)

Looking at more recent periods, Supiot (2015) identifies the major functions that have gradually been conferred on quantification in Western society, particularly in the fields of law and government: accountability, administration, judgment and legislation. Thus, accounting serves the objective of accountability and has several characteristics. First, it must give a true and fair view of reality. Next, it must have probative value (the figures entered in the accounts become evidence). Finally, it uses money as a measurement standard, making objects of very different orders (a material object, a service rendered, etc.) commensurable, i.e. comparable. The administration of an entity (State, organization, community, etc.) requires knowledge of its resources. Here, quantification is valuable for defining, cataloguing and measuring these resources. It even makes it possible to go further by identifying regularities, or recurring trends, which is just as valuable for an administrator. Judgment, or decision-making in situations of uncertainty, can also be based on quantification, and more specifically on probabilistic calculation: being able to measure the probability of an event occurring reduces uncertainty. Probabilistic calculation has also proved valuable in the production of legislation. Thus, as early as the 18th Century in France, philosophers debated heatedly whether preventive inoculation against smallpox should be made mandatory. Inoculation reduced the disease as a whole, but could be fatal for some individuals. Philosophers such as Voltaire and d’Alembert then took opposing positions on the basis of a probabilistic calculation proposed by Bernoulli (the probability of dying from inoculation versus the overall gain in life expectancy for the population as a whole).
Subsequently, the field of public health saw similar debates reappear, particularly in the 19th Century, between doctors who valued a standardized approach to care based on medical statistics, and doctors who valued a more individual approach, giving great importance to the exchange with the patient. Gardey (2008) looked at an even more recent period, studying the evolution and diffusion of calculating machines between 1800 and 1940. She shows that the complexity of the calculations carried out by machines evolved from arithmetic calculation (dedicated to bookkeeping) to actuarial calculation (integrating probabilities, for example), but that the purpose of these machines remained relatively stable over time: to produce a reliable result quickly.
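The Bernoulli-style reasoning just described can be restated as a simple expected-value comparison. A minimal sketch, with purely illustrative probabilities (not Bernoulli’s actual figures):

```python
# Illustrative restatement of the inoculation debate as an expected-value
# comparison. All numbers are invented for the sketch, not historical.

p_die_inoculation = 0.01  # immediate risk of the procedure itself
p_catch_smallpox = 0.50   # hypothetical lifetime chance of catching it
p_die_if_caught = 0.10    # hypothetical fatality rate of the disease

# Average risk of dying of smallpox without inoculation:
risk_without = p_catch_smallpox * p_die_if_caught  # 0.05
# With inoculation (assumed fully protective if survived):
risk_with = p_die_inoculation                      # 0.01

print(risk_with < risk_without)  # True: inoculation lowers average risk,
                                 # yet still kills some individuals — the
                                 # exact tension the philosophers debated
```

The population-level average favors inoculation while the individual-level risk remains real, which is precisely why the calculation settled nothing on its own.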

In these various examples, quantification is perceived both as a way of getting closer to reality and as a way of distancing oneself from it. Thus, double-entry bookkeeping and perspective painting make it possible to build a more accurate picture of reality on paper than older, less quantified methods did. On the other hand, the mastery of time or space allows human beings to no longer merge with reality, to depend less on it, and to bend it to social norms (equal hours, longitudes and latitudes, bookkeeping, judgment in situations of uncertainty, actuarial calculation). This is where the myth of “objective” quantification takes on its full importance. Desrosières’ work is valuable for understanding this. He emphasizes that the “reality status” of statistical objects and measures is crucial (Desrosières 2008a). In other words, statistics are mobilized to the extent that a discourse and a belief can be maintained that they are a reflection of reality, or that they must come as close to it as possible. These two arguments are not interchangeable: the notion of “reflection” seems to imply a difference in nature between reality and the statistics that measure it, while the notion of “rapprochement” seems to imply a similarity between them. In both cases, however, statistics are evaluated and valued according to their relationship to reality. Desrosières then identifies three discourses that value the links between statistics and reality, which he describes as “realistic” (Box 2.2). The first discourse, described as “metrological realism”, derives largely from the paradigms underlying the natural sciences. Thus, the object to be measured is considered to exist in reality in an indisputable way (like the air temperature or the height of a tree). As a result, this discourse is based on the premise that there is a reality independent of the measurement that is made.
Therefore, the challenge of this discourse and stance is to measure this real object as reliably as possible. The focus is thus on reliability, as revealed by the use of words such as accuracy, bias, measurement error and confidence interval. The second discourse, described as “accounting pragmatism”, has its source in corporate or national accounting. In this discourse, the reality to be measured does not consist of physical quantities such as temperature or distance, but of money, which creates a form of equivalence between objects of different natures. This discourse has two particularities compared with the first. First, since money is itself by nature a social convention, this discourse does not refer to a physical reality independent of human beings; second, accounting sometimes consists of assigning probable (and therefore uncertain) values to objects (doubtful debts, risks, for example). However, this discourse also leads to an attitude of seeking reliability, proximity to reality and the reduction of errors. Finally, the third discourse, described as “proof by use”, grounds the “realism” of statistics in the coherence of the data with one another, i.e. the fact that two different data sources must not contradict each other. This is the whole purpose of the internal or external validation procedures used in the social sciences. Proponents of this discourse may accept that raw data be modified to be more consistent with each other, even if this leads to a deviation from the original measurements. Such adjustments are relatively common, for example, in cases where the same data are produced by different statistical services. These three discourses therefore position statistics and reality differently. On the other hand, they all rest on the idea that statistics should reflect or approach reality in the most reliable way possible. They therefore contribute to making measurement reliability a central issue.

Box 2.2. “Realistic” discourses on quantification (source: Desrosières 2008a)
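The vocabulary of “metrological realism” in Box 2.2 (accuracy, measurement error, confidence interval) can be made concrete with a minimal sketch: repeated noisy measurements of a quantity assumed to exist independently of the measurement, summarized by a mean and an approximate 95% confidence interval. The measurement values below are invented:

```python
import math

# Metrological realism in miniature (Box 2.2): the "real" quantity is
# assumed to exist; repeated measurements scatter around it, and the
# confidence interval quantifies how reliably the mean pins it down.

measurements = [20.1, 19.8, 20.3, 20.0, 19.9, 20.2, 20.1, 19.7]  # invented

n = len(measurements)
mean = sum(measurements) / n
var = sum((x - mean) ** 2 for x in measurements) / (n - 1)  # sample variance
sem = math.sqrt(var / n)                                    # standard error of the mean
# Approximate 95% CI using the normal quantile 1.96 (a Student t
# quantile would be slightly more accurate for n = 8).
low, high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"{mean:.2f} ± {1.96 * sem:.2f}")
```

The whole apparatus only makes sense under the discourse’s premise: that there is a true value out there for the interval to bracket. That premise is precisely what the other two discourses in Box 2.2 relax.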

This “realistic” stance therefore leads to an attempt to increase the reliability of statistics and measurements as much as possible: statistics must come as close as possible to the object they measure, which refers to the notion of objectivity. Porter thus evokes a “language of objectivity” (Porter 1996, p. 229) mobilized to strengthen confidence in quantification, and based in part on the idea of a “mechanical objectivity” (ibid., p. 213) of quantification.

2.1.1.2. What are the characteristics of “objective quantification”?

This “mechanical objectivity” presupposes several characteristics, which can be identified, for example, in the work of Porter or Desrosières. Porter (1996) emphasizes the link between objectivity and neutrality or impersonality. Objectivity implies reducing as much as possible the intervention of the observer, the evaluator, and ultimately of the human being. Objective knowledge is then defined as knowledge that depends little or not at all on the individual who produces it. Porter gives the example of mental tests, whose evolution sought to limit the evaluator’s intervention as much as possible.

Desrosières (2008b) specifies the conventions and quality criteria that underlie quantification and, among other things, the myth of objective quantification. Thus, the precision criterion refers to the desire to reduce measurement and sampling errors and biases. Statistical information must be correct at a given moment t, and a change in reality must result in the data being updated. The coherence criterion is a central issue in maintaining the myth of objective quantification, in a context where several sources can provide the same data. Thus, a (not so rare) situation where two institutions provide different measurements, of the unemployment rate for example, would result in a questioning of the quality of the measurement and therefore of the link between the measurement and reality.

The myth of objective quantification is also based on a criterion of clarity and transparency (Espeland and Stevens 2008). Users of statistics must be able to have access to the way in which they have been produced, their exact definition, any methodological choices made, etc. More generally, the link between the concept of transparency and quantification has been the subject of an extensive literature, both on the transparency of statistical tools and on the fact that quantification can be considered as providing transparency. Hansen and Flyverbom (2015), for example, study two quantification tools seen as providing transparency: rankings and algorithms. They show how rankings are mobilized by public policies as arguments of transparency (e.g. rankings relating to anticorruption or press freedom). For their part, algorithms, although sometimes criticized for their lack of transparency, are in some cases perceived and presented as affording unprecedented access to reality, because of the mobilization of a greater quantity of data. Algorithms are therefore increasingly used by governments in the fight against fraud or crime.
In any case, both rankings and algorithms are presented as conveying a perception of transparency. Hansen and Flyverbom, however, stress that both tools create a mediation between the measured object and the subject.

A final characteristic seems necessary to guarantee this objectivity: the ability to give an account, or accountability. This characteristic is often linked to the notion of transparency (Espeland and Stevens 2008). To give an account of reality is to make things visible, in other words, to introduce a form of transparency. This link can be developed even further by calling upon Foucault’s work on transparency (Foucault 1975). Indeed, Foucault uses the example of the panoptic prison system, where a guard can observe all prisoners through
transparent walls. In this system, the transparency of the walls allows the guard to see all the actions of the prisoners and thus to give an account of them. As Espeland and Stevens (2008, p. 432) point out, “an ethics of quantification should recognize that we live at a time in which democracy, merit, participation, accountability and even ‘fairness’ are presumed to be best disclosed and adjudicated through numbers”. The objectivity of quantification thus seems to constitute an essential argument for its mobilization in certain functions and discourses. This objectivity presupposes the respect of certain technical or methodological criteria, such as impersonality, precision, timeliness, coherence and transparency. In practice, this translates into a form of trust in figures, seen as providing the necessary objectivity as long as they meet these criteria. However, at the same time, many studies question this ideal of objectivity, even when these criteria are respected.

2.1.2. Limited objectivity

The objectivity of quantification may be threatened on several levels at least. First of all, quantification implies “putting reality into statistics”, an investment of form (Thévenot 1989) that is always based on choices and conventions, which calls into question the idea of a totally neutral statistic. Second, quantification can give rise to many biases, as the example of the quantified evaluation of work or individuals illustrates. Finally, quantifying reality can also, in some cases, have an effect on that reality, which calls into question the discourse of metrological realism.

2.1.2.1. Quantification conventions and the statisticalization of reality

Quantifying reality implies carrying out operations to “put into statistics” the world and things.
The sociology of quantification has focused in particular on these operations, which constitute “quantification conventions”, in the sense that they are socially constructed (Diaz-Bone 2016) and provide interpretative frameworks for the various actors (Diaz-Bone and Thévenot 2010). The particularity of quantification conventions lies in the fact that they are based on scientific arguments and techniques, among other things, which reinforces the illusion of their objectivity (Salais 2016). Nevertheless, the sociology of quantification shows that these statisticalization operations, although necessary, may not be so “objective”, at least if the criteria
mentioned are used. Thus, they involve human beings, which calls into question the criterion of impersonality: Gould (1997) clearly shows the role of individual careers and prejudices in the development, choice, mobilization or interpretation of quantified measurements. These operations can also mask a certain imprecision, since quantification always amounts to a form of reduction of the real world (Gould 1997). Finally, they are not always very transparent. Indeed, these statisticalization operations are often taken for granted, and therefore treated as negligible when it comes to using the data they produce. In other words, statisticians raise the question of the neutrality and rigor of their methods much more often than the question of the quality of the data they use. As a result, few users or recipients of the data question their initial quality, which calls into question the ideal of clarity and transparency.

Taking an interest in these statisticalization processes is therefore enlightening: it shows that they are not neutral, mechanical processes from which human biases are absent. Several examples illustrate these points, two in the health field and the others in the HR field.

Juven (2016) studies the implementation of activity-based pricing in hospitals. This pricing system requires the ability to calculate the costs of hospital stays and medical procedures. First, it was necessary to quantify the costs of each medical procedure. However, these costs may vary according to the type of patient concerned (e.g. operating on an appendicitis patient does not have the same consequences, and therefore the same cost, for a child, an adult, an elderly person or a person with another condition). This led to the creation of supposedly homogeneous “patient groups”, for which the costs of operations are assumed to be identical. Once these patient groups were created, the cost of each medical procedure for each patient group could be quantified.
However, these two operations (creation of patient groups and costing of procedures) correspond to choices, partly managerial, but also political and social. For example, Juven gives the example of patient associations that have worked to ensure that the disease and the patients they represent are better taken into account (Box 2.3). Moreover, even with this degree of precision, many cases of individual patients are in fact difficult to categorize (due to the multiplicity of conditions and procedures performed, for example), which makes these choices partly dependent on hospital administrative staff.

Activity-based pricing leads to an overvaluation of technical medical procedures to the detriment of patient support services (e.g. social or psychological care). However, in the case of chronic diseases such as cystic fibrosis, where the patient is expected to participate actively in the care work (by taking their treatments and following daily therapeutic instructions, for example), these accompanying acts are essential. Thus, patients with cystic fibrosis must receive very demanding care, and must do part of the care work themselves (or their relatives must do it). The association Vaincre la mucoviscidose has therefore decided to propose to the public authorities new calculations of the costs of the accompanying acts related to this disease.

Box 2.3. The intervention of the association “Vaincre la mucoviscidose” in the statisticalization of hospital costs (source: Juven 2016)

Still in the health field, Hacking (2005) takes an interest in obesity. Obesity is measured on the basis of the body mass index, mobilized by Quetelet in the 19th century to define a model of human growth, then by doctors in the 20th century to define dangerous slimming thresholds. It was only at the end of the 20th century that obesity was defined as a situation where this index exceeds 30, in particular because studies showed that life expectancy decreased beyond this stage. This definition of obesity was, however, quickly challenged by the definition of overweight (index above 25), and today many publications use the threshold of 25, not 30. This example and the variation of thresholds clearly illustrate the importance of quantification choices and conventions.

Similarly, in the HR field, the statisticalization of reality rests just as much on choices and conventions. The example of measuring employees’ commitment has been mentioned previously: it gives rise to a very wide variety of scales, a variety that reflects the difficulty of agreeing on the definition of commitment. However, phenomena that seem easier to measure can lead to equally large variations. Thus, the measurement of absenteeism can rely on a wide variety of indicators: total number of days of absence, number of spells of absence, average number of days of absence per spell, number of days of absence counting only working days or all days. In addition to this variety of indicators, there is a variety of definitions of absence. Should all types of absence be included in “days of absence”? Or only absences due to illness? What about maternity or paternity leave? These choices are of course not insignificant: taking maternity leave into account, for example, will lead to a mechanical increase in measured absenteeism among women. Whether imposed by public authorities or
decided internally following negotiations with social partners, these statistical choices clearly illustrate the fact that the same phenomenon can in fact give rise to a wide variety of measures. This calls into question the idea of a quantification that reflects the real situation in an impersonal and neutral way.

Chapter 1 focused on the example of job classification to illustrate the importance of quantification in HR work. However, the methods used to classify jobs, although presented as objective and rigorous, may also be based on conventions, leading to biases that call into question this ideal of objectivity (Box 2.4).

The Hay method is probably one of the most widely used methods for job classification. However, studies have shown that it contributes to the undervaluation of the most feminized jobs due to a series of biases. The first bias lies in the choice of evaluation criteria and their assessment. Thus, some criteria are assessed in a potentially restrictive way. For example, in the Hay method, the problem-solving criterion seems to be reserved for strategic or technical jobs, even though administrative or customer relations jobs may well require problem-solving. The second bias is due to the omission of certain criteria: the Hay method does not define, at least in its initial form, criteria for measuring the demands of working conditions. The third bias is related to the overvaluation of certain elements: the Hay method strongly overvalues the link between the job being evaluated and financial results, neglecting the fact that some jobs may have an indirect or long-term effect on financial results. The fourth bias is linked to the weighting of the criteria: in the initial form of the Hay method, the number of levels assigned to the relational skills subcriterion is lower than the number of levels for the knowledge subcriterion (three vs. eight).
The fifth bias refers to the overall lack of transparency of the job evaluation process: the working group that evaluates the jobs is not supposed to provide employees with the intermediate rating grids, and only the final results are generally communicated.

Box 2.4. Gender biases in job classification methods (source: Acker 1989; Lemière and Silvera 2010)
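The variety of absenteeism indicators discussed above can be made concrete with a short sketch. The data, the function name and the set of indicators are hypothetical illustrations, not a method prescribed by the book: the point is only that the same underlying records yield different “absenteeism” figures depending on the conventions chosen.

```python
# Hypothetical absence records: (employee, days_of_absence, is_maternity_leave).
records = [
    ("A", 3, False),
    ("A", 1, False),
    ("B", 15, True),   # maternity leave
    ("C", 2, False),
]

def absenteeism_indicators(records, include_maternity):
    """Return (total days of absence, number of spells, average days per spell)
    under a given convention on whether maternity leave counts as absence."""
    rows = [r for r in records if include_maternity or not r[2]]
    total_days = sum(days for _, days, _ in rows)
    n_spells = len(rows)
    return total_days, n_spells, round(total_days / n_spells, 2)

# The same reality produces different figures under different conventions:
print(absenteeism_indicators(records, include_maternity=True))   # (21, 4, 5.25)
print(absenteeism_indicators(records, include_maternity=False))  # (6, 3, 2.0)
```

Counting maternity leave more than triples total measured absence in this toy population, which echoes the text’s point that this choice mechanically raises measured absenteeism among women.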

These statisticalization operations, which are rarely questioned by data users, are therefore social or political choices. The variety of possible choices calls into question the idea of a univocal relationship between quantification and reality, and therefore the idea that quantification is merely a neutral and objective reflection of reality.

2.1.2.2. The biases of quantified evaluation

The example of quantified evaluation also makes it possible to specify a long list of possible biases in the quantification of individuals. As seen in Chapter 1, quantified evaluation can be done in several ways: by mobilizing an external, non-declarative variable (sales volume, for example), by mobilizing an external declarative variable (such as customer satisfaction) or by mobilizing an internal declarative variable (evaluation by the manager or colleagues, for example). In all cases, however, biases are possible.

An apparently neutral criterion such as sales volume may in fact contribute to a form of indirect discrimination. Indirect discrimination refers to a situation where an apparently neutral criterion disadvantages a population defined by prohibited criteria. For example, sales volumes are potentially higher on average on Saturdays, in the evenings or during end-of-year holidays, regardless of the qualities of the salespeople. However, the people who are available in these particularly high-selling time slots probably share certain sociodemographic characteristics: people without family responsibilities, among whom the youngest are overrepresented, for example. Direct discrimination, by contrast, refers to making decisions based directly on prohibited criteria (e.g. giving a lower rating to a woman because she is a woman). This type of discrimination can occur when an individual is rated by another, such as a client or manager (Castilla 2008). Indeed, this type of situation gives rise to a large number of possible biases (Box 2.5).
The halo effect corresponds to a situation where the assessor extends an initial favorable or unfavorable judgment on a specific point to the whole person. The contrast effect occurs when the evaluation of one individual is conditioned by the evaluation of other individuals. Thus, the order in which individuals are assessed can have a significant impact on the final outcome of the assessment. The central tendency effect corresponds to the fact that the evaluator does not use all available ratings, but prefers average ratings. The anchoring effect refers to the fact that an assessor can be influenced by a number given to him or her
just before the assessment, even when that number is not related to the assessment. Availability bias refers to situations where an evaluator rates an individual based on his or her ability to recall examples of the individual’s behaviors that are considered good or bad, regardless of the measurable frequency with which these behaviors occur.

Box 2.5. Different types of bias most common in quantified evaluations (source: Kahneman 2015; Gilbert and Yalenios 2017)
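The indirect-discrimination mechanism described before Box 2.5 (sales volume confounded with shift availability) can be sketched numerically. Everything here is invented for illustration: the traffic figures, the function names and the normalization are assumptions, not the book’s own example.

```python
# Average customers per shift by time slot (hypothetical figures).
TRAFFIC = {"weekday": 10, "saturday": 25}

def raw_sales(skill, shifts):
    """An 'apparently neutral' indicator: total sales volume over the shifts
    worked, assuming sales are proportional to skill times customer traffic."""
    return sum(skill * TRAFFIC[slot] for slot in shifts)

# Two salespeople of identical skill; only one can work the Saturday slot.
alice = raw_sales(skill=1.0, shifts=["weekday", "weekday", "saturday"])
bob = raw_sales(skill=1.0, shifts=["weekday", "weekday", "weekday"])
print(alice, bob)  # 45.0 30.0: same skill, different "performance"

def per_customer_sales(skill, shifts):
    """Normalizing by traffic removes the availability effect."""
    total = sum(skill * TRAFFIC[slot] for slot in shifts)
    customers = sum(TRAFFIC[slot] for slot in shifts)
    return total / customers

print(per_customer_sales(1.0, ["weekday", "weekday", "saturday"]))  # 1.0
print(per_customer_sales(1.0, ["weekday", "weekday", "weekday"]))   # 1.0
```

The raw indicator ranks the two salespeople differently even though their skill is identical: the gap measures shift availability, not performance, which is precisely how an apparently neutral criterion can disadvantage a population with particular sociodemographic constraints.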

Quantified evaluation is also biased due to an unavoidable gap between prescribed work and actual work (Dejours 2003, 2006; Vidaillet 2013; Clot 2015), and to the simplification that the management system performs when reporting on work (Hubault 2008; Le Bianic and Rot 2013). Thus, the choice of criteria and methods of quantified evaluation is based on a representation of work that does not always correspond to the daily work actually done. As a result, the evaluation, which often takes the form of a “quantified abstraction” (Boussard 2009), fails to report on all the efforts made by the worker, and therefore on all the performance and skills that they implement (Dejours 2003).

Belorgey (2013a) gives the example of the use of the patient waiting-time indicator in emergency services. This indicator is intended to report on the quality of organization of services and institutions. However, despite its relative complexity and the fact that it takes into account dimensions related to the profile of patients or the care required, it only reflects part of the caregivers’ work, since it does not include information on the quality of care, the potential return of patients who have relapsed, the possible complexity of making contact with certain patients (language problems, for example), or the social characteristics of patients.

This last example highlights the fact that it is impossible for a quantified evaluation tool to take everything into account, or even to report on everything, which refers to the notion of accountability mentioned in the previous section. More specifically, it seems that quantified evaluation does indeed make it possible to account for some of the work or reality (possibly with biases, as just illustrated), but that, in doing so, it masks the rest, which it fails to account for.
Recently, the discourse on the notion of algorithms and big data has suggested the possibility of accounting for everything, or almost everything, or at least more objects and dimensions than before, because of the emergence of new data and new methods to process them
(Hansen and Flyverbom 2015). However, these discourses neglect the fact that algorithms and data themselves constitute mediations between the object and the subject who measures it. In addition, these mediations sometimes constitute “black boxes” that are difficult to read, and ultimately not very transparent (Christin 2017). The myth of objective quantification therefore does not stand up to the examination of these multiple potential biases. Moreover, quantification contributes to a reduction of the complexity of reality (Berry 1983; Salais 2004), which is useful for reflecting, in a standardized way, one or a few dimensions of this reality, but which masks the others. The choices made to reduce this complexity (which imply, for example, focusing on certain dimensions to the detriment of others) are not neutral, and may stem from such things as power games or ideological debates.

2.1.3. Objectivity, a central issue in HR

Despite the relative ease with which the objectivity of quantification can be questioned, the discourse or myth of objective quantification continues to spread and remains significant, particularly in the HR field. This can be partly explained by the fact that the notion of objectivity is a central HR issue. Indeed, many decisions taken in this field can have a significant influence on the professional and personal future of individuals: recruitment, remuneration, promotion, etc. Being able to justify these decisions is therefore a crucial issue. In this context, guaranteeing a certain objectivity, or an illusion of objectivity, seems necessary to maintain social order and collective cohesion. Managerial discourses idealizing quantification, built on a rhetoric oscillating between rationalization and normativity (Barley and Kunda 1992), then support this illusion of objectivity.

2.1.3.1. The myth of objective quantification in the HR field

In the HR field, the myth of objective quantification remains as prevalent as in other fields.
It is reflected in a form of trust in figures, indicators and metrics (Box 2.6): the production and publication of figures regularly appear as a guarantee of transparency, which reflects one of the objectivity criteria mentioned above.

An opportunity arose to observe negotiation sessions between unions and management on the subject of gender equality in a large French company. During these negotiations, the unions repeatedly asked management to provide new quantitative indicators. These indicators were used to identify inequalities. While the legal framework requires the company to produce 27 indicators on the subject each year, negotiations with the unions resulted in the definition of some 50 additional indicators. Moreover, the provision of indicators was seen as necessary to create the conditions for a calm dialogue based on a form of transparency between management and trade unions. Similarly, at one point management also asked the unions to provide figures on their respective rates of feminization. Following the negotiations, interviews were held with the main negotiators. Several negotiators, particularly on the trade union side, stressed that the indicators made it possible to “objectify the situation”, i.e. to identify inequalities between women and men objectively. This “objective” identification was contrasted with the supposed lack of objectivity of perceptions on the subject, whether these perceptions came from management, employees or the trade unions themselves.

Box 2.6. Trust in figures as a guarantee of objectivity (source: Coron and Pigeyre 2018)

The myth of quantification generating transparency is found in the discourse of some providers offering solutions for comparing companies. For example, the Glassdoor website offers employees and former employees the opportunity to assess their working environment and provide information on pay and working conditions. This information, in the form of anonymous comments, but also and above all quantitative data, can then be used by job seekers to find out whether the company would suit them, and by the company to identify its employees’ points of dissatisfaction. Glassdoor bases a large part of its commercial discourse on the notion of transparency. Its name is of course based on this idea, and one of its presentations begins as follows: “Glassdoor, founded in 2007, is the world’s most transparent jobs community that is changing the way people find jobs and companies recruit top talent”. This illustrates the idea that data and quantification are vectors of transparency.

However, maintaining the myth of the objectivity of quantification can be explained in part by the need for the HR function and management to be able
to justify the objectivity of the decisions taken. Thus, figures, whether actual measures or projections, are regularly used in the HR field to justify decisions taken at an individual or collective level, as illustrated by Noël and Wannenmacher (2012) in restructuring cases.

2.1.3.2. The importance of (the illusion of) objectivity in the world of work

The notion of organizational justice provides a partial understanding of the importance of the perceived objectivity of decisions made in the professional field. This notion refers to the perceived fairness of the work environment (Schminke et al. 2015). Work in this field highlights the fact that perceived justice, or on the contrary perceived injustice, can have considerable effects on employees’ behavior, in terms of loyalty to the company, performance and commitment (Colquitt et al. 2013; it should be emphasized that this work focuses only on perceived justice, not actual justice, and says little about the relationship between the two). Perceived fairness can be conceptually broken down into four dimensions (Ambrose and Schminke 2009):
– perceived procedural justice refers to the way decisions are made (Cropanzano and Ambrose 2001): criteria, actors in decision-making, general rules related to decision-making, etc. For example, in the case of a pay rise decision, procedural justice could refer to the criteria for raises, to how transparent or well explained they are, or to the people who decide on these raises;
– perceived distributive justice corresponds to the perceived fairness of the results of the decision. Thus, in the same example, distributive justice would refer to the following question: is the raise I received fair, given the efforts I have made and what my colleagues have received?
– perceived interactional justice emphasizes the interpersonal dimension (Jepsen and Rodwell 2012). Thus, the impression of being treated with respect and courtesy, and of being treated as well as the other members of the team, is part of this dimension;
– perceived informational justice (sometimes included in interactional justice) underscores the importance of interpersonal communication. An employee may wonder whether he or she had the same access to information as his
or her peers, and whether the rules and procedures were explained to him or her as well. In the case of pay rise decisions, this dimension may, for example, refer to the way in which a team manager communicates the raise criteria to each member of his or her team.

These four dimensions can be linked to the myth of objective quantification, which supports a more positive perception on each of them (Table 2.1):
– perceived procedural justice: a decision taken on the basis of a quantified indicator, and therefore considered objective, is perceived as fairer;
– perceived distributive justice: facilitated consistency between employees’ expectations and prognoses and the decisions taken;
– perceived interactional justice: depersonalization of the decisions taken, and less importance given to interpersonal relationships;
– perceived informational justice: explicitation of criteria, made easier by the reduction of complexity.

Table 2.1. The influences of the myth of objective quantification on perceived justice

Once quantification is perceived as objective, a rule or procedure consisting of taking decisions on the basis of quantified indicators will be perceived as fairer (procedural justice). In addition, basing decision-making on quantified indicators reduces uncertainty and makes it easier for employees to form prognoses about the decision. Thus, a seller who knows that their bonus depends closely on the sales they make simply has to monitor their own sales to know what bonus they will receive. This helps them build expectations consistent with what they will ultimately receive, and thus to perceive the decision as fair (distributive justice). Moreover, as previously shown, the perceived objectivity of quantification implies a form of depersonalization, by reducing the role of the evaluator and introducing a form of standardization (Porter 1996), which can improve perceived interactional justice. For example, once an employee knows that their manager ultimately has little room for maneuver in a promotion decision affecting them, because this decision is based above all on quantified indicators, they cannot suspect that their relationship with this manager comes into play in the decision-making process. More generally, the introduction of a form of depersonalization reduces the importance of
interpersonal relationships. Finally, as pointed out, quantification responds to a logic of reducing the complexity of reality, by measuring only part of this reality, i.e. by representing in only a few dimensions an infinitely more complex reality. This reduction in complexity facilitates the communication and clarification of decision-making criteria (informational justice). It is therefore easier for a manager to explain to an employee that a promotion decision is based on x criteria defined in such and such a way, than to explain an evaluation based on an overall perception of behaviors, actions and skills.

2.1.3.3. Quantification and reduction of the possibility of criticism

Ensuring a certain level of perceived justice therefore reduces the opportunities for questioning the decisions taken. Quantification tools further reduce these opportunities, for at least two reasons. First, the dominant managerial discourse mobilizes both a rhetoric of rationalization and a rhetoric of normativity (Barley and Kunda 1992) to support the illusion of objective quantification. The rhetoric of rationalization emphasizes the scientific and methodological guarantees attached to quantification, while the rhetoric of normativity emphasizes the need for objectivity and transparency to provide a peaceful working environment. It can therefore become difficult for organizations and individuals alike to resist both types of rhetoric. Second, quantification tools reduce the questioning of the decisions taken, especially as they become more complex. Indeed, statistical complexity sometimes produces side effects that prevent individuals from questioning a numerical result or its interpretation (Box 2.7). More recently, the emergence of algorithms, which sometimes constitute “black boxes” (Faraj et al. 2018), has made this issue even more acute.
Indeed, the impossibility of accessing the principles of algorithm construction prevents both criticizing the results and playing with them (Christin 2017), which leads to a significant loss of autonomy and room for maneuver for employees. Thus, a worker may, to some extent, manipulate a rating system with which he or she is familiar and whose criteria and measures he or she knows, as do the medical staff described by Juven (2016), who choose some codifications rather than others to ensure the budgetary balance of their institution. This possibility disappears, or at least decreases considerably, when the criteria and principles for constructing metrics are not known or are difficult to understand.

In the same company (Box 2.6), a complex statistical tool for measuring the pay gap was introduced to complement the more traditional indicators. This complex tool, described in Chapter 1 (Box 1.10), uses econometric methods and “all other things being equal” reasoning to break down the pay gap into an explained and an unexplained component. The introduction of this tool within the company has had the effect of reducing the possibility of questioning the figures and their interpretation. Indeed, the difficulty of explaining the calculation method and the way of interpreting the results to the different people involved (trade unions as well as management) prevented their appropriation of the tool and, above all, the emergence of a genuine debate on methodological choices or on the final interpretation of the figures. Traditional indicators (comparison of the average salaries of women and men by classification level, for example) could give rise to exchanges on calculation methods, the choice of parameters and interpretation. By contrast, this type of exchange did not emerge at all following the presentation of the more complex tool.

Box 2.7. When a complex statistical tool prevents criticism (source: study by the author; Coron 2018b)
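The logic of the “all other things being equal” reasoning mentioned in Box 2.7 can be sketched in a simplified form. The book does not give the company’s actual formula, so the following is only an illustration of the general idea, using invented numbers and a crude stratification by grade (rather than a full econometric model): the raw gap between average male and female pay is compared with the gap measured within each grade, weighted by headcount.

```python
# Hypothetical toy data: (gender, grade, pay). Invented figures for illustration;
# the sketch assumes both genders are present in every grade.
employees = [
    ("F", 1, 30), ("F", 1, 31), ("F", 2, 40),
    ("M", 1, 31), ("M", 2, 41), ("M", 2, 42),
]

def mean(xs):
    return sum(xs) / len(xs)

# Raw gap: difference between average male and female pay, all grades mixed.
pay_f = [p for g, _, p in employees if g == "F"]
pay_m = [p for g, _, p in employees if g == "M"]
raw_gap = mean(pay_m) - mean(pay_f)

# "At equal grade": within-grade gaps, averaged with headcount weights.
adjusted = 0.0
for grade in sorted({gr for _, gr, _ in employees}):
    f = [p for g, gr, p in employees if g == "F" and gr == grade]
    m = [p for g, gr, p in employees if g == "M" and gr == grade]
    weight = (len(f) + len(m)) / len(employees)
    adjusted += weight * (mean(m) - mean(f))

print(round(raw_gap, 2), round(adjusted, 2))  # 4.33 1.0
```

Here most of the raw gap comes from women being concentrated in the lower grade (the “explained” part), while the residual within-grade gap is much smaller (the “unexplained” part). Even this toy version shows why such a tool is harder to debate than a simple comparison of averages: the result depends on modeling choices (which controls, which weights) that are invisible in the final figure.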

The notion of objectivity therefore makes it possible to establish a first link between quantification and decision-making. Even though the myth of objective quantification has given rise to many criticisms and challenges, its persistence in the HR field means that it can justify decisions that may have a significant influence on the future of employees, and ultimately reduce the possibility of these decisions being challenged.

2.2. In search of personalization

The link between quantification and decision-making is also based on the notion of personalization. While statistics long positioned itself as a science of the impersonal and of large numbers, algorithms now offer the promise of taking the individual into account through quantification. This contributes to the evolution of the positioning of the HR function, which has long relied on impersonal or segmented employee management.

Quantifying Human Resources

2.2.1. Are we reaching the end of the positioning of statistics as a science of large numbers?

It is with Quetelet that statistics begins to be constructed as a science which, starting from data on multiple individuals, succeeds in producing unique measures (Desrosières 1993). Statistics, the science of quantification, is then defined in opposition to sciences based on the observation of individual cases (Desrosières 2008a). The notions of large numbers, averages and representative samples, which structure the methodology and mathematical validity of the vast majority of statistical laws, are thus part of a vision of statistics as a science dealing with large groups of individuals. However, this historical positioning is now being undermined by the emergence of quantification aimed at personalization and a better consideration of the individual.

2.2.1.1. Statistics, the science of the collective and large numbers?

Before Quetelet, scientists like Laplace or Poisson were still interested in individuals. Quetelet, on the other hand, mobilizes statistical rules to produce new objects, societal or at least collective, and no longer individual ones (Desrosières 1993). The history of the notion of the average, recounted by Desrosières (1993, 2008a), gives a good account of this movement. Indeed, the notion of average admits two definitions. First, it refers to the approximation of a single magnitude (e.g. the circumference of the Earth) from the aggregation of several measures of that magnitude; second, it refers to the creation of a new reality from the aggregation of the same measure over several individuals (e.g. the average height of human beings). It was mainly Quetelet, the astronomer, who showed the possibility and interest of the second type of average, based on the fiction of the "average man". Quetelet took physiological measurements of his contemporaries (height, weight, length of limbs, etc.) and observed that the distribution of these measurements followed a bell curve (later called the normal law). He deduced from this the existence of an "average man", bringing together all the averages of the measurements made. This second type of average has thus allowed the emergence of new measurement objects, no longer related to individuals but to society or the collective. However, its dissemination has come up against heated debates linked to the deterministic and fatalistic vision that this definition of the average seems to imply, in contradiction with the notions of individual free will and responsibility. Moreover, it has also given
rise to practical controversies, for example in the field of medicine, between those in favor of case-by-case medicine, in which the doctor bases his diagnosis on the knowledge of each patient, and those in favor of "numerical" medicine, mobilizing the observation of interindividual regularities to establish diagnoses (Desrosières 1993). Despite these debates, this definition of the average gradually became central in statistics, under the influence of works such as those of Galton. Desrosières also points out that at the end of the 19th Century, Durkheimian sociology helped to strengthen this position by using statistics to identify regularities, i.e. average behaviors. What is more, Durkheim initially aligned the notion of the average man with that of normality or the norm, and then presented deviations from the mean as pathologies. However, in Le Suicide, he revisited this idea, distinguishing the notion of the average type from that of the collective type. While he considers the average man (the average of individual behaviors, for example) to be a rather mediocre citizen, with few scruples and principles, he defines the collective man (understood as the collective moral sense) as an ideal citizen, respectful of the law and of others. Whatever the philosophical or epistemological significance given to the notion of average, however, Durkheim did indeed base his remarks, analyses and theories on calculations of averages and statistical regularities. In this respect, he contributed to the positioning of statistics as a science of the collective and not of the individual.

Beyond the notion of average, several methodological foundations are necessary to ensure the validity of a large part of statistical results, and these principles also contribute to positioning statistics as a science of the collective rather than of the individual. Let us return here to two foundations of statistical inference, i.e. of the possibility of generalizing results obtained on a sample to a larger population (Porter 1996): the law of large numbers and the notion of a representative sample. The law of large numbers makes it possible to ensure a correspondence between a random sample and a target population (Box 2.8).

The law of large numbers can be expressed in a relatively intuitive way. When a balanced coin is flipped five times, or any very small number of times, it is relatively unsurprising to arrive at extreme values (five tails, for example). On the other hand, the more the number of draws is increased, the more we intuitively expect that the shares of "heads" and "tails" will each be
closer to ½. In other words, the effect of "chance" or randomness decreases as the number of draws increases. Without going into mathematical details, this law states that, as the number of observations increases (tends toward infinity), the observed mean tends toward the theoretical mean. This implies that the probability law followed by a random variable can be approximated by the observed distribution of the variable over a sufficiently large sample. This law is essential to justify the possibility of generalizing the results obtained on a sufficiently large sample to an entire population.

Box 2.8. The law of large numbers
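The coin-flip intuition described in Box 2.8 is easy to check numerically. A minimal Python simulation (the seed and sample sizes are arbitrary choices for the illustration):

```python
# Law of large numbers: the observed share of "heads" approaches
# the theoretical probability of 1/2 as the number of flips grows.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

for n in (5, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} flips: share of heads = {heads / n:.4f}")
```

With five flips, extreme shares (0.0 or 1.0) are quite possible; with a million flips, the share is pinned very close to ½.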

The notion of representativeness qualifies the characteristics that a sample must have for the results obtained on it to be generalizable (Box 2.9).

In its mathematical formulation, the law of large numbers is based on the notion of a random sample (drawn randomly from a population). However, this notion has been enriched and corrected by the notion of a representative sample. A sample is a subset of a population. For the results obtained on a given sample (e.g. voting intentions at an election) to be extrapolated to the population, the sample must be representative. Several definitions can be given of the notion of representativeness, but the general idea is that the sample must fairly accurately reproduce the characteristics of the population. The challenge is therefore to define the characteristics on which the sample must be representative of the population: gender distribution, socioprofessional categories, ages, etc. Several sampling methods are available to ensure a representative sample: random selection, the quota method, etc. Some, such as the quota method, offer the possibility of adjusting a posteriori a sample that is not sufficiently representative (due to the existence of a selection bias, for example). A representative sample is necessary to generalize a result obtained on the sample to the entire population.

Box 2.9. The notion of representativeness (source: Fox, 1999; Didier, 2011)
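One common form of the a posteriori adjustment that Box 2.9 mentions is re-weighting: each respondent is weighted by the ratio of the population share to the sample share of his or her group. The figures below (a survey where women are under-represented) are invented for the illustration:

```python
# A posteriori adjustment of a biased sample by re-weighting.
# Illustrative assumption: the population is 50% women / 50% men,
# but the sample contains only 30% women.

population_share = {"women": 0.50, "men": 0.50}

# Each record is (group, answered_yes). Women answer "yes" more often.
sample = [("women", 1)] * 20 + [("women", 0)] * 10 + \
         [("men", 1)] * 30 + [("men", 0)] * 40

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw_mean = sum(v for _, v in sample) / n                  # biased estimate
weighted_mean = sum(weights[g] * v for g, v in sample) / n  # adjusted estimate
print(f"raw estimate: {raw_mean:.3f}, adjusted estimate: {weighted_mean:.3f}")
```

The raw estimate (0.500) understates the population rate because the group with the higher "yes" rate is under-sampled; re-weighting moves the estimate to roughly 0.548, the value a representative sample would yield.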

These two principles (the law of large numbers and the notion of representativeness) therefore also underpin statistics as a science of the collective, insisting on the notion of inference, i.e. on the possibility of generalizing to entire populations. In this view of statistics, the individual level is relegated to the status of measurement noise, a randomness presented as harmful to the quality of the results obtained, and which should therefore be reduced to a minimum. However, as shown in Chapter 1, the recent emergence of algorithms has introduced a new promise in relation to quantification, based on the idea that quantification can instead contribute to a better consideration of the individual. Thus, suggestion algorithms (e.g. for purchases and content) are designed to suggest the right product or content to the right person. Chapter 1 gave the example of collaborative filtering algorithms, which match individuals based on their histories (purchases, content consumption, etc.). This type of algorithm corresponds to a form of personalization in the sense that, theoretically, each individual should be able to receive a unique set of suggestions.

2.2.1.2. Using statistics to customize

This promise of taking the individual into account rests on several conditions: a large amount of data, good quality data, data updated in real time and the possibility of identifying variables that can be substituted for each other. The amount of data seems to be a first essential condition for quantification to allow customization. This notion of quantity in fact covers two dimensions. First of all, it refers to the number of variables available and their richness. Thus, the more information the statistician has about individuals, the richer and more varied the possibilities of personalization will be, because the information will allow a greater degree of accuracy. Second, it also refers to the number of individuals present in the database. Interestingly, having a large number of individuals also allows for greater personalization, because having more cases once again allows for greater accuracy. One dimension can compensate for the other. In the case of the collaborative filtering algorithms presented above, only the history of individuals is really necessary, which is very thin information. On the other hand, to be able to effectively match individuals to each other based on their history, it is better to have a very large number of
individuals, to maximize the probability that two individuals will have the same history, or very close histories.

However, the amount of data is not enough, not least because it does not always compensate for poor data quality. This poor quality can take different forms, including unreliable information (Ollion 2015) or information missing for a large number of individuals. Unreliable information, which is a problem for quantification in general, calls into question the proximity between data and reality. However, measuring reliability remains difficult, and the same data can be considered reliable or unreliable depending on the context. Self-reported data on beliefs, behavior and level of education are regularly denounced as unreliable because of the existence of a social desirability bias, which leads respondents to want to present themselves in a favorable light to their interlocutors, and therefore to give the answers that seem closest to the social norms in force (Randall and Fernandes 1991). However, such data can be considered quite reliable in cases where the focus is precisely on what individuals report about themselves (their beliefs, behaviors, diploma level). The reliability of data is therefore highly contingent. Data quality can also be threatened by non-response, i.e. the lack of information on a significant number of individuals, which poses a particular problem when quantification is used to personalize, since it means that a large number of people will be deprived of this personalization. This problem is found particularly in data from social networks. These data have many "holes" (in the sense that many users do not take any action on social networks), which are impossible to neglect, but also difficult to account for. Indeed, these "holes" stem from selection bias (Ollion and Boelaert 2015), in the sense that populations that are active and inactive on social networks certainly do not have exactly the same characteristics (sociodemographic characteristics, for example). Moreover, data from social networks have a major disadvantage: since we do not know the characteristics of the entire population of members, we cannot adjust the samples to avoid this type of bias (Ollion 2015). Beyond the question of representativeness, these "holes" prevent the provision of personalized information to those concerned.

The reliability of the data also depends in part on a third condition, close to the criteria of timeliness and punctuality mentioned by Desrosières: the regular, even real-time, updating of the data. This condition ensures that the data
accurately reflect the situation at a given time t, which becomes all the more important when the reasoning is aimed at the individual rather than the collective or societal level. Indeed, at a collective level, variations are slowed down and dampened by the inertia of mass (thus, monthly variations in unemployment rates are very small, whatever the country concerned). At an individual level, however, variations can be much faster, as an individual can change status, behavior or representations almost instantly. This criterion also echoes the "velocity" characteristic highlighted by Gartner's report on Big Data. Beer (2019) thus underlines the importance of speed and "real time" as a basis of the data imaginary. The ability to use quantification to personalize also depends in some cases on the ability to identify variables that can be substituted for each other (Mayer-Schönberger and Cukier 2014). Thus, a content suggestion algorithm must identify which content may be appropriate for which individual. The most effective way to do this would probably be to have information about the individual's tastes and preferences. However, this type of variable is rarely observable. The algorithm must then find surrogate or proxy variables, i.e. observable variables correlated with tastes and preferences (which are unobservable). The history of content consumption plays this role of surrogate variable in collaborative filtering algorithms. Quantification can indeed be used for customization purposes, as long as a few criteria are met. The example of targeted advertising provides a concrete illustration (Box 2.10).

Targeted advertising first appeared a few years ago. It mobilizes the traces left by Internet users to offer them products, so that two Internet users will not see the same adverts. The amount of data comes from the mass of Internet users and from the mass of information left by each individual on the Internet. Most targeted advertising algorithms actually use "only" the user's browsing history, but the thinness of this data is compensated by the large number of Internet users, who provide as many points of comparison. In terms of data quality, the personalization of advertising is only possible if the Internet user actually leaves traces on the Internet (e.g. if they have already made purchases there). People who do not leave enough traces for personalization to be possible, i.e. people for whom too much information is
missing, will generally be shown the most generic advertising, which runs contrary to the idea of personalization. Updating data in real time ensures that the advert meets the needs of users at time t. Targeted adverts are regularly criticized because they often suggest to the Internet user a product that they have just purchased; more frequent data updates would help to solve this problem. Targeted advertising algorithms operate with very little information and few assumptions about the determinants of consumer preferences and tastes (unlike so-called traditional targeted advertising, where the advert is chosen based on sociodemographic characteristics that the Internet user has indicated when answering a questionnaire, for example). They only use the traces left by Internet users, which act as substitute variables for tastes and preferences.

Box 2.10. Targeted advertising (source: Peyrat, 2009; Kessous, 2012a)
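The surrogate-variable logic just described (browsing or purchase history standing in for unobservable tastes) can be sketched in a few lines of collaborative filtering. The users, items and choice of Jaccard similarity below are illustrative assumptions, not a description of any actual advertising system:

```python
# Collaborative-filtering sketch: history as a proxy for tastes.
# Hypothetical users and items; the suggestion for a user is what
# the most similar other user has consumed and they have not.

histories = {
    "alice": {"laptop", "headphones", "ssd"},
    "bob":   {"laptop", "headphones", "webcam"},
    "carol": {"novel", "cookbook"},
}

def jaccard(a, b):
    """Similarity between two histories: shared items over all items."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest(user):
    """Items consumed by the nearest other user but not yet by this one."""
    others = [u for u in histories if u != user]
    nearest = max(others, key=lambda u: jaccard(histories[user], histories[u]))
    return histories[nearest] - histories[user]

print(suggest("alice"))  # bob's history is closest, so his extra item is suggested
```

Note that the algorithm never models *why* alice might want a webcam; her history, compared with others' histories, does all the work, which is exactly the substitution of observable traces for unobservable preferences discussed above.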

In addition, beyond these data requirements, the promise of personalization through quantification leads to several changes: epistemological, methodological and practical. First of all, from an epistemological point of view, it implies renewing the way the relevance of methods and models is measured. Indeed, the relevance of a statistic produced to report on a collective phenomenon is measured according to several factors: the homogeneity of the population, which ensures that a measure such as the mean makes sense (assessed from the examination of variance, for example); the verification of the statistical assumptions attached to the statistical laws used; and the meaning and interpretation that can be drawn from the statistics. In quantification aimed at personalization, a measure of relevance will instead seek to reflect the consideration of each individual, which implies some interindividual variability and may require, for example, taking individual feedback into account. From a methodological point of view, the possibility of taking into account individuals' feedback about the relevance of a result that concerns them seems valuable when quantification is aimed at personalization, whereas this possibility is almost never explored when quantification is aimed at a collective level. Asking individuals for feedback enables
the relevance and quality of the models to be measured, as has been shown, but also the quality of the customization to be improved (Box 2.11).

Most recommender algorithms (for products, content, etc.) offer the user the option to "decline" a suggestion or indicate that it is not relevant: a cross to close the window, an arrow to move to another suggestion, a click on a message indicating that the suggestion is not relevant, etc. This feedback is valuable for improving the algorithms, as it generates new data that can be used to refine future suggestions. For example, the music suggestion algorithm used by Deezer starts by asking the user for their favorite music styles. Based on this initial declarative information, it suggests content to the user. The user can listen to the song or move on to the next one, but can also indicate that they particularly like the song or, on the contrary, ask that it no longer be suggested. This allows the algorithm to be refined as it goes along, and ultimately to offer each user a unique series of song suggestions. As part of one of my professional experiences, a project was piloted to build an algorithm for personalized training suggestions. Very quickly, the technical impossibility of including a functionality for refusing suggestions on the platform provided was highlighted as particularly damaging by the statistical and computing experts. Indeed, they felt this would prevent the algorithm's suggestions from being refined over time, whereas in their view the first set of suggestions could only be rough.

Box 2.11. Taking into account user feedback
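At its simplest, the feedback loop described in Box 2.11 amounts to a score update: each "like" or "skip" adjusts the weight of a category, so later suggestions drift toward what the user accepts. The styles, initial scores and update rule below are assumptions made for the example; real recommender systems use far more sophisticated learning schemes:

```python
# Illustrative feedback loop: declarative starting point, then
# refinement through accept/reject signals. All values are invented.

scores = {"jazz": 1.0, "rock": 1.0, "electro": 1.0}  # initial declarative info

def best_style():
    """Style currently preferred by the model."""
    return max(scores, key=scores.get)

def feedback(style, liked):
    """Reward a liked suggestion, penalize a skipped one."""
    scores[style] += 0.5 if liked else -0.5

feedback("jazz", liked=False)   # the user skips a jazz track
feedback("rock", liked=True)    # the user likes two rock tracks
feedback("rock", liked=True)
print(best_style())
```

Without the ability to refuse suggestions (the missing functionality deplored by the experts in the anecdote above), the `feedback` channel disappears and the scores never move beyond the rough initial declaration.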

From a practical point of view, using quantification to personalize requires not anonymizing the data (or at least requires the possibility of tracking the same user over time), which creates new issues related to the protection of personal data, discussed in Chapter 5. The use of quantification for customization purposes thus indeed marks a break with the traditional positioning of statistics. This break is embodied in new criteria of rigor, quality and relevance, and in changes on the epistemological, methodological and practical levels.

2.2.2. Personalization: a challenge for the HR function

Personalization through quantification is now a real challenge for the HR function. Originally developed in targeted marketing, the notion of personalization has gradually entered the HR function and generated a certain interest, among other things through the trend of "HR marketing".

2.2.2.1. A model from marketing

During the 20th Century, marketing developed the idea of taking consumers' needs into account and adjusting to them (Kessous 2012b). This required the industrial world to renew operating methods that were based, in the first half of the 20th Century, on the notion of maximum product standardization. In the automotive sector, for instance, this was achieved by combining a standardized basic product with options that could be added at the customer's request (functionalities, color, etc.). The first evolution toward adjusting to client needs in the field of quantification came through the use of segmentation techniques, which make it possible to divide a population into several groups of clients with homogeneous needs and expectations (Kessous 2012b). These segmentation techniques, described in Box 2.12, are based in particular on the sociodemographic characteristics of clients and on the assumption that two people with similar sociodemographic characteristics will have similar expectations of the brand. More recently, the development of loyalty card systems has enabled brands to record not only sociodemographic characteristics, but also customers' purchasing histories. These new data have made it possible to introduce a form of personalization (Kessous 2012b): offering customers coupons for a specific product that they rarely buy, for example. The growing success of online commerce then made it possible to collect even more precise data on purchasing habits via the traces left by Internet users. This has led to the emergence of behavioral marketing, which aims to record all actions carried out online and then make suggestions for purchases or content (Kessous 2012a). The targeted advertising model is so profitable that new business models have emerged: offering free access to services in exchange for the recovery of user data, with remuneration coming from advertisers.

Marketing has therefore undergone a progressive movement from segmentation to personalization. This movement is embodied in particular by a change in quantification conventions (Box 2.12).

Segmentation methods make it possible to divide a population into groups. Generally, the statistician knows in advance on which variables he or she wants to build the groups, but not how many groups there will be. Several steps are then necessary. First of all, it is necessary to define the variables that will contribute to the definition of the groups (called active variables). Then, the statistical software proposes a segmentation. The next step is to interpret intergroup differences and intragroup similarities; in other words, to describe the groups. The last step consists of defining differentiated marketing actions and strategies adapted to each group: differentiated subscription formulas, differentiated product types, etc. Even if they take the needs of individuals into account, these methods therefore remain at a very aggregate level. On the contrary, personalization methods go down to the individual level in that they can result in unique sets of content for each individual. The example already mentioned of Deezer's song suggestion algorithm clearly shows the difference between segmentation (which results in the production of an identical offer for an entire group, differentiated between groups) and personalization (which results in the production of unique suggestions for each individual).

Box 2.12. From segmentation to personalization in marketing
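The segmentation steps listed in Box 2.12 can be illustrated with a tiny one-dimensional k-means (Lloyd's algorithm) on a single invented "active variable"; real segmentations use several variables and dedicated statistical software:

```python
# Segmentation sketch: group clients on one active variable
# (annual spending, in illustrative units), then describe the groups.

def kmeans_1d(values, k, iters=20):
    """Group values around k centers (plain Lloyd's algorithm)."""
    # Crude initialization: k values spread across the sorted data.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[i].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

spending = [120, 130, 150, 900, 950, 1000, 4000, 4200]
centers, groups = kmeans_1d(spending, k=3)

# Last step of Box 2.12: describe each segment to target actions.
for c, g in zip(centers, groups):
    print(f"segment around {c:.0f}: {sorted(g)}")
```

The three segments that emerge (occasional, regular and premium clients, say) would each receive a differentiated offer, whereas a personalization algorithm would go one level further and build a unique offer per individual.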

Marketing thus clearly implements today the principle of personalization through quantification. Recently, the notion of "HR marketing" has emerged (Panczuk and Point 2008). The aim is to apply marketing methods and techniques to the HR field, both for candidates and for employees, with a view to attracting and retaining them. The logic of personalization, inherited from marketing, can be introduced into HR through this movement.

2.2.2.2. The misleading horizon of an individualizing HRM?

Like marketing, HRM can give rise to different forms of personalization (Arnaud et al. 2009): collaborative, adaptive, cosmetic and transparent personalization. Collaborative personalization is based on the employee's expression of his or her individual needs, and then their consideration by the company. Adaptive personalization refers to the adaptation of HRM
practices to each employee: individualized schedules, early retirement formulas or work choice tools. Cosmetic personalization refers to offering the same service to all employees, but with a different presentation according to the employee's profile. Transparent personalization consists of offering each employee unique services, based on his or her preferences, without the employee having to express them. The authors do not focus on the particular case of personalization enabled by quantification, but it seems closest to this fourth type. The examples given in Chapter 1 (Box 1.17) make this point by showing what forms personalization through quantification can take in HRM: algorithms for personalized suggestions of jobs in the context of internal mobility, of training or of career paths, for example. The introduction of the concept of HR personalization is not without its difficulties. Indeed, as Dietrich and Pigeyre (2011) point out, HRM has classically positioned itself as a management activity based on different segments (e.g. type of contract or status), and has given relatively little prominence to the idea of taking individual needs and expectations into account. However, Pichault and Nizet (2000) have identified the existence of a form of HRM described as "individualizing". This form of HRM is characterized by the establishment of interindividual agreements on the acquisition and enhancement of skills, and seems particularly suited to organizations with multiple statuses. However, it also requires reflection and work on organizational culture in order to compensate for interindividual differentiation through integrative mechanisms. One might think that personalization through quantification would fit into this model, but this assumption does not stand up to scrutiny. Indeed, the individualizing model implies an interindividual negotiation (e.g. between the manager and the employee) over the employee's needs, his or her recognition, etc. It therefore places great emphasis on interpersonal relationships and on the expression of needs by employees themselves (as in the collaborative personalization mentioned above). However, this relational dimension is most often absent from the devices for personalization through quantification mentioned above (suggestion algorithms, for example). Moreover, although personalized, quantification always introduces a form of standardization: even if each employee receives a unique set of job suggestions, all employees remain subject to the same process of data collection and job suggestion sending.

Personalization through quantification is an evolution for both statistical science and HRM. Quickly adopted in the marketing field, it is still spreading tentatively in HRM, supported by the growth of HR marketing. While this personalization does not amount to a complete change of paradigm or HRM model, it does require adjustments within existing models, and may perhaps help to blur the distinctions between them.

2.3. In search of predictability

The link between quantification and decision-making is being challenged by the emergence of so-called predictive approaches. These approaches modify both the positioning of statistics and that of the HR function.

2.3.1. Are we heading toward a rise in predictability at the expense of understanding?

Historically, the science of statistics has positioned itself as a science that measures the past or present in an attempt to understand and explain it. However, the rise of so-called predictive approaches calls this positioning into question by introducing an almost "prophetic" dimension (Beer 2019). The notion of prediction also raises questions about the effect of statistics on reality, in relation to the notion of performativity.

2.3.1.1. Statistics, the science of description and explanation, but also of prediction?

Initially, and even if national histories may differ on the subject, European statisticians generally focused on the question of measuring human, social, individual or collective quantities (Desrosières 1993). Although they now appear trivial, population census operations have contributed greatly to the emergence of statistical science since the 18th Century, which explains its name2. At that time, statistics had several characteristics. It aimed at descriptive knowledge, i.e. it sought to describe the world through numbers. Moreover, it was synchronic, in the sense that it gave an image of this world at a given moment t (unlike history, which is interested in developments and their reasons).

2 The word "statistics" comes from the German Statistik, a word coined by the economist Gottfried Achenwall, who defined it as the body of knowledge that a statesman must possess. The roots of this word therefore underline the early intertwining of government and statistics (Desrosières 1993).

More recently, the rise of econometrics and modeling has helped to highlight another goal, that of explanation, i.e. the search for cause-and-effect links between different measurable phenomena. Even if this objective could already be seen in the 19th Century in Galton's study of heredity or in Durkheim's study of the social determinants of suicide, it became almost unavoidable in the 20th Century. It is in particular the crossing between, on the one hand, the progress made in the field of probability theory and, on the other hand, the concern to be able to model reality that allowed the birth of modern econometrics, which aims to confront economic or sociological theories with empirical data (Desrosières 1993), but also to highlight causal relations (Behaghel 2012). The Econometric Society and its journal (Econometrica, founded in 1933) affirm that econometrics aims to understand quantitative relationships in economics and to explain economic phenomena. The challenge of identifying causal relationships obviously comes up against the difficulty of proving that the relationships are indeed causal, and not merely simultaneous. The development of "all other things being equal" (ceteris paribus) reasoning supports the process of identifying causes by making it possible to control for third variables. More precisely, two variables may appear artificially linked to each other because they are both linked to a third variable, and "all other things being equal" reasoning makes it possible to control for this third variable and thus eliminate such cases. However, this methodology is not sufficient to prove a causal relationship, i.e. a relationship of anteriority, between two variables. Behaghel (2012) traces the multiple methodological developments made in the 20th Century to make it possible to identify causalities.
The first development is called the structural approach. It requires the preliminary modeling, often based on theoretical models, of the links between variables, and the identification of three types of variables: a variable to be explained, an explanatory variable and an instrumental variable, which has an impact on the explanatory variable but no direct impact on the explained variable. However, the entire validity of this approach rests on theoretical assumptions formulated ex ante about the causalities between variables (and in particular on the central assumption that the explained variable is not directly influenced by the instrumental variable). The second development, temporal sequences, is based on the observation of temporality: if a variation of a
variable X occurs before the variation of a variable Y, then it is assumed that X has an effect on Y. However, again, this approach requires a fundamental assumption of a link between X and Y, and the possibility of excluding third variables. The third development, increasingly used in public policy evaluation, is the experimental model. It is based on the implementation of controlled experiments, comparing a test group with a control group (as in medical studies on the effects of drugs). These three developments share a common aim, that of identifying causal relationships and explaining the phenomena observed. They also show that the epistemological paradigm of econometrics is essentially based on an approach that mobilizes a theory, which allows hypotheses to be formulated and tested, hence the notion of modeling. This is therefore a hypothetico-deductive approach (Saunders et al. 2016). Moreover, the econometric approach is most generally part of a positivist epistemological paradigm (Kitchin 2014), which assumes the existence of a reality independent from the researcher, and which can be known (and in this case measured). However, another purpose was sometimes assigned to quantification during the 20th Century and has recently gained prominence: prediction. Indeed, the psychotechnical tests mentioned in this book's introduction are intended to measure human skills, but with the aim of predicting behaviors and performance levels. One of the criteria for the relevance of these tests is their ability to effectively predict the success of individuals. They are therefore a first step toward the search for prediction. Similarly, in most Western countries, administrations regularly provide forecasts (of growth, employment rates, etc.) for the coming year or years (Desrosières 2013). 
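The "third variable" problem underlying the ceteris paribus reasoning discussed above can be made concrete with a toy simulation (entirely illustrative data, not drawn from this book): two variables X and Y are both driven by a confounder Z, so their raw correlation is strong, yet it all but disappears once Z is controlled for; a strong correlation alone therefore proves nothing about causality.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def residualize(v, w):
    """Remove from v the part linearly explained by w, i.e. control for w."""
    n = len(v)
    mv, mw = sum(v) / n, sum(w) / n
    slope = sum((a - mv) * (b - mw) for a, b in zip(v, w)) / sum((b - mw) ** 2 for b in w)
    return [a - mv - slope * (b - mw) for a, b in zip(v, w)]

# Z drives both X and Y; X and Y have no direct link to each other.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

raw_corr = pearson(x, y)                                      # strong but spurious
partial_corr = pearson(residualize(x, z), residualize(y, z))  # near zero
print(f"raw r = {raw_corr:.2f}, r controlling for Z = {partial_corr:.2f}")
```

Controlling for Z here plays the role of the "all other things being equal" reasoning; it eliminates the artificial X–Y association but, as noted above, still does not by itself establish causality.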
More recently, the emergence of so-called predictive analysis, particularly in relation to Big Data, has contributed to the dissemination of this concept (Box 2.13). Predictive analysis involves the analysis of past or present data to infer the probabilities of events occurring. Today, it has many applications, whether in medicine (predicting the evolution of a disease), in the legal field (predicting a probability of recurrence), or in marketing (predicting purchasing behavior), etc. Predictive analysis can be based on methods used in descriptive or explanatory statistical analysis. The identification of causal relationships between variables, made possible for example by the use of econometric
methods, is very useful for predicting phenomena. Indeed, it establishes a prior relationship between two sets of variables, which means the behavior of a variable can be predicted from all its causal variables. For example, in the field of recruitment, an econometric analysis would make it possible to identify which characteristics of employees most determine their performance: diploma, professional experience, skills, sociodemographic characteristics, etc. Based on the analysis carried out, it is sufficient to know this information about candidates (which most often is known because it is present in their CV) to be able to predict their performance within the company (which is unknown at the recruitment stage). Similarly, the simple identification of correlations can be very useful to be able to predict a phenomenon, since it is sufficient to have information on a variable A to infer (predict) information on a variable B, if the link between A and B is known (see Chapter 1, Box 1.15 and the notion of “proxy variable”). Box 2.13. So-called predictive analysis (source: Mayer-Schönberger and Cukier 2014)
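The recruitment example above can be sketched minimally (with made-up numbers and a deliberately naive one-variable model): performance scores observed for past employees are regressed on a CV attribute, and the fitted line is then used to predict the still-unobservable performance of a candidate.

```python
# Hypothetical past employees: (years of experience, observed performance score).
past = [(1, 52), (2, 55), (3, 61), (4, 64), (5, 70), (6, 73)]

xs = [x for x, _ in past]
ys = [y for _, y in past]
n = len(past)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least-squares fit of performance on experience.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def predicted_performance(experience):
    """Performance is unknown at the recruitment stage; the CV attribute is not."""
    return intercept + slope * experience

print(round(predicted_performance(4.5), 1))  # → 66.9
```

This is exactly the proxy logic described in the box: once the link between the known variable (experience) and the unknown one (performance) is estimated, knowing the former is enough to infer the latter.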

Predictive statistics are therefore based on the same methods as descriptive or explanatory statistics. Despite this, they represent important changes in epistemological and methodological practice. On the epistemological level, the notion of prediction introduces three evolutions. First, it implies a lower interest in the meaning and interpretation of the model, linked to a focus on its predictive quality. In explanatory statistics, it is essential to be able to interpret the model from the perspective of causality, and to be able to spell out the links between the variables. This explains why theoretical models have been used to determine a priori the meaning of the links between the variables. However, the focus on predictive quality results in a decrease or even the disappearance of the importance of theoretical models (Kitchin 2014; Cardon 2018). Indeed, if the only purpose of a model is to predict a variable Y as accurately as possible, and if computational power and the quantity of data are such that all the relationships between a large number of variables can be tested, why bother with models that would lead to preselecting the variables considered relevant ex ante? Box 2.14 reflects these two concomitant developments. The rise of predictive analysis introduces essential questions about the notion of individual free will, and pits two discourses against each other: one convinced that the majority of human behavior is predictable, the other that human beings always retain a form of unpredictability.

Quantification and Decision-making

Predictive analysis focuses on the notion of prediction. If only prediction is important, the ability to explain why the model is constructed in such a way, or why it gives such a result, becomes less important. This leads to a lack of interest in the interpretation of the models, or even in the models themselves. Kitchin (2014) refers to the emergence of a fully data-driven science in which the links between variables and the relevance of variables would not be determined upstream by theory: they would emerge with the results of quantitative analysis. In other words, all available variables could be provided to the analysis tool, which would itself produce the links between them. This lack of interest in interpretation is visible in the presentation of results. In the case of explanatory statistics, the results are presented by emphasizing the identification of the most important variables and the measurement of their effects; in the case of predictive statistics, it is generally the final probabilities of occurrence of an event, predicted by the quantitative analysis, that are presented. For example, I observed two very different results-presentation workshops on a similar subject. The first case involved the presentation of a study conducted internally by a company on the determinants of absenteeism. This study was therefore based on an explanatory approach. The results were presented with an emphasis on identifying these determinants (at the individual, team and company levels) and measuring their respective significance. The second case refers to a solution for predicting absenteeism. Although mobilizing a methodological approach relatively similar to that used in the first case, the results were very different, focusing on the identification of the teams and periods of the year likely to show higher absenteeism.

Box 2.14. Predictive analysis, a sign of a lack of interest in the meaning and interpretation of models, or even of the total disappearance of models (Mayer-Schönberger and Cukier 2014; Kitchin 2014; Cardon 2018; and study by the author)

Methodologically or practically, the relevance of a predictive statistic is not measured in the same way as that of a descriptive or explanatory statistic. Indeed, the focus is on predictive validity, not on the adequacy of a model to theory or data (Box 2.15). Predictive analysis focuses on a single question: does my model successfully predict the variable I want to predict? Therefore, this question structures the measurement of model relevance, which is based on the adequacy between what the model predicted and what actually happened. A model is considered to be a “good predictor” of an event if its predictions of that event have proven to be justified. In practice, this means that a comparison can be made ex post between what the model predicted and what actually happened, which means the relevance of the model can only be measured a posteriori. Moreover,
this measurement is not always possible. For example, in the case of a CV pre-selection algorithm, the main purpose of which is to predict the future performance of candidates, only the performance of the successful candidate is measurable a posteriori and not that of the unsuccessful candidates. In this case, another possibility is to compare what the model predicts and what the human being anticipates: in this example, compare the ranking of applications made by the algorithm, and that made by a human recruitment manager. However, this approach is open to criticism. Indeed, it postulates that human choice is the “right” choice and that predictions made by the machine must resemble those made by humans, therefore precluding the possibility that predictions made by the machine are better than those made by humans (an argument that nevertheless regularly appears in speeches in favor of the use of algorithms in the recruitment field, for example). Box 2.15. Measuring the relevance of predictive analysis (source: Study by the author)
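The ex post comparison described in Box 2.15 can be sketched as follows (purely illustrative predictions and outcomes): relevance is measured by confronting, case by case, what the model predicted with what actually happened.

```python
# 1 = high absenteeism predicted/observed for a given team-week, 0 = not.
# Illustrative values only.
predicted = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
actual    = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

true_pos = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
accuracy = sum(1 for p, a in zip(predicted, actual) if p == a) / len(actual)
precision = true_pos / sum(predicted)   # how often a raised alert was justified
recall = true_pos / sum(actual)         # how many actual events were anticipated

print(accuracy, precision, recall)  # 0.8 0.8 0.8
```

Note that such a table can only be filled in once the outcomes are observed, which is exactly why, as the text points out, the relevance of a predictive model can only be measured a posteriori, and not at all when the prediction itself prevents the outcome from being observed (as with rejected candidates).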

During the 20th Century and at the beginning of the 21st Century, statistics, long confined to descriptive and explanatory purposes, were given a new objective: that of prediction. This goal has grown significantly in recent years, particularly in HR, in line with the promises associated with Big Data. Managerial discourse now distinguishes between so-called decisional analysis, which corresponds to the use of data to make decisions, and therefore to the EBM approach; so-called augmented analysis, which refers to tools capable of interpreting data to facilitate understanding; and so-called predictive analysis, which aims to anticipate behavior based on trends (Baudoin et al. 2019). Although based on the same quantitative methods, this new purpose nevertheless introduces relatively significant epistemological and practical changes.

2.3.1.2. Prediction or performativity?

The question of prediction refers to the possibility of anticipating the occurrence of an event in reality. The positivist paradigm can fully accommodate this objective. However, the constructivist approach, which challenges the idea of a reality independent of the measurement made of it, leads to a more cautious stance on this notion of prediction. Indeed, many studies have highlighted the performativity of quantification, namely the fact that quantification has an effect on reality (see Callon's (2007) work on the economy). This performativity can take
many forms (MacKenzie 2006). Generic performativity is the simple use of quantification tools (methods, metrics and results): for example, the use of quantified measurements of worker performance constitutes a change in HR practices. Effective performativity refers to the effect of this use on reality. Within effective performativity, so-called Barnesian performativity characterizes situations where reality is modified in the direction indicated by the quantification tool. The notion of a self-fulfilling prophecy is a good illustration of this type of performativity: the very fact of making a quantified prediction about an event makes it happen. Counterperformativity, on the other hand, refers to cases where reality is modified in the opposite direction to that indicated by the quantification tool. However, these definitions of the performativity of quantification remain relatively theoretical. For the sake of clarity, the visible effects that quantification can have on reality are categorized below.

First, quantification creates a new way of seeing the world and makes visible objects that may not have been so before (Espeland and Stevens 2008). Espeland and Stevens (1998) give the example of feminists who sought to measure and value unpaid domestic work, so as to highlight inequalities in the distribution of wealth, on the one hand, and of domestic tasks, on the other. Similarly, the work of Desrosières (1993, 2008a, 2008b) and Salais (2004) gives multiple examples of categories of thought created by quantification, from the average man already mentioned to the unemployment rate, including inequalities.

Second, quantification can lead individuals to adopt certain behaviors, what Espeland and Sauder (2007) refer to as "reactivity", and Hacking (2001, 2005) as loop or interaction effects. Espeland and Sauder give the example of rankings (e.g. of academic institutions) and show how members of institutions and the institutions themselves adapt their behavior according to the ranking criteria. In a different register, an algorithm that suggests content (posts, or training) to an individual can induce behavior in that individual (following the suggestion). Hacking shows how certain human classifications produce a loop effect, because they contribute to "shaping people" (Hacking 2001, p. 10). The literature is also particularly rich on what can be described as the perverse effects of quantification, corresponding to situations where, in response to quantification, individuals adopt behaviors that are at odds with the objective of the quantification. This is particularly the case for quantified work and performance evaluations. Teachers assessed on their students' scores on a standardized test may adopt deviant behaviors
(cheating, for example) or even focus all their teaching on having students learn the test answers (possibly by heart), thus moving away from their fundamental mission of transmitting intellectual content (Levitt and Dubner 2006). Similarly, hospital doctors evaluated on the number of patients treated may be tempted to select the easiest patients to treat, or to reduce the quality of their care (Vidaillet 2013). The latter strategy can lead to an increase in the rate at which patients return to hospital, which ultimately defeats the purpose (cost reduction).

Finally, quantification can directly modify the real world without going through the intermediary of individuals. Matching algorithms are a good example of this form of performativity (Roscoe and Chillas 2014), especially when they no longer leave room for human intervention. For example, a recruitment algorithm to which the pre-selection of candidates is entirely delegated, without human intervention, acts directly on reality (through the selection of candidates), without the need for a human intermediary.

These different examples therefore illustrate the multiple effects that quantification can have on reality. It therefore seems illusory to define predictive analysis as the mere anticipation of future states of a reality that is independent of the quantification performed. By predicting a future state, quantification influences, directly or indirectly, the probability of that state occurring.

2.3.2. The predictive approach: an issue for the HR function

These conceptual debates should not obscure the fact that the predictive approach is now an issue and represents changes for the HR function. First of all, it is part of an attempt to renew the relationship between HR and employees. Then, it recomposes the relationship between the HR function and the company's management.

2.3.2.1. An issue in the relationship with employees

Employees, as consumers, are more and more used to algorithmic tools that anticipate their wishes and needs: Amazon recommends products, Deezer music, Netflix movies, etc. In addition, some players have already invested in the HR field. As seen in Chapter 1 (Box 1.17), LinkedIn already seeks to anticipate the career development wishes of its members and suggests
positions or training. It would therefore be conceivable today for an employee working in a given company to receive, via LinkedIn, an offer for a position in the same company, which amounts to a form of outsourcing, or even uberization, of internal mobility. Employees can therefore expect the same form of proactivity from their company's HR function (Box 2.16).

In a multinational company in the digital sector, employees have at their disposal an extensive training catalogue containing thousands of pieces of content, in particular e-learning content. During focus groups aimed at identifying areas for improvement in their daily work, several employees expressed the wish that a predictive algorithm, capable of predicting their training needs and thus suggesting content, be integrated into the training platform.

Box 2.16. Setting up a system of training suggestions for employees (source: study by the author)

In the context of a crisis of legitimacy of the HR function, often suspected of focusing on the most administrative aspects and on cost reduction, adopting this predictive approach can therefore send the signal of an HR function that is genuinely concerned about employees and their development.

2.3.2.2. An issue in the relationship with the company management

In addition, the HR function has a strong interest in developing a proactive approach and stance. This is the ambition of forward-looking policies, for example around the forward-looking management of jobs and skills. Today, these policies are based on trend projections and assumptions about market developments to identify not only the key skills of tomorrow but also the jobs that will need to be recruited for. This type of policy can benefit significantly from predictive analysis, which could improve the accuracy with which both changes and recruitment needs are measured, for example through resignation prediction algorithms (Box 2.17).

Today, many players (HR software companies, for example, but also consulting firms and start-ups) say they have developed an algorithm for predicting resignations. These algorithms can mobilize different data. The Workday algorithm uses career progression, career path and labor market conditions as predictors of resignation. Other algorithms, sometimes developed internally within companies, mobilize sociodemographic characteristics, but also home-to-
office travel time, job satisfaction and even behavior on professional social networks. The interest for the company is twofold. First of all, it can set up additional retention actions for employees it wishes to keep and who present a high risk of resignation. Second, it offers the possibility of anticipating, by setting up succession plans, for example, or by planning the necessary recruitment further in advance.

Box 2.17. Predicting resignations (source: press articles3; Yang et al. 2018)
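None of the vendor models mentioned in Box 2.17 are public. As a purely hypothetical illustration of the logic, a resignation-risk score can be built as a logistic function of a few of the variables cited above, with entirely made-up weights (no real vendor model is reproduced here):

```python
import math

# Made-up coefficients, for illustration only; in practice they would be
# estimated from historical resignation data.
WEIGHTS = {"commute_minutes": 0.03, "satisfaction": -0.9, "years_since_promotion": 0.4}
BIAS = -1.0

def resignation_risk(employee):
    """Logistic score in (0, 1): higher means higher predicted risk of resigning."""
    s = BIAS + sum(w * employee[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-s))

settled = {"commute_minutes": 10, "satisfaction": 4, "years_since_promotion": 0}
at_risk = {"commute_minutes": 60, "satisfaction": 1, "years_since_promotion": 4}

print(f"{resignation_risk(settled):.2f} vs {resignation_risk(at_risk):.2f}")
```

The output of such a score would then feed the two uses described in the box: targeted retention actions for high-risk employees the company wishes to keep, and earlier succession or recruitment planning.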

The predictive approach has at least two benefits for the company. The first is the possibility of implementing corrective actions upstream where necessary: strengthening absenteeism-prevention policies before periods when high absenteeism is expected, for example. The second is the possibility of anticipation (such as providing reinforcements for teams and periods at high risk of absenteeism). Integrating a form of this predictive approach into decision-making represents a change in stance and an improvement for the HR function, both in its relations with employees and with the company. It is therefore easier to understand the success of this type of approach in the managerial literature, for example.

This chapter therefore focused on the question of the link between quantification and decision-making, and explored three components of this link. First, the myth of a quantification that allows more objective decisions to be made was documented and questioned. Then two changes were studied. The first concerns the use of quantification for personalization purposes (and therefore decision-making related to individuals) and the second concerns prediction purposes (and therefore decision-making related to the future). These two developments introduce epistemological and possibly methodological changes for statistical science, and changes in stance and logic for the HR function. They are also part of an idealized vision of quantification, based on the idea that quantification improves decision-making, whether it is directed toward individuals or toward the future. This therefore underscores the persistence of the myth around quantification and raises the question of the effects that challenging this myth could have within organizations.

3 See https://www.businessinsider.fr/us/workday-predicts-when-employees-will-quit-2014-11 (accessed October 2019).

3 How are Quantified HR Management Tools Appropriated by Different Agents?

Quantifying Human Resources: Uses and Analyses, First Edition. Clotilde Coron. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

The quantification tools used in HR can be considered as management tools (Chiapello and Gilbert 2013). Yet the dissemination of a management tool, and its appropriation by the various actors, are not necessarily immediate. Thus, Vaujany (2005) recommends looking from two different angles to understand the appropriation of a management tool: that of designers, on the one hand, and that of users, on the other. He shows that there may be a gap between the vision of designers and that of users, and that the use of a management tool always deviates to some extent from what its designers expected. In this case, the designers of HR quantification tools are extremely diverse, ranging from HR actors themselves to data experts, consulting firms or researchers. In this chapter, dedicated to the appropriation of quantification tools, beyond this dichotomy between designers and users, it seems important to me to distinguish, albeit roughly, between management actors (management and the HR function), on the one hand, who may have an interest in disseminating quantification tools, and employees and their representatives, on the other hand, who may sometimes be reluctant. Indeed, historically, management and the HR function have regularly used quantification as a rationalization tool (section 3.1). Conversely, employees may show a certain distrust of these tools. This mistrust may relate to data collection and processing (section 3.2), which refers more to a psychocognitive dimension, but also to the decision-making based on it (section 3.3), which refers more to a sociopolitical dimension. It should be noted, however, that this very crude and schematic distinction should not overshadow the many variations that
can exist between individuals, but also between organizations and between sectors. Thus, a hypothesis can be formulated: in companies in technical sectors with numerous engineering employees, employees are both more convinced of the potential benefits of quantification and more aware of its limitations.

3.1. The different avatars of the link between managerial rationalization and quantification

Management and the HR function have regularly shown their interest in rationalizing the organization. In the organizational field, rationalization aims at organizational efficiency and the optimization of human resources management. This interest in rationalization has been reflected on several occasions since the beginning of the 20th Century in the mobilization of quantification tools (Salais 2016). The example of Taylorism was already given previously: Taylorism relied essentially on the measurement of work in order to rationalize it. Three further examples structure this section: first, bureaucracy as studied and presented by Weber and then Crozier; second, New Public Management (NPM), extensively studied in sociology and management; and finally, more recently, algorithmic management.

3.1.1. Bureaucracy

Of these three examples, bureaucracy was the first to appear historically. Initially confined to public administration, bureaucracy has spread to other organizations. However, its characteristics, analyzed in particular by Weber (1971) and Crozier (1963), remain the same. The use of quantification, although rarely mentioned by these two authors, may constitute one of the characteristics of bureaucracy, particularly because quantification embodies a form of rational-legal authority.

3.1.1.1. The Weberian ideal type of bureaucracy and its extensions

Weber (1971) identified the six principles of the bureaucratic ideal type. First, jobs, tasks and responsibilities are precisely defined. 
Recruitment and selection for these jobs are based on technical skills, verified by obtaining a diploma or through a competitive examination. Then, these different jobs are integrated into a formal hierarchical structure, which precisely defines the lines of subordination and authority. In the same way, the rules are formalized and the
work is standardized, through rules, codes and methods, as well as through strict control of compliance with these rules. As a result, work relations can and should be impersonal, avoiding both conflict and emotional attachment. Finally, the salary is essentially fixed and depends closely on the job held, and promotions are based in particular on seniority. The list of these principles makes it possible to identify more general characteristics of this ideal type. First of all, depersonalization is a central issue in bureaucracies. Everything is put in place to prevent the interpersonal dimension from taking precedence over the general order of the structure. Thus, the rules on work, remuneration and promotion aim to avoid excessive submission to, and dependence on, others (the line manager, for example, but also colleagues). Second, standardization is a key element: it makes it possible to ensure the coordination of work in a rigid relational context, contributes to reducing the importance of people and personalities, and ensures that the general structure is maintained. This standardization is implemented and maintained with the use of various tools: organization charts, job descriptions, work control and evaluation, procedures and standards. Finally, the strict division of labor is a third essential feature, which includes recruitment criteria based on technical skills and a clear formalization of tasks, responsibilities and reporting relationships. Weber (1971) argues that the bureaucratic model represents a definite gain in efficiency. However, this position was challenged by Crozier (1963), who exposed the limitations and flaws that reduce the (economic) efficiency of this model. Thus, standardization and depersonalization do not eliminate power games, territorial struggles or conflicts of interest, which are even further exacerbated by the stability of the system (maintained by the rules). 
Similarly, formalization does not prevent the emergence of areas of uncertainty that structure power relations and conflicts. Finally, Crozier refers to the establishment of "bureaucratic vicious circles", constituted by the multiplication of rules that rigidify the organization. Mintzberg (1982) distinguishes between different bureaucratic forms and identifies the contingency factors that explain an organization's adoption of one form over another. Thus, he distinguishes between machine bureaucracy, based on work rules and processes, and professional bureaucracy, based on the qualifications and skills of each individual. In both cases, work coordination is impersonal and work is highly standardized.

Finally, Pichault and Nizet (2000) establish a link between this organizational model and the associated HRM model, described as "objectifying HRM", which is characterized by the predominance of quantification and corresponds, among others, to the following practices: quantitative recruitment planning, evaluation based on standardized criteria measured using quantitative scales, promotion based on seniority or competitive examinations, etc. These different bureaucracy models may differ in some respects, but they are similar in terms of high standardization, depersonalization and a strict division of labor. Yet the use of quantification can promote the emergence and maintenance of these three characteristics.

3.1.1.2. The rational-legal authority of quantification

Quantification contributes to a form of standardization, as the quantification tools are the same for everyone and encourage objects, phenomena, etc. to fit into pre-established formats (Espeland and Stevens 1998). As discussed in Chapter 2, Porter (1996) also links the myth of objective quantification to depersonalization, a link based on the idea that a quantification operation reduces the influence and importance of the human being. Taylorism is a particularly emblematic example of the links that can be established between the division of labor and quantification. According to Weber (1971), bureaucracy is also characterized by a particular form of authority or domination: rational-legal authority. This form of domination is based on a belief in the legality and rationality of the rules and authority exercised. More precisely, rationality can be "instrumentally rational" (allowing the effective adaptation of means to the goals pursued) or "value-rational" (corresponding to convictions). In both cases, rational-legal authority is characterized by a form of depersonalization: this explains why Pichault and Nizet (2000) define the objectifying HRM model partly by the notion of rational-legal authority. 
Quantification could precisely be seen as one of the avatars of this rational-legal authority. It is characterized by depersonalization, and has, as previously seen, generated a number of myths (including that of objective quantification) that have reinforced the belief in its legality and rationality, and thus conferred significant power on it. Moreover, Weber insists that rational-legal authority is generally based on knowledge and technical expertise, with statistics being part of this body of knowledge, for example
(Bruno 2013). Beyond the etymological origin of the word "statistics" already mentioned in the previous chapter, this may explain why quantification tools are regularly used in bureaucracies for different HR or managerial purposes: recruitment, staff appraisal, promotion, etc. Thus, several of the HRM practices that Pichault and Nizet (2000) describe for the objectifying HRM specific to the bureaucratic model are based on quantification: evaluation based on standardized criteria and quantified tools (such as rating scales) and accurately recorded working time, for example. Thus, bureaucracy, aimed at rationalizing work, can instrumentalize quantification for standardization and depersonalization purposes, but can also make it a tool for strengthening rational-legal authority.

3.1.2. New Public Management

More recently, at the end of the 20th Century, public action targeted another form of rationalization, this time directly inspired by methods from the private sector, which was called New Public Management (NPM). Here too, the use of quantified tools (indicators, metrics and dashboards, in particular) is one of the central characteristics of NPM (Belorgey 2013a; Remy and Lavitry 2017).

3.1.2.1. Rationalization over time

NPM aims in particular at introducing market mechanisms into the supply of goods and services of general interest, which implies, for example, directing activities and allocating services according to users' needs rather than according to pre-established rules or procedures, whilst also abandoning the specificities of civil servant status and the principle of advancement based on seniority. It also seeks to introduce more transparency into the quality and cost of services, which implies a greater use of evaluation. All this is aimed at greater efficiency in the use of public funds (Chappoz and Pupion 2012). The concern about the efficiency of public activity is not new, as previously seen with Weber's work. 
Moreover, as early as the first half of the 20th Century, Fayol was interested in rationalizing the organization of administrations and had already theorized precursor elements of NPM, or at least elements positioned halfway between Weberian bureaucracy and NPM (Morgana 2012). Thus, he emphasizes the State’s accountability to taxpayers; he advocates remuneration and promotion on the basis of merit
rather than seniority; and he suggests controlling work through timekeeping and a methodical analysis of work (similar to that proposed by Taylor), which he links to greater transparency regarding the quality and cost of administrative services. NPM is nevertheless a new doctrine, because it translates a main objective, that of efficiency, into an arrangement of numerous subobjectives that are themselves translated into practices, thereby introducing neoliberalism into the bureaucracy (Bruno 2013). Thus, the subobjective of orienting and allocating activities and services according to users’ needs is reflected in management control practices aimed at measuring, on the one hand, users’ needs and, on the other hand, the adequacy between the supply of public services and these needs. The subobjective of abandoning civil servant status and promotion based on seniority is reflected in individual assessment practices based on work tests and the implementation of quantified objectives. The transparency subobjective is reflected in practices of evaluating work and activity, as well as in the communication of evaluation results (Espeland and Sauder 2007). More concretely, these evaluation practices are most often based on activity indicators, dashboards and rankings, which can then determine the resources allocated to the structures (Belorgey 2013b).

3.1.2.2. The role of quantification in the institutionalization and definition of NPM

These NPM practices are in fact largely based on the use of quantification (Remy and Lavitry 2017).
Several concrete elements characterizing NPM are based on measurement practices:
– definition and monitoring of activity-related indicators (aiming at the transparency of the costs and benefits of public action);
– definition and monitoring of work-related indicators (in particular for the evaluation of staff);
– implementation of systematic procedures for a quantified evaluation of the effects of public policies;
– mobilization of benchmarking, particularly internationally.

Several examples have already been given of the use of indicators to measure work or activity in jurisdictions. In the French hospital sector, for example, activity is measured by indicators related to the number and complexity of acts performed (Belorgey 2013b; Juven 2016); in agencies
helping people to return to work, the construction of indicators aims to measure the rate and productivity of counselors, but also the quality of the service provided to the job seeker, or the maintenance of the employability of the unemployed (Remillon and Vernet 2013; Remy and Lavitry 2017). In another field, at the international level, many studies have focused on the indicators used to measure the work of researchers and the reputation of their institutions, which feed international rankings such as the Shanghai Ranking (Box 3.1). Studies have thus examined the construction of the indicators (Altbach 2015), the construction of the aggregate measure used to rank institutions (Dehon et al. 2010), and the way in which researchers and institutions adapt their behaviors and practices according to the ranking (Espeland and Sauder 2007).

The activity of educational and research institutions is now regularly measured by international indicators, and this measure has been widely publicized since the first publication of the Shanghai Ranking (ARWU – Academic Ranking of World Universities) in 2003. It is used by governments to determine the allocation of certain public funds, and by students to choose where to apply. The Shanghai Ranking is based on six criteria to measure academic performance. The “Alumni” criterion measures the number of Nobel Prizes and Fields Medals won by former students of the institution. The “Award” criterion measures the number of these prizes won by members of the institution. “HiCi” (Highly Cited) measures the number of frequently cited researchers. “N&S” measures the number of articles published in the journals Nature and Science. “PUB” measures the number of articles referenced in citation indices (Science Citation Index-expanded and Social Science Citation Index).
Finally, “PCP” (Per Capita Performance) is a weighted average of the scores obtained in each category, divided by the number of full-time equivalent academic members of the institution. Despite the undeniable success of this ranking, it is regularly criticized and gives rise to many questions. Can the activity of research institutions be reduced to these few indicators, which overvalue scientific productivity in the fields of science and technology to the detriment of productivity in the human sciences and in teaching missions? Is there not a risk that the publication of the ranking will produce a self-fulfilling prophecy, for example by leading the best students to apply to the highest-ranked institutions, thus further improving those institutions’ rankings? Is it rigorous to compare institutions taken from such varying national contexts?

Box 3.1. Measuring the activity of research institutions (sources: Altbach 2015; Dehon et al. 2010; Espeland and Sauder 2007)
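The aggregation step that critics question can be sketched in a few lines of code. The weights below are the ones published by ARWU (10% for Alumni and PCP, 20% for each of the other criteria); the institution’s indicator scores are invented for illustration.

```python
# Illustrative sketch of the ARWU aggregation: a weighted average of the
# six normalized indicator scores (each on a 0-100 scale, with 100 for
# the best-performing institution on that indicator).
ARWU_WEIGHTS = {
    "Alumni": 0.10,
    "Award": 0.20,
    "HiCi": 0.20,
    "N&S": 0.20,
    "PUB": 0.20,
    "PCP": 0.10,
}

def arwu_score(indicator_scores: dict) -> float:
    """Weighted average of the normalized indicator scores (0-100)."""
    return sum(ARWU_WEIGHTS[k] * indicator_scores[k] for k in ARWU_WEIGHTS)

# Hypothetical institution, not real data:
example = {"Alumni": 40.0, "Award": 30.0, "HiCi": 50.0,
           "N&S": 45.0, "PUB": 70.0, "PCP": 35.0}
print(round(arwu_score(example), 1))  # → 46.5
```

The criticism quoted in the box follows directly from this construction: the final score depends entirely on which indicators are included and how they are weighted.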


The quantified evaluation of public policies has led to many methodological developments in the field of statistics and economic measurement. The implementation of randomized experiments is thus one of the most common methods used to isolate and measure the effects of public policies (Bruno 2015). Box 3.2 illustrates the use of this method to evaluate the French policy known as the anonymous CV.

In 2009 and 2010, Pôle emploi (the French agency dedicated to helping the unemployed) tested the removal of the civil status section on CVs. One of the objectives of this experiment was to reduce discrimination. The evaluation of the experiment was therefore intended to answer several questions. First of all, does anonymization change the probability that some groups exposed to discrimination (women, seniors, etc.) will be selected for an interview and then recruited? What are the consequences of anonymous CVs on companies’ recruitment costs? Are companies led to abandon the Pôle emploi service in their recruitment process when Pôle emploi only sends them “anonymous” CVs? The test protocol was based on a random assignment principle, comparing offers receiving anonymous CVs with comparable offers receiving nominative CVs. The authors point out that this protocol “corresponds to international standards in terms of public policy evaluation” (Behaghel et al. 2011, p. 2). The results of the experiment showed that the anonymous CV reduced recruiters’ tendency toward homophily, particularly in relation to gender and age, but was detrimental to candidates with a migrant background or residing in disadvantaged areas. They also showed that there was no additional cost for companies using anonymous CVs, and that the practice did not lead them to abandon the Pôle emploi service in favor of other recruitment agencies offering nominative CVs.

Box 3.2. Evaluation of the anonymous CV (source: Behaghel et al. 2011)
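The logic of random assignment underlying such an evaluation can be sketched as follows. Everything here is invented for illustration (offer ids, outcome rates, function names); it is not the protocol or data of Behaghel et al., only the general difference-in-means logic of a randomized experiment.

```python
# Minimal sketch of a randomized evaluation: job offers are randomly
# assigned to a treatment arm (anonymous CV) or a control arm
# (nominative CV), and the effect is estimated as the difference in
# mean outcomes between the two arms.
import random

random.seed(0)  # deterministic for the example

def assign_arms(offer_ids, p_treatment=0.5):
    """Randomly assign each job offer to treatment or control."""
    return {o: ("anonymous" if random.random() < p_treatment else "nominative")
            for o in offer_ids}

def average_treatment_effect(outcomes, arms):
    """Difference in mean outcome (e.g. interview rate) between arms."""
    t = [outcomes[o] for o in outcomes if arms[o] == "anonymous"]
    c = [outcomes[o] for o in outcomes if arms[o] == "nominative"]
    return sum(t) / len(t) - sum(c) / len(c)

offers = list(range(200))
arms = assign_arms(offers)
# Hypothetical outcome: 1 if a candidate from a group exposed to
# discrimination was selected for an interview on that offer, 0 otherwise.
outcomes = {o: random.random() < (0.25 if arms[o] == "anonymous" else 0.20)
            for o in offers}
print(round(average_treatment_effect(outcomes, arms), 3))
```

Because assignment is random, the two groups of offers are comparable on average, so the difference in outcome rates can be attributed to the anonymization itself.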

The mobilization of benchmarking is part of a practice of comparing public policies at the international level, particularly at the European level. This practice, which comes from the private sector, aims to compare different entities on previously chosen measures (Bruno 2013): it is therefore a good illustration of a commensuration operation (Espeland and Stevens 1998). Benchmarking first requires defining the criteria against
which the entities will be compared, and then transforming these criteria, which are sometimes relatively broad and conceptual (e.g. the “performance” of a service or the “quality” of a product), into quantitative indicators (Salais 2004). It then involves collecting the quantified measurements from the departments able to produce them, and comparing the entities with each other on the basis of these measures. Benchmarking generally ends with a results communication phase (as part of the transparency subobjective mentioned above). Benchmarking is thus both a production of knowledge (through quantitative measures) and a power tool (since it demonstrates that a certain level of performance is achievable by others; see Bruno 2015). The description of a specific example of the implementation of these different steps clearly shows the difficulties faced in setting up benchmarking, and the knowledge and power issues that can be associated with it (Box 3.3).

Since the early 2000s, the European Union has sought to coordinate and improve the actions of its Member States in the social field, in particular in the fight against poverty and social exclusion. This ambition led various countries to set up a European benchmarking system on the subject. The working group set up for the occasion was soon overwhelmed by the profusion of national social indicators and the diversity of their calculation methods. In 2001, it was finally able to define and adopt 18 indicators (e.g. the rate of low income after social transfers, or the long-term unemployment rate), but States retained the possibility of adding indicators of their own accord (France, for example, added 162 indicators, to compensate for the fact that the indicators defined at the European level are essentially performance indicators, which do not reflect the efforts made by the public authorities).
However, the following years revealed a significant difficulty in producing reliable and comparable data across States, and the first results of the benchmark could only be published in 2005. Some States quickly became concerned about their poor score on this benchmark, and questioned the principle of ranking as well as certain measures (showing, for example, that the inequality rate was lower in the Eastern European countries that had newly joined the European Union). Finally, following a rationalization movement, the inclusion-related benchmark project was merged with other benchmarks in 2005.

Box 3.3. An example of benchmarking at the European level, “social benchmarking” (source: Bruno 2010)
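The commensuration steps described above (choose criteria, turn them into indicators, rescale them to a common metric, rank) can be sketched as follows. Country labels and figures are invented; a common min-max rescaling is used as one plausible way to make heterogeneous indicators comparable, not the method actually used by the European benchmark.

```python
# Sketch of a benchmarking commensuration: heterogeneous indicators are
# rescaled to a common 0-100 range, then entities are ranked on the mean
# of their rescaled scores.

def min_max_rescale(values, higher_is_better=True):
    """Rescale a dict of raw values to 0-100, respecting polarity."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all values equal
    return {k: (100 * (v - lo) / span if higher_is_better
                else 100 * (hi - v) / span)
            for k, v in values.items()}

# Two indicators with opposite polarities: employment rate (higher is
# better) and long-term unemployment rate (lower is better).
employment = {"A": 74.0, "B": 68.0, "C": 71.0}
long_term_unemp = {"A": 3.1, "B": 5.4, "C": 2.2}

emp_scores = min_max_rescale(employment)
unemp_scores = min_max_rescale(long_term_unemp, higher_is_better=False)
scores = {c: (emp_scores[c] + unemp_scores[c]) / 2 for c in employment}

ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # → ['A', 'C', 'B']
```

The sketch makes the power dimension concrete: changing the list of indicators, their polarity, or their weighting changes the final ranking, which is precisely what the contested States in Box 3.3 understood.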

These various examples clearly show the central role played by quantification in the institutionalization and even the definition of NPM. This
central role is explained by the qualities attributed to quantification, which have already been mentioned: quantification is perceived as a tool of transparency, objectivity and neutrality, which in turn promotes efficiency and is therefore part of a rationalization process. The questioning of this myth and the observation of clandestine or deviant practices demonstrating the possible instrumentalization of quantification (Remy and Lavitry 2017) have thus far not been sufficient to reduce its force in public discourse.

3.1.3. Algorithmic management

Finally, as seen in Chapter 1, the recent development of platforms for direct contact between customers and service providers has led to the emergence of algorithmic management, which can be considered as a form of rationalization taken to the extreme.

3.1.3.1. Algorithmic management and its challenges

Algorithmic management corresponds, as previously seen, to a situation where an individual’s work is assigned to him/her by an algorithm, and not by a human being (a manager). In extreme cases, the algorithm is also used to assess the quality of the work. This leads to a total disappearance of the role of the manager as we know it. Chapter 1 already gave examples of situations of management by algorithms (Uber drivers, Deliveroo couriers, Turkers, Google raters). Box 3.4 lists some of the questions raised by this new type of management.

The notion of algorithmic management is beginning to emerge in the academic literature, which makes it possible to identify some of the (non-exhaustive) questions and theses it generates. The first question concerns organizational effectiveness (Schildt 2017). While some see algorithmic management as a source of productivity gains, based on a more efficient allocation of resources and a better sharing of tasks, others point to the loss of creative space that is valuable for productivity. The second question concerns the notions of control and autonomy (Wood et al. 2018).
There is indeed an uncertainty on the subject: does being managed by an algorithm give more room for maneuver and autonomy than being managed by a human being, or is it the other way around? The Uber driver can choose their own working hours, which gives them a great deal of autonomy, but
once they have started working, they are almost forced to comply with the machine’s instructions. Finally, the third question concerns the perceptions and experiences of workers confronted with algorithmic management (Lee et al. 2015; Lee 2018). Are the decisions made by a machine perceived as fair? Transparent? Motivating?

Box 3.4. The notion of algorithmic management and the questions it raises
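The core of algorithmic management, the allocation of work without a human dispatcher, can be illustrated by a toy example. The rule shown (assign each request to the nearest available driver) is a deliberate simplification; real platforms also weigh traffic, ratings, pricing and other signals, and their actual algorithms are not public. Names and coordinates are invented.

```python
# Toy illustration of algorithmic task allocation: each incoming ride
# request is assigned to the closest available driver, with no human
# manager involved in the decision.
import math

def nearest_driver(request_xy, drivers):
    """Return the id of the closest available driver (Euclidean distance)."""
    return min(drivers, key=lambda d: math.dist(request_xy, drivers[d]))

drivers = {"d1": (0.0, 0.0), "d2": (3.0, 4.0), "d3": (1.0, 1.0)}
request = (2.5, 3.0)

chosen = nearest_driver(request, drivers)
print(chosen)  # → d2
```

Minimal as it is, the example shows why the chapter speaks of a disappearance of the managerial role: the assignment decision is fully determined by data and a rule, leaving the worker to comply with or refuse the machine’s instruction.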

3.1.3.2. Extreme rationalization?

Algorithmic management raises a large number of questions. Beyond these questions, it appears to be an extreme form of rationalization. Indeed, it has several characteristics of rationalization: efficiency, cost and staff reduction, and standardization.

The search for efficiency is seen in particular in the concern to reduce the time spent on each task and the break time between tasks. Thus, the Uber algorithm, which indicates to the driver the route to follow, aims to reduce the duration of the journey, for example by integrating information related to traffic and possible roadworks, and by proposing instantaneous detours. Similarly, when the driver has finished a journey, they are immediately offered another one where possible, close to the end point of the last journey. This reduces journey time without customers, which can represent a form of break time.

The objective of cost reduction is visible in several features of these platforms. First, most workers provide their own work tools (car, bicycle, computer, Internet connection), which reduces equipment costs accordingly. Second, the almost total disappearance of the management line reduces staff costs. Finally, most of these platforms play, or have played for several years, on the ambiguities of national laws, ensuring that workers are not technically employed by them, which reduces employer contributions and offers more flexibility in labor management.

Finally, the use of highly prescriptive algorithms tends to standardize work. Uber drivers follow the indicated itinerary and therefore all work more or less in the same way. However, as seen in Box 3.4, workers can regain some room for maneuver and autonomy in areas other than the pure performance of the task, such as the choice of their working hours or their work tool
(interior design of the car for Uber drivers, choice of personal computer for Turkers or Google raters, for example).

During the 20th Century, management and the HR function were thus able to use quantification as a rationalization tool. The links between managerial rationalization and quantification may have evolved and been reformulated, but the three examples given clearly highlight their existence. Embodying a form of rational-legal authority characteristic of bureaucracy, quantification then appeared as part of the very definition of NPM. Finally, it is inseparable from algorithmic management, which is essentially based on quantification tools. In these three examples, quantification is used as a rationalization tool, aiming, for example, at efficiency and cost reduction, until it reaches a form of paroxysm in algorithmic management.

However, this encourages me to revisit the notion of rationalization and to reconsider Berry’s (1983) distinction between universal rationality and local rationalities. Universal rationality can be defined as the rationality of the organization, transcending individual points of view and dissensions between departments, for example, while local rationalities reflect these dissensions. The examples of the Shanghai Ranking (Box 3.1) and European social benchmarking (Box 3.3) clearly illustrate the difficulty of reconciling these two levels (universal rationality corresponding in both cases to the international level, local rationalities referring to national levels or individual research institutions). The standardization provided by quantification, which seeks to reconcile the different levels, often amounts to making one level prevail over the others.
Algorithmic management provides the example of a situation where universal rationality (the efficiency objectives of the company, Uber for example) totally prevails over local rationalities, by erasing, among other things, the possibilities of contestation and of forming collectives at the local level, due to the virtual disappearance of interpersonal and collective labor relations. This crushing of local rationalities then operates through a form of “datacracy” (Bruno 2013), i.e. a situation where power is held by those who possess the data, or is even delegated to the algorithms that process these data (Cardon 2018). Beyond the notion of rationalization, this first section also shows the extent to which the use of quantification embodies a form of technical expertise that may have contributed to the professionalization, or at least to the professional identity, of the HR function (Dubar 1998). Indeed, professional identities are often understood in terms of power relations and the perception of the
position of each actor within the organization (Sainsaulieu 2014), and it seems, in the examples given, that quantification mobilized in the service of managerial rationalization can constitute a power tool at the service of management and the HR function.

3.2. Distrust of data collection and processing

While management and the HR function can appropriate quantification as a tool for rationalization, employees and their representatives do not necessarily share the same concerns. This is partly due to their very different roles in the quantification infrastructure: employees are providers of their data, while management and the HR function are rather its users. However, providing data requires a certain amount of trust in the company and in the way it may process them. Today, a form of mistrust seems to be fueled by two fears on the part of employees and their representatives: a fear linked to the aims pursued by the company, and a fear linked to the idea that figures can be “made to say anything”.

3.2.1. Providing data, not such a harmless approach for employees

While many social networks and digital services are based on the principle of providing data in exchange for a free service, which routinizes the provision of data, companies may encounter difficulties in this area. Employees may be reluctant to provide their personal data, even data with a professional dimension, to their company. This is due, among other things, to the lack of visibility regarding the objectives pursued by the company and the potential gains for employees.

3.2.1.1. One observation: employees hesitate to provide their data to the company

The current deluge of digital data, which is widely documented (Beer 2019), is due in part to the emergence of a new economic model. This model consists of providing a digital service (messaging, access to a social network, access to an application) in exchange for data.
In other words, the user does not pay for the service in hard cash, but with his or her data. In turn, the company providing the service is paid by advertisers, who see this data deluge as an opportunity to offer targeted ads, perceived as more effective
than non-targeted ads. Most Google services, from Gmail to Google Maps, work on this model, as do most social networks, from Facebook to LinkedIn. This means that individuals are used to providing their data in exchange for a service. Moreover, some services are only of interest if the user provides their data, and their interest for the user increases with the amount of data provided (Beuscart et al. 2009). Thus, a user who does not share any content on Facebook or Twitter probably obtains less benefit from the social network than a particularly active user, and a user registered on LinkedIn who has not uploaded information on their professional background or skills loses one of the main interests of the network – that of being identified by recruiters. In addition, several of these networks have implemented gamification strategies to encourage users to complete their profiles as fully as possible. On LinkedIn, this takes the form of points and profile levels based on the data completed. A study conducted on the Flickr photo and video sharing site illustrates this trend (Box 3.5).

Flickr is a website for sharing photo and video content. It allows users – amateur or professional photographers, but also finders of content on the Internet – to share their creations or findings. Flickr offers two types of accounts: a free account, with advertising and storage space limited to 1,000 files, and a paid account, without advertising and with storage space of up to 1 TB. Flickr has developed several strategies to encourage data publication. In particular, the site suggests that users make their photos “public by default”. In addition, it incorporates a series of interactive features: the ability to comment on photos, to add value to them by marking them as “favorites”, to create contact groups among users, etc.
But interactions between users are often reciprocal: posting a comment under another member’s photo increases the probability of receiving a comment on one’s own photos in return, for example. As a result, network members who want their photos to gain visibility have a strong interest in posting many comments and marking a large number of photos as “favorites”. All of this encourages Flickr users to continuously produce or provide new data to the platform.

Box 3.5. Sharing photos on Flickr (source: Beuscart et al. 2009)
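The profile-completeness mechanic mentioned above (points and levels that rise as more fields are filled in) can be sketched as follows. The field names, thresholds and level labels are invented for illustration; they are not LinkedIn’s actual scheme.

```python
# Hedged sketch of a gamified profile-completeness score: the more
# fields a user fills in, the higher the score and the level, which
# nudges users toward providing more data.
PROFILE_FIELDS = ["photo", "headline", "experience", "education",
                  "skills", "summary"]
LEVELS = [(90, "All-Star"), (60, "Advanced"), (30, "Intermediate"),
          (0, "Beginner")]  # hypothetical thresholds, highest first

def completeness(profile: dict) -> int:
    """Percentage of profile fields that are filled in."""
    filled = sum(1 for f in PROFILE_FIELDS if profile.get(f))
    return round(100 * filled / len(PROFILE_FIELDS))

def level(profile: dict) -> str:
    """Map the completeness score to the first matching level label."""
    score = completeness(profile)
    return next(name for threshold, name in LEVELS if score >= threshold)

p = {"photo": True, "headline": "HR analyst", "skills": ["python"]}
print(completeness(p), level(p))  # → 50 Intermediate
```

The design point is the incentive structure: displaying the score and the next level makes the provision of data feel like progress rather than disclosure.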

All these incentives to produce and provide data contribute to a very large increase in the volume of existing data, structured or unstructured, which gives extremely variable information about the shared content itself, such as
photo tags, but also about individuals, such as purchases on Amazon (Box 3.6).

The data provided and produced on the Internet represent a constantly growing volume. For example, in 2018, every minute, at the international level:
– 97,000 hours of video are watched on Netflix;
– 120 new members register on LinkedIn;
– 1,100 packages are shipped by Amazon;
– 79,000 posts are published on Tumblr;
– 475,000 tweets are published on Twitter;
– 176,000 calls are made on Skype;
– 49,000 photos are posted on Instagram;
– 3,877,000 searches are performed on Google;
– etc.
This movement is not about to slow down: it is estimated that by 2020, 1.7 MB of data will be created every second for every human being.

Box 3.6. The data deluge in a few figures (source: Domo 2018¹)

Individuals are therefore used to providing their data, and few are concerned about the sometimes weakly protective conditions of use of the platforms that store them. Recent data use scandals (e.g. the Facebook–Cambridge Analytica data leak) have nevertheless led to a greater, albeit still fledgling, awareness. Yet companies seem to have real difficulty getting their employees to provide their personal data voluntarily. This is an important issue: few companies have data on the individual skills of their employees. More precisely, most large companies have a competency dictionary that associates jobs with skills, and know which job each employee occupies, which can give an idea of each employee’s skills.

1. https://www.domo.com/learn/data-never-sleeps-6 (accessed October 2019).


But this idea remains rather theoretical and is subject to several factors of imprecision. An employee may have many more skills than his or her job requires, particularly because of training or professional experience; conversely, an employee, particularly a beginner in a job, may not yet have all the skills defined for it. Few companies are thus able to know the individual skills of their employees at any given time. Yet these data can be crucial for certain HR processes, such as job and skills planning, or for establishing a training plan. Such data do exist, albeit in self-reported form, on networks such as LinkedIn, but companies have difficulty obtaining this type of self-reporting from their employees (Box 3.7).

A large multinational corporation in the digital sector is trying to develop a kind of “internal LinkedIn”. More precisely, the aim is to provide a social network where everyone can describe their professional background, declare their skills and apply for internal job offers. A debate quickly emerged on the link between this project and the existing internal social network. A first solution proposed was to merge the two, by enriching the existing network with the functionalities mentioned (application for internal job offers in particular, since the network already offers the possibility of declaring one’s skills). To legitimize this solution, the existing network’s team had a strong interest in demonstrating that employees would actually use the network for these purposes. However, in 2015, only 10% of employees had reported skills on the network, and the vast majority had reported only one or two. The network team is therefore exploring several avenues to collect skills data from employees. A first approach is based on a partnership with LinkedIn, allowing the retrieval of the skills of employees registered both on the internal social network and on LinkedIn (with their agreement).
However, this avenue has been criticized for the dependence on LinkedIn it creates. A second approach is based on incentives to declare one’s skills (e.g. by suggesting basic skills, particularly linguistic skills, to all employees, or by organizing internal communication campaigns on the subject). A third approach is to retrieve data on individual competencies from other internal data sources, and a study has been conducted to identify these sources. For example, the individual interview tool gives the employee the opportunity to declare skills, and allows their manager to validate this declaration or not. However, again, less than one-third of employees have completed this part of the form, and the vast majority have not updated it for several years. The study also shows that there is a low rate of overlap between the different sources (employees who have reported skills on the social network are generally not the same as those who
have reported them in the individual interview), and that it is risky to group the different sources together, as they correspond to very different reporting and collection methods. The study concludes that the company does not have a truly relevant source of data on individual skills. The most accepted solution seems to be to set up communication campaigns encouraging employees to declare their skills on the network. However, the company quickly realized that these campaigns were not enough: still few employees declare their skills. In addition, internal debates have emerged about the status and quality of these self-reported data, and about the need for managerial or peer validation.

Box 3.7. The difficulties for a company in getting employees to declare their skills on the internal social network (source: study by the author)
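The overlap analysis mentioned in the box can be sketched with simple set operations. Employee ids and figures are invented; the point is only the kind of computation such a study runs when comparing who declared skills in each internal source.

```python
# Sketch of a source-overlap analysis: compare which employees declared
# skills in each internal source and measure how little the sources
# overlap (one reason merging them is risky).
network_declarers = {"e01", "e02", "e03", "e10"}    # internal social network
interview_declarers = {"e03", "e07", "e08"}         # individual interview tool

overlap = network_declarers & interview_declarers   # declared in both
union = network_declarers | interview_declarers     # declared anywhere
jaccard = len(overlap) / len(union)                 # overlap rate (0 to 1)

print(sorted(overlap), round(jaccard, 2))  # → ['e03'] 0.17
```

A low Jaccard index, as here, quantifies the study’s finding: the populations declaring skills in the two sources are largely disjoint, so neither source alone, nor a naive merge, gives a reliable picture of individual skills.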

3.2.1.2. Suggestions for an explanation

Several explanations can help to understand this contrast. The first is that the population present on social networks is not necessarily representative of the population of employees in companies. The second is that employees do not clearly identify the services that the company will be able to provide on the basis of these data. The third is the fear of how the data will be used.

First of all, the considerable mass of users on social networks, and of content exchanged on them, should not make us forget that many individuals remain either absent from these networks or inactive on them. For example, the percentage of Europeans using Facebook, the most popular social network, was 41.7% in 2017², which means that more than half of Europeans do not use it. In addition, some Facebook members use it as a monitoring tool, i.e. to look at other members’ activity, without having any activity themselves. Besides, the population of social network members is not necessarily representative of the overall population of a country: on Facebook, 18–34 year olds are significantly overrepresented, unlike those over 55³. Finally, the population active on social networks, i.e. those who are willing to share and provide their data, is not necessarily representative of the population of a country, let alone of a given company. It is this lack of representativeness of the populations active on social networks that makes social science research based solely on these data risky (Ollion 2015). This may therefore explain the apparent gap between the sharing and data provision behaviors of individuals on social networks and the difficulty for companies in obtaining the same behaviors internally.

Second, the economic model of the social networks and digital services described above implies an exchange between, on the one hand, a service provided to the user without monetary compensation and, on the other hand, user data through which the company can be remunerated by advertisers. This model is thus based primarily on providing a service to users, and if possible a service that they can hardly do without: email on Gmail, or routes on Google Maps, for example. Moreover, some companies in this sector have long been, or are still, unprofitable, due to the time lag between the free provision of the service to users and the collection and stabilization of advertisers’ payments. Companies, however, still communicate relatively little about the services they will be able to provide for employees with their data. This lack of communication is due to two factors. First, some of the uses that can be made of employees’ individual data benefit the company more than the employees: for example, identifying individual skills for workforce planning purposes is more a business objective than an employee need. Second, it may be difficult to identify upstream, i.e. even before the data are available, what services can be provided from them. In other words, companies often adopt an approach that is the opposite of that used by digital players.

2. https://www.statista.com/statistics/241552/share-of-global-population-using-facebook-by-region/ (accessed October 2019).
3. Demographic distribution of Facebook members in the United States: https://www.statista.com/statistics/187041/us-user-age-distribution-on-facebook/ (accessed October 2019).
Instead of offering a service that becomes so essential that individuals provide their data almost without asking themselves questions (smartphone, messaging, etc.), they expect employees to provide their data in advance, without any visibility on what it will bring them. This reversal of logic probably explains a large part of employees’ reluctance to provide their data.

Employees and their representatives may also show a certain distrust of the company and of the way in which it may use (or even misuse) their personal data. Indeed, examples illustrate the possibility of using data from social networks for disciplinary purposes (Box 3.8).

Several cases have shown that an employer may in some cases use information published on social networks such as Facebook to justify a dismissal.

How are Quantified HR Management Tools Appropriated?


For example, statements damaging the employer’s reputation, defamatory or insulting statements about other employees of the company, or even statements demonstrating an employee’s dishonesty (in France, an employee was dismissed for posting holiday photos on Facebook while he was supposed to be on sick leave) may justify dismissal in different countries. Several parameters are taken into account, in particular the private or public nature of the comments, their seriousness and their degree of harm to the company’s reputation.

Box 3.8. Using data from social networks for disciplinary purposes4 (sources: press articles, blog articles)

Beyond this disciplinary dimension, the very fact of having the data is a form of controlling employees. Thus, at the time of recruitment, retrieving information on candidates from their profile on social networks makes it possible to check their compliance with the company’s values and expectations. Foucault (1975) has clearly shown the link between transparency, knowledge and the possibilities of control. Thus, the panoptic form of surveillance (where one person can look at and monitor all the others) described by Foucault corresponds well to a situation where individuals are led to provide a maximum of data on platforms, whether external platforms or a platform belonging to their own company. In this type of situation, power is very discreet, but control is in fact widespread. This may explain the reluctance of employees to provide their data to their company. This reluctance is often supported by employee representatives (Box 3.9).

In various countries, trade unions are very cautious, even suspicious, about issues of data collection by the employer. In France, the CFE-CGC has joined forces with an association to propose an “Ethics & Digital HR” charter. This charter aims in particular to regulate the practices of collecting data on employees and candidates in the context of recruitment: it recommends, for example, that data on candidates should not be collected without their consent in the context of recruitment, or that practices of collecting employee consent through the acceptance of voluminous general conditions of use that are difficult to read and understand should be prohibited.

Box 3.9. Unions distrustful of employer data collection

4 See especially https://www.employmentbuddy.com/HR-Blogs/Details/Fair-dismissal-following-historic-derogatory-comments-on-Facebook (accessed October 2019).


Quantifying Human Resources

Finally, the appropriation by employees and their representatives of HR quantification tools is limited by a form of passive resistance: employees do not actively contribute to the production of data sets on themselves, which would allow the development of new services or new uses of quantification. Despite its apparent discrepancy with individual behavior on the Internet, which most often consists of providing large amounts of data without worrying too much about it, this passivity can easily be explained, among other things, by the lack of visibility of the services to be expected and by a form of mistrust linked to possible uses of data for control and discipline purposes.

3.2.2. Can numbers be made to reflect whatever we like?

The appropriation by employees and their representatives of HR quantification is also confronted with a relatively common discourse, based on the idea that figures can be “made to say anything and everything”. This discourse, which refers to a form of instrumentalization of quantification, is therefore the opposite of the discourse on its objectivity and rigor, but it is also part of the “fantasy of quantification”. It then encourages the consideration of ways to ensure a form of rigor in interpreting the figures.

3.2.2.1. The other side of the myth of objective quantification: the myth of instrumentalized quantification

While the data are seen as objective and neutral in reflecting reality, their interpretation is sometimes subject to virulent criticism, from both specialists and neophytes. Thus, many experts denounce common misconceptions such as the confusion between correlation and causality, or the reductionism involved in trying to represent reality by means of a single variable (Gould 1997).
On the neophyte side, several attitudes can be identified, on a continuum ranging from a form of passive acceptance of interpretations, under the effect of a kind of amazement by the numbers (see Chapter 2, Box 2.7, for example), to a virulent criticism expressing the idea that the same number can be interpreted in several ways. This criticism is partly justified, particularly when interpretations are not formulated precisely enough (Box 3.10).


During a negotiation with the unions on professional equality, I was able to observe a debate on the interpretation of the same figure. This figure indicated that the feminization rate of beneficiaries of raises related to mobility or professional development was 37%. Management interpreted this figure to mean that women were not discriminated against when awarding these increases, since this 37% feminization rate was slightly higher than the company’s overall feminization rate (36%).

However, one union contested this interpretation on the basis of two observations. First, the company employs both civil servants and contract workers, but contract workers are overrepresented among the beneficiaries of these raises (a figure not provided by the company but obtained by the union in another negotiation). Second, the feminization rate of contract workers is over 37% (40%). The union deduced that women suffered from discrimination when it came to granting these raises. This example therefore illustrates the fact that the same figure can indeed give rise to two opposing interpretations, which is linked, inter alia, to the fact that the wording of the interpretation (women suffering/not suffering from discrimination in the attribution of these raises) remains relatively vague, and above all constitutes an important leap compared to what the figure alone indicates (a simple feminization rate).

Box 3.10. Two opposing interpretations of the same number (source: Coron 2018a)
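The union’s reasoning in Box 3.10 amounts to a composition check: compare the observed rate not with the company-wide rate, but with the rate one would expect if raises were gender-neutral within each status group. A minimal sketch, using hypothetical headcounts chosen only to match the percentages reported in the box (the real figures are not given in the source):

```python
# Hypothetical headcounts matching the rates in Box 3.10: 36% women
# company-wide, 40% among contract workers, 37% among beneficiaries.
company = {                      # group: (headcount, share of women)
    "civil_servants": (800, 0.35),
    "contract":       (200, 0.40),
}
beneficiaries = {                # contract workers overrepresented
    "civil_servants": 40,
    "contract":       60,
}

total = sum(n for n, _ in company.values())
company_rate = sum(n * w for n, w in company.values()) / total

n_benef = sum(beneficiaries.values())
observed_rate = 0.37             # reported feminization of beneficiaries

# Management's comparison: beneficiaries vs. the company as a whole.
print(f"company-wide: {company_rate:.0%}, beneficiaries: {observed_rate:.0%}")

# The union's comparison: the rate to expect if raises were
# gender-neutral *within* each status group.
expected_rate = sum(
    beneficiaries[g] * company[g][1] for g in company) / n_benef
print(f"expected if neutral within groups: {expected_rate:.0%}")
# 38% expected vs. 37% observed: the aggregate comparison (37% > 36%)
# hides a slight under-representation once composition is controlled for.
```

With these illustrative numbers, both readings of the same 37% are reproduced: above the company rate, yet below the rate implied by the beneficiaries’ status mix.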

It is therefore relatively tempting to contrast the stage of setting the world into data, seen as objective and a guarantee of rigor, with the stage of interpreting quantified results, seen as potentially biased and prone to being instrumentalized toward certain ends. For example, several studies have examined the rhetorical instrumentalization of numbers and statistics in political or public discourse (Gould 1997; Espeland and Stevens 2008; Obradovic and Beck 2012). This instrumentalization may involve selecting the numbers, methods and results on which the interpretation is based, reducing the interpretation and purpose to a single key figure, formulating deliberately vague interpretations, or introducing a significant leap between the mathematical meaning of the figure and what is said about it (e.g. from correlation to causality). However, just as Chapter 2 attempted to deconstruct the myth of objective data setting, an attempt can be made here to deconstruct the myth of instrumentalized quantification, by highlighting certain conditions aimed at limiting this instrumentalization.


3.2.2.2. How do we limit the instrumentalization of quantification?

The first condition, set out by Salais (2016), is to not forget the constructed nature of quantification. This includes recognizing the role of prejudice, social conventions and human bias in the statisticalization of the world and the interpretation of numbers. This is intended to limit the effect of amazement linked in part to the myth of quantification.

The second condition refers to a number of statistical and scientific rules. Thus, it is important to avoid measuring percentages on very small samples, on which a percentage can vary considerably: for example, going from three to four women in a management committee of 10 people corresponds to an increase of 10 points if one reasons in percentages. Similarly, it is preferable to limit the interpretative leap between what exactly the number measures and the interpretation made of it (see Box 3.10). Thus, it is tempting to interpret correlations between employee engagement and company performance as causalities, or changes in the percentage of engaged employees as direct effects of policies put in place, but few methods really make it possible to confirm this type of causality – and certainly not a simple correlation calculation. Finally, communicating about the whole process, such as the choice of indicators and methodology, seems necessary to make it possible to discuss these choices.

The creation of discussion forums on numerical interpretations is a third condition that seems essential in order to limit the possibilities of instrumentalizing quantification instruments. Indeed, quantification is no more open to instrumentalization than other types of evidence gathering or other scientific approaches. However, it may sometimes leave less room for criticism because of the impression that a greater technical and scientific background is needed to understand it and therefore possibly challenge it.
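The small-sample rule above is easy to demonstrate: one additional woman changes a group’s feminization rate by 1/n, so the same one-person change that is a ten-point swing on a committee is invisible at company scale.

```python
# One additional person changes a group's rate by 1/n:
# 10 points on a 10-person committee, 0.1 point on 1,000 employees.
for group, size in [("management committee", 10), ("whole company", 1000)]:
    shift = 1 / size
    print(f"{group} (n={size}): one person shifts the rate by {shift:.1%}")
```

This is why percentage trends computed on very small groups (a single management committee, a small team) are mostly noise, while the same indicator is meaningful on large populations.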
However, in companies, it seems crucial that there are forums for discussion around HR quantification, made up of trained individuals and professionals in this subject. The role of employee representatives, for example, seems to be very important on this subject. It seems possible for companies to organize or finance training on data analysis or on mastering quantitative methods, in order to spread the possibility of a balanced and informed debate on figures and their interpretation. This point will be returned to in Chapter 5.


Finally, the employees’ relationship to HR quantification processes is first structured by a form of distrust of the collection and processing of data and of the uses and interpretations of figures that the company may make. This mistrust is reflected in particular in the refusal to engage in the voluntary provision of personal data to the company, a passive resistance that contrasts with the lightness with which individuals entrust their data to external digital actors (Google, Facebook, LinkedIn, etc.), and which limits the company’s possibilities for data collection and therefore, ultimately, quantification.

3.3. Distrust of a disembodied decision

This relationship is also structured by a distrust of decision-making based essentially on figures, which becomes somewhat disembodied. In Chapter 2, the links between quantification and decision-making were explored (objectivity, personalization, prediction). In this section, the focus is on how employees and their representatives perceive decisions based on quantification. By using numbers to make decisions, a human being evades a form of responsibility and does not really make the decision themselves. In addition, cases where decisions are made without human mediation are becoming more and more common. The notions of responsibility for decision-making and employee empowerment seem crucial to understanding this distrust.

3.3.1. Decisions made solely on the basis of figures

Situations where a human being makes a decision based almost exclusively on figures have become relatively common, particularly with regard to remuneration or promotion decisions in companies. However, these situations have two characteristics that can be criticized by employees and their representatives. First of all, the employee’s voice becomes inaudible and is often not taken into account. Second, the use of an often standardized quantification makes it more difficult to take particular and individual circumstances into account.

3.3.1.1. Has the employee been silenced?

Many examples can be given of decisions taken on the basis of figures in organizations, from collective decisions (such as the number of redundancies


under a restructuring plan) to individual decisions (such as individual raises or promotions in companies that have highly standardized their processes). The measures used for these decisions may also vary, from economic data (as part of restructuring plans) to aptitude tests or individual activity and performance indicators (e.g. figures on the sales made by a salesperson). The reader may refer back to Chapter 1 (section 1.1) for more specific examples. Here, the focus is on the consequences for the employee of this form of decision-making.

The first consequence is the limited consideration given to the employee’s voice. When the decision is based on figures, the employee has few means of contesting it, and individual claims cannot be taken into account. Indeed, if the process is designed so that decisions are based essentially on figures, in order to emphasize their objectivity and justice, as seen in Chapter 2, leaving open the possibility of individual adjustments undermines this image of objectivity and justice (Box 3.11). Pichault and Nizet (2000) point out that quantified standards aim, among other things, to limit managerial and interpersonal arbitrariness. However, several studies have shown the importance of giving individuals the opportunity to express their opinions on decisions that concern them. Thus, Marchal (2015) opposes, in the context of recruitment, selection “at a distance”, which does not allow candidates to express themselves during an exchange with the recruiter; such selection is based more on recorded individual characteristics than on interaction.

Some of the normative literature on evaluation recommends that only standardized and quantified indicators should be used. Thus, scientific management recommends the use of standard and explicit rules, which are supposed to leave less room for the evaluator’s subjectivity, as seen previously.
The competitive recruitment model in the French civil service reflects this positioning. Bourguignon and Chiapello (2005) also give the example of a company which, faced with criticism linked to the subjectivity of qualitative evaluation objectives and indicators, decides to evaluate individuals only on quantified objectives. However, introducing any consideration of individuals’ (e.g. workers’) own expression into this type of assessment may give the impression of introducing arbitrariness and subjectivity. The HRM models identified by Pichault and Nizet (2000) provide a good account of this phenomenon. Indeed, the


objective HRM model is essentially based on standard rules, identical for all, and quantified indicators. Conversely, taking employees’ opinions into account would be closer to the individualizing or conventionalist HRM models, which are the opposite of the objective HRM model, since they involve taking the expression of employees into account.

Box 3.11. The impossibility of making individual adjustments when the decision is based on figures in order for it to be more objective (sources: Bourguignon and Chiapello 2005; Pichault and Nizet 2000)

In the context of evaluation, the School of Human Relations highlighted the importance of organizing an interpersonal exchange between the person being evaluated (the employee, for example) and the person assessing (generally their manager), enabling the employee to shed light on, or even discuss, the decisions taken, while also providing a privileged opportunity for listening and communication. Indeed, the supporters of this school insist on the importance of the human factor and interpersonal relationships in the productivity and commitment of individuals. Therefore, rather than pursuing an unattainable ideal of objective and fair evaluation, they recommend using evaluation as a means of creating a space for exchange and discussion.

For its part, the literature on organizational justice also strongly values hearing the employee’s voice. For example, a review of the literature on evaluation processes shows that employees are more satisfied, consider the process fairer and are more motivated when they feel they can express themselves (Cawley et al. 1998; Cropanzano et al. 2007). Cawley et al. (1998) identified five ways to encourage employee expression in relation to evaluation: the opportunity to express an opinion on the evaluation process and outcome, the opportunity to influence the outcome through this expression, the opportunity to self-assess, the opportunity to participate in the development of the evaluation system, and the opportunity to participate in setting objectives.

Finally, the critical currents on evaluation also highlight the fact that it can be a vector of employee domination (Gilbert and Yalenios 2017). In fact, evaluation can be defined as a constraint imposed on employees. However, this constraint can be perceived as more alienating and enslaving when employees do not have the opportunity to express themselves.
As a result, employees and their representatives may show a certain distrust of systems that leave no room for the employee’s voice.


Moreover, silencing employees prevents the particular and individual circumstances in which work is always performed from being taken into account.

3.3.1.2. The difficulty in taking into account particular circumstances

Indeed, the second consequence refers to the poor consideration of particular or individual circumstances. Adjusting a decision based on figures, with its ideal of objectivity and justice, to individual characteristics is indeed a threat to that ideal. However, individual performance is almost always influenced by the particular circumstances, personal or professional, in which the work is performed (Box 3.12).

Several types of special circumstances can influence performance. First, professional circumstances refer to characteristics of the professional environment: a lack of support from colleagues or the manager, complex or inappropriate procedures, or contradictory instructions can lead to a decrease in performance, if performance is understood as the achievement of results. Personal circumstances, by contrast, refer to situations related to the individual’s personal or family life: parenthood, state of health, etc. The issue is therefore whether to evaluate the final performance (an obligation of results, in a way) or the means implemented by the person (an obligation of means). While the obligation of results does not take particular characteristics into account, the obligation of means may include them. Thus, a decision taken solely on the basis of performance indicators (including figures) prevents individual adjustments to these circumstances. In some cases, decision-making actors may seek to define other standardized rules in order to take individual circumstances into account.
Thus, in some research disciplines, academic careers are essentially based on the criterion of the number of publications, which is supposed to be regular over the years; some commissions then institute a standard rule of “one year = one publication or one child” to take into account the effect of maternity on scientific productivity (Pigeyre and Sabatier 2011). However, this rule remains standardized and is equally powerless to take other special circumstances into account.

Box 3.12. Work performance and special circumstances

The impossibility of taking into account these particular circumstances, characteristic of the use of quantified tools (Pichault and Nizet 2000), may ultimately undermine the ideal of justice that is supposed to be guaranteed by the use of quantification. The Aristotelian distinction between equality


and equity allows us to understand this phenomenon. Indeed, Aristotle presents equity as the possibility of taking particular circumstances into account. Other authors have since made a more precise distinction between equity, equality and the consideration of needs. According to Cropanzano et al. (2007), equity refers to assessing and rewarding employees according to their respective contributions, equality refers to rewarding all employees in the same way (the principle of general raises without taking individual performance into account, for example) and consideration of needs refers to assessment and reward according to individual needs (such as proposals for adapted development plans). Decision-making based essentially on quantification corresponds to a combination of equality and equity (if the respective contributions of individuals are taken into account), but does not allow individual needs to be taken into account. If the ideal of justice is closer to the consideration of needs than to equality or equity, then this type of decision-making may not achieve that ideal. Therefore, workers and their representatives may oppose recruitment, selection or evaluation systems based solely on quantified indicators.

3.3.1.3. Decision-making without accountability

Decision-making based essentially on quantified indicators tends to remove the responsibility of the person who is supposed to embody the decision (Marchal and Bureau 2009). Thus, the multiplication of quantified tools allows evaluators to relieve themselves of the burden of judgment, and thus to disengage from it. For example, a manager may communicate a decision on an individual raise or promotion to an employee in their team while attributing the decision to the figures. This disengagement therefore corresponds to a certain lack of responsibility on the part of decision-makers, which ultimately amounts to a form of disembodied decision-making.
However, this phenomenon may be poorly perceived by employees, who may see it as a sign of disengagement on the part of their manager and who may also suffer from the impossibility of attributing the decision taken to a person who is responsible for it. This depersonalization, disembodiment or unaccountability of decision-making finds its extreme form in situations where decisions are made by algorithms alone, without the mediation of a human being to embody them and to explain them to employees.


3.3.2. Decisions made solely by algorithms

The notion of “algorithmic management” has already been mentioned. It refers to situations where the role of the manager, for example in the allocation or evaluation of work, is entirely assigned to an algorithm. These situations raise two major questions. First, that of responsibility: who is responsible for the decision made by the algorithm? Second, these situations question the possibility for employees to maintain room for maneuver and autonomy with regard to the algorithm, which refers to the notion of empowerment.

3.3.2.1. The question of liability

The many current debates on autonomous cars and other robots that must make decisions in place of human beings underline the importance of the notion of responsibility. For example, research has shown that, even if autonomous cars result in fewer accidents than humans, these accidents would be less “accepted” by individuals, as liability could not be clearly defined (Hevelke and Nida-Rümelin 2015; CNIL 2017). Indeed, the machine itself probably cannot be held responsible for the decision. However, the human beings who participated in the decision production chain are extremely numerous. They include, among others:

– the management of the company that decided to produce and then market the machine;
– the data experts (Beer 2019) who developed the computer code necessary for the proper functioning of the machine;
– the testers (internal and external to the company) who decided, following the tests carried out, that the machine could be marketed safely;
– the experts mandated by commissions, who authorized its placing on the market;
– the users and owners of the machine.

The length of this chain of responsibilities clearly shows the impossibility of attributing a decision taken by a machine to a particular human being, or even to a group of human beings. Moreover, algorithms now learn from data sets, which raises the question of liability in a different way.
Thus, the conversational robot Tay, put online in 2016 by Microsoft, had to be


suspended after only 24 hours when, confronted with data from users of social networks, it began to make racist and sexist comments (CNIL 2017). Moreover, how can the quality of a decision taken by a machine alone be measured? Should this quality be measured against the human decision, considering that the human decision is the “right” one and that the machine must adjust to it? But in that case, how can the variability of human decision-making be taken into account (Box 3.13)? Or should we consider that the interest of the machine is precisely to make better decisions? But then how can we know whether the machine has made a better decision than a human being?

In the context of the development of the autonomous car, scientists have sought to measure the decision-making criteria of human beings in order to evaluate the actions to be taken in the event of an accident. The “Moral Machine” study consists of an online questionnaire that puts human beings in situations of inevitable accidents. For each question, the respondent has a choice between two decisions, both of which have deadly consequences for human lives. In some cases, the driver has the choice between sacrificing their own life or that of a pedestrian; in other cases, they must choose between two groups of pedestrians to sacrifice. The lessons of this study are extremely rich and make it possible to define global rules for programming decisions. Thus, human beings generally favor actions that save the largest number of lives, or those that favor human life over animal life. However, other choices are more disturbing, such as favoring the lives of children over those of older people, favoring pedestrians who respect the rules (crossing at the pedestrian crossing) over those who do not, or more readily sacrificing the lives of overweight people, homeless people or offenders. In addition, this study reveals wide variations between countries.
For example, in Eastern countries, the protection of children at the expense of the elderly is much less systematic. In some countries, the difference between human and animal lives is less pronounced.

Box 3.13. The “Moral Machine” study and the great variability of human decisions5 (sources: official websites and press articles)

5 In particular: http://moralmachine.mit.edu/ and http://moralmachineresults.scalablecoop.org/ (accessed October 2019).


Cardon (2018) raises the question of the responsibility of algorithms from another angle, that of power. He insists on the personification of algorithms in current discourses, which attribute to them a form of responsibility and power in the organization of information and social life. In fact, this personification also finds its source in the growing autonomy of algorithms from their designers6. This autonomy also stems from the fact that, by construction, an algorithm must transform substantive rules into procedural rules. A human being or a company may want to develop an algorithm that will suggest the most appropriate content for each individual (a substantive rule). The algorithm, which has no symbolic understanding of this rule or of the data it manipulates, must transform this substantive rule into a procedural rule, i.e. into procedures for calculating and coding information that will best approximate people’s tastes. Collaborative filtering, which consists of approximating individuals’ tastes from the similarity of their histories with those of other individuals, illustrates this transition from substantive rule to procedural rule. It is also this transition that contributes to a form of autonomy of algorithms, in that they do not obey the principles of rationality and human modes of understanding.

According to Cardon, many voices are calling for a guarantee of “neutrality” on the part of algorithms. However, an algorithm cannot by definition be neutral, since its essential purpose is to select, order, sort, filter and classify information. Cardon proposes replacing the imperative of neutrality with an imperative of loyalty. In other words, platforms using algorithms must clearly explain what they do, how they are built, what criteria they use for rating or filtering, etc. In companies, this rule is just as important, and its application is demanded by employee representatives (e.g.
the CFE-CGC in France in the “Ethics & Digital HR” charter). They also highlight the fact that the more complex an algorithm is, the more difficult it will be to explain. There is therefore an argument in favor of mobilizing simpler, and therefore more explainable, algorithms. The question of responsibility for the decision taken by the algorithm remains unresolved, and several answers can be provided by experts, States and even the international community, from the responsibility of the companies that produce and use the algorithms to that of the individuals who own the machines based on them.

6 We will examine this in more detail in Chapter 5.
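The translation from substantive to procedural rule described above can be made concrete with collaborative filtering. The sketch below, with invented users, items and ratings, implements one simple procedural rule (score unseen items by the similarity-weighted ratings of other users) that stands in for the substantive rule “suggest what each person will like”:

```python
import math

# Toy ratings: user -> {item: rating}. All names and values are invented.
ratings = {
    "ana":  {"a": 5, "b": 4, "c": 1},
    "ben":  {"a": 4, "b": 5, "d": 2},
    "carl": {"c": 5, "d": 4, "e": 5},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if not common:
        return 0.0
    num = sum(ratings[u][i] * ratings[v][i] for i in common)
    den = math.sqrt(sum(ratings[u][i] ** 2 for i in common)) * \
          math.sqrt(sum(ratings[v][i] ** 2 for i in common))
    return num / den

def recommend(user):
    """Score items the user has not seen by other users' weighted ratings."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        s = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + s * r
    return max(scores, key=scores.get) if scores else None

print(recommend("ana"))  # prints: d
```

Nothing in the procedure “understands” taste: the substantive goal survives only as an arithmetic proxy (co-rating similarity), which is exactly the gap between the rule intended by the designer and the rule the algorithm actually executes.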


In companies, this issue is sensitive and important. Indeed, many HR decisions can have a significant impact on the professional and personal future of individuals. It therefore seems necessary to know who can be held responsible for these decisions, for example in the event of a dispute or litigation. For the time being, in the countries of the European Union, the General Data Protection Regulation 2016/679 provides that individuals have the right not to be subject to a decision based solely on automated processing that produces effects significantly affecting them. This currently limits the possibility of fully automating processes such as CV preselection or promotions. But this rule does not exist in other countries and, moreover, its application in the European Union may raise questions, insofar as a company can always add a human intermediary, which will give the impression that the rule is respected when it is not, if the intermediary simply follows the instructions of the algorithm blindly.

3.3.2.2. Algorithms perceived as black boxes: an impossible empowerment?

The notion of loyalty is based on the idea that it must be possible to explain exactly what algorithms do. In fact, explaining how algorithms work seems necessary to guarantee a form of employee empowerment. The notion of empowerment gives rise to very varied definitions, particularly when it comes to workers. However, this notion is based on the idea of giving power to employees, and thus contributing to a redistribution of power within the company (Greasley et al. 2005). The literature on the subject often focuses on employee empowerment vis-à-vis their manager, but the notion also seems applicable to the relationship between employees and algorithms. Thus, Christin (2017) refers to cases where workers manage to “play with the algorithm”7 because they understand and are able to master its operating rules.
For example, journalists whose performance is partly measured by the e-reputation of their articles use titles that are particularly attractive in terms of the number of clicks but do not necessarily reflect the content of the article, or ask that their article be positioned at the top of the page at times when there is more traffic on the Internet, which then increases their e-reputation as measured by the algorithm. This type of manipulation with

7 According to the definition of Espeland and Sauder (2007, p. 29): “We define “playing” as manipulating rules and numbers in ways that are unconnected to, or even undermine, the motivation behind them.”

118

Quantifying Human Resources

algorithms or quantification refers to a form of “reactivity” (Espeland and Sauder 2007) characteristic of a worker taking power over the algorithm. However, this is only possible to the extent that the worker understands how the algorithm works, what data and calculation rules it uses. Yet, algorithms sometimes remain “black boxes” (Christin 2017) whose mode of operation remains incomprehensible. This type of situation then seems to be the opposite of the idea that quantification provides a form of transparency (Espeland and Stevens 2008; Hansen and Flyverbom 2015). Once again, this underlines the changes brought about by the increasing mobilization of algorithms in the world of quantification. Finally, employees and their representatives may therefore show a certain level of mistrust of the company’s intentions when collecting and processing data, and of a form of disembodied decision, whose responsibility may be difficult to establish. This may limit their appropriation and acceptance of the quantification tools used in HR. This chapter focused on the appropriation of HR quantification tools by the company’s stakeholders. It very schematically proposed an analytical distinction between management and HR, on the one hand, and employees and their representatives, on the other hand. While management and the HR function may see quantification as a rationalization tool, which advocates its dissemination, employees and their representatives may be reluctant to do so, when providing their data to their company, for example. Indeed, they may see quantification as a threat to the quality of decisions taken and to their room for autonomy. The HR function is then encouraged to develop strategies to reduce this resistance. 
It can thus seek to highlight the contributions of quantification for individuals, building on the arguments outlined in the previous chapter: a guarantee of objectivity, the possibility of providing new personalized services to employees, the possibility of adopting a more proactive and less reactive approach, etc.

4 What are the Effects of Quantification on the Human Resources Function?

The increased use of quantification has consequences for the positioning of the HR function within the company. Indeed, quantification can be a tool for evaluating HR policies, allowing the HR function to monitor their implementation and their effects, and ultimately to define appropriate HR policies (section 4.1). Being able to evaluate HR policies is also a first step toward legitimizing the HR function within the company with regard to other functions, such as finance or executive management (section 4.2). This legitimization through quantification can involve measuring the performance of the HR function, and especially the link between the performance of the HR function and that of the organization. However, the more recent use of algorithms may also pose a threat to some parts of the HR function by making it possible to automate them (section 4.3). This raises the question of how to support the employees concerned.

It should be noted, however, that these various points are not specific to the HR function: most of the company’s support functions (marketing, information systems, etc.) seem to be concerned. The specificity of the HR function may lie in its greater distance from figures and in a greater difficulty in measuring its action.

4.1. Quantification for HR policy evaluation?

Public policy evaluation is an important part of economic, statistical and econometric research. This evaluation can indeed involve, among other

Quantifying Human Resources: Uses and Analyses, First Edition. Clotilde Coron. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

things, the use of sophisticated quantitative methods. Without necessarily going that far, HR policy evaluation can also mobilize quantified tools, particularly in two areas: measuring the implementation of policies and measuring their effects.

4.1.1. Measuring the implementation of HR policies

Measuring the implementation of HR policies generally involves a key step: defining monitoring indicators to ensure that the measures are applied. However, the vision provided by the indicators does not always coincide with the reality on the ground, as the figures only provide a distorted view of the appropriation of HR policies by the actors.

4.1.1.1. The definition of monitoring indicators

It has become common practice to include monitoring indicators in the definition of an HR policy, in order to measure its application. These monitoring indicators cover the main dimensions of the policy and feed into a report or dashboard dedicated to monitoring its implementation. For example, if a company has decided that employees with disabilities should benefit from workstation accommodation, a monitoring indicator may be the percentage of employees with disabilities who have benefited from workstation accommodation. Similarly, if a company has decided that employees should be able to benefit, on request, from an interview with the HR department about their career development, several monitoring indicators can be defined:

– number of employees who have had a career development interview with the HR department;

– number of employees who have requested an interview;

– ratio between the number of employees who received an interview and the number of employees who requested one.

The combination of these indicators makes it possible to verify that the company’s policy has been communicated to employees (and that they are therefore aware of this new right), that it meets a need, and that the right is granted as provided for in the policy.
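As a minimal sketch of how such indicators could be produced from raw records (the data and field names are invented for illustration, not a standard HRIS schema):

```python
# Hypothetical HR records: for each employee, whether a career interview
# was requested and whether one was received. All data are illustrative.
employees = [
    {"id": 1, "requested": True,  "received": True},
    {"id": 2, "requested": True,  "received": False},
    {"id": 3, "requested": False, "received": False},
    {"id": 4, "requested": True,  "received": True},
]

# Indicator 1: number of interviews requested
n_requested = sum(e["requested"] for e in employees)
# Indicator 2: number of interviews received
n_received = sum(e["received"] for e in employees)
# Indicator 3: ratio of interviews granted to interviews requested
grant_rate = n_received / n_requested if n_requested else None

print(n_requested, n_received, round(grant_rate, 2))  # → 3 2 0.67
```

The same pattern extends to any count-and-ratio monitoring indicator; the substantive choices are the population counted and the counting rule, which is precisely where the definitional questions discussed here arise.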
It should also be noted that monitoring indicators can be of several types: fully numerical (e.g. number of employees interviewed) or binary, coded in yes/no format (establishment of a committee, commission, etc.). In addition, they can concern the different actors involved in the policy: managers, HR, employees, etc. They can also vary widely, in the sense that the same action can give rise to a wide variety of monitoring indicators. What matters, rather, is being able to establish a form of concordance between the aims pursued by the policy and the monitoring indicators defined (Box 4.1).

In a large multinational company, the training plan is subject to an annual review. This assessment includes about 100 monitoring indicators, and its production requires two months of work for two full-time employees. There are several kinds of monitoring indicators. First of all, many indicators measure the number of hours or training actions, or the number of people trained. These indicators aim to measure the adequacy between the training actually carried out in year N and the provisional training plan proposed in year N-1. Other indicators focus on the distribution of training by professional field, or by type of training (certifying or not, for example). This type of indicator reports on the adequacy between the training provided and the company’s skills needs. Other indicators focus on the way training is provided (face-to-face, digital, mixed, etc.). These indicators are based on the company’s more general digitalization policy, which involves, among other things, the digitalization of tools and processes. Finally, other indicators focus on the quality of training, measured in particular by employee feedback. These indicators aim to monitor the implementation of a more global objective of improving training quality and employee satisfaction at the end of the training. In the end, all the indicators can be justified and are meaningful in terms of the aims of the training policy.
Even though the managerial literature regularly gives advice on appropriate indicators for a particular subject, and for training in particular, it is therefore quite possible that another company, not having exactly the same objectives, would not define the same indicators.

Box 4.1. Alignment between policy objectives and monitoring indicators (source: study by the author)


Measuring the implementation of HR policies can be an important issue, as there may be a significant gap between the definition of a policy and its implementation by stakeholders (Box 4.2).

Human resource management (HRM) research has long highlighted the possible gap between HR policy as defined by the company and as applied by stakeholders. For example, Khilji and Wang (2006) seek to measure the gap between “intended” policies and implemented practices. They identified a significant gap between the two in nine of the 12 organizations studied. This gap may result either from not implementing the planned tools because HR subjects are not considered a priority, or from implementing practices that are contrary to those planned (using co-optation and word-of-mouth recruitment, when the policy provides for the publication of advertisements and the use of standardized tests). The authors show that this gap is associated with lower employee satisfaction with their company’s HR function, and that this lower satisfaction is in turn associated with lower organizational performance.

For his part, Guest (2011) highlights the factors that can contribute to widening the gap between the HR policies defined by the company and their application. In particular, he emphasizes the role of line managers, who often play an important part in the implementation of policies, but who may also disagree with the practices defined, or prioritize other subjects (e.g. financial, commercial). In addition, the HR function generally has little power over these first-level managers, which suggests that the implementation of HR policies by managers requires their dissemination and endorsement by the company’s top management team.

Box 4.2. From the planned policy to the implemented policy, a gap that is sometimes significant (sources: Khilji and Wang 2006; Guest 2011)

The definition of monitoring indicators therefore points to the possible existence of a gap between the planned policy and the measures implemented. However, these indicators are not sufficient to fully reflect the reality on the ground.

4.1.1.2. Monitoring indicators versus appropriation by local actors

Indeed, the monitoring indicators themselves are tools that local actors can appropriate, and in part divert. Theories of the appropriation of management tools thus recommend distinguishing three dimensions of management tools (De Vaujany 2005, 2006):


– a so-called rational dimension, which corresponds to the purposes attributed to the tool by its designers;

– a so-called psycho-cognitive dimension, which focuses on the learning necessary for actors to appropriate a tool;

– a so-called sociopolitical dimension, which looks at the relationships between actors, how they are modified by the tool, and how they affect its appropriation.

If we consider monitoring indicators as management tools (Chiapello and Gilbert 2013), we can apply this framework to show why a set of monitoring indicators is limited when it comes to reflecting the reality on the ground. First of all, it must be stressed that the HR policy itself is a management mechanism or tool that can give rise to selective appropriation by local actors. Secondly, monitoring indicators are also management mechanisms or tools, which are likewise subject to selective appropriation (Figure 4.1). Finally, HR policies are translated into monitoring indicators based on the policy objectives; at the same time, they are translated into practices that reflect selective appropriation.

Figure 4.1. From selective policy appropriation to selective management tool appropriation

The diagram then highlights that monitoring indicators can be doubly ineffective in reporting on the reality of HR policy implementation. First, they provide a distorted view of the practices implemented; second, they can themselves be subject to selective appropriation (Box 4.3). In the end, monitoring indicators, however sophisticated they may be, have limitations that make them incomplete measures of HR policy implementation. The three dimensions highlighted by research on the appropriation of management tools reflect the different factors of incompleteness (Table 4.1).

As we have seen, the appropriation of HR policies can be selective. For example, an HR policy may require a significant learning effort, which local actors are not willing to provide (psycho-cognitive perspective). In addition, due to local interpersonal relationships, appropriation may be reduced (sociopolitical perspective). However, monitoring indicators may fail to capture the variety of practices that reflect selective policy implementation. For example, imagine the case of a company that has decided that decisions on individual salary increases must be explained to each employee in an interview with his or her manager. The corresponding monitoring indicator can be defined as the percentage of employees who have benefited from this interview, or the percentage of managers who comply with this obligation. However, these indicators do not reflect concrete practices. They do not capture the difference between a manager who actually sees each employee individually and takes the time to explain the decision, justify it and perhaps discuss it with the employee; a manager who simply hands all employees the forms formalizing individual increase decisions at a team meeting; and a manager who sees each employee very quickly, hands over the form, and does not seek to justify the decision or discuss it. In addition, the appropriation of the indicators themselves may be selective.
Thus, it is not uncommon for managers either not to complete the reporting tables, not to update them regularly enough, or to arrange to slightly embellish the indicators. In the case mentioned above, if it is the HR managers who have to report the defined indicator (e.g. the percentage of employees who have benefited from an interview explaining the decision), they will probably have to collect this information either from the managers themselves, who will have an interest in over-declaring the number of interviews conducted, or from the employees, who may not report the same information, but may use the indicator to question or, on the contrary, support their hierarchy.

Box 4.3. From selective policy appropriation to selective indicator appropriation

Table 4.1 contrasts, for each dimension, the appropriation of the policy itself and that of its monitoring indicators.

Rational dimension (designers):
– policy appropriation: the policy has predefined goals corresponding to the company’s strategy;
– indicator appropriation: the indicator aims to measure the implementation of the policy by stakeholders.
But the policy can give rise to selective appropriation, which the indicators may be unable to account for; and the indicator itself can give rise to selective appropriation.

Examples of factors of selective appropriation:

1) Psycho-cognitive dimension
– policy: complexity of the practices to be implemented; gap between managers’ perceptions of the importance of a subject and the company’s intentions;
– indicator: complexity of measuring and reporting the information; misunderstanding regarding the definition of the indicator.

2) Sociopolitical dimension
– policy: poor relations between two actors who must coordinate to implement the policy; opposition by some actors to a measure that they believe causes them to lose power in the organization;
– indicator: insufficient coordination between the actors who have to report information and those who have to construct the indicator; use of indicators to gain power.

Table 4.1. The appropriation of monitoring indicators (sources: De Vaujany 2005; Grimand 2012, 2016; Coron and Pigeyre 2019)

Thus, situations where local managers find that a measure is not appropriate, or where they have poor relations with other actors with whom they are supposed to coordinate to implement a measure (e.g. recruitment officers for recruitment-related measures), may result in minimalist and partial implementation of the measures (Coron and Pigeyre 2019). However, the quantified indicators related to the implementation of this measure may not differentiate between a minimalist and a more thorough implementation. In addition, these quantified indicators can involve complex reporting, either because of the complexity of constructing the indicator (e.g. an indicator composed of sub-indicators that are complex to measure, such as the pay gap, as we have seen), or because of the complexity of the reporting chain (in the case of indicators that require feedback from several actors, for example).

In short, quantification is regularly used to measure the implementation of HR policies. Measuring this implementation is indeed essential, given that there may be a significant gap between the definition and implementation of these policies. However, quantified indicators are often limited in their ability to reflect the reality of this implementation and may themselves give rise to partial appropriations.

4.1.2. Measuring the effects of HR policies

Measuring the effects of HR policies is the second aspect of HR policy evaluation. By “effects”, I mean direct effects, i.e. the achievement of policy objectives. For example, an equal pay policy aims to reduce the pay gap; a commitment policy aims to increase employee commitment. The potential indirect effects of HR policies are therefore not addressed in this section (e.g. reducing the pay gap can indirectly lead to greater female satisfaction with the company, or even greater employee retention); they will instead be addressed in the next section. Some HR policies come with quantified commitments, which make it relatively easy to assess the achievement of objectives. However, it is often difficult, if not impossible, to isolate the effects of HR policies, which often depend on structural or contextual effects.

4.1.2.1. The definition of quantified commitments

More and more, the HR policies defined by companies are accompanied by quantified commitments. For example, gender equality policies have targets for increasing the rate of feminization by professional field or level of responsibility, or for reducing pay gaps; disability policies have targets for the employment of people with disabilities and for workplace accommodation; commitment policies set targets for increasing the commitment rate measured in annual social climate surveys (Box 4.4); etc. Defining quantified commitments makes it possible to demonstrate the company’s commitment, and provides precise indicators for monitoring the company’s progress in the areas concerned. This is a first step toward measuring the effects of the policy.

The guidance given by the managerial literature on the definition of quantified objectives echoes recommendations on the definition of objectives for employees: in particular, objectives should be measurable, achievable and time-bound. However, some situations may lead to the definition of unattainable quantified commitments, particularly when stakeholders do not agree on the projections showing that they are unattainable (Box 4.5).


The agreements signed by large companies, particularly in France, on diversity policies (disability, intergenerational, gender equality) are generally accompanied by quantified objectives for progress. Thus, in its agreement in favor of people with disabilities, Thales defines a target of recruiting 120 employees and 38 part-time employees with disabilities over a 3-year period. A study conducted on agreements in favor of the employment of older workers shows that quantified objectives for retaining employees aged 55 and over are defined much more frequently than quantified objectives for recruiting employees aged 50 and over (Claisse et al. 2011). Finally, Rabier (2009) shows that the gender equality agreements signed by companies with trade unions reflect very heterogeneous degrees of commitment, and she mentions the definition of quantified objectives as one marker of that commitment.

Box 4.4. The quantified commitments of disability, older-worker retention and gender equality policies (sources: online agreements; Claisse et al. 2011; Rabier 2009)

In a large French company, the negotiation of an agreement on equality between women and men (within the French perimeter) highlighted the importance, particularly symbolic, attached by the unions to the commitment regarding the feminization rate. Several trade unions made the definition of quantified objectives on the subject a sine qua non condition for their signature. However, the projections made by the company showed that the objective requested by these unions (increasing the feminization rate by 1 point per year) was unattainable. Over 3 years, it would have required far more recruitment than the company could have achieved, due in particular to the high inertia of the feminization rate in a company of this size (around 90,000 employees). In the end, the objective was nevertheless included in the agreement. On the management side, this was due to the need to get the agreement signed; on the trade union side, it was due to the weight the unions placed on the feminization of the workforce and to their mistrust of the company’s numerical projections.

Box 4.5. Defining unattainable commitments to obtain a signature (source: Coron and Pigeyre 2018)
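A back-of-envelope calculation illustrates this inertia. The headcount is the one mentioned in the box, but the current feminization rate and the annual departure rate are invented for the sketch:

```python
# Illustrative calculation (rate and turnover assumed, not from the
# company in Box 4.5): why a +1 point/year feminization target is hard.
N = 90_000        # headcount (order of magnitude from Box 4.5)
r = 0.30          # assumed current feminization rate
d = 0.04          # assumed annual departure rate, gender-neutral

hires = d * N                      # hires needed to keep headcount stable
# New rate after one year if women make up a share w of hires:
#   r_new = r * (1 - d) + w * d
# Solving r_new = r + 0.01 for w gives w = r + 0.01 / d:
w_needed = r + 0.01 / d

print(f"{hires:.0f} hires; women must be {w_needed:.0%} of them")
# → 3600 hires; women must be 55% of them
```

With only 4% of the workforce renewed each year, gaining a single point requires hiring women at a rate 25 points above the current feminization rate, which illustrates why the unions’ target was judged unattainable.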


Defining quantified commitments and targets is therefore a first step toward facilitating the evaluation of HR policies, as it allows a comparison between what was planned and what has been achieved. However, it remains risky to interpret the achievement or non-achievement of numerical objectives as an indicator of the impact of HR policies.

4.1.2.2. Is isolating the effects of HR policies an impossible task?

Indeed, all public policy evaluation methods face the same difficulty: how can we isolate the effect of policies from contextual or structural effects (Behaghel 2012)? How can we ensure that any measured change (in the unemployment rate, for example, or the death rate on the roads) comes from the policy implemented, and how can we ensure that a lack of change reflects a lack of policy effects?

This question is just as central in HR. Indeed, the majority of HR phenomena are multifactorial, i.e. they respond to multiple factors. Thus, a company’s absenteeism rate depends not only on the company’s policy on absenteeism, but also on individual factors (gender, age, number of children, etc.) and external variables such as epidemics. As a result, if a company defines a policy to reduce absenteeism, and the following year the annual influenza epidemic is particularly fierce, absenteeism indicators may remain stable, giving the impression that there has been no effect, while the policy may still have contributed to reducing absenteeism.

An HRM situation may therefore evolve according to structural and contextual effects. Structural effects refer, for example, to the demographic structure of the company. Thus, in terms of workforce, a population with a high average age or a high percentage of employees close to retirement age will structurally experience retirements in the coming years. Contextual effects refer to temporary circumstances at a given time t.
For example, in terms of employee commitment, a company experiencing an economic crisis due to an unexpected drop in subsidies may see its employees’ commitment decline because they are aware of the precariousness of their situation.

Isolating the effect of HR policies then requires the ability to compare the situation observed with the policy in place with an often hypothetical situation without the policy, which makes it possible to control for structural and contextual effects. Several methodological strategies can be used to address this difficulty (Behaghel 2012): instrumental variables, controlled experiments, or cohort and panel monitoring. For structural effects only, a somewhat less accurate but simpler strategy is to mobilize projections of the situation (Box 4.6).

The company’s demographics can have a significant effect on the change in its feminization rate. Thus, in a company where women are older than men, and where a high percentage of them are close to retirement, the feminization rate will drop structurally in the coming years (and vice versa if it is men who are closer to retirement). Before defining a quantified commitment regarding the change in the feminization rate, the actors can therefore agree on projections that establish the structural change of the feminization rate. Once this agreement has been reached, the quantified objectives may aim at a higher rate of feminization than projected, which will then reflect (albeit very imperfectly) the efforts made by the company to improve on the structural situation.

Box 4.6. Mobilizing projections to monitor structural effects (source: study by the author)
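The kind of projection described in Box 4.6 can be sketched as a simple cohort simulation. All figures here are invented; a real projection would work from the company’s actual age pyramid:

```python
# Hedged sketch of a structural projection: women here are assumed to be
# closer to retirement, so the feminization rate drops with no change in
# hiring behavior. All parameters are illustrative.
women, men = 3_000.0, 7_000.0
retire_w, retire_m = 0.05, 0.02   # assumed annual retirement rates
hire_share_women = 0.30           # assumed share of women among hires

rates = []
for year in range(5):
    dep_w, dep_m = women * retire_w, men * retire_m
    hires = dep_w + dep_m                     # keep headcount constant
    women += hires * hire_share_women - dep_w
    men += hires * (1 - hire_share_women) - dep_m
    rates.append(women / (women + men))

print([f"{r:.3f}" for r in rates])
```

A projection of this kind gives the “structural” trajectory against which a quantified commitment can then be judged: any target above the projected rates reflects an effort beyond demographic inertia.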

Despite these methodological strategies, it remains complex to isolate the effects of HR policies. Ultimately, the evaluation of HR policies, whether to measure their implementation or their effects, remains a very difficult and sometimes impossible operation. However, quantification makes it possible to provide elements for reflection and exchange, particularly with social partners.

4.2. Quantifying in order to legitimize the HR function?

The evaluation of HR policies becomes all the more important in contexts where the HR function needs some form of legitimization. More generally, quantified tools can be used to legitimize the HR function. Two interrelated points illustrate this. First, quantification can be used to measure the performance of the HR function, and thus provide quantified evidence of performance to the company’s management. Second, quantification can be used to demonstrate the link between the performance of the HR function and the performance of the organization more generally. The particularly rich managerial and academic literature on these two subjects clearly illustrates their importance, but also the debates to which they give rise.

4.2.1. Measuring the performance of the HR function

Measuring the performance of the HR function is regularly presented by the managerial or normative literature as a necessary condition for transforming the HR function into a strategic actor, a partner of other functions and in particular of the company’s executive management (Boudreau and Ramstad 2004; Boudreau and Lawler 2014). However, defining the performance of the HR function, and therefore measuring it, is not self-evident; moreover, defining overly standard indicators may not take sufficient account of organizational contexts and contingencies.

4.2.1.1. How can the performance of the HR function be defined?

Many studies suggest distinguishing different types of HR function performance (Boudreau and Ramstad 2004):

– impact, which refers to the effect of HR policies on the company’s strategic activity. For example, if a company improves its recruitment policy, the question of impact concerns the effect of a potentially better selection on the strategic activity;

– effectiveness, which refers to the effect of HR policies on employees. For example, if a company implements a policy to improve working conditions, effectiveness could correspond to a potential increase in commitment;

– efficiency, which refers to a cost-benefit calculation, and corresponds to a kind of return on investment for the HR activity. For example, if a company implements an ambitious and costly training program, the notion of efficiency raises the question of the gains derived from it, and of the relationship between these gains and the costs involved (Box 4.7).

These three types of performance can serve as arguments for the importance of the HR function, addressed to the company’s management, but also to financial and operational management.


Boudreau and Lawler (2015) propose several indicators to measure the performance of the HR function (what they call talent metrics). These indicators are classified into efficiency, effectiveness and impact indicators.

Efficiency indicators:

– financial efficiency of HR operations;

– costs associated with HR programs and processes.

Effectiveness indicators:

– effects of specific HR programs (e.g. have training programs enabled employees to acquire the targeted skills?);

– cost-benefit analysis of these HR programs.

Impact indicators:

– impact of HR programs and processes on the company’s business activity;

– quality of HR decisions made by non-HR actors;

– impact of employee performance on economic activity.

Box 4.7. Proposed indicators for measuring the performance of the HR function (source: Boudreau and Lawler 2015)

However, these indicators have several limitations: first and foremost, they are particularly difficult to translate into calculation rules and therefore to measure. For example, how can we measure the financial efficiency of HR operations, or the benefits associated with HR programs? Other authors have therefore proposed more precise indicators for measuring HR performance for each major process (Cossette et al. 2014). These indicators also cover three dimensions: effectiveness, efficiency and impact (Box 4.8).

Cossette et al. (2014) propose indicators of effectiveness and efficiency for recruitment, selection and training.

Recruitment efficiency:

– number of applications;

– duration of recruitment;

– costs related to recruitment.

Recruitment effectiveness:

– number or rate of quality applications to the organization.

Recruitment impact:

– organizational performance (productivity, service quality, etc.).

Training efficiency:

– time spent on needs analysis;

– usefulness of training;

– time to design and develop the training program;

– total number of individuals trained per year.

Training effectiveness:

– participants’ reactions to the training;

– quality of learning;

– improvements in work behavior.

Training impact:

– organizational performance (productivity, service quality, etc.).

Box 4.8. Examples of effectiveness and efficiency indicators for HR processes (source: Cossette et al. 2014)
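A few of the recruitment indicators above can be sketched from a hypothetical recruitment log; the field names and figures are illustrative, not a standard schema:

```python
from statistics import mean
from datetime import date

# Hypothetical recruitment log: one record per position filled.
recruitments = [
    {"opened": date(2020, 1, 6), "filled": date(2020, 2, 20),
     "applications": 45, "quality_applications": 9, "cost": 4_000},
    {"opened": date(2020, 3, 2), "filled": date(2020, 4, 1),
     "applications": 60, "quality_applications": 20, "cost": 2_500},
]

# Efficiency: average duration and cost of a recruitment
avg_days = mean((r["filled"] - r["opened"]).days for r in recruitments)
avg_cost = mean(r["cost"] for r in recruitments)
# Effectiveness: rate of quality applications received
quality_rate = (sum(r["quality_applications"] for r in recruitments)
                / sum(r["applications"] for r in recruitments))

print(avg_days, avg_cost, round(quality_rate, 3))
```

Note that the hard part is not the arithmetic but the definitions it presupposes, such as what counts as a “quality application”, which is exactly the translation-into-calculation-rules problem raised above.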

Similarly, some of these indicators may be difficult to translate into specific calculation rules (e.g. the learning resulting from training). In addition, the same phenomenon arises as the one highlighted in the previous section, namely that indicators are never able to reflect the totality of a reality. As a result, companies may be tempted to define a very large number of indicators, hoping to better capture the performance of the HR function. However, there is then a risk of getting lost in the process and ultimately not really using this mass of information.


4.2.1.2. Choosing indicators according to the organizational context

In addition, the indicator suggestions provided by these various studies face an important limitation: the choice of indicators is strongly influenced by the organizational context, and in particular by the strategy and HR strategy of the company concerned. Thus, a company pursuing a cost-reduction strategy may not define the same indicators of efficiency and effectiveness for its HR function as a company pursuing a development and innovation strategy. Indeed, Cossette et al. (2014) advise asking the following questions before defining indicators:

– What are the issues facing the organization?

– What are the HR issues arising from these organizational issues?

– Which HR activities are concerned by these HR issues?

Only then can the two questions that will define the indicators of effectiveness and efficiency be asked:

– effectiveness: how can we determine whether HR activities are achieving their objectives?

– efficiency: what are the costs associated with these activities?

The importance and scope of these questions underline the impossibility of adopting a standardized approach to defining quantified indicators of the performance of the HR function. Yet measuring this performance is an important issue for the positioning and legitimacy of the HR function in the company, all the more so as most other functions (finance, marketing, etc.) are more easily able to demonstrate their added value in terms of organizational performance.

4.2.2. Measuring the link between HR function performance and organizational performance

Levenson (2018) emphasizes the importance of being able to establish a link between the performance of the HR function and its effect on the company’s business or economic performance. It is this purpose that underlies the impact indicators mentioned above (Boudreau and Ramstad 2004; Boudreau and Lawler 2015).


However, Levenson also shows the difficulties that companies face in measuring this link. Thus, one-third of the companies surveyed in this research reported that their information systems allowed them little or no ability to measure the effect of their HR activity on the company's economic activity. Despite these difficulties, many discourses attempt to define this link and suggest ways of measuring it. Thus, the business case approach to HRM – which can cover topics as varied as gender equality, commitment, or HRM in general – reflects this ambition. The staircase model (Le Louarn 2008; Cossette et al. 2014), briefly discussed in Chapter 1, illustrates this business case approach (Figure 4.2). Its four steps are:
– HRM: HR policies and HR practices;
– HR results: attitudes and behaviors;
– organizational results: operational, economic and financial;
– long-term company success.

Figure 4.2. The staircase model (sources: Le Louarn 2008; Cossette et al. 2014)

Thus, the first step corresponds to the measurement of HR activity (HR policies and practices defined and implemented, measured, for example, through monitoring or performance indicators of the HR activity). The second step refers to the attitudes and behaviors of employees; the link between the two measures the effect of HR activities on employees. The third step refers to the results of the organization (team, company). Finally, the last step concerns the long-term sustainability of the organization. Kirkpatrick's model of training evaluation follows the same logic. This model defines four levels of training effects: employee reactions (satisfaction with training), learning (skills and knowledge acquired), results (impact of training on company results) and return on investment (comparison between benefits and costs).

Following this type of model requires defining measurement indicators for each step, then measuring the links between these successive groups of indicators (the arrows of the staircase model). Two operations are in fact necessary to try to establish a link between HRM and company performance: demonstrating a link between HR function performance and employee behavior (the transition from the first to the second step), and then demonstrating a link between employee behavior and company performance (the transition from the second to the third step). Examples of this business case approach have already been given in Chapter 1. The focus is now on deconstructing both the rhetoric of this approach and the methodological and epistemological difficulties it encounters.

4.2.2.1. From HR function performance to employee behavior

The transition from the first to the second step is based on the argument that HR function activities are supposed to have an effect, on the one hand, on employee behavior at work (such as cooperation or compliance with procedures) and, on the other hand, on employee attitudes toward the company (commitment or loyalty, for example). This rhetoric is intended to highlight the key role of the HR function for employees. It is based on intuitive reasoning, but also on the perhaps illusory idea that employees react to the incentives put in place by the HR function by changing their behavior and attitudes. Thus, with regard to behavior, it seems intuitive that deploying large-scale training on compliance with safety rules leads to better compliance by employees and therefore a reduction in workplace accidents. However, many studies have highlighted the gap between this rational conception of the organization and the reality of organizational daily life.

Indeed, many factors can limit the effect of HR activities on employees: misunderstanding of objectives, power games, mismatch between activities and real work, etc. For example, training on safety rules may not be sufficient to change practices if non-compliance saves employees time, if they have long-established work routines based on non-compliance with these rules, or if they feel that these rules limit their individual freedom of action (Greasley et al. 2005).


Concerning attitudes toward the employer, the rhetoric is based on the idea that the relationship between employees and the employer depends on the HR policies the employer puts in place. Indeed, several theoretical currents rely on this link or seek to demonstrate it. For example, work on the employer brand concept seeks to link the perception of the benefits of working for a given employer to variables such as attachment to the company, faithfulness or loyalty (Ambler and Barrow 1996; Charbonnier-Voirin et al. 2014). Similarly, work in the field of organizational justice highlights the influence of perceived justice on the intention to remain in the company, commitment, motivation and loyalty (Cropanzano and Ambrose 2001; Bourguignon and Chiapello 2005; Jepsen and Rodwell 2012; Hulin et al. 2017). Finally, some studies consider that one of the main missions of the HR function is to develop employee commitment as much as possible by using different levers: working conditions, the interest of the work and development opportunities (Cleveland et al. 2015).

The success of this rhetoric in managerial discourses, but also in academic work on the HR function, is undoubtedly explained by the need for the HR function to legitimize itself with senior management, as well as with employees and operational management. It is a question of justifying the importance of its activity, and therefore ultimately its operating costs, knowing that it is regularly perceived as a cost center rather than a profit center for the company. However, many methodological obstacles limit the scope of this rhetoric. First of all, quantitative studies often give contradictory results on the extent to which HR activities have an impact on employee behavior and attitudes. Second, and this partly explains the previous point, measuring these different elements (HR activities and employee attitudes) requires the construction of variables that reflect them.
Yet, there may be a wide variety of measures of commitment, faithfulness, loyalty, but also HR activities, as underlined in Chapter 1. For example, measuring commitment can take very different forms, illustrating the methodological difficulty in understanding this construct. Recently, companies have been offering solutions for measuring the social climate, at a very frequent rate (weekly, for example) and on very specific points. Thus, if a company decides to move its premises, this type of pulse survey can make it possible to quickly evaluate the effect of the moving announcement, and then of each project stage, on the social climate.
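The pulse survey logic described here reduces to a very simple indicator: a frequently collected mean score whose variations are tracked around events, such as the moving announcement. A sketch with invented figures:

```python
# Hypothetical sketch of a weekly "pulse" indicator: the mean of weekly
# survey answers (1-5 scale), with a flag when the score drops by more than
# a threshold from one week to the next. All figures are invented.

weekly_scores = {            # week -> individual answers collected that week
    "W1": [4, 4, 5, 3, 4],
    "W2": [4, 3, 4, 4, 4],   # moving announcement made this week
    "W3": [3, 2, 3, 3, 2],
}

pulse = {week: sum(v) / len(v) for week, v in weekly_scores.items()}
print(pulse)   # {'W1': 4.0, 'W2': 3.8, 'W3': 2.6}

THRESHOLD = 0.5
drops = [w for w, prev in zip(list(pulse)[1:], list(pulse)[:-1])
         if pulse[prev] - pulse[w] > THRESHOLD]
print(drops)   # ['W3']
```

The design choice is deliberately crude: a real solution would weight response rates and segment by population, but the principle – a high-frequency mean tracked around events – is the same.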


These solutions are therefore part of the myth of objectivity described in Chapter 2: the frequency of measurements gives the impression of reporting changes in real time (which is highlighted in the communication of these companies), and the selling points of this type of solution are based on the idea of reporting a reality that would be inaccessible without these data. However, this type of solution must stand out and prove its added value compared to other measures of the social climate, such as annual surveys. Finally, as seen above, it is difficult to isolate the effect of HR activity on employee behavior and attitudes, as the latter are strongly influenced by other contextual and structural factors. For their part, qualitative studies highlight the gap between the objectives of HR policy makers and what happens in the field at the local level. Thus, it was recalled that it is not at all certain that a training policy on compliance with safety rules will lead to better compliance with these rules (De Vaujany 2005; Greasley et al. 2005).

4.2.2.2. From employee behaviors to organizational performance

The transition from the second to the third step implies, for its part, demonstrating a link between employee behaviors and attitudes and their performance (and therefore ultimately organizational performance). This is based on several rhetorical arguments. The first argument conveys the idea that more satisfied, more faithful, more loyal employees will perform better. This argument comes from several theoretical currents originating in the human relations school. Indeed, this school and Elton Mayo's experiments at the Hawthorne factory suggest that employees' individual productivity depends not only on financial incentives or the way work is planned, as in Taylorism, but also on valuing and listening to employees, and paying attention to the work environment.
Subsequently, other trends have followed this path, emphasizing the variety of sources of individual motivation and the fact that extrinsic motivations such as remuneration or control are not enough. The second argument suggests that overall organizational performance is closely linked to the individual performance of employees. This argument therefore adopts a rational vision of the organization, where organizational performance is made up of the sum of individual performances. However, this has been challenged by various trends underlining the importance of group formation or the impossibility of detaching individual and collective performance (Marchal 2015).


As in the previous cases, these two arguments are relatively difficult to demonstrate. In addition, they raise ethical questions: they tend to subordinate the imperative of employee well-being (or autonomy, good working conditions, job satisfaction) to a performance imperative. In Chapter 1, the example of the gender equality business case was given. This business case is criticized in particular from this angle (Sénac 2015): is it legitimate, ethical, to subordinate the imperative of equality to a performance imperative? What will companies do if one day it is demonstrated that equality does not bring about more performance? Finally, this business case approach seems to represent an important issue for the HR function, which is seeking to legitimize itself. The many methodological and ideological limitations do not diminish the interest of the HR function in this rhetorical use of quantification. For their part, some academic currents continue to use the staircase model and measure links between HR activities, employee attitudes and behaviors, and organizational performance (Box 4.9).

Levenson et al. (2004) studied a competency management system, and sought to measure the effect of this type of system on organizational performance. They formulated hypotheses on the links between the characteristics of the system implemented (ease of understanding of the system by managers, fairness and validity of the system, alignment with other HR practices and processes, role of managers in the system), the encouragement to acquire and demonstrate skills, the promotion of managers and, finally, organizational performance. They show the following results:
– a better understanding of the system, as well as its perceived fairness and relevance, is associated with better ratings on individual competencies;
– better ratings on individual competencies are associated with better individual performance;
– better ratings on individual competencies are associated with better collective performance (e.g. at the level of a given establishment).

Box 4.9. Example of moving from the first to the third step (source: Levenson et al. 2004)


Finally, the HR function has a strong interest in mobilizing quantification to demonstrate its performance and its effects on, on the one hand, the company's strategic success (for senior management in particular) and, on the other hand, the social climate and employee commitment (for operational management in particular). Once again, taking up Sainsaulieu's (2014) assumptions on collective and individual identity at work, quantification can then contribute to the construction of the professional identity of the HR function, by providing arguments that enable it to strengthen its positioning and legitimacy with other functions and top management.

4.3. Quantification and the risk of automation of HR professions

However, quantification is not just a simple assessment tool or rhetorical argument for the HR function. Indeed, studies now highlight the link between quantification – and more precisely the increased use of algorithms – and the automation of professions (Villani 2018). Yet some HR professions present a high risk of automation. The question then arises of how to support the employees concerned.

4.3.1. HR professions with a high risk of automation

Contrary to a relatively widespread discourse, the jobs at high risk of automation are not systematically the least qualified ones. Identifying automation risk factors is therefore necessary to determine which HR professions are involved.

4.3.1.1. Automation risk factors

The jobs with the highest automation risks are those that combine information processing tasks with a low relational component (Deming 2017). Indeed, advances in algorithms and artificial intelligence now allow machines to be more efficient than human beings at processing information. This efficiency results in particular in greater speed, and therefore in the possibility of processing information exhaustively, rather than having to sort or summarize it first, as a human being, who does not have the same capacities, particularly in terms of memory, must do. This is one of the purposes of algorithms: sorting, selecting and prioritizing masses of information (Cardon 2015). This information may or may not be structured, and may, for example, consist of words, figures, images, etc.


Thus, an algorithm can perform the following tasks, among others, much more easily (and quickly) than a human being:
– locating and counting specific words in a text;
– measuring word co-occurrences;
– performing calculations based on a set of figures;
– quickly tagging a set of images.

However, it should be noted that the algorithm does not analyze the meaning of this information as a human being could: this is the difference between the substantial approach and the procedural approach highlighted by Cardon (2018). In other words, the matches that the algorithm is able to make between two keywords (computer and laptop, for example) come from calculations that measure a regularity in the proximity of these two words, not from an understanding of their meaning. This explains why Google Translate can nowadays offer translations between two very rare languages (in cases where, for example, no bilingual dictionary exists between them): the system is not based on a parallel understanding of both languages, but on word-matching calculations based on very large volumes of written content, and it can use a very common intermediate language (such as English) as a bridge between two rarer languages (Mayer-Schönberger and Cukier 2014). However, some discourses explain that the distinction between the substantial and procedural approaches is changing due to the ability of algorithms to find or reconstruct concepts of a kind. Thus, from a large number of cat photos, an algorithm can nowadays more or less reconstruct the concept of a cat in order to identify cats in other photos: this is the principle of unsupervised learning1 (CNIL 2017). On the other hand, algorithms remain relatively less efficient than human beings in terms of relational skills (Villani 2018): for example, they do not currently experience the empathy or emotions that human beings may experience. Similarly, the creative field is still relatively protected from automation. As a result, the most easily automated tasks are those that combine information processing activities with little relational or creative content.

1 See, for example, https://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html (accessed October 2019).
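The first two information-processing tasks listed above – locating and counting specific words, and measuring word co-occurrences – can be sketched in a few lines of plain Python:

```python
# Sketch of two information-processing tasks: counting occurrences of
# specific words, and measuring sentence-level co-occurrence of word pairs.
# The text is an invented toy example.
from collections import Counter
from itertools import combinations

text = ("the laptop is a computer. the computer on the desk is a laptop. "
        "the desk is wooden.")
sentences = [s.split() for s in text.rstrip(".").split(". ")]

# Task 1: locate and count specific words
counts = Counter(w for s in sentences for w in s)
print(counts["laptop"], counts["computer"])   # 2 2

# Task 2: count how often two words appear in the same sentence
cooc = Counter()
for s in sentences:
    for a, b in combinations(sorted(set(s)), 2):
        cooc[(a, b)] += 1
print(cooc[("computer", "laptop")])           # 2
```

The regularity captured by `cooc[("computer", "laptop")]` is exactly the kind of procedural, statistical proximity the text describes: the program "knows" nothing about what a laptop is.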


Physical tasks can also be automated, but here the focus is on the HR function, which is little concerned with this type of task. I will only point out that tasks that seem easy for a human being are sometimes less so for a machine, which has different constraints (fewer physical constraints, but more constraints related to understanding the environment, for example). Taking these automation risk factors into account has made it possible to develop models that predict the automation risk of each profession. The best-known study on the subject was conducted by Frey and Osborne (2017) on occupational data for the United States. It was notably publicized by the BBC, which transformed the study into a search engine giving each occupation a probability of automation. It shows that administrative occupations – for example in financial or legal services – present a high risk of automation, unlike occupations with a strong relational component (e.g. psychologists and social workers2).

4.3.1.2. The HR professions concerned

The HR function is no exception. The occupations most likely to be automated are those that combine information processing tasks with few relational components. Identifying them sometimes involves distinguishing between different activities within the same profession. Thus, depending on the company, a recruitment manager may handle the recruitment process from start to finish, from the sorting of CVs to the integration of the selected candidate into the company. In other companies, this set of activities may be split between several professions: a selection officer who is only responsible for pre-selecting CVs, a recruitment officer who is responsible for interviewing candidates and liaising with the line manager concerned, and an integration officer who takes charge of the candidate once they have been selected. However, these different tasks do not present the same risk of automation. While CV sorting seems easily automated, since it is an information processing task that does not involve a relational relationship with candidates, the other two activities seem to present lower automation risks. Indeed, many companies have embarked on the development of CV pre-selection algorithms (Box 4.10).

2 See: https://www.bbc.com/news/technology-34066941 (accessed October 2019).


The CV pre-selection stage of recruitment can be particularly time-consuming, especially in large companies, which can receive a significant number of CVs for each job offer. Several companies from different countries now offer solutions to automate this step. As early as 2014, Kuncel et al. explained that algorithms are often better than human beings at predicting the future performance of candidates and thus at selecting those most likely to succeed in the company. For example, Cornerstone offers solutions to integrate data analysis into recruitment, including some forms of automation of certain steps. For its part, Assessfirst offers a range of recruitment solutions, including candidate selection assistance based on a match between candidates and the criteria requested: "All you have to do is answer a few questions to describe the position you're recruiting for. Our algorithm analyzes your answers and compares them to over 5,000,000 profiles to give you a clear recommendation of the type of profile to look for" (US website of Assessfirst). Mindmatcher offers a solution that, based on a job offer, automatically proposes the most suitable candidates "thanks to a matching based on skills, training, experience" (English website of Mindmatcher). Some applications are positioned rather on the candidate's side, such as Kudoz in France (acquired by Leboncoin), which promises to send candidates offers corresponding to their skills and experience (as LinkedIn also does). Other companies develop their own tools internally. Thus, in a multinational company in the digital sector, an internal team of data scientists designed a tool that automatically ranks CVs according to their proximity score to a given offer (the score being calculated on the basis of the proximity of the keywords present in the CVs and in the offer).

Box 4.10. The automation of CV pre-selection, the promise of several companies (sources: Kuncel et al. 2014; company sites and study conducted by the author)
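The keyword-proximity scoring mentioned for the internal tool can be illustrated as follows. The real tool's scoring method is not public; this sketch simply uses Jaccard similarity between keyword sets, a deliberately naive stand-in, with invented CVs and offer:

```python
# Hypothetical sketch of keyword-proximity CV ranking: each CV and the job
# offer are reduced to keyword sets, and CVs are ranked by their Jaccard
# similarity to the offer. All texts are invented for illustration.

def keywords(text):
    return set(text.lower().split())

def proximity(cv, offer):
    cv_kw, offer_kw = keywords(cv), keywords(offer)
    return len(cv_kw & offer_kw) / len(cv_kw | offer_kw)  # Jaccard index

offer = "python data engineer sql cloud"
cvs = {
    "A": "java developer spring sql",
    "B": "python data scientist sql cloud",
    "C": "graphic designer photoshop",
}

ranking = sorted(cvs, key=lambda c: proximity(cvs[c], offer), reverse=True)
print(ranking)   # ['B', 'A', 'C']
```

Even this toy version makes the limits of such tools visible: a candidate whose CV uses synonyms of the offer's keywords scores zero, exactly the procedural (rather than substantial) matching discussed earlier in the chapter.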

Thus, the activity or occupation of CV pre-selection seems to present a significant risk of automation. Similarly, administrative activities and professions are also threatened by the development of algorithms in this area. More recently, the rise of artificial intelligence has led to considerable progress in the field of conversational robots (chatbots), suggesting many possibilities for automating the handling of administrative issues (Box 4.11).


HR administrative chatbots are developing rapidly, at the initiative of companies that develop more generic chatbots or of companies specialized in the HR field. Thus, Ubisend, a company specializing in chatbots, offers, among other things, chatbots specially designed to answer employees' questions on HR topics. Their sales pitch is simple: HR practitioners spend a large part of their time answering repetitive or routine questions, and could save part of that time by using a robot to answer the simplest ones3. For its part, Slack designed the Lucy Abbot chatbot, which is able to handle employees' questions and requests, but also to assist the HR function in communication tasks (emailing, for example). Other chatbot solutions focus on recruitment and answering candidates' questions. Thus, the American army uses a chatbot, Sergeant STAR, to present military occupations and conditions of employment in the army. These few examples highlight several common points of these chatbots. First of all, they require a trade-off between humanization (as shown by the fact that they have proper names) and robotization (they generally take the form of an avatar instead of a human photo). Then, they are presented as a time-saving solution for the HR function. Finally, they are dedicated to the most repetitive and simple tasks.

Box 4.11. HR administrative chatbots, a reality in the near future?
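The "repetitive questions" logic these chatbots rely on can be reduced to a minimal keyword-routing sketch. All content here is hypothetical, and real products use far richer language models; the point is only the division of labor the box describes, with the robot handling the simplest questions and a human fallback for the rest:

```python
# Minimal rule-based HR FAQ chatbot sketch: route a question to a canned
# answer by keyword, and escalate to a human HR advisor when no rule
# matches. Questions and answers are invented for illustration.

FAQ = {
    "payslip":  "Payslips are available in the HR portal under 'My documents'.",
    "vacation": "Vacation balances are shown in the time-management module.",
    "expense":  "Submit expense reports before the 25th of each month.",
}

def answer(question):
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "I'll forward your question to an HR advisor."  # human fallback

print(answer("Where can I find my payslip?"))
print(answer("Can I carry over parental leave?"))  # no rule -> escalated
```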

A certain number of HR activities are thus carried out by machines and no longer by human beings.

4.3.2. Support for the employees concerned

As a result, the HR function faces a major challenge in supporting the employees concerned. This issue also creates tensions or contradictions for the HR function. Indeed, the latter is strongly encouraged to position itself as a promoter of new technologies within organizations (Ulrich et al. 2013), and can therefore hardly adopt a more critical stance when these technologies threaten it. There are two scenarios: cases where professions will disappear completely and those where they will evolve.

3 www.ubisend.com/chatbots/hr/hr-chatbot (accessed October 2019).


4.3.2.1. Professions that will disappear...

The HR professions that will disappear are mainly found in structures where the work of the HR function is highly segmented and compartmentalized. Thus, as we have seen, CV selection officer positions in companies that split recruitment between CV pre-selection and candidate interviews may be more affected by this threat of automation than recruitment officer positions that handle the recruitment process from start to finish. Similarly, operational HR staff dedicated solely to handling employees' administrative issues may see their positions disappear, unlike those dealing with a more diverse range of activities. Several options are therefore available to the companies concerned. A first option is to reposition the people concerned in other professions. This option requires a reflection on the skills that can be transferred from one profession to another, but also the collection of the wishes of the employees concerned. Thus, contrary to what seems most intuitive, it is not obvious that a selection officer can easily and willingly convert to qualifying candidates through interviews: this is a position with a relatively similar purpose, but with very different working methods (in terms of human contact or the organization of working time, for example). A second option is to create roles related to machine monitoring, in a movement similar to robotization, which prompted plants to create positions related to robot management. An operational HR employee dedicated to handling routine administrative questions from employees could thus manage the chatbot (which automates part of their tasks) while remaining available to deal with more complex questions that the robot does not handle. A third option is to rethink HR occupations to encompass a wider variety of tasks.
Thus, companies that, in the interest of profitability, have highly segmented HR activities could revisit this segmentation and think about broader HR activities. The segmentation between CV selection officer and recruitment officer could thus be abolished, as could the segmentation between HR operational staff responsible for answering administrative HR questions and administrative HR experts. These three options do not all amount to taking advantage of the disappearance of certain professions to reduce the HR workforce. Thus, option 1 implies assigning the people concerned to other tasks, possibly outside HR. Option 2 would in any case involve a reduction in the number of jobs, since a machine manager could manage machines replacing several employees. Finally, option 3 would consist of taking advantage of the productivity gains allowed by automation to enrich each profession. Admittedly, it is possible that some companies may initially resist this automation, under pressure from the social partners, for example. But the strong and permanent incentives to reduce the operating costs of the HR function will undoubtedly lead them to revise their positioning.

4.3.2.2. ... or professions that are likely to evolve?

Other professions may evolve. To take the example of recruitment managers who follow the recruitment process from start to finish, this profession would have to evolve, in particular toward a reduction in automated tasks (e.g. the CV pre-selection stage), but also toward collaboration with machines or algorithms. For example, a recruitment officer may have to use the results provided by an algorithm to decide on the final list of candidates to be interviewed. This evolution implies two major changes. First of all, it requires the HR function to be trained in the use of results from algorithms in order to interpret them without fetishizing them. Thus, understanding how results are produced, from which data and on which rules, makes it possible to question them, to reconsider them and, finally, not to consider them as absolute truths. This freedom of criticism and questioning seems necessary so that HR actors retain responsibility for decision-making and do not delegate it to machines whose operation they do not understand, i.e. ultimately to the designers of these machines. Second, it would probably be preferable for these HR actors to participate themselves in algorithm design. Indeed, they can not only provide expertise in the HR field, but also guarantee a form of algorithmic ethics.
For example, it is the recruitment officers who are able to explain the criteria used to recruit individuals and to stress the importance of the fight against discrimination. However, this change requires collaboration with statisticians, computer scientists and data experts, which will not be self-evident given the distance between the vocabularies, expertise, skills and positioning of these two sets of actors. In addition, this collaboration may be part of a very unbalanced relationship, as the knowledge and skills of data experts and computer scientists may seem more esoteric, and more difficult to explain to outsiders, than those of the HR function.


In fact, these two changes require specific training to enable the HR function to acquire skills in data analysis, statistics and IT.

This chapter has focused on the positioning of the HR function in relation to quantification. Quantification – a tool for evaluating and even legitimizing the HR function – has recently also emerged as a threat to some HR professions. As a result, the relationship between the HR function and quantification may be ambivalent. It can be characterized by a certain neutrality, when the HR function uses quantification to measure its action and effects while paying attention to criteria of methodological rigor and the limits of these measures. It can also be characterized by a form of instrumentalization, when quantification is used for rhetorical purposes, for example to highlight the contribution of the HR function to organizational performance. Finally, it may be characterized by a kind of fear, due to the most recent advances in the use of data to automate certain tasks previously performed by human beings. Whatever the prevailing feeling, the actors of the HR function cannot avoid a reflection on their individual and collective positioning with regard to quantification and the new tools that are emerging and becoming more and more important.

5 The Ethical Issues of Quantification

The philosopher Hans Jonas (1995) devoted part of his work to the question of human responsibility in a context of very rapid technological change. He pointed out that the modern context is characterized by strong unpredictability due to this speed, and by a considerably increased power of human beings over their environment. He thus proposed to renew the Kantian categorical imperative ("Act only according to that maxim whereby you can, at the same time, will that it should become a universal law") with "act so that the effects of your action are compatible with the permanence of genuine human life". This imperative underlines the responsibility of human beings toward nature, but also toward future generations. The changes that have appeared in the field of quantification and its use in HR seem to contribute to a context similar to that studied by Jonas: a strong unpredictability of future developments for the HR function and organizations, and an increased power of quantification in the HR field. Therefore, in this chapter the question of the ethical issues associated with the use of quantification will be raised. The first issue, highlighted at the European level by the General Data Protection Regulation (GDPR) but also at the national level (for example by the French National Commission for Data Protection (CNIL) in France), concerns the protection of personal data (section 5.1). Indeed, the rise of quantification and the new uses described require the collection and processing of large amounts of data, which calls for a renewal of the associated rules. The second issue, highlighted by many academic studies in particular, refers to the notion of discrimination (section 5.2). Indeed, the link between quantification and decision-making documented in Chapter 2 encourages addressing the issue of discriminatory decision-making and the safeguards to be put in place to avoid it. Finally, the third issue refers to training and information for the various stakeholders: HR actors and employee representatives in particular (section 5.3). Indeed, the increasingly intensive use of quantification makes it a subject that needs to be addressed in discussions with these stakeholders. In addition, training and information for the various stakeholders seem to be essential conditions for enabling human beings to "keep control" over algorithms and quantification (CNIL 2017).

Quantifying Human Resources: Uses and Analyses, First Edition. Clotilde Coron. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

5.1. Protection of personal data

The increase in the amount of data available and in the processing operations carried out contributes to the growing importance of protecting personal data. These data are defined as all information relating to a natural person who can be identified, directly or indirectly. This definition actually covers a lot of data: it includes direct identification data (such as name and e-mail address), as well as data allowing indirect identification, in particular by cross-referencing (e.g. address and age). In other words, the personal nature of the data rests on the key notion of the person's ability to be identified. As a result, non-personal data are also defined in relation to this key concept. Thus, anonymized data (a name replaced by an identifier whose correspondence key is not available, a voice modified on an audio recording) are not personal, as long as they do not allow identification, even by cross-referencing. It is therefore important to note that the personal nature of the data does not prejudge the importance or sensitivity of its content. Thus, non-sensitive data such as an e-mail address is personal data, while sensitive data such as health status can be non-personal data if it is anonymized. However, developments in Big Data and artificial intelligence have increased personal data processing tenfold1.
Indeed, while the anonymization rule has long prevailed in public statistics or company reporting, many uses associated with these two tendencies require that the individual is ultimately returned to, and thus prevent anonymization. Thus, suggesting personalized content implies being able to return to the individual, as well as offering an individualized car insurance rate, for example. 1 The processing of personal data refers to any operation involving personal data (data collection, storage, modification, analysis, reconciliation with other data, etc.).
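The distinction drawn above between personal and anonymized data can be illustrated with a minimal Python sketch (field names and values are hypothetical): pseudonymization replaces the direct identifier with an opaque ID, but as long as the correspondence key exists, re-identification remains possible and the data remain personal.

```python
import uuid

def pseudonymize(records, key_store):
    """Replace the direct identifier with an opaque ID.

    The data remain *personal* as long as `key_store` (the
    correspondence key) is kept, since re-identification is possible.
    Destroying the key is only a step toward anonymization: indirect
    re-identification by cross-referencing must also be excluded.
    """
    out = []
    for rec in records:
        pid = str(uuid.uuid4())
        key_store[pid] = rec["name"]  # correspondence key, kept apart
        out.append({"id": pid, "age": rec["age"], "city": rec["city"]})
    return out

# Illustrative employee record.
employees = [{"name": "A. Martin", "age": 42, "city": "Lyon"}]
key = {}
pseudo = pseudonymize(employees, key)
# With `key`, the employer can still re-identify the person:
# the record is pseudonymized, not anonymized.
```

The remaining fields (age, city) show why destroying the key is not always enough: cross-referencing quasi-identifiers can still single a person out.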

The Ethical Issues of Quantification


This raises the question of the potential risks relating to personal data, and of companies' obligations in the HR field specifically.

5.1.1. Risks relating to personal data

Two main types of risk can be distinguished. The first arises from poor data security, which threatens in particular the integrity and quality of the data. The second occurs during the use of data, and refers in particular to an ethical dimension.

5.1.1.1. Risks related to poor data integrity

Poor data security exposes data to hacking, but also to accidental modification. Acts of piracy can have several purposes, commercial or otherwise. Some hackers want to recover data files in order to resell them: a file of telephone numbers and addresses, for example, can be of great value to telephone solicitation companies. Others want to seize data files and make them public, as illustrated by the hacking suffered by Sony in 2014 (Box 5.1).

At the end of 2014, the Sony Pictures Entertainment film studio was hacked. Data files, some of them personal, were leaked on the Internet. Film projects were exposed (non-personal data), but also data on the remuneration and benefits of actors, and social security numbers (personal data). These data also revealed significant pay inequalities by gender and ethnic origin. Other documents were disseminated, such as information on complaints of sexual harassment and exchanges of e-mails.

Box 5.1. Data piracy suffered by Sony (source: press articles)

In addition to contributing to the unlawful dissemination of data, whether for commercial purposes or not, hacking can also threaten the integrity of data: modifying it, altering it, making it unavailable, and so on. A hacker who takes control of a Twitter or Facebook account and modifies its description violates the integrity of its data.

In HR, data protection is all the more important because much of the data the company processes on its employees either has a certain market value or constitutes sensitive data. Contact information (address, telephone number, etc.) can be of great interest to direct selling companies, and therefore has a market value. Information related to health status (such as absenteeism data), disability or sexual orientation (identifiable through the name of the spouse, for example) is particularly sensitive data, generally subject to specific protections in national legislation.

The issue of data accuracy and quality (and therefore of data integrity) is also of particular importance in HR. As seen earlier, the HR function takes a number of decisions on the basis of personal data, so the accuracy of these data is an important issue. Some falsifications can threaten it: the recurrent scandals involving false diplomas correspond to situations where a piece of data (the person's diploma level) proves to be inaccurate. In general, this is not hacking as such but data falsification. While claiming a false diploma to obtain a job is illegal in most countries, the same cannot be said of self-reported information such as personal skills or tastes, which is why recruiters generally check these aspects of the profile during interviews with candidates. However, the rise of self-reported information on professional social networks (self-reported skills on LinkedIn, for example) and its increasing use by algorithms to profile individuals raises the question of the validity of these data (CNIL 2017). LinkedIn has in fact developed a strategy to validate this information by peers through a system of skill endorsements. Other strategies are possible, such as managerial validation, checking the consistency between the skills and the professional experience declared, or simply accepting the margin of uncertainty inherent in self-reported data.

5.1.1.2. Risks related to the use of data

The protection of personal data covers both their security and the rules governing their use. One of the most important principles in this respect concerns the purpose of data processing: most national and international (e.g. European) regulations require data controllers to explicitly declare the purpose of collection and use, and not to deviate from it. Deviations from the intended use can take several forms. A relatively common case is the resale of data: a website that collects data on its users can monetize these data by selling them to partners. Another is the use of data for manipulation, as revealed by the Facebook-Cambridge Analytica scandal (Box 5.2).

In 2018, American and British newspapers revealed that the personal data of millions of Facebook users had been obtained by a data analysis company. A developer had created an application, presented as a research tool for psychologists, that collected information on the identity of users, their friends and the content they liked. The stated purpose was to produce personality profiles. However, two problems were highlighted. The first was that the application accessed not only the data of its users, but also the data of their "Facebook friends", who had not consented to this processing. The second was that these data were then passed on to Cambridge Analytica, which worked on Donald Trump's 2015-2016 presidential campaign and was therefore able to use them to target voters.

Box 5.2. The Facebook-Cambridge Analytica scandal (sources: press articles; CNIL 2017)

In HR, deviation from the intended use may refer, for example, to a situation where a company that has implemented a data collection tool for security purposes uses it for disciplinary purposes (Box 5.3).

Video surveillance tools are closely regulated by national legislation, notably in Europe. The employer is responsible for the safety of their employees and can therefore legitimately implement this type of tool, for example in premises where a risk of theft or assault threatens the safety of property or people (bank branches or stores, for example). However, camera location is also subject to rules: it is prohibited to have cameras filming a particular employee, or private premises such as changing rooms. In addition, if the employer declares that these tools are used for security purposes, they cannot be used for disciplinary purposes: a video recording showing that an employee systematically leaves work before the scheduled time cannot then justify dismissal.

Box 5.3. Misappropriating video surveillance for disciplinary purposes (source: Baudoin et al. 2019)

Finally, there are the so-called sensitive data: ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, health status and sexual orientation. The processing (collection as well as analysis) of such data is in principle prohibited in the European Union, except where the data subject has explicitly agreed to it or where its use is justified by the public interest. Indeed, the misuse of these data can be particularly problematic from an ethical point of view (Box 5.4).

An interview with Marc-Antoine Dilhac, Assistant Professor of Ethics and Political Philosophy at the Université de Montréal, provides several examples of possible misuse of sensitive data derived from images. Video surveillance cameras now make it possible to identify individuals, and some countries are developing programs to infer a person's criminal or terrorist character from their face. American researchers have also drawn attention to the dangers of this type of practice by developing a program that can identify an individual's sexual orientation from a photograph. These two examples violate several fundamental ethical rules: they create stigmatization effects, make it possible to discriminate on physical appearance and violate the right to privacy.

Box 5.4. Misuse of sensitive data (source: UNESCO 2018)

In short, many risks are associated with the misuse of personal data, and some of them concern the HR function directly, since it has access to a significant amount and variety of worker data.

5.1.2. Obligations and actions of companies with regard to the protection of personal data

These risks explain why national and international legislation has sought to define obligations for employers in this area. The focus here is on the general European regulation that came into force in 2018 (GDPR, General Data Protection Regulation); the example of the blockchain is then given as a solution to guarantee a form of data integrity in HR.

5.1.2.1. The European regulation: employers' obligations

The GDPR is a regulation that came into force in May 2018 and aims to harmonize personal data protection rules and practices within the European Union. It strengthens a directive dating from 1995, and signals that the European Union wishes to impose high standards of personal data protection (Villani 2018).


The GDPR contains many provisions; the main ones affecting the employer, particularly when implementing projects involving HR algorithms, are presented here (Box 5.5). It should be noted that employers have disciplinary power over the employee, which de facto involves data processing operations that fall outside certain obligations defined by the GDPR. For example, it is legitimate for the employer to hold the social security numbers or the assessments of their employees, and an employee cannot request a change in the assessments concerning them, even though the right to rectification is part of the provisions of the regulation.

Among the provisions that affect employers as controllers of personal data, the first is the notion of "explicit" and "positive" consent. Individuals cannot give their consent passively (e.g. by not unchecking a box in a form): consent must be based on an action (such as checking a box). This implies, for example, that a company wishing to deploy a CV pre-selection algorithm, or suggestions for positions or training, must first obtain the explicit agreement of the candidates or employees for this specific processing. In addition, Article 17 of the GDPR enshrines the right to erasure, i.e. the right to obtain from the controller the erasure of one's data. This requires the designers of algorithms to manage the storage and processing of data in detail. Article 22 of the regulation also provides for the possibility of objecting to being subject to a decision based exclusively on automated processing. This article therefore allows candidates to refuse to have their application processed solely by a CV pre-selection algorithm, and employees to refuse to have their mobility or promotion wishes studied solely by algorithms².

Box 5.5. The main provisions of the GDPR that affect employers and the mobilization of HR algorithms

Beyond these specific provisions, the GDPR is based on several basic principles. The first rests on the concepts of lawfulness, fairness and transparency: the processing of data must not be unlawful, and individuals whose data are processed must be informed of the existence of the processing (e.g. of the personal data that are collected). The second principle, purpose limitation, concerns the purposes of processing, which must be determined, explicit and legitimate, and must not change over time without informing individuals: data must be collected for a specific, stable purpose that is communicated to the individuals concerned. The third principle, data minimization, requires that only adequate, relevant and necessary data be collected for the purposes of the processing operation. The fourth principle refers to the accuracy of the data and their regular updating. The fifth principle, close to the fourth, rests on the concepts of data integrity and confidentiality: it requires that data be processed (in particular, collected and stored) in such a way as to guarantee their security and protection. Finally, the sixth principle, storage limitation, sets out rules on the duration of data retention.

The GDPR also includes a more practical component, establishing a form of good practice in the protection of personal data. The "privacy by design" rule provides for data protection requirements to be taken into account from the design stage of products, applications and software involving the processing of personal data. This implies, for example, that HR actors or data experts wishing to deploy an algorithm in the HR field must think about compliance with the provisions and principles set out above from the very first stages of the project.

The GDPR is a generic regulation, designed to cover all processing of personal data. As a result, its implementation in the specific area of HR may have caused, or may still cause, difficulties for companies. For example, the right to data portability, instituted by Article 20 of the regulation, raises practical questions and difficulties for employers. This right provides that individuals can retrieve all the information concerning them, and can also request its transfer to another structure (where technically possible). This requires employers to map all the personal data they hold on their employees and to provide an appropriate technical mechanism to make these data accessible to employees.

2. However, the many exceptions provided for in the same text partially render this principle meaningless (Wachter et al. 2016).
The right to rectification and the shortening of processing times also require appropriate mechanisms. For example, companies may benefit from switching to self-service portals on which employees can modify information about themselves, such as their address or family situation, by providing supporting documents.

5.1.2.2. Other possible ways to ensure data security and accuracy: the example of the blockchain

Beyond these regulatory or legal obligations, companies can take action to guarantee the security, but also the accuracy, of the data they hold on employees, both important issues for the HR function, as seen above. The blockchain seems promising in this respect. It is defined as a technology for storing and transmitting information without central control: a data register containing a public history of all exchanges and transactions between its users. This technology enabled the development of the virtual currency Bitcoin, since it allows the transfer of financial assets. But it also allows better product traceability, better data quality and the automatic execution of certain contracts. As a result, it has some potential in the HR field (Box 5.6).

The blockchain operates on the principle of the visibility and publicity of actions and transactions. These are recorded on thousands of servers simultaneously, which makes them all the more difficult to hack and corrupt. As a result, this technology could, for example, be used to certify and guarantee the accuracy of certain data, such as diplomas or descriptions of professional experience, which are widely used by the HR function, in recruitment in particular. Some companies, such as Gradbase, offer this service to educational institutions and companies.

Box 5.6. Using the blockchain to ensure the integrity and reliability of data in the HR field (sources: press articles; Baudoin et al. 2019)
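The tamper-evidence property that makes blockchains attractive for credential records can be sketched in a few lines of Python (a toy hash chain, not a real distributed blockchain: there is no consensus, replication or proof of work here). Each record embeds the hash of the previous one, so altering any past entry invalidates every later link.

```python
import hashlib
import json

def _digest(payload, prev_hash):
    """Deterministic hash of a record plus the previous record's hash."""
    blob = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def add_block(chain, payload):
    """Append a record chained to its predecessor's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": _digest(payload, prev_hash)})

def verify(chain):
    """Recompute every hash; any past modification breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != prev or block["hash"] != _digest(block["payload"], prev):
            return False
    return True

# Hypothetical diploma records being certified.
chain = []
add_block(chain, {"person": "X", "diploma": "MSc, 2015"})
add_block(chain, {"person": "Y", "diploma": "BA, 2018"})
print(verify(chain))                            # → True
chain[0]["payload"]["diploma"] = "PhD, 2015"    # tamper with a past record
print(verify(chain))                            # → False
```

This is only the integrity mechanism; what a service like Gradbase adds is the institutional layer: having trusted issuers (schools, employers) write the records in the first place.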

Pentland (2014), for his part, advocates the use of services that promote security and the protection of personal data, but also give individuals more power over, and visibility into, the data concerning them that services or applications mobilize (openPDS, for example).

Thus, the challenges of protecting personal data in HR are multiple, ranging from data security to the uses made of the data. These issues are becoming crucial with the HR function's emerging use of algorithms, which essentially rest on personal data processing since they ultimately require a return to the individual, as highlighted by the personalization objective studied in Chapter 2; reporting or analytics, by contrast, can be based on anonymous data. These issues have led to the definition or reinforcement of obligations for employers, who can also mobilize new technologies such as the blockchain to verify the accuracy and reliability of certain data themselves.

5.2. Quantification and discrimination(s)

The fight against discrimination is another essential ethical issue in the use of HR quantification. The links between quantification and discrimination are particularly complex and polymorphic. Quantification is often presented as a shield against discrimination, in particular because it makes it possible to reduce direct discrimination through formalization and standardization, but also because it offers tools for measuring discrimination and thus making it visible. However, other studies highlight the many risks of discrimination associated with the use of quantification, particularly in connection with the rise of so-called predictive algorithms.

5.2.1. Quantification as a shield against discrimination

Beyond the myth of objectivity that underlies some discourses on the contribution of quantification to the fight against discrimination, two main arguments support this point of view. First, quantification requires the formalization and standardization of criteria (e.g. evaluation and selection criteria), which can act as a shield against direct discrimination and some unconscious biases. Second, the illocutionary effect (Austin 1970; Espeland and Stevens 2008) of quantification plays a major role in this field: by allowing discrimination to be measured, quantification helps put this notion on public agendas and into public debate.

5.2.1.1. Quantification as a tool to reduce direct discrimination

In the 1960s, when the Civil Rights Act was enacted in the United States, HR experts and practitioners highlighted the role that aptitude tests and, more generally, quantified assessments could play in the fight against discrimination, but also as proof of the company's good faith in legal disputes over suspected discrimination (Dobbin 2009).

While this view was largely challenged in the following years by several court decisions holding that quantified tests could very well be discriminatory, the distinction between direct and indirect discrimination remains necessary to understand the contribution of quantified tests to the fight against discrimination. Direct discrimination refers to a situation where a person is treated less favorably on a prohibited ground (such as gender, disability or ethnic origin). Indirect discrimination refers to a situation in which an apparently neutral rule or system, applied in the same way to everyone, actually disadvantages a category of the population covered by a prohibited criterion. Some direct discrimination is in fact linked to unconscious biases (Box 5.7).


Kahneman (2015) has clearly explained the mechanism underlying unconscious biases. Our environment is very complex, which leads us to adopt procedures that simplify reality. Stereotypes are among them: they allow us to judge a situation or a person quickly, and therefore to make decisions in a short period of time. This process becomes problematic when it leads to decisions that almost systematically disadvantage certain categories of people (women, the elderly, people with disabilities, etc.). It is difficult to control, because some of the stereotypes or biases that accelerate our judgment and decision-making are unconscious.

Amadieu (2016) takes the example of physical appearance and shows that certain facial features can trigger positive or negative stereotypes in others: for example, a face that is wider than it is long inspires less confidence. These stereotypes are largely unconscious, in the sense that the person who trusts their interlocutor less because of the shape of their face is not necessarily aware of the source of this loss of trust.

Tests called implicit-association tests (IATs) have been developed to measure these unconscious biases. They generally require the user to quickly associate photos or first names with positive or negative words. The test then measures the ease with which certain types of profiles (women versus men, overweight versus thin people, etc.) are associated with certain types of words (expressing a positive or negative judgment).

Box 5.7. Unconscious discrimination and bias (sources: Kahneman 2015; Amadieu 2016)

These unconscious biases are by definition very difficult to control, but the formalization and standardization of criteria can be effective against them. Basing one's judgment solely on predetermined criteria prevents the expression of unconscious biases, provided that the criteria themselves are designed in advance so as not to reproduce those biases. In this context, the formalization and standardization provided by quantification approaches can help reduce unconscious biases.

Indeed, to avoid direct discrimination, it is sufficient not to include the prohibited discrimination criteria (depending on the country: gender, age, family status, health status, etc.) among the selection criteria³. The software used to select applications according to certain criteria is a good example of the formalization and standardization of criteria (Box 5.8).

Today, most large companies use software to sort CVs. Without going as far as automated sorting, these programs allow recruiters to sort or filter the CVs they receive according to the criteria they choose (level of diploma, number of years of professional experience, etc.). However, the information contained in CVs must first be standardized so that it can be sorted. Two solutions are possible. The company can ask candidates to fill in forms with pre-formatted fields. It can also acquire semantic analysis software that transforms unstructured CV data into structured, i.e. standardized, data: information such as the number of years of experience, diploma level and skills can be extracted automatically from the CV. In both cases, the recruiter must then select the criteria on which to sort or filter the CVs (e.g. to show only profiles with more than a certain number of years of experience). At this stage, it will be extremely difficult to justify sorting CVs on prohibited criteria, such as gender or age. In addition, certain information that constitutes a source of discrimination, such as the physical appearance of candidates in the CV photo (Amadieu 2016), cannot be processed by the software. As a result, the risks of direct discrimination when sorting CVs are limited.

Box 5.8. Software that turns CVs into standardized information (source: press articles)
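The sorting step described in Box 5.8 can be sketched as a filter over structured candidate records (the field names and thresholds below are illustrative, not those of any actual product). The point is that only the declared, formalized criteria enter the decision: prohibited attributes are simply absent from the comparison.

```python
# Structured records as a semantic-analysis tool might extract them
# from CVs (fields and values are hypothetical).
candidates = [
    {"id": 1, "years_experience": 7, "degree_level": 5},
    {"id": 2, "years_experience": 2, "degree_level": 3},
    {"id": 3, "years_experience": 10, "degree_level": 4},
]

# Formalized, declared selection criteria: minimum values per field.
criteria = {"years_experience": 5, "degree_level": 4}

def shortlist(cands, minima):
    """Keep candidates meeting every declared minimum.

    Prohibited criteria (gender, age, photo, etc.) never appear here:
    the formalization itself blocks their direct use.
    """
    return [c for c in cands
            if all(c[field] >= floor for field, floor in minima.items())]

print([c["id"] for c in shortlist(candidates, criteria)])  # → [1, 3]
```

Note that this only guards against *direct* discrimination: if a declared criterion such as degree level correlates with a protected characteristic, the indirect discrimination discussed in section 5.2.2 remains possible.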

Other approaches are part of the same attempt to reduce direct discrimination by formalizing criteria and using indicators, particularly quantified ones. Thus, mobilizing a regression model to identify the factors that determine performance makes it possible to avoid certain unconscious biases about the importance of a given characteristic (the diploma, for example). Some researchers or practitioners believe that this type of model allows a better selection of candidates when recruiting than human choice (Box 5.9).

3. In the next section, it will be seen that this does not, however, reduce the risk of indirect discrimination.


Selecting a person during recruitment involves, in part, seeking to predict the future performance of candidates. This prediction work can be carried out by a human being, on the basis of a set of clues made up of predetermined criteria (see Box 5.8), of their knowledge of the organization for which they are recruiting and of their recruitment experience, but also of their unconscious biases, as underlined previously. It can also be done by a machine: from data on current employees, the machine identifies the determinants of performance and their respective weights; recruiters then simply apply these weights to the variables they have on the candidates to predict their performance. A systematic literature review comparing humans and machines in predicting work performance shows that the machine is, on average and in the majority of cases, better than the human: the average correlation between predicted and actually measured professional performance is 0.44 for the machine and 0.28 for the human. Moreover, the machine remains better on average even when the human has a high degree of knowledge about the organization. According to the authors, this is because, while humans are more efficient at defining selection criteria and searching for information, they lose accuracy when they summarize all the information they have into a single figure (e.g. a performance prediction), particularly because they tend to give too much weight to irrelevant information.

Box 5.9. Using equations to recruit candidates (sources: Kuncel et al. 2013, 2014)
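The mechanical prediction Box 5.9 refers to can be sketched as an ordinary least-squares fit on current employees, applied unchanged to candidates. The data below are fabricated for illustration, and a real model would use several predictors and a validation sample; the point is only that the fitted equation applies the same weights to everyone, with no ad hoc adjustment.

```python
# Toy training data: (predictor score, measured performance)
# for current employees. Values are illustrative.
train = [(1.0, 2.1), (2.0, 2.9), (3.0, 4.2), (4.0, 4.8), (5.0, 6.1)]

# Closed-form simple linear regression (one predictor).
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in train)
         / sum((x - mean_x) ** 2 for x, _ in train))
intercept = mean_y - slope * mean_x

def predict(x):
    """Apply the fitted weights mechanically: the same rule for every candidate."""
    return intercept + slope * x

# A candidate scoring 3.5 on the predictor gets the same equation
# as everyone else.
print(round(predict(3.5), 2))
```

The contrast with human judgment lies exactly here: the model cannot quietly re-weight the criteria for a particular face or name, which is what gives mechanical prediction its average advantage in the studies cited.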

Finally, the use of quantification in recruitment or assessment processes can be a tool for reducing direct discrimination, as it requires a formalization of criteria and thus prevents the use, conscious or not, of prohibited criteria. Some relatively sophisticated tools even make it possible to define the influence of each selection criterion on the basis of existing data.

5.2.1.2. Quantification as a tool for measuring discrimination

Espeland and Stevens (2008) point out that quantification can make certain objects visible, or even create them. They describe this phenomenon as an "illocutionary" act, referring to Austin's (1970) analysis of language: "Numbers often help constitute the things they measure by directing attention, persuading, and creating new categories for apprehending the world" (Espeland and Stevens 2008, p. 404). Quantification can thus be used to measure inequalities or discrimination, and thereby reveal phenomena that are sometimes hidden in public debate. This is the case for feminists, who have used quantification to assess the time spent on domestic work (cooking, cleaning, childcare, etc.) and make it commensurable with paid work (Espeland and Stevens 1998). In their recruitment survey, Larquier and Marchal (2012) quantify the mobilization of discriminatory criteria by recruiters. Simple descriptive statistics can thus be used to identify situations of collective inequality (Box 5.10).

Companies must "report" on their diversity situation, in particular by publishing figures on their workforce. As a result, Silicon Valley companies are regularly singled out for the lack of diversity in their workforce. The start-up Blendoor, which specializes in inclusive recruitment, has created a database on the diversity of the American digital sector workforce. For example, in 2019, LinkedIn's executive committee included two women (22%) and no non-white individuals, and its board of directors included one woman (11%) and no non-white individuals. LinkedIn's workforce is 42% female, 3% black, 5% Latin American and 2% other non-white (Asian, for example). Blendoor then aggregates these different metrics into a single indicator, which allows companies to be compared with each other on a single scale.

Box 5.10. Descriptive statistics and discrimination (source: Blendoor website)

However, as seen in Chapter 1, these statistics may not be sufficient to attribute inequalities to discrimination. To overcome this limitation, researchers have proposed methodologies that seek to measure discriminatory treatment rather than inequality (Box 5.11).

One of the challenges in measuring discrimination and inequality is also to produce comparable measurements, so that, for example, companies can be compared with each other, making their situations "commensurable" (Espeland and Stevens 1998). This search for comparability runs up against the large number and variability of measures of inequality or discrimination: a company that measures inequalities between women and men using descriptive statistics will obtain very different results from one that measures them using regression methods or testing. Comparability therefore requires the definition of a single measure of inequality or discrimination, which serves as a standard for all companies. This is what the start-up Blendoor offers in the United States (see Box 5.10). In other countries, it is the government that has decided to implement this commensuration. In France, for example, the government has set up a "gender equality index" to guide companies in assessing their situation on the subject, but also to make companies comparable with each other and to sanction the most poorly positioned ones (Box 5.12).

Testing methods aim to compare the behaviors (selection or evaluation, for example) of individuals faced with two profiles that are identical except for one variable. Identical CVs, differing only in the single variable on which discriminatory behavior is to be tested (gender, age, place of residence, etc.), are sent to companies, and callback rates are measured. If the CVs of women, senior citizens or people living in disadvantaged urban areas receive lower callback rates than those of men, young people and people living in wealthier neighborhoods, this means that recruiters are engaging in discriminatory behavior. The value of this method is that it can measure the magnitude of the effect. Many discrimination factors can be tested this way: gender and age, of course, but also surname, or even physical appearance, by varying the photo on the CV using photo editing software.

Box 5.11. Testing methodologies (source: Amadieu 2016)
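The testing method of Box 5.11 reduces, analytically, to comparing callback rates on matched applications. A minimal sketch with fabricated counts, using a standard two-proportion z statistic to judge whether the observed gap could plausibly arise by chance:

```python
import math

def callback_rate(callbacks, sent):
    """Share of sent CVs that received a positive reply."""
    return callbacks / sent

def two_proportion_z(c1, n1, c2, n2):
    """z statistic for the difference between two callback rates
    (pooled-variance version)."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Fabricated illustrative counts: identical CVs, varying only the
# tested attribute (e.g. a typically majority vs. minority surname).
z = two_proportion_z(90, 500, 60, 500)
# |z| > 1.96 indicates a gap unlikely to arise by chance at the 5% level,
# i.e. measurable evidence of discriminatory treatment.
```

With these illustrative figures (18% vs. 12% callback), z is about 2.66, above the conventional 1.96 threshold, which is exactly the kind of magnitude statement the testing method is designed to produce.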

Since the beginning of 2019, large French companies have had to calculate and publish their "gender equality index". The precise formula for calculating this index was provided by the government in order to reduce the variability of calculation methods as much as possible. The index is composed of five indicators: the gender pay gap, the gap in pay-raise rates, the gap in promotion rates, the proportion of women receiving a raise on return from maternity leave (where raises were granted in the company during that period) and the presence of individuals of the under-represented sex among the 10 highest-paid employees. Each of these indicators gives rise to a score, and the company's final score is the sum of the five scores. The government focused its communication on several points, including the idea that the system forces companies to report their situation transparently. The argument is that this single measure facilitates the monitoring of companies and confers on them obligations of result, and no longer only of means.

Box 5.12. The gender equality index in France and the commensurability of company situations
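The commensuration logic of Box 5.12 can be sketched as a sum of capped sub-scores on a 100-point scale. The maxima below follow the scale commonly reported for the French index, but the official decree should be consulted for the authoritative values, and the company figures are invented:

```python
# Maximum points per indicator (broadly matching the published French
# scale; check the official text before relying on these values).
MAX_POINTS = {
    "pay_gap": 40,
    "raise_rate_gap": 20,
    "promotion_rate_gap": 15,
    "maternity_raises": 15,
    "top_ten_earners": 10,
}

def equality_index(scores):
    """Sum the sub-scores into one 0-100 figure, making firms commensurable."""
    for name, value in scores.items():
        if not 0 <= value <= MAX_POINTS[name]:
            raise ValueError(f"{name} score out of range")
    return sum(scores.values())

# Invented sub-scores for an illustrative company.
company = {"pay_gap": 37, "raise_rate_gap": 20, "promotion_rate_gap": 15,
           "maternity_raises": 15, "top_ten_earners": 5}
print(equality_index(company))  # → 92
```

The single figure is precisely what enables both ranking (comparing firms on one scale) and sanctioning (a legal threshold on the total), which is the commensuration effect the text describes.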


Quantifying Human Resources

Finally, the use of quantification makes it possible to measure discriminations, and thus to make them known, or even to bring them to light.

5.2.2. The risks of discrimination related to the use of quantification

However, many authors have denounced the risks of discrimination caused by the use of quantification. Dobbin (2009) thus lists court decisions that highlight the potentially discriminatory nature of standardized tests and the risk of indirect discrimination they generate. More recently, O’Neil (2016) showed how the increasing use of algorithms may reinforce existing discrimination and inequality. More specifically, the use of quantification for selection can lead to indirect discrimination. In addition, the use of so-called predictive tools, for example in the context of recruitment, evaluation or mobility, can lead to a reproduction of the profiles sought and valued, responsible for a lasting exclusion of other profiles.

5.2.2.1. The overvaluation of certain measurable criteria and the risk of indirect discrimination

As early as the 1970s, following the Civil Rights Acts in the United States, legal experts became interested in the potential discriminatory biases of standardized and quantified recruitment tests. Dobbin (2009) lists several expert reports and research studies that attest to this concern. For example, in 1966, a special issue of Personnel Psychology dedicated to this subject highlighted the fact that aptitude or skills tests, which often favor the most educated candidates, tended to disadvantage black populations; similarly, in 1968, a report (Report of the National Commission on Civil Disorders) pointed out that the high prevalence of diploma requirements also tended to disadvantage them, due to their lower school enrolment rate. In both cases, these arguments highlight the risk of indirect discrimination caused by the use of standardized quantified tests, i.e. the use of apparently neutral criteria (diplomas, for example) which actually disadvantage a population on a prohibited criterion (the black population in the United States suffered in the 1960s from a lower school enrolment rate than the white population).

More precisely, the links between quantification and discrimination can be analyzed from four angles. First, quantification tends to overvalue measurable criteria at the expense of information that is non-quantifiable or more difficult to collect. This is what Cathy O’Neil (2016) points out throughout her book on algorithms. She denounces, for example, the fact that algorithms for granting motor insurance use information related to the sound financial management of policyholders as a criterion for setting rates or accepting applications. Similarly, some states in the United States have used student success on a standardized test to assess teachers, without taking into account other factors that may play a role in student success (such as social background). In both cases, the criteria used are quantified information that is easy to obtain. However, O’Neil questions their relevance for deciding on an insurance rate or a teacher assessment, and argues that the use of such criteria tends to reinforce inequalities, since, for example, people already in financial difficulty may have to pay more for their car insurance. Proponents of algorithms and Big Data reply that the force of this argument decreases with the explosion in the amount and variety of data available on human beings, but it is still reasonable to assume that some parts of human life will always remain unquantifiable.

Second, the choice of criteria measured by quantification operations raises the question of who designs these tests. This question is becoming increasingly important in the debate on algorithms. Indeed, the population of algorithm designers is mainly composed of young, highly educated white males, which can influence the technical and methodological choices involved in creating algorithms: an algorithm, or more generally a quantification operation, partly reflects the values of the people who construct and implement it (Chiapello and Walter 2016; CNIL 2017; Villani 2018).

Third, in the specific case of learning algorithms, the bias may be found in the data provided to the algorithm, which may then reproduce it.
For example, experiments have shown that Google’s AdSense advertising system presents lower-paid job offers to female Internet users than to male Internet users, for similar levels of qualification and experience (CNIL 2017). Datasets may also contain a representativeness bias: the training of image recognition techniques on photos of mostly white faces may have led to difficulties in recognizing the faces of non-white people4.

Fourth, these tests tend to contribute to a certain reproduction of profiles. Indeed, defining precise selection criteria, and assigning weights to these criteria, creates a risk of standardization and therefore a lack of diversity in the selected profiles.

4 See, for example, www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html (accessed October 2019).

5.2.2.2. The predictive approach and the risk of profile replication

The latter phenomenon takes on particular importance when quantification is used for prediction and even prescription purposes. The risk of confining users to ever-similar types of content has been amply highlighted for content suggestion algorithms (Cardon 2015). The same risk can be identified for algorithms used in the HR field, such as those for pre-sorting CVs, recruitment, or job or training suggestions (CNIL 2017). Indeed, these algorithms learn from past and potentially biased data. For example, if an algorithm learns from a database of all recruitments made by a technical company in recent years and that database contains very few recruitments of women, it will tend to exclude women from recruitment. Similarly, if the individuals recruited share broadly the same degree levels or backgrounds, the algorithm will reproduce this bias.

This first factor simply shows that the algorithm may reproduce existing biases if its designers do not work to eliminate them. However, another risk can be identified: that of creating new biases. A well-intentioned data scientist might want to build their algorithm so as to identify the best observable predictors of employee performance while controlling for biases potentially present in the learning data, which is a first step in the fight for diversity, as mentioned in the previous section. In doing so, they can exclude discrimination criteria such as gender or age. At the same time, however, the algorithm thus constructed will produce a “standard profile” of the ideal candidate or employee, and will therefore risk always recruiting the same type of profile, which will ultimately prove detrimental to team diversity (Box 5.13).
The notion of diversity goes beyond the notion of non-discrimination. Indeed, diversity also refers to valuing the variety of profiles, skills and backgrounds. It is no longer just a question of paying attention to official criteria of discrimination (gender, age, origin, etc.), but also of seeking to recruit more multicultural profiles, and above all of enabling them to express their full potential and originality without hindering them.


However, while the use of quantification, and in particular of a predictive approach to quantification (recruitment algorithms, for example), can prove an asset in the fight against discrimination, as we have seen, it can on the contrary be harmful to the promotion of diversity. Indeed, an algorithm can learn from the data which profiles are the most efficient, potentially freeing itself from unconscious biases, but this step will then lead it to define the profile of the ideal employee, and it will then risk always recruiting the same type of employee. Admittedly, if the algorithm was designed to control for existing biases in the data, this ideal employee may no longer be a white male of about 30 years of age with a degree from a prestigious school... But the fact remains that there will be only one standard profile, and that the algorithm will always tend to recruit this profile. This phenomenon then contradicts the search for diversity of profiles, multiculturalism and the promotion of differences. One solution could be to ask the algorithm to take into account the composition of the team in which the recruitment is made, and to guarantee a form of diversity... But the risk then shifts to the team level: this could lead to the creation of teams that are always similar, which again goes against the promotion of diversity.

Box 5.13. Recruitment algorithms, anti-discrimination and team diversity (source: Bender 2004)
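The bias-reproduction mechanism discussed above can be made concrete with a deliberately crude sketch: a frequency-based “recruiter” trained on biased historical decisions simply re-derives the bias from the data. The data, the threshold and the model are entirely hypothetical and only illustrative:

```python
# Hypothetical historical hiring decisions: (gender, hired) pairs.
# Women are under-represented among past hires, a bias the "model"
# will learn straight back from the data.
history = ([("M", True)] * 80 + [("M", False)] * 120
           + [("F", True)] * 5 + [("F", False)] * 95)

def hire_rate(records, gender):
    """Empirical probability of being hired, given gender, in the data."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def naive_model(gender):
    """A 'predictive' rule that just replays historical frequencies."""
    return hire_rate(history, gender) >= 0.2  # arbitrary cut-off

print(hire_rate(history, "M"))  # 0.4
print(hire_rate(history, "F"))  # 0.05
print(naive_model("M"), naive_model("F"))  # True False
```

Real recruitment algorithms are of course far more sophisticated, but the structural point is the same: whatever regularity the historical data contains, including a discriminatory one, becomes the learned decision rule unless the designers actively correct for it.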

Moreover, the fact that the algorithm learns from past or present data may lock it into an insufficiently dynamic view of the labor market. Indeed, the high-performance profiles of today or of recent years are not necessarily the high-performance profiles of tomorrow, something that today’s data does not necessarily reveal.

5.3. Opening the “black box” of quantification

Beyond the issue of discrimination, many actors agree that it is important for human beings to maintain some form of control over algorithms and other quantification tools (CNIL 2017). At the societal level, many actors advocate opening the “black box” of quantification (Villani 2018) to allow a public debate on the relevance and construction of algorithms that are becoming increasingly important in daily and civic life. In companies, the opening of the “black box” of algorithms used in HR is also called for in the “Ethics & Digital HR” charter co-signed by the CNIL in France. This charter requires that:


– the algorithmic processing of the data is carried out in a “transparent manner, after informing the data subjects of the use of their personal data and the purpose of the processing”;

– the logic of the algorithm can be explained to the people who will use it to make decisions (HR managers, for example); this implies explaining the data and variables used by the algorithm, and the margins of error of the results produced.

However, these prerequisites can be extended to all HR quantification tools. This requires several complementary strategies. A first strategy is essentially based on training and information for the actors directly concerned, in particular the HR function, employees and employee representatives. A second strategy mobilizes organizational leverage, by encouraging the creation of new functions and responsibilities in the company, and by promoting the formalization of key principles underlying the use of HR quantification.

5.3.1. Training HR actors, employees and their representatives as well as data experts on HR quantification

The first strategy is based on training and information for HR actors, employees, their representatives and data experts. HR actors face essential challenges: maintaining a minimum level of control over quantification operations carried out in the HR field, supporting and monitoring the implementation of quantification tools that can lead to the automation of certain jobs or activities, and at the same time exercising judgment in the use of these tools. Employees, for their part, can legitimately ask to be trained and informed about how these tools work, and how to use them in order to maintain room for maneuver and a form of empowerment. Training and informing staff representatives seems necessary to ensure a democratic debate on the subject and to promote the creation of safeguards. Finally, it may be useful to train data experts in HR issues so that they can integrate them when building tools and algorithms.

5.3.1.1. Training the HR function to enable it to maintain informed control over quantification

The HR function currently suffers from a skills deficit in the field of quantification (analysis and interpretation of data, but also a vision of what is possible or not with the data, for example), which is partly explained by the fact that initial training in the HR field includes relatively few courses on the subject. This contrasts with initial marketing training, which has long given high priority to data analysis techniques. It is only recently that initial HR training courses have begun, tentatively, to integrate this type of skill into their curricula.

This lack of data analysis skills could lead the HR function to delegate responsibility for the design of quantification tools entirely to external actors not trained in HR, such as data scientists. Such delegation could have several disadvantages. First of all, it would risk a loss of power of the HR function over its own domain, especially as quantification tools come to occupy an important place. Second, it could limit the consideration of specific HR issues on which data scientists are neither trained nor made aware, such as the fight against discrimination or the trade-off between the company’s strategy and the needs of employees. Finally, it could create a form of mistrust between employees who want explanations for a particular decision and an HR function unable to provide them. The skills gap of the HR function could also lead to another scenario, in which the HR function retains control of the quantification tools but risks developing and misusing them for lack of the necessary skills. This second scenario would also have several disadvantages, including a decrease in the performance of the HR function and an increase in employee mistrust of it. Training the HR function in data analysis therefore seems essential to ensure an informed use of quantification in this field.
Two complementary types of policy are possible: on the one hand, companies could encourage initial HR training programs to offer modules on quantification; on the other hand, they could train their own HR department on the subject. In both cases, this training would aim to give HR actors the keys to open the quantification black box. For the HR function, this would include, among other things:

– a better command of the rules of data analysis and interpretation;

– a better understanding of the limitations of quantification tools;

– stepping back from the myth of objective quantification, but also from the idea that “numbers can be made to say anything”;

– a better understanding of the functioning and progress of the algorithms used, and even being able to make proposals on the subject;


– being able to discuss with data specialists and bring an HR perspective to the table.

Some actors thus propose creating a kind of “license to use algorithms” (here, more generally, quantification tools), which would be based on training in these different issues (CNIL 2017).

The increasing mobilization of HR algorithms contributes to the emergence of another challenge: some of these algorithms remain opaque to their designers themselves (Villani 2018). Thus, algorithms that involve deep learning retain some mystery even for the data scientists who design and use them (Box 5.14).

For a long time, programming an algorithm consisted of defining decision-making rules that the machine had to apply scrupulously. It was therefore easy to explain the reasons for a decision (e.g. if you score below a given threshold on a particular test, you are not selected). However, the most effective learning techniques today, namely deep learning techniques such as neural networks, do not work this way. It is the algorithm itself that finds and defines the decision-making rules based on the input data. The difficulty in explaining how the algorithm arrives at these rules is partly related to the number of dimensions taken into account: while a rule-based algorithm mobilizes a finite and controlled number of rules, a deep learning algorithm used, for example, for image recognition receives thousands of pixels as input and learns from hundreds of thousands of parameters. It therefore becomes impossible to follow the path of the algorithm through these thousands of parameters to the final decision to classify the image.

Box 5.14. The black box of deep learning (source: CNIL 2017; Villani 2018)
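The contrast drawn in Box 5.14 can be illustrated numerically: an explicit decision rule carries its own explanation, whereas even a modest fully connected network already holds on the order of a hundred thousand learned parameters, none of which maps to a human-readable rule. The threshold and layer sizes below are illustrative assumptions, not values from the source:

```python
def rule_based_decision(test_score):
    """Explicit rule: the reason for the outcome is the rule itself."""
    return "selected" if test_score >= 60 else "rejected"

def parameter_count(layer_sizes):
    """Number of weights and biases in a fully connected network."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(rule_based_decision(72))              # selected
# A small network for 28x28-pixel images: 784 inputs, two hidden layers.
print(parameter_count([784, 128, 64, 10]))  # 109386
```

Explaining why the first function rejected a candidate takes one sentence; explaining why the second architecture classified an image requires tracing a path through more than a hundred thousand parameters, which is the black-box problem the box describes.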

While in some cases it is not necessarily crucial to understand how an algorithm arrived at its final decision, this is not true of the HR domain, where, as we have seen, quantification serves above all to justify the decisions made. The most radical option is therefore to refuse to use tools whose reasoning or functioning cannot be explained.


5.3.1.2. Training employees to enable them to play with quantification tools

For their part, employees are directly confronted with decisions or operating methods based on the use of quantification. As a result, they can legitimately ask to be informed and trained on the subject. This is also part of the GDPR’s commitments: Article 15(1) provides that individuals be explicitly informed of the use made of their personal data (Box 5.15).

Article 15(1) of the GDPR concerns the user’s right to information regarding the processing of their personal data. It contains several provisions: first of all, individuals must be informed that their personal data are being processed. If this is the case, they must then be informed about:

– the purposes of the processing;

– the categories of personal data concerned;

– the recipients or categories of recipient to whom the personal data have been or will be disclosed;

– where possible, the envisaged period for which the personal data will be stored;

– the existence of the right to request from the controller rectification or erasure of personal data;

– the right to lodge a complaint with a supervisory authority;

– where the personal data are not collected from the data subject, any available information as to their source;

– the existence of automated decision making where appropriate, as well as the underlying logic of such automation and its consequences for the data subject.

Box 5.15. Article 15(1) of the GDPR

Beyond the GDPR, which only concerns the processing of personal data, informing and training employees more generally on quantification tools seems necessary to guarantee them room for maneuver and the ability to play with these tools.

Christin (2017) identifies several types of strategy available to workers confronted with quantification tools, in this case algorithms. The first strategy is simply to ignore them, as might an employee who, receiving training suggestions through an algorithm, does not take them into account. The second strategy consists of playing with the tools, manipulating them in a way (Espeland and Sauder 2007), as might a candidate who, knowing that CV pre-selection software is used, fills his or her CV with keywords relevant to the software. The third strategy is to be openly critical of the tool. These three complementary strategies ensure a form of worker autonomy and freedom with respect to the tool. However, the second and third strategies require an understanding of how the tool works. This is where employee training and information become essential: by helping employees to appropriate these tools, they ensure that a certain room for maneuver is maintained.

5.3.1.3. Training staff representatives to establish a democratic debate on the subject

Employee representatives can also play an important role in the definition and implementation of HR quantification tools. In the course of this book, several examples have been presented showing their role in the discussion and consideration of these tools. They can thus play three major roles:

– questioning the relevance of quantification tools and the resulting interpretations or results (see Chapter 3, Box 3.10);

– being proactive in defining and implementing new quantification tools (see Chapter 2, Box 2.6);

– opposing the implementation of quantification tools deemed dangerous for the social climate, thus acting as a safeguard.

However, the importance of this role depends strongly on their expertise in the area, which makes it necessary to train them on the subject. This training can come either from the trade unions, which see quantification as a major issue in the positioning of their representatives, or from companies, which may consider it valuable to be able to dialogue with representatives trained on the subject.
The training of representatives is not enough: they must also be sufficiently informed about the projects implemented by the company. The aim here is to ensure that the company implements consultation and information practices for representatives on the subject of quantification. Some companies have committed to this (Box 5.16).


In 2016, Orange signed an agreement on supporting digital transformation. This agreement includes many provisions related to digital technology (analysis of the impact on jobs, taking into account the risks of the digital divide, the right to disconnect, etc.). It also provides for a mechanism for informing and consulting employee representatives, by creating a “National Council for Digital Transformations”, composed of representative trade union organizations and members of management, which meets at least twice a year. One of the functions of this council is to ensure the proper transmission of information between management and employee representatives. For example, management must inform representatives of the implementation of new digital devices (such as algorithms) for employees.

Box 5.16. A digital agreement providing for the official information of trade unions (source: Jeannin and Riche 2017)

The Villani report (2018) even recommends integrating elements related to artificial intelligence and the use of algorithms into the mandatory negotiations of companies in France, at different levels (company, sector, national).

5.3.1.4. Training data specialists on HR issues

Finally, as the CNIL’s report (2017) recommends, it is essential to also train data specialists in HR issues. Training them in the fight against discrimination, in taking diversity into account, in the arbitration between the needs of the company and those of employees, and in listening to employees seems important to enable them to take these issues into account when designing algorithms. The designers of quantification tools occupy a particularly sensitive place in the production and use chain of these tools, especially since the technical nature of their profession can generate a form of opacity and make it difficult to control their choices and actions. For example, the designer of a quantification tool can hide behind technical constraints to justify a particular choice, which may be difficult for an HR actor or employee to question, even if they have been trained on the subject. It is therefore imperative that these designers themselves be made aware of, among other things, the ethical and legal aspects.

Finally, training and information for all actors involved in the production and use of quantification tools, whatever their degree of sophistication (from reporting to algorithms), seem necessary to guarantee a form of social democracy, or at least a collective and informed debate, within the company.


5.3.2. Mobilizing organizational leverage

However, these training strategies are probably not sufficient to ensure that the multiplicity of organizational issues related to the use of quantification is properly taken into account. Mobilizing organizational leverage is an equally essential second option. Thus, the creation of new functions and responsibilities within the company can lead to a better consideration of these issues, while the formalization of major principles can clarify the rights and duties of each individual.

5.3.2.1. Creating new functions and responsibilities

The creation of new internal company functions and responsibilities on the subject of HR quantification is an approach recommended by several actors. Thus, the GDPR provides for the creation of a position of Data Protection Officer (DPO) in each company (Articles 37–39). This officer is responsible for ensuring compliance with the rules on the processing of personal data, and must therefore inform, advise and train all persons responsible for processing data (called “data managers”) within the company (Box 5.17).

Article 37 of the GDPR provides for the creation of a DPO position within organizations. Article 38 defines the DPO’s position, and Article 39 defines the DPO’s tasks, which are as follows:

– inform and advise the data managers and the employees who carry out data processing of their obligations pursuant to the Regulation;

– monitor compliance with the Regulation and relevant community or national provisions;

– advise the various actors, in particular on impact assessments;

– cooperate with national supervisory authorities;

– provide a point of contact between national supervisory authorities and organizations.

Box 5.17. The role of the DPO (source: GDPR)


However, the DPO is only concerned with the processing of personal data, whereas the use of quantification is much broader, covering in particular the processing of anonymized or aggregated data. Other bodies can therefore be planned to cover the use of quantification more broadly. Thus, companies could create or strengthen the “Ethics” function (CNIL 2017). This function could be organized around cross-functional committees, or linked to the CSR function, or even directly to the HR function. Its role would include monitoring compliance with ethical requirements in the implementation of quantification tools, but also producing ethical rules or principles to be respected (ethical charters or codes of conduct, for example).

Organizations could also use some form of audit of quantification tools. This solution, recommended for algorithms by many actors (CNIL 2017; Villani 2018), seems particularly promising for limiting the potential adverse effects of quantification tools (discrimination, loss of employee autonomy, loss of responsibility of the HR function). This could include, for example, the creation of expert committees bringing together HR actors, volunteer employees, staff representatives and data specialists, meeting regularly to discuss the different quantification tools used by the organization’s HR function. These audits could also include a systematic first step of testing the tools with a limited sample of employees, in order to better visualize the results that can be obtained and how they can be used.

5.3.2.2. Formalizing the main principles

The creation of these new responsibilities can be accompanied by the definition and formalization of key principles (e.g. in an ethical charter or code of conduct) guiding the design and use of HR quantification tools. Several texts now propose major principles related to the processing of personal data (as is the case with the GDPR), or to algorithms or artificial intelligence (CNIL 2017; Villani 2018).
It seems important to extend these principles to all uses of quantification, from reporting to algorithms. Five major principles can be identified: transparency, auditability, loyalty, vigilance/reflexivity and responsibility.

The first principle refers to transparency, i.e. the possibility of explaining all the elements integrated in a quantification tool: the data collected, the statistical processing methods, the types of results obtained, and how these results are used. This transparency, which thus refers to a form of intelligibility of the tools and requires educational efforts on the part of their designers, can be addressed to the different links in the chain presented in the previous section: HR actors, employees and staff representatives. Transparency is necessary so that HR actors can gain full knowledge of the tools and question their methodological and technical choices and implications. It is also important so that employees can maintain room for maneuver, play and criticism with respect to the tools. Finally, it allows employee representatives to exercise a form of control and supervision on the subject.

The second principle, which requires and reinforces the first, refers to the auditability of quantification tools. More precisely, it is a question of providing for the possibility of auditing the way in which quantification tools are constructed, the technical and methodological choices on which they are based, the results to which they lead and the way in which they are used. This principle could be implemented by setting up committees of experts, either internal or external to the organization, but also by creating public platforms (online, for example), accessible to all employees of the company, describing all the quantification tools used and allowing each employee, whether expert or layperson, to give an opinion on them. In Germany, the Algorithmic Accountability Lab, created in 2017, could be an example to follow: this laboratory draws on both the hard sciences and the human and social sciences to analyze and popularize the functioning of existing algorithms for the general public, to measure the effect of algorithms on individuals and society, and to suggest ways of creating algorithms that are more ethical, transparent and auditable.

The third principle corresponds to loyalty.
This principle, already formulated in some European countries and reinforced by the GDPR, refers to the notion of good faith of actors mobilizing quantification tools and prohibits in particular the misuse of these tools (CNIL 2017). It presupposes, for example, that the relevance of the criteria used by the tools, and the correct information of users, can be justified. The GDPR incorporates this principle in Article 5: the aim is to ensure that the processing operations carried out correspond to the information given to individuals. As already mentioned (Chapter 3), Cardon (2018) insists on the transition from a historical obligation of tool neutrality to an obligation of loyalty. While the search for neutrality is rendered futile by the fact that a quantification tool most often corresponds to a selection of information for the purpose of reducing or modeling reality, the search for loyalty questions the alignment between the discourse delivered by the organization or the designers about the tool and the tool’s reality. Thus, this principle partly covers the principle of transparency, while extending it to the notion of good faith of organizations, designers and users.

The fourth principle refers to vigilance and reflexivity. This methodological principle aims to encourage tool designers and users to continually consider the following points:

– What are the data protection risks?

– What are the effects of the tool on individuals and the organization?

– What are the ethical implications (particularly in relation to discrimination, but also to employee autonomy)?

– What are the possible diversions of the tool?

– What possibilities do the different actors have to question the tool?

Finally, the fifth principle, which partly encompasses the first four, refers to the responsibility of the organization and the various actors involved. Indeed, we have seen that the use of quantification tools can lead in some cases to a dilution or displacement of liability (see Chapter 3 in particular). The dilution is explained by the large number of actors in the chain of design and use of quantification tools, from data specialists to first-level managers, for example. This dilution makes it possible to shift responsibility onto other actors. Moreover, in some cases, actors may be tempted to shift responsibility not to others but to the tool itself, thus attributing to it a form of power surrounded by the various myths about quantification already presented. It is therefore necessary to reaffirm the responsibility of all the actors in the chain, and of the organization more generally. One way to ensure this is to affirm not only the obligation of human intervention in decision making based on quantification, but also the responsibility of the actors in this decision making. Thus, the GDPR provides for the prohibition of fully automated decision making, but the many exceptions to this principle probably make further reflection on the subject necessary (Wachter et al. 2016).
Moreover, if a human being is called upon to make a decision based on a quantification tool, it is probably appropriate to give them some leeway in this decision-making process, so as to give them some responsibility. Finally, while treating quantification tools as neutral and unavoidable forces amounts to abdicating part of our responsibility as human beings (O’Neil 2016), a more constructivist discourse, recalling the social, political and even ideological foundations of the methodological and technical choices governing their design and use, allows us to emphasize human responsibility.

This chapter focused on the ethical issues raised by the use of HR quantification. These issues fall into three main areas. The first refers to the protection of personal data. It is now covered, at least in the European Union, by several texts (including the GDPR) that recall the obligations of companies in this area. The second concerns diversity and the fight against discrimination. Indeed, while quantification can be an interesting tool for clarifying the decisions taken – thus limiting unconscious biases and direct discrimination – and for measuring discrimination, risks of indirect discrimination and of the reproduction of similar profiles can also be identified. Finally, the last issue concerns the opening of the quantification “black box”, which is essential if the various actors are to maintain a form of autonomy, but also of responsibility, in the use of quantification tools. I chose to close this chapter with some practical recommendations for organizations and the HR function. Finally, it seems necessary that debates on quantification figures and tools be conducted by all the actors concerned, including HR actors, and not exclusively by quantification technicians.

Conclusion

The conclusion of this book is divided into two parts. The first part (section C.1) summarizes the progression of each chapter and the overall purpose of the book. The second part (section C.2) integrates all the elements presented in the book into a proposed framework for the analysis of HR quantification tools.

C.1. Summary of the book

The objective of this book was to provide an overview of the uses of HR quantification, as well as of the different discourses and theories concerning these uses and tools (Figure C.1). Thus, the first chapter was devoted to three main uses of quantification. The first element of this chapter was a reflection on the statisticalization of reality that underlies quantification tools: the quantification of the human being (aptitude tests or performance evaluation) and of work (job classification, for example). The next two elements were articulated around more specific tools. Reporting and dashboards were presented as the first and most common use of HR quantification, followed by HR analytics, i.e. the use of more advanced statistical techniques to better understand HR phenomena. It was pointed out that these two uses are part of an evidence-based management (EBM) approach, which is more assertive in the case of HR analytics. Finally, the increasing use of algorithms and the emergence of the notion of “Big Data” were the subject of the last section of the chapter. The aim was to give concrete examples of this third use of quantification, but also to discuss the specificities of Big Data in the HR field and in relation to reporting and dashboards.

Figure C.1 summarizes the book chapter by chapter:
– Chapter 1, overview of the different uses of quantification: statisticalization of individuals and work; reporting and analytics; Big Data and algorithms.
– Chapter 2, link between decision making and quantification: a limited myth of objectivity, but one necessary for the HR function.
– Chapter 3, appropriation of quantification in HR by actors: for management, a rationalization tool; for employees, mistrust with regard to the use of quantification.
– Chapter 4, effects on the positioning of the HR function: a tool at the service of the evaluation and legitimization of the function, but also a vector of automation of HR activities.
– Chapter 5, ethical issues linked to the use of quantification in HR: personal data protection; diversity and antidiscrimination; opening the “black box” of quantification.

Figure C.1. Summary of the work

Quantifying Human Resources: Uses and Analyses, First Edition. Clotilde Coron. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.

Chapters 2–4 were devoted to specific questions on the relationship between quantification and HR. Chapter 2 focused on the links between HR decision making and quantification. It began by highlighting the myth of objectivity and its limitations, and then the importance of objectivity for the HR function, which probably explains why this myth is maintained within organizations despite the limitations outlined. The chapter then addressed two important developments for both statistical science and the HR function: the use of quantification for personalization and for prediction purposes.

Chapter 3 then examined the appropriation of HR quantification tools by the different actors in the organization. It made a deliberately schematic distinction between management and HR, on the one hand, and employees and their representatives, on the other. It highlighted the fact that, since the beginning of the 20th century, management has regularly seen quantification as a tool for managerial rationalization. Three examples of this link between quantification and managerial rationalization were studied in chronological order of their appearance: bureaucracy, New Public Management and algorithmic management. The following sections of the chapter were devoted to the potential difficulties of appropriation of these tools by employees and their representatives. They highlighted the obstacles to the provision of personal data by employees, as well as employees’ mistrust of disembodied decision making based essentially on a quantification that may seem opaque because of its technical nature.

Chapter 4 then raised the question of the positioning of the HR function in relation to quantification. In particular, it highlighted the important ambiguities of this positioning, linked to the variety of the effects of quantification for the HR function. On the one hand, quantification can be used to evaluate HR policies and their effects, which can contribute to the legitimization of the HR function and of its action within the organization. On the other hand, more recent developments – particularly in terms of algorithms – also create a risk of automation of certain HR activities, which may encourage the HR function to implement actions aimed at limiting the harmful consequences of this automation.

Chapter 5 mobilized the lessons and examples from the previous chapters to address the very broad ethical issues involved in the use of HR quantification. These challenges are of several kinds: the protection of personal data, diversity and the fight against discrimination, and finally the opening of the “black box” of quantification.
The choice was made to close this chapter with recommendations for companies, in particular by formulating five main principles that can guide the design, deployment and use of HR quantification tools: transparency, auditability, loyalty, vigilance/reflexivity and responsibility. In my opinion, these principles should help ensure that the ethical issues related to this subject are taken into account.

C.2. Toward an analytical framework for HR quantification

In the introduction, it was highlighted that “HR quantification” covers three practices or situations: the quantification of individuals, of work and of the activity of the HR function. Even if these three practices have their own specificities, illustrated throughout the chapters, I hope to have also highlighted the cross-cutting challenges they share: the link with the myth of objectivity, the use by the HR function, the appropriation by the different actors and the ethical questions. The aim is now to integrate all these elements into a model for analyzing HR quantification tools.

To do this, I began by considering that these quantification tools are management tools (Chiapello and Gilbert 2013). Indeed, they are objects or devices that the HR function uses in its management activity. The theoretical framework proposed by Chiapello and Gilbert (2013) to analyze management tools can therefore be transposed to quantification tools. This framework suggests analyzing management tools from three angles: functional, structural and procedural.

The functional angle refers to the utility of the tool: what is it used for? A management tool is thus defined in part by the links it has with the management activity and, ultimately, with the organization’s performance. This angle therefore raises the question of the effects of the tool, but also of the discourse of the various actors – in particular the management function – on these effects. These effects can be of several types: forecasting, organization, coordination and control, for example.

The structural angle corresponds to the structure of the management tool, i.e. its materiality and its existence as an object. A management tool thus mobilizes a certain number of materials (e.g. a dashboard, a database, software) that partially structure the actors’ practices. In addition, the management tool processes a certain amount of information, which can relate to individuals, things, resources, actions and results.

Finally, the procedural angle refers to the way in which the tool is used by the different actors. This angle thus focuses on the variations in the appropriation of the tool by the people who use it, and on the possible gap between the use planned by the tool’s designers and its actual use.

Based on the elements of the previous chapters, examples and information are provided here for each of these angles with regard to the different HR quantification tools (Table C.1): statisticalization of individuals and work, reporting and analytics, Big Data and algorithms. In the following, details are given on the table, specifying to which chapters of the book these elements refer. With regard to the functional dimension, i.e. the objectives assigned to the tools, the first tool, which is the basis of the other two and which


involves the statisticalization of reality, is characterized by a great diversity of purposes (Chapter 1): forecasting (in the case of aptitude tests, for example), organizing (work measurement in the context of Taylorism), managing work (individual evaluation) and legitimizing the action of the HR function by measuring effects. This variety of objectives is explained by the fact that the statistical representation of reality is the essential prelude to the other two quantification tools (reporting/analytics and algorithms). Reporting and analytics focus first on measurement (Chapter 1): measuring phenomena, the implementation of HR policies and their effects. However, this immediate objective must not obscure more indirect goals (Chapters 2 and 4): informing decision making and basing it on quantified elements (what has been called the EBM approach), but also legitimizing the action of the HR function by measuring the effects of its actions and establishing quantified links between these effects and the organization’s performance (the rhetoric of the business case). Finally, Big Data and the use of HR algorithms are characterized by a predictive and personalized approach (prediction of resignations, personalized training suggestions), but also by new possibilities of organizing work by algorithms, as in the case of platforms such as Uber (Chapters 1–3).

Statisticalization of reality:
– Functional dimension: measuring reality; forecasting future performance (e.g. aptitude tests); organizing and coordinating work (e.g. Taylorism); managing work (e.g. individual evaluation); legitimizing the action of the HR function (e.g. measuring effects).
– Structural dimension: types of data mobilized: structured data on individuals or work (e.g. measuring the time spent on each task); types of data produced: individual indicators (e.g. score on an aptitude test) or aggregated indicators (e.g. job classification).
– Procedural dimension: use by management and HR: myth of statistics and data as neutral representations of reality; employee vision: mistrust, refusal to provide data (e.g. data on an internal social network).

Reporting and analytics:
– Functional dimension: measuring a phenomenon and evaluating it (e.g. gender inequalities); measuring the implementation of an HR policy (e.g. monitoring indicators); measuring the performance of the HR function (e.g. result indicators); shedding light on decision making (e.g. the EBM approach); legitimizing the HR function (e.g. business case).
– Structural dimension: types of data mobilized: structured data on individuals or work (e.g. individual characteristics, absenteeism); types of data produced: aggregated indicators, results of calculations (e.g. averages, medians, effects “all things being equal”).
– Procedural dimension: use by management and HR: myth of objectivity, rationalization (e.g. NPM); vision of employee representatives: myth of objectivity, but also discussion of interpretations.

Big Data, algorithms:
– Functional dimension: predicting individual behaviors or HR risks (e.g. prediction of resignations); personalizing services (e.g. personalized training suggestions); organizing work (e.g. management by algorithms).
– Structural dimension: types of data mobilized: structured or unstructured data on individuals or work; types of data produced: individual information (e.g. prediction of a candidate’s future performance) or collective information (e.g. analysis of the social climate using an internal social network).
– Procedural dimension: use by management and HR: rationalization (e.g. algorithmic management), risk of automation of the HR function (e.g. CV sorting algorithms); employee vision: mistrust with regard to disembodied decision making.

Table C.1. The functional, structural and procedural dimensions of quantification tools (theoretical framework borrowed from Chiapello and Gilbert (2013))
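Among the data produced by reporting and analytics, the table mentions effects “all things being equal”, i.e. effects estimated while controlling for other variables. As a purely illustrative sketch, not taken from the book and using entirely invented data and parameters, the difference between a raw average pay gap and such a regression-adjusted gap can be shown in a few lines of Python:

```python
import numpy as np

# Illustrative sketch only: all figures below are invented.
# Pay is made to depend on tenure only, but women are given lower
# average tenure, which creates a raw pay gap between the groups.
rng = np.random.default_rng(0)
n = 1000
woman = rng.integers(0, 2, n)                         # 1 = woman, 0 = man
tenure = rng.uniform(0, 20, n)                        # years of tenure
tenure = np.where(woman == 1, tenure * 0.7, tenure)   # invented tenure penalty
pay = 30000 + 1000 * tenure + rng.normal(0, 2000, n)  # pay depends on tenure only

# Raw gap: simple difference of group means.
raw_gap = pay[woman == 0].mean() - pay[woman == 1].mean()

# "All things being equal" gap: ordinary least squares coefficient on
# the 'woman' indicator once tenure is controlled for.
X = np.column_stack([np.ones(n), woman, tenure])      # intercept, woman, tenure
beta, *_ = np.linalg.lstsq(X, pay, rcond=None)
adjusted_gap = -beta[1]

print(f"raw gap: {raw_gap:.0f}; adjusted gap: {adjusted_gap:.0f}")
```

In this invented setup, the raw gap is large because women have lower average tenure, while the adjusted gap is close to zero once tenure is controlled for: this difference between the two figures is precisely what an “all things being equal” analysis aims to isolate.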

The structural dimension applied to HR quantification refers to the data used and produced by these tools (Chapter 1). Thus, the statisticalization of reality, as well as reporting and analytics, mobilize structured data, generally at the individual level, on work and activity. However, while the statisticalization of reality can produce both individual (aptitude test results) and aggregate indicators (job classification), reporting and analytics most often produce aggregate indicators (averages and average effects). Algorithms and Big Data, for their part, can accept unstructured data as input, which is not the case for reporting and analytics. In addition, they produce information at an individual level (prediction of the risk of resignation, personalized suggestions for training or positions, or even the allocation of routes to Uber drivers) or at a collective level (analysis of the social climate through comments on an internal social network).

Finally, the procedural dimension transposed to HR quantification can refer to the appropriation of these tools by the different actors (Chapters 2 and 3). Thus, while the HR function and management may see quantification as a tool for objective decision making, but also for managerial rationalization, employees and their representatives may demonstrate a certain mistrust. This mistrust may result in particular in the refusal to provide personal data, and can be explained, among other things, by the form of disembodiment or disempowerment of decisions that the extensive use of quantification can generate. In the case of reporting and analytics, employees have relatively little access to the figures produced, which are generally reserved for dialogue with employee representatives. However, as we have seen, these representatives can both adhere to the myth of objective quantification and retain a certain autonomy in interpreting the figures. Finally, the case of Big Data and algorithms raises another challenge for the HR function, linked to the risks of automating certain HR activities.

This table therefore integrates many of the elements outlined in the previous chapters. However, it does not cover the elements presented in the last chapter, which relate to ethical issues.
Moreover, by applying to HR quantification a framework that refers more generally to management tools, it tends to obscure the specificities of HR quantification. The proposal is therefore to complete this table with a diagram that includes the different dimensions, adding the ethical angle (Chapter 5) but also the specificities of quantification compared with other management tools (Figure C.2).


Figure C.2. Theoretical framework for analyzing HR quantification

The diagram begins by taking up Table C.1 and the three dimensions of the analytical framework for management tools proposed by Chiapello and Gilbert (2013). However, it adds an ethical dimension (Chapter 5): the protection of personal data, anti-discrimination and diversity policies, and the opacity of the tools. The combination of these four dimensions then allows me to identify four main specificities of HR quantification compared with other management tools.

The first specificity, highlighted in particular in the introduction to the book but reappearing as a common thread throughout the chapters, refers to the fact that HR quantification concerns the human being, which presents particular challenges, notably methodological and ethical. The second specificity, largely highlighted in Chapter 2, refers to the myth of objective quantification. This myth, although challenged by many studies, is particularly tenacious, notably in the field of HR, which can be explained by the fact that the HR function needs to be able to legitimize its decisions, actions and policies. The third specificity, which Chapter 5 examines in detail, corresponds to the highly technical nature of quantification, which requires specific knowledge and skills. This technicality can create a form of opacity for the users of quantification tools, particularly HR actors and employee representatives, as well as for those affected by these tools, particularly employees. This specificity introduces challenges of popularization and of training the actors. Finally, the fourth specificity is linked to the abundance of actors around the HR quantification chain, from designers to users. This abundance can lead to a dilution of responsibilities, as Chapter 5 has shown. It therefore introduces the need to redefine human responsibilities.

At the end of this journey, I hope to have drawn up a sufficiently broad, if not exhaustive, overview of the uses of HR quantification and of the many questions they raise. I would like to conclude by proposing some avenues for reflection.

The first concerns the notion of responsibility. Chapter 5 highlighted the risks of a dilution of responsibility, and Chapter 3 the importance of maintaining a form of embodiment of decision making. However, it currently seems difficult to define precisely the scope of each actor’s responsibility and to establish rules to prevent the implementation of quantification tools that would make those actors totally unaccountable. A thorough reflection on the subject could help both organizations and the HR function to define these frameworks.

The second concerns the skills of the HR function, but also of employees, employee representatives and the designers of quantification tools. Chapter 5 mentioned the need for each actor to develop their skills. However, it remains illusory to imagine that every actor could be an expert in both data analysis and HR issues. This raises the question of the skills really needed by each category of actors to ensure that all the issues are properly taken into account.

The third concerns the differences between national legal contexts on issues related to discrimination and to the protection of personal data. There are now major differences on these subjects, for example between the United States and the European Union. These disparities can lead to significant variations in the type of quantification tools produced and implemented within organizations.
Proposing a truly international framework on these two subjects – even if at this stage it remains a utopia in view of the many difficulties it would raise – could constitute a major challenge in the coming decades, particularly because of the increasing internationalization of organizations.

The fourth is that the increasing use of quantification creates interfaces between the HR function and other corporate functions, and even society more generally. Thus, quantification gives the HR function new contacts within the company: information systems, IT, statistics, etc. It also requires the HR function to establish partnerships with external actors, for example concerning data storage or the proposal of new services for employees. Finally, it creates new regulatory obligations for the HR function, particularly in relation to the protection of personal data and ethics.

Finally, the fifth concerns the variations that could be observed within the HR function, between its different departments or major HR processes. This question has been sidestepped here by referring to the HR function as a whole, or to each department concerned where necessary, without going into the potential systematic differences between departments. The major processes (recruitment, career management or training) have unequal opportunities to use quantification, in particular because they do not have the same amount of data. However, if quantification becomes a source of legitimacy within the company, the HR departments or processes least able to mobilize quantification tools could lose legitimacy to those that are richer in data and in opportunities to use these data. This question therefore calls for further research.

References

ACKER J., Doing Comparable Worth: Gender, Class, and Pay Equity, Temple University Press, Philadelphia, 1989.
ALTBACH P., “The dilemmas of ranking”, International Higher Education, vol. 42, pp. 2–3, 2015.
AMADIEU J.-F., La société du paraître : Les beaux, les jeunes... et les autres, Odile Jacob, Paris, 2016.
AMBLER T., BARROW S., “The employer brand”, Journal of Brand Management, vol. 4, no. 3, pp. 185–206, 1996.
AMBROSE M.L., SCHMINKE M., “The role of overall justice judgments in organizational justice research: A test of mediation”, Journal of Applied Psychology, vol. 94, no. 2, pp. 491–500, 2009.
AMINTAS A., JUNTER A., “L’égalité prise au piège de la rhétorique managériale”, Cahiers du Genre, vol. 2, no. 47, pp. 103–122, 2009.
ANGRAVE D., CHARLWOOD A., KIRKPATRICK I. et al., “HR and analytics: Why HR is set to fail the big data challenge”, Human Resource Management Journal, vol. 26, no. 1, pp. 1–11, 2016.
ARNAUD S., FRIMOUSSE S., PERETTI J.M., “Gestion personnalisée des ressources humaines : Implications et enjeux”, Management & Avenir, vol. 8, no. 28, pp. 294–314, 2009.
AUSTIN J.L., Quand dire, c’est faire, Le Seuil, Paris, 1970.
BARABEL M., LAMRI J., MEIER O. et al., Innovations RH : Passer en mode digital et agile, Dunod, Paris, 2018.
BARLEY S.R., KUNDA G., “Design and devotion: Surges of rational and normative ideologies of control in managerial discourse”, Administrative Science Quarterly, vol. 37, no. 3, pp. 363–399, 1992.


BARRAUD DE LAGERIE P., “Objectiver la qualité sociale”, in VATIN F. (ed.), Évaluer et valoriser : Une sociologie économique de la mesure, Presses universitaires du Mirail, Toulouse, pp. 229–245, 2013.
BAUDOIN E., DIARD C., BENABID M. et al., Transformation digitale de la fonction RH : Panorama et analyse des pratiques, repères pour une mise en oeuvre opérationnelle, Dunod, Paris, 2019.
BAZIN A., “Nouvelles technologies et technologies mobiles : Un levier de la performance organisationnelle et de développement du domaine RH/e-RH ?”, Management & Avenir, vol. 7, no. 37, pp. 263–281, 2010.
BEER D., The Data Gaze: Capitalism, Power and Perception, SAGE Publications, New York, 2018.
BEHAGHEL L., CRÉPON B., BARBANCHON T.L., Évaluation de l’impact du CV anonyme, Study report, 2011.
BEHAGHEL L., Lire l’économétrie, La Découverte, Paris, 2012.
BELLO-ORGAZ G., JUNG J.J., CAMACHO D., “Social big data: Recent achievements and new challenges”, Information Fusion, vol. 28, pp. 45–59, 2016.
BELORGEY N., “Pourquoi attend-on aux urgences ?”, Travail et emploi, vol. 133, pp. 25–38, 2013a.
BELORGEY N., “Offrir les soins à l’hôpital avec mesure”, in VATIN F. (ed.), Évaluer et valoriser : Une sociologie économique de la mesure, Presses universitaires du Mirail, Toulouse, pp. 77–93, 2013b.
BENDER A.-F., “Égalité professionnelle ou gestion de la diversité. Quels enjeux pour l’égalité des chances ?”, Revue française de gestion, vol. 4, no. 151, pp. 205–217, 2004.
BERRY M., Une technologie invisible – L’impact des instruments de gestion sur l’évolution des systèmes humains, École polytechnique, CRG-1133, Paris, 1983.
BEUSCART J.-S., CARDON D., PISSARD N. et al., “Pourquoi partager mes photos de vacances avec des inconnus ?”, Réseaux, vol. 2, no. 154, pp. 91–129, 2009.
BLANCHARD S., BONI-LE GOFF I., RABIER M., “Une cause de riches ? L’accès des femmes au pouvoir économique”, Sociétés contemporaines, vol. 1, no. 89, pp. 101–130, 2013.
BLINDER A.S., “Wage discrimination: Reduced form and structural estimates”, The Journal of Human Resources, vol. 8, no. 4, pp. 436–455, 1973.
BOSWELL W.R., BOUDREAU J.W., “Employee satisfaction with performance appraisals and appraisers: The role of perceived appraisal use”, Human Resource Development Quarterly, vol. 11, no. 3, pp. 283–299, 2000.


BOUDREAU J.W., RAMSTAD P.M., Talentship and the Evolution of Human Resource Management: From “Professional Practices” to “Strategic Talent Decision Science”, CEO Publication, University of Southern California, 2004.
BOUDREAU J.W., LAWLER E.E., Talent Analytics Measurement and Reporting: Building a Decision Science or Merely Tracking Activity?, CEO Publication, University of Southern California, 2014.
BOUDREAU J.W., LAWLER E.E., Making Talent Analytics and Reporting into a Decision Science, CEO Publication, University of Southern California, 2015.
BOURGUIGNON A., CHIAPELLO È., “The role of criticism in the dynamics of performance evaluation systems”, Critical Perspectives on Accounting, vol. 16, no. 6, pp. 665–700, 2005.
BOUSSARD V., “L’incontournable évaluation des performances individuelles : Entre l’invention d’un modèle idéologique et la diffusion de dispositifs pratiques”, Nouvelle revue de psychosociologie, vol. 2, no. 8, pp. 37–52, 2009.
BRUNO I., “La déroute du ‘benchmarking social’. La coordination des luttes nationales contre l’exclusion et la pauvreté en Europe”, Revue française de socio-économie, vol. 5, no. 1, pp. 41–61, 2010.
BRUNO I., “‘Faire taire les incrédules’. Essai sur les figures du pouvoir bureaucratique à l’ère du benchmarking”, in HIBOU B. (ed.), La bureaucratisation néolibérale, La Découverte, Paris, pp. 103–128, 2013.
BRUNO I., “Défaire l’arbitraire des faits. De l’art de gouverner (et de résister) par les ‘données probantes’”, Revue française de socio-économie, no. 2, pp. 213–227, 2015.
CADIN L., GUÉRIN F., PIGEYRE F. et al., GRH, gestion des ressources humaines : Pratiques et éléments de théories, Dunod, Paris, 2012.
CALLON M., “What does it mean to say that economics is performative?”, in MACKENZIE D., MUNIESA F., SIU L. (eds), Do Economists Make Markets?, Princeton University Press, Princeton, pp. 311–357, 2007.
CARDON D., À quoi rêvent les algorithmes : Nos vies à l’heure des big data, Le Seuil, Paris, 2015.
CARDON D., “Le pouvoir des algorithmes”, Pouvoirs, vol. 1, no. 164, pp. 63–73, 2018.
CASTILLA E.J., “Gender, race, and meritocracy in organizational careers”, American Journal of Sociology, vol. 113, no. 6, pp. 1479–1526, 2008.
CAWLEY B.D., KEEPING L.M., LEVY P.E., “Participation in the performance appraisal process and employee reactions: A meta-analytic review of field investigations”, Journal of Applied Psychology, vol. 83, no. 4, pp. 615–633, 1998.


CERCLE SIRH, Le SIRH : Enjeux, bonnes pratiques et innovation, Vuibert, Paris, 2017.
CHAPPOZ Y., PUPION P.-C., “Le new public management”, Gestion et management public, vol. 1, no. 2, pp. 1–3, 2012.
CHARBONNIER-VOIRIN A., LAGET C., VIGNOLLES A., “L’influence des écarts de perception de la marque employeur avant et après le recrutement sur l’implication affective des salariés et leur intention de quitter l’organisation”, Revue de gestion des ressources humaines, vol. 3, no. 93, pp. 3–17, 2014.
CHIAPELLO È., GILBERT P., Sociologie des outils de gestion : Introduction à l’analyse sociale de l’instrumentation de gestion, La Découverte, Paris, 2013.
CHIAPELLO È., WALTER C., “The three ages of financial quantification: A conventionalist approach to the financiers’ metrology”, Historical Social Research, vol. 41, no. 2, pp. 155–177, 2016.
CHRISTIN A., “Algorithms in practice: Comparing web journalism and criminal justice”, Big Data & Society, vol. 4, no. 2, pp. 1–14, 2017.
CLAISSE C., DANIEL C., NABOULET A., Les accords collectifs d’entreprise et plans d’action en faveur des salariés âgés : Une analyse de 116 textes, DARES Study document, no. 157, 2011.
CLEVELAND J.N., BYRNE Z.S., CAVANAGH T.M., “The future of HR is RH: Respect for humanity at work”, Human Resource Management Review, vol. 25, no. 2, pp. 146–161, 2015.
CLOT Y., Le travail à cœur : Pour en finir avec les risques psychosociaux, La Découverte, Paris, 2015.
CNIL, Comment permettre à l’homme de garder la main ? Les enjeux éthiques des algorithmes et de l’intelligence artificielle, CNIL, Paris, 2017.
COLQUITT J.A., SCOTT B.A., RODELL J.B. et al., “Justice at the millennium, a decade later: A meta-analytic test of social exchange and affect-based perspectives”, Journal of Applied Psychology, vol. 98, no. 2, pp. 199–236, 2013.
CORON C., “La définition des indicateurs sociaux, entre recherche d’objectivation et enjeux de pouvoir : Le cas de l’égalité professionnelle”, Gestion 2000, vol. 35, no. 3, p. 109, 2018a.
CORON C., “Quels effets des mesures d’égalité professionnelle, en fonction de leur difficulté d’appropriation ? Une étude de cas”, Revue de gestion des ressources humaines, vol. 4, no. 110, pp. 41–53, 2018b.
CORON C., PIGEYRE F., “La négociation collective sur l’égalité professionnelle : Une négociation intégrative ?”, Gérer & Comprendre, vol. 2, no. 132, pp. 41–54, 2018.


CORON C., PIGEYRE F., “L’appropriation des politiques d’égalité professionnelle par les acteurs : Éléments de contexte et conditions”, Management International, 2019. COSSETTE M., LEPINE C., RAEDECKER M., “Mesurer les résultats de la gestion des ressources humaines : Principes, état des lieux et défis à surmonter pour les professionnels RH”, Gestion, vol. 39, no. 4, pp. 44–65, 2014. CROPANZANO R., AMBROSE M.L., “Procedural and distributive justice are more similar than you think: A monistic perspective and a research agenda”, in GREENBERG J., CROPANZANO R. (eds), Advances in Organizational Justice, Stanford University Press, Stanford, pp. 119–150, 2001. CROPANZANO R., BOWEN D.E., GILLILAND S.W., “The management of organizational justice”, Academy of Management Perspectives, vol. 21, no. 4, pp. 34–48, 2007. CROSBY A.W., La mesure de la réalité : La quantification dans la société occidentale, 1250-1600, Allia, Paris, 2003. CROZIER M., Le phénomène bureaucratique : Essai sur les tendances bureaucratiques des systèmes d’organisation modernes et sur leurs relations en France avec le système social et culturel, Le Seuil, Paris, 1963. CROZIER M., FRIEDBERG E., L’acteur et le système : Les contraintes de l’action collective, Le Seuil, Paris, 1996. DE VAUJANY F.-X., De la conception à l’usage : Vers un management de l’appropriation des outils de gestion, EMS, Cormelles-le-Royal, 2005. DE VAUJANY F.-X., “Pour une théorie de l’appropriation des outils de gestion : Vers un dépassement de l’opposition conception-usage”, Management & Avenir, vol. 3, no. 9, pp. 109–126, 2006. DEHON C., MCCATHIE A., VERARDI V., “Uncovering excellence in academic rankings: A closer look at the Shanghai ranking”, Scientometrics, vol. 83, no. 2, pp. 515–524, 2010. DEJOURS C., L’évaluation du travail à l’épreuve du réel. Critique des fondements de l’évaluation, INRA, Paris, 2003. DEJOURS C., “Aliénation et clinique du travail”, Actuel Marx, vol. 1, no. 39, pp. 123–144, 2006. 
DEMING D.J., “The growing importance of social skills in the labor market”, The Quarterly Journal of Economics, vol. 132, no. 4, pp. 1593–1640, 2017.
DESROSIÈRES A., La Politique des grands nombres. Histoire de la raison statistique, La Découverte, Paris, 1993.
DESROSIÈRES A., Pour une sociologie historique de la quantification. L’Argument statistique I, Presses des Mines, Paris, 2008a.


DESROSIÈRES A., Gouverner par les nombres. L’Argument statistique II, Presses des Mines, Paris, 2008b.
DESROSIÈRES A., “Quelques commentaires au prisme d’une carrière dans la statistique publique”, in VATIN F. (ed.), Évaluer et valoriser : Une sociologie économique de la mesure, Presses universitaires du Mirail, Toulouse, pp. 287–303, 2013.
DIAZ-BONE R., THEVENOT L., “La sociologie des conventions. La théorie des conventions, élément central des nouvelles sciences sociales françaises”, Trivium, no. 5, pp. 1–16, 2010.
DIAZ-BONE R., “Convention theory, classification and quantification”, Historical Social Research, vol. 41, no. 2, pp. 48–71, 2016.
DIDIER E., “La statistique ou une autre façon de représenter une nation”, in THIERY O., HOUDART S. (eds), Humains, Non-Humains, La Découverte, Paris, pp. 91–100, 2011.
DIETRICH A., PIGEYRE F., La gestion des ressources humaines, La Découverte, Paris, 2011.
DOBBIN F., Inventing Equal Opportunity, Princeton University Press, Princeton, 2009.
DUBAR C., “Identités collectives et individuelles dans le champ professionnel”, in DE COSTER M., PICHAULT F. (eds), Traité de sociologie du travail, 2nd edition, De Boeck Supérieur, Paris, pp. 385–401, 1998.
DUJARIER M.-A., “L’automatisation du jugement sur le travail. Mesurer n’est pas évaluer”, Cahiers internationaux de sociologie, vol. 128–129, pp. 135–159, 2010.
ERDOGAN B., “Antecedents and consequences of justice perceptions in performance appraisals”, Human Resource Management Review, vol. 12, no. 4, pp. 555–578, 2002.
EREVELLES S., FUKAWA N., SWAYNE L., “Big Data consumer analytics and the transformation of marketing”, Journal of Business Research, vol. 69, no. 2, pp. 897–904, 2016.
ESPELAND W.N., STEVENS M.L., “Commensuration as a social process”, Annual Review of Sociology, vol. 24, no. 1, pp. 313–343, 1998.
ESPELAND W.N., SAUDER M., “Rankings and reactivity: How public measures recreate social worlds”, American Journal of Sociology, vol. 113, no. 1, pp. 1–40, 2007.
ESPELAND W.N., STEVENS M.L., “A sociology of quantification”, European Journal of Sociology, vol. 49, no. 3, pp. 401–436, 2008.
EYMARD-DUVERNAY F., “Conventions de qualité et formes de coordination”, Revue économique, vol. 40, no. 2, pp. 329–360, 1989.


FARAJ S., PACHIDI S., SAYEGH K., “Working and organizing in the age of the learning algorithm”, Information and Organization, vol. 28, no. 1, pp. 62–70, 2018.
FOUCAULT M., Surveiller et punir, Gallimard, Paris, 1975.
FOX W., Statistiques sociales, De Boeck Supérieur, Brussels, 1999.
FREY C.B., OSBORNE M., “The future of employment”, Technological Forecasting and Social Change, vol. 114, pp. 254–280, 2017.
GARDEY D., Écrire, calculer, classer : Comment une révolution de papier a transformé les sociétés contemporaines (1800-1940), La Découverte, Paris, 2008.
GARVIN D.A., WAGONFELD A.B., KIND L., Google’s Project Oxygen: Do managers matter?, Harvard Business School Case 313-110, 2013.
GILBERT P., YALENIOS J., L’évaluation de la performance individuelle, La Découverte, Paris, 2017.
GOULD S.J., La mal-mesure de l’homme, Odile Jacob, Paris, 1997.
GREASLEY K., BRYMAN A., DAINTY A. et al., “Employee perceptions of empowerment”, Employee Relations, vol. 27, no. 4, pp. 354–368, 2005.
GRIMAND A., “L’appropriation des outils de gestion et ses effets sur les dynamiques organisationnelles : Le cas du déploiement d’un référentiel des emplois et des compétences”, Management & Avenir, vol. 4, no. 54, pp. 237–257, 2012.
GRIMAND A., “La prolifération des outils de gestion : Quel espace pour les acteurs entre contrainte et habilitation ?”, Recherches en sciences de gestion, vol. 1, no. 112, pp. 173–196, 2016.
GUEST D.E., “Human resource management and performance: Still searching for some answers”, Human Resource Management Journal, vol. 21, no. 1, pp. 3–13, 2011.
HACKING I., Leçon inaugurale au Collège de France, Paris, 2001, available at: https://www.college-de-france.fr/media/ian-hacking/UPL7027195376715508431_Le_on_inaugurale_Hacking.pdf.
HACKING I., Philosophie et histoire des concepts scientifiques, Cours au Collège de France, Paris, 2005.
HANSEN H.K., FLYVERBOM M., “The politics of transparency and the calibration of knowledge in the digital age”, Organization, vol. 22, no. 6, pp. 872–889, 2015.
HAVARD C., “Transformations du travail opérées ‘au nom du client’ et gestion des ressources humaines”, in BEAUJOLIN-BELLET R., LOUART P., PARLIER M. (eds), Le travail, un défi pour la GRH, Éditions du réseau ANACT, Lyon, pp. 156–173, 2008.
HEVELKE A., NIDA-RÜMELIN J., “Responsibility for crashes of autonomous vehicles: An ethical analysis”, Science and Engineering Ethics, vol. 21, no. 3, pp. 619–630, 2015.


HUBAULT F., “Le travail dans la gestion : Tensions et contradictions”, in BEAUJOLIN-BELLET R., LOUART P., PARLIER M. (eds), Le travail, un défi pour la GRH, Éditions du réseau ANACT, Lyon, pp. 22–41, 2008.
HULIN A., LEBEGUE T., RENAUD S., “Les attentes différenciées des talents selon le sexe : Une approche par la justice procédurale et la justice distributive”, Revue de gestion des ressources humaines, vol. 1, no. 103, pp. 40–54, 2017.
HUTEAU M., LAUTREY J., Les tests d’intelligence, La Découverte, Paris, 2006.
JEANNIN H., RICHE L., “Négocier un accord sur le numérique, un exercice de longue haleine : L’exemple d’Orange”, La Revue des conditions de travail, no. 6, pp. 111–121, 2017.
JEPSEN D.M., RODWELL J., “Female perceptions of organizational justice”, Gender, Work & Organization, vol. 19, no. 6, pp. 723–740, 2012.
JONAS H., Le principe responsabilité : Une éthique pour la civilisation technologique, 3rd edition, Flammarion, Paris, 1995.
JUVEN P.-A., Une santé qui compte ? Les coûts et les tarifs controversés de l’hôpital public, PUF, Paris, 2016.
KAHNEMAN D., Système 1, système 2 : Les deux vitesses de la pensée, Flammarion, Paris, 2015.
KESSOUS E., “Le marketing des traces et la transaction des attentions”, in KESSOUS E. (ed.), L’Attention au monde. Sociologie des données personnelles à l’ère numérique, Armand Colin, Paris, pp. 59–76, 2012a.
KESSOUS E., “Le marketing de la segmentation et la captation de l’attention”, in KESSOUS E. (ed.), L’Attention au monde. Sociologie des données personnelles à l’ère numérique, Armand Colin, Paris, pp. 49–58, 2012b.
KHILJI S.E., WANG X., “‘Intended’ and ‘implemented’ HRM: The missing linchpin in strategic human resource management research”, The International Journal of Human Resource Management, vol. 17, no. 7, pp. 1171–1189, 2006.
KITCHIN R., “Big Data, new epistemologies and paradigm shifts”, Big Data & Society, vol. 1, no. 1, pp. 1–12, 2014.
KITCHIN R., MCARDLE G., “What makes Big Data, Big Data? Exploring the ontological characteristics of 26 datasets”, Big Data & Society, vol. 3, no. 1, pp. 1–10, 2016.
KUNCEL N.R., KLIEGER D.M., CONNELLY B.S. et al., “Mechanical versus clinical data combination in selection and admissions decisions: A meta-analysis”, Journal of Applied Psychology, vol. 98, no. 6, pp. 1060–1072, 2013.
KUNCEL N.R., ONES D.S., KLIEGER D.M., “In hiring, algorithms beat instinct”, Harvard Business Review, 2014.


LARQUIER G. (DE), MARCHAL E., “La légitimité des épreuves de sélection : Apports d’une enquête statistique auprès des entreprises”, in EYMARD-DUVERNAY F. (ed.), Épreuves d’évaluation et chômage, Octarès Éditions, Toulouse, pp. 47–77, 2012.
LAVAL F., GUILLOUX V., “Impact de l’implantation d’un SIRH sur la GRH d’une PME : Une étude longitudinale contextualiste et conventionnaliste”, Management & Avenir, vol. 7, no. 37, pp. 329–350, 2010.
LAWLER E.E., LEVENSON A., BOUDREAU J.W., “HR metrics and analytics: Use and impact”, Human Resource Planning, vol. 27, pp. 27–35, 2010.
LE BIANIC T., ROT G., “Cadrer les cadres”, in VATIN F. (ed.), Évaluer et valoriser : Une sociologie économique de la mesure, Presses universitaires du Mirail, Toulouse, pp. 155–174, 2013.
LE LOUARN J.Y., Les tableaux de bord. Ressources humaines : Le pilotage de la fonction RH, Liaisons, Rueil-Malmaison, 2008.
LEE M.K., KUSBIT D., METSKY E. et al., “Working with machines: The impact of algorithmic and data-driven management on human workers”, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems – CHI’15, pp. 1603–1612, 2015.
LEE M.K., “Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management”, Big Data & Society, vol. 5, no. 1, pp. 1–16, 2018.
LEMIERE S., SILVERA R., Comparer les emplois entre les femmes et les hommes : De nouvelles pistes vers l’égalité salariale, La Documentation française, Paris, 2010.
LEVENSON A., COHEN S., VAN DER STEDE W.A., Measuring the Impact of a Managerial Competency System: Does Identifying and Rewarding Potential Leaders Improve Organizational Performance?, CEO Publications, University of Southern California, 2004.
LEVENSON A., “Using workforce analytics to improve strategy execution”, Human Resource Management, vol. 57, no. 3, pp. 685–700, 2018.
LEVITT S.D., DUBNER S.J., Freakonomics, Denoël, Paris, 2006.
LINHART R., L’établi, Les Éditions de Minuit, Paris, 1980.
LITTLE B., LITTLE P., “Employee engagement: Conceptual issues”, Journal of Organizational Culture, Communications and Conflict, vol. 10, no. 1, pp. 111–120, 2006.
LOWRIE I., “Algorithmic rationality: Epistemology and efficiency in the data sciences”, Big Data & Society, vol. 4, no. 1, pp. 1–13, 2017.


MACKENZIE D., “Is economics performative? Option theory and the construction of derivatives markets”, Journal of the History of Economic Thought, vol. 28, pp. 29–55, 2006.
MARCHAL E., BUREAU M.-C., “Incertitudes et médiations au cœur du marché du travail”, Revue française de sociologie, vol. 50, no. 3, pp. 573–598, 2009.
MARCHAL E., Les embarras des recruteurs : Enquête sur le marché du travail, Éditions EHESS, Paris, 2015.
MARLER J.H., BOUDREAU J.W., “An evidence-based review of HR analytics”, The International Journal of Human Resource Management, vol. 28, no. 1, pp. 3–26, 2017.
MAYER-SCHÖNBERGER V., CUKIER K., Big Data : La révolution des données est en marche, Robert Laffont, Paris, 2014.
MCCOURT W., “Paradigms and their development: The psychometric paradigm of personnel selection as a case study of paradigm diversity and consensus”, Organization Studies, vol. 20, no. 6, pp. 1011–1033, 1999.
MEULDERS D., PLASMAN R., RYCX F., “Les inégalités salariales de genre : Expliquer l’injustifiable ou justifier l’inexplicable”, Reflets et perspectives de la vie économique, vol. 2, no. 44, pp. 95–107, 2005.
MINTZBERG H., Structure et dynamique des organisations, Éditions d’Organisation, Paris, 1982.
MORGANA L., “Un précurseur du New Public Management : Henri Fayol (1841-1925)”, Gestion et management public, vol. 1, no. 2, pp. 4–21, 2012.
NOËL F., WANNENMACHER D., “Peut-on dépasser la discorde dans les situations de restructuration ? Quatre cas visités à l’aune de la sociologie de la justification”, @GRH, vol. 1, no. 2, pp. 63–91, 2012.
OAXACA R., “Male-female wage differentials in urban labor markets”, International Economic Review, vol. 14, no. 3, pp. 693–709, 1973.
OBRADOVIC I., BECK F., “Plus précoces et moins sanctionnés ? Usages des statistiques dans les discours sur les jeunes face aux drogues”, Mots. Les langages du politique, no. 100, pp. 137–152, 2012.
OLLION É., “L’abondance et ses revers. Big data, open data et recherches sur les questions sociales”, Informations sociales, no. 191, pp. 70–79, 2015.
OLLION É., BOELAERT J., “Les sciences sociales et la multiplication des données numériques”, Sociologie, vol. 3, no. 6, pp. 1–20, 2015.
O’NEIL C., Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, New York, 2016.


PANCZUK S., POINT S., “Introduction”, in Enjeux et outils du marketing RH. Promouvoir et vendre les ressources humaines, Éditions d’Organisation, Paris, pp. 1–6, 2008.
PAYE S., “Postface : ‘Un travail de fourmi’”, in MENGER P.-M., PAYE S. (eds), Big data et traçabilité numérique : Les sciences sociales face à la quantification massive des individus, Collège de France, Paris, pp. 185–215, 2017.
PENTLAND A., Social Physics: How Good Ideas Spread – The Lessons from a New Science, The Penguin Press, New York, 2014.
PERTINANT G., RICHARD S., STORHAYE P., Analytique RH : Démarche, bénéfices, défis, EMS, Paris, 2017.
PEYRAT B., La publicité ciblée en ligne, Communication, CNIL, 2009.
PICHAULT F., NIZET J., Les pratiques de gestion des ressources humaines : Approches contingente et politique, Le Seuil, Paris, 2000.
PIGEYRE F., SABATIER M., “Les carrières des femmes à l’université : Une synthèse de résultats de recherche dans trois disciplines”, Politiques et management public, vol. 28, no. 2, pp. 219–234, 2011.
PIOTROWSKI C., ARMSTRONG T., “Current recruitment and selection practices: A national survey of Fortune 1000 firms”, North American Journal of Psychology, vol. 8, no. 3, pp. 489–496, 2006.
PORTER T.M., Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton University Press, Princeton, 1996.
RABIER M., Analyse du contenu des accords d’entreprise portant sur l’égalité professionnelle entre les femmes et les hommes signés depuis la loi du 23 mars 2006, DARES, 2009.
RAGUSEO E., “Big data technologies: An empirical investigation on their adoption, benefits and risks for companies”, International Journal of Information Management, vol. 38, no. 1, pp. 187–195, 2018.
RANDALL D.M., FERNANDES M.E., “The social desirability response bias in ethics research”, Journal of Business Ethics, vol. 10, pp. 805–817, 1991.
RASMUSSEN T., ULRICH D., “Learning from practice: How HR analytics avoids being a management fad”, Organizational Dynamics, vol. 44, no. 3, pp. 236–242, 2015.
REMILLON D., VERNET A., “De l’inaptitude à l’inemployabilité”, in VATIN F. (ed.), Évaluer et valoriser : Une sociologie économique de la mesure, Presses universitaires du Mirail, Toulouse, pp. 117–136, 2013.
REMY C., LAVITRY L., “La quantité contre la qualité ?”, Revue française de socio-économie, vol. 2, no. 19, pp. 69–88, 2017.


ROSCOE P., CHILLAS S., “The state of affairs: Critical performativity and the online dating industry”, Organization, vol. 21, no. 6, pp. 797–820, 2014.
SAINSAULIEU R., L’identité au travail, Presses de Sciences Po, Paris, 2014.
SALAIS R., “La politique des indicateurs : Du taux de chômage au taux d’emploi dans la stratégie européenne pour l’emploi”, in ZIMMERMANN B. (ed.), Les sciences sociales à l’épreuve de l’action : Le savant, le politique et l’Europe, Maison des Sciences de l’Homme, Paris, pp. 287–331, 2004.
SALAIS R., “Quantification and objectivity. From statistical conventions to social conventions”, Historical Social Research, vol. 41, no. 2, pp. 118–134, 2016.
SALANOVA M., DEL LÍBANO M., LLORENS S. et al., “Engaged, workaholic, burned-out or just 9-to-5? Toward a typology of employee well-being: Employee well-being and work investment”, Stress and Health, vol. 30, no. 1, pp. 71–81, 2014.
SAUNDERS M., LEWIS P., THORNHILL A., Research Methods for Business Students, Pearson, Harlow, 2016.
SCHILDT H., “Big data and organizational design – The brave new world of algorithmic management and computer augmented transparency”, Innovation, vol. 19, no. 1, pp. 23–30, 2017.
SCHMINKE M., ARNAUD A., TAYLOR R., “Ethics, values, and organizational justice: Individuals, organizations, and beyond”, Journal of Business Ethics, vol. 130, no. 3, pp. 727–736, 2015.
SEAVER N., “Algorithms as culture: Some tactics for the ethnography of algorithmic systems”, Big Data & Society, vol. 4, no. 2, pp. 1–12, 2017.
SENAC R., L’égalité sous conditions. Genre, parité, diversité, Presses de Sciences Po, Paris, pp. 139–188, 2015.
SMITH N., SMITH V., VERNER M., “Do women in top management affect firm performance? A panel study of 2,500 Danish firms”, International Journal of Productivity and Performance Management, vol. 55, no. 7, pp. 569–593, 2006.
SUPIOT A., La gouvernance par les nombres : Cours au Collège de France, 2012–2014, Fayard, Paris, 2015.
TAYLOR F.W., The Principles of Scientific Management, Harper & Brothers Publishers, New York, 1919.
THEVENOT L., “Les investissements de forme”, in THEVENOT L. (ed.), Conventions économiques, PUF, Paris, pp. 21–71, 1989.
ULRICH D., YOUNGER J., BROCKBANK W. et al., “The state of the HR profession”, Human Resource Management, vol. 52, no. 3, pp. 457–471, 2013.
UNESCO, “Intelligence artificielle : Promesses et menaces”, Le Courrier de l’UNESCO, 2018.


VATIN F., “Introduction : Évaluer et valoriser”, in VATIN F. (ed.), Évaluer et valoriser : Une sociologie économique de la mesure, Presses universitaires du Mirail, Toulouse, pp. 17–37, 2013.
VIDAILLET B., Évaluez-moi ! L’évaluation au travail : Les ressorts d’une fascination, Le Seuil, Paris, 2013.
VILLANI C., Donner un sens à l’intelligence artificielle. Pour une stratégie nationale et européenne, Parliamentary mission, French government, 2018.
WACHTER S., MITTELSTADT B., FLORIDI L., “Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation”, International Data Privacy Law, vol. 7, no. 2, pp. 76–99, 2016.
WEBER M., Économie et société, Plon, Paris, 1971.
WEIL S., La condition ouvrière, Gallimard, Paris, 2002.
WOOD A.J., GRAHAM M., LEHDONVIRTA V. et al., “Good gig, bad gig: Autonomy and algorithmic control in the global gig economy”, Work, Employment and Society, vol. 33, no. 1, pp. 56–75, 2018.
YANG Y., ZHAN D.-C., JIANG Y., “Which one will be next? An analysis of talent demission”, Proceedings of ACM SIGKDD, pp. 1–5, 2018.

Index

A
aptitude tests, 4, 5, 13, 37, 110, 156, 162, 177, 181
Artificial Intelligence (AI), 32, 33, 35, 139, 142, 148, 173
automated decision-making, 169
automation, 40, 43, 119, 139–142, 144, 145, 166, 169, 179, 183

B
Big Data, 1, 32–34, 36–41, 43, 45, 69, 77, 80, 148, 163, 177, 180–183
bureaucracy, 88–92, 98, 179
business case, 31, 32, 134, 135, 138, 181

C
CNIL, 35, 114, 115, 140, 147, 150, 151, 163–165, 168, 171, 173, 174
customization, 2, 35, 40, 41, 43, 45, 63, 64, 67–75, 84, 109, 155, 178

D
dashboard, 2, 20, 25–27, 120, 180
data
  protection, 71, 117, 147, 148–150, 152, 154, 155, 172, 175, 176, 179, 184–186
  scientists, 36, 142, 167, 168

discrimination
  direct, 23, 56, 156–159, 176
  indirect, 56, 156, 158, 162, 176
distrust, 87, 99, 104–106, 109, 111, 118, 127, 167, 179, 181, 183

E
EBM approach, 22, 25–27, 30, 80, 177, 181
effectiveness, 130, 131
efficiency, 22, 89, 91, 92, 96, 97, 130–133
ethics, 36, 43, 138, 145, 149, 152, 155, 173, 174, 176, 183, 184, 186
evidence-based management (EBM), 22, 177

G, H, I, L
g factor, 5
GDPR, 147, 152–154, 169, 172–176
HR analytics, 1, 16, 26, 29–32, 37, 40, 43, 177
impact, 11, 22, 27, 76, 130, 131, 133, 135, 172
legitimacy (of the HR function), 2

Quantifying Human Resources: Uses and Analyses, First Edition. Clotilde Coron. © ISTE Ltd 2020. Published by ISTE Ltd and John Wiley & Sons, Inc.


M, N
management
  by algorithms, 39, 42, 88, 96, 97, 98, 114, 179, 182
  tool, 25, 87, 123, 180
myth of objectivity, 180, 181
New Public Management, 88, 91, 179

O, P
opacity, 171, 184
organizational justice, 60, 136
performance
  of the HR function, 119, 129–133, 135, 167
  organizational, 22, 119, 129, 133, 180, 181
performativity, 75
personal data, 99, 101, 104, 109, 148, 150–155, 166, 169, 172, 173, 179, 183
prediction, 2, 34, 35, 37, 38, 40, 42, 43, 75, 77–81, 83, 84, 109, 159, 164, 178, 181–183
prescription, 9, 164
promotion, 2, 6, 17, 23, 24, 46, 58, 61, 89, 90, 91, 109, 113, 117, 138, 153, 161, 165
psychotechnical, 2

Q, R
quantification
  black box, 167
  conventions, 52, 54, 73
quantified evaluation, 3, 6, 8, 52, 56, 57, 92, 94
rationalization, 58, 62, 87, 88, 91, 95–99, 118, 179, 181–183
recruitment algorithms, 165
remuneration, 1, 2, 6–9, 11, 12, 17, 20, 22, 58, 72, 89, 91, 109, 137, 149
responsibility, 23, 114, 126, 173, 179, 185
rhetoric, 58, 62, 107, 135, 136, 138, 139, 181

S, T, U
segmentation, 41, 72, 73, 144
staircase model, 25, 31, 134, 138
statisticalization, 1, 2, 52–54, 56, 177, 180–182
training, 3, 17, 20, 22, 27, 28, 37–39, 41, 83, 102, 121, 130–132, 134, 135, 137, 142, 148, 166–168, 170–172, 184, 186
unconscious bias, 156–159, 165, 176

Other titles from ISTE in Innovation, Entrepreneurship and Management

2020

ANDREOSSO-O’CALLAGHAN Bernadette, DZEVER Sam, JAUSSAUD Jacques, TAYLOR Richard, Sustainable Development and Energy Transition in Europe and Asia (Innovation and Technology Set – Volume 9)
CERDIN Jean-Luc, PERETTI Jean-Marie, The Success of Apprenticeships: Views of Stakeholders on Training and Learning (Human Resources Management Set – Volume 3)
DIDAY Edwin, GUAN Rong, SAPORTA Gilbert, WANG Huiwen, Advances in Data Science (Big Data, Artificial Intelligence and Data Analysis Set – Volume 4)
DOS SANTOS PAULINO Victor, Innovation Trends in the Space Industry (Smart Innovation Set – Volume 25)
GUILHON Bernard, Venture Capital and the Financing of Innovation (Innovation Between Risk and Reward Set – Volume 6)

MASSOTTE Pierre, CORSI Patrick, Complex Decision-Making in Economy and Finance

2019

AMENDOLA Mario, GAFFARD Jean-Luc, Disorder and Public Concern Around Globalization
BARBAROUX Pierre, Disruptive Technology and Defence Innovation Ecosystems (Innovation in Engineering and Technology Set – Volume 5)
DOU Henri, JUILLET Alain, CLERC Philippe, Strategic Intelligence for the Future 1: A New Strategic and Operational Approach; Strategic Intelligence for the Future 2: A New Information Function Approach
FRIKHA Azza, Measurement in Marketing: Operationalization of Latent Constructs
FRIMOUSSE Soufyane, Innovation and Agility in the Digital Age (Human Resources Management Set – Volume 2)
GAY Claudine, SZOSTAK Bérangère L., Innovation and Creativity in SMEs: Challenges, Evolutions and Prospects (Smart Innovation Set – Volume 21)
GORIA Stéphane, HUMBERT Pierre, ROUSSEL Benoît, Information, Knowledge and Agile Creativity (Smart Innovation Set – Volume 22)
HELLER David, Investment Decision-making Using Optional Models (Economic Growth Set – Volume 2)
HELLER David, DE CHADIRAC Sylvain, HALAOUI Lana, JOUVET Camille, The Emergence of Start-ups (Economic Growth Set – Volume 1)

HÉRAUD Jean-Alain, KERR Fiona, BURGER-HELMCHEN Thierry, Creative Management of Complex Systems (Smart Innovation Set – Volume 19)
LATOUCHE Pascal, Open Innovation: Corporate Incubator (Innovation and Technology Set – Volume 7)
LEHMANN Paul-Jacques, The Future of the Euro Currency
LEIGNEL Jean-Louis, MÉNAGER Emmanuel, YABLONSKY Serge, Sustainable Enterprise Performance: A Comprehensive Evaluation Method
LIÈVRE Pascal, AUBRY Monique, GAREL Gilles, Management of Extreme Situations: From Polar Expeditions to Exploration-Oriented Organizations
MILLOT Michel, Embarrassment of Product Choices 2: Towards a Society of Well-being
N’GOALA Gilles, PEZ-PÉRARD Virginie, PRIM-ALLAZ Isabelle, Augmented Customer Strategy: CRM in the Digital Age
NIKOLOVA Blagovesta, The RRI Challenge: Responsibilization in a State of Tension with Market Regulation (Innovation and Responsibility Set – Volume 3)
PELLEGRIN-BOUCHER Estelle, ROY Pierre, Innovation in the Cultural and Creative Industries (Innovation and Technology Set – Volume 8)
PRIOLON Joël, Financial Markets for Commodities
QUINIOU Matthieu, Blockchain: The Advent of Disintermediation
RAVIX Joël-Thomas, DESCHAMPS Marc, Innovation and Industrial Policies (Innovation between Risk and Reward Set – Volume 5)

ROGER Alain, VINOT Didier, Skills Management: New Applications, New Questions (Human Resources Management Set – Volume 1)
SAULAIS Pierre, ERMINE Jean-Louis, Knowledge Management in Innovative Companies 1: Understanding and Deploying a KM Plan within a Learning Organization (Smart Innovation Set – Volume 23)
SERVAJEAN-HILST Romaric, Co-innovation Dynamics: The Management of Client-Supplier Interactions for Open Innovation (Smart Innovation Set – Volume 20)
SKIADAS Christos H., BOZEMAN James R., Data Analysis and Applications 1: Clustering and Regression, Modeling-estimating, Forecasting and Data Mining (Big Data, Artificial Intelligence and Data Analysis Set – Volume 2); Data Analysis and Applications 2: Utilization of Results in Europe and Other Topics (Big Data, Artificial Intelligence and Data Analysis Set – Volume 3)
VIGEZZI Michel, World Industrialization: Shared Inventions, Competitive Innovations and Social Dynamics (Smart Innovation Set – Volume 24)

2018

BURKHARDT Kirsten, Private Equity Firms: Their Role in the Formation of Strategic Alliances
CALLENS Stéphane, Creative Globalization (Smart Innovation Set – Volume 16)
CASADELLA Vanessa, Innovation Systems in Emerging Economies: MINT – Mexico, Indonesia, Nigeria, Turkey (Smart Innovation Set – Volume 18)

CHOUTEAU Marianne, FOREST Joëlle, NGUYEN Céline, Science, Technology and Innovation Culture (Innovation in Engineering and Technology Set – Volume 3)
CORLOSQUET-HABART Marine, JANSSEN Jacques, Big Data for Insurance Companies (Big Data, Artificial Intelligence and Data Analysis Set – Volume 1)
CROS Françoise, Innovation and Society (Smart Innovation Set – Volume 15)
DEBREF Romain, Environmental Innovation and Ecodesign: Certainties and Controversies (Smart Innovation Set – Volume 17)
DOMINGUEZ Noémie, SME Internationalization Strategies: Innovation to Conquer New Markets
ERMINE Jean-Louis, Knowledge Management: The Creative Loop (Innovation and Technology Set – Volume 5)
GILBERT Patrick, BOBADILLA Natalia, GASTALDI Lise, LE BOULAIRE Martine, LELEBINA Olga, Innovation, Research and Development Management
IBRAHIMI Mohammed, Mergers & Acquisitions: Theory, Strategy, Finance
LEMAÎTRE Denis, Training Engineers for Innovation
LÉVY Aldo, BEN BOUHENI Faten, AMMI Chantal, Financial Management: USGAAP and IFRS Standards (Innovation and Technology Set – Volume 6)
MILLOT Michel, Embarrassment of Product Choices 1: How to Consume Differently

PANSERA Mario, OWEN Richard, Innovation and Development: The Politics at the Bottom of the Pyramid (Innovation and Responsibility Set – Volume 2)
RICHEZ Yves, Corporate Talent Detection and Development
SACHETTI Philippe, ZUPPINGER Thibaud, New Technologies and Branding (Innovation and Technology Set – Volume 4)
SAMIER Henri, Intuition, Creativity, Innovation
TEMPLE Ludovic, COMPAORÉ SAWADOGO Eveline M.F.W., Innovation Processes in Agro-Ecological Transitions in Developing Countries (Innovation in Engineering and Technology Set – Volume 2)
UZUNIDIS Dimitri, Collective Innovation Processes: Principles and Practices (Innovation in Engineering and Technology Set – Volume 4)
VAN HOOREBEKE Delphine, The Management of Living Beings or Emo-management

2017

AÏT-EL-HADJ Smaïl, The Ongoing Technological System (Smart Innovation Set – Volume 11)
BAUDRY Marc, DUMONT Béatrice, Patents: Prompting or Restricting Innovation? (Smart Innovation Set – Volume 12)
BÉRARD Céline, TEYSSIER Christine, Risk Management: Lever for SME Development and Stakeholder Value Creation

CHALENÇON Ludivine, Location Strategies and Value Creation of International Mergers and Acquisitions
CHAUVEL Danièle, BORZILLO Stefano, The Innovative Company: An Ill-defined Object (Innovation between Risk and Reward Set – Volume 1)
CORSI Patrick, Going Past Limits To Growth
D’ANDRIA Aude, GABARRET Inés, Building 21st Century Entrepreneurship (Innovation and Technology Set – Volume 2)
DAIDJ Nabyla, Cooperation, Coopetition and Innovation (Innovation and Technology Set – Volume 3)
FERNEZ-WALCH Sandrine, The Multiple Facets of Innovation Project Management (Innovation between Risk and Reward Set – Volume 4)
FOREST Joëlle, Creative Rationality and Innovation (Smart Innovation Set – Volume 14)
GUILHON Bernard, Innovation and Production Ecosystems (Innovation between Risk and Reward Set – Volume 2)
HAMMOUDI Abdelhakim, DAIDJ Nabyla, Game Theory Approach to Managerial Strategies and Value Creation (Diverse and Global Perspectives on Value Creation Set – Volume 3)
LALLEMENT Rémi, Intellectual Property and Innovation Protection: New Practices and New Policy Issues (Innovation between Risk and Reward Set – Volume 3)

LAPERCHE Blandine, Enterprise Knowledge Capital (Smart Innovation Set – Volume 13)
LEBERT Didier, EL YOUNSI Hafida, International Specialization Dynamics (Smart Innovation Set – Volume 9)
MAESSCHALCK Marc, Reflexive Governance for Research and Innovative Knowledge (Responsible Research and Innovation Set – Volume 6)
MASSOTTE Pierre, Ethics in Social Networking and Business 1: Theory, Practice and Current Recommendations; Ethics in Social Networking and Business 2: The Future and Changing Paradigms
MASSOTTE Pierre, CORSI Patrick, Smart Decisions in Complex Systems
MEDINA Mercedes, HERRERO Mónica, URGELLÉS Alicia, Current and Emerging Issues in the Audiovisual Industry (Diverse and Global Perspectives on Value Creation Set – Volume 1)
MICHAUD Thomas, Innovation, Between Science and Science Fiction (Smart Innovation Set – Volume 10)
PELLÉ Sophie, Business, Innovation and Responsibility (Responsible Research and Innovation Set – Volume 7)
SAVIGNAC Emmanuelle, The Gamification of Work: The Use of Games in the Workplace
SUGAHARA Satoshi, DAIDJ Nabyla, USHIO Sumitaka, Value Creation in Management Accounting and Strategic Management: An Integrated Approach (Diverse and Global Perspectives on Value Creation Set – Volume 2)

UZUNIDIS Dimitri, SAULAIS Pierre, Innovation Engines: Entrepreneurs and Enterprises in a Turbulent World (Innovation in Engineering and Technology Set – Volume 1)

2016

BARBAROUX Pierre, ATTOUR Amel, SCHENK Eric, Knowledge Management and Innovation (Smart Innovation Set – Volume 6)
BEN BOUHENI Faten, AMMI Chantal, LEVY Aldo, Banking Governance, Performance and Risk-Taking: Conventional Banks vs Islamic Banks
BOUTILLIER Sophie, CARRÉ Denis, LEVRATTO Nadine, Entrepreneurial Ecosystems (Smart Innovation Set – Volume 2)
BOUTILLIER Sophie, UZUNIDIS Dimitri, The Entrepreneur (Smart Innovation Set – Volume 8)
BOUVARD Patricia, SUZANNE Hervé, Collective Intelligence Development in Business
GALLAUD Delphine, LAPERCHE Blandine, Circular Economy, Industrial Ecology and Short Supply Chains (Smart Innovation Set – Volume 4)
GUERRIER Claudine, Security and Privacy in the Digital Era (Innovation and Technology Set – Volume 1)
MEGHOUAR Hicham, Corporate Takeover Targets
MONINO Jean-Louis, SEDKAOUI Soraya, Big Data, Open Data and Data Development (Smart Innovation Set – Volume 3)
MOREL Laure, LE ROUX Serge, Fab Labs: Innovative User (Smart Innovation Set – Volume 5)

PICARD Fabienne, TANGUY Corinne, Innovations and Techno-ecological Transition (Smart Innovation Set – Volume 7)

2015

CASADELLA Vanessa, LIU Zeting, UZUNIDIS Dimitri, Innovation Capabilities and Economic Development in Open Economies (Smart Innovation Set – Volume 1)
CORSI Patrick, MORIN Dominique, Sequencing Apple’s DNA
CORSI Patrick, NEAU Erwan, Innovation Capability Maturity Model
FAIVRE-TAVIGNOT Bénédicte, Social Business and Base of the Pyramid
GODÉ Cécile, Team Coordination in Extreme Environments
MAILLARD Pierre, Competitive Quality and Innovation
MASSOTTE Pierre, CORSI Patrick, Operationalizing Sustainability
MASSOTTE Pierre, CORSI Patrick, Sustainability Calling

2014

DUBÉ Jean, LEGROS Diègo, Spatial Econometrics Using Microdata
LESCA Humbert, LESCA Nicolas, Strategic Decisions and Weak Signals

2013

HABART-CORLOSQUET Marine, JANSSEN Jacques, MANCA Raimondo, VaR Methodology for Non-Gaussian Finance

2012

DAL PONT Jean-Pierre, Process Engineering and Industrial Management
MAILLARD Pierre, Competitive Quality Strategies
POMEROL Jean-Charles, Decision-Making and Action
SZYLAR Christian, UCITS Handbook

2011

LESCA Nicolas, Environmental Scanning and Sustainable Development
LESCA Nicolas, LESCA Humbert, Weak Signals for Strategic Intelligence: Anticipation Tool for Managers
MERCIER-LAURENT Eunika, Innovation Ecosystems

2010

SZYLAR Christian, Risk Management under UCITS III/IV

2009

COHEN Corine, Business Intelligence
ZANINETTI Jean-Marc, Sustainable Development in the USA

2008

CORSI Patrick, DULIEU Mike, The Marketing of Technology Intensive Products and Services
DZEVER Sam, JAUSSAUD Jacques, ANDREOSSO Bernadette, Evolving Corporate Structures and Cultures in Asia: Impact of Globalization

2007

AMMI Chantal, Global Consumer Behavior

2006

BOUGHZALA Imed, ERMINE Jean-Louis, Trends in Enterprise Knowledge Management
CORSI Patrick et al., Innovation Engineering: the Power of Intangible Networks

WILEY END USER LICENSE AGREEMENT
Go to www.wiley.com/go/eula to access Wiley’s ebook EULA.