
Riccardo Campa

STILL THINK ROBOTS CAN’T DO YOUR JOB? ESSAYS ON AUTOMATION AND TECHNOLOGICAL UNEMPLOYMENT

Libreria di Neoantropologia
A series edited by Riccardo Campa

Scientific Committee:
Antonio Camorrino – Federico II University in Naples
Vitaldo Conte – Academy of Fine Arts in Rome
Michel Kowalewicz – Jagiellonian University in Krakow
Roberto Manzocco – City University of New York
Luciano Pellicani – Guido Carli Free International University for Social Studies in Rome
Salvatore Rampone – University of Sannio in Benevento
Stefan Lorenz Sorgner – John Cabot University in Rome
Daniele Stasi – University of Rzeszów
Piotr Zielonka – Kozminski University in Warsaw

Campa, Riccardo – Still Think Robots Can’t Do Your Job? Essays on Automation and Technological Unemployment
ISBN: 9788894830200
Copyright D Editore © 2018. All rights reserved.
D Editore, Rome
Contacts: +39 320 8036613
www.deditore.com
[email protected]

This ebook is made with StreetLib Write editor
http://write.streetlib.com




Contents

Preface

Acknowledgements

Chapter 1.

Technological Unemployment: A Brief History of an Idea

Chapter 2.

Automation, Education, Unemployment: A Scenario Analysis

Chapter 3.

The Rise of Social Robots: A Review of the Recent Literature

Chapter 4.

Technological Growth and Unemployment: A Global Scenario Analysis

Chapter 5.

Workers and Automata: A Sociological Analysis of the Italian Case

Chapter 6.

Pure Science and the Posthuman Future

Chapter 7.

Making Science by Serendipity: A review of Robert K. Merton and Elinor Barber’s The Travels and Adventures of Serendipity

Bibliography




The first industrial revolution extended the reach of our bodies, and the second is extending the reach of our minds. As I mentioned, employment in factories and farms has gone from 60 percent to 6 percent in the United States in the past century. Over the next couple of decades, virtually all routine physical and mental work will be automated.

Ray Kurzweil




Preface

This is one of those books that one writes hoping to be wrong. The question with which I begin the book has recently been asked quite often. I also ask it of myself: Do I still think robots cannot do my job? My personal answer is simply “no”. Sooner or later, there will be robots that can teach and do science.

In spite of the fact that this is a collection of academic works, I ask my readers to allow me the indulgence of introducing the topic by offering a personal story. I have always been fascinated by technologies, old and new, and especially by Artificial Intelligence and robotics. Not by chance, therefore, before becoming a social scientist I studied electronics. Still, I could never turn my back on the unwanted collateral effects of technological development.

When I was a teenager, I worked in a factory in summertime as a manual worker in order to pay for my studies. It was the 1980s, when the first wave of robotization was hitting Italian industries. I remember that every week a new machine was “hired” by my company, and a few fellow workers were fired. Being seasonal workers, we were not protected by long-term contracts. One day a computerized scale was introduced in my department. It was pretty obvious that it was there to do the job of my own team. I was at once fascinated and scared by that machine. On the one hand I was curious to see how it worked; on the other I knew it might lead to my dismissal. When the meal break started, as I got close to the machinery, I heard the boss saying
that the hiring manager was looking for, but had not yet found, a worker who could supervise its functioning. So instead of joining my colleagues at the canteen, I started reading the instruction manual. When the bell rang to signal the end of the meal break, I went to the boss and told him that I was a student of electronics and that I knew how the scale worked. He was quite happy to have the machinery up and running immediately, and I was happy to leave the physical work and become a supervisor. Even though I had to wait until late that evening to eat, I did not even feel hungry. I was proud of myself, and I thought my parents would be proud of me too, if only they could see me. I was just sixteen years old and it was only a modest seasonal job, but to me that “career advancement” meant a lot.

Still, what I predicted would happen, happened. My friends and colleagues were fired. I knew it was not their “fault”. Even if all of them had done what I did – given up eating and studied the instruction manual – only one supervisor was needed. The machine would have done the rest. I also know that some of those friends did not find a new job for a long time. This happened almost thirty-five years ago. It was my first experience with technological unemployment. Resorting to sociological jargon, I can say that my first knowledge of the sociology of work came from “participant observation.” This probably explains why, once I became a professional sociologist, I focused so much on technology and the future of work. I wrote much on these topics in both Italian and English.

In this volume I present several of my works written in the English language. As often happens in a collection of essays published at different times, a few concepts and quotes are repeated. However, I wanted to leave the writings in their original form,
as they were published by scientific journals. Here is a short description of the chapters.

The first chapter traces a brief history of the concept of technological unemployment. The historical narrative covers four centuries, from the beginning of the industrial revolution up to the present. As a consequence, it is highly selective, mainly based on sources in the English language and referring to only a few of the many social scientists involved in the debate. The aims of the inquiry are essentially two. The first is to show that focusing on technological unemployment as an idea – and not simply as a phenomenon – is appropriate, because of the high level of controversy that still characterizes the debate. The second is to provide an understanding of critical societal changes occurring in the twenty-first century.

The second chapter proposes a short-term scenario analysis concerning the possible relations between automation, education, and unemployment. In my view, the scenario analysis elaborated by the McKinsey Global Institute in 2013 underestimates the problem of technological unemployment and proposes an education model which is inadequate for handling the challenges of twenty-first-century disruptive technologies. New technological advances – such as the automation of knowledge work – will also affect the jobs of highly educated workers. Therefore, policy makers will not avert massive unemployment merely by extending the study of math, science, and engineering. A better solution could be the establishment of a universal basic income, and the elaboration of an education model capable of stimulating creativity and the sense of belonging to a community.

In the third chapter I explore the most recent literature on social robotics and argue that the field of robotics is evolving in a direction that will soon require a systematic collaboration between engineers and sociologists. After discussing several
problems relating to social robotics, I emphasize that two key concepts in this research area are scenario and persona. These are already popular as design tools in Human-Computer Interaction (HCI), and an approach based on them is now being adopted in Human-Robot Interaction (HRI). As robots become more and more sophisticated, engineers will need the help of trained sociologists and psychologists in order to create personas and scenarios and to “teach” humanoids how to behave in various circumstances.

The aim of the fourth chapter is to explore the possible futures generated by the development of artificial intelligence. The focus is on the social consequences of automation and robotization, with special attention to the problem of unemployment. To start, I make clear that the relation between technology and structural unemployment is still hypothetical and, therefore, controversial. Secondly, as proper scenario analysis requires, I do not limit myself to predicting a unique future; instead I extrapolate from present data four different possible scenarios: 1) unplanned end of work; 2) planned end of robots; 3) unplanned end of robots; and 4) planned end of work. Finally, I relate these possible developments not just to observed trends but also to social and industrial policies presently at work in our society which may change the course of these trends.

The aim of chapter five is to determine whether there is a relation between automation and unemployment within the Italian socioeconomic system. Italy is second in Europe and fourth in the world in terms of robot density, and among the G7 it is the nation with the highest rate of youth unemployment. Establishing the ultimate causes of unemployment is a very difficult task, and – as we said – the very notion of ‘technological unemployment’ is controversial. Mainstream economics tends to correlate the high rate of unemployment in Italy with the low flexibility of the
labor market and the high cost of manpower. Little attention is paid to the impact of artificial intelligence and robots on the level of employment. With reference to statistical data, we will show that automation can be seen at least as a contributory cause of unemployment in Italy. In addition, we will argue that Luddism and anti-Luddism are two faces of the same coin, both focusing on technology itself (the means of production) instead of on the system (the mode of production). Banning robots or ignoring the problems of robotization are not effective solutions. A better approach would consist in combining growing automation with a more rational redistribution of income.

The sixth chapter explores a more remote scenario, namely the hypothesis that machines could sooner or later “wake up”, become conscious, and also have a role in the pursuit of knowledge. It is a scenario analysis that often goes under the label of “transhumanism” and predicts the advent of the Singularity. Since the industrial revolution, humans have tended to reduce science to the ancillary role of an engine of technology. But the quest for knowledge using rational, scientific methods started at least two and a half millennia ago with the aim of setting humans free from ignorance. The first scientists and philosophers (at least the first that we know about, because they wrote things down) saw knowledge as the goal, not as the means. The main goal was to understand the nature of matter, life, consciousness, intelligence, our origin, and our destiny, not only to solve practical problems. Being skeptical of myths and religions, they gave themselves the goal of reaching The Answer via rational and empirical inquiry. Transhumanism is a unique philosophy of technology because one of its goals is the creation of a posthuman intelligence. Several scientists share this hope: making technology an ancillary of science, and not vice versa. The hope is that, by evolving and reaching the Singularity, posthumans can
achieve one of the greatest dreams of sentient beings: finding The Answer.

In the seventh and last chapter I address the role of serendipity in the development of science and technology. It is a review of Robert K. Merton and Elinor Barber’s book The Travels and Adventures of Serendipity: A Study in Sociological Semantics and the Sociology of Science. Although this book does not treat the issue of technological unemployment directly, it critically discusses the orthodox Marxist theory on automation. According to this theory, scientific and technological discoveries are products of necessity. Industrial automation could not be developed in ancient times because of the slave mode of production. The cost of manpower was very low, so there was no need to produce machines. In the capitalist mode of production characteristic of modern times, however, slaves are not available, so machines can fit the bill. While there is truth in this narrative, it is an oversimplification, because – as Merton and Barber convincingly argue – many scientific discoveries and technical inventions depend on chance and serendipity. Indeed, the fact that Heron and other Alexandrian engineers already designed and built automata in ancient times does not fit Marxist theory.

If we asked ordinary people whether we need conscious computers and robots, the answers would probably be mixed, with – I guess – a majority against the idea. Personally, I am not against the idea on principle, but I think we should also take into account the possibility that conscious Artificial Intelligence may emerge from a serendipitous discovery, in an unplanned way and regardless of its social necessity. In fact, as the development of Artificial Intelligence progresses, we may ask whether serendipity – understood as the capability of making fortuitous discoveries, or the ability to find something while we are looking for something else – will be a
virtue we share with our mechanical children, or whether it will be a factor – maybe THE factor – that continues to differentiate humans from intelligent machines. In any case, we should consider the role of serendipity when we reflect and speculate about the future of work.




Acknowledgements

I am particularly indebted to Alan Sparks for his editorial contributions to various parts of this book. Besides being an award-winning non-fiction writer, Alan is also an accomplished computer scientist and astute social observer, and discussions with him have been very stimulating also with regard to content. I am grateful also to Catarina Lamm for having translated chapters four and five of this book from Italian into English,¹ and to Matt Hammond and Lucas Mazur for having proofread other fragments of the book. It goes without saying that any remaining inaccuracies in the facts or in the style are my own.

With regard to the title of this book, I have to credit Nikhil Sonnad, who published an article in the digital magazine Quartz entitled “Robot all too robot. Still think robots can’t do your job? This video may change your mind” (2014). After struggling to find a title, and after realizing that all the titles I was thinking of had already been used for other books, I decided to borrow and reuse a fragment of that article title.

I also thank the readers of La società degli automi, a book written in my native language, partly covering the same topics but more focused on Italian issues, which rapidly became a bestseller in Italy. Without the positive feedback of the public,
my Italian publisher would probably have hesitated to print a second book on automation and technological unemployment in English. It will also be a challenge for D Editore to cross borders and promote this book worldwide. So, a final thank-you goes to Emmanuele Pilia for accepting the challenge.

¹ These two articles have been included, with slight modifications and a different title, also in my book Humans and Automata: A Social Study of Robotics (Campa 2015).




Chapter 1

Technological Unemployment: A Brief History of an Idea

1. Generalities

The concept of technological unemployment is regaining momentum in the discourse of economists and economic sociologists. However, when analyzing the debate, what is most surprising is the substantial absence of agreement on the very existence of technological unemployment as a phenomenon. Some observers present technological unemployment as a sprawling monster that is completely subverting the global economy, while others conclude that this picture is just a mirage of doomsayers. Since reputable scholars are engaged in the debate, we cannot simply blame the polarization of narratives on the incompetence of one or the other school of thought. Even if the definitions of technological unemployment provided by different sources do not differ greatly, it has become evident that the terms contained in these definitions may assume different meanings depending on the theoretical perspective.

Unemployment is a phenomenon studied by both sociologists and economists. As Tony Elger (2006: 643) remarks, “[s]ociologists often focus on the experience and consequences of unemployment, leaving economists to analyze causes. […] However, consideration of the underlying processes that generate these patterns of unemployment exposes continuing controversy among economists, for example between neoliberal, neo-Keynesian,
and neo-Marxist analyses of the political economy of contemporary capitalism. Thus, economic sociologists have to adjudicate between these different causal accounts [...]”

Unemployment is a complex phenomenon. “Economists distinguish between frictional unemployment, involving individual mobility of workers between jobs; structural unemployment, resulting from the decline of particular sectors or occupations; and cyclical unemployment, resulting from general but temporary falls in economic activity” (ibid.). To this list, one can add technological unemployment. The Oxford Dictionary of Economics defines technological unemployment as follows: “Unemployment due to technical progress. This applies to particular types of workers whose skill is made redundant because of changes in methods of production, usually by substituting machines for their services. Technical progress does not necessarily lead to a rise in overall unemployment” (Black 2012: 405). As one can see, it is a concept that already includes a theory, since it puts two distinct phenomena – technological progress and unemployment – into a causal relationship. The disagreement between the different schools of thought mainly concerns the existence of this causal relationship.

Technological unemployment can be studied at different levels of the economic system: at the level of individual actors, companies, productive sectors, countries, or the global economy. That at least one individual has lost his or her job because the employer or the customer has purchased a machine that can accurately perform his or her duties is a fact that can hardly be denied. Similarly, it cannot be denied that entire companies have been automated and that this process has resulted in a drastic reduction of employment inside those companies. Nor can it be denied that, owing to technological innovation, entire economic sectors
have been largely emptied of their workforce. The transition from traditional agriculture to intensive agriculture, through the use of agricultural machinery, herbicides, fertilizers, fungicides, etc., has led to the demographic emptying of the countryside. The evaporation of jobs in the primary sector of the United States of America offers impressive numbers: in 1900, 41% of the population was employed in agriculture; a century later, in 2000, only 2% of Americans still worked in the same sector (Wladawsky-Berger 2015). A similar phenomenon was observed in the secondary sector, or manufacturing, at the turn of the twentieth and twenty-first centuries. In the United States, the share of employment in factories decreased from 22.5% in 1980 to 10% today and is expected to decline further, to below 3%, by 2030 (Carboni 2015). Similar situations can be observed in other industrialized countries, including Italy (Campa 2014a).

This emptying of whole sectors of the economy was accompanied by a migration of the workforce from one sector to another. A first migration was observed from agriculture to manufacturing, a visible phenomenon because it also led to a massive migration from rural to urban areas. A second migration of the labor force, less visible but equally significant, occurred from the manufacturing sector to the services sector (Campa 2007). Overall, at least so far, the increase in productivity in individual sectors has not resulted in the emergence of permanent and chronic technological unemployment on a global level. This does not mean, however, that technological unemployment – at least as a temporary or local phenomenon – does not exist.

It should also be clear that the reabsorption of the unemployed into the economy has been possible thanks to two main levers: the first is the free market, which enabled the birth and development of new sectors of the economy; the second is social and industrial public policies. The fact that both forces are at
work is often obscured by the fact that observers are largely divided into two tribes: those who worship the Market as an almighty God, and those who attribute an analogous divine character to the State. Only those who do not profess either ‘religion’ can see that many factors have contributed to dampening the phenomenon of technological unemployment. In the nineteenth and early twentieth centuries, private entrepreneurs created manufacturing industries and used the cheap labor flowing from the countryside to the cities. In the second half of the twentieth century, new enterprising capitalists created service companies to redeploy the manpower pouring out of factories. At the same time, trade unions and socialist political parties, through tough political and labor struggles, have succeeded in achieving a steady reduction of working hours (even a halving of working hours, if we consider the period from the nineteenth century to the present), retirement and disability pensions, paid holidays, paid sick leave, maternity leave, and other social rights, which on the whole have forced private employers to hire more workers than they would have hired in a laissez-faire capitalist regime.

Moreover, the idea that the equilibrium of a national economy is assured by the Invisible Hand is belied by the fact that employment crises have sometimes been resolved by the mass migration of workers from one country to another. This means that it is not written in the stars that capable private entrepreneurs and creative people who create new jobs, new companies, or even new economic sectors must continually arise. If they do not arise, if there are no social and cultural conditions that permit them to arise, the unemployment crisis generated by the introduction of new technologies can become chronic and irreversible in a specific geographical area. Finally, other forms of public intervention, such as industrial policies, have contributed to cushioning the phenomenon: for instance, the creation of public
manufactories, the nationalization of private companies, public contracts (just think of the incidence of military spending in the United States), wars, crime (the prison population in the US now exceeds two million individuals), as well as the creation of millions of jobs in the public service – jobs that are sometimes unnecessary and therefore constitute a permanent, masked dole. If you consider all these aspects, some of which are ignored by economic theory, it seems difficult to deny the existence of technological unemployment.

Somewhat different is the question of whether it is a significant phenomenon on a global scale. From the psychological point of view, being replaced by a machine is certainly a big concern for those who lose their jobs, even temporarily. But the issue begins to acquire political relevance only if the proportion of individuals affected by the phenomenon is likely to disrupt an entire economic system. Throughout history, there have been moments when the phenomenon of technological unemployment has assumed critical proportions. In these periods, the idea of technological unemployment has gained major relevance in the public debate.

2. Luddism: The First Reaction

Notoriously, a rather critical moment in European history was the transition from feudalism to capitalism, and not only because of the bloody political revolutions that accompanied the transformation. In the so-called feudal system, the creation of work did not constitute a problem, because social mobility was minimal. Children inherited the job of their fathers. The children of the farmers knew that they would be farmers themselves, or serfs. The children of the artisans learned their profession in the workshops of their fathers. The eldest son of an aristocratic family inherited the family estate, while his younger brothers were
initiated into a military or ecclesiastical career. Daughters would become the wives of men chosen by the father, or nuns. Beggars, robbers, vagabonds, prostitutes, and adventurers formed exceptions to the strict rule. In the Middle Ages, the economic concerns were different: wars, epidemics, famines. If anything, the serious problem that could arise was a labor shortage as a result of these phenomena.

With the transition to capitalism, previously unknown problems arise: in particular, overproduction and unemployment. The introduction of machines into the production system and social mobility disrupt the traditional conception of work and life. To many, it appears inconceivable that someone willing to work cannot find a job. So much so that the first reaction of the political authorities is to limit the use of the machines where they cause unemployment. Even the mercantilist Jean-Baptiste Colbert, who gave great impetus to the industrialization of France through the creation of the so-called Manufactures nationales, passed measures to restrict the use of machines in private companies. Where the authorities do not intervene, the workers themselves may wage a fierce and desperate struggle against the machine, of which we find a detailed account in Capital by Karl Marx (1976: 554-555):

“In the seventeenth century nearly all Europe experienced workers' revolts against the ribbon-loom, a machine for weaving ribbons and lace trimmings called in Germany Bandmühle, Schnurmühle, or Mühlenstuhl. In the 1630s, a wind-driven sawmill, erected near London by a Dutchman, succumbed to the rage of the mob. Even as late as the beginning of the eighteenth century, saw-mills driven by water overcame the opposition of the people only with great difficulty, supported as this opposition was by Parliament. No sooner had Everett constructed the first wool-shearing machine to be driven by waterpower (1758) than it was set on fire by 100,000 people who had been thrown out of work. Fifty thousand workers, who had
previously lived by carding wool, petitioned Parliament against Arkwright's scribbling mills and carding engines. The large-scale destruction of machinery which occurred in the English manufacturing districts during the first fifteen years of the nineteenth century, largely as a result of the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin government, composed of such people as Sidmouth and Castlereagh, a pretext for the most violent and reactionary measures. It took both time and experience before the workers learnt to distinguish between machinery and its employment by capital, and therefore to transfer their attacks from the material instruments of production to the form of society which utilizes those instruments.”

David F. Noble (1995: 3-23) maintains that the Luddites are not to be considered technophobic. When the machinery was introduced into the manufactories, the workers destroyed it out of necessity, not out of technophobia. Their choice was limited to three options: 1) starvation for them and their families; 2) violence against the uncompassionate owners of the means of production; 3) destruction of the means of production. Choosing the third option was the mildest way to communicate their distress regarding unemployment. The reaction of the political authorities was clearly less mild. Such was the incidence of the phenomenon that the English government introduced the death penalty for Luddites. The ‘assassination’ of a machine was put on a par with the assassination of a human being.

3. Classical Political Economy: The First Denial

In spite of the fact that the appearance of machinery produces
worrisome social disorders, economists are reluctant to modify their theories in order to make room for technological unemployment. There are just a few exceptions. For instance, an attempt at conceptualization is found in James Steuart’s book An Inquiry into the Principles of Political Economy (1767), and precisely in chapter XIX (“Is the Introduction of Machines into Manufactures prejudicial to the Interest of a State, or hurtful to Population?”). Steuart admits that the sudden mechanization of a segment of production can produce temporary unemployment and that, therefore, public policies are needed to facilitate the absorption of the labor force into other tasks. He is still persuaded that the advantages of mechanization outweigh the negative side effects, but he is also convinced that problems do not solve themselves. However, Steuart’s is an isolated voice.

Classical economics is dominated by Adam Smith’s optimistic perspective, which emphasizes the positive effects of mechanization and the self-regulating nature of market economies. In his masterpiece An Inquiry into the Nature and Causes of the Wealth of Nations, he provides evidence of a causal connection between high taxation and unemployment (Smith 1998: 1104), or between the excessive prodigality of landlords and unemployment (Smith 1998: 448-449), rather than between the use of machinery and unemployment. Machinery is mainly seen as a means to increase the productivity of laborers: “The annual produce of the land and labour of any nation can be increased in its value by no other means but by increasing either the number of its productive labourers, or the productive powers of those labourers who had before been employed. […] The productive powers of the same number of labourers cannot be increased, but in consequence either of some addition and improvement to those machines and instruments which facilitate and abridge labour; or of a more proper division and distribution of employment” (Smith
1998: 455-456). When Smith takes into consideration the possibility of a connection between the mechanization of labor and the redundancy of laborers, he sees this situation solely as an opportunity for capitalists and landlords, and not as a problem for the working class: “In consequence of better machinery, of greater dexterity, and of a more proper division and distribution of work, all of which are the natural effects of improvement, a much smaller quantity of labour becomes requisite for executing any particular piece of work, and though, in consequence of the flourishing circumstances of the society, the real price of labour should rise very considerably, yet the great diminution of the quantity will generally much more than compensate the greatest rise which can happen in the price” (Smith 1998: 338).

Afterwards, classical economists developed “the theory that the working class is being compensated for initial sufferings, incident to the introduction of a labor-saving machine, by favorable ulterior effects” (Schumpeter 2006: 652). Marx baptizes this the ‘theory of compensation’. Among the fathers of the theory, Marx lists James Mill, John McCulloch, Robert Torrens, Nassau W. Senior, and John Stuart Mill. David Ricardo should also be added to the list. In synthesis, this theory states that, if new machines save labor, manpower will nonetheless be needed for the production of said machinery. Also, if the new production processes initially save labor, they then boost demand and jobs through the reduction of costs and, therefore, of the prices of the goods offered. Finally, it is hypothesized that there is a perfect identity between income and spending, and therefore the theory assumes that the greater revenues arising from the reduction of the workforce in factories and farms will result in greater demand for consumer goods by capitalists and landlords, which in turn will create new jobs.
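To make the structure of the argument explicit, the three channels of compensation can be summarized schematically (the notation is mine, not the classical economists’): if the introduction of machinery displaces an amount of labor $\Delta L_{d}$, the theory claims that

$$\Delta L_{d} \;\leq\; \Delta L_{m} + \Delta L_{q} + \Delta L_{s},$$

where $\Delta L_{m}$ is the labor required to build and maintain the new machines, $\Delta L_{q}$ is the labor re-employed because lower unit costs reduce prices and expand the quantity of goods demanded, and $\Delta L_{s}$ is the labor employed elsewhere by the additional spending of capitalists and landlords (on the assumption that income and spending are identical). Ricardo’s retraction, discussed in the next section, amounts to denying that this inequality must always hold.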




4. The Conversion of David Ricardo

If this is so, why do laid-off workers get so angry? Evidently, even admitting that there is a medium-term or long-term compensation of losses, the short-term effects are devastating for a social class that has no capital or assets. For those who live from day to day, and perhaps have many children to support, even a few weeks’ unemployment can be lethal. If we consider that, in order to find a new job, the proletarian must sometimes emigrate, leaving beloved places and people, or accept a less satisfying and less remunerative job, while he or she sees his or her former employer getting richer thanks to the new machinery, the backlash appears less mysterious.

It is for this reason that the great economist David Ricardo, in 1821, decided to bring the issue of technological unemployment into economic theory. It must be said that, initially, Ricardo not only remained in the wake of classical economics, denying the issue and arguing that the introduction of machinery is beneficial to all social classes, but had also produced what Blaug (1958: 66) has called “the first satisfactory statement of the theory of ‘automatic compensation’.” Subsequently, however, disorienting his own followers, “Ricardo retracted his former opinion on the subject” (Kurz 1984). In the third edition of Ricardo’s Principles of Political Economy and Taxation, published in 1821 – and precisely in Chapter XXXI, “On Machinery” – one can indeed find both the admission of the conversion and a clear formulation of the idea of technological unemployment.

Ricardo (1821: 282) states that it is more incumbent on him to declare his opinions on this question because they have, on further reflection, undergone a considerable change: “Ever since I first turned my attention to questions of political economy, I
have been of opinion, that such an application of machinery to any branch of production, as should have the effect of saving labour, was a general good, accompanied only with that portion of inconvenience which in most cases attends the removal of capital and labour from one employment to another.” The English economist proceeds by summarizing the theory of compensation. Afterwards, he states that these “were” his opinions on the matter. More precisely, Ricardo (1821: 283) states that his opinions “continue unaltered, as far as regards the landlord and the capitalist;” but now he is convinced “that the substitution of machinery for human labour, is often very injurious to the interests of the class of labourers.” That this injury concerns both wages and employment chances is declared a few pages later. First, he provides examples based on numbers. Then, he concludes as follows: “All I wish to prove, is, that the discovery and use of machinery may be attended with a diminution of gross produce; and whenever that is the case, it will be injurious to the labouring class, as some of their number will be thrown out of employment, and population will become redundant, compared with the funds which are to employ it” (Ricardo 1821: 286).

Historians of economics often underline the importance of this turning point. For instance, Heinz D. Kurz (1984) concludes that, thanks to Ricardo, the idea of technological unemployment “marks its first appearance in respectable economic literature.” As we have seen, the Luddites had denounced this problem much earlier, but not until Ricardian economic theory did technological unemployment take on the aura of a scientific concept. After Ricardo, classical economists were obliged to abandon the most simplistic forms of compensation theory and to develop more sophisticated versions of it.




In his 1848 Principles of Political Economy, John Stuart Mill (2009: 51) states that “[a]ll attempts to make out that the laboring-classes as a collective body can not suffer temporarily by the introduction of machinery, or by the sinking of capital in permanent improvements, are, I conceive, necessarily fallacious.” He stresses that it is “obvious to common sense” and also “generally admitted” that workers would suffer in the particular department of industry to which the change applies. However, he still concludes that, at least in opulent countries, the extension of machinery is not detrimental but beneficial to laborers. In his words, “the conversion of circulating capital into fixed, whether by railways, or manufactories, or ships, or machinery, or canals, or mines, or works of drainage and irrigation, is not likely, in any rich country, to diminish the gross produce or the amount of employment for labor” (Stuart Mill 2009: 252).

5. Karl Marx: Beyond Economic Theory

The subtitle of Karl Marx’s Capital is A Critique of Political Economy. As a consequence, to label his own scientific work “political economy” would imply some degree of intellectual violence. It is also true that no single discipline can easily accommodate his theoretical and empirical contributions to social science. Besides being considered a philosopher, a political thinker, an historian, and an economist, Marx has also been described as a sociologist (Lefebvre 1982, Durand 1995) and, more specifically, as an economic sociologist (Swedberg 1987: 22-24). This characterization is particularly appropriate when talking about technological unemployment.

Economic sociology and political economy are two mutually enriching disciplines, differing in a few important respects (Smelser 1976). One of these is the range of the analysis. The
former offers a holistic point of view, by paying attention also to cultural determinants, emotional dimensions, and social consequences of economic phenomena. Economists ask themselves whether there is a causal connection between technological development and unemployment, in the short or the long run. Economic sociologists also aim at knowing the living conditions of workers inside and outside the factory, that is: whether they work safely or unsafely, whether they are mobbed when employed, whether they abuse alcohol or fall into depression when unemployed, how and where their families live, how many children they have, whether their children go to school, whether they were forced to migrate, etc.

When we read the chapter on “Machinery and Large-Scale Industry” in Capital, we find much information that we can hardly find in a book of political economy. Here is just an example: “Here we shall merely allude to the material conditions under which factory labour is performed. Every sense organ is injured by the artificially high temperatures, by the dust-laden atmosphere, by the deafening noise, not to mention the danger to life and limb among machines which are so closely crowded together, a danger which, with the regularity of the seasons, produces its list of those killed and wounded in the industrial battle” (Marx 1976: 552).

Unlike the economists of his time, who would just deal with laws and regulations by assuming that they are respected and, therefore, constitute a solid basis for predictive theories, Marx also takes into account the possibility that laws and regulations may remain just on paper and never affect real factory life. This is the typical sociological point of view. For instance, Marx (1976: 552) notes that “although it is strictly forbidden in many, nay in most factories, that machinery should be cleaned while in motion, it is nevertheless the constant practice in most, if not in all, that the workpeople do, unreproved, pick out waste, wipe
rollers and wheels, etc., while their frames are in motion. Thus from this cause only, 906 accidents have occurred during the six months…”

Coming to the problem of unemployment, Marx observes that machinery has not freed man from work and guaranteed widespread well-being, as the utopians promised. It has rather caused the loss of any source of income for part of the working class and the inhuman exploitation of those who remained employed in the factory. This is because, by simplifying and easing the physical work, machinery allowed physically stronger adult males to be replaced by women and children. The benefit to the owners of the means of production was threefold: less labor required; a lower cost of labor, because women and children were considered lower-rank workers; and an indefinite extension of working time, because the natural physical fatigue of workers was no longer an obstacle to it. The result was the unemployment and brutalization of adult males, who remained at home to laze around or get drunk, while their relatives were buried alive in the factories. Not without sarcasm, Marx (1976: 557) notes that “[i]t is supposed to be a great consolation to the pauperized workers that, firstly, their sufferings are only temporary (‘a temporary inconvenience’) and, secondly, machinery only gradually seizes control of the whole of a given field of production, so that the extent and the intensity of its destructive effect is diminished. The first consolation cancels out the second.”

No wonder, then, that Marx (1976: 565) praises Ricardo for his “scientific impartiality and love of truth.” Similarly, Lowe (1954: 142) would later characterize Ricardo’s chapter “On Machinery” as “a rare case of self-destructive intellectual honesty.” The debate on the scientific legitimacy of the concept, however, did not end after Ricardo and Marx.




6. The Marginalists: Mathematics versus Luddite Fallacy

The birth of neoclassical (or marginalist) economic theory changes the rules of the game. In particular, after the works of the Swedish economist Knut Wicksell, the concept of technological unemployment enters a crisis and the balance begins to lean again in favor of compensation theory. Wicksell bases his analysis on the law of marginal productivity of the factors of production and claims that wages are the key to the problem. According to his theory, there is no direct causal relationship between technological progress and unemployment, because there is another ultimate cause of unemployment. While the expulsion of workers following the implementation of technical innovations creates an increase in labor supply over demand, it is also true that in a free economy the increase in supply leads to a decrease in wages. In turn, the reduction of the remuneration of labor in comparison to that of capital stimulates the demand for labor, for the sectors not yet affected by technological innovation will find it convenient to absorb the excess labor. In other words, the unemployment rate that remains stable in the medium or long term – the one that really worries people and governments – is not attributable to the increase in productivity caused by technological progress, but rather to the rigidity of a wage floor which prevents the reabsorption of workers into less advanced sectors.

Compared to the classical economists, the representatives of the marginalist school adopt more sophisticated mathematical tools, such as infinitesimal calculus, and, thanks to this greater professionalization, the concept of marginal utility – which is the basis of their theory – can be accurately and formally defined. Wicksell was originally a mathematician, and only afterwards entered the field of economics. This is the way he dealt with the
problem: “If x and y are the number of labourers per acre on the first and second methods of cultivation respectively, and the productivity function in the one case is f(x) and in the other φ(y); and if we assume that m acres are cultivated on the first method and n acres on the second, then we must look for the conditions under which the expression mf(x) + nφ(y) reaches its maximum value if, at the same time, m + n = B and mx + ny = A, where B is the number of acres and A the number of labourers available for the industry in question (here agriculture) as a whole. By differentiation and elimination (the partial derivatives of the first expression being put = 0) we can easily obtain the two equations f’(x) = φ’(y) and f(x) - xf’(x) = φ(y) - yφ’(y), of which the former indicates that when the gross product is a maximum the marginal productivity of labour, and therefore wages, will be the same in both types of production. The second equation gives the same condition for rent per acre. Thus, although at first sight the going-over of some firms to the new method of cultivation seems to diminish the total product, actually the total product is maximized; but at the same time wages necessarily fall, so long as we assume that the gross product is less in the estates cultivated by the new method than in those cultivated by the old” (Wicksell 1977: 140).
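For readers who wish to see the “differentiation and elimination” that Wicksell compresses into a single clause, here is a minimal sketch in modern Lagrangian notation (which is not Wicksell’s own). The problem is to maximize $m f(x) + n \varphi(y)$ subject to $m + n = B$ and $mx + ny = A$, with multipliers $\lambda$ for land and $\mu$ for labour:

$$\mathcal{L} = m f(x) + n \varphi(y) - \lambda (m + n - B) - \mu (mx + ny - A).$$

Setting the partial derivatives to zero (for $m, n > 0$) gives

$$\frac{\partial \mathcal{L}}{\partial x} = 0 \Rightarrow f'(x) = \mu, \qquad \frac{\partial \mathcal{L}}{\partial y} = 0 \Rightarrow \varphi'(y) = \mu,$$

$$\frac{\partial \mathcal{L}}{\partial m} = 0 \Rightarrow f(x) - x f'(x) = \lambda, \qquad \frac{\partial \mathcal{L}}{\partial n} = 0 \Rightarrow \varphi(y) - y \varphi'(y) = \lambda.$$

The first pair is Wicksell’s condition f’(x) = φ’(y): the marginal product of labour, and hence the wage $\mu$, is equalized across the two methods of cultivation. The second pair is f(x) - xf’(x) = φ(y) - yφ’(y): the rent per acre $\lambda$ is also equalized. Nothing in these conditions fixes the level of the common wage, which is why Wicksell can conclude that the total product is maximized while wages may nevertheless fall.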




The idea that the whole debate between theorists of technological unemployment and theorists of automatic compensation could develop only because of the lack of professionalization of nineteenth-century economists becomes widely accepted in academia. For instance, Schumpeter (2006: 652) concludes that “[t]he controversy that went on throughout the nineteenth century and beyond, mainly in the form of argument pro and con ‘compensation,’ is dead and buried: as stated above, it vanished from the scene as a better technique filtered into general use which left nothing to disagree about.”

To be precise, the controversy was “dead and buried” only for the economists of the neoclassical school (Montani 1975). For non-orthodox economists, in the folds of the calculations, a bleak thesis (to say the least) was hidden: if the Luddites attributed the ‘fault’ of unemployment to machinery, and the Marxists to the capitalist system of exploitation, the neoclassical economists unloaded it onto workers who were not satisfied to work for a mess of pottage, or onto those social-democratic governments that imposed a minimum hourly wage so that workers could at least survive.

7. The Keynesians: Technological Unemployment as a Fact

The hegemony of neoclassical economics in academia seemed unassailable when a game changer entered the spotlight: the devastating economic crisis of 1929. A new paradigm, the Keynesian one, was destined to take hold in political and scientific circles. Challenging the orthodoxy, in a 1930 article published in The Nation, John Maynard Keynes reintroduces the concept of technological unemployment into economic discourse. Quite curiously, he speaks about it as a new disease, as
if Ricardo and Marx had never discussed the issue before. These are his words: “We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come – namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour. But this is only a temporary phase of maladjustment. All this means in the long run that mankind is solving its economic problem” (Keynes 1963: 325).

Keynes is neither a pessimist nor a Luddite. He sees in technological progress a great resource for humanity. He is convinced that technological unemployment is only a temporary illness. This is because he is confident in the possibility of solving the problem with appropriate public policies, starting with a drastic reduction of working hours. In the same article, the English economist forecasts that “in the course of our life” (that is, in the space of a few decades), we will see ongoing social reforms that will lead us to work three hours a day, five days a week, for a total of fifteen hours per week, with incomes unchanged. In short, it seemed reasonable to solve the economic crisis by implementing a simple formula: work less, so that all may work. That is, by evenly redistributing the benefits of technological progress.

During the Great Depression, other outstanding scholars focus on the problem of technological unemployment. In August 1930, Paul H. Douglas publishes an article entitled “Technological Unemployment” in the American Federationist, but only to say that the introduction of labor-saving improvements cannot cause permanent unemployment. He maintains that we should rather expect an “automatic” absorption of fired workers into employment, because the demand of employers and those workers still employed is destined to grow as a result of the
reduction of costs per unit of output due to technological improvement. One year later, Alvin Hansen responds to Douglas with an article entitled “Institutional Frictions and Technological Unemployment”, appearing in The Quarterly Journal of Economics (1931). Here, Hansen accuses Douglas of reviving the old doctrine of J. B. Say, James Mill, and David Ricardo (meaning the first and second editions of the Principles), and in particular the grave fallacy of compensation theory. Quite significantly, Hansen was not yet “the American Keynes” at the moment when he published this article. He still defended the orthodox theory in 1937, when he occupied the chair of Political Economy at Harvard University. His conversion to Keynesianism happened later, but here we can see that there was already a convergence on the issue of technological unemployment.

The 1930s polemic does not end here. Gottfried Haberler (1932: 558) immediately comes to the defense of Professor Douglas, “for it would be deplorable if an ungrounded hostility and suspicion against technological progress should be aroused or intensified.” That ‘temporary’ technological unemployment exists seems not in doubt even among defenders of the orthodox theory. The question is whether ‘permanent’ technological unemployment exists.

Ten years later, Hans P. Neisser upgrades technological unemployment from concept to theory. Indeed, these two words express a causal relation, and therefore a law. More precisely, Neisser (1942: 50) laments that “the theory of technological unemployment is a stepchild of economic science.” Reading the following lines, we understand that, for this scholar, there is perfect adherence between this neglected theory and the ‘facts’. Permanent technological unemployment is not only a useful
theoretical concept. It is a real phenomenon. Thirteen years after the 1929 crisis, in spite of compensation theory, there are still masses of involuntarily unemployed workers: “The facts seem to stand in such blatant contradiction to orthodox doctrine, according to which no ‘permanent’ technological unemployment is possible, that most American textbooks prefer not to mention the problem itself” (ibid.). What is more important is that this ‘silence’ is unprecedented. Neisser also reminds the readers that “[t]he analysis to which Ricardo subjected the displacement of labor by the machine in the last edition of the Principles had stimulated a lively discussion among the later classical economists…” (ibid.). The discussion died down because of the rise of neoclassical equilibrium analysis. However, Neisser correctly underlines that this ‘silence’ concerns only “Anglo-Saxon literature.”

Everett Hagen (1942: 553) also remarks that only “[t]wo papers in American economic journals of the past eleven years have addressed themselves exclusively to the correction of errors in the prevailing analysis of technological unemployment.” He means the one written by Hansen in 1931 and the one published by Neisser in 1942. He recognizes that Neisser makes a “definite contribution,” but he also reproaches him for having completely ignored Hansen and for having written an article in the “post-Keynesian period” that fails “to apply to the problem at hand the theory of saving and investment as determinants of employment.” Hagen gives himself the task of filling the gap.

Indeed, the debate is much richer than it seems. First of all, it also takes place in books, and not only in articles published in economic journals. An example is the book Value and Capital by John R. Hicks. The first edition appears in 1939. The second edition is published in 1946 and, afterwards, is reprinted many times. Here the term ‘technological unemployment’ appears
only on page 291, but the concept to which the term refers is also discussed in other parts of the book. The author stresses the fact that technology may produce unemployment only in specific situations, for instance, “that in which the new equipment, which has been produced, is ‘labour-saving’; in this case there is a fall in the demand for labour, as a result of the whole process, relatively to the situation which would have arisen if no capital had been accumulated at all.” In other words, “there is not necessarily a fall in the demand for labour at all; there will be if early inputs and late inputs of labour are substitutes, but not if they are complementary” (Hicks 1946: 291).

Another book assessing the problem very seriously is The Path of Economic Growth, published in 1976 by the German economist and sociologist Adolph Lowe. Here the term ‘technological unemployment’ appears many times throughout the book. Besides, being also a sociologist, Lowe is capable of keeping his distance from the main economic schools (neo-classical, neo-Marxian, Keynesian) in order to assess the controversy from a different point of view: “By centering our investigation of the traverse on the compensation of technological unemployment, we emphasize an issue the relevance of which is highly controversial. It has been debated for more than 150 years and, considering the secular employment trend over this period, it is not surprising that, in the view of the majority of experts, technological unemployment is today regarded as perhaps an occasional irritant but not as an ever-present threat to the stability of the system. Moreover, in the heat of polemics, the arguments on either side have occasionally been overstated. What is still worse, the basic question at issue has been blurred. This question is neither whether, as a rule, nonneutral innovations initially create unemployment (they do) nor whether, given sufficient time, compensation is possible (it certainly is). The question is whether
a free market is endowed with a systematic mechanism that assures compensation within the Marshallian short period, thus precluding any secondary distortions that could upset dynamic equilibrium” (Lowe 1976: 250).

The literature on the topic appears much richer if we also take into account books and articles written in other languages. For instance, though a technological optimist, the French economist Jean Fourastié wrote much on the risk of mass technological unemployment (1949, 1954). Given the parameters of this work, however, we decided to limit our analysis to a few contributions in the English language. More details about the debate on technological unemployment in Anglo-Saxon culture, with particular attention to the interwar period, can be found in the works of Gregory R. Woirol (1996, 2006). To put it briefly, while marginalist economists keep denying the problem of technological unemployment, Keynesians are sure that the problem exists, but they are also confident that it can be solved with appropriate public policies.

8. Reaganomics: The New Denial

After the Great Depression – which ended many years later thanks to Franklin Delano Roosevelt’s New Deal (according to the Keynesians) or to the Second World War (according to the Austrian School) – it seemed impossible that humankind could return to laissez-faire capitalism. Nonetheless, the return of the neoliberal paradigm succeeded, a few decades later, with the arrival of Margaret Thatcher at Downing Street in 1979 and of Ronald Reagan at the White House in 1981.

What followed from their policies was not, of course, the end of work, that is, the permanent global unemployment of the masses. In spite of the fact that amazing innovations – innovations
tions that in the 1930s belonged only to the sphere of science fiction – have been introduced in the productive system, there are still jobs around. However, it must be adequately stressed that the danger of chronic unemployment has been averted only thanks to the flexibility of salaries and job market, in full accordance with the theory of marginal analysis. To give just an example of the new attitude toward automation and unemployment, I will quote some fragments from the article “Does More Technology Create Unemployment?” by R. H. Mabry and A. D. Sharplin, which appeared in 1986. This is the incipit: “Each new generation brings the reemergence of many of the fears of the past, requiring the repetition of old explanations to put them to rest. Today there is a renewed concern that technological advancement may displace much of the manufacturing (and other) work force, creating widespread unemployment, social disruption, and human hardship. For example, in 1983 the Upjohn Institute for Employment Research forecast the existence of 50,000 to 100,000 industrial robots in the United States by 1990, resulting in a net loss of some 100,000 jobs” (Mabry and Sharplin 1986). The authors intend to refute “all these claims and predictions and the rhetoric that surrounds them.” They call rhetoric the discourse strategy of the Keynesians, but in fact their textual approach to the problem presents also the typical rhetoric of scientific discourse. For instance, they try to present themselves as equidistant from both conservatives and progressives – and therefore somewhat neutral or purely scientific. Indeed, they explicitly distance themselves from “conservative economic thinkers”, who “tend to disparage persons who fear the rapid advance of technology by labeling them ‘Luddites’.” This is said to be a term “both unfair and inaccurate.” However, a few lines below, they seem to justify the characterization of progressives as Lud-
dites. They state that at least “[i]n part, opposition to technology springs simply from a more or less visceral fear of scientism, which is often taken to imply the dehumanization of humankind.” Again, they try to regain a fair position in the debate by recognizing that “the warnings heard today are thoughtful and well intentioned”, but, in the same sentence, they immediately underline that the theorists of technological unemployment are “often in error or somewhat self-serving.” This narrative implies that the deniers of technological unemployment are not self-serving. After a few sentences aimed at showing a more balanced attitude toward the problem, Mabry and Sharplin simply restate the standard position of orthodox political economy: “Flatly in error are those that predict no more jobs for a very large sector of the population as a result of advancing technology, creating a massive problem of involuntary unemployment. It is not at all clear that a large number of jobs are about to be destroyed; even if they were, such long-run unemployment as would occur would certainly not be involuntary. Rather, it would take the form of even shorter work days, shorter work-weeks, and fewer working members in the family, as it has throughout our history. Some who correctly anticipate that technological change may produce short-run employment-adjustment problems overstate those problems. They also often fail to mention that the short-run unemployment that occurs is primarily the result of artificial imperfections -- a lack of competition -- in certain labor and product markets.” Briefly, according to the authors, there is not long-run involuntary unemployment, while short-run unemployment is not caused by technological advancement but by public policies. In a regime of laissez-faire capitalism, people would immediately
find new jobs and enjoy technological advances by working less and earning more. 8. Artificial Intelligence: The Specter of a Jobless Society Has this 1980s prophecy been fulfilled in the following decades? By the end of the twentieth century, a legion of social scientists answers negatively to the question. The specter of a jobless society reappears in books such as The End of Work by Jeremy Rifkin (1995), Progress without People by David F. Noble (1995), and Turning Point by Robert U. Ayres (1998). The alarm takes a larger magnitude if we consider also the publications in other languages. For instance, Italian sociologist Luciano Gallino has written much, in his own mother tongue, about technological unemployment (1998, 2007). The narrative of this wave of social criticism can be summarized as follows: the introduction of computers and robots in factories and offices, in the last forty years, has led to the enrichment of a minority and the insecurity and impoverishment of the majority. There are still jobs on the market, because machines, at their present stage of development, cannot completely replace labor. They can only complement it. Jobs that do not disappear completely are those involving a physical effort that cannot be defined by a tractable list of rules and, therefore, cannot be easily implemented in a machine, or those that are so humble and low paid that, even when their automation is technically possible, it is still more economical to hire humans. However, it is just a matter of time. In the near future, machines will be able to replace humans in any activity. Therefore, a profound reform of our society is needed and urgently. Social scientists with this viewpoint have occasionally attracted the accusation of ‘intellectual Luddism.’ A similar accu-
sation could not, however, be raised against a second wave of social criticism arising a few years later, given that its exponents are mainly engineers and computer scientists. An explosion of publications on Artificial Intelligence, seen as the demiurge of a jobless society, takes place after the 2008 financial crisis. Authors like Martin Ford (2009, 2015), Erik Brynjolfsson and Andrew McAfee (2012, 2016), Stan Neilson (2011), and Jerry Kaplan (2015), just to mention a few, are deeply convinced that technology is a ‘good thing,’ but that it cannot but render human beings obsolete. Therefore, the only way to avoid an epochal catastrophe is to redesign our societies from the foundations up, in order to make room for both humans and machines. These authors tend to underline that ours is an epoch of painful transition, but that a ‘golden age’ of humankind is visible on the horizon. We just need to realize that technology is not merely a tool of this or that politico-economic system, but rather the actual primum movens of human history – a primum movens which requires its own politico-economic system to work at its best. The introduction of a basic income guarantee (BIG) – that is, an income to be assigned unconditionally to all citizens of industrial countries – is among the various proposed solutions (Hughes 2014, Campa 2014b). The idea of a radical societal change, which had been buried for a few decades in the cemetery of dead ideas, could be resurrected thanks to the crisis of neoliberalism following the 2008 global financial collapse. A crisis that, in the words of sociologist Luciano Pellicani (2015: 397), “has demonstrated the technical – as well as moral – absurdity of the neoliberal paradigm, centered on the idea of self-regulated market.” One might add that the markets are self-regulating only for the lower classes, given that bankers and capitalists can systematically count on bailouts and public money when something goes wrong.
Among the signs that what Ludwik Fleck called Denkkollektiv is changing, we can mention the Nobel Prize for economics assigned in 2008 to Keynesian economist Paul Krugman, who afterwards has also expressed his worries about technological unemployment (2013). Or, perhaps, the planetary success of a book like Capital in the 21st Century by Thomas Piketty (2013). All the optimism of the 1980s has vanished. According to the above-mentioned analysts, the present transition phase is characterized by involuntary unemployment due to automation and precarious jobs due to flexibility policies. True, many jobs have not yet been automatized. In the tertiary sector, we observe a proliferation of caregivers assisting elderly and disabled at home, bellhops, call center operators, waiters, fast foods workers, pizza deliverers, employees of cleaning companies, atypical taxi drivers, external collaborators with VAT registration, refuse collectors, private mail carriers, storekeepers, shop assistants, etc. In many cases, employers still find it more cost effective to hire uneducated workers or desperate immigrants than mechanizing these jobs (assuming that a machine is available or can be designed to do it). However, what is clear is that all-life and full-time jobs – such as jobs in large factories and public offices – which used to be the prerogative of middle class workers, have significantly shrunk in number as in the level of remuneration. Observers seem to be amazed at this phenomenon, as illustrated by a recent article published in The Wall Street Journal: “The typical man with a full-time job–the one at the statistical middle of the middle–earned $50,383 last year, the Census Bureau reported this week. The typical man with a full-time job in 1973 earned $53,294, measured in 2014 dollars to adjust for inflation. You read that right: The median male worker who was employed year-round and full time earned less in 2014 than a similarly sit-
uated worker earned four decades ago. And those are the ones who had jobs” (Wessel 2015). This is what we read in ‘the bible of capitalism,’ not in a blog of angry radicals. However, it is not surprising that today workers earn on average less than their fathers or grandfathers, despite all the progress made by humanity in the meantime, if we keep in mind that the theory of compensation does not say that thanks to technological progress we will all live happily ever after. The theory says that there will be no mass unemployment, provided that governments guarantee wage flexibility. The negative side effect of this policy becomes what we might call ‘technological impoverishment.’ Moreover, the automation of the tertiary sector is also relentlessly taking place. We already hear of pizza delivery by means of drones, of autonomous vehicles on the roads, of surgical interventions performed by robots, etc. Occasional domestic helpers have been replaced by cleaning robots in many homes, software substitutes for lawyers (Pasquale & Cashwell 2015), the robotization of the military is in a very advanced phase (Campa 2015), and the automation of social work has also started (Campa 2016). So, it is not surprising that the specialist economic literature is now taking the issue of technological unemployment seriously (Feldmann 2013, Feng & Graetz 2015). This does not mean that compensation theory has disappeared from public discourse, but even those analysts still moving in the wake of orthodox economics do not dismiss the hypothesis of mass technological unemployment when talking about the future. For instance, in May 2013, the McKinsey Global Institute published a detailed study of a dozen new technologies defined as ‘disruptive’ for their potential impact on the economy. The report is generally optimistic, because it focuses on the opportunities offered by technological advances to big corporations. However,
it also recognizes that “productivity without the innovation that leads to the creation of higher value-added jobs results in unemployment and economic problems, and some new technologies such as the automation of knowledge work could significantly raise the bar on the skills that workers will need to bring to bear in order to be competitive” (Manyika 2013: 151). In a 164-page report, the word ‘unemployment’ appears only once, but at least there is no denial of the problem. The report assumes that policy makers can limit the negative side effects of advanced robotics and automated knowledge work by improving and renewing education. In other words, they “should consider the potential consequences of increasing divergence between the fates of highly skilled workers and those with fewer skills,” and keep in mind that “[t]he existing problem of creating a labor force that fits the demands of a high-tech economy will only grow with time” (ibid.). This is the old recipe of neoliberalism: one does not need the redistribution of wealth to cope with unemployment and impoverishment; one just needs better educated citizens and workers. If in the short term workers may experience problems, in the long term innovation will result in the creation of new higher value jobs. The report maintains that also workers will take advantage of automation. Nonetheless, it is easy to demonstrate that these ‘potential benefits’ could be turned into ‘potential threats’ by simply expressing them with different words. Let us give an example. At page 7, we read what follows: “It is now possible to create cars, trucks, aircraft, and boats that are completely or partly autonomous. From drone aircraft on the battlefield to Google’s self-driving car, the technologies of machine vision, artificial intelligence, sensors, and actuators that make these machines possible is rapidly improving. Over the coming decade, low-cost, commercially available drones and submersi-
bles could be used for a range of applications. Autonomous cars and trucks could enable a revolution in ground transportation— regulations and public acceptance permitting. Short of that, there is also substantial value in systems that assist drivers in steering, braking, and collision avoidance. The potential benefits of autonomous cars and trucks include increased safety, reduced CO2 emissions, more leisure or work time for motorists (with handsoff driving), and increased productivity in the trucking industry” (Manyika 2013: 7). As you can see, McKinsey analysts predict a remarkable productivity growth and, among benefits, more free time or working hours for motorists, due to lower mental and physical fatigue. By using a most brutal language, we may say that the ‘benefits’ for workers will be more unemployment or exploitation. 9. Conclusions This debate seems to teach us that, in a laissez faire capitalist economy, the choice boils down to two perspectives: 1) if one introduces policies to safeguard the standard of living of workers by establishing that the minimum wage cannot fall below a certain threshold (moderate left policy), the system produces ‘technological unemployment;’ 2) if it is established that the government must not interfere in negotiations between capitalists and workers, letting the market decide wage levels (moderate right policy), the system produces ‘technological impoverishment.’ All this happens when an impressive technological development may potentially improve the life condition of everybody. Thus, contemporary society seems to be inherently characterized by a ‘technological paradox.’
Traditional political forces converge on the idea that improving education could be the ‘weapon’ with which to counter technological unemployment. However, not much attention is paid to the fact that Artificial Intelligence develops exponentially and not only promises to further reduce the workforce in manufacturing, but will also begin to erode the work of specialists in the service sector. In the near future, unemployment could concern economic actors who have attended higher education institutions and invested much time and money to acquire their professional skills, such as journalists, physicians, teachers, lawyers, consultants, managers, etc. Typically, those who draw attention to the ‘technological paradox’ characterizing our society are immediately silenced with a rather trivial argument: the historically known alternatives to capitalism – namely feudalism, fascism, and communism – have failed. But this is stating the obvious. To defuse this rhetorical argument, the paradox can be better expressed by the following question: how can it be that sentient beings capable of inventing quantum computers and creating artificial life fail to come up with a new system of production and consumption in which these and other innovations, if they cannot benefit all individuals to the same extent, are at least not detrimental to the majority?

Chapter 2

Automation, Education, Unemployment: A Scenario Analysis

1. The McKinsey scenario In 2013, the McKinsey Global Institute published a report entitled: Disruptive Technologies: Advances that will transform life, business, and the global economy. It is a picture of the near future based on the analysis of technological trends. According to the report, societies and policy makers need a clear understanding of how technology might shape the global economy and society over the coming decade, in order to deal with risks and opportunities offered by new technologies. Precisely, “they will need to decide how to invest in new forms of education and infrastructure, and figure out how disruptive economic change will affect comparative advantages” (Manyika 2013: 1). In general terms, the McKinsey’s scenario is optimistic. It shows that the technologies on their list “have great potential to improve the lives of billions of people” (Ibid.: 18). It quantifies the potential economic impact of new technologies on the order of $14 trillion to $33 trillion per year in 2025. However, the report is mainly designed to meet the needs of big corporations. New technologies appear to be an opportunity mainly for the owners of capital. Indeed, the report admits that the future may bring also some negative side effects for other social classes. It recognizes that the benefits of technologies may not be evenly distributed. That is, “progress” could contribute to widening in-
come inequality, because the automation of knowledge work and advanced robotics could replace the labor of some less skilled workers with machines and, therefore, create disproportionate opportunities for capitalists and highly skilled workers (Brynjolfsson, McAfee 2011). In other words, disruptive technologies may generate “technological unemployment,” opening the door to a scenario in which the rich get richer and the poor get poorer. This admission does not affect the positive picture of the future elaborated by McKinsey’s analysts. This is because they are convinced that technologies can change anything but the politico-economic order. The globalized free market economy – with politics assuming an ancillary role to it – will always be the frame in which disruptive technologies will display their potential and their power. Therefore, the “medicines” needed to remove unwanted side effects are those already used in the past. In this specific sense, the picture of the future produced by McKinsey’s analysis, behind the fireworks of amazing technological innovations, is still quite “conservative”. First of all, they still trust the old “compensation theory” of classical political economy: any job lost because of machines will reappear in the sector of machine builders, if the job market is flexible enough. This is their narrative: “As with advanced robotics, these technologies could also create jobs for experts who can create and maintain the technology itself” (Manyika 2013: 49). Secondly, they believe that “over the long term and on an economy-wide basis, productivity growth and job creation can continue to grow in tandem, as they generally have historically, if business leaders and policy makers can provide the necessary levels of innovation and education” (Ibid.: 27). In other words, they do not deny the necessity of a government intervention, but
they seem to circumscribe this intervention in the realm of education. Brief, more and better education will solve the temporary problem of technological unemployment, as it happened in the past. This concept is repeated in different parts of the report. It is stressed that the problem of income inequality and unemployment “places an even greater importance on training and education to refresh and upgrade worker skills and could increase the urgency of addressing questions on how best to deal with rising income inequality” (Ibid.: 16). Besides, it is stressed that this solution can be profitable also to capitalists. Actually, McKinsey analysts do not address directly policy makers. They rather ask capitalists to exert pressure on policy makers in order to achieve this result. In their words: “Companies will need to find ways to get the workforce they need, by engaging with policy makers and their communities to shape secondary and tertiary education and by investing in talent development and training” (Ibid.: 21). What type of education is needed, in order to meet the needs of big corporations in 2025? Once again, the recipe is the same of the past: more math, more science, more engineering: “The spread of robotics could create new high-skill employment opportunities. But the larger effect could be to redefine or eliminate jobs. By 2025, tens of millions of jobs in both developing and advanced economies could be affected. Many of these employees could require economic assistance and retraining. Part of the solution will be to place even more emphasis on educating workers in high-skill, high-value fields such as math, science, and engineering” (Ibid.: 77). In McKinsey’s scenario, education is not only the recipe to eliminate the unwanted side effects of development. It is also a field that benefits from technological development. Namely, “Cloud computing and the mobile Internet, for example, could
raise productivity and quality in education, health care, and public services” (Ibid.: 18). Learning would improve both inside and outside classrooms. Therefore, there exists the possibility to activate a virtuous circle thanks to hybrid online/offline teaching models: “Based on studies of the effectiveness of hybrid teaching models that incorporate mobile devices in instruction, drills, and testing (alongside traditional classroom teaching), an improvement in graduation rates of 5 to 15 percent could be possible. This assumes a gradual adoption rate, with most of the benefit coming closer to 2025, when more students will have benefited from online learning via tablets for most of their K–12 years” (Ibid.: 35). The new approach would obviously also affect higher education, as well as government and corporate training. According to the report, such hybrid models could improve productivity by 10 to 30 percent. In conclusion, “Over the next decade, most types of education and training could adopt Internet-based hybrid education, affecting billions of individuals. The share delivered via mobile devices could have economic impact of $300 billion to $1 trillion annually” (Ibid.: 36). The picture is not complete. Another game changer is mentioned by the McKinsey report: the automation of knowledge work. Knowledge work automation is defined as “the use of computers to perform tasks that rely on complex analyses, subtle judgments, and creative problem solving” (Ibid.: 41). The advances in computing technology (in particular, memory capacity and processor speeds), machine learning, and natural user interfaces (i.e. speech recognition technology) make now knowledge work automation possible. The main novelty of knowledge work automation is that it creates a new type of relationship between knowledge workers and machines. Workers interact with machines exactly in the same way they would interact with
coworkers. For instance, “instead of assigning a team member to pull all the information on the performance of a certain product in a specific market or waiting for such a request to be turned into a job for the IT department, a manager or executive could simply ask a computer to provide the information” (Ibid.). Computers of the new generation will also display the ability to “learn” and make basic judgments. The possibility to interact with a machine the way one would with a coworker is illustrated by the report with the following micro-scenario: “Box 6. Vision: The power of omniscience. It’s 2025 and you arrive at your desk for another day at work. As you take your seat, the day’s appointments are displayed in front of you and your digital assistant begins to speak, giving you a quick rundown of the 43 new posts on the departmental communications site. Three are important for today’s meetings; the rest will be summarized by the system and sent in the daily report. The assistant notes that all the reports and multimedia presentations have been uploaded for your meetings. Now it’s time for the tough part of the day: your doctor appointment. You received a request for an appointment yesterday when your biosensor alerted your digital physician to a change in your blood pressure. Your vital signs are scanned remotely, and the system cross-checks this information with journal cases, your family’s history of hypertension, your diet and exercise routines, and the vital signs of other men your age. Good news: “You don’t need drugs, but you do need to stop eating fast food and skipping the gym,” your computerized doctor says. Relieved, you stop at the gym on the way home and ask your mobile device to order a salad to be delivered when you get home” (Ibid.). Setting aside the legal and ethical problems related to the transfer of decision powers to computers (it would be hard to find a subject responsible if a computer were to perform an in-
appropriate diagnosis or provide the wrong therapy advice to a patient), the potential impact of knowledge work automation on employment seems to be quite significant. Notably, McKinsey’s analysts seem to see mainly the positive side of the coin. This happens because – as we already said – they observe the process from the point of view of large corporations. One of the main problems for corporations is the cost of labor. That is why they benefit from the globalization of markets and the offshoring of production activities. The report emphasizes that, at present, employers spend $33 trillion a year to pay employees. Total global employment costs – given current trends – seem destined to reach $41 trillion by 2025. Focusing on the subset of knowledge worker occupations, employment costs can be estimated at around $14 trillion by 2025. McKinsey’s analysts remark that knowledge workers, such as managers, professionals, scientists, engineers, analysts, teachers, and administrative support staff, “represent some of the most expensive forms of labor and perform the most valuable work in many organizations” (Ibid.: 42). Therefore, we may expect that the rapid advances in knowledge work automation, by reducing costs and boosting performance, will make these technologies more attractive to the owners of capital. This is the forecast: “In advanced economies, we estimate annual knowledge worker wages at about $60,000, compared with about $25,000 in developing economies, and project that increased automation could drive additional productivity equivalent to the output of 75 million to 90 million full-time workers in advanced economies and 35 million to 50 million full-time workers in developing countries” (Ibid.: 43). Would these knowledge workers just lose their jobs? Or would they keep their jobs and enjoy the augmentation of their capabilities thanks to technology? The report offers a mixed re-
sponse but in general, there seems to be faith in a positive outcome of the whole process, thanks to the self-regulative mechanisms of the markets and the wisdom of policy makers.
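To make the scale of these figures concrete, the back-of-envelope sketch below multiplies the productivity equivalents quoted above by the report’s average wage estimates. The ranges come from the passage just cited; the multiplication itself is only an illustrative calculation of orders of magnitude, not the methodology McKinsey actually used.

```python
# Illustrative back-of-envelope calculation (not McKinsey's own methodology):
# rough dollar value of the "worker-equivalent" productivity that knowledge
# work automation could add, using the wage estimates quoted in the report.

ADVANCED_WAGE = 60_000      # estimated annual knowledge worker wage, advanced economies (USD)
DEVELOPING_WAGE = 25_000    # estimated annual knowledge worker wage, developing economies (USD)

# Productivity equivalents quoted in the report (full-time workers, low/high)
advanced_equivalents = (75_000_000, 90_000_000)
developing_equivalents = (35_000_000, 50_000_000)

low = advanced_equivalents[0] * ADVANCED_WAGE + developing_equivalents[0] * DEVELOPING_WAGE
high = advanced_equivalents[1] * ADVANCED_WAGE + developing_equivalents[1] * DEVELOPING_WAGE

print(f"Implied annual value: ${low / 1e12:.1f} to ${high / 1e12:.1f} trillion")
# Roughly $5.4 to $6.7 trillion per year, i.e. a large share of the
# $14 trillion knowledge-work wage bill projected for 2025.
```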

2. What is missing in this picture?

One aspect that has not been adequately stressed in the report is that workers are consumers. If workers evaporate or salaries shrink, we can expect negative feedback on the economy as a whole. Corporations would find it difficult to sell their products and services. True, goods can also be exported, so a country may suffer internally while the owners of capital keep increasing their income. But, in democratic systems, people vote. Therefore, we cannot exclude repercussions on the political system. A system change would render ipso facto inadequate all the forecasts about economic gains and losses. The signals of a system change are already visible. Brexit is the most obvious example, but “no global” (anti-globalization) movements and parties are growing, both on the left and the right of the political spectrum, in many European countries and in North America. Besides, McKinsey’s analysts seem to be perfectly aware that the full automation of manual and knowledge work may render obsolete the present offshoring strategy. Corporations move their factories and offices to countries where the cost of labor is lower and the job market is more flexible. When human workers become (almost) completely superfluous and are replaced by Artificial Intelligence, the offshoring trend may stop and reverse. Factories and offices could move back to the USA and Western Europe. However, this “reflux” will not generate jobs in Western countries, and may contribute to increased unemployment in China, India, Eastern Europe, and all the countries that are presently hosting
the production units of corporations. This process could undermine the export strategy without revitalizing the internal jobs and goods market. Can a better and different education be the response to these economic, political, and social problems? One month after the appearing of the report, Nobel Prize winner Paul Krugman (2013) published an article quite significantly entitled: “Sympathy for the Luddites.” Krugman is obviously not a Luddite, nor is a Luddite the author of this article (rather the opposite, I would say). However, social problems cannot be denied only because we may be fascinated by technological developments. The American economist maintains that new technologies are qualitatively different from the technologies that made the first and the second industrial revolutions. In what has been called the third industrial revolution (Campa 2007), machines seem to be able to replace not only manual workers, but also knowledge workers, that is, not only proletarians but also the bourgeoisie (or, to use a less ideologically laden concept, the middle class). But can a society function when not only the lower classes struggle to survive, between low paid jobs and crime, but the whole middle class slips into this precarious condition? “Until recently, the conventional wisdom about the effects of technology on workers was, in a way, comforting. Clearly, many workers weren’t sharing fully — or, in many cases, at all — in the benefits of rising productivity; instead, the bulk of the gains were going to a minority of the work force. But this, the story went, was because modern technology was raising the demand for highly educated workers while reducing the demand for less educated workers. And the solution was more education. […] Today, however, a much darker picture of the effects of technology on labor is emerging. In this picture, highly educated workers are as likely as less educated workers to find themselves dis-
placed and devalued, and pushing for more education may create as many problems as it solves” (Krugman 2013: 27). Indeed, the McKinsey report clearly indicates that some of the victims of disruption will be knowledge workers – that is, workers who are currently considered highly skilled. Knowledge workers are the “product” of higher education. They have invested much time and money in acquiring their skills. The automation of knowledge work means that in 2025, on a massive scale, software will do things that used to require college graduates. Employment in manufacturing has constantly fallen in recent decades because of industrial robotics, and this trend seems to be unstoppable. But advanced robotics could also replace medical professionals, teachers, managers, clerks, and other skilled workers. “Education, then, is no longer the answer to rising inequality, if it ever was (which I doubt)” – Krugman concludes. So what is the answer? According to Krugman, “[t]he only way we could have anything resembling a middle-class society would be by having a strong social safety net, one that guarantees not just health care but a minimum income, too” (Ibid.). In other words, an advanced societal system should start paying citizens purely for the fact that they are citizens. In future societies, people could be paid to consume goods and services, not to produce them. Work may become obsolete. This scenario deserves to be explored in detail.

3. An alternative scenario In an article entitled Technological Growth and Unemployment: A Global Scenario Analysis, I presented four possible scenarios related to work automation: 1) the unplanned end of work scenario, in which jobs evaporate as an effect of free market econ-
omy; 2) the planned end of the robot scenario, in which a Luddite solution prevails; 3) the unplanned end of the robot scenario, in which deindustrialization happens to be the unwanted result of wrong public policies; 4) the planned end of work scenario, in which governments decide to fix the problem of technological unemployment through the anticipated retirement of the entire human race (Campa 2014b). In this section I will briefly present the ‘planned end of work scenario’ as a possible alternative to the future envisioned by McKinsey’s analysts. Then, in the fourth and last section, I will explore the role that education may have in that alternative scenario. The reasons why I focus here only on the fourth scenario are twofold. Firstly, it seems to me the most plausible one. Secondly, I think it is the most desirable one – if I am allowed to express a value judgement. I will not repeat here the philosophical and political reasons that led me to consider this scenario the most desirable, having already discussed the problem in other writings, including Technological Growth and Unemployment. Here I will focus on feasibility. The planned end of work scenario is plausible, because we can already observe steps in that direction. The introduction of a universal basic income (hereafter UBI), to be paid unconditionally to all citizens, is a project already being considered by governments in Finland, Switzerland, the Netherlands, France, and the UK. The same idea has been proposed by the Five Star Movement – presently the biggest opposition political force in Italy. Finland seems to be the country at the forefront (Sandhu 2016). The Finnish Social Insurance Institution (Kela) has announced that in November 2016 it is to begin drawing up plans for a citizens’ basic income model. A press release specifies that a full-fledged basic income would net Finns 560 euros a month. An ex-
periment involving 2,000 citizens should start in 2017. If it works, it will be extended to all citizens (Kela 2016). Tim Worstall, in Forbes, states that “[i]t’s hugely important that everyone, simply as of right (whether you call it the right of residence or citizenship is up to you), gets this payment. As is also that it’s not taxable, nor is it conditional” (Worstall 2015). The hope is that citizens would keep working, either accepting precarious or part-time jobs, or starting small businesses to improve their income. In the UK, an Early Day Motion on UBI, proposed on 20 January 2016 by Green Party MP Caroline Lucas, asked the government to commission research into the idea’s effects. According to Lucas, there could be three main benefits for the UK. Firstly, “[t]he basic income offers genuine social security to everyone and sweeps away most of the bureaucracy of the current welfare system.” Secondly, a UBI would protect people “from rising insecurity in our increasingly ‘flexible’ labour market and help rebuild our crumbling welfare state.” Thirdly, “the stability of a basic income could be a real boost to freelancers and entrepreneurs who need support to experiment, learn and take risks, while keeping their heads above water” (Stone 2016). UBI is just one of the possible responses to the increasing level of automation, and it is probably the only solution that would save the capitalistic system from a possible collapse. The owners of capital need consumers in order to keep producing, competing, and accumulating income. Alternative, more radical solutions have already been proposed in the past to fix the problem of technological unemployment. In the 19th century, as is well known, Karl Marx proposed the socialization of the means of production. In the 20th century, the socialist solution was experimented with in many countries around the world. When the robotization of the car industry took place on a massive scale, a differ-
ent but still radical solution was proposed: “In the early 1980s James Albus, head of the automation division of the then-National Bureau of Standards, suggested that the negative effects of total automation could be avoided by giving all citizens stock in trusts that owned automated industries, making everyone a capitalist. Those who chose to squander their birthright could work for others, but most would simply live off their stock income” (Moravec 1999: 183). Making everyone a capitalist seems to be different from making everyone a socialist only in a nominal sense. The focus is on individual citizens instead of collective entities such as nation-states, but what we are considering is still a form of public ownership of the means of production. In brief, Albus proposes a kind of socialist-capitalist hybrid system. UBI can instead be seen as the “social-democratic” solution to the problems of precariousness, decreasing incomes, and unemployment. Is the ‘planned end of work scenario’ just a utopia? We should clarify that the expression ‘end of work’ is hyperbolic. It would be more correct to say that, in the near future, we may encounter the end of traditional work. Most people would still work in order to increase their income, but in a different way, for instance by running small businesses. This would in any case mark the gradual disappearance of salaried work as we know it. Presently, we live in a paradoxical situation. 21st-century citizens work more and earn less than 20th-century citizens, in spite of all the technological advances made in the last century. This means that the owners of capital benefit from robots, computers, and other technologies more than their salaried workers. Let us also remember that in pre-industrial societies there were far fewer working hours than today. Before the industrial revolution, workers were mainly employed in agriculture, and therefore they would work only in certain months of the year, only
during the daylight, and they benefited from more religious holidays. That is why, in Technological Growth and Unemployment, I concluded that “[t]here is no reason why a technologically advanced society should force its citizens to work harder than their ancestors, when they could work a lot less and without giving up their modern living standards” (Campa 2014b: 99).
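The fiscal scale of the Finnish proposal discussed in this section can also be roughed out with a few lines of arithmetic. The payment level (560 euros a month) and the pilot size (2,000 participants) come from the Kela announcement cited above; the population figure used for the nationwide extrapolation is my own assumption of roughly 5.5 million inhabitants, and no offsetting savings from the welfare bureaucracy that a basic income would replace are counted, so this is only an order-of-magnitude sketch.

```python
# Order-of-magnitude sketch of the Finnish basic income figures.
# Payment level and pilot size are taken from the Kela announcement cited
# above; the national population is an assumed round figure (~5.5 million),
# and no savings from replaced welfare programmes are subtracted.

MONTHLY_PAYMENT_EUR = 560
MONTHS_PER_YEAR = 12

pilot_participants = 2_000
assumed_population = 5_500_000  # assumption, not taken from the sources cited

pilot_cost = pilot_participants * MONTHLY_PAYMENT_EUR * MONTHS_PER_YEAR
national_cost = assumed_population * MONTHLY_PAYMENT_EUR * MONTHS_PER_YEAR

print(f"Pilot cost:    ~{pilot_cost / 1e6:.1f} million euros per year")
print(f"National cost: ~{national_cost / 1e9:.1f} billion euros per year (gross)")
# Roughly 13.4 million euros for the pilot and some 37 billion euros gross
# for a full rollout, before any offsetting welfare savings.
```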

4. Education in a jobless society Many economists and policy makers are in denial concerning the problem of technological unemployment. They reject this idea as the ‘Luddite fallacy.’ I traced the history of the concept of technological unemployment in another article, in Italian (Campa 2016a). The idea that there is a causal connection between the automation of work and unemployment has been denied by classical economists in the 18th century and the first half of 19th century, admitted by David Ricardo and Karl Marx in the second half of 19th century, denied again by neoclassical (or marginalist) economists at the beginning of the 20th century, reaffirmed by John Maynard Keynes and his continuators after the 1929 crisis. When Thatcher and Reagan’s neoliberalism conquered the political arena, the dominant paradigm in economics again became the neoclassical one. However, the 2008 financial crisis has given the Keynesians some good arguments to raise their heads and launch a campaign for a new paradigm change. This may explain the reemergence of the concept of technological unemployment in economic literature (Ford 2015). Let us now imagine that in 2025, advanced industrial countries automatize most manual and knowledge work and support their citizens with UBI. What type of K-12 and higher education will be implemented to make the new system work smoothly? McKinsey’s analysts state that 2025 society will need more
math, science, and engineering, but their forecast is still inside the frame of the neoclassical economic paradigm, where people need to work in order to survive. Italian writer Ippolito Nievo, in the 19th century, imagined a future society in which robots would get all the jobs and people would receive money for nothing. The result, according to him, would be an orgiastic society, where citizens would spend most of their time using (and abusing) narcotics and having sex with beautiful robots (Campa 2004). This visionary scenario cannot be excluded. People have the right to have fun and to enjoy their lives, however, a total lack of responsibility may generate a dangerous situation. A purely hedonistic society could be vulnerable to external attacks. Societies (or nation-States, if one prefers) that do not share the same values and life-style may take advantage of the situation through an aggressive foreign policy. Therefore, a permanent civil and military education of citizens (like that already implemented in Switzerland), taking a few hours every week, could be necessary to preserve a sense of community and public responsibility. To enhance a sense of community is of fundamental importance in a society where the life of citizens depends more on the belonging to that community than on individual skills. In such a society, learning math, science and engineering will certainly be important, because citizens must understand where their income comes from. They must understand the functioning of computers and robots well in order to prevent dangers coming from the misuse of machines and to contribute to the feeding of the “goose that lays the golden eggs”. Scientists and engineers will still be needed in order to maintain and develop these technologies, and they will keep projecting and building intelligent machines, even if they get the UBI. Probably, citizens still working will be less stressed by the idea of losing their jobs and a
likely consequence will be that they could not easily be blackmailed, harassed or exploited by the capitalists, since they can rely on a second source of income. But many citizens would keep studying and working to increase their income, fulfill their ambitions, improve their social status. Contrary to what the McKinsey report seems to surmise, in a totally automated society, we will not register the decline or disappearing of social sciences, fine arts, and humanities. Quite the contrary – if a UBI policy will be implemented. In a World in which all jobs that require precision, speed, effectiveness, regularity, are performed by robots, it makes more sense to acquire – in schools and universities – different types of abilities such as critical thinking, artistic creativity, philosophical understanding, social sensitiveness. Many of the small jobs that will be created by citizens will probably be related to their passions. In other words, since they will be supported by UBI, people would have a chance to turn their hobbies into businesses. Those dreaming of being a musician, a painter, a writer, a poet, a film director, a dancer, or perhaps an influential blogger, may try to tread these paths in an independent way. The Internet will give them access to tutorials, online courses, and humanrobot interactions to find advices and information, but they could still need a traditional education based on human-human interaction to improve their skills. True, a robot may paint or play music better than a human being, but an artistic performance is not based only on the fruition of an artistic product. It is also based on the admiration for the fellow human that performs. We admire the skill of a drummer, because we recognize that (s)he can do something that we cannot do. Even if a human drummer is less precise than a drum machine, a significant number of people prefer to listen to a band that still has a human being playing the drums. People pay for a ticket to see a band
playing live, even if they have at home a CD player capable of producing a qualitatively better sound than the instruments played on a stage. What we want, when we go to a concert, is to see the (human) artists performing live. And, very often, we are disappointed if the concert sounds exactly as the CD or mp3 that we have at home. We prefer a different interpretation, improvisations, unpredictable situations, an involvement of the public in the performance, even if these changes may imply some mistakes. We look for a human-human interaction, not only for a perfect sound. This human-human interaction requires a skill. The artist needs to learn not only how to sing and play, but also how to dress, speak, and move on stage. Besides, in an automated society, people will still need human-human interaction, not only in the field of entertainment but also in the field of care. A robot can help the elderly, disabled patients, people affected by depression or other mental diseases by giving them pills or physical support. Still, people with problems (especially psychological problems) need to establish a relationship with fellow humans. Very often it is the lack of a genuine relation with other humans the source of their problems. In the field of social and medical care, robots can help, but not fully replace humans. In other words, our future automated society will need social workers, even more than present society. Many small businesses will probably be in the field of social work. Perhaps, some countries will decide to employ more social workers in the public sector. Other countries may introduce a compulsory civil service for all citizens, asking them to help other citizens in a difficult situation (this, again, to preserve a sense of community). Whatever solution will be implemented, social work will require a social work education.
Social workers, even now, choose their job because they experience it as a mission. Social work is not a well-paid profession and there is a component of voluntary service in it. A sincere need to help others certainly plays a role in the decision to choose this career. That is why we will not see the disappearance of social workers just because all citizens will get an unconditional UBI. More generally, we will not see the disappearance of work, because working is more than doing something in order to make money. To work means meeting people, making friends, learning new things, achieving goals, and – whether we like it or not – also establishing power relationships. It is difficult to think that humans will stop satisfying these basic needs just because, in principle, they could survive without working.

Chapter 3

The Rise of Social Robots: A Review of the Recent Literature

1. Social robots and social work

The social consequences of robotics depend to a significant degree on how robots are employed by humans, and to another compelling degree on how robotics evolves from a technical point of view. That is why it could be instructive for engineers interested in cooperating with sociologists to get acquainted with the problems of social work and other social services, and for sociologists interested in the social dimensions of robotics to have a closer look at technical aspects of new generation robots. Regrettably, engineers do not typically read sociological literature, and sociologists and social workers do not regularly read engineers’ books and articles. In what follows, I break this unwritten rule by venturing into an analysis of both types of literature. (This was also my approach in Humans and Automata: A Social Study of Robotics; some ideas in this article are indeed taken from section 1.4 of that book; see Campa 2015: 29–35.) This type of interdisciplinary approach is particularly necessary after the emergence of so-called ‘social robots.’ A general definition of social robot is provided by social scientist Kate Darling: “A social robot is a physically embodied, autonomous
agent that communicates and interacts with humans on an emotional level. For the purposes of this Article, it is important to distinguish social robots from inanimate computers, as well as from industrial or service robots that are not designed to elicit human feelings and mimic social cues. Social robots also follow social behavior patterns, have various ‘states of mind,’ and adapt to what they learn through their interactions.” On the same page, Darling provides some examples: “interactive robotic toys like Hasbro’s Baby Alive My Real Babies; household companions such as Sony’s AIBO dog, Jetta’s robotic dinosaur Pleo, and Aldebaran’s NAO next generation robot; therapeutic pets like the Paro baby seal; and the Massachusetts Institute of Technology (MIT) robots Kismet, Cog, and Leonardo” (2012: 4). As we can see, social robots are mainly humanoid or animaloid in form. Their shape is of fundamental importance, since their function is to interact with humans on an emotional level, and this type of interaction is grounded in visual and tactile perception no less than in verbal communication. The use of animaloid robots to comfort and entertain lonely older persons, has already triggered an ethical debate. By discussing the manufacture and marketing of robot “pets,” such as Sony’s doglike “AIBO,” Robert Sparrow (2002) has concluded that the use of robot companions is misguided and unethical. This because, in order to benefit significantly from this type of interaction, the owners of robot pets must systematically delude themselves regarding the real nature of their relation with these machines shaped like familiar household pets. If the search for truth about the world that surround us is an ethical imperative, we may judge unethical the behavior of both the designers and constructors of companion robots, and the buyers that indulge themselves in this type of fake sentimentality. Russell Blackford
(2012) disagrees with this conclusion by emphasizing that, to some extent, we are already self-indulgent in such fake sentimentality in everyday life and such limited self-indulgence can co-exist with ordinary honesty and commitment to truth. In other words, Blackford does not deny that a disposition to seek the truth is morally virtuous, however he points out that we should allow for some categories of exceptions. Pet robots for dementia treatment could constitute one of such exceptions. In the case of patients affected by dementia the priority is not giving them an objective picture of reality but stimulating and engaging them. The main goal of the social worker is helping them to communicate their emotions, to reduce their anxiety, to improve their mood states, and these goals may be achieved also by the use of animaloid and humanoid companion robots (Odetti et al. 2007; Moyle et al. 2013). The relevance of social robots should not be underestimated, especially by applied sociologists. In technologically advanced societies, a process of robotization of social work is already underway. For instance, robots are increasingly used in the care of the elderly. This is a consequence of two other processes occurring simultaneously: on the one hand, we have an aging population with a resulting increase in demand for care personnel; on the other hand, technological developments have created conditions to deal with this problem in innovative ways. Priska Flandorfer explains the view of experts from several fields that “assistive technologies nowadays permit older persons to live independently in their home longer. Support ranges from telecare/smart homes, proactive service systems, and household robots to robot-assisted therapy and socially assistive robots. Surveillance systems can detect when a person falls down, test blood pressure, recognise severe breathing or heart problems, and immediately warn a caregiver” (2012: 1).
In spite of the fact that we tend to associate physical support with machines and psychological support with the intervention of flesh-and-blood social workers, this rigid distinction vanishes when social robots are involved in elderly care. Indeed, Flandorfer elaborates that “Interactive robots cooperate with people through bidirectional communication and provide personal assistance with everyday activities such as reminding older persons to take their medication, help them prepare food, eat, and wash. These technological devices collaborate with nursing staff and family members to form a life support network for older persons by offering emotional and physical relief” (2012: 1). Social robots are specifically designed to assist humans not only in social work, but also in other activities. One of the main sources of information about robotic trends is a book series published by Springer and edited by Bruno Siciliano and Oussama Khatib. As Siciliano states: “robotics is undergoing a major transformation in scope and dimension. From a largely dominant industrial focus, robotics is rapidly expanding into human environments and vigorously engaged in its new challenges. Interacting with, assisting, serving, and exploring with humans, the emerging robots will increasingly touch people and their lives” (2013, v). As Siciliano has noticed, the most striking advances happen at the intersection of disciplines. The progress of robotics has an impact not only on the robots themselves, but also on other scientific disciplines. In turn, these are sources of stimulation and insight for the field of robotics. Biomechanics, haptics, neurosciences, virtual simulation, animation, surgery, and sensor networks are just a few examples of the kinds of disciplines that stimulate and benefit from robotics research. Let us now explore a few examples in greater detail.

2. Effectiveness and safety of human-robot interaction

In 2013, four engineers – Jaydev P. Desai, Gregory Dudek, Oussama Khatib, and Vijay Kumar – edited a book entitled Experimental Robotics, a collection of essays compiled from the proceedings of the 13th International Symposium on Experimental Robotics. The main focus of many of these pieces is the problem of interaction and cooperation between humans and robots, and it is frequently argued that the effectiveness and safety of that cooperation may depend on technical solutions such as the use of pneumatic artificial muscles (Daerden and Lefeber 2000). Moreover, each technical device has advantages and disadvantages. For example, one may gain in effectiveness but lose in safety, or vice versa (Shin et al. 2013: 101–102). An inspiring book on the issue of safety in robotics is Sami Haddadin’s Towards Safe Robots: Approaching Asimov’s 1st Law (2014). Haddadin points out that the topic of research called Human-Robot Interaction is commonly divided into two major branches: 1) cognitive and social Human-Robot Interaction (cHRI); 2) physical Human-Robot Interaction (pHRI). As Haddadin defines the two fields, cHRI “combines such diverse disciplines as psychology, cognitive science, human-computer interfaces, human factors, and artificial intelligence with robotics.” It “intends to understand the social and psychological aspects of possible interaction between humans and robots and seeks” to uncover its fundamental aspects. On the other hand, pHRI “deals to a large extent with the physical problems of interaction, especially from the view of robot design and control. It focuses on the realization of so called human-friendly robots by combining in a bottom-up approach suitable actuation technologies with advanced control algorithms, reactive motion generators, and path planning algorithms for achieving safe, intui-
tive, and high performance physical interaction schemes” (2014: 7). Safety is obviously not a novel problem in robotics, nor in engineering more generally. It has been a primary concern in pHRI, since in this field continuous physical interaction is desired and it continues to grow in importance. In the past, engineers mainly anticipated the development of heavy machinery, with relatively little physical Human-Robot Interaction. The few small robots that were able to move autonomously in the environment and to interact with humans were too slow, predictable, and immature to pose any threat to humans. Consequently, the solution was quite easy: segregation. Safety standards were commonly tailored so as to separate the human workspace from that of robots. Now the situation has changed. As Haddadin puts it: “due to several breakthroughs in robot design and control, first efforts were undertaken recently to shift focus in industrial environments and consider the close cooperation between human and robot. This necessitates fundamentally different approaches and forces the standardization bodies to specify new standards suitable for regulating Human-Robot Interaction (HRI)” (2014: 7). These breakthroughs, and in particular the developments of cHRI, have opened the road to a new subdiscipline, or – if one prefers – a new interdisciplinary field: Social Robotics. In spite of the fact that the name appears to speak to a hybrid between the social sciences and engineering, at present, this subdiscipline is mainly being cultivated by engineers, although with a “humanistic” sensitiveness. It is important to keep these aspects in mind, as it is often the case that both technophiles and technophobes tend to anticipate fantastic or catastrophic developments, without considering the incremental, long and painstaking work on robotics which lay
behind and ahead. There are many small problems like those mentioned above that need to be solved before we start seeing NDR-114 from the film Bicentennial Man (1999) or Terminator-like machines walking around on the streets.

3. Small-scale robots

This does not mean that science fiction literature cannot be a source of ideas for robotic research. Just to give an example, another direction in which robotics is moving is that of small and even smaller automatic machines, such as millirobots, microrobots, and nanorobots. These robots would interact with humans in a completely different way from macroscopic social robots. In the Siciliano and Khatib series, there is an interesting book entitled Small-Scale Robotics: From Nano- to Millimeter-Sized Robotic Systems and Applications, edited by Igor Paprotny and Sarah Bergbreiter (2014). (The book contains selected papers based on presentations from the workshop “The Different Sizes of Small-Scale Robotics: From Nano- to Millimeter-Sized Robotic Systems and Applications,” which was held in conjunction with the International Conference on Robotics and Automation (ICRA 2013) in May 2013 in Karlsruhe, Germany.) In their preface, the editors make explicit the impact that science fiction has had on this area of research: “In the 1968 movie The Fantastic Voyage, a team of scientists is reduced in size to micro-scale dimensions and embarks on an amazing journey through the human body, along the way interacting with human microbiology in an attempt to remove an otherwise inoperable tumor. Today, a continuously growing group of robotic researchers [is] attempting to build tiny robotic systems that perhaps one day can make the vision of such direct interaction with human microbiology a reality.”




Smaller-than-conventional robotic systems are described by the term “small-scale robots.” These robots range from several millimeters to several nanometers in size. Applications for such robots are numerous. They can be employed in areas such as manufacturing, medicine, or search and rescue. Nonetheless, the step from imagination to realization, or from science fiction to science, is not a small one. There remain many challenges that need to be overcome, such as those related to the fabrication of such robots, to their control, and to the issue of power delivery. Engineers regularly compare the capabilities of robotic systems, including small-scale robots, to those of biological systems of comparable size, and they often find inspiration in biology when attempting to solve technical problems in such areas as navigation and interactive behavior (Floreano and Mattiussi 2008: 399–514; Liu and Sun 2012; Wang et al. 2006). Paprotny and Bergbreiter write: “The goal of small-scale robotics research is often to match, and ultimately surpass, the capabilities of a biological system of the same size. Autonomous biological systems at the millimeter scale (such as ants and fruit flies) are capable of sensing, control and motion that allows them to fully traverse highly unstructured environments and complete complex tasks such as foraging, mapping, or assembly. Although millimeter scale robotic systems still lack the complexity of their biological counterparts, advances in fabrication and integration technologies are progressively bringing their capabilities closer to that of biological systems” (Paprotny and Bergbreiter 2014: 9–10). Presently, the capabilities of microrobotic systems are still far from those of microscale biological systems. Indeed, “biological systems continue to exhibit highly autonomous behavior down to the size for a few hundred micrometers. For example, the 400µm dust mite is capable of autonomously navigating in
search for food and traversing highly unstructured environments. Similar capabilities can be found in Amoeba proteus or Dicopomorpha zebra” (Paprotny and Bergbreiter 2014: 9–10). By contrast, microrobotic systems have only limited autonomy; they lack independent control as well as on-board power generation. In spite of the stark performance differences between biological systems and small-scale robots, engineers are far from being resigned to second place. Rather, they think that “these gaps highlight important areas of research while demonstrating the level of autonomy that should be attainable by future robotic systems at all scales” (Paprotny and Bergbreiter 2014: 10–11). Such statements speak to the optimistic mindset of engineers.

4. From navigation and manipulation to interaction

In their book entitled Human-Robot Interaction in Social Robotics (2013), Takayuki Kanda and Hiroshi Ishiguro explain quite well the nature of the paradigm change that has accompanied the shift from industrial robots to interactive robots. They remind us that, up to recent times, robotics has been characterized by two main streams of research: navigation and manipulation. The first is the main function of autonomous mobile robots. The robot “observes the environment with cameras and laser scanners and builds the environmental model. With the acquired environmental model, it makes plans to move from the starting point to the destination” (Kanda and Ishiguro 2013: 1). The other stream in early robotics has been manipulation, as exemplified by research on robot arms. Like a human arm, the robot arm is often complex and therefore requires sophisticated planning algorithms. There are countless industry-related applications for both navigation and manipulation, and over the last several decades innovations in these research areas have revolutionized the field.
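To make the navigation pipeline described by Kanda and Ishiguro a little more concrete, here is a minimal, purely illustrative Python sketch. It assumes the “environmental model” has already been reduced to a 2D occupancy grid (the grid, start and goal below are invented for the example) and plans a shortest path with a breadth-first search; real mobile robots, of course, use far richer maps and planners.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 2D occupancy grid.
    grid[r][c] == 0 means free space, 1 means an obstacle."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            # Reconstruct the path by walking back to the start.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # no path between start and goal

# Toy environmental model: 0 = free cell, 1 = obstacle.
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
print(plan_path(grid, start=(0, 0), goal=(2, 4)))
```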




Two different academic disciplines have been competing to solve the problems related to navigation and manipulation: Artificial Intelligence and robotics sensu stricto. According to Kanda and Ishiguro, robotics now needs to engage with a new research issue – interaction: “Industrial robotics developed key components for building more human-like robots, such as sensors and motors. From 1990 to 2000, Japanese companies developed various animal-like and human-like robots. Sony developed AIBO, which is a dog-like robot and QRIO, which is a small human-like robot. Mitsubishi Heavy Industries, LTD developed Wakamaru. Honda developed a child-like robot called ASIMO. Unfortunately, Sony and Mitsubishi Heavy Industries, LTD have stopped the projects but Honda is still continuing. The purpose of these companies was to develop interactive robots” (2013: 1–2). Social robotics is gaining in importance because mobile robots are increasingly required to perform tasks that necessitate their interaction with humans. What is more, such human-robot interactions are becoming a day-to-day occurrence. Japanese companies tend to develop humanoids and androids because of their strong conviction that machines with a human-like appearance can replicate the most natural communicative partner for humans, namely other humans. In the words of Kanda and Ishiguro, the strongest reason for this research program lies “in the human innate ability to recognize humans and prefer human interaction.” They add: “The human brain does not react emotionally to artificial objects, such as computers and mobile phones. However, it has many associations with the human face and can react positively to resemblances to the human likeness” (2013: 5).

5. Scenario and persona: the challenge of verbal interaction




Appearance is just one of the problems related to the social acceptance of robots. Verbal interaction is equally important. Bilge Mutlu and others have recently edited a book entitled Social Robotics (2011) that presents interesting developments in the direction of improved HRI.4

4 This volume collects the proceedings of the third International Conference on Social Robotics (ICSR), located in Amsterdam, The Netherlands, November 24–25, 2011. Equally interesting are the volumes related to the previous and the following conferences. See: Ge et al. 2010; Ge et al. 2012; Herrmann et al. 2013.

In one of the book’s chapters, Złotowski, Weiss, and Tscheligi clearly explain the nature of this general field of research, as well as the methodology that tends to be used. To begin, they emphasize that: “The rapid development of robotic systems, which we can observe in recent years, allowed researchers to investigate HRI in places other than the prevailing factory settings. Robots have been employed in shopping malls, train stations, schools, streets and museums. In addition to entering new human environments, the design of HRI recently started shifting more and more from being solely technologically-driven towards a user-centered approach” (2011: 1–2). Indeed, these particular researchers are working on a project called Interactive Urban Robot (IURO): this “develops a robot that is capable of navigating in densely populated human environments using only information obtained from encountered pedestrians” (2011: 2). Two key concepts in such research are scenario and persona. These were already popular as design tools in Human-Computer Interaction (HCI), but the approach based on them has now been exported and adopted in HRI. Złotowski, Weiss, and Tscheligi explain that “Scenarios are narrative stories consisting of one or more actors with goals and various objects they use in order to achieve these goals” (2011: 2–3). They continue: “Usually the actors used in scenarios are called
personas. […] The main goal of personas is to ensure that the product being developed is designed for concrete users rather than an abstract, non existing ‘average user’. Often, more than one persona is created in order to address the whole spectrum of the target group” (2011: 3). An interesting aspect of social robotics is that researchers – even when they are basically trained as engineers – must adopt a sociological or psychological perspective in order to create personas. This happens because the process of persona creation starts with the identification of key demographic aspects of the human populations of interest. In their work on robot-pedestrian interaction, therefore, Złotowski, Weiss, and Tscheligi analyzed the “age range, profession, education and language skills” of selected pedestrians and then augmented this with data from pedestrian interviews: “This information was then enriched by the data obtained during interviews where we asked participants why they approached specific pedestrians. Not surprisingly, we found that one of the most important factors, which impacts the successfulness of the interaction, was whether the encountered person was a local or not” (2011: 4). It is not difficult to predict that as robots become more sophisticated, engineers will need the systematic help of trained sociologists and psychologists in order to create personas and scenarios and to “teach” humanoids how to behave in various circumstances. In other words, the increased interaction between mobile robots and humans is paving the way for increased interaction between social robotics – the study of HRI undertaken by engineers – and robot sociology – the study of the social aspects of robotics undertaken by social scientists.
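As a purely illustrative aside, the scenario-and-persona approach described above is easy to picture in code. The following minimal Python sketch represents a persona and a scenario as simple data structures; all names, fields and values are invented for the example and are not taken from the IURO project.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Persona:
    """A concrete, fictional user standing in for a segment of the target group."""
    name: str
    age_range: str
    profession: str
    languages: List[str]
    is_local: bool  # whether the person knows the area (a factor noted above)

@dataclass
class Scenario:
    """A narrative story: actors with goals, and the objects they use to reach them."""
    title: str
    actors: List[Persona]
    goal: str
    objects_used: List[str] = field(default_factory=list)

# Invented example: a robot asking a pedestrian for directions.
anna = Persona("Anna", "25-34", "student", ["German", "English"], is_local=True)
asking_for_directions = Scenario(
    title="Robot asks a pedestrian for the way to the station",
    actors=[anna],
    goal="Obtain usable route information from an encountered pedestrian",
    objects_used=["touch screen", "speech interface"],
)
print(asking_for_directions.title, "-", anna.name)
```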




Chapter 4

Technological Growth and Unemployment: A Global Scenario Analysis

1. Technology and Unemployment: The Vexata Quaestio

While discussing the role of slavery and the difference between instruments of production and instruments of action, Aristotle (350 B.C.E.) states that “the servant is himself an instrument which takes precedence of all other instruments. For if every instrument could accomplish its own work, obeying or anticipating the will of others, like the statues of Daedalus, or the tripods of Hephaestus, which, says the poet, ‘of their own accord entered the assembly of the Gods’; if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves”. In other words, if automata were sophisticated enough to replace humans in every activity, slavery and work would be unnecessary. Analysing the motives behind the revolt of the Luddites in 18th- and 19th-century Europe, Karl Marx (1867) made a sarcastic comment on Aristotle’s philosophy of technology: “Oh! those heathens! They understood (...) nothing of Political Economy and Christianity. They did not, for example, comprehend that machinery is the surest means of lengthening the working-day. They perhaps excused the slavery of one on the ground that it
was a means to the full development of another. But to preach slavery of the masses, in order that a few crude and half-educated parvenus, might become ‘eminent spinners,’ ‘extensive sausage-makers,’ and ‘influential shoe-black dealers,’ to do this, they lacked the bump of Christianity.” Both the Luddites and Marx noticed that machinery did not free humans from labour, but rather caused unemployment and the inhumane exploitation of those still employed. However, they proposed different remedies. As is well known, the Luddites saw the solution in the destruction of the machines,5 while Marx and the socialists preached that the proletarians would benefit more from a revolution aimed at taking full possession of the machines (the means of production).

5 According to David F. Noble (1995: 3–23), the Luddites did not destroy machines because of technophobia, but because of necessity. They had to choose among starvation, violence against the capitalists, or property destruction. The last choice was the most moderate way to protest against unemployment and the lack of compassion of the factory owners.

It is worth noting that not only the anti-capitalist front, but also a supporter of the free market economy like John Stuart Mill (1848), was honest enough to admit that “it is questionable if all the mechanical inventions yet made have lightened the day’s toil of any human being.” Since then, it has been ceaselessly debated whether technological development really frees humans from work or, on the contrary, produces more exploitation and unemployment. There is a copious literature supporting the first or the second thesis, spanning the last two centuries. And the debate is still going on. The theory that technological change may produce structural unemployment has been repeatedly rejected by neoclassical economists as nonsense and labelled ‘the Luddite fallacy.’ These scholars contend that workers may be expelled from a company or a sector, but sooner or later they will be hired by other companies or reabsorbed by a different economic sector. It is however well known that economics is a multiparadigmatic discipline. Therefore, supporters of the idea of technological unemployment keep appearing on the stage. In the 1990s, right after the beginning of the Internet era, a few influential books focusing on the problems of automation and artificial intelligence appeared. Among these, we may cite Progress without People by David F. Noble (1995), The End of Work by Jeremy Rifkin (1995), and Turning Point by Robert U. Ayres (1998). Noble stands in “defence of Luddism” and levels accusations of irrationalism at “the religion of technology” on which modern society is supposedly based. According to him, “in the wake of five decades of information revolution, people are now working longer hours, under worsening conditions, with greater anxiety and stress, less skills, less security, less power, less benefits, and less pay. Information technology has clearly been developed and used during these years to deskill, discipline, and displace human labour in a global speed-up of unprecedented proportions” (Noble 1995: XI). Rifkin points out that people who lose a low-skilled job often lose the only job they are able to do. Many of the people involved, for instance, in assembly or packaging can barely read and write. They are on the lowest rung of ability and learning. However, the new job that arises from the machine that ‘steals’ their job is one involving taking care of that machine, which often requires high school computer programming, if not a college degree in computer science – a qualification requiring abilities at the higher end of the ladder. In brief, “it is naive to believe that large numbers of unskilled and skilled blue and white collar workers will be retrained to be physicists, computer
scientists, high-level technicians, molecular biologists, business consultants, lawyers, accountants, and the like” (Rifkin 1995: 36). Finally, Ayres emphasizes the fact that, even if we admit that workers can be relocated, new jobs may be less satisfactory than old jobs in terms of wages, fulfillment, and security. And this is not an irrelevant aspect. It is evidence that globalization and automation are good for some social classes and bad for others. Indeed, “many mainstream economists believe that in a competitive free market equilibrium there would be no unemployment, since labor markets – like other markets – would automatically clear. This means that everyone wanting a job would find one – at some wage.” The problem is that “there is nothing in the theory to guarantee that the market-clearing wage is one that would support a family, or even an individual, above the poverty level” (Ayres 1998: 96). The reaction to these works followed in the wake of previous criticism. The main argument against the thesis that automation produces structural unemployment is that the catastrophes predicted so many times never happened. The rate of unemployment may go up and down, but technological change has never produced an irreversible crisis. Ten years ago, Alex Tabarrok (2003) confessed to being “increasingly annoyed with people who argue that the dark side of productivity growth is unemployment.” He added that “the ‘dark side’ of productivity is merely another form of the Luddite fallacy – the idea that new technology destroys jobs. If the Luddite fallacy were true we would all be out of work because productivity has been increasing for two centuries.” Apparently, this is an invincible argument, but it did not stop the flow of publications on technological unemployment. The reason is simple: Tabarrok reaches his conclusion by means of
inductive reasoning. The premises of an inductive logical argument provide some degree of support for the conclusion, but do not entail it. In simple words, the fact that the catastrophe did not happen until now does not imply that it cannot happen today or tomorrow. After all, every technological change is qualitatively different from the previous ones. In particular, the novelty of the present situation is that artificial intelligence and its products (computers, robots, industry automation, Internet, etc.) intertwine with globalization – that is, it unfolds in a situation in which nation-states have a limited possibility to implement corrective policies. There is also a suspicion that not only the speculations of the bankers but also accelerating computer technology has contributed to the genesis of the financial crisis which exploded in September 2008 with the bankruptcy of Lehman Brothers. This is, for instance, the point made by Martin Ford in his The Lights in the Tunnel (2009). On 13th June 2013, Nobel Prize winner Paul Krugman added his voice to this debate with an article significantly entitled “Sympathy for the Luddites.” This economist recognizes that, in the past, the painful problems generated by mechanization were solved thanks to the education of the masses. However, the problems generated by artificial intelligence are not solvable the same way, because they concern education also. Thus, today, “a much darker picture of the effects of technology on labor is emerging.” Krugman reminds us that “The McKinsey Global Institute recently released a report on a dozen major new technologies that it considers likely to be ‘disruptive,’ upsetting existing market and social arrangements. Even a quick scan of the report’s list suggests that some of the victims of disruption will be workers who are currently considered highly skilled, and who invested a lot of time and money in acquiring those skills. For example, the report suggests that we’re going to be seeing a lot
of ‘automation of knowledge work,’ with software doing things that used to require college graduates. Advanced robotics could further diminish employment in manufacturing, but it could also replace some medical professionals.” In the present investigation, we will tentatively assume that the picture drawn by Krugman and others is correct, and we will try to extrapolate possible futures from it. The debate seems to be mainly crystallized around the dichotomy “technology is bad” (Luddites, technophobes) versus “technology is good” (anti-Luddites, technophiles), but it is worth noting that there are many more armies on the battlefield. As we have seen above, Marx built his own value judgment by taking into account one more variable: the system. In a nutshell, his position was “technology is good, the system is bad.” This third position was somewhat eclipsed in the second half of the 20th century, for many reasons that I cannot discuss here, but it seems to be an indispensable one. One does not need to be a revolutionary socialist in order to ask for a more complex analytical model. Krugman (2013), too, points the finger at the degeneration of the system more than at technology itself. The Nobel Prize winner stresses that “the nature of rising inequality in America changed around 2000. Until then, it was all about worker versus worker; the distribution of income between labor and capital – between wages and profits, if you like – had been stable for decades. Since then, however, labor’s share of the pie has fallen sharply. As it turns out, this is not a uniquely American phenomenon. A new report from the International Labor Organization points out that the same thing has been happening in many other countries, which is what you’d expect to see if global technological trends were turning against workers.” As a response, he does not propose to get rid of the machines, but to activate a policy of redistribution of wealth, “one that
guarantees not just health care but a minimum income, too.” Note that he is not asking for a radical change of the system, as Marx did, but just to fix it. Therefore, it is important to elaborate a model capable of taking into account positions that focus on the system, albeit of different types, like those of Krugman or Marx.

2. Some methodological tools for scenario analysis

Many futurological speculations follow a simple pattern: they always and invariably see technology as a cause and social structure as a consequence, never the other way round. Therefore, the attitude toward technology becomes the one that really matters. In other words, these theories do not give much weight to the role that social and industrial policies can play in shaping the future. This typically happens when futurologists are also engineers. They know better than anybody else how technologies are produced and work, but they also tend to underestimate the complexity of the social, political and economic world. On the contrary, social scientists teach us to view social problems in a more complex way, to be aware that it is often hard to distinguish cause and effect, and that forecasts themselves sometimes bring about the very process being predicted – the so-called ‘self-fulfilling prophecy’ (Merton 1968: 477). In social reality, one more often observes a chaotic interaction between different variables rather than a simple cause-and-effect chain. The society of the future will partly depend on structures inherited from the past that cannot be easily changed. Some of our behaviors depend on what sociologists call ‘social constraint,’ on what philosophers call the ‘human condition,’ and on what
biologists call ‘human bio-physical constitution’ – all variables that change very slowly. However, the future will also be partly shaped by crucial decisions of powerful people and by the predictions of influential futurologists. Even if the different attitudes and beliefs of individuals (Terence McKenna would call them ‘the cultural operating system’) can be rather stable and randomly distributed in society, the equilibrium of power may change in a sudden and unpredictable way. The rulers may become the ruled. Marginal worldviews may become mainstream ideas. So, what really matters is the attitudes and beliefs of the ruling class (politicians, bankers, entrepreneurs, top managers, scientists, opinion leaders, etc.) at the moment in which crucial decisions have to be taken. That is why, to draw pictures of possible futures, we need models (attitudinal typologies) a little more complex than the simple dichotomy ‘technophobes vs. technophiles.’ To start, we propose an attitudinal typology that combines ‘technological growth’6 and ‘the system.’ We will call ‘growthism’ a positive attitude to technological growth and ‘degrowthism’ its antonym. We will call ‘conservatives’ those who support the invariance of the system and ‘revolutionaries’ those who want to change it.

6 See the definition of ‘technological growth’ by EconModel (www.econmodel.com): “Economic growth models (the Solow growth model, for example) often incorporate effects of technological progress on the production function. In the context of the Cobb-Douglas production function Y = K^a L^(1-a), we can identify three basic cases: labor-augmenting (or Harrod-neutral) technological change Y = K^a (AL)^(1-a), capital-augmenting technological change Y = (AK)^a L^(1-a), and Hicks-neutral technological change Y = A K^a L^(1-a).”
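Purely as an illustration of the three cases quoted in this footnote, the following minimal Python sketch evaluates the Cobb-Douglas production function under labor-augmenting, capital-augmenting and Hicks-neutral technological change; the numerical values are invented for the example.

```python
def cobb_douglas(K, L, a, A=1.0, kind="hicks"):
    """Cobb-Douglas output Y = K^a * L^(1-a), with the technology level A
    applied in one of the three ways listed in the EconModel definition."""
    if kind == "labor":      # labor-augmenting (Harrod-neutral): Y = K^a (A*L)^(1-a)
        return K**a * (A * L)**(1 - a)
    if kind == "capital":    # capital-augmenting: Y = (A*K)^a * L^(1-a)
        return (A * K)**a * L**(1 - a)
    if kind == "hicks":      # Hicks-neutral: Y = A * K^a * L^(1-a)
        return A * K**a * L**(1 - a)
    raise ValueError("unknown kind of technological change")

# Illustrative numbers only: capital 100, labor 50, capital share 0.3,
# technology level 1.2 under each of the three specifications.
for kind in ("labor", "capital", "hicks"):
    print(kind, round(cobb_douglas(100, 50, 0.3, A=1.2, kind=kind), 2))
```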
1. Attitudinal Typology toward ‘Technological Growth’ and ‘The system’
– accept technological growth, accept the system: Conservative Growthism
– accept technological growth, reject the system: Revolutionary Growthism
– reject technological growth, accept the system: Conservative Degrowthism
– reject technological growth, reject the system: Revolutionary Degrowthism

Put simply, if we focus on technology only, it is rather irrelevant whether the rulers are conservatives or revolutionaries. However, technological growth is just one factor in play. The rulers may also decide on the invariance of the system or on its change. To ‘change the system’ or to ‘make a revolution’ are expressions that have historically acquired many different meanings. Here we stipulate that, in order to talk about a ‘system change’, at least points 5 and 7 of Marx and Engels’s Manifesto (1888) must be fulfilled, namely: a) “Centralisation of credit in the hands of the state, by means of a national bank with State capital and an exclusive monopoly”; b) “Extension of factories and instruments of production owned by the State.” As a consequence, we would say that a system change (or revolution) has occurred in the EU and the USA if and only if the two central banks (the ECB and the FED) were nationalized and the robotized industries of the two countries were owned by all citizens. In addition, when it comes to promoting or opposing technological growth, we may find different perspectives: some believe that technology develops in a spontaneous way, while others believe that governments (even in capitalist countries7) play a crucial role in shaping science and technology by means of industrial policies.

7 US science and technology would probably be much different if the government did not have so many contracts with the military-industrial complex.

Obviously, as often happens, the truth is somewhere in
the middle. We may find historical examples supporting the first or the second idea. What matters, however, is what the ruling class believes, and here again we may have various combinations of attitudes.

2. Attitudinal Typology toward ‘Technological growth’ and ‘Industrial policies’
– accept technological growth, accept industrial policies: Programmed Growthism
– accept technological growth, reject industrial policies: Spontaneous Growthism
– reject technological growth, accept industrial policies: Programmed Degrowthism
– reject technological growth, reject industrial policies: Spontaneous Degrowthism

Finally, when we shift attention from technology to people, we may find other combinations of attitudes. Among growthers, some are happy with the market distribution of wealth, while others call for the social redistribution of wealth. The same divide may also appear among degrowthers. It is important to emphasize, once again, that social policies do not necessarily imply a system change.

3. Attitudinal Typology toward ‘Technological growth’ and ‘Social policies’
– accept technological growth, accept social policies: Redistributive Growthism
– accept technological growth, reject social policies: Distributive Growthism
– reject technological growth, accept social policies: Redistributive Degrowthism
– reject technological growth, reject social policies: Distributive Degrowthism

By using these models, we will try to speculate about what could happen in the near future to an industrialized nation-state. We will not explore all the possible combinations of the
attitudes presented above, but only four possible scenarios. We will examine the possibilities of a future to some extent shaped by the desiderata of the ruling class (planned), although with different orientations, involving the extinction either of workers or of robots. Then we will consider the same two outcomes, but as unwanted (unplanned) consequences of other attitudes and policies.

4. Typology of possible future scenarios
– unplanned future, extinction of workers: Unplanned end of work scenario
– unplanned future, extinction of robots: Unplanned end of robots scenario
– planned future, extinction of workers: Planned end of work scenario
– planned future, extinction of robots: Planned end of robots scenario

3. The unplanned end of work scenario

The unplanned end of work scenario is generated by 1) technological growth, as an outcome of 2) invariance of the system, 3) spontaneous growth, and 4) market distribution of wealth. Let us see how. One author who has tried to foresee the possible developments of the automation society on the basis of these premises is Hans Moravec. As a robotic engineer, he has a solid grounding in technology, from which he extrapolates present data and projects them into the future. Moravec offers a very interesting point of view that is worth thinking carefully about. He shows us what might happen in the case of laissez-faire, that is, in case politics does not try to guide the course of future history. In part one of the essay “The Age of Robots”, Moravec (1993) describes four generations of universal robots, which happen
to coincide with the first four decades of the 21st century. We will not enter into the technical details, but limit ourselves to observing that the first generation of robots is that of the robots we sometimes see on television or in exhibitions, while the second generation is already able to replace humans in a variety of tasks also outside manufacturing; the third generation displays even more ‘human’ traits and therefore offers competition in all sectors, while the fourth displays traits that are downright ‘superhuman.’8 In the second part of the article, Moravec dwells on the social consequences of the emergence of universal robots, distinguishing short, medium and long term. Here it is enough to analyse the short term, which coincides with the first half of the 21st century. According to the author, the superhuman robot will gradually be able to design even more potent and intelligent ‘children,’ and therefore in the long term robots will acquire traits that are ‘semi-divine.’ Machines will merge with those humans who stay on – via the technology of mind-uploading – and will colonise space, converting other inorganic matter into thinking matter. These are bold speculations, but not at all impossible. We leave them however to the reader’s curiosity. Let us therefore take a look at the short term.

8 Moravec writes: “In the decades while the ‘bottom-up’ evolution of robots is slowly transferring the perceptual and motor faculties of human beings into machinery, the conventional Artificial Intelligence industry will be perfecting the mechanization of reasoning. Since today’s programs already match human beings in some areas, those of 40 years from now, running on computers a million times faster as today’s, should be quite superhuman.”

9 John Horgan describes him as a Republican ‘at heart,’ a social Darwinist and a defender of capitalism, in The End of Science (Horgan 1997: 255).

Moravec – who is anything but a Luddite or a left-wing extremist9 – recalls first of all the painful transition from the agricultural society to the
industrial society. The human cost of millions of workers forced to cram in the suburban areas of industrial districts and to compete for badly paid jobs that are never enough to satisfy demand. Leaving aside the work of minors, the precariousness, the inhumane working hours, as well as the absence of any social security, any health care, any trade union, any pension scheme. But this history is well known. We exited 19th century ‘savage capitalism’ because of tough action by trade unions, revolutions and reforms, to finally arrive at the welfare state. In particular, the system has been saved thanks to the recurring reduction of working hours, aimed at counteracting technological unemployment and reducing exploitation. But in the era of robots will it be possible to continue with these reforms? Not according to Moravec because even if working hours continue to fall (which it should be said is no longer even the case), their decline “cannot be the final answer to rising productivity. In the next century inexpensive but capable robots will displace human labour so broadly that the average workday would have to plummet to practically zero to keep everyone usefully employed.” This appears paradoxical because if one can oblige a private company to reduce the working time of employees, one can certainly not oblige it to hire and pay people to do nothing. But this is not the only problem. Even today, many workers are reemployed to perform ‘frivolous’ services and this will be even truer in the future, because as we have seen, services requiring efficiency rather than creativity will also be performed by robots. In practice, the function of humans is and will increasingly be to ‘entertain’ other humans with games, sports, artistic works or speculative writings (like this one). Some people are even paid to do trivial and utterly uninteresting jobs: think of some state-employed bureaucrats, often hired for no other purpose than an attempt to reduce unemployment, as-
signed to useless if not downright harmful tasks of control and regulation, who therefore end up as a burden to other citizens. Will we all be assigned to frivolous or useless services? It could be one solution, but it would seem that even this road is blocked. “The ‘service economy’ functions today because many humans willing to buy services work in the primary industries, and so return money to the service providers, who in turn use it to buy life’s essentials. As the pool of humans in the primary industries evaporates, the return channel chokes off; efficient, no-nonsense robots will not engage in frivolous consumption. Money will accumulate in the industries, enriching any people still remaining there, and become scarce among the service providers. Prices for primary products will plummet, reflecting both the reduced costs of production, and the reduced means of the consumers. In the ridiculous extreme, no money would flow back, and the robots would fill warehouses with essential goods which the human consumers could not buy” (Ibid.). If we do not reach this extreme, there will in any case be a minority of capitalists (the stockholders) continuing to make profits thanks to a legion of efficient workers who do not go on strike, do not fall ill, work twenty-four seven, demand a ‘salary’ equal to the cost of energy and, to cap it all, need no pension because they will retire to a landfill. While for the mass of workers employed at frivolous services or at transmitting information (the so called knowledge industry) and for the chronically unemployed (the proletariat) the prospect is a return to the Middle Ages. Moravec effectively reminds us that “an analogous situation existed in classical and feudal times, where an impoverished, overworked majority of slaves or serfs played the role of robots, and land ownership played the role of capital. In between the serfs and the lords, a working population struggled to make a
living from secondary sources, often by performing services for the privileged.” A rather discouraging scenario. And very disturbing, if one keeps in mind that it is an enthusiastic robotic engineer and supporter of capitalism who predicted it. In reality Moravec – perhaps worried by the apocalyptic scenario just outlined – hastily adds that things may not go this way. That is, he envisages an alternative scenario, the possibility of a different future that nevertheless implies a new awareness and an attempt to have history take another path. We will not necessarily venture back to the Middle Ages because today’s workers have reached such a level of political awareness and education that they would hardly allow a minority of capitalists to reduce them to slavery. Were it to arrive at such a level of degradation, the people would “vote to change the system.” But this choice implies a different scenario, a planned one, that must be discussed separately.

4. The planned end of robots scenario

The planned end of robots scenario is generated by 1) technological degrowth, as an outcome of 2) change of the system, 3) programmed degrowth, and 4) social redistribution of wealth. Let us see how. To remedy the evaporation of humans from the working environment, various types of solutions have been proposed. Faced with a ‘technological apocalypse’, many are tempted by the idea of a return to the past. More and more citizens appear fascinated by the prospect of a degrowth in technology and industry – and not only visceral technophobes like Theodore Kaczynski. Therefore it would seem that we must also include this idea in
our discussion, although no political agenda is currently contemplating a ban on artificial intelligence. Supporters of this position have been given various labels: Luddites, primitivists, passéists, retrograders, reactionaries, bioconservatives, radical ecologists, etc. Since the idea finds consensus both on the left and right, even though its most radical version has as yet no representatives in Parliament, we have decided to call its supporters ‘degrowthers’ – a term that does not yet have strong political connotations and that therefore lends itself to this technical use. By symmetry we call ‘growthers’ the supporters of limitless growth (scientific, technological, industrial, economical). First of all it must be stressed that the degrowthist idea is rather simple and forthright. Its simplest formulation does not demand any particular intellectual effort, any particular competence, but rather a gut reaction: “If technology is bad, ban it!” The message is simple, clear and limpid. For this reason, it has had some success in the media. A slightly more careful analysis shows however that giving up technologies based on artificial intelligence carries no fewer risks than does their diffusion inside a framework of laissez faire. Indeed a policy of degrowth, that is, one geared at maintaining or restoring obsolete systems of production, would not allow the country that adopted it to stand up to the competition of other nations in a global economy. At the level of quality and prices, goods produced by artisanship would not withstand the competition of those produced by a mixed human-robotic system or even an altogether robotic one. Therefore, were one to ban AI, unemployment would not even be reabsorbed in the short term. Not only would it not disappear, but the worsening of other economic parameters and the collapse of many companies would likely increase it.




Obviously, not all degrowthists are naïve, and therefore we should expect a second policy to be implemented at the same time as a ban on AI: economic autarchy. It is not by chance that degrowthists are generally also anti-globalisation. Leaving the global market would end the competition between national and foreign goods and services, and employment could thus be rescued. This argument may seem sensible when formulated this way, but it too would carry a hefty bill. Exiting the global economy, closing the borders and imposing duties on imports would rescue the situation in the short run by creating a kind of poor but self-sufficient economic enclave. In the long term, however, this economy would be under the constant threat of a black market of technologically advanced products from abroad. Repression by police or the military would be necessary to counteract internal mafias that, via smuggling, would look after their own interests and those of foreign companies. The repression could however convince the same mafias, or foreign governments serving large corporations, to stir up rebellions inside the autarchic system. In other words, a system at once autarchic and degrowthist – given its technological weakness – would make itself vulnerable to being swept away at any time by systems that are technologically more advanced, via conventional and non-conventional wars. This scenario should be kept in mind, unless one has unconditional faith in human beings and thinks of them as capable only of intentions that are benevolent, altruistic, irenic and disinterested (history however seems to contradict this pious illusion). The third move a degrowthist party could make in order to avoid having this sword of Damocles above its head is that of conceiving the ban as global, in a global society, ruled by a global degrowthist government. This is a clearly utopian vision,
because an agreement between most sovereign states would not be enough. Just a few dissident growth-oriented nations would suffice to nullify the contract. The realisation of this utopia would require a global degrowthist empire. But only a regional entity that is more powerful than all others can build an empire. It seems highly improbable that such an enterprise could be undertaken by whoever rejects on principle the most revolutionary and powerful technologies. It is often said that science-fiction ideas are the prerogative of technophilic futurologists, but in reality the idea of a global ban on advanced technologies is the most ‘science-fictive’ idea of all. We do however want to continue to examine this hypothesis for the sake of discussion. Let us suppose then that, by some sort of miracle, something like a degrowthist empire came to be (maybe as a result of the global hegemony of a degrowthist religion). So now the question is: how long could it last? This global political regime must do away not just with computers and robots but also with the whole of science that allows the realisation of these machines, that is, with the know-how. The degrowthists must destroy universities and libraries, burn books and journals, destroy data banks, and arrest or physically eliminate millions of scientists and engineers who might revitalise AI, as well as all growthist citizens who might side with them. Should anything escape the degrowthist thought police, or should bright and curious children able to revitalise science be born once the ‘purification’ was over, this would amount to a U-turn. A clandestine growthist movement and a black market would be born. The police state would find itself having to fight hyper-technological dissident guerrilla groups with obsolete means. It is hard to imagine that the new system would not sooner or later be defeated by these groups. Slogans such as “the world must move forward” or “you cannot stop the clock” are of more than just rhetorical value. It is a
social mechanism, a social constraint founded on the combination of two elements, that does not allow growth and progress to stop for good. These two elements are the will to power – a force that moves human history or, in the sense asserted by Friedrich Nietzsche, the life of the universe itself – together with Bacon’s simple observation that technology is power (scientia potentia est). Therefore, although degrowthists can obtain victories, these are always temporary. This happened, for example, when Judeo-Christianity overcame – with the complicity of other catastrophic events like Barbarian invasions, natural catastrophes and epidemics – the thousand-year-old Graeco-Roman civilisation. All that was needed was to leave lying around bits and pieces of that great philosophical, scientific, artistic, technological, commercial and military civilisation for its spores to wake up and regenerate it under other forms, despite the severity and conscientiousness of the Inquisition (Russo 2004, Pellicani 2007, Campa 2010). Therefore, it does seem that the degrowthist solution, in addition to being inefficient and risky, is above all impracticable in its extreme forms. It is not by chance that the governments of the developed world have until now tried to remedy the problem of technological unemployment with all means save one: banning the new technologies. This however does not mean that technological degrowth is impossible. Actually, industries and technologies can disappear in some regions of the world even if they are welcome. This is another scenario that deserves to be explored.

5. The unplanned end of robots scenario




The unplanned end of robots scenario is generated by 1) technological degrowth, as an outcome of 2) invariance of the system, 3) spontaneous degrowth, and 4) market distribution or social redistribution of wealth. Let us see how. If we look at the programs of the parties represented in most Western Parliaments, be it the governing coalition or the opposition, we discover that they are all more or less favourable to growth. It is rare to find a member of Parliament who waves the flag of technological or economic degrowth. At most we find someone who, courting degrowthist votes, speaks of ‘sustainable growth.’ Similarly, we do not find anyone who welcomes upheavals, social conflicts, high unemployment rates and widespread crime. The ideal societies of the various political forces differ in some essential aspects (some dream of a Christian society, others of a secular one; some want it to be egalitarian, others meritocratic, and so on). As far as growth and employment are concerned, they all agree – at least in principle – that these are positive. Even those who want to abolish the democratic-capitalist system (the political forces at the two extremes: fascists and communists), and who therefore do not rule out a phase of social conflict, do not dream of a permanent chaos, a society of temps, jobless, sick, poor and criminals. They too view their ideal society as one fulfilling material needs, offering spiritual harmony and possibly without crime. They want to go beyond capitalism precisely because, in their opinion, it fails to guarantee all this. However, we can still find political forces that cause degrowth or social pathologies out of incapacity, corruption or short-sightedness. The worry that traditional political parties lack a vision of the future and that this may generate social instability has been expressed by several social scientists.




For instance, Europe is now facing a very delicate political and economic crisis, characterized by economic depression and a high rate of unemployment. As a response, the EU has been imposing an austerity policy on member states to reduce budget deficits. The European ruling class seems to be firmly convinced that the best recipe for stimulating economic growth is the deregulation of the labour market and the reduction of government spending. This perspective has been criticized by many economists. For instance, Boyer (2012) predicts that this economic policy will fail, because it is founded on four fallacies. First, the diagnosis is false: the present crisis is not the outcome of lax public spending policy, but “it is actually the outcome of a private credit-led speculative boom.” Second, it is fallacious to assume “the possibility or even the generality of the so-called ‘expansionary fiscal contractions.’” Third, it is wrong to assume that a single policy may work for all states: “Greece and Portugal cannot replicate the hard-won German success. Their productive, institutional and political configurations differ drastically and, thus, they require different policies.” Fourth, “the spill over from one country to another may resuscitate the inefficient and politically risky ‘beggar my neighbour’ policies from the interwar period.” The analysis produced by Boyer is quite convincing. However, if it is true that unemployment is partly due to growing and evolving automation, the dichotomy of austerity versus government spending, or neoclassical economics versus Keynesian theory, is simply inadequate to draw a complete picture of the situation. It misses one of the main points. A similar problem can also be observed in the USA. Sociologist James Hughes (2004) noticed as long ago as ten years that in the USA “newspapers are full of bewildered economists scratching their head at the emerging jobloss recovery. The right
reassures us that job growth is right around the corner, although it wouldn’t hurt to have more tax cuts, deregulation, freer trade and lower minimum wages. Liberals counter that we can cut unemployment with more job retraining, free higher education, more protectionism, more demand-side tax stimulus and nonmilitary public sector investments.” According to this sociologist, “the problem is that none of these policies can reverse the emerging structural unemployment resulting from automation and globalization.” We have seen that, in certain conditions, the neoclassical politico-economic approach may lead to the unplanned end of work scenario imagined by Moravec. However, the same approach, in the presence of a new great depression (a hypothesis that Moravec did not consider in 1993), may lead to an end of robots scenario. Austerity policies and a bad application of neoclassical principles are indeed already producing the deindustrialization of some countries (e.g. Italy). However, the Keynesian approach per se may also lead to the unplanned end of robots scenario. To explore this possibility, all we have to do is analyse the first path indicated by Moravec to escape the gloomy perspective of an unplanned end of work. The obligatory path is to push through with the politics of the gradual reduction of working hours, but with the sagacity to preserve people’s purchasing power. This could take place in two different ways: 1) by redistributing income via taxation; or 2) by distributing profits through the assignment of ownership shares. In each case people are excluded almost entirely from the production loop, but the first path may have an unwanted collateral effect. Through the redistribution of income via taxation, the circulation of money can be reactivated by governments as soon as it misfires. In this case citizens’ incomes would be equal or similar,
but in any case sufficient to keep production going via consumption. However, since the level of taxation would be decided by the people, the system could collapse should this level become unsustainable in a still competitive system. In other words, too heavily taxed robotic industries would fail, leaving the whole population without an income. If we postulate the invariance of the system, governments may not be able to help or buy failing industries. If central banks are private, governments do not control the issuance of money. Thus, they have to finance themselves on secondary markets, as if they were private companies. This situation could make it impossible to implement an effective industrial policy and would precipitate the nation-state into a vicious circle leading to deindustrialization and, therefore, unwanted derobotization.

6. The planned end of work scenario

The planned end of work scenario is generated by 1) technological growth, as an outcome of 2) change of the system, 3) programmed growth, and 4) social redistribution of wealth. Let us see how. To explore this scenario, we have to follow the second path indicated by Moravec in the direction of the end of work. This path is a kind of socialist-capitalist hybrid founded on allowing the population to own the robotic industries by giving part of the shares to each citizen at birth. In this case, incomes would vary with the performance of the companies. Therefore, for people, it would become more important to elect the best top managers for their factories than the best members of Parliament. Everybody would have enough to live off, but salaries could no longer be decided by political votes. Even if this solution pre-
served a few features of capitalism (competition, market economy), the change it entails would be more systemic than what it appears prima facie. True, Moravec does not discuss at all the problem of the banking system and the emission of money, but to assign at least the property of the productive system directly to the citizens is more ‘socialistic’ than taxing the rich to give some charity to the poor. We quote Moravec’s passage in its entirety: “The trend in the social democracies has been to equalize income by raising the standards of the poorest as high as the economy can bear--in the age of robots, that minimum will be very high. In the early 1980s James Albus, head of the automation division of the then National Bureau of Standards, suggested that the negative effects of total automation could be avoided by giving all citizens stock in trusts that owned automated industries, making everyone a capitalist. Those who chose to squander their birthright could work for others, but most would simply live off their stock income. Even today, the public indirectly owns a majority of the capital in the country, through compounding private pension funds. In the United States, universal coverage could be achieved through the social security system. Social security was originally presented as a pension fund that accumulated wages for retirement, but in practice it transfers income from workers to retirees. The system will probably be subsidized from general taxes in coming decades, when too few workers are available to support the post World War II ‘baby boom.’ Incremental expansion of such a subsidy would let money from robot industries, collected as corporate taxes, be returned to the general population as pension payments. By gradually lowering the retirement age towards birth, most of the population would eventually be supported. The money could be distributed under other names, but calling it a pension is meaning-
ful symbolism: we are describing the long, comfortable retirement of the entire original-model human race.” Moravec credits the idea of the end of work to engineer James Albus, but it is rather a multiple discovery. For instance, James Hughes (2004) also comes to a similar conclusion, even if he would probably implement the distribution of wealth in a different way than Albus. In any case, he warns that “without a clear strategic goal of a humanity freed from work through the gradual expansion of automation and the social wage, all policies short of Luddite bans on new technology will have disappointing and perverse effects. If liberals and the left do not reembrace the end of work and the need to give everyone income as a right of citizenship, unconnected to employment, they will help usher in a much bleaker future of growing class polarization and widespread immiseration. If libertarians and the right do not adapt to the need to provide universal income in a jobless future they may help bring about a populist backlash against free trade and industrial modernization.” Those thinking that the planned end of work scenario is just a utopia should remember that in pre-industrial societies there were far fewer working hours than today, if for no other reason than that people could work only during daylight. Besides, they benefited from more religious holidays during the year. If we now work so hard, it is because of the invention of gas and electric lighting, which has artificially extended the working day, especially in winter time. The introduction of machinery has done the rest. From the point of view of the capitalist, it makes no sense to buy expensive machines and turn them off at every religious celebration or just because the sun goes down. The phenomenon of the prolongation of the working-day, at the beginning of the industrial era, has been analysed in detail by Karl Marx (1867). But this is not yet fully recognized. Indeed,
sociologist Juliet B. Schor (1993) remarks that “one of capitalism’s most durable myths is that it has reduced human toil. This myth is typically defended by a comparison of the modern forty-hour week with its seventy- or eighty-hour counterpart in the nineteenth century.” The problem is that “working hours in the mid-nineteenth century constitute the most prodigious work effort in the entire history of humankind.” In the preindustrial era the situation was much different. Just to give a few examples, a thirteenth-century estimate “finds that whole peasant families did not put in more than 150 days per year on their land. Manorial records from fourteenth-century England indicate an extremely short working year – 175 days – for servile laborers. Later evidence for farmer-miners, a group with control over their worktime, indicates they worked only 180 days a year.” Indeed, there is no reason why a technologically advanced society should force its citizens to work harder than their ancestors, when they could work a lot less without giving up their modern living standards. Among other things, this policy would also give workers more free time to take care of their children, the elderly and the disabled. Or they could just spend time with their families and friends, if the care of people is entrusted to robots. Unfortunately, few are aware of the irrationality of our current situation. According to anthropologist David Graeber (2013), the situation has now become even more paradoxical than before, because most of our jobs are not needed at all. It is worth remembering that “in the year 1930, John Maynard Keynes predicted that, by century’s end, technology would have advanced sufficiently that countries like Great Britain or the United States would have achieved a 15-hour work week. There’s every reason to believe he was right. In technological terms, we are quite capable of this. And yet it didn’t happen. Instead, technology
has been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless.” Graeber surmises that this is not happening by chance. According to him, “the ruling class has figured out that a happy and productive population with free time on their hands is a mortal danger (think of what started to happen when this even began to be approximated in the ‘60s).” That being said, perhaps we should also consider the possible negative collateral effects of the planned end of work scenario. Before reaching the final stage, in which income from stocks is owned by citizens, a phase of gradual reduction of working hours would be implemented. During this period, a first unknown would be the reaction of firms, given that they – for obvious reasons – would rather increase employees’ working hours, and would threaten to relocate if this were refused. We need to bear in mind, however, that the problem derives from the globalisation of markets, which makes relocating easy and advantageous. In the past (not that many years ago) this threat meant little, because there were import duties. It is not new that the rationality of the private sector at the micro level is in conflict with its own interests at the macro level. It is in the interest of every company to employ a minimum number of workers, pay them a minimum salary and have the highest productivity. But if every company had what it wanted, in a closed system, there would be no consumers, and therefore those same companies could not sell what they produced. Capitalists have therefore recently spotted a way out in the ‘open system,’ the global market. But this too, with time, will become a closed system, with the difference that there will be no regulators. In the nation-states, governments have always resolved the contradictions between the micro-rationality of companies and the macro-rationality of the economic system, mediating between companies and trade unions, or regulating the
labour market. But the global economy has no government, so national systems will have no option other than exiting the global economy. Indeed it is not hard to foresee that – were we to arrive at the absurd situation of a technological progress that generates hunger instead of wealth – nations would one by one withdraw from the global market to rescue their levels of employment. It is true that imposing reduced employee working hours on companies, while maintaining salaries at the same level, might convince them to move production elsewhere. However, the situation would be different from today’s. Since in the future almost exclusively machines, and not humans, will be employed, it would not be possible for companies to blackmail government and citizens by threatening to fire thousands of workers. In addition, it would no longer be possible for them to sell their products on the domestic market, because of potential import duties. Finally, they would have to transfer to more turbulent countries with chronic unemployment and rampant crime. These companies would therefore stand to lose. Thus, if a certain degree of autarchy is a factor of weakness for a degrowthist country, it would not be so for a technologically advanced country with sufficient sources of energy and endogenous factors of technological development (brains and scientific institutions that are up to standard). A hyper-technological semi-autarchic state could maintain domestic order via the distribution of profits and the concurrent reduction of working hours, positing as an asymptotic ideal a society in which unconscious machines would work for sentient beings,10 while sentient beings would devote themselves to recreational or higher activities, such as scientific research and artistic production.
10 A category that may include humans, transhumans, posthumans, machine-human hybrids or cyborgs, machine-animal hybrids, some animals, etc.
A second unknown is linked to the reaction of the people who still work – let us call them the ‘irreplaceable’ – when they see a mass of people paid to enjoy themselves and to consume. Moravec is clearly enthusiastic about the robots that he is designing and convinced that they will be able to do any work. But – if we assume that machines will be very sophisticated but still not conscious – it seems more plausible to think that, even if any job could in itself be executed by an intelligent robot (surgical interventions, repairs, artisanal work, transport of goods and people, etc.), there must always be a sentient being in the loop who acts as supervisor. Someone will have to be there, if only to act as maintenance manager of the machine, or of the machine that maintains the machine, or to gather data on the behaviour of the machine (‘spying’ on it to prevent unpredictable collateral effects). When trains, planes and taxis are able to move autonomously, users – for psychological reasons – will want to think that someone is checking, even if remotely, that all goes well.11 However few these workers may be, why should they work when others do not have to? This problem does, however, have a solution. The citizens’ income from stocks is not hard to accept psychologically, if one recalls that even today there are plenty of people who live off inherited capital, enjoying the labour of fathers and grandfathers. All it takes is to convince oneself that everybody has the right to live off a capital. Irreplaceable workers could receive a payment in addition to the citizens’ wage, which would give them a higher social status in exchange for their work.
11 At least, this concerns the first generation of users; later generations may have more trust in the machines.
Or even the society of the future – also to preserve a sense of community – could institute a compulsory civil or military service that would employ all citizens for a few hours a week to carry out functions of control and supervision. Besides, it is highly probable that sooner or later all nations will be constrained to remedy technological unemployment in the same way, and, faced with new homogeneous global conditions, borders could once again be opened – finally promoting the free circulation of people and goods. In the end, as communications and transport continue to develop, the world becomes ever smaller and borders ever more anachronistic. Therefore autarchy could be just a painful but necessary phase to overcome the resistance of capital to global regulation. And at the end of the process we shall have humans truly liberated from obligatory work.

6. An ethical judgment

We have seen that the social questions arising from the increasing robotisation of industry and from the spread of artificial intelligence in the social fabric can partly be assimilated to those that came with the mechanisation of manufacturing during the industrial revolution, and are partly entirely novel. In both cases the process arises, mostly, within the context of a capitalist economy. This is why the evaluation of the process as a whole cannot be univocal but depends on the position of the evaluating subject within the social strata and on the interests pertaining to this subject. Put more simply, robotisation is not good or bad in itself but is good for some social classes or categories and bad for others, in accordance with the concrete effects that it has on people’s lives.
The effects would be the same for the whole population, and consequently we could assign the process a univocal evaluation, only within a framework of absolute socio-economic equality. In this hypothetical (or utopian) picture, we would not hesitate to say that the effects of robotisation would be positive overall. Computers and robots can replace humans in repetitive or dangerous tasks. In addition, they allow a faster and cheaper production of consumer goods, to the advantage of the consumer. The quality of products also increases, because of the greater precision of machines. Besides, some tasks can only be done by robots and computers. For instance, some work requires such precision that no human could do it (fabricating very small objects at the nanoscale), and some objects can only be conceived and made with 3D printers and computers (e.g. unusual shapes). We use more and more objects of this kind in consumer electronics, but also in fashion (jewellery). However, since societies today are highly stratified, and some – for instance the Italian one – have a rather rigid class structure offering few possibilities to rise or fall, one cannot disregard the negative effects that a robotisation not accompanied by structural social change would have on some social classes. To remedy this situation – as should already be clear – our preference goes to the planned end of work scenario, for this is the only strategy that takes seriously the need to ensure a widespread distribution of the benefits of automation. Of course, we cannot say for sure that this is a ‘necessary’ development – given that one cannot speak of necessity inside a system where will, error and human irrationality also play a part. After all, the degrowthist party could win, with the result that development would effectively come to a halt.
Or an erroneous growthist strategy could prevail which, even though it wanted growth, would fail to obtain it and so throw society into chaos. If, however, the planned end of work scenario becomes real, I believe that the final outcome – in addition to being realistic – would also be fair. To justify this value judgment, I will quote a passage of mine from the journal Mondoperaio from a few years ago, aimed at showing the ethical basis of the abolition of work: “Machines will be able to do most jobs, including designer jobs, even if they are deprived of consciousness or emotion (a possibility, however, that cannot at all be excluded). If no company finds it convenient to employ human beings, because they can be replaced by robots that work intelligently with no break, costing only the energy to run them, one will have to think of another social structure that could imply the abolition of work. Citizens could obtain an existence wage (or a citizen’s wage) and be paid to consume rather than to produce. This solution would be ethically justified, because science and technology are collective products that owe their existence to the joint effort of many minds working in different places and historical times” (Campa 2006). It is the concept of epistemic communalism that I examined in depth in my book Etica della scienza pura (Campa 2007). So I continued: “A quantum computer produced, for example, by a Japanese company would not have been conceivable without the ideas of Democritus, Galileo, Leibniz and other thinkers. In addition, scientific research is often financed with public money. It would be unfair to tax workers in order to finance research whose final result would be their own social marginalization.” In brief, the collective character of technoscience amply justifies a politics based on solidarity.

7. Conclusions
To sum up, we have outlined four possible scenarios. Two of them imply the end of robots, two of them the end of work. Of the latter two, one scenario is dystopian, the other utopian. In the worst case humanity will be reduced to slavery to the advantage of a capitalist elite. In the best case humans will live to consume and to enjoy one another’s company, while robots do the hard and dirty work. The utopian scenario in its turn has two possible faces: one social democratic (redistribution based on social policies supported by taxation) and one socialist-capitalist (redistribution based on a system change that assigns the ownership of robotic industries to citizens). One way or the other, the forecast is that the whole of humanity will retire, having worked just a little or not at all. As regards the utopian scenario, one can observe that Moravec seems to have an enormous faith in people’s ability to impose their own reasons and their own interests via the tools of democracy. The idea of a future that is not necessarily so uniform appears more fecund to me, considering that neither the past nor the present has a single face. In other words, an intermediate scenario between dystopia and utopia seems more probable, with variations from country to country and from people to people, depending on political awareness, the level of infrastructure, and the degree of democracy. It is precisely for this reason that we should also reflect upon the socio-political dimension of automation, that is, on the social and industrial policies presently in action. These will play an important part in bringing about the future. Put more clearly: contrary to what many futurologists appear to postulate in their analyses, human societies will not all have the same future.

Chapter 5

Workers and Automata: A Sociological Analysis of the Italian Case

1. Artificial Intelligence and Industrial Automation

The concept of ‘artificial intelligence’ is a vast one which includes all the forms of thinking produced by artificial machines. The concept of AI is therefore strongly related to that of automation, that is, of machines behaving autonomously, albeit in response to certain inputs and in the presence of programs. Any inorganic machine conceived and constructed by humans – be it a desktop computer or a semi-mobile robot, a dishwasher or a power loom – that is able to carry out tasks that humans carry out using their own intelligence is an automaton. In other words, «a certain category of sets of elements are ‘universal’ in the sense that one can assemble such elements into machines with which one can realize functions which are arbitrary to within certain reasonable restrictions» (Minsky 1956). Given this definition, it follows that all functioning automata are endowed with a certain degree of artificial intelligence. The refrigerator is less intelligent than a PC, in more or less the same way as an insect is less intelligent than a vertebrate. And some do not hesitate to compare the various forms of organic and inorganic intelligence (Moravec 1997).

Automation is therefore not something new that has arisen in the last few years, but the fruit of a long and slow historical process that can be traced back to the mechanical calculators of Charles Babbage or Blaise Pascal, if not all the way to Heron’s automata. Therefore whoever has a more revolutionary conception of artificial intelligence feels the need to introduce a distinction between weak AI and strong AI – a distinction that has a philosophical dimension and touches on matters such as the functioning of the brain and the ontology of the mind. This, however, will not be the topic of our article. Rather, our intention here is to tackle the sociological aspects of artificial intelligence – whether it is understood as weak or strong, discrete or gradualistic. In other words, we intend to analyse the social, political and economic consequences of the production and use of automata or thinking machines. Let us just set some temporal and spatial limits to our analysis. We will be looking at the artificial intelligence of the third industrial revolution (Campa 2007), which can be situated in the last decades of the 20th century and in the first decade of the 21st. In this period, automation is identified in particular with computerisation and robotisation. And we will chiefly be looking at Italy, which can in any case be viewed as exemplary, given that it is still among the first seven industrial powers of the planet and is one of the leading countries in the world for ‘robot density.’ One of the most systematic applications of electronic calculators and of robots has so far been found in industrial plants. Microprocessors are omnipresent. Personal computers are found in every home and in every office. There is no institution that does not entrust some of its tasks to AI in some form or other. However, it is in the manufacturing industry that one observes some of the macroscopic social effects of the emergence of this technology.

We have all seen, at least once, robots that paint, weld and assemble cars, as well as electronic products such as radios, TVs and computers. These are the so-called industrial robots, which in advanced technological societies have come to work alongside and, in many cases, replace the worker on the assembly line. The first industrial robots appeared in the fifties, but it is only in the seventies that their presence in Italian plants began to become significant. They were steel constructions of impressive dimensions, endowed with a rudimentary electronic brain, faculties of perception, servomechanisms and hydraulic engines. The first-generation industrial robots were slow and not particularly intelligent, and therefore their work was limited to tasks that do not require high precision, like paint spraying and car body welding. Precision work was still done by humans. However, as could be foreseen, the situation changed quickly, and in the eighties one could see robots able to assemble complex electronic circuits, inserting and welding the components in a matter of seconds and without errors. Industrial robots have become ever more anthropomorphic. Their degrees of freedom12 have increased, along with their precision, velocity and load capacity. In the car and heavy industries they have little by little taken over other tasks that require precision, such as drilling, grinding, milling and cutting, but also palletisation and stockpiling. Nowadays they are endowed with laser devices and vision systems that allow them to operate with millimetric precision.
12 By “degrees of freedom” of an industrial robot, one means the number of axes of movement (in other words, the number of distinct movements) that the machine is able to perform. The degrees of freedom go from 3-4 for the simplest robots to 9-10 for the most complex ones. For comparison, the human hand is considered to have 23 degrees of freedom.

While it is the United States, as the producer of the largest number of robots, that shows the way, Japan also made a massive entry into the sector from the seventies onwards. Indeed, among the characteristic aspects of the third industrial revolution there is also the reorganisation of the processes of production, with the computerisation and automation of the entire factory, with Toyota as the true pioneer. It is not by chance that one tends to oppose Toyota’s model to the model of organisation based on the assembly line developed by Ford and Taylor. According to Cristiano Martorella (2002), “the Japanese industrial revolution has thus transformed the factory into an information system and has freed man from mechanical work, transforming him into a supervisor of productive processes. This takes place in a period in history that sees the transition from the industrial society to the post-industrial society. This epochal turning point will be well understood once the transition to the society of services and information is complete.” Italy contributes as well. FIAT was the first Italian company to make massive use of industrial robots. In general this country tends to import digital electronics from abroad, having lost its foothold in this field, particularly after the crash of Olivetti in 1997. However, in robotics one sees very interesting exceptions to this rule. For example, Robogate is an Italian invention that has since been adopted by the entire car industry. We will not enter into technical details, which the reader may find in handbooks (Kurfess 2012, Siciliano and Khatib 2008). Rather, let us cast a rapid glance at the magnitude of the process of robotisation in industry. As the Italian newspaper La Repubblica stresses, “first generation robots, those that work in industry all over the planet, number over a million: 350,000 in Japan alone, 326,000 in Europe. In Italy, for every 10,000 persons employed in industry, there are over 100 robots, a
number that makes our nation one of the first in the world in this sector. They are used above all for mechanical work, in welding and in working with plastics. And their price continues to fall: a robot purchased in 2007 may cost a quarter of the price of the same robot sold in 1990. And if its yearly cost was 100 in 1990, today it is not above 25” (Bignami 2007). More precisely, Italy is the second country in Europe and the fourth in the world as regards robot density, as shown by a more accurate study by the UNECE (2004, 2005). There are already more than 50,000 units, and the number continues to increase.
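As a rough plausibility check, the two figures just quoted (a density of over 100 robots per 10,000 industrial employees and a stock of more than 50,000 units) can be combined to estimate the size of the industrial workforce they presuppose. The short sketch below, written in Python purely for illustration and using round numbers that are ours rather than UNECE’s, shows the arithmetic.

```python
# Back-of-the-envelope check of the robot-density figures quoted above.
# Round numbers only; the exact values are in UNECE (2004, 2005).
robot_stock = 50_000      # industrial robots installed in Italy (lower bound)
density_per_10k = 100     # robots per 10,000 industrial employees (lower bound)

# Industrial employment consistent with both figures taken together.
implied_employees = robot_stock / density_per_10k * 10_000
print(f"Implied industrial employment: {implied_employees:,.0f}")
# About 5,000,000, of the same order as the roughly 4.9 million
# manufacturing workers recorded by the 2001 ISTAT census (see Table 1).
```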

2. Effects on the level of employment

Thanks to the ISTAT censuses, we are able to make very accurate comparisons between the growth of automation on the one hand and its effects on employment on the other. If one leaves aside the census of the factories of the Kingdom of Italy, which goes back to 1911, ISTAT has carried out nine censuses of industry, commerce and services (1927, 1937-39, 1951, 1961, 1971, 1981, 1991, 2001, 2011). The first data of the 9th census (2011) were presented on 11th July 2013. It is not always easy to compare the statistics directly, because with time the survey techniques and the categories under scrutiny have changed: until 1971 the focus was on ‘industry and commerce’, while from 1981 onwards it is on ‘industry and services.’ The statistical series have been ‘harmonised’, however, enabling an overall reading of the data. In addition, we are interested here in the manufacturing industry, and consequently the shift of focus from commerce to services is of marginal relevance. Let us begin nonetheless with a comparison between the last three complete series of statistics (1981,
1991, 2001) that are fairly homogenous. The table concerns the absolute data:

TABLE 1 - COMPANIES AND WORKERS BY SECTOR OF ECONOMIC ACTIVITY – 1981, 1991, 2001

Economic activity              1981: Companies / Workers    1991: Companies / Workers    2001: Companies / Workers
Agriculture and fishing (a)    30.215 / 110.195             31.408 / 96.759              34.316 / 98.934
Extractive industry            4.477 / 56.791               3.617 / 46.360               3.837 / 36.164
Manufacturing industry         591.014 / 5.862.347          552.334 / 5.262.555          542.876 / 4.894.796
Energy, gas and water          1.398 / 42.878               1.273 / 172.339              1.983 / 128.287
Construction                   290.105 / 1.193.356          332.995 / 1.337.725          515.777 / 1.529.146
Commerce and repair            1.282.844 / 3.053.706        1.280.044 / 3.250.564        1.230.731 / 3.147.776
Hotel and civil services       212.858 / 644.223            217.628 / 725.481            244.540 / 850.674
Transport and communication    132.164 / 679.386            124.768 / 1.131.915          157.390 / 1.198.824
Credit and insurance           27.775 / 446.745             49.897 / 573.270             81.870 / 590.267
Other services                 274.463 / 911.560            706.294 / 1.977.334          1.270.646 / 3.238.040
TOTAL                          2.847.313 / 13.001.187       3.300.258 / 14.574.302       4.083.966 / 15.712.908

Although the number of workers as a whole grew over the twenty-year period 1981-2001, it is also evident that the number of employees in the manufacturing industry decreased remarkably. The data are significant given that in the meantime the Italian population as a whole kept growing, albeit not at the pace of earlier decades. If we extend the investigation backwards, we find that until 1981 the number of people employed in industry had instead been increasing. A study by Margherita Russo and Elena Pirani (2006) that spans the half-century is useful here.
The tables, conveniently reconstructed and harmonised, show first the growth and then the fall of employment, both in absolute terms and in percentage.

TABLE 2 - DYNAMICS OF WORKERS IN ITALY BY SECTOR OF ECONOMIC ACTIVITY, 1951-2001 (ABSOLUTE VALUES)

Sector                       1951        1961        1971         1981         1991         2001
Engineering                  1.041.962   1.569.306   2.166.813    2.745.513    2.531.295    2.496.658
The rest of manufacturing    2.456.258   2.928.698   3.141.774    3.397.865    3.253.313    2.766.994
Services                     100.802     110.194     170.550      702.928      1.147.988    2.208.853
Total economic activity      6.781.092   9.463.457   11.077.533   16.883.286   17.976.421   19.410.556
Total manufacturing          3.498.220   4.498.004   5.308.587    6.143.378    5.784.608    5.263.652

One could therefore think that – given the fall in the number of companies and workers during the twenty-year period from 1981 to 2001 – we have entered a phase of deindustrialisation. This is partly true (Gallino 2003), but the data relative to industrial output show that the fall in the number of workers does not correspond to a fall in output. See on this matter the study by Menghini and Travaglia (2006) on the evolution of Italian industry, where the tables relative to the decades 1981-1991 (the eighties) and 1991-2001 (the nineties) show a noticeable increase in industrial output. Intermediate surveys during the decade 2001-2011 are less ‘linear’ because of the two big epochal events that characterised the 2000s: a) the terrorist attack on the USA and the ensuing war in the Middle East; b) the major economic crisis that began in 2008 and is still with us. ISTAT data show that in the 2008-2010 period the slump in employment becomes much more
pronounced, while industrial output also decreases. This happens in Italy as in other Western nations. However, the first data of the 9th census on industry and services (ISTAT 2011) confirm the trend of a decreasing number of industrial employees coupled with the growth of the total number of workers.

TABLE 3 - COMPARISON OF THE DYNAMICS OF WORKERS IN MANUFACTURING WITH TOTAL WORKERS, 2001-2011

                          Active enterprises (2001 / 2011)    Workers (2001 / 2011)
Total                     4083966 / 4425950                   15712908 / 16424086
Manufacturing industry    527155 / 422067                     4810674 / 3891983
Other activities          3556811 / 4003883                   10902234 / 12532103

In ten years, the number of blue-collar workers has decreased by approximately one million units. The following specific data are also quite significant:

TABLE 4 - DYNAMICS OF WORKERS IN MANUFACTURING AND REPAIR OF COMPUTERS AND MACHINERY, 2001-2011

                                                            Active enterprises (2001 / 2011)    Workers (2001 / 2011)
Manufacturing of computers and other electronic devices    5434 / 5693                         139239 / 112055
Manufacturing of machinery and other equipment             21263 / 24584                       451806 / 457956
Repair of computers and other domestic appliances          33659 / 26152                       61512 / 46837
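The differences commented on in the following paragraphs can be read directly off Tables 3 and 4. The short sketch below simply recomputes them (Python is used here only as a convenient calculator; the figures are the ISTAT ones reproduced above).

```python
# Net changes in the number of workers, 2001-2011, from Tables 3 and 4 (ISTAT).
manufacturing_total = {"2001": 4_810_674, "2011": 3_891_983}
machinery = {"2001": 451_806, "2011": 457_956}
computer_manufacturing = {"2001": 139_239, "2011": 112_055}
computer_repair = {"2001": 61_512, "2011": 46_837}

def change(series):
    """Absolute change between the two census years."""
    return series["2011"] - series["2001"]

print(f"Manufacturing as a whole:       {change(manufacturing_total):+,}")     # -918,691
print(f"Machinery and equipment:        {change(machinery):+,}")               # +6,150
print(f"Computers and electronics:      {change(computer_manufacturing):+,}")  # -27,184
print(f"Repair of computers/appliances: {change(computer_repair):+,}")         # -14,675
```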

Here, we can clearly see that workers expelled from other manufacturing industries are not reabsorbed into the computing and machinery sectors. The number of enterprises active in the
manufacturing of computers grew, while the number of workers in the same sector significantly shrank. We also notice a decrease in both the enterprises and the workers involved in computer repair. The only exception is the manufacturing of machinery, where we observe that the number of both enterprises and workers grew. But a growth of 6,150 workers, compared with the loss of one million jobs, can hardly be seen as evidence that workers made redundant by machinery are ‘recycled’ by the system as machinery constructors. Summing up, notwithstanding the turbulence connected to wars and financial crises, we can say that on the whole, during the last thirty years, a trend has emerged that is characterised by a fall in the number of industrial workers and an increase in industrial output. This should not be astonishing if one keeps in mind that productivity also depends on other factors. The other factor that grows noticeably during this same period is precisely automation, that is, the massive use of computers and robots in industrial manufacturing. All this therefore leads one to think that there exists a relation between the fall of employment in industry and the growth of automation. This is the hypothesis we want to consider. Allow us to open a parenthesis. It is known that data are not only read but also interpreted. A statistical correlation does not imply a causal dependence between the phenomena. Therefore the statistical data can only be a starting point, to which other elements and other considerations will later be added. But without statistics one gets nowhere. To those who say that statistics are unreliable and that one can therefore easily do without them, we reply with a well-known popular saying: if money does not buy happiness, imagine misery. By analogy we say: if statistics do not give certainty, imagine mere impressions. Close parenthesis.
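To give the parenthesis a concrete form: the kind of correlation the text refers to can be computed from the harmonised ISTAT series in Table 2. The sketch below assumes Python with NumPy (neither is mentioned in the text); as argued above, the resulting coefficients describe an association between time and manufacturing employment and say nothing, by themselves, about automation being its cause.

```python
import numpy as np

# Harmonised ISTAT census figures from Table 2: workers in manufacturing.
years = np.array([1951, 1961, 1971, 1981, 1991, 2001])
manufacturing_workers = np.array(
    [3_498_220, 4_498_004, 5_308_587, 6_143_378, 5_784_608, 5_263_652]
)

# Pearson correlation between time and manufacturing employment, computed
# separately for the growth phase (1951-1981) and the decline phase
# (1981-2001) discussed in the text.
growth_r = np.corrcoef(years[:4], manufacturing_workers[:4])[0, 1]
decline_r = np.corrcoef(years[3:], manufacturing_workers[3:])[0, 1]

print(f"1951-1981 correlation: {growth_r:+.2f}")   # strongly positive
print(f"1981-2001 correlation: {decline_r:+.2f}")  # strongly negative

# Either coefficient is only a starting point: correlation does not, by
# itself, establish that automation caused the fall in employment.
```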

To start, let us take into account the interpretation of Luciano Gallino, perhaps the greatest expert on the sociology of work and industry in Italy. We are trying to shed light, first of all, on the question of technological unemployment: “Technology is essentially a means to do two different things. On the one hand one may try to produce more, even much more, using the same amount of work. On the other hand one can try to use the potential of technology to reduce the workforce employed to produce a given volume of goods or services. And this leads to a very simple equation: as long as one manages to increase production, which means as long as one manages to expand the markets, technology does not generate any unemployment, because the workforce remains constant and the only thing that grows is the markets. The markets, however, different from one another and varied as they are, in general cannot expand forever. When the markets can no longer expand, technology is used mainly to reduce the workforce, and then the spectre of technological unemployment begins to loom” (Gallino 1999). Economists tend to underestimate the problem of technological unemployment because one observes that, in percentage terms, the unemployment due to the technological development of the last two hundred years has not been unsustainable. In general one avoids the issue by saying that every new technology eliminates one job while creating another. Even if the computer takes an employee’s job, there will be the need to build and maintain computers. There is a grain of truth in this observation, but the issue is slightly more complex. This argument always carries with it the idea of the invisible hand, of the self-regulating market. In reality the system has so far had to rely on the constant intervention of governments with policies of all kinds. The readjustment of the economic system, following the introduction of new and revolutionary technologies, does not happen in real time and without a
price. If it is true that the worker or the employee whom the machine has replaced can find another job, perhaps a new kind of job, it is also true that they might not have the skills required for the new job (for example, computer maintenance) and that, in order to acquire them, they will need months and perhaps years – that is, if they are successful at all. The replacement job can therefore arise one or two years after the original job was lost. Humans are fragile machines – they do not survive more than a few days in the absence of a certain amount of calories, adequate clothing and a roof over their head. At the same time, these ‘machines’ tend to behave violently and disruptively if they come face to face with the prospect of their own destruction. Therefore, even if it is the case that the market self-regulates, since it does not do so immediately, if one wants to avoid the immediate collateral effects of technological unemployment one will have to play the public hand in addition to the invisible one. This is what all governments have been doing, even the most liberalist and capitalist ones. For over a century, governments have systematically obliged employers to reduce working hours, in order to compel them – against their interest – to maintain the number of employees.13 They have instituted tools such as unemployment benefits and national insurance contributions, at times efficiently, at other times creating pockets of parasitism. They have acquired the goods produced by the private sector via public contracts.
13 Gallino makes the same observation: “In order to avoid reducing the workforce, and thus taking too fast the road of technological unemployment, the tools to reduce working hours were invented over a century ago. At the beginning of the [20th] century one worked 3,000 hours per year, in the middle of the century about 2,500, and today most workers have a mean annual schedule of around 1,600-1,700 hours of work. This is one of the advantages of technology: being able to keep people employed while decreasing their performance.”
They have funded retraining schemes and refresher courses for the chronically unemployed. And in the most tragic cases they have patched up the effects of economic crises by starting wars. On the one hand, conflicts reduce the population by sending entire generations to the front; on the other hand, they allow the war industry to reabsorb the jobless. However cynical this may seem, it has happened and it continues to happen. These tools, especially the systematic reduction of working hours and benefits for the temporarily unemployed, have so far worked rather well. Today the appearance of two new factors – globalisation and artificial intelligence – has created a new situation with respect to the one generated by the first and the second industrial revolution. Globalisation no longer allows one to operate on the basis of reduced working hours. To do so would be suicidal if not implemented at a global level and adopted by all nations. Although globalisation has created a single large market, it has not created a single large society led by a government that is its authentic expression. There probably exists a kind of ‘shadow world government’, otherwise one cannot understand for whom or for what the national states are giving up their own sovereignty, but – if there is such a thing – it resembles a financial oligarchy that understandably defends its own interests, rather than an enlightened elite serving the interests of all. The idea that there could exist a market without a society has, as we can see, critical consequences. In addition there is the matter of artificial intelligence. The idea that every job eliminated by a technology is sooner or later replaced by a job generated by that same technology is called into question by the nature of automation itself. Gallino (1999) writes: “This minor textbook equation prevails much less in the age of driven automation, the one that I call ‘recursive
automation.’ The jobs that technology used to create soon after it had suppressed a certain number of them were partly recovered through the enlargement of the markets, but partly also by producing the technological means themselves, that is, by producing the very machines for goods and services that the markets absorbed up to a point. With automation applied to itself, the machines produce other machines to automate; the process of automation reaches very high levels, and thus there is no longer any hope, or at least it is much reduced, of sooner or later finding a new job in the sectors that produce the technology that eliminated the original job, the first job.” All empirical evidence shows that technological unemployment is more than a hypothesis. It is on the basis of data and graphs, and certainly not of moral principles, that sociologists criticise mainstream economics. Although it is dated, the book Se tre milioni vi sembrano pochi14 is still instructive; in this analysis Gallino (1998) gives a central position to recursive automation, which until then had been given little heed.
14 This book has not been translated into English. The title means: If three million seems not much. On the ways to fight unemployment [TN].
In the following review, sociologist Patrizio Di Nicola sums up the main ideas:
- To the myth that the upswing generates employment, the author opposes Italian statistical evidence: in thirty years GNP has doubled, but the number of workers has only increased by 2,1%, that is, by 400,000 units. At the same time the number of resident citizens has increased by over 6 million;
- The idea that technology creates, in the long term, more jobs than it destroys was valid in the past, the author states, but is no longer so. The increase in productivity due to new machines can generate a positive occupational balance only if the markets absorb more merchandise. But in Italy companies function inside
mature and partly static markets, and exporting in these sectors is anything but easy;
- The advice to do ‘like the Americans,’ who seem to have managed to create a phenomenal job machine, is founded on misleading presuppositions. On the one hand, the increase in the number of jobs is a direct consequence of the increase in the population (which between 1980 and 1995 went from 227,8 to 263,4 million). On the other hand, American job performance is aided by a somewhat relaxed statistical method. The following count as employed: 6 million students aged 16-24 who have nevertheless been working for at least one hour in the week preceding the survey (maybe they washed the neighbour’s car or delivered newspapers before going to college); 20 million contingent workers, people who work sporadically, when they can; 23 million part-time workers, who in reality correspond to 12 million full-time jobs. And in the same way as they overestimate the number of employed – Gallino observes – the official statistics produced in the USA underestimate the number of unemployed, which, applying European criteria, ought to be over 12% instead of 5,3%. That is a little over the European mean.
- To the idea that what is most responsible for the poor levels of employment is the welfare state, Gallino objects with some ‘odd cases’: Italy, with its 12,2% unemployed, spends 25,1% of GNP on welfare, while Holland, which has an unemployment rate of 6,5%, spends the most: 29,8%. Denmark, the country where unemployment is the lowest in Europe, allocates 32,7% of GNP to social welfare. At the other extreme, Spain, which invests less than we [Italy] do in welfare, has a level of unemployment above 22%.

In brief, the growth of production and of productivity has not necessarily brought about the growth of employment. The relation between growth and employment is extremely weak in a country that does not produce technologies but imports them at
best.15 The American model is an illusion because the occupational data are ‘inflated’ (and, ten years after the publication of this book, the situation is even worse after the outbreak of the financial crisis). Welfare – considering the graphs – rather than being an obstacle to growth, appears to be a factor of production, yet almost all Western countries tend to respond to the crisis by dismantling or reducing social benefits. Not only that. One cannot even hope that someone who is expelled from industry will later necessarily be reabsorbed by the service sector (be it public or private), “because services are just as susceptible to automation as the production of goods” (Bignami 2007). We will consider this aspect in greater detail.
15 “A country that mostly buys technology researched and developed by others increases its productivity, and therefore sees the number of jobs decrease, but it does not see them recreated by this technology” (Gallino 2008: 17).

3. Social stratification and the new generation robot

Automation is already expanding beyond the manufacturing industry. The evolution of robots is now also having effects on the tertiary sector. In addition, the presence of robots within the home is growing at a rate of 7-8% per year. According to the predictions of Bruno Siciliano, president of the International Society of Robotics and Automation, “out of the 66 billion dollars that will represent the cost of robotics in 2025, 35% will be that of personal and service robots” (Bignami 2007). This is why, if we failed in the past to correctly formulate the social problem of robotisation, it would be even more shortsighted not to formulate it now. “So from now on robots are everywhere. In our
home, in the office, in our car. They take care of the elderly: in South Korea robots have been developed that control home electrical appliances and remind the elderly person when it is time to take their medicine. They serve as nurses to the sick (in the USA some prototypes are even taking temperatures) and they can also transform into tail-wagging puppies (the case of ‘Aibo’, among others); soon they will act as baby-sitters, if it is true that some companies are researching how to ‘teach’ the automaton to rock a newborn.” Philosophers and scientists talk about all this, once in a while, in symposiums and conferences, but the question seems virtually absent from political agendas. The problem is underestimated for two main reasons: 1) the lobbying power of major industries, which only stand to benefit from robotisation and therefore feel no need to discuss the issue in wider terms, and 2) the widespread conviction that robots will never be able to imitate humans all the way. Yet, as Siciliano observes, today there are robots capable of doing the same work as a craftsman. “They are working in the zone between Vietri and Cava dei Tirreni, where they are imitating the master potters.” In practice the robot is not just able to imitate assembly-line production and surpass it in precision, but also to reproduce that human imprecision of the craftsmen that is so characteristic of their products. An optical system records the craftsman’s imprecise brush strokes, all different from one another. Using this information one writes a program which, when implemented in the robot, enables it to produce tiles that are all different from one another. If we proceed in this way, robots could also replace humans in activities involving decision-making. Interviewed by Bignami (2007), Antonio Monopoli makes this forecast: “It is likely that with time one will produce robots with a greater and greater ability to teach themselves. In fact we will have robots able to
‘decide’, a condition they share with humans.” Once we have got that far, according to Bignami, “the expansion of robotics will also involve problems of ethics, and it is not fortuitous that one talks of ‘Roboethics’ at the ICRA conference. One problem that may arise is the possible inadequacy of the robot’s response to an event. In the case of injury, who would be responsible?” Monopoli replies: “If the robot is regarded as a machine, the responsibility falls on the owner. But if the robot has a great capacity for self-learning and interaction with the external world, and the idea of robots working autonomously is socially accepted, one could not question the good intentions of those who designed and commercialised the robot.” These problems are generally presented as having to do with roboethics, and therefore as ethical problems (which concern the whole of humanity and have to be solved with reference to universal principles) and not as chiefly political ones (that is, problems which concern the interests of a polis, a community, a faction, a social group). Now, we wish to stress that the problem – be it ethical or political – was born earlier, when the big industrial robots arrived in the factories. The robots’ spread from the factories into homes and offices is, if anything, an evolution of the old problem that arose with the industrial revolution. The ruling classes downgraded the problem of technological unemployment to a ‘technical’ one, and certainly not an ‘ethical’ one, as long as the victim of the process was the working class. It would be interesting to see whether the same ruling classes would be outraged should an anthropomorphic robot sit down at the desk of the CEO, or an AI replace the manager in the control room of a multinational. If he were alive, Karl Marx would probably say that the bourgeoisie wakes up to the ethical problem once the robot reveals itself able to replace also the manager, the artisan, the medical doctor, the teacher, when it acquires the ability to
make decisions – and not only the proletarian at the assembly line. Again, the dominant group equates itself with humanity and turns its own political problem, its own class interests, into a universal ethical problem.

4. The need for a new socio-industrial policy

With the exception of radical ecologists and the supporters of degrowth, most people – regardless of which side of the political spectrum they are on – would claim that economic growth and a high rate of employment are good things. These are universally seen as goals that should be pursued. Let us therefore ask ourselves whether the policies that the last several Italian governments have implemented are effectively rational – that is, whether they allow these goals to be achieved. The current Italian political leadership seems to assume that growth and employment have no causal link with automation, given that this factor is repeatedly overlooked in their analyses. Basically, politicians accept the thesis of the “Luddite fallacy” elaborated by economists. Therefore, most policies have the aim of making the labour market more flexible or of reducing the cost of labour. It is assumed that Italy would attract more investment if it were easier for capitalists to fire workers and if workers were less ‘choosy’ when looking for a job. The policies based on these assumptions tend to create favourable ground for brain drain and for the immigration of unskilled workers. A mass of immigrants with reduced rights (given that they cannot vote and have temporary residence permits) is much more appealing to companies than skilled and demanding citizens. Has this approach, adopted systematically since the early 1990s, produced positive results? Everything points to the contrary. As a matter of fact, the deregulation of the labour market
and the gradual dismantling of the welfare state have not generated the expected results. Data from the World Economic Outlook Database of the International Monetary Fund (October 2012) show that, in the decade 2003-2013, Italy’s growth was on the whole -0.1%. This means that while the global economy keeps growing, Italy occupies one of the few positions with negative growth, together with Zimbabwe, San Marino, Greece, and Portugal (Pasquali, Ventura and Aridas 2013). These policies could be ineffective and perhaps even counterproductive precisely because the last generation of robots and computers has something to do with structural unemployment. On the one hand, highly automated industries, having few humans in the loop, are probably much more concerned about the cost of energy than about the cost of labour. On the other hand, no matter how much the cost of labour and the rights of workers in a developed country are reduced, low-tech industries will always find it more convenient to relocate to underdeveloped countries. In other words, the very problem could be the postulate on which the system is built: its necessity – that is, the idea of the invariance of the mode of production. Hence, the solution to almost any contingent problem is primarily to patch it up (at low cost) to keep the system up and running for now – leaving the serious problems for future generations to sort out. This is pretty obvious in the case of the politics of development and of social security policies. We will not expatiate on development. For decades Italian political leaders have talked of the necessity of stimulating scientific research, but talk remains always and only talk. In reality the investment in research, both from
the State and from private sources, is at its lowest.16 Hence it so happens that one of the nations of the leading group of developed economies (of the G7 or G8) does not have a single manufacturer of computers or of mobile phones – to name two driving products of the new economic phase. The result is that technological development is certainly not slowing down, given that technology can also be imported. Rather, the result is that the sector that could reabsorb at least part of the technological unemployment is not stimulated. As regards the politics of prevention, a now creaking system has been in place for a few decades, and it relies on two remedies: massive immigration from less developed countries and an increased retirement age. The first remedy presupposes that there is an oversupply of jobs in Italy, while the second, on the contrary, reduces the number of jobs – so this policy appears schizophrenic right from the start. Yet this policy is in fact the fruit of a plan that the Ministry of Work and Social Policies, under the leadership of Maurizio Sacconi, has put down in black and white.
16 The Eurostat 2009 report on science, technology and innovation in Europe is unforgiving and positions Italy among the last. In 2007, the 27 member states invested a total of less than 229 billion euros, or 1,85% of European GNP. At the same time, the USA reached 2,67% of GNP, and Japan (in 2006) 3,40% of GNP. In Europe, only Sweden and Finland spent more than 3% (3,60% and 3,47% respectively), and there are 4 countries (Denmark, Germany, France and Austria) that spent over 2%. Italy invests little: 1,09% in 2001 and 1,13% in 2006. But it is the data relative to employment that interest us most, and these data are very discouraging. According to this report, researchers in the EU represent 0,9% of employment, while in Italy they reach 0,6%. See http://epp.eurostat.ec.europa.eu/cache/ITY_PUBLIC/9-08092009AP/EN/9-08092009-AP-EN.PDF
If we read a document by the Directorate-General dated February 23rd 2011, with the title “Immigration for work in Italy”, we discover that the Italian government feels the need to increase the number of immigrants: “In the period 2011-2015 the mean yearly requirement should lie around 100,000, while in the period 2016-2020 it should reach 260,000” (Polchi 2011). So in the next few years we will need to ‘import’ one million eight hundred thousand workers, who would be added to the four million already residing in Italy (data from ISTAT).17 The conclusion that we will need six million immigrant workers in the next ten years derives from the following analysis: “The need for manpower is linked both to job demand and to job supply. On the supply side, one foresees between 2010 and 2020 a decrease in the working population (employed plus unemployed) of between 5,5% and 7,9%: from 24 million 970 thousand in 2010 it would fall to a value comprised between about 23 million 593 thousand and 23 million in 2020. On the demand side, the number of employed would grow over the decade at a rate of between 0,2% and 0,9%, reaching in 2020 23 million 257 thousand in the first case and 24 million 902 thousand in the second” (Polchi 2011). Where is the error? For a start, no account at all has been taken of the fact that we are not yet out of the crisis and that too many Italian companies, when they have not relocated, are closing down.18
17 “Foreign residents in Italy on January 1st 2010 number 4.235.059, representing 7,0% of the total number of residents. On January 1st 2009 they represented 6,5%. During 2009 the number of foreigners grew by 343,764 units (+8,8%), a very high increase, but lower than that of the two preceding years (494,000 in 2007 and 459,000 in 2008, +16,8% and +13,4% respectively), chiefly as an effect of fewer arrivals from Romania” (ISTAT 2010).
18 In 2010 in Italy there were over eleven thousand applications for bankruptcy – about thirty a day – representing an increase of 20% with respect to 2009 (Geroni 2011).
Among other things, now even ‘historic’ companies like FIAT threaten to relocate their production abroad. All this while brains are being drained. And there is more. If what we have seen about automation is true, the calculation error is macroscopic. One cannot appraise employment on the basis of a presumed increase in production which, among other things, does not take into account a possible increase in productivity due to automation. Is it too much to ask of the Ministry of Work that it know what artificial intelligence is? If nurses and bricklayers are also replaced by robots, what will then become of the six million immigrants whom no one has really tried to integrate, but who have instead been regarded as stop-gaps to keep pension payments ticking over? What will six million people do – with different languages, religions and customs – when they have no home and no work and, since they are not even citizens, have no political rights and are not eligible for many kinds of social benefits? Has anyone ever asked whether among these six million there is an even ratio of men and women (the required minimum to favour integration)? Has anyone ever asked what skills they have? Whether they can be given the jobs of the future? And how they feel about Italians? About Europeans? Of course, one cannot blame only the centre-right government for this shortsighted policy, given that it is a bipartisan vision, in which some left-wingers would in fact turn a blind eye to illegal immigrants – who are therefore not included in the census and not even intended to plug the leaks of the ‘Italian system.’ Even the Catholics gloat over the government’s document. Andrea Olivero, the national president of the ACLI, the Christian Association of Italian Workers, hurried to say that “these data will expose the demagogy of those who go on about the threat of immigrants. Without them the nation would implode, and to welcome them civilly is not just a humanitarian act but also an intelligent strategy for the future (…). While the last few years have
been dominated by an obtuse logic of containment which, however, has failed, we are happy that the Ministry of Work now looks realistically at the data, because only then will it finally be possible to direct the government towards a phenomenon of immigration that until now has been handled unsuccessfully” (Polchi 2011). A wise strategy for the future? Not at all, if the scenario analysis elaborated by futurist Hans Moravec in “The Age of Robots” (1993) is at least partly correct. According to him, in the first half of the 21st century “inexpensive but capable robots will displace human labor so broadly that the average workday would have to plummet to practically zero to keep everyone usefully employed.” But since the possibility of reducing working hours is not even discussed, what we can expect is growing unemployment, or the growing precariousness of the labour market, or the creation of pointless jobs. Yet, without going that far, it would be enough to familiarise oneself with Moore’s Law, the rate of development of artificial intelligence, and the prospects of robotics and nanotechnology in order to understand that not many hands, and perhaps not even many brains, will be needed to maintain or augment the level of production. The ‘rough guess’ planning of the Italian government therefore leaves one somewhat perplexed. If this is the ruling class’s vision of the future, then we should probably expect a gloomy scenario. The possible consequence of an underestimation of the new automation process could be “widespread immiseration, economic contraction and polarization between the wealthy, the shrinking working class and the structurally redundant” (Hughes 2004). Actually, something even worse may happen. It is unlikely that we will witness the peaceful extinction by starvation of the humans replaced by AI in the production process. Before this takes place, a revolt will break out. And perhaps this would even come
as a surprise to some. Gallino (1999), too, states that we will have to expect social tensions. When asked whether he foresees conflicts in the future, he replies: “Yes, of course, even if these conflicts will be of various kinds. For the time being, the conflict we have now is due to growing inequalities. In all the industrial nations, including our own (and ours even to a lesser extent than the others), the technological development of the last 20 or 30 years has meant a sharp increase in inequality between the fifth that earns least and the fifth that earns most from their work. If you then consider the smallest percentages, the differences are even larger, above all in the United States, but also in nations like Great Britain, France, our own, and even in China, where inequalities have risen very much. This is a conflict that is as old as the world itself, but one which the technologies nevertheless tend to accelerate and embitter. And then there are the conflicts that are, let us say, more intrinsically linked to the technologies. Many technologies improve life, allow one to work better, with less difficulty; many technologies entertain, are intellectually stimulating, can serve as learning tools and so on. And then the difference that is introduced is that between those who can master these technologies, which give them a better life, and those who instead cannot make adequate use of them, whether for economic reasons or for cultural reasons, perhaps also for political reasons. Let us not forget that in some states of the world the new technologies are subjected to censorship, limitations, police control and the like. Hence one of the major conflicts of the future will be between those who are full citizens, fully participating in the technological citadel, and those who instead have to camp outside its walls.” The conflict between the owners of the robots (the new means of production) and the unemployed who have been expelled from the processes of production (the new proletariat) is a



134

Already a rate of unemployment of 10-12% creates social tensions and generates crime. Imagine what could happen if it reached a much higher rate. Obviously, it cannot be excluded that the present technological change is generating only temporary problems, like all the previous technological changes of the last two centuries. All our preoccupations could be dissolved by the birth of new jobs that we cannot even imagine. But neither can we exclude the possibility that we might have to face a completely novel situation. The machines that will enter our society could be so intelligent that almost all human workers may soon become obsolete. We must also be prepared to face this scenario. If this happens, if this is happening, the best solution is not banning AI, but rather implementing social policies that would allow us to enjoy all the benefits of robotisation and automation without the unwanted collateral effects of unemployment or increasing job precariousness. We must be ready to reactivate the policy of the gradual reduction of working hours and to introduce a citizen’s income. We must be psychologically prepared to reverse the dominant economic paradigm. To revitalise the economy, we might not need people working harder. We might need people working less. Work less, work all – as a well-known slogan goes. We might need more holidays, more free time, a stronger welfare state, more money to spend. These policies would certainly make human labour more expensive, but – contrary to what most economists think – this could be exactly what we need. The increase in the cost of labour makes “investments in automation increasingly attractive” (Hughes 2004); high-tech economies are more competitive than low-tech ones; and more competitive economies can distribute better ‘social dividends’ to their citizens.

This is hard to see if we divide the world into Luddites (those who want to ban the machines) and anti-Luddites (those who label as a Luddite whoever dares to relate technology to unemployment), as if tertium non datur. A third way actually exists: one may want more robots, more computers, more intelligent machines, more technologies, together with a consistent change in the system, apt to guarantee a rational and fair redistribution of wealth. As sociologist James Hughes (2004) put it: “It’s time to make a choice: Luddism, barbarism or a universal basic income guarantee.”




Chapter 6

Pure Science and the Posthuman Future

There are transhumanists of many different kinds, holding specific political, cultural or religious views. However, all transhumanists share a high degree of faith in science. Science is mainly seen as the instrument that can help humans to overcome their biological limitations ─ that is, to fight against aging, death, illnesses, and other mental and physical weaknesses. Although this is already a respectable position from an ethical point of view, the program seems too reductive. It reduces science to a mere instrument, and this implies a clear violation of the ethos of science. Indeed, according to sociologist Robert K. Merton (1973: 275), one of the main norms of the scientific ethos is disinterestedness: “Science includes disinterestedness as a basic institutional element. Disinterestedness is not to be equated with altruism nor interested action with egoism. Such equivalencies confuse institutional and motivational levels of analysis… It is rather a distinctive pattern of control of a wide range of motives which characterizes the behavior of scientists.” This means that genuine scientists see scientific truth as a goal in itself, not as an instrument. Of course, this norm does not imply that scientists cannot have other interests besides the search for truth. They may also work to achieve personal goals like fame or money, or social goals such as the establishment of a political ideology, a religious creed, or technological applications. But truth should always be preferred in the case of a conflict of interests.
In short, the golden rule of the genuine scientist is “science for its own sake” (Campa 2001: 246-257). This was certainly true before the industrial revolution. Now scientific knowledge is always systematically related to its practical use and, therefore, someone may think that the very idea of a scientific ethos is outdated. However, the journalist John Horgan showed in his popular book The End of Science (1997) that most top scientists still see their work as a purely spiritual activity and not as an instrument to solve practical problems. Science is the way toward “The Answer” ─ the ultimate answer to the main existential question of humans: Why is there something rather than nothing? Wondering about the nature of matter, life, consciousness, society, etc. ─ that is, trying to answer the fundamental questions of physics, biology, psychology, sociology, etc. ─ can be seen as a way to approach The Answer. Transhumanism puts so much emphasis on technology that it is worth asking whether disinterested research still plays a role in this philosophical view as well. Do transhumanists just want to live forever, or do they also want to know the meaning of their life? And do they believe that science can be the way to answer the existential question? Among the scientists interviewed by Horgan, a few can certainly be qualified as “transhumanists”, even if they do not necessarily describe themselves as such. Indeed, they believe that the transition from humans to post-humans is possible and desirable. By analyzing their views, we may at least partially satisfy our curiosity. Of course, further empirical research is needed in order to establish the attitude of transhumanists toward pure science, but this can be seen as a good point of departure.

The central idea of Horgan’s book is that humans have already reached the final frontiers of knowledge. We already know what it was possible to discover. Now, there is still a place
for the technical application of this knowledge, but not for new discoveries, not for a new paradigm shift. We may or may not agree with Horgan, but this is not the point here. The point is that our present scientific knowledge is not very satisfactory. Science seems absurd, that is to say, very far from our everyday life and sometimes not consistent with our rational way of thinking. In the nineteenth century, any well-educated person could grasp contemporary physics. Today, physicists themselves operate with formulas and predict events without having a real “understanding” of the physical world. In short, science does not really answer our basic existential questions.

Consequently, new questions arise. Is this frustrating situation due to the fact that the world is really absurd and alien to sentient beings? Or is it due to the fact that humans are not intelligent enough to understand the world? Noam Chomsky reminds us that humans have cognitive limitations. Chomsky divides scientific questions into problems and mysteries. Problems are answerable, mysteries are not. Before the scientific revolution, almost all questions appeared to be mysteries. Then, Copernicus, Galileo, Newton, Descartes, and other thinkers solved some of them. However, other investigations have proved fruitless. For instance, scientists have made no remarkable progress investigating consciousness and free will. Horgan (1997: 152) summarizes Chomsky’s view as follows: “All animals… have cognitive abilities shaped by their evolutionary histories. A rat, for example, may learn to navigate a maze that requires it to turn left at every second fork but not one that requires it to turn left at every fork corresponding to a prime number. If one believes that humans are animals ─ and not ‘angels,’ Chomsky added sarcastically ─ then we, too, are subject to these biological constraints. Our language capacity allows us to formulate questions and resolve them in ways that rats cannot, but
ultimately we, too, face mysteries as absolute as that faced by the rat in a prime-number maze. We are limited in our ability to ask questions as well. Chomsky thus rejected the possibility that physicists or other scientists could attain a theory of everything.”

Accepting this picture, we may still ask whether posthumans ─ if not humans ─ could fully understand the world. This is, at least, what Stephen Hawking believes. Hawking predicted that physics might soon achieve a complete, unified theory of nature. He offered up this prophecy on April 29, 1980, when he presented a paper entitled “Is the End of Theoretical Physics in Sight?”. The speech was delivered on the occasion of his appointment as Lucasian Professor of Mathematics at the University of Cambridge, an important chair held by Newton 300 years earlier. According to Horgan (1997: 94), only a few observers noticed that Hawking, at the end of his speech, indicated posthumans and not humans as the protagonists of this conquest: “Hawking suggested that computers, given their accelerated evolution, might soon surpass their human creators in intelligence and achieve the final theory on their own.”

The philosopher Daniel Dennett seems to believe so as well. To be sure, Dennett (1991) thinks that we already have at least a partial answer to the mystery of consciousness. However, our explanations are never completely satisfactory to all humans: “One of the very trends that makes science proceed so rapidly these days is a trend that leads science away from human understanding” (quoted in Horgan 1997: 179). Our cognitive limitations could be understood and overcome only by posthuman beings having more powerful senses and minds. In other words, Dennett surmises that a human mind can hardly understand a human mind, that is, itself. A theory of mind, although it might be highly effective and have great predictive power, is unlikely to be intelligible to mere humans. According to Dennett, the only hope humans have of comprehending their own complexity may be to cease being human:
“Anybody who has the motivation or talent will be able in effect to merge with these big software systems”. As Horgan (1997: 180) explains, “Dennett was referring to the possibility… that one day we humans will be able to abandon our mortal, fleshy selves and become machines.” Then, a new problem would of course arise: will the superintelligent machines be able to understand themselves? Trying to understand themselves, the machines would have to become still more complex. And this spiral would last for all eternity.

Marvin Minsky shares a similar view, although with an important difference. According to him, humans can understand themselves. However, this does not preclude the possibility (or the necessity) of going further. Even if we manage to solve the problem of consciousness, we do not ipso facto reach the end of science. We may be approaching our limits as scientists, but we will create robots and computers smarter than us that can continue doing science. Intelligent machines will then try to understand themselves, opening a new frontier for pure science. To Horgan’s objection that such knowledge would be machine science and not human science, Minsky answers as follows: “You’re a racist, in other words. I think the important thing for us is to grow, not to remain in our present stupid state. [We are just] dressed up chimpanzees” (quoted in Horgan 1997: 187).

Hans Moravec, too, sees the future world as dominated by artificial machines. A well-known robotics engineer, Moravec has presented his futuristic view in Mind Children (1988) and Robot: Mere Machine to Transcendent Mind (1998). According to him, there will be a moment when all humans will be replaced by robots in the workplace. Instead of working, humans will be paid to consume. Subsequently, in order to avoid death, humans will merge with artificial machines by uploading their consciousness.
Robots will keep competing and evolving and, finally, they will colonize the universe. But what will these machines do with their incredibly powerful artificial brains? According to Moravec, they will certainly be interested in pursuing science for its own sake: “That’s the core of my fantasy: that our nonbiological descendants, without most of our limitations, who could redesign themselves, could pursue basic knowledge of things.” He added that science will be the only motive worthy of intelligent machines, because “things like art, which people sometimes mention, don’t seem very profound, in that they are primarily ways of autostimulation” (quoted in Horgan 1997: 250). Moravec has also expressed doubts about his own fantasy. First of all, it is hard to predict what a machine trillions of times more intelligent than us could desire or do. Secondly, he has also shown some skepticism about the possibility that sentient beings could stop competing and start cooperating in order to discover the truth about the universe. More than a goal in itself, science, for Moravec, is the by-product of an eternal competition among intelligent, evolving machines. Thus, no competition, no scientific progress.

Freeman Dyson’s view presents many similarities. He just does not think that the end of “flesh” is near. Intelligent biological organisms will still play an important role in the future, even if they could look very different from present ones. Dyson is a polymath. He has written extensively on physics, space, nuclear engineering, weapons control, climate change, philosophy, and futurology. He started speculating about the future of sentient beings in 1979, with an article entitled “Time without End: Physics and Biology in an Open Universe.” The paper was intended to respond to Steven Weinberg, who noted that the more the universe seems comprehensible, the more it seems pointless. The possibility that humans are just a product of
chance and that the universe is uninterested in our sort is disturbing and frustrating. But, according to Dyson, no universe with intelligence is pointless. “In an open, eternally expanding universe, intelligence could persist forever ─ perhaps in the form of a cloud of charged particles, as Bernal had suggested ─ through shrewd conservation of energy” (Horgan 1997: 253). Dyson then elaborated his futuristic view in other popular books, such as Disturbing the Universe (1979), Infinite in All Directions (1988), and Imagined Worlds (1997). The basic idea of this scientist is that the universe “works” according to “the principle of maximum diversity”, which operates at both the physical and the mental level. Life is possible but not too easy. In principle, a universe homogeneous in all directions would also be possible ─ that is to say, a boring one. But this is not what happens. The universe is diverse and interesting in all directions, sometimes strange or paradoxical. According to Dyson, this happens because the laws of nature and the initial conditions are such that the world has to be as interesting as possible. Intelligent life does not exist by chance, but it is obviously always in danger in any specific region of the universe. That is why Dyson decided to write the agenda of genetic engineers: their goal should be the creation of non-human or post-human intelligence. Since we are part of cosmic consciousness, our duty is to contribute to its diffusion in the universe. Therefore, Dyson dreams about the creation of mobile intelligent beings, able to absorb the energy they need directly from the sun. This would solve the problem of space travel and favor the colonization of the universe. In Imagined Worlds, Dyson warns that his futuristic speculations notably surpass the possibilities of present science: “Science is my territory, but science fiction is the landscape of my dreams”, he admits. However, it is important to note that he recognizes the importance of pure scientific investigation. If humans have limitations, if they are tied to Mother Earth, posthumans will explore and colonize the universe.
By doing this, they will give sense to it. This is because the universe makes sense only when it is aware of its own existence.

In short, Minsky, Moravec, and Dyson think in evolutionary terms and believe in our posthuman future. Horgan (1997: 255) caustically judges them as follows: “Dyson, Minsky, Moravec ─ they are all theological Darwinians, capitalists, Republicans at heart. Like Francis Fukuyama, they see competition, strife, division as essential to existence ─ even for posthuman intelligence.” However, they also recognize the value of pure science, and this is what matters in our research. Besides, not all scientists believe that competition is the eternal rule of the universe. For instance, Edward Fredkin, a pioneer of digital physics and artificial intelligence, presents a more liberal view and stresses the connection between intelligence and cooperation. Fredkin predicts that competition will prove to be only a temporary phase, one that intelligent machines will quickly transcend. It is interesting to note that this scientist has never shown any anti-capitalist idiosyncrasy. Before becoming a professor of physics at Boston University, he had been a fighter pilot in the US Air Force and a successful entrepreneur. In sum, Fredkin believes that the future will belong to machines “many millions of times smarter than us”, and that precisely because they are more intelligent, they will recognize that struggle and competition are atavistic and counterproductive. Cooperation yields a win-win situation because what one learns, all can learn. And evolution depends on learning. Once the machines are supremely intelligent, they will overcome the Darwinian race and start cooperating in order to solve the secret of the universe, or The Answer.
Fredkin says: “Of course computers will develop their own science. It seems obvious to me” (quoted in Horgan 1997: 255).

Another scientist who assumes that machines will cooperate and eventually merge is Frank Tipler. Central to his vision is the concept of the “Omega Point”, inspired by the Jesuit thinker Teilhard de Chardin. Tipler mainly developed his theory in two books: The Anthropic Cosmological Principle, written in collaboration with the British physicist John Barrow (1986), and The Physics of Immortality (1994). Tipler asks what will happen when all the matter of the universe is converted into an enormous conscious device processing information. He believes that this entity will resurrect all the people who existed in the past, at least in the form of information. In other words, the end of science is a divine machine very similar to what monotheistic religions call God. But, if the future of humanity and all the rest is to become the omnipotent, omniscient and eternal being of monotheistic religions, it makes no sense to ask whether this being will look for knowledge. It will not look for The Answer, because it will be The Answer.

We are now approaching the conclusions. It is well known that every scientist has the duty to update, that is, to read new articles and books, to improve his or her knowledge, to acquire the newly available information concerning the topic he or she is going to study. This is a propaedeutic activity to the production of the best possible science. If humanism underlines the necessity of updating, transhumanism goes much further. It presents a new duty for future scientists. In order to produce new and better knowledge, they will not only have to update, they will have to upgrade. Updating implies the modification of the “software” of scientists ─ whether humans or machines. Upgrading will imply the modification of the “hardware” of scientists. Thus, upgrading can be seen as a new ethical rule for the individual scientist and scientific communities ─ respectively a complement and an extension of the deontological norms of disinterestedness and updating.
One basic problem of science is that when researchers are young and their brain works at its best, they lack erudition and experience; when they get older and acquire erudition and experience, their brain starts deteriorating. If all scientists could have a rich store of information and knowledge in their brain, together with a good “processor” and a powerful memory, for longer than just a few decades, would not science advance much more quickly? But how can scientists upgrade? Many transhumanist books deal with the problem of intelligence growth and we cannot mention all of them here. We will just refer to one: Citizen Cyborg by sociologist James Hughes. In chapter 4 (“Getting smarter”) Hughes writes that more than 40 “smart drugs” have been clinically tested to improve memory consolidation, neural plasticity and synaptic transmission speed. But memory pills are just the beginning. Research indicates that neurotrophic chemicals could govern the growth of neural stem cells in the brain. In other words, the prospect is to develop drugs that encourage brain self-repair. “Even better than a neurotrophic pill would be a neurotrophic gene therapy that helped the brain to repair itself, or that enhanced intelligence in other ways” (Hughes 2004: 38). Cognition-enhancing drugs and gene therapies can probably be surpassed only by cyborgization. Directly connecting the brain to a computer is, according to Hughes, the most powerful human intelligence enhancer: “Researchers have been experimenting with two-way neural-computer communication by growing neurons on and around computer chips, or by placing electrodes in brains that are connected to computers, leading to computing prostheses for the brain” (Hughes 2004: 39).
Dyson criticizes Thomas Kuhn because, in his masterpiece The Structure of Scientific Revolutions (1962), he narrates the story of six scientific revolutions that could be explained almost as religious conversions, while disregarding at least twenty revolutionary paradigm shifts due to the invention of new devices ─ such as the telescope, the microscope, particle accelerators, and so on. Enhancing the senses by using instruments of investigation plays a fundamental role in science. Often, scientists tend to see themselves as angels or pure minds and forget that their body is the first instrument of investigation, the interface between their mind and the world. An interface that can and, perhaps, should be enhanced. The above-mentioned devices, and many others not mentioned here, can be seen as artificial prostheses improving the cognitive capabilities of scientists of the human type. In the near future these prostheses could be implanted into the human body, concretizing the transition from scientists to postscientists (or superscientists).

We are not surmising that this small sample of scientists is representative of the whole transhumanist movement, but this research shows at least that the quest for pure knowledge is, and can be, part of the transhumanist agenda. It is not by chance that one of the main “enemies” of transhumanism, Francis Fukuyama, is also reluctant to accept the idea of science for its own sake. His attacks on transhumanism are well known. In his book Our Posthuman Future: Consequences of the Biotechnology Revolution (2002) he uses no periphrases when presenting the dangers of biotech. And in an article that appeared in Foreign Policy (September 2004), he accused transhumanism of being the world’s most dangerous idea. It is worth noting that Fukuyama confesses that he would never have become a social scientist if he were not interested in the fate of democracy and capitalism. Science for its own sake has never really interested him. After the publication
of The End of History ─ the book that made him famous all over the world ─ he was interviewed by Horgan about the future of humankind. Obviously, the Scientific American journalist did not forget to ask whether, with the end of politico-economic conflicts, the search for truth could become the new goal of humanity. Fukuyama’s answer was emblematic: “Hunh” (Horgan 1997: 244). The disdainful rejection of a program which represents not only the dream of Star Trek fans, but also that of an array of respected philosophers and scientists, is indicative of the worldview of this thinker.

In conclusion, the biophysical enhancement of humans aimed at improving scientific research is a strategic moral imperative, because it synthesizes two only apparently incompatible metascientific views: the Baconian formula of scientia ancilla technologiae and the rationalist plea for disinterested science. Francis Bacon and his followers discovered that “knowledge is power”. The rationalists à la Descartes have instead insisted that “knowledge is duty”. Now, thanks to transhumanism, we have the chance to synthesize these two views at a higher level. To succeed, we just have to recognize a new basic truth: “power is knowledge”.




Chapter 7

Making Science by Serendipity: A review of Robert K. Merton and Elinor Barber’s The Travels and Adventures of Serendipity

Robert K. Merton and Elinor Barber’s The Travels and Adventures of Serendipity (English-language edition 2004) is the history of a word and its related concept. The choice of writing a book about a word may surprise those who are not acquainted with Merton’s work, but certainly not those sociologists who have chosen him as a master. Searching for, defining, and formulating concepts has always been Merton’s main intellectual activity. It is somewhat of a ritual among Merton’s commentators to thank him for the countless concepts with which he has furnished sociological research. Barbano (1968: 65) notices that one of Merton’s constant preoccupations is with language and the definition of concepts, and recognizes that the function of the latter is for him anything but ornamental. Sztompka (1986: 98) writes that “the next phase chosen by Merton for methodological discussion is that of concepts-formation. Achieving clarity, precision and unambiguous meaning of sociological concepts seems to be an almost obsessive preoccupation.” What should be clear is that he does not formulate concepts by chance. Many sociologists formulate important concepts by building a new theory of society, but this is not what Merton does.
He completely renounced the building of his own total system of theory and did not even spend much time trying to elaborate theories of the middle range. He has dedicated almost all his time to concept-formation. This is why he has given us not just a few concepts, but many. Merton proposes an articulated technical language now widely used by sociologists, and he is perfectly aware of the strategic importance of this work. The following quotation can be seen as evidence of his awareness of the heuristic power of concepts: “As we have seen, we experience socially expected durations in every department of social life and in a most varied form. […] That ubiquity of phenomenal SEDs may lead them to blend, conceptually unnoticed, into the taken-for-granted social background rather than to be differentiated into a possibly illuminating concept directing us to their underlying similarities. As Wittgenstein once observed with italicized feeling: ‘How hard I find it to see what is right in front of my eyes!’” (Merton 1996: 167). Of course, we did not have to wait for Wittgenstein or Merton to understand the importance of words for scientific and philosophical discourse. It had already been realized in medieval times that talia sunt objecta qualia determinatur a praedicatis suis. But then again, as Whitehead used to say and Merton (1973: 8) used to repeat: “Everything of importance has been said before by someone who did not discover it.”

If Merton and Barber invested their energies in writing the history of a word, I feel it now necessary to write a few words about the history of their book. The travels and adventures of the book are no less fascinating than its content. It was in the 1930s that Merton first came upon the concept-and-term of serendipity in the Oxford English Dictionary. Here, he discovered that the word had been coined by Walpole, and was based on the title of the fairy tale, The Three Princes of Serendip, the heroes of which “were always making discoveries by accidents and sagacity, of things they were not in quest of.”
The discovery of the word was serendipitous as well, since Merton was not looking for it. In this sense, it was self-exemplifying. The word could not fail to intrigue him, considering that at the time he was busy with the foundation of the sociology of science – more precisely and quite significantly with the elaboration of a sociological theory of scientific discovery (Merton 1973: 281 et seq.) – and with the formulation of the idea of the unanticipated consequences of social action (see Merton 1996: 173 et seq.). As Rob Norton (2002) recognizes: “The first and most complete analysis of the concept of unintended consequences was done in 1936 by the American sociologist Robert K. Merton.” In this way, the combined etymological and sociological quest began that resulted in The Travels and Adventures of Serendipity.

Initially, the book was conceived as a “preparazione OTSOGIA.” In other words, it was to serve as a propaedeutic to Merton’s seminal work – On the Shoulders of Giants, acronymised to OTSOG and published in 1965. Nonetheless, after completion in 1958, the book on serendipity was intentionally left unpublished. Probably the authors had the impression that, once finished, the book had lost some of its novelty. And the situation got “worse” year after year. We find some information about it in the long and insightful afterword written by Merton just prior to his death (in my modest opinion, one of the most interesting parts of the book). In it, Merton provides interesting statistics to illustrate how quickly the word had spread since 1958. By that time, serendipity had been used in print only 135 times. But between 1958 and 2000, serendipity had appeared in the titles of 57 books. Furthermore, the word was used in newspapers 13,000 times during the 1990s and in 636,000 documents on the World Wide Web in 2001.
In any case, the book was occasionally and most tantalizingly cited in Merton’s other publications. In 1990, OTSOG was translated into Italian and published with an introduction by Umberto Eco. The Italian publisher noticed a footnote mentioning the existence of the still unpublished The Travels and Adventures of Serendipity and proposed its publication in Italian. The authors agreed to an Italian translation, but Merton set the condition that it had to be enriched by a long afterword. The translation was quickly made, but the publisher had to wait a decade to receive the long and precious appendix. The reason for such a delay was the many ailments that Merton had to battle before his death. The Italian version was published in 2002, after Barber’s death. Two years later, and a year after Merton’s death, we could welcome the appearance of the original English version.

Now let us focus on an analysis of the content of the book and its theoretical consequences, that is, on the history of this term-and-concept and its significance for the sociology of science. The first few chapters elucidate the origin of the word, beginning with the 1557 publication of The Three Princes of Serendip in Venice. This is a story about the deductive powers of the sons of the philosopher-king of Serendip. However, the princes did not have their adventures in Serendip but in neighboring lands, and the king is named Jaffer. The modern association of Sri Lanka with serendipity is therefore erroneous (Boyle 2000). In a letter to Horace Mann dated January 28, 1754, Walpole described an amazing discovery as being “of that kind which I call Serendipity.” He went on to provide his succinct definition but then blurred it by providing an inadequate example from The Three Princes: “As their highnesses travelled, they were always
making discoveries, by accident and sagacity, of things which they were not in quest of: for instance, one of them discovered that a mule blind of the right eye had travelled the same road lately, because the grass was eaten only on the left side, where it was worse than on the right – now do you understand serendipity?” Walpole tried to illustrate the concept of serendipity with other examples, but basically failed to do so in an unequivocal way. Many decades later, in 1833, Walpole’s correspondence with Horace Mann was published. Through this and other editions of the letters, the word serendipity entered the literary circle. Merton and Barber do not fail to study and emphasize the social and historical context that permitted the acceptance and diffusion of the neologism. The nineteenth century is the century of the industrial revolution. It is a period of extraordinary expansion for science and technology, marked by the foundation of numerous new scientific disciplines, sociology included. As the authors (2004: 46) remark: “It is in the nature of science that new concepts, facts, and instruments constantly emerge, and there is a continual concomitant need for new terms to designate them. With the accelerated pace of scientific development in the nineteenth century, the need for new terms was frequently felt and as frequently met by the construction of neologisms. Scientists had no antipathy to new words as such: hundreds and then thousands were being coined…”

Serendipity was used in print for the first time by another writer forty-two years after the publication of Walpole’s letters. Edward Solly had the honor of launching serendipity into literary circles. He signed an article in Notes and Queries, a periodical founded in 1849 by the learned bibliophile William John Thoms. Solly defined serendipity as “a particular kind of natural cleverness”.
In other words, he stressed Walpole’s implication that serendipity was a kind of innate gift or trait. However, Walpole was also talking of serendipity as a kind of discovery. The ambiguity was never overcome, and serendipity still indicates both a personal attribute and an event or phenomenon. It is worth noting that after the first appearance in print, “[f]or more than fifty years, serendipity was to be used almost exclusively by people who were most particularly concerned with the writing, reading, and collecting of books” (Merton and Barber 2004: 48). From the turn of the twentieth century, serendipity gained acceptance for its aptness of meaning among a wider and more varied literary circle, and the word appeared in all the “big” and medium-sized English and American dictionaries between 1909 and 1934. Its 1951 inclusion in the Concise Oxford English Dictionary was of equal importance, as it reflected the higher probability of the casual reader encountering the word as it filtered down from academia. Moreover, it was from the Oxford English Dictionary that the word passed into Merton’s personal vocabulary.

When outlining the lexicographical history of the word, the authors reveal disparities in definition (see Merton and Barber 2004: 104-122 and 246-247). In 1909 the word is defined by The Century Dictionary as “the happy faculty or luck of finding by ‘accidental sagacity’” or the “discovery of things unsought.” But the double meaning disappears in the 1913 definition provided by Funk and Wagnalls’ New Standard Dictionary of the English Language, where serendipity is solely “the ability to find valuable things unexpectedly.” In addition, in Swan’s Anglo-American Dictionary (1952), serendipity is just an event and no longer a personal attribute: “the sheer luck or accident of making a discovery by mere good fortune or when searching for something else.” To avoid both the ambiguity of the meaning and the disappearance of one of the meanings, Piotr Zielonka and I (2003) decided to translate serendipity into Polish by using two different neologisms: “serendypizm” and “serendypicja” – to refer to the event and the personal attribute respectively.
The disparities in definition also concern other aspects. While the Century Dictionary stressed the role of sagacity, the Oxford English Dictionary of 1912-13 does not mention this aspect and defines serendipity as “the faculty of making happy and unexpected discoveries by accident”. Other definitions do not meet Walpole’s prescription of a gift for discovery by accident and sagacity while in pursuit of something else. These incomplete definitions have resulted in the mistaken belief that “accidental discovery” is synonymous with serendipity.

Even if Merton waited four decades to publish his book on serendipity, he made wide use of the concept in his theorizing. In 1946, Merton revealed his concept of the “serendipity pattern” in empirical research: that of observing an unanticipated, anomalous, and strategic datum, which becomes the occasion for developing a new theory. In this way Merton contributes to the history he maps out.

It is now worth turning our attention to the theoretical aspects of serendipity and examining the sociological and philosophical implications of this idea. In the 1930s, facing the problems of the newly born discipline called the sociology of knowledge, Merton works to eliminate some lacunae left by his predecessors. Like Ludwik Fleck (1979), Merton is convinced that there is no reason not to consider the so-called hard sciences as subject matter for the sociology of knowledge. The sociology of knowledge cannot limit its subject matter to the historical, political and social sciences. This is the frontier reached by Mannheim, but the American sociologist holds that it is necessary to go further. “Had Mannheim systematically and explicitly clarified his position in this respect, he would have been less
disposed to assume that the physical sciences are wholly immune from extra-theoretical influences and, correlatively, less inclined to urge that the social sciences are peculiarly subject to such influences” (Merton 1968: 552). Thanks to this new awareness, the sociology of science came into existence. As Mario Bunge (1998: 232) remarks, “Merton, a sociologist and historian of ideas by training, is the real founding father of the sociology of knowledge as a science and a profession; his predecessors had been isolated scholars or amateurs.”

Certainly, the sociology of science has subsequently evolved in new directions, not all predicted by its father. The most common accusation leveled against Merton is that he never really studied the impact of society upon science, understood as a cultural and cognitive product. He concentrated his attention on institutions and norms, and neglected epistemological problems. Consequently, it is now common to distinguish between two programs in the sociology of science: the “weak,” pursued by Robert Merton and the Mertonians, which concerns the study of institutions and, thus, produces epistemologically irrelevant results; and the “strong,” pursued by David Bloor (1976) and the sociologists of the new generation, which concerns the study of the contents of science and, thus, produces epistemologically relevant results. It is true that the American sociologist studies mainly the institutions of science, not laboratory life and the products of science (e.g., theories). But he never said that sociologists cannot or should not study other aspects of science. His attention to the concept of serendipity is the best evidence of a parallel attention to the very content of scientific discoveries and the way they are made. “Since it is the special task of scientists to make discoveries, they themselves have often been concerned to understand the conditions under which discoveries are made and use that
knowledge to further the making of discoveries. Some scientists seem to have been aware of the fact that the elegance and parsimony prescribed for the presentation of the results of scientific work tend to falsify retrospectively the actual process by which the results were obtained” (Merton and Barber 2004: 159). The authors present a considerable number of quotations showing that many scientists, historians and philosophers of science have been aware of the fact that scientific inquiry cannot be metaphorically represented as hunting a hare (searching for a specific applicable scientific theory) with a rifle (the rules of scientific method). Indeed, if you are clever enough to take advantage of the opportunity, you may capture a fox thanks to accidental circumstances while searching for hares.

The authors also do not fail to present the resistance to the idea of serendipity. For orthodox Marxists, scientific and technological discoveries are a product of necessity. The slave mode of production does not need machines, while the capitalist mode of production needs them. Thus, the industrial revolution of the eighteenth century is related to the development of physics and the invention of machinery. “To orthodox Marxists, the suggestion that discoveries could be occasioned by accidents rather than by the inexorable development of the material base of society was anathema. Since the Marxists believe that all social and physical phenomena are rigidly determined, inventions are, in principle, predictable, and the job of the historian or philosopher of science is to work out ways of predicting them” (Merton and Barber 2004: 166). However, by using the adjective “orthodox”, the authors implicitly recognize that not all Marxists share this rigidly deterministic view of social and natural reality.

Even if Merton did not publish his monographic work on serendipity until 2004, he constantly referred to this category in his theoretical work. In another writing, for instance, he takes
the step from description to prescription: “historians and sociologists must both examine the various sorts of ‘failure’: intelligent errors and unintelligent ones, noetically induced and organizationally induced foci of interest and blind spots of inquiry, promising leads abandoned and garden-paths long explored, scientific contributions ignored or neglected by contemporaries and, to draw the sampling to a close, they must examine not only cases of serendipity gained but of serendipity lost (as with the many instances of the antibiotic effects of penicillin having been witnessed but not discovered)” (Merton 1975: 9).

The Baconian and positivist dream of elaborating a set of methodological rules capable of opening the door to scientific discovery even to people of modest intelligence and sagacity cannot be better challenged than by the idea of serendipity. Columbus’s discovery of America, Fleming’s discovery of penicillin, Nobel’s discovery of dynamite, and other similar cases prove that serendipity has always been present in research. Merton (1973: 164) emphasizes that: “Intuition, scriptures, chance experiences, dreams, or whatever may be the psychological source of an idea. (Remember only Kekulé’s dream and intuited imagery of the benzene ring which converted the idea of the mere number of atoms in a molecule into the structural idea of their being arranged in a pattern resulting from the valences of different kinds of atoms.)” In this, Merton seems to be very close to Bachelard, Popper, Bunge, and the neorationalists in general. Bachelard (1938), for example, emphasizes the importance of abstraction as the only way to approach scientific truth. The Popperian idea that theories come from the sky is well known. Finally, one of the most systematic studies of intuition in science has been proposed by Mario Bunge (1962). However, Merton does not consider the problem of methodology solvable with the elimination of the context of discovery
and the improvement of the context of justification. The problem of discovery is what Oldroyd (1986) would define as the ascending part of the arch of knowledge – from the object-level to theory – and it cannot be settled by merely replacing induction with intuition/invention, as Popper suggests. It is still both possible and worthwhile to study this problem from a psychological and sociological point of view. The serendipity pattern is Merton’s proposal to attempt to complete the hypothetico-deductive model, which is a logical model and so fails to describe much of what actually occurs in fruitful investigation: “The serendipity pattern refers to the fairly common experience of observing an unanticipated, anomalous and strategic datum which becomes the occasion for developing a new theory or for extending an existing theory… The datum is, first of all, unanticipated. A research directed toward the test of one hypothesis yields a fortuitous by-product, an unexpected observation which bears upon theories not in question when the research was begun. Secondly, the observation is anomalous, surprising, either because it seems inconsistent with prevailing theory or with other established facts. In either case, the seeming inconsistency provokes curiosity… And thirdly, in noting that the unexpected fact must be strategic, i.e., that it must permit of implications which bear upon generalized theory, we are, of course, referring rather to what the observer brings to the datum than to the datum itself. For it obviously requires a theoretically sensitized observer to detect the universal in the particular” (Merton 1968: 157-162).

The proposal of the serendipity pattern can be interpreted as a convergence toward Peirce’s metascience. Merton (1968: 158) writes: “Charles Sanders Peirce had long before noticed the strategic role of the ‘surprising fact’ in his account of what he called ‘abduction,’ that is, the initiation and entertaining of a hypothesis as a step in inference.” This pattern can be assimilated to Peirce’s abduction, Thomas Aquinas’s modus ponendo ponens, or Galileo’s reasoning ex suppositione (see Wallace 1981). One observes a fact F. This fact is surprising and cannot be explained with existing theories. Hence, stimulated by the problem, one formulates a new theoretical hypothesis T. If T were ‘true,’ then F would be obvious; therefore, one has good reason to suspect that T is ‘true.’ Of course, this logical model does not actually guarantee the correctness of T, despite the fact that Thomas seemed to believe so. According to Merton, T remains a conjecture, which must be submitted to other social and rational procedures in order to be accepted as certified knowledge by the scientific community.
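Set out schematically, as a minimal sketch of the Peircean schema just described (using only the F and T of the passage above, and assuming nothing beyond them), the inference runs:

\[
\begin{array}{l}
\text{the surprising fact } F \text{ is observed;} \\
\text{but if } T \text{ were true, } F \text{ would be a matter of course } (T \rightarrow F); \\
\hline
\text{hence, there is reason to suspect that } T \text{ is true.}
\end{array}
\]

The horizontal line marks a non-deductive step: the conclusion is only plausible, never guaranteed, which is precisely why Merton insists that T remains a conjecture to be tested further.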
This descriptive model has many important implications for the politics of science, considering that the administration and organization of scientific research have to deal with the balance between investments and performance. To recognize that a good number of scientific discoveries are made by accident and sagacity may be satisfactory for the historian of science, but it raises further problems for research administrators. If this is true, it is necessary to create the environment, the social conditions, for serendipity. These aspects are explored in Chapter 10 of The Travels and Adventures of Serendipity. Merton and Barber (2004: 199) underline that “[r]esearch administrators may be authoritarian or they may be permissive, they may see the interests of the individual scientists as being identical with those of the organization as a whole or they may not, and such preferences for relative autonomy and independence or for relatively rigid control may be refracted through the problem of the legitimacy or desirability of serendipity.” The solution appears to be a Golden Mean between total anarchy and authoritarianism. Too much planning in science is harmful.
The experience of Langmuir as a scientist in the General Electric laboratories under Willis Whitney’s direction proves the importance of the autonomy and fun of the individual researcher, together with the concern of the administrator. Langmuir decided autonomously to study high vacuum and tungsten filaments. Whitney supervised the evolution of the inquiry every day but limited himself to asking: “Are you having fun today?” It was a clever way to make his presence felt, without applying excessive pressure. The moral of the story is that you cannot plan discoveries, but you can plan work that will probably lead to discoveries: “To put Langmuir’s argument another way, dictatorship in the laboratory and the political sphere is as impractical as it is morally repugnant; it runs counter to what men of the eighteenth century would have called the ‘natural laws’ of society in general and the world of science in particular. The policy of leaving nothing to chance is inherently doomed to failure: It flies in the face of human nature, and especially the nature of rational, independent scientists” (Merton and Barber 2004: 201).

The peculiar thing about the serendipity pattern is that it weakens not only the Baconian, positivistic, inductivistic approach to knowledge, but also the opposite sociologistic, constructivistic and relativistic temptation to see the content of scientific theories as necessarily and fully determined by the social context. If scientists are determined by social factors (language, conceptual frames, interests, etc.) to find certain “answers” and not others, why are they often surprised by their own observations? A rational and parsimonious explanation of this phenomenon is that the facts that we observe are not necessarily contained in the theories we already know. Our faculty of observation is partly independent of our conceptual apparatus. In this independence lies the secret of serendipity.

The Travels and Adventures of Serendipity has rapidly gained the fame it deserves.19 Cristina González, in Science, defines it as “A fascinating text that captivates the reader from the start.” Steven Shapin, in American Scientist, remarks that “It is a pity that we had to wait so long for it, since The Travels and Adventures of Serendipity is the great man’s greatest achievement.” According to Philip Howard, The Times (London), “This is the best written and most entertaining book of sociology ever written”. For the Nobel Laureate Roald Hoffmann, “Curiosity, wonder, openness – these cohabit, comfortably, in that marvellous coinage of Walpole, serendipity. And they mark as well Merton and Barber’s ebullient journey in search of all the meanings of the word. A romp of minds at play!” The review by the distinguished historian of science Gerald Holton is enthusiastic: “What a splendid book! The Adventures and Travels of Serendipity is not only a guide to the extraordinary history and present-day usefulness of the blessings that can come from those unplanned, accidental events which, sagaciously employed, can shape one’s life. But equally, the volume is an exemplification of superb scholarship presented in graceful style. Indeed, while reading the book one realizes that one perceives its unique subject matter from the vantage of standing on the shoulders of giants.” Harold M. Green predicts that “The Travels and Adventures of Serendipity is destined to become a classic in the history and philosophy of science.”

Robert K. Merton was my master and, before dying, honored me by quoting my work in his universally praised masterpiece. This might be thought to have prejudiced me in favor of this book; thus I will not add my own praise to the long list presented above. I would only like to express my hope that Green’s prophecy is fulfilled.

19 See http://www.pupress.princeton.edu/quotes/q7576.html




Bibliography

Aristotle. 350 B.C.E. Politics. Translated by B. Jowett.
Ayres R.U. 1998. Turning Point. The End of the Growth Paradigm. London: Earthscan Publications.
Bachelard G. 1938. La formation de l’esprit scientifique. Paris: Librairie Philosophique J. Vrin.
Barbano F. 1968. “Social Structures and Social Functions: the Emancipation of Structural Analysis in Sociology.” Inquiry, 11: 40-84.
Bignami L. 2007. “Robot, la grande invasione.” La Repubblica, April 10th.
Black J., Hashimzade N., Myles G. 2012. A Dictionary of Economics. Oxford: Oxford University Press.
Blackford R. 2012. “Robots and reality: A reply to Robert Sparrow.” Ethics and Information Technology, 14 (1): 41-51.
Blaug M. 1958. Ricardian Economics: A Historical Study. New Haven: Yale University Press.
Bloor D. 1976. Knowledge and Social Imagery. London: Routledge & Kegan Paul.
Boyer R. 2012. “The four fallacies of contemporary austerity policies: the lost Keynesian legacy.” Cambridge Journal of Economics, 36 (1): 283-312.
Boyle R. 2000. “The Three Princes of Serendip.” Sunday Times, July 30th and August 6th.
Brynjolfsson E., McAfee A. 2011. Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Lexington (MA): Digital Frontier Press.
Brynjolfsson E., McAfee A. 2016. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York and London: Norton & Company.
Bunge M. 1962. Intuition and science. Englewood Cliffs: Prentice-Hall.
Bunge M. 1998. Social Science under Debate. A Philosophical Perspective. Toronto: University of Toronto Press.
Campa R. 1998. “The Epistemological Relevance of Merton’s Sociology of Science.” Ruch Filozoficzny, Volume LV, No. 2.
Campa R. 2001. Epistemological Dimensions of Robert Merton’s Sociology. Torun: Nicholas Copernicus University Press.
Campa R. 2003. “In Memoriam: Robert K. Merton.” In: T. Saccheri (ed.), Prima che: Promozione della salute e responsabilità istituzionali. Milano: Franco Angeli.
Campa R. 2004. “La ‘Storia filosofica dei secoli futuri’ di Ippolito Nievo come caso esemplare di letteratura dell’immaginario sociale: un esercizio di critica sociologica.” Romanica Cracoviensia, Vol. 4: 29-42.
Campa R. 2006. “Transumanesimo.” Mondoperaio, N. 4/5, March-April: 148-153.
Campa R. 2007. “Considerazioni sulla terza rivoluzione industriale.” Il pensiero economico moderno, Anno XXVII, n. 3, July-September: 51-72.
Campa R. 2010. “Le radici pagane della rivoluzione biopolitica.” In: Divenire. Rassegna di studi interdisciplinari sulla tecnica e il postumano, vol. 4. Bergamo: Sestante Edizioni: 93-159.
Campa R. 2014a. “Workers and Automata: A Sociological Analysis of the Italian Case.” Journal of Evolution and Technology, Vol. 24, Issue 1, February: 70-85.
Campa R. 2014b. “Technological Growth and Unemployment: A Global Scenario Analysis.” Journal of Evolution and Technology, Vol. 24, Issue 1, February: 86-103.
Campa R. 2015. Humans and Automata: A Social Study of Robotics. Frankfurt am Main: Peter Lang.
Campa R. 2016a. “Non solo veicoli autonomi. Passato, presente e futuro della disoccupazione tecnologica.” In: F. Verso, R. Paura (eds.), Segnali dal futuro. Napoli: Italian Institute for the Future: 97-114.
Campa R. 2016b. “The Rise of Social Robots: A Review of Recent Literature.” Journal of Evolution and Technology, Vol. 26, Issue 1, February: 106-113.
Campa R., Zielonka P. 2003. “Serendipity.” Nasz Rynek Kapitalowy, 4 (148).
Carboni C. 2015. “Partita tecnologica sul lavoro.” Il Sole 24 Ore, May 1st.
Daerden F., Lefeber D. 2000. Pneumatic artificial muscles: Actuators for robotics and automation. Brussel: Vrije Universiteit (accessed November 19, 2015).
Darling K. 2012. “Extending legal rights to social robots.” Paper presented at We Robot Conference, University of Miami, April 23, 2012 (accessed November 19, 2015).
Dennett D. 1991. Consciousness Explained. Boston: Little, Brown, and Company.
Desai J.P., Dudek G., Khatib O., Kumar V. (eds.) 2013. Experimental robotics. Heidelberg: Springer.
Di Nicola P. 1998. “Recensione: Luciano Gallino, Se tre milioni vi sembran pochi. Sui modi per combattere la disoccupazione.”
Douglas P.H. 1930. “Technological Unemployment.” American Federationist, August.
Durand J-P. 1995. La sociologie de Marx. Paris: La Découverte. Dyson F. 1979. “Time without End: Physics and Biology in an Open Universe.” In Reviews of Modern Physics vol. 51, 447460. Dyson F. 1979. Disturbing the Universe, New York and London: Harper and Row. Dyson F. 1988. Infinite in All Directions. New York: Cornelia and Michael Bessie Books. Dyson F. 1997. Imagined Worlds. Cambridge (MA): Harvard University Press. EUROSTAT. 2009. Report on science, technology and innovation in Europe. . Feldmann H. 2013. “Technological Unemployment in Industrial Countries.” The Journal of Evolutionary Economics, Volume 23, Issue 5: 1099-1126. Feng A., Graetz G. 2015. “Rise of the Machines: The Effects of Labor-Saving Innovations on Jobs and Wages.” Centre for Economic Performance LSE, Discussion Paper no. 1330: 155. Flandorfer P. 2012. “Population ageing and socially assistive robots for elderly persons: The importance of sociodemographic factors for user acceptance.” International Journal of Population Research, Article ID 829835 (13 pages). (accessed November 19, 2015). Fleck L. 1979 [1935]. Genesis and Development of a Scientific Fact. Chicago: University of Chicago Press. Floreano D., Mattiussi C. 2008. Bio-inspired artificial intelligence: Theories, methods, and technologies. Cambridge (MA): MIT Press. Ford M. 2009. The Lights In the Tunnel: Automation, Accelerating Technology and the Economy of the Future. USA: AcculantTM Publishing.
Ford M. 2015. Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books.
Fourastié J. 1949. Le Grand Espoir du XXe siècle. Progrès technique, progrès économique, progrès social. Paris: PUF.
Fourastié J. 1954. “Quelques remarques sur le chômage technologique et notamment sur la distinction entre deux types de progrès technique, le progrès processif et le progrès récessif.” In: Études européennes de population: main-d’œuvre, emploi, migrations: situation et perspectives. Paris: Éditions de l’Institut National d’Études Démographiques.
Fukuyama F. 2002. Our Posthuman Future: Consequences of the Biotechnology Revolution. New York: Farrar, Straus and Giroux.
Gallino L. 1998. Se tre milioni vi sembran pochi. Sui modi per combattere la disoccupazione. Turin: Einaudi.
Gallino L. 1999. “Disoccupazione tecnologica: quanta e quale perdita di posti di lavoro può essere attribuita alle nuove tecnologie informatiche.” January 13th.
Gallino L. 2003. La scomparsa dell’Italia industriale. Turin: Einaudi.
Gallino L. 2007. Tecnologia e democrazia. Conoscenze tecniche e scientifiche come beni pubblici. Turin: Einaudi.
Ge Shuzhi S., Khatib O., Cabibihan J.-J., Simmons R., Williams M.-A. (eds.) 2012. Social robotics. Fourth international conference. Proceedings. Heidelberg: Springer.
Ge Shuzhi S., Li H., Cabibihan J.-J., Tan Y.K. (eds.) 2010. Social robotics. Second international conference. Proceedings. Heidelberg: Springer.
Geroni A. 2011. “Trenta fallimenti al giorno.” Il Sole 24 Ore, March 9th.
González C. 2004. “Knowledge, Wisdom, and Luck.” Science 304, 5668: 213.
Graeber D. 2013. “On the Phenomenon of Bullshit Jobs.” Strike! Magazine, August 17th.
Green H.M. 2004. “Merton, Robert K., and Elinor Barber. The Travels and Adventures of Serendipity: a Study in Sociological Semantics and the Sociology of Science.” International Social Science Review, Fall-Winter.
Haberler G. 1932. “Some Remarks on Professor Hansen’s View on Technological Unemployment.” The Quarterly Journal of Economics, Vol. 46, No. 3, May: 558-562.
Haddadin S. 2014. Towards safe robots: Approaching Asimov’s 1st law. Heidelberg: Springer.
Hagen E. 1942. “Saving, Investment, and Technological Unemployment.” The American Economic Review, Vol. 32, No. 3, Part 1, September: 553-555.
Hansen A. 1931. “Institutional Frictions and Technological Unemployment.” The Quarterly Journal of Economics, 45, August: 684-698.
Hawking S. 1981. “Is the End of Theoretical Physics in Sight?” Physics Bulletin, January: 15-17.
Hermann G., Pearson M. J., Lenz A., Bremner P., Spiers A., Leonards U. (eds.) 2012. Social robotics. Fifth international conference. Proceedings. Heidelberg: Springer.
Horgan J. 1997. The End of Science. New York: Broadway Books.
Hughes J. 2004. “Embrace the End of Work. Unless we send humanity on a permanent paid vacation, the future could get very bleak.” USBIG Discussion Paper No. 81.
Hughes J. 2004. Citizen Cyborg. Why Democratic Societies Must Respond to the Redesigned Human of the Future. Cambridge (MA): Westview Press.
Hughes J. 2014. “Are Technological Unemployment and a Basic Income Guarantee Inevitable or Desirable?” Journal of Evolution and Technology, Vol. 24, Issue 1, February: 1-4.
ISTAT. 2010. “La popolazione straniera residente in Italia.”
ISTAT. 2011. “Censimento industria servizi.”
Kanda T., Ishiguro H. 2013. Human-robot interaction in social robotics. Boca Raton: CRC Press.
Kaplan J. 2015. Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. New Haven: Yale University Press.
KELA. 2016. “Ministry of Social Affairs and Health requests opinions on a basic income experiment” (accessed August 26th, 2016).
Keynes J. M. 1963 [1930]. “Economic Possibilities for our Grandchildren.” In: Id., Essays in Persuasion. New York: W. W. Norton & Co.: 358-373.
Krugman P. 2013. “Sympathy for the Luddites.” The New York Times, June 13th.
Kurfess T.R. (ed.) 2012. Robotics and Automation Handbook. CRC Press. Kindle Edition.
Kurz H. D. 1984. “Ricardo and Lowe on Machinery.” Eastern Economic Journal, Vol. 10 (2), April-June: 211-229.
Kurzweil R. 2005. The Singularity is Near. When Humans Transcend Biology. New York: Viking.
Lefebvre H. 1982 [1968]. The Sociology of Marx. New York: Columbia University Press.
Liu Y., Sun D. 2012. Biologically inspired robotics. Boca Raton: CRC Press.
Mabry R.H., Sharplin A.D. 1986. “Does More Technology Create Unemployment?” Policy Analysis, No. 68, March 18th.
Manyika J. et al. 2013. Disruptive Technologies: Advances that will transform life, business, and the global economy. McKinsey Global Institute.
Martorella C. 2002. “Shigoto. Lavoro, qualità totale e rivoluzione industriale giapponese.”
Marx K. 1976 [1867]. Capital: A Critique of Political Economy. Harmondsworth: Penguin Books.
Menghini M., Travaglia M.L. 2006. “L’evoluzione dell’industria italiana. Peculiarità territoriali.” Istituto Guglielmo Tagliacarne.
Merton R.K. 1965. On the Shoulders of Giants: A Shandean Postscript. New York: Harcourt Brace Jovanovich.
Merton R.K. 1968. Social Theory and Social Structure. New York: The Free Press.
Merton R.K. 1973. The Sociology of Science. Theoretical and Empirical Investigations. Chicago: The University of Chicago Press.
Merton R.K. 1975. “Thematic Analysis in Science: Notes on Holton’s Concept.” Science, 188, April 25: 335-338.
Merton R.K. 1996. On Social Structure and Science. Chicago: The University of Chicago Press.
Merton R.K., Barber E. 2004. The Travels and Adventures of Serendipity. A Study in Sociological Semantics and the Sociology of Science. Princeton: Princeton University Press.
Minsky M. 1956. “Some Universal Elements for Finite Automata.” Automata Studies: Annals of Mathematics Studies, Number 34.
Minsky M. 1985. The Society of Mind. New York: Simon and Schuster.
Montani G. 1975. “La teoria della compensazione.” Giornale degli Economisti e Annali di Economia, Nuova Serie, Anno 34, No. 3/4, March-April: 159-192.
Moravec H. 1988. Mind Children. Cambridge (MA): Harvard University Press.
Moravec H. 1993. “The Age of Robots.”
Moravec H. 1997. “When will computer hardware match the human brain?”
Moravec H. 1998. Robot: Mere Machine to Transcendent Mind. Oxford: Oxford University Press.
Moyle W., Cooke M., Beattie E., Jones C., Klein B., Cook G., Gray C. 2013. “Exploring the effect of companion robots on emotional expression in older people with dementia: A pilot RCT.” Journal of Gerontological Nursing, 39(5): 46–53.
Mutlu B., Bartneck C., Ham J., Evers V., Kanda T. (eds.) 2011. Social robotics. Third international conference. Proceedings. Heidelberg: Springer.
Neilson S. 2011. Robot Nation: Surviving the Greatest Socioeconomic Upheaval of All Time. New York: Eridanus Press.
Neisser H.P. 1942. “‘Permanent’ Technological Unemployment: ‘Demand for Commodities Is Not Demand for Labor’.” The American Economic Review, Vol. 32, No. 1, Part 1, March: 50-71.
Noble D.F. 1995. Progress without People: New Technology, Unemployment, and the Message of Resistance. Toronto: Between the Lines.
Norton R. 2002. “Unintended Consequences.” In: Henderson D.R. (ed.), The Concise Encyclopedia of Economics. Indianapolis: Liberty Fund.
Odetti L., Anerdi G., Barbieri M.P., Mazzei D., Rizza E., Dario P., Rodriguez G., Micera S. 2007. “Preliminary experiments on the acceptability of animaloid companion robots by older people with early dementia.” In: Conference proceedings: Annual International Conference of the IEEE Engineering in Medicine and Biology Society: 1816–1819.
Oldroyd D. 1986. The Arch of Knowledge. An Introductory Study of the History of the Philosophy and Methodology of Science. New York: Methuen.
Paprotny I., Bergbreiter S. 2014. Small-scale robotics: From nano-to-millimeter-sized robotic systems and applications. Heidelberg: Springer.
Pasquale F., Cashwell G. 2015. “Four Futures of Legal Automation.” 63 UCLA Law Review Discourse, 26: 26-48.
Pellicani L. 2007. Le radici pagane dell’Europa. Soveria Mannelli: Rubbettino.
Pellicani L. 2015. L’Occidente e i suoi nemici. Soveria Mannelli: Rubbettino.
Polchi V. 2011. “Il governo ora chiede più immigrati.” La Repubblica, March 11th.
Ricardo D. 2004 [1821]. On the Principles of Political Economy and Taxation. Kitchener: Batoche Books.
Rifkin J. 1995. The End of Work. The Decline of the Global Labor Force and the Dawn of the Post-Market Era. New York: Putnam Publishing Group.
Russo L. 2004. The Forgotten Revolution. How Science Was Born in 300 BC and Why It Had to Be Reborn. Berlin: Springer.
Russo M., Pirani E. 2006. “Dinamica spaziale dell’occupazione dell’industria meccanica in Italia 1951-2001.”
Sandhu S. 2016. “Finland to consider introducing universal basic income in 2017.” Independent, April 1st.
Schor J.B. 1993. “Pre-industrial workers had a shorter workweek than today’s.” <groups.csail.mit.edu>.
Schor J.B. 1993. The Overworked American: The Unexpected Decline of Leisure. New York: Basic Books.
Schumpeter J.A. 2006 [1954]. History of Economic Analysis. Taylor & Francis e-Library.
Shapin S. 2004. “The Accidental Scientist.” American Scientist, Volume 92, Number 4.
Shin D., Yeh X., Narita T., Khatib O. 2013. “Motor vs. brake: Comparative studies on performance and safety in hybrid actuations.” In: Desai J.P. et al. (eds.), Experimental robotics. Heidelberg: Springer: 101-111.
Siciliano B. 2013. “Foreword.” In: Desai J.P. et al. (eds.), Experimental robotics. Heidelberg: Springer: v-vi.
Siciliano B., Khatib O. (eds.) 2008. Springer Handbook of Robotics. New York: Springer.
Smelser N. J. 1976. “On the relevance of economic sociology for economics.” In: Hupper T. (ed.), Economics and Sociology: Toward an Integration. Dordrecht: Springer Science+Business Media: 1-26.
Smith A. 1998 [1776]. An Inquiry into the Nature and Causes of the Wealth of Nations. London: The Electric Book Company.
Sonnad N. 2014. “Robot all too robot. Still think robots can’t do your job? This video may change your mind.” Quartz, August 15th.
Sparrow R. 2002. “The march of the robot dogs.” Ethics and Information Technology 4(4): 305–318.
Steuart J. 1767. An Inquiry into the Principles of Political Economy. London: Printed for A. Millar, and T. Cadell, in the Strand.
Stone J. 2016. “British parliament to consider motion on universal basic income.” Independent, January 20th.
Stuart Mill J. 2009 [1848]. Principles of Political Economy. Project Gutenberg TEI edition.
Swedberg R. 1987. “Economic Sociology: Past and Present.” Current Sociology, 35, March 1st: 1-144.
Sztompka P. 1986. Robert K. Merton. An Intellectual Profile. Hong Kong: Macmillan.
Tabarrok A. 2003. “Productivity and unemployment.” Marginal Revolution, December 31st.
Tipler F. 1994. The Physics of Immortality. New York: Doubleday.
Tipler F., Barrow J. 1986. The Anthropic Cosmological Principle. New York: Oxford University Press.
UNECE. 2004. “Over 50,000 industrial robots in Italy, up 7% over 2002. Italy is Europe’s second and the world’s fourth largest user of industrial robots.”
UNECE. 2005. “Worldwide investment in industrial robots up 17% in 2004. In first half of 2005, orders for robots were up another 13%.”
Wallace W.A. 1981. “Galileo and Reasoning Ex Suppositione.” In: Id., Prelude to Galileo: Essays on Medieval and Sixteenth-Century Sources of Galileo’s Thought. Dordrecht-Boston: Reidel: 129-159.
Wang L., Chen Tan K., Meng Chew C. 2006. Evolutionary robotics: From algorithms to implementations. Singapore: World Scientific Publishing.
Wessel D. 2015. “The Typical Male U.S. Worker Earned Less in 2014 Than in 1973.” The Wall Street Journal, September 18th.
Wicksell K. 1977 [1934]. Lectures on Political Economy. Fairfield: Augustus M. Kelley Publishers.
Wladawsky-Berger I. 2015. “Technological Unemployment and the Future of Work.” The Wall Street Journal, November 6th.
Woirol G.R. 1996. The Technological Unemployment and Structural Unemployment Debates. Westport: Greenwood Press.
Woirol G.R. 2006. “New Data, New Issues: The Origins of the Technological Unemployment Debates.” History of Political Economy, 38(3), September: 473-496.
Worstall T. 2015. “Finally, Someone Does Something Sensible: Finland To Bring In A Universal Basic Income.” Forbes, December 6th.
Złotowski J., Weiss A., Tscheligi M. 2011. “Interaction scenarios for HRI in public space.” In: Mutlu B. et al. (eds.), Social robotics. Heidelberg: Springer: 1-10.