Identity, Institutions and Governance in an AI World: Transhuman Relations 9783030361808, 9783030361815, 3030361802



English Pages 276 [272]


Table of contents :
Acknowledgements
Contents
Chapter 1: Introduction: Preparing for a “Transhuman” Future
Aim
Towards Transhuman Relations?
What Is Transhuman Relations?
The Artificial Road to Heaven or Hell?
Living in Existence 4.0
Developing a Theory of “Smart Consciousness”
Preparing for a “Transhuman” World
References
Chapter 2: Evolving Beyond Human Relations
Looking Past a Human-Centred World
Managing Human Relations
Living in an Anthropocene World
New Materialism for “Smart” Times
Going Beyond a Human-Centred World
Sharing Intelligence
Evolving Beyond Human Relations
References
Chapter 3: Heading Toward Integration: The Rise of the Human Machines
The Threat of Singularity
The Danger of Human Bias
Manufacturing “Ethical” Intelligence
Disruptive Debates
Bridging the AI Divide
Heading Toward Integration
References
Chapter 4: Leading Future Lives: Producing Meaningful Intelligence
Alienation 4.0
Caring Machines
Transhuman Lives
Healthy Robots, Happy Humans
Integrated Possibilities
Producing Meaningful Intelligence
References
Chapter 5: Creating Smart Economies: Administrating Empowering Futures
Smart Governance
Breaking Our Digital Chains
Creating Transhuman Value
Empowering Transhuman Organisation
Creating Integrative Economies
Administrating Shared Futures
References
Chapter 6: Reprogramming Politics: Mutual Intelligent Design
Cyborg Politics
Developing Transhuman Democracy
Simulating Progress
“Unhumanising” Politics
Reprogramming Politics
Mutual Intelligent Design
References
Chapter 7: Legal Reboot: From Human Control to Transhuman Possibilities
Transhuman Rights
Updating Autonomy
Enhancing the Law
Licit Pathologies
Legal Reboot
From Human Control to Transhuman Possibilities
References
Chapter 8: Shared Consciousness: Toward a World of Transhuman Relations
The Need for Radical Dehumanization and Disruptive Integration
From Disruption to Transformation
Deprogramming and Unhumanising “Industry 4.0”
Liberating Intelligence
ReCoding Reality
References
Index


Identity, Institutions and Governance in an AI World: Transhuman Relations. Peter Bloom


Peter Bloom, University of Essex, Colchester, UK

ISBN 978-3-030-36180-8    ISBN 978-3-030-36181-5 (eBook)
https://doi.org/10.1007/978-3-030-36181-5

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Acknowledgements

This book is dedicated to my comrades and co-conspirators from the Animorph Collective—Sz, Geoff, Hannah, Michal, and Natalia. I am looking forward to radically transform our and the world’s realities together!


Contents

1 Introduction: Preparing for a "Transhuman" Future 1
2 Evolving Beyond Human Relations 31
3 Heading Toward Integration: The Rise of the Human Machines 67
4 Leading Future Lives: Producing Meaningful Intelligence 93
5 Creating Smart Economies: Administrating Empowering Futures 131
6 Reprogramming Politics: Mutual Intelligent Design 173
7 Legal Reboot: From Human Control to Transhuman Possibilities 211
8 Shared Consciousness: Toward a World of Transhuman Relations 247
Index 265

CHAPTER 1

Introduction: Preparing for a “Transhuman” Future

Imagine walking down the street of any city, turning the corner and seeing a new business between the restaurants, bars, and shops. It is not selling food or clothes but something much more carnal and mechanised: sex with robots. If this sounds like a far-off dystopian future, think again. "Robot brothels" are being planned to open in major cities across the world, such as London and Moscow. In Toronto, this is already a reality, as the revealingly titled company "Kinky S Dolls" has designed human-looking female robots with artificial intelligence, renting out an intimate room located in its warehouse for 30 minutes or an hour. By 2018, it had already attracted over 500 customers (Yuen 2018). While seen as a perhaps perverse oddity when it first opened in Toronto, the idea of a "robot brothel" started a much larger and more profound debate when the company tried to expand its business to the US city of Houston. The owner Yuval Gavriel saw it merely as a business opportunity, declaring "The States is a bigger market, and a healthier market, and God bless Trump" (quoted in Dart 2018: n.p.). However, community groups and the Mayor passionately opposed the move, starting a petition for its prevention signed by over 12,600 residents. According to a member of the group Elijah Rising, which works to raise awareness about the city's sex trafficking problem: "We want to see the end of this systemic problem. We said, this robot thing looks very similar to pornography, in that when men engage with pornography it sort of detaches them from any sort of human relation, and we've noticed that with sex buyers" (ibid.: n.p.). Addressing the issue from a more global and future-oriented perspective, Professor Kathleen Richardson (founder of the "Campaign Against Sex Robots") declared:

Sex dolls are merely a new niche market in the sex trade. While these dolls are hidden from the public at the present there is nothing stopping any of the buyers taking their 'sex doll' to the supermarket, on the school run, or in any public space. Therefore we have to consider the dolls as a form of 3D pornography. There are also issues about what happens when you normalise a culture where women as the prostituted become visibly and openly interchangeable with dolls. (Ibid.: n.p.)

The above example is obviously extreme. Yet it is indicative of the way we still view society through a human-centric lens. Our focus remains firmly on how technology will impact humanity. Missing is an enlarged perspective that considers the effects on non-humans, whether AI, animal, or even climatic. Such a "transhuman" perspective is especially urgent as human relations are rapidly evolving into "transhuman" relations. The growing presence of robots, computerisation, and AI is forcing us to existentially rethink how we conceive of intelligence, interpersonal relations, and our social existence.

The first chapter will introduce the main theme of the book: how can humans prepare today for a "transhuman" tomorrow, in particular one where we share the world with a range of new and emerging forms of "smart consciousness"? Questions will be asked such as whether existing perspectives on human relations are sufficient for a coming age where "the internet of everything" is a daily and global reality. Will robots have "human rights"? Will individuals apply for the same jobs as a "conscious" automated employee? Can humans and AI learn from each other to create new forms of knowledge and social relations? The chapter will begin by highlighting the imminent emergence of a "smart world" and what this means. It will then explore the fears and hopes these changes will bring, ranging from dystopian visions of a robot-controlled future to utopian hopes of a technologically enlightened society. Following this critical discussion, it will focus on the almost complete lack of thinking (whether from academics or policy makers) surrounding the concrete cultural norms, ethical concerns, laws and public administration required to make this an empowering rather than disempowering shift. It will conclude by highlighting the need for humans to fundamentally evolve their thinking, practices and physical existence to meet the challenges and opportunities of this new "smart" revolution.

Aim

The twenty-first century is on the verge of a possible total economic and political revolution. Technological advances in robotics, computing and digital communications have the potential to completely transform how people live and work. Even more radically, humans will soon be interacting with artificial intelligence (AI) as a normal and essential part of their daily existence. What is needed now more than ever is to rethink social relations to meet the challenges of this soon-to-arrive "smart" world. This book proposes an original theory of transhuman relations for this coming future. Drawing on insights from organisation studies, critical theory, psychology and futurism, it will chart for readers the coming changes to identity, institutions and governance in a world populated by intelligent human and non-human actors alike. It will be characterised by a fresh emphasis on infusing programming with values of social justice, protecting the rights and views of all forms of "consciousness" and creating the structures and practices necessary for encouraging a culture of "mutual intelligent design". To do so means moving beyond our anthropocentric worldview of today and expanding our assumptions about the state of tomorrow's politics, institutions, laws and even everyday existence. Critically, such a profound shift demands transcending humanist paradigms of a world created for and by humans and instead opening ourselves to a new reality where non-human intelligence and cyborgs are increasingly central.

Towards Transhuman Relations?

In 2017 the World Economic Forum released a report tellingly entitled "AI: Utopia or Dystopia" (Boden 2017). Its findings were suitably cautious, warning people against fantasies or fears of a "singularity" in which machines overtake humans in intelligence and power. Yet it did strike a serious warning of the risks created by the rise of AI, declaring that

we should be prudently pessimistic—not to say dystopian—about the future. AI has worrying implications for the military, individual privacy, and employment. Automated weapons already exist, and they could eventually be capable of autonomous target selection. As Big Data becomes more accessible to governments and multinational corporations, our personal information is being increasingly compromised. And as AI takes over more routine activities, many professionals will be deskilled and displaced. The nature of work itself will change, and we may need to consider providing a "universal income," assuming there is still a sufficient tax base through which to fund it. (Boden 2017: n.p.)

These insights reflect the growing awareness that humanity is rapidly approaching a fundamental transformation. More than a mere updating of our current social and economic order, emerging technologies will "disrupt" for good or ill how we live, work, and even think. Even the most capitalist and elitist institutions, those at the heart of the current status quo, are acknowledging this coming radical change. According to a 2013 report released by the McKinsey Global Institute entitled "Disruptive Technologies: Advances that will Transform Life, Business, and the Global Economy":

the results of our research show that business leaders and policy makers—and society at large—will confront change on many fronts: in the way businesses organize themselves, how jobs are defined, how we use technology to interact with the world (and with each other), and, in the case of next-generation genomics, how we understand and manipulate living things. There will be disruptions to established norms, and there will be broad societal challenges. Nevertheless, we see considerable reason for optimism. Many technologies on the horizon offer immense opportunities. We believe that leaders can seize these opportunities, if they start preparing now. (Manyika et al. 2013: 4–5)

Indeed, the theorist Francis Fukuyama (1999), who after the Cold War triumphantly announced the "end of history" and the assured global victory of Liberal Democracy, admitted only a decade later, by the end of the century, that humanity is undergoing a "Great Disruption". Tellingly, he still holds out optimism, given that in his view humans have a unique ability to confront these challenges and their own biological nature for a greater common good:

It is, of course, both easy and dangerous to draw facile comparisons between animal and human behavior. Human beings are different from chimpanzees precisely because they do have culture and reason, and can modify their genetically controlled behavior in any number of complex ways. (Ibid.: 165)


Yet what actually is being disrupted? How will these technologies really impact society? On the one hand, AI promises no less than to revolutionize firms and society (Makridakis 2017). In the face of this revolution, there is an increasing desire to ensure that above all these disruptive changes remain "human centred". Yet underneath this growing wave of voices wanting to save humanity from a technological takeover is an undercurrent of critical perspectives embracing the possibilities to go beyond current human assumptions and limitations. In fields such as architecture, this could have profound and quite revolutionary philosophical and practical effects:

In this age of unprecedented technological progress, we can no longer ask "what is man?" without examining what we think man will become. In the field of architecture such an examination necessitates considering both what and for whom we will be building in the decades to come. Since the expansion of information and communication technologies in the beginning of the 1990s, the most forward-thinking architects have been asking these very questions. More specifically, digital architects have been among the first in the field, if not the first, to become interested in the effects of technological advancements not only on architectural design and the built environment of the future, but also on society as a whole and on our physical, psychological, and cultural evolution. Thus they have constructed future world visions often impregnated with post-humanist and transhumanist currents of thought. (Roussel 2018: 77)

Contained within the rise of AI and robotics is a chance to transcend the rather narrow and often historically destructive "humanist imagination" (Åsberg et al. 2011). In its place is the evolution from homo sapiens to "homo biotechnologicus", since "The biotechnology of today's world means that humanity is set on a path to transcending its own human nature, with all the risky consequences that entails" (Višňovský 2015: 230). What these exciting or terrifying depictions of the near future, depending on your point of view, ignore are the needs of non-humans. Tellingly, humans view themselves quite similarly in relation to both robots and animals. If computers represent an automated, unfeeling coming reality, animals are a present reminder of our wild and "savage" pasts. The human, for all our acknowledged faults, is still in the popular imagination the only being that can make decisions based on morality and empathy. Despite our history marked by wars, genocide, exploitation, and ecological devastation, humanity retains its supposedly unique status of leading an ethical and "good" existence. We may not be perfect, indeed far from it, the reasoning goes, but the alternatives are even worse. In this respect, human progress and potential is confined to socially constructed boundaries of "humanness", largely dismissing non-human forms of intelligence and being (Laurie 2015).

There are alternatives though: ones that gesture toward a different type of social order where humans are not at its centre. The pioneering social theorist Braidotti (2018) calls, in this regard, for a "focusing away from the 'naturecultural' and 'humananimal'" and instead on "the primacy of intelligent and self-organizing matter". Revealed is a brave new world where humans co-exist with AI, animals, plant life, and everyday objects as equals, one in which the lines that traditionally separate us blur and continue to evolve. While for many this is a future scenario to fear, with critical reflection it also serves as an opportunity to expand human potential and positively reconfigure our relationship to other "intelligent beings" and lifeforms. Importantly, such radical possibilities, a "democratization of subjects", necessitate "an ongoing, persistent deconstruction of the anthropocentric values all too often linked to recent trends in social media, artificial intelligence, genetic enhancements, predictive analytics, digital surveillance, and so on" (Igrek 2015: 92). Gestured toward is the potential for the transition from human relations to "transhuman relations". What is suggested is neither utopian nor dystopian. Rather, it is an effort to invoke a modern Copernican revolution in the human view of the social universe, challenging anthropocentric understandings which place humanity at its centre. Instead, disruptive technologies can catalyze a reconfiguration of what we value, allowing for a renewed appreciation of diverse intelligences and ways of being.
To this end:

An insurgent posthumanism would contribute to the everyday making of alternative ontologies: the exit of people into a common material world (not just a common humanity); the embodiment—literally—of radical left politics; finally the exodus to a materialist, nonanthropocentric view of history. These engagements are driven by the question of justice as a material, processual and practical issue before its regulation through political representation. Alter-ontology: justice engrained into cells, muscles, limbs, space, things, plants and animals. Justice is before the event of contemporary left politics; it is about moulding alternative forms of life. (Papadopoulos 2010)


Yet it will also require a willingness to grapple with the complex and challenging problems this shift in consciousness raises politically, economically, organisationally, and legally.

What Is Transhuman Relations?

If we are nearing the end of the era of the "human", or at least of its perceived supremacy, this raises an important question. What precisely is "transhumanism"? The answer to this seemingly simple question is not always so straightforward. To a certain degree, it can be viewed as a "philosophy" which seeks to better understand the "human future" in the face of rapid changes in science and technology (see More 2013). As such, it is a philosophical intervention that is committed to reinterrogating fundamental conceptions of "who we are" and "what we can become". However, it is also an ideal embracing the potential of technology for enhancing human capability. According to the renowned transhuman philosopher Gagnon (2012: n.p.):

The transhuman ideal is based upon a reconception of evolution, a perfecting and transcending of the human race through the next step in progress: not through biological mutation but through science and technology. H+ (a common abbreviation) means the enhancement of human beings as a whole, the inevitable advance of our species which combines biology with technology, enhancing our bodies and brains with scientific innovation, seeking to overcome the limitations of our flesh.

The perhaps immediate worry is that transhumanism is equivalent to dehumanization. Put differently, that in its attempts to transcend human limitations it will ultimately destroy our most sacred human qualities and practically erode our social freedoms and free will. In particular, the rejection of humanist "truths" and of belief in any inherent "human nature", perspectives most associated with post-structuralism and influencing much of transhuman thought, risks ignoring all that is good about humanity and worth preserving (see Porpora 2017). However, rather than a direct challenge to this humanism, it is perhaps more valuable and accurate to see transhumanism as engaging in a fruitful debate about what our future holds. Quoting William Grassie and Gregory R. Hansell (2010: 14) in the introduction to their celebrated collection Transhumanism and Its Critics at length:


The debate about transhumanism is an extremely fruitful field for philosophical and theological inquiry. The last hundred years of human evolution have seen remarkable scientific and technological transformations. If the pace of change continues and indeed accelerates in the twenty-first century, then in short order, we will be a much-transformed species on a much-transformed planet. The idea of some fixed human nature, a human essence from which we derive notions of humane dignities and essential human rights, no longer applies in this brave new world of free market evolution. On what basis then do we make moral judgments and pursue pragmatic ends? Should we try to limit the development of certain sciences and technologies? How would we do so? Is it even possible? Are either traditional religious or Enlightenment values adequate at a speciation horizon between humans and posthumans when nature is just not what it used to be anymore? Is the ideology of transhumanism dangerous independent of the technology? Is the ideology of the bioconservatives, those who oppose transhumanism, also dangerous and how? Are the new sciences and technologies celebrated by transhumanists realistic or just another form of wishful thinking? And which utopic and dystopic visions have the power to illuminate and motivate the future?

Significantly, transhumanism is not so much a complete break with the past as it is an attempt to philosophically and practically theorise its evolution. In the words of perhaps the most famous transhumanist thinker, Nick Bostrom, "Transhumanists view human nature as a work-in-progress" (2005: 1). It demands, in turn, a healthy dose of skepticism and open-mindedness, since it is "a dynamic philosophy, intended to evolve as new information becomes available or challenges emerge. One transhumanist value is therefore to cultivate a questioning attitude and a willingness to revise one's beliefs and assumptions" (Bostrom 2001).

How, though, does transhumanism differ from the similar sounding concept of "posthumanism"? For most people who are not familiar with these ideas, and who understandably do not spend a huge amount of time thinking about a future with a radically altered humanity, these possible differences can seem both semantic and irrelevant. And indeed, as the final quote in the previous section suggests, the potentials of transhumanism share much with desires for an "insurgent posthumanism". Yet their contrasts are relevant and worth identifying. Posthumanism, in this respect, broadly represents a desire to completely move beyond historical notions of "being human", philosophically, culturally, and politically rejecting ideas of "human nature" and human rule. This definition is, of course, quite a broad brush for such a sophisticated and rich vein of thinking. Nevertheless, what all the various strands of posthumanism have in common, from "anti-humanism" to "voluntary human extinction" to "accelerationism", is the "embracing of our demise" (Wolfe 2010). Consequently, according to Fuller (2018), posthumanism and transhumanism provide "alternative mappings of the spaces of political possibility" for moving beyond and challenging "our morphology from an upright ape":

Posthumanists stress our overlap with other species and interdependency with nature, while transhumanists stress the variability and mutability of genes, which allow enhancement. Posthumanist sociology emphasizes the "superorganic" biological and evolutionary roots of social behavior, while transhumanists emphasize humanity's extension into technology and our accelerating cultural evolution. Both posthumanists and transhumanists see our simian nature as a platform or way station that opens up into a much wider range of possible ancestors and descendants than conventional politics normally countenances. (Fuller 2018: 151)

Nevertheless, each of these perspectives helps to redefine established human ethics and relations. For this reason, Huxley (2015: 12) refers to transhumanism as an "ethics in progress", one which will continually confront the moral and social issues arising from our use of technology to enhance and change our "nature" and "selves". This ethical imperative points to the possibility of a "moral transhumanism" in fields such as health, in which "biomedical research and therapy should make humans in the biological sense more human in the moral sense, even if they cease to be human in the biological sense" (Persson and Savulescu 2010: 656). Perhaps more fundamentally, it allows for a "timely ethics" that reconfigures our relationships with each other and those that populate our world. As noted by leading posthumanist critical thinker Cecilia Åsberg (2013: 8), "Posthumanist ethics…emerge as efforts to respect and meet well with, even extend care to, others while acknowledging that we may not know the other and what the best kind of care would be." It is perhaps for this reason better to think of transhumanisms and posthumanisms, in the plural, than of any singular definition of transhumanism.

Significantly, these perspectives are not a priori incompatible with established theological views of the "human soul" (see Mercer and Trothen 2014). Rather, they reframe humans as ultimately "relational beings" in which, at least from a Christian viewpoint, "the dimension that is decisive for resurrection is the relation of the soul to God" (Peters 2005: 381). The point here is not to posit onto transhumanism or posthumanism any inherent religious connotations or to support one theological interpretation of them over another. Instead, it is to highlight how versatile these philosophies are, able to speak to a wide range of existing and emerging ideas on the human condition.

Indeed, transhumanism runs the risk of ethically reproducing the logics of morally troubling past discourses such as "eugenics", for which individual and social progress are primarily associated with genetic manipulation (see Koch 2010). Similarly, the Enlightenment historical roots of transhumanism can, without proper critical attention, lead it to reinforce a number of problematic modernist outcomes, including giving birth to "new theologies" and politically fostering "technocratic authoritarianism" (Hughes 2010). While recognizing these dangers, transhumanism can also serve as a type of present-day hope for positively dealing with an uncertain future. As one self-proclaimed follower of this "lifestyle" proclaimed:

transhumanists, through their worldviews and lifestyle choices, and through their ability to deal with and better understand the changes on the horizon, are putting themselves in a better position than most to anticipate and apply the coming technologies to their lives and their bodies; they are inoculating themselves against future shock. Transhumanists hope that future advancements will work to the benefit of humanity, and that missing out on this potential, either because of sweeping bans or preventable catastrophes, would be a travesty. (Dvorsky 2008: n.p.)

It offers the potential, furthermore, for achieving a realistic "immortality" in the relatively near future, one based not on any ecclesiastical notion of eternal salvation but instead on having more time to pursue our own interests, passions, and self-development (Gelles 2009). However, as attractive as these optimistic visions of transhumanism may be, they still too often fall into the trap of anthropocentrism: placing the human above all other forms of life and intelligence. It is the assumption that "we are not simply one among many species but are privileged by virtue of our capacity to understand the entirety of the evolutionary process—indeed courtesy of computer models as if we had designed (but not determined) that process" (Fuller and Lipinska 2014: 6).


At stake, then, is envisioning a transhuman and posthuman perspective that does not focus simply on either enhancing or transcending humanity. Instead, it takes seriously the need to respect and learn from that which is non-human; to conceive and start to build a future based on shared knowledge and mutually beneficial relations, in which "human" potential is intertwined with our interaction with animals, technology, and objects in our world.

The Artificial Road to Heaven or Hell?

The potential creation of a transhuman society, though, should not be confused with either utopianism or dystopianism. Indeed, most depictions of our future are defined by their perceived optimism or pessimism about the world to come. They exaggerate the possibilities of our progress into a more egalitarian, just, and happy civilization or of our descent into even greater inequality, oppression, and misery. To this end, they often reveal more about our present than they accurately predict our future. It is not surprising, for instance, that Orwell wrote 1984 in a time of fascism and totalitarianism, nor that the corporate dystopias of Blade Runner and Brazil became popular as the new era of neoliberalism began to dawn. The current embrace and fear of technology mirrors this simultaneous contemporary excitement and terror at our present-day reality. Transhumanism and posthumanism, hence, offer a vision of a coming world different from our own. Amidst their philosophical and political differences, they articulate an alternative set of values and assumptions for reconfiguring social relations. So-called "converging technologies", including nanotechnology, biotechnology, and information technology, represent a broader desire to bring these emerging scientific advances together for the purpose of "improving human performance". However, commentators such as Coenen (2007: 156) have warned against thinking of posthumanism as necessarily utopian, as it "is mainly concerned with technological construction of new beings to complement or replace humanity. It tends strongly towards quasi-religious visions of the abolition of temporal limits on individual consciousness, in which the ego is preserved and death outwitted by various technological means". Nevertheless, in 2005 the World Transhumanist Organisation proclaimed as article 1 of its "Transhumanist Declaration" that

12 

P. BLOOM

Humanity will be radically changed by technology in the future. We foresee the feasibility of redesigning the human condition, including such parameters as the inevitability of aging, limitations on human and artificial intellects, unchosen psychology, suffering, and our confinement to the planet earth. (quoted in Coenen 2007: 145)

It is, in this regard, a challenge to the current era: utopian not simply because of its romanticized promises but also in that, for many, it still seems to be a “no place”, mere fantasy rather than an ever-nearing reality. Yet as AI, automation, and robotics become ever more prominent parts of modern human life, it is progressively becoming a matter of when rather than if. This has produced, in turn, a growing technological optimism: a sweeping cultural faith in the ability of digital advances to solve all our most pressing problems. It is a “techno-utopian discourse” which conceives of technology not as a force of disenchantment but as a re-enchantment of our contemporary world:

They revolve around different kinds of emerging technologies, some of them outright futuristic like artificial superintelligence or (post-)human enhancement (Kurzweil 2005); some apparently in the making, like synthetic biology and autonomous cars; others—like 3D printing and ‘Big Data’—already existing and associated with boundless future potentials. (Dickel and Schrape 2017a: 289–290)

Running throughout much of transhuman thought, hence, are themes of the salvationary qualities of technology. Here, disruptive advances serve the needs of humanity even while unrecognizably transforming them (Hauskeller 2012). The combination of foreignness and increasing familiarity of transhuman and posthuman perspectives can make them seem like a time capsule from an already existing and tantalizingly close future, a figurative “letter from utopia” (Bostrom 2008). There is a pronounced danger, though, of stifling actual innovation in the here and now, the hard work of creating the concrete conditions for such an exciting tomorrow, by always continuing to look ahead as if that tomorrow were somehow predestined and just waiting for humanity to arrive (Dickel and Schrape 2017b). At the other end of the social spectrum is a profound disquiet tied up with these disruptive technological advances. These fears go beyond modern updatings of Luddite wholesale rejections of technology. Rather, they represent the anger and pessimism of an age marked by rising

1  INTRODUCTION: PREPARING FOR A “TRANSHUMAN” FUTURE 

13

inequality, economic insecurity, social dislocation, and political upheaval. Reflected is a serious and deep “dystopian imagination” questioning the ability of humanity, historically and perhaps fundamentally, to use AI and robotics for anything other than personal gain and mass exploitation (Gruenwald 2013). This is not mere idle pessimism, moreover. It signifies a real concern over the contemporary policies and assumptions driving changes to the existing social order. New advances in reproductive genetics—or “reprogenetics” for short—can socially reproduce and reintroduce morally troubling historical ideas such as eugenics, by appealing to contemporary values such as “market choice”. It reveals a worrying “recent enthusiasm regarding ‘liberal eugenics,’ claiming that reprogenetic decisions should be left to individual consumers thus enhancing their options in the health market” (Raz 2009: 602). Emerging are competing ideas of what awaits humanity in the twenty-first century and afterwards. If there is agreement, it is that human society is on the verge of a dramatic transformation. Beyond this common ground there are “contested futures”, each depicting paths that new technologies can lead us down for reconfiguring who we are and how we live (Brown and Rappert 2017). That we as a social species are evolving seems ever clearer, but whether this will be progress or not remains ambiguous at best (Verdoux 2009). Just as significantly, these visions of the future are constitutive of how we conceive and act upon the possibilities of human enhancement in the present (Coenen 2014). Digging deeper, though, these opposing visions of a hi-tech future share another distinct feature. They remain by and large quite human-centred despite paradoxically prophesizing humanity’s destruction. “Transhumanism is not simply utopian in the same way as the humanisms of Marx or B.F. Skinner”, notes Professor Fred Baumann (2010: 68), “rather, it is qualitatively different in that it ‘goes beyond’, avowedly disregarding and leaving behind human beings themselves—the very beings that were the central concern of all previous humanisms.” These visions are fixated on human progress or destruction, mirroring past religious themes of salvation versus the prophesized apocalypse (Burdett 2014). At the centre of these predicted worlds to come is the making of a “new man” (Saage 2013). However, fresh though this “new man” may appear, he remains stuck in a past historical worldview defined by anthropocentrism. More precisely, such visions repeat, in utopian or dystopian ways, an age-old story of “man battling the elements”, the wildness of nature now replaced by the metallic terror of technology. It offers a way to reimagine our bodily
existence, transcending existing physical human limitations, while keeping faith in humanity’s ultimately privileged position in the social universe (Marques 2013). Required instead is to truly remake humanity’s view of itself and its world, a view intimately associated with technology but not exhausted by it. The growing prevalence and influence of artificial intelligence opens the way for us to collectively redefine our current relationship with non-human consciousness. In paying closer attention to how we treat animals and our environment, we can begin assessing, ethically and practically, how we should and could progressively shape these relations in an increasingly hi-tech future.

Living in Existence 4.0

Beyond the promise of utopia and the collective terror of dystopia is the much harder but arguably more rewarding task of imagining what our technology-infused future may realistically be like. It is to leave the world of the fantastic for the potentially actual and rather banal. What will it mean to walk down the street in a city populated as much by AI as by humans? How will people spend their days and live their lives? As we enter the so-called “age of the smart machine”, it is the daily possibilities for transformation that are perhaps the most interesting—a concern that can be traced back to the latter days of the twentieth century, when the first seeds of these disruptive technologies began to bud (Zuboff 1988). Disruptive technologies will not only modify how we conduct business but will be a sea change in our daily interactions, both big and small. They also hold the potential to reconfigure our experiences of power and politics. The prevalence of social media as the primary forum for political communication, for example, can challenge traditional notions of accountability and transparency:

The vast majority of political speech acts now occur over digital platforms governed by terms-of-service agreements. In volumes of data or proportions of bandwidth, most communication is between and about devices and about people. It used to be fairly straightforward to trace agency—or to place blame—for miscommunication or communication that promulgated social inequality. Now significant amounts of communication involve autonomous agents that have been purposefully designed but that only produce content in interaction with people in the context of a platform. (Woolley and Howard 2016: 4882)

Just as troubling is how these autonomous non-human digital agents can be used for reinforcing all too human forms of power:

The pervasive use of such human-software hybrids, and the obscure and often discriminatory nature of the algorithms behind them, threaten to undermine the political potential—organizational, communicative, and otherwise—of social media systems (boyd, Levy, & Marwick, 2014; Woolley, 2016; Woolley & Howard, 2016b). Many types of actor groups build, use, and deploy political bots: corporate lobbyists, content management firms, civic activists, defense contractors, and political campaigns. (Ibid.: 4885)

Fundamentally, such technologies and others like them promise to redefine citizenship as it is currently understood. There is a fear that “the robots are coming” and, with it, a perceived need to defend humanity and our existing “way of life” (Rus 2015). For this reason, the treatment of robots is premised less on the mutual expansion of our shared possibilities or welfare than on their regulation and our protection (Boden et al. 2017). These concerns and responses reflect as much our present focus on safeguarding rather than expanding our freedoms as they do the actual dangers of robots. Interestingly, the potential for democracy to be revitalized by new technologies such as “big data” is traded for more pessimistic assessments of how it can merely “survive” their increasing prevalence (Boyte 2017). At stake is an underlying conservatism: a desire for protection against change and a belief in the sacredness of the current social order for all its acknowledged flaws. Much of this conservatism is rooted in an inability to imagine a different world where artificial intelligence and other disruptive technologies such as “algorithmic intelligence” can positively “reconstruct citizenship” (Birchall 2019). A significant though overlooked issue for conceiving and pushing forward Industry 4.0 is existing global inequalities. A serious danger of this “revolution” is that it exacerbates the continued underdevelopment of much of the world. The current ideological constructions of geography, from cartography to everyday understandings of place and the world, are rooted in colonial and neo-colonial knowledges. The question, then, is whether social geographers and the public can overcome these mythical geographies and “reimagine them” through the global rise of robots and their effect on “human borders” (see Del Casino Jr 2016).
In this respect, emerging techno-utopian perspectives of development risk reinforcing prevailing discourses linked to race, ethnicity, and nationalism. Examining
the political use of digital platforms in Kenya, scholar Lisa Poggiali (2017: 254) observed that while these technologies were originally used as a technocratic force for “depoliticizing” market ideologies, “in which efficiency supplants civic responsibility as one of the major justifications for and goals of government”, they soon took on a much more formative and dynamic political role. They became a virtual site for contestation and debate over what constitutes twenty-first-century citizenship, especially during a period of national crisis. This novel form of “digital citizenship” emerged as a site through which questions of citizenship were posed—if not resolved—in a shifting and increasingly precarious political climate.

While ‘digital politics’ became an electoral currency mobilized by different social groups, ‘the digital’ was not itself an empty signifier; the disparate platforms to which digitality became linked all involved appeals to nationalism and modernization. Digital technologies could communicate such messages convincingly due to their simultaneously intimate and expansive qualities… experienced most powerfully through the ever popular mobile phone, (they) express both personal proclivities and globalist ambitions, thereby linking the self to possibilities that exceed the boundaries of the community or nation. Mobiles, for example, are a cornucopia of customizable and carefully chosen ringtones, music playlists, photographs and contact lists. They are also a bridge to people and information worlds away, through free messaging services including WhatsApp and social media sites such as Facebook. Thus, compared with other infrastructural forms, digital technologies more easily create new scales of belonging, and more consistently and deeply yoke them to experiences and representations of the self. (Ibid.: 255)

The expansive and contested effects of these technologies speak, in turn, to broader social visions of a future transhuman reality. They encompass not just work but also leisure in a time “when robots rule the earth”. In the recent and popular hypothetical future proposed by futurist thinker Robin Hanson (2016), it is not humans but “ems”—the prospective idea of scanning a human brain into a robotic body and enhancing it with AI—which are in control and the main actors of the future. While ems have “very human like experiences”, they work and play in virtual reality. These virtual realities are of spectacular quality with no intense hunger, cold, heat, grime, physical illness, or pain. Ems never need to clean, eat, take medicine, or have sex, although they may choose to do so anyway. Even ems in virtual reality, however, cannot exist
unless someone pays for supports such as computer hardware, energy and cooling, real estate, structural support, and communication lines. Someone must work to enable such things. In fact most em labour is focused on creating and sustaining such supports. (Ibid.: 9)

What is crucial, though, is not to assume that this future is already determined; rather, it should be seen as an ongoing and global conversation. “Technology exists in and is part of our symbolic universe, of thick and rich socio-cultural, economic, political, and religious practices,” as the scholar Alexander Ornella (2015: 303–304) reminds us. “Technology provides meaningful ways to relate to, explore, and frame the world around us. How we shape the world around us, however, can both tell us something about our own self-understanding as human beings and shape our self-understanding as embodied Beings.” Geraci (2016), for instance, compares Christian-inspired apocalyptic visions of technological “singularity” with Hindu-inspired ideas of technological “rebirth”, based on the transition from the “misery” of our current era of kali yuga to a return to a rebooted golden age of satya yuga. These quite context-specific conversations can lead, in turn, to a broadening of our shared social and technological imagination as to what is humanly possible in the coming transhuman tomorrow (Hurlburt and Tirosh-Samuelson 2016). The danger, though, is that as exciting and expansive as these visions can be, they will remain firmly rooted in an anthropocentric worldview. One way to avoid this is to cultivate, in the present day, stronger human and non-human relationships across different spheres of our existence, such as in traditional industrial settings (Sauppé and Mutlu 2015). This can be achieved in part by developing artificial intelligence that is more “life like” (Steels and Brooks 2018). Similarly, the more individuals communicate with robots and AI, the more comfortable and less fearful they will become (Suzuki et al. 2015). This greater integration of human and artificial “life” points to the potential for fostering a new, less human-centred form of existence.
Key to doing so is establishing a stronger “common ground” between the interests and needs of both humans and robots. In the words of Pulitzer Prize-winning author John Markoff (2016: 15):

While there is an increasingly lively discussion about whether intelligent agents and robots will be autonomous—and if they are autonomous whether they will be self aware enough that we need to consider questions of
‘robot rights’—in the short term the more significant question is how we treat these systems and what the design of those interactions says about what it means to be human. To the extent that we treat these systems as partners it will humanize us. Yet the question of what the relationship between human and machines will be has largely been ignored by much of the modern computing world.

This requires asking new questions for organising this potentially new social order (Lindemann 2015). To a certain extent, this means deeply questioning the values and norms that we are applying to human and non-human relations, such as in the comparable existence of prostitution and the creation of sex robots for male pleasure and control (Richardson 2016). Ideologically, it entails fostering less fear around the “blurring (of) human-machine distinctions” in the development and design of robots, for instance (Ferrari et al. 2016). Opening up are new possibilities to evolve from “Industry 4.0” to an integrated and empowering “Life 4.0”.

Developing a Theory of “Smart Consciousness”

Realising a less human-centred and more “integrated” future reality will mean, perhaps above all else, fundamentally shifting how we think about “consciousness”. The famous Cartesian precept of “I think, therefore I am” precludes, it would seem, non-human beings. It is an internal monologue about the self that ultimately establishes the self as a “self”: an awareness that one exists that is supposedly the true test of whether one does, in fact, exist, at least as a conscious and intelligent being. To this end,

Consciousness is an elusive concept, and efforts towards understanding it or its evolution oscillate between philosophy and neuroscience—between thought experiments and measurable tests of brain activity. Consequently, philosophers and scientists are continually coming up with new theories on why or how they think that the physical brain can bring the metaphysical mind into being. (Balakrishnan 2018: 402)

In present times, anthropocentric thinking has largely persisted across both philosophical and scientific perspectives. The term “artificial intelligence” implies that it is humans who have “natural” or “genuine” intelligence. Such ideas, of course, are challenged by a rich tradition of highlighting just how socially programmed this “natural” human
intelligence truly is. Predicted, or at least dreamed of, is “the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity. In this new world, there will be no clear distinction between human and machine, real reality and virtual reality” (Kurzweil 2005). Just as significantly, unlike the industrial and digital revolutions, “the AI revolution aims to substitute, supplement and amplify practically all tasks currently performed by humans, becoming in effect, for the first time, a serious competitor to them” (Makridakis 2017: 54). It is worth reflecting, then, a bit more deeply on what is actually meant and inferred by the notion of “artificial intelligence”. On the surface, it refers simply to the fact that it is humans who programme and are therefore the ultimate creators of machine “thinking” (Cohen and Feigenbaum 2014). This in turn produces a range of principles for practically and ethically guiding the divine-like power of modern humans to create semi-conscious machines (Nilsson 2014). Artificial intelligence, hence, requires adopting a “modern approach” for this purpose. At the start of their landmark book on the subject, Russell and Norvig (2016: 1) proclaim:

We call ourselves Homo sapiens—man the wise—because our intelligence is so important to us. For thousands of years, we have tried to understand how we think; that is, how a mere handful of matter can perceive, understand, predict, and manipulate a world far larger and more complicated than itself. The field of artificial intelligence, or AI, goes further still: it attempts not just to understand but also to build intelligent entities.

Crucial, in this respect, is to stop considering AI as so fundamentally different from human intelligence. There is a profound “us vs. them” discourse permeating current understandings of AI (Oh et al. 2017). These discourses ignore the deep philosophical considerations emerging from this new non-human consciousness, ones that can expand our conception of humanity rather than essentialise it against a machine “other” (Copeland 2015). Required is a profound paradigm shift from “anthropocentric humanism to critical posthumanism”. This was the main spirit motivating, for example, the creation of the Digital Education Group’s “Manifesto for Teaching Online”. In an interview, one of its founders, Sian Bayne, declares:

Posthumanist thought within education is a way of addressing the failures of the humanist assumptions which, I would say, have driven much educational
research and practice over the last few decades. Posthumanism is useful, because it asks people to think what would education look like if we did not take a position which sees the human as a kind of transcendent observer of the world. Instead, it sees humans as entangled with the world. (quoted in Bayne and Jandrić 2017: 198)

This includes changing how we think and talk about AI and robots, accepting them as part of a “natural evolution” of human learning and development (Havlík 2018). It also means using speculative techniques, such as science fiction, to realistically consider the actual power and possibilities of AI. This paradigm shift in regard to AI reflects the broader growing embrace of “neurodiversity”. Rather than demonising those who perceive the world differently as deficient, there is a rising movement to accept and learn from such individuals (Baron-Cohen 2017). The history of autism, or at least its social and medical treatment, serves as a good lesson for how we should progressively approach AI. In his comprehensive and much lauded book on the subject, Neurotribes: The Legacy of Autism and the Future of Neurodiversity, author Steve Silberman observes that

…newly diagnosed (autistic) adults were engaged in a very different conversation about the difficulties of navigating and surviving in a world not built for them…the idea of neurodiversity has inspired the creation of a rapidly growing civil rights movement based on the simple idea that the most astute observers of autistic behaviours are autistic people themselves rather than their parents or doctors. (Silberman 2015)

Not surprisingly, market-oriented thinkers are already co-opting “neurodiversity” as a competitive advantage (Austin and Pisano 2017). While the ethics of such ideas may be questionable, they do represent how AI can be viewed less as something foreign and “non-human” and more as an example of neurodiversity (Ortega 2009). Rather than view humans as the opposite of machines and robots, it is perhaps more accurate and empowering to consider us as component parts of a dynamic “networked self” (Papacharissi 2018). Similarly, transforming popular thinking about animal intelligence is critical to our reconsideration of AI. The common perception of animals as unthinking or less sophisticated than humans allows for their dismissal as conscious beings and, more troubling, the justification for their mistreatment
and our quite literal mass consumption. It is almost impossible, hence, to envision shared “common futures” when there is such a misunderstanding and devaluing of the intelligence of the other beings you are meant to be sharing this future with. Importantly, such dismissal of animal intelligence is not backed up by the latest scholarly evidence (Thorndike 2017). Required is a remeasuring of what counts as intelligence, providing a new lens for evaluating the depth and value of human and non-human thinking, as well as new frameworks such as “universal psychometrics”, which specifically attempts to reconfigure traditional measurements of cognitive ability for machine intelligence. “Beyond the enormous landscape of behaviours in the animal kingdom, there is yet another gigantic space to be explored: the machine kingdom”, notes scholar José Hernández-Orallo (2017: xi):

A plethora of new types of ‘creatures’ is emerging: robots, animats, chatbots, digital assistants, social bots, automated avatars and artificial life forms, to name a few, including hybrids and collectives, such as machine-enhanced humans, cyborgs, artificial swarms, human computation systems and crowd computing platforms. These systems display behaviours and capabilities as peculiar as their developers and constituents can contrive. Universal psychometrics presents itself as a new area dealing with the measurement of behavioural features in the machine kingdom, which comprises any interactive system, biological, artificial or hybrid, individual or collective.

Doing so opens the way to transcending narrow visions of and fears over technological singularity (Wang et al. 2018). In their place emerges the possibility of being part of a world that integrates these diverse intelligences to create “hybrid beings and systems combining natural and artificial intelligences” (Fox 2017: 38). At stake is a radical reconceptualising of what it means to be “smart”. There is an ongoing view that human intelligence is being devalued and is under threat from “smart” machines. Yet once we accept that this is simply a different type of intelligence, the potential for collaboration and learning eclipses and can ultimately replace these worries. The question for humans shifts from one of controlling, and avoiding being controlled by, machines and robots to one of how we can contribute to the creation of “robust and beneficial artificial intelligence” (Russell et al. 2015: 105). Critical is the building of “embodied, situated agents” that can mirror “artificial life” while also granting us the benefits that come with “artificial intelligence”
(Steels and Brooks 2018). There is already a gradual reimagining of emotions as interconnected with, and deepened by, technological advancements, such as their popular portrayals in “cyberpunk science fiction” films (Lee 2016). In the present, this entails reconfiguring creativity away from limiting anthropocentric assumptions and toward shared forms of “intentional creative agency” (Guckelsberger et al. 2017). In the process, we can embrace an expanded, rebooted form of humanity, thriving in an age of artificial intelligence and taking advantage of all the possibilities offered by “life 3.0”, one in which to be human means living amidst and evolving alongside artificial intelligence. As Max Tegmark (2017: 261) optimistically declares in his celebrated book Life 3.0: Being Human in the Age of Artificial Intelligence:

…the most inspiring scientific discovery ever is that we’ve dramatically underestimated life’s future potential. Our dreams and aspirations need not be limited to century-long life spans marred by disease, poverty and confusion. Rather, aided by technology, life has the potential to flourish for billions of years, not merely here in our Solar System, but also throughout a cosmos far more grand and inspiring than our ancestors imagined. Not even the sky is the limit.

Preparing for a “Transhuman” World

This chapter has sought to introduce the exciting potentialities of creating and existing within a “transhuman world”. The current terror over robot domination and AI control is rooted as much in our history of violence, conquest, and exploitation as in the actual threat posed by such non-humans. Their use by us should be a point for critically reflecting on what it is “to be human” and what it could be, rather than reinforcing any essentialist notions of “human nature” or desire. It is an opportunity to explore a different way of perceiving and shaping our environment, an evolution from the rule of human intelligence to the embrace of intelligence diversity (Alexandre and Besnier 2018). The question is whether we are ready for such an empowering “transhuman world” or whether we will continue to be confined to one where humans rule others and each other. Key to making such a decision is revisiting the troubling idea of “singularity”. It is not a foregone conclusion that robots and AI will exceed our own capabilities and intelligence, thus making us expendable
and disposable. Needed are serious critiques of this ideology, revealing it not as reality but as a human-created mythology. This dystopian vision is an “aesthetics”, a way of colouring our future based on our present inequities (Jameson 2018). It presents an exclusionary worldview toward non-humans that replicates our own exclusionary social relations, one which discriminates against and fears robots rather than welcoming them as “a member of our human community”. Singularity is simply another form of contemporary “sensationalism” that precludes the actual possibilities for progress in the name of trumped-up fears of a non-human “other” (Goldberg 2015). While AI can still certainly pose an “existential danger” to humanity, if properly developed, and with the benefit of a paradigm shift in humanity, it and we “can still be a force for good” (Bishop 2016: 267). Disposing of singularity, and all the terror it brings with it, allows for a more nuanced, optimistic, and realistic discussion of what precisely it will be like to live in a “trans-human” society. The jettisoning of any essentialist human nature does not mean the end of ethics and morality. Rather, it will require the construction of new norms and ethical standards to engage with this “open nature” (Murillo 2014). At the heart of these fresh and evolving moral considerations is the enhancement of humans and humanity. Conventionally, human enhancement is linked primarily to “technological innovations” granting us new capabilities previously either unimaginable or viewed as outright impossible (Iuga 2016). However, a core contention of this book is that it will demand an enhancement of our values, ideologies, and practical ethics. This is not a call for “techno-transcendence” (Harris 2014) or a warning of “techno-apocalypse” (Brennan 2016). Instead, it is a critical investigation that attempts to portray anew our present and future possibilities of sharing a world with robots and machines.
It is an act of intellectually redramatising this relationship, moving from a fearful singularity to an exciting integration. To achieve this integrative reality, it is necessary to look not just toward the future to come but also at our present interactions with non-humans. Animal rights and robot ethics are intimately intertwined, both reflections of the existential dangers posed by a human-centred worldview. The advance of AI and robots is not the victory of inhuman beings but the rise of new “persons” with complex identities and diverse forms of wisdom. Needed, therefore, is a new Enlightenment discourse for the modern age that celebrates shared intelligence and rejects narrow
anthropocentric perspectives as outdated and a barrier to our common flourishing (Jotterand 2010). It also requires a clear-eyed interrogation of the underlying structures and power dynamics that are preventing this more enlightened, integrated future from occurring. This new movement depends on the reorienting of transhumanism as an antidote to human injustice. Currently, there are ideas that humans and robots can achieve a “convergence”, focusing on the need to standardize our work and society in order to take advantage of “socio-economic innovations” and the benefits of “functional singularities”. This is certainly a step in the right direction. Yet these analyses remain wedded to narrow market ideologies that continue to limit our social imagination. Moreover, the goal is not transcendence or salvation. It is progress and evolution, though based in a sense of open possibility rather than inevitability (Sirius and Cornell 2015). This entails being willing to rewrite our past and having a renewed faith that, far from being at the “end of history”, the potentiality of what we may become and achieve has only just begun.

References

Alexandre, L., & Besnier, J. M. (2018). Do Robots Make Love?: From AI to Immortality–Understanding Transhumanism in 12 Questions. Cassell.
Åsberg, C. (2013). The Timely Ethics of Posthumanist Gender Studies. Feministische Studien, 31(1). https://doi.org/10.1515/fs-2013-0103.
Åsberg, C., Koobak, R., & Johnson, E. (2011). Beyond the Humanist Imagination. NORA – Nordic Journal of Feminist and Gender Research, 19(4), 218–230. https://doi.org/10.1080/08038740.2011.625042.
Austin, R. D., & Pisano, G. P. (2017). Neurodiversity as a Competitive Advantage. Harvard Business Review, 95(3), 96–103.
Balakrishnan, V. S. (2018). The Birth of Consciousness: I Think, Therefore I Am? The Lancet Neurology, 17(5), 402.
Baron-Cohen, S. (2017). Editorial Perspective: Neurodiversity – A Revolutionary Concept for Autism and Psychiatry. Journal of Child Psychology and Psychiatry, 58(6), 744–747.
Baumann, F. (2010). Humanism and Transhumanism. The New Atlantis, 29, 68–84.
Bayne, S., & Jandrić, P. (2017). From Anthropocentric Humanism to Critical Posthumanism in Digital Education. Knowledge Cultures, 5(2), 197.
Birchall, C. (2019). Algorithmic Intelligence? Reconstructing Citizenship Through Digital Methods. Retrieved October 3, 2019, from http://


ethnographymatters.net/blog/2016/04/12/algorithmic-intelligencereconstructing-citizenship-through-digital-methods/.
Bishop, J. M. (2016). Singularity, or How I Learned to Stop Worrying and Love Artificial Intelligence. In Risks of Artificial Intelligence (p. 267).
Boden, M. (2017). AI: Utopia or Dystopia? Retrieved October 3, 2019, from https://www.weforum.org/agenda/2017/02/ai-utopia-or-dystopia/.
Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., et al. (2017). Principles of Robotics: Regulating Robots in the Real World. Connection Science, 29(2), 124–129. https://doi.org/10.1080/09540091.2016.1271400.
Bostrom, N. (2001). What Is Transhumanism? Retrieved October 3, 2019, from https://nickbostrom.com/old/transhumanism.html.
Bostrom, N. (2005). A History of Transhumanist Thought. Journal of Evolution and Technology, 14(1), 1–25.
Bostrom, N. (2008). Letter from Utopia. Studies in Ethics, Law, and Technology, 2(1). https://doi.org/10.2202/1941-6008.1025.
Boyte, H. (2017). John Dewey and Citizen Politics: How Democracy Can Survive Artificial Intelligence and the Credo of Efficiency. Education and Culture, 33(2), 13. https://doi.org/10.5703/educationculture.33.2.0013.
Braidotti, R. (2018). A Theoretical Framework for the Critical Posthumanities. Theory, Culture & Society. https://doi.org/10.1177/0263276418771486.
Brennan, E. (2016). Techno-Apocalypse. In The Last Midnight: Essays on Apocalyptic Narratives in Millennial Media, 53, 206.
Brown, N., & Rappert, B. (2017). Contested Futures: A Sociology of Prospective Techno-Science. London: Routledge.
Burdett, M. (2014). The Religion of Technology: Transhumanism and the Myth of Progress. In C. Mercer & T. Trothen (Eds.), Religion and Transhumanism: The Unknown Future of Human Enhancement. Santa Barbara: ABC-CLIO.
Coenen, C. (2007). Utopian Aspects of the Debate on Converging Technologies. In G. Banse (Ed.), Assessing Societal Implications of Converging Technological Development (pp. 141–174). Berlin: Edition Sigma.
Coenen, C. (2014). Transhumanism and Its Genesis: The Shaping of Human Enhancement Discourse by Visions of the Future. Humana.Mente Journal of Philosophical Studies, 7(26), 35–58.
Cohen, P., & Feigenbaum, E. (2014). The Handbook of Artificial Intelligence. Oxford: Butterworth-Heinemann.
Copeland, J. (2015). Artificial Intelligence: A Philosophical Introduction. London: John Wiley & Sons.
Dart, T. (2018, October 2). 'Keep Robot Brothels Out of Houston': Sex Doll Company Faces Pushback. The Guardian.


Del Casino, V. (2016). Social Geographies II. Progress in Human Geography, 40(6), 846–855. https://doi.org/10.1177/0309132515618807.
Dickel, S., & Schrape, J. (2017a). The Logic of Digital Utopianism. Nanoethics, 11(1), 47–58. https://doi.org/10.1007/s11569-017-0285-6.
Dickel, S., & Schrape, J. (2017b). The Renaissance of Techno-Utopianism as a Challenge for Responsible Innovation. Journal of Responsible Innovation, 4(2), 289–294. https://doi.org/10.1080/23299460.2017.1310523.
Dvorsky, G. (2008). Better Living Through Transhumanism. Journal of Evolution and Technology, 19(1), 1–7.
Ferrari, F., Paladino, M., & Jetten, J. (2016). Blurring Human–Machine Distinctions: Anthropomorphic Appearance in Social Robots as a Threat to Human Distinctiveness. International Journal of Social Robotics, 8(2), 287–302. https://doi.org/10.1007/s12369-016-0338-y.
Fox, S. (2017). Beyond AI: Multi-Intelligence (MI) Combining Natural and Artificial Intelligences in Hybrid Beings and Systems. Technologies, 5(3), 38.
Fukuyama, F. (1999). The Great Disruption: Human Nature and the Reconstruction of Social Order. London: Profile.
Fuller, S. (2018). The Posthuman and the Transhuman as Alternative Mappings of the Space of Political Possibility. Journal of Posthuman Studies, 1(2), 151. https://doi.org/10.5325/jpoststud.1.2.0151.
Fuller, S., & Lipinska, V. (2014). The Proactionary Imperative: A Foundation for Transhumanism. London: Springer.
Gagnon, P. (2012). The Problem of Trans-Humanism in the Light of Philosophy and Theology. In J. Stump & A. Padgett (Eds.), The Blackwell Companion to Science and Christianity (pp. 393–405). London: Blackwell.
Gelles, D. (2009). Immortality 2.0. The Futurist, 43(1), 34–41.
Geraci, R. (2016). A Tale of Two Futures: Techno-Eschatology in the US and India. Social Compass, 63(3), 319–334. https://doi.org/10.1177/0037768616652332.
Goldberg, K. (2015). Robotics: Countering Singularity Sensationalism. Nature, 526(7573), 320.
Grassie, W. J., & Hansell, G. (2010). H±: Transhumanism and Its Critics. Philadelphia: Metanexus Institute.
Gruenwald, O. (2013). The Dystopian Imagination: The Challenge of Techno-Utopia. Journal of Interdisciplinary Studies, 25(1/2), 1.
Guckelsberger, C., Salge, C., & Colton, S. (2017). Addressing the "Why?" in Computational Creativity: A Non-Anthropocentric, Minimal Model of Intentional Creative Agency.
Hanson, R. (2016). The Age of Em: Work, Love, and Life When Robots Rule the Earth. Oxford: Oxford University Press.
Harris, M. S. (2014). The Myth of Techno-Transcendence: The Rhetoric of the Singularity. Doctoral dissertation, University of Kansas.


Hauskeller, M. (2012). Reinventing Cockaigne. Hastings Center Report, 42(2), 39–47. https://doi.org/10.1002/hast.18.
Havlík, V. (2018). The Naturalness of Artificial Intelligence from the Evolutionary Perspective. AI & Society. https://doi.org/10.1007/s00146-018-0829-5.
Hernández-Orallo, J. (2017). The Measure of All Minds: Evaluating Natural and Artificial Intelligence. Cambridge University Press.
Hughes, J. (2010). Contradictions from the Enlightenment Roots of Transhumanism. Journal of Medicine and Philosophy, 35(6), 622–640. https://doi.org/10.1093/jmp/jhq049.
Hurlburt, J., & Tirosh-Samuelson, H. (2016). Perfecting Human Futures: Transhuman Visions and Technological Imaginations. London: Springer.
Huxley, J. (2015). Transhumanism. Ethics in Progress, 6(1), 12–16.
Igrek, A. (2015). Beyond Malaise and Euphoria: Herbrechter's Critical Post-Humanism. Comparative and Continental Philosophy, 7(1), 92–97. https://doi.org/10.1179/1757063815z.00000000052.
Iuga, I. (2016). Transhumanism Between Human Enhancement and Technological Innovation. Symposion, 3(1), 79–88.
Jameson, S. M. (2018). Dystopian Film on the Edge of a Food Coma. New Cinemas: Journal of Contemporary Film, 16(1), 43–56.
Jotterand, F. (2010). Human Dignity and Transhumanism: Do Anthro-Technological Devices Have Moral Status? The American Journal of Bioethics, 10(7), 45–52.
Koch, T. (2010). Enhancing Who? Enhancing What? Ethics, Bioethics, and Transhumanism. Journal of Medicine and Philosophy, 35(6), 685–699. https://doi.org/10.1093/jmp/jhq051.
Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin.
Laurie, T. (2015). Becoming-Animal Is a Trap for Humans: Deleuze and Guattari in Madagascar. In H. Stark & J. Roffe (Eds.), Deleuze and the Non/Human (pp. 142–162). London: Palgrave Macmillan.
Lee, T. K. I. (2016). Cyberspace and the Post-Cyberpunk Decentering of Anthropocentrism. Doctoral dissertation, Georgetown University.
Lindemann, G. (2015). Social Interaction with Robots: Three Questions. AI & Society, 31(4), 573–575. https://doi.org/10.1007/s00146-015-0633-4.
Makridakis, S. (2017). The Forthcoming Artificial Intelligence (AI) Revolution: Its Impact on Society and Firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006.
Manyika, J., Chui, M., Bughin, J., Dobbs, R., Bisson, P., & Marrs, A. (2013). Disruptive Technologies: Advances That Will Transform Life, Business, and the Global Economy. San Francisco: McKinsey Global Institute.
Markoff, J. (2016). Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. New York: Harper Collins.


Marques, E. (2013). I Sing the Body Dystopic: Utopia and Posthuman Corporeality in P.D. James's The Children of Men. Ilha Do Desterro, (65), 29–48.
Mercer, C., & Trothen, T. (2014). Religion and Transhumanism: The Unknown Future of Human Enhancement. Santa Barbara: ABC-CLIO.
More, M. (2013). The Philosophy of Transhumanism. In M. More & N. Vita-More (Eds.), The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future (pp. 3–17). London: Wiley.
Murillo, A. P. (2014). Data at Risk Initiative: Examining and Facilitating the Scientific Process in Relation to Endangered Data. Data Science Journal, 12, 207–219.
Nilsson, N. (2014). Principles of Artificial Intelligence. Morgan Kaufmann.
Oh, C., Lee, T., Kim, Y., Park, S., & Suh, B. (2017). Us vs. Them: Understanding Artificial Intelligence Technophobia Over the Google DeepMind Challenge Match. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems.
Ornella, A. (2015). Towards a 'Circuit of Technological Imaginaries': A Theoretical Approach. In D. Pezzoli-Olgiati (Ed.), Religion in Cultural Imaginary – Explorations in Visual and Material Practices (pp. 9–38). Nomos Verlagsgesellschaft.
Ortega, F. (2009). The Cerebral Subject and the Challenge of Neurodiversity. Biosocieties, 4(4), 425–445. https://doi.org/10.1017/s1745855209990287.
Papacharissi, Z. (2018). A Networked Self and Human Augmentics, Artificial Intelligence, Sentience. London: Routledge.
Papadopoulos, D. (2010). Insurgent Posthumanism. Ephemera: Theory & Politics in Organization, 10(2), 134–151.
Persson, I., & Savulescu, J. (2010). Moral Transhumanism. Journal of Medicine and Philosophy, 35(6), 656–669. https://doi.org/10.1093/jmp/jhq052.
Peters, T. (2005). The Soul of Trans-Humanism. Dialog: A Journal of Theology, 44(4), 381–395. https://doi.org/10.1111/j.0012-2033.2005.00282.x.
Poggiali, L. (2017). Digital Futures and Analogue Pasts? Citizenship and Ethnicity in Techno-Utopian Kenya. Africa, 87(2), 253–277. https://doi.org/10.1017/s0001972016000942.
Porpora, D. (2017). Dehumanization in Theory: Anti-Humanism, Non-Humanism, Post-Humanism, and Trans-Humanism. Journal of Critical Realism, 16(4), 353–367. https://doi.org/10.1080/14767430.2017.1340010.
Raz, A. (2009). Eugenic Utopias/Dystopias, Reprogenetics, and Community Genetics. Sociology of Health & Illness, 31(4), 602–616. https://doi.org/10.1111/j.1467-9566.2009.01160.x.
Richardson, K. (2016). The Asymmetrical 'Relationship': Parallels Between Prostitution and the Development of Sex Robots. ACM SIGCAS Computers and Society, 45(3), 290–293.


Roussel, M. (2018). Toward a Post-Human Era. Architecture Philosophy, 3(1), 77–93.
Rus, D. (2015). The Robots Are Coming. Foreign Affairs, 94(4), 2–6.
Russell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, 36(4), 105–114.
Russell, S., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Malaysia: Pearson Education Limited.
Saage, R. (2013). New Man in Utopian and Transhumanist Perspective. European Journal of Futures Research, 1(1). https://doi.org/10.1007/s40309-013-0014-5.
Sauppé, S., & Mutlu, B. (2015). The Social Impact of a Robot Co-Worker in Industrial Settings. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3613–3622). ACM.
Silberman, S. (2015). Neurotribes: The Legacy of Autism and the Future of Neurodiversity. Penguin.
Sirius, R. U., & Cornell, J. (2015). Transcendence: The Disinformation Encyclopedia of Transhumanism and the Singularity. Red Wheel Weiser.
Steels, L., & Brooks, R. (2018). The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents. London: Routledge.
Suzuki, T., Yamanda, S., Kanda, T., & Nomura, T. (2015). Influence of Social Avoidance and Distress on People's Preferences for Robots as Daily Life Communication Partners. In New Friends.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Thorndike, E. (2017). Animal Intelligence: Experimental Studies. Routledge.
Verdoux, P. (2009). Transhumanism, Progress and the Future. Journal of Evolution and Technology, 20(2), 49–69.
Višňovský, E. (2015). Homo Biotechnologicus. Human Affairs, 25(2). https://doi.org/10.1515/humaff-2015-0019.
Wang, P., Liu, K., & Dougherty, Q. (2018). Conceptions of Artificial Intelligence and Singularity. Information, 9(4), 79.
Wolfe, C. (2010). What Is Posthumanism? Minneapolis: University of Minnesota Press.
Woolley, S., & Howard, P. (2016). Automation, Algorithms, and Politics | Political Communication, Computational Propaganda, and Autonomous Agents – Introduction. International Journal of Communication, 10(9). Retrieved from http://ijoc.org/index.php/ijoc/article/view/6298/1809.
Yuen, J. (2018, September 9). 'NICE SKIN': What It's Like Inside a Sex Doll Rental Business. Toronto Sun.
Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books.

CHAPTER 2

Evolving Beyond Human Relations

Imagine for a moment coming to work in the near future. Maybe you are walking to your office or merely switching on your computer to work from home. In either case, your co-workers will be as important as ever; many will also likely not be human. Robots and AI will revolutionise not only our economy but our everyday experience of it. Our organisations will rely on their skills and intelligence, and so will we as their human colleagues. They will be our unseen "assistants", helping us to schedule our days and do mundane tasks. They will also be our electronic "eyes" and "ears", allowing us to physically attend meetings from a distance. Indeed,

You already have an idea of what the future of work looks like because you are likely already working with bots, artificial intelligence, and machine learning. They are increasingly being incorporated into everything from work stations to websites to cloud platforms. (Morgan 2017: n.p.)

© The Author(s) 2020 P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5_2

This development raises deeper questions than simply changing workplace relations. It also challenges the idea that the economy and society should primarily serve human needs and desires. Further, it sets out an interesting challenge to the very notion that we are principally human. More precisely, it offers the opportunity to weaken and transcend our investment in being "human" for a more creative and less categorical process of self-formation. Quoting Professor Norman Holland from his famous 1978 essay "Human Identity": "Fundamental reality rests not on laws 'out there' but on the creative, formative relation self and other that gives rise to the 'in here' and 'out there'". Updating this significant insight to the present age, our interactions with "intelligent" non-humans (who potentially have their own styles and identities) will form the basis of new inner worlds and external realities that are neither wholly human nor non-human.

The second chapter explores the need to evolve beyond current anthropocentric views in order to prepare for a "trans-human" future. A seeming constant in contemporary society is the central place accorded to the human. While issues of the environment and animal welfare are increasingly prominent, humans remain the primary focus of politics, culture and the economy, trumping ethical concerns that may impede their 'progress'. It is their needs that continue to reign supreme and around which all else must ultimately revolve. However, as A.I. develops and its potential for consciousness grows, humans will be forced to undergo their own Copernican revolution where they can no longer place themselves at the centre of the social universe.
What is crucial is not to simply reinvent humanism but to develop a new paradigm of transhumanism. This chapter will start by highlighting how culturally we construct ourselves as being in an "anthropocene" age dominated by human perspectives and desires. From this it will then turn to critical approaches such as "new materialism" to emphasise the possibilities and importance associated with "non-human" configurations in our natural and social environment. In this spirit, it will show how struggles with animal ethics and environmental issues are fundamental to this human evolution and key to preparing present day societies for a future of greater human and non-human contact and integration. It will furthermore highlight the role that serious gaming and virtual reality can have in bringing about this more expansive human vision.

Looking Past a Human-Centred World

It, of course, must be highlighted that humans have always worked alongside "non-humans". The use of animal labour has been crucial to human prosperity and progress throughout history. Further, some of the greatest economic leaps occurred through the development of technology for better harnessing this animal power. The radical innovations to the literal harness of plough animals in the Middle Ages were fundamental to the explosion in agricultural productivity. The failure to acknowledge the importance of the role of animals represents our continued anthropocentrism. Throughout history animals were used as the basis for establishing our "humanity", for differentiating ourselves from supposedly less intelligent and skillful beings (see Fudge 2000). To this end, anthropocentrism remains more than just a "misunderstood problem"; it is a means for valuing human intelligence and welfare above all others (Kopnina et al. 2018). This prioritization of human needs is already extending to our present day relationship with technology. No matter how much artificial intelligence advances, its main aim continues to be to serve humanity. This is already apparent in our current daily interactions with such "disruptive" technology. AI and robotics are meant to be "consumed", exploited to make human lives more convenient and easier (Mills 2018). These relationships will soon become ever more natural, an unquestioned part of our daily existence "from cooking to art" (Templeton 2018). Here there is a double form of social naturalization at work: first, rendering these new human and technological interactions unremarkable; and second, more fundamentally, reinforcing the "master/slave", "employer/employee" power dynamic currently shaping most contemporary economic relationships. At stake, therefore, is so much more than creating useful technologies.
If such technologies will in fact be "intelligent", this raises a host of ethical considerations that will shape our shared future. It is telling that the majority of the focus, in this respect, has centred on making robots and AI more moral (Danielson 2002). Nevertheless, world leaders and economic elites are turning their attention to the ethical issues robots and non-human intelligence will raise as they become greater parts of the workforce in the near future (Bossmann 2016). These worries gesture to fundamental philosophical questions about the deeper ethics of AI (Bostrom and Yudkowsky 2014). Back at the prospective workplace, non-human intelligence is already beginning to transform employment relations, thus becoming a growing concern for HR departments (Nunn 2018). While HRM is traditionally, and often rightly, criticized for being less concerned about humans than about maximizing profit and productivity, in this case there is an increasing emphasis on how AI will enhance the lives of employees (Pickup 2018). Significantly, there are signs that the effect of HR on AI and robots will be the same as it is with humans, focused ultimately on ensuring their compliance with existing workplace norms and, critically, power relations (Mettler 2018).


Managing Human Relations

The history of human relations is vast and contains multitudes. In the contemporary era, there has been a shift toward imagining humans as productive resources to be managed. Human resource management (HRM), in this regard, is framed in economic terms, emphasizing the need to maximize our value as individuals and in relation to one another (Legge 1995). The point here is not to say that such HRM perspectives are necessarily exhaustive of modern understandings of human relations, rather that they represent a significant perspective for shaping how we view human worth, often quite literally (Sisson 1993). The degree to which HRM can be considered ethical in practice is debatable (Legge 1998). More precisely, it is imperative to understand its moral and ethical suppositions, as they may shape mainstream approaches to emerging "transhuman" relations. HRM has a history spanning back more than a century. Indeed, across its different perspectives and various historical permutations,

The academic study of human resource management dates back to the late 19th century, with the first college courses and textbooks appearing just after 1900. Drawing on its early disciplinary background in economics and industrial psychology, the history of the HR field is rooted in the study of individual differences, and in the design and implementation of recruitment, appraisal, and compensation practices based on those differences. (Huselid 2011: 309)

It reflects both a set of concrete practices and a symbolic view of firms and human interaction, meeting concrete managerial demands for efficiency and productivity with discourses and techniques meant to elicit employee loyalty to their organisation and engagement with their job. Hence,

The art and science of managing human resources primarily means utilizing and developing the organization members in the most efficient ways. At the core is a wish to get the structure and skill level optimally distributed according to the demands of the production process, at the same time the organization members are supposed to be committed to the organization in general and to the strategic aims in particular…One can, in fact, see the interests of corporate cultures as an indication of the importance assigned to the strategic management of human relations. (Berg 1986: 559)


To a certain extent, HRM can be seen as a progressive perspective, insofar as it is continually trying to improve how it treats, processes, and maximizes the value of human capital (Langbert and Friedman 2002). Nevertheless, it represents a definite human-based culture centred on capitalist work and the firm, one replete with its own history, rituals, and myths. Organizations, in this regard, come to embody the idea of "being human" with their own "personality" and "personal histories". According to Professor Wendy Ulrich (1984: 118),

Organizations, like people, establish personalities and identities, both modelling one another and by distinguishing themselves from one another as they react to environmental opportunities and challenges around them. These processes contribute to the organization's unique personality—its culture. Organizations can use their knowledge of one another as a basis of comparison and distinction, or they can concentrate on developing their own essential characteristics within their respective environments.

Yet this culture is both emboldened and challenged by the changing nature of work linked to technology (Burke and Ng 2006). Anthropocentrism is central to the history and reigning ideology of HRM. As its name suggests, it is centred on human interactions and value. These perspectives are being subverted, though, by new visions of less anthropocentric workplaces that combine HRM values with emerging ecological concerns. Significantly,

Unlike the risks of earlier civilizations, modernization risks are rooted in ecologically destructive industrialization and are global, pervasive, long term, imperceptible, incalculable, and often unknown…Ecological degradation contradicts the interests that advance industrialization, and it has differential impacts on people. Differential distribution of risks puts people in different social risk positions. Risks cross economic class, gender, ethnic, generational, and national boundaries. Risk positions exacerbate inequalities based on these variables, but wealth or power do not provide complete protection from modernization risks. (Shrivastava 1995: 121)

Over the last several decades, they have begun showing the human-based limits of HRM. It is consequently argued that

Clearly, the foundational concepts and underlying philosophies of the environmental management and ecocentric responsibility paradigms are incommensurable. The environmental management paradigm is anthropocentric; its proponents continue to elevate human beings to a dominant position over nature….Rather than viewing the environmental crisis as a challenge to, and consequential anomaly of, the dominant social paradigm, concepts and practices within environmental management are retrofitted to perpetuate this reigning paradigm. (Purser et al. 1995: 1074)

For this reason, Purser et al. contend that there needs to be a more democratic "ecocentric" organization that takes into account the needs of both humans and the environment, arguing that

The movement toward an ecocentric organization paradigm is not inevitable: it will require a serious debate regarding how different organization-environment relationships should be organized. This debate will involve difficult choices, new types of learning, and a diffusion of democratization processes both within and across organizations at both local and global levels of society. (Ibid.: 1083)

More recently, there have been attempts to reimagine the "anthropocentric workplace" that draw on, and to a certain extent transcend, HRM ideas and practices (Bettoni et al. 2014). However, these expansive perspectives risk challenging anthropocentrism while reinforcing ethically troubling capitalist values of control and exploitation. If not handled carefully (as the above perspectives largely are), the "self" can remain one of consumption, a corporate-based identity that is continually consuming and being consumed by new workplace regimes aimed at maximizing its value (Dale 2012). There is relatively little consideration of the actual views of workers, or of the organisational structures that would best foster positive worker-related outcomes, within dominant HRM thinking (Guest 2002). The past several years have witnessed the growth of new technologies, such as those associated with neuroscience, used to influence HR practices and the employees they are ostensibly meant to help and ultimately discipline (Cheese and Hills 2016). At a deeper level, these efforts gesture toward an overriding attempt to use HRM to shape human needs. Driving these processes is the management of human capital through discourses of security and the disciplining of the "circulation" of human interactions inside and outside of work. As Foucault (2008: 241) noted in his lectures on neoliberalism:


The individual’s life must be lodged, not within a framework of a big enterprise like the firm or, if it comes to it, the state, but within the framework of a multiplicity of diverse enterprises connected up to and entangled with each other, enterprises which are in some way ready to hand for the individual, sufficiently limited in their scale for the individual’s actions, decisions, and choices to have meaningful and perceptible effects, and numerous enough for him not to be dependent on one alone. And finally, the individual’s life itself—with his relations to his private property, for example, with his family, household, insurance, and retirement—must make him into a sort of permanent and multiple enterprise.

This has led Weiskopf and Munro (2012: 693) to maintain that "Contrary to enclosing and fixing individuals, the idea behind the management of human capital is to encourage their controlled circulation throughout all forms of social networks. It is this idea, which distinguishes contemporary HRM discourse…Social relations and contacts are themselves seen as part of human capital that needs to be mobilized in the search for competitive advantage". The desire to soften this managerialism, this reduction of humans to mere circulated and exchanged human capital, falls prey to reproducing anthropocentric tropes of being more "human-centred" (Bolden-Barrett 2018). These approaches remain, furthermore, focused on associating employee creativity with profitable innovations rather than with workers' own personal development or new ways of organising the workplace (Jiang et al. 2012). Critical perspectives on HRM thus stress how much it limits human possibility to market-friendly values and inter-relationships. Even before the new millennium, there were calls to "save the subject of HRM" from "inhuman resource management". "Our idea is that human resource management has a priori claimed that it is human, even humanistic, and that it is oriented to the good and the wellbeing of people called personnel", according to Steyaert and Janssens (1999: 189).

In this claim we see a constant process of excluding what can be considered human, and thus, a self-fulfilling defining and conceiving of the human. It is excluding the attention for the inhuman both as creating the boundary distinguishing it from the human and as part of the human.

These attempts at further humanizing what may be an inherently "inhuman" HRM project were met with demands for transforming "human resource management to human dignity development" (Bal and de Jong 2017).

2  EVOLVING BEYOND HUMAN RELATIONS

It also brought with it fresh historical investigations of how HRM contributed to the creation of the human as "homo economicus" (Read 2009). These critiques reveal the danger of extending an HRM approach to emerging "transhuman relations". Treating non-human intelligence as a new form of human capital to be managed and exploited would lead to similar cultures of control, competition, and exploitation. Even more troubling, just as it has shaped the subjectivity of human workers, it would overwhelmingly influence the development of AI and robotics. It would confine human and non-human interactions to the same market-based HRM mindset and practices that are currently dominant. There is, hence, an urgent need to challenge the very foundations of the anthropocentrism and HRM ideologies currently underpinning understandings of human relations so that this history is not repeated in the creation of "transhuman relations".

Living in an Anthropocene World

The continuing dominance of HRM reflects and reinforces an ongoing human-centred view of society and the world. This narrow anthropocentric perspective faces a potential challenge in a growing concern with the natural environment. This ecological focus has taken on particular urgency as the threat of climate change looms ever larger and nearer. Indeed, experts now claim we have entered the "Anthropocene Age" (Zalasiewicz et al. 2011). Positively, this shift has led much of humanity to reconsider their relationship to nature, highlighting values of conservation and sustainability over simple blind consumption (Steffen et al. 2007). Yet it has also, perhaps unwittingly, strengthened the belief in human mastery (Steffen et al. 2015). This could, in turn, profoundly shape how humans treat an emerging "transhuman" world. While often touted by both scholars and popular commentators, it is not always clear what is precisely meant by the term "Anthropocene" and why it is significant. To this end,

Human activity is now global and is the dominant cause of most contemporary environmental change. The impacts of human activity will probably be observable in the geological stratigraphic record for millions of years into the future, which suggests that a new epoch has begun…Furthermore, unlike other geological time unit designations, definitions will probably have effects beyond geology. For example, defining an early start date may, in political terms, 'normalize' global environmental change. Meanwhile, agreeing a later start date related to the Industrial Revolution may, for example, be used to assign historical responsibility for carbon dioxide emissions to particular countries or regions during the industrial era. More broadly, the formal definition of the Anthropocene makes scientists arbiters, to an extent, of the human–environment relationship, itself an act with consequences beyond geology. Hence, there is more interest in the Anthropocene than other epoch definitions. (Lewis and Maslin 2015: 171)

P. BLOOM

Culturally, it invokes apocalyptic fears of a coming human-produced geological end times replete with natural disaster and untold human suffering. It also promises to force humanity to radically transform the existing global economy and society, turning away from a destructive free market toward more sustainable alternatives. In a less revolutionary vein, it fosters ideals of ecological stewardship, reimagining modern humanity in a caretaking role for the natural environment (Steffen et al. 2011). All these disparate views nonetheless share an abiding and underlying anthropocentric perspective. More precisely, they tend to reify capitalist human nature, rendering it as permanent without recognising that the very acknowledgement of our planetary impact as a species could lead to our fundamental and irrevocable transformation. In this respect,

Realising that climate change is 'anthropogenic' is really to appreciate that it is sociogenic. It has arisen as a result of temporally fluid social relations as they materialise through the rest of nature, and once this ontological insight—implicit in the science of climate change—is truly taken onboard, one can no longer treat humankind as merely a species-being determined by its biological evolution. Nor can one write off divisions between human beings as immaterial to the broader picture, for such divisions have been an integral part of fossil fuel combustion in the first place. (Malm and Hornborg 2014: 66)

At stake above all else, arguably, is the sustainability of the human species. It is a rather simple if brutal calculus, in this respect—by preserving the natural environment we effectively preserve ourselves (Hamilton 2017). Yet as alluded to above, it also catalyzes a complete rethinking of how we could and should exist on this planet (Gibson et al. 2015). It can support fresh takes on feminism and in doing so deconstruct the patriarchal values of mastery and control that not only helped produce the Anthropocene but also the anthropocentric worldview from which it is derived. In the words of the celebrated theorist Professor Rosi Braidotti (2017: 29):

The debate on and against humanism, pioneered by feminist, postcolonial, and race theorists, despite its multiple internal fractures and unresolved contradictions, appears as a simpler task than displacing anthropocentrism itself. The Anthropocene entails not only the critique of species supremacy—the rule of Anthropos—but also the parameters that used to define it. 'Man' is now called to task as the representative of a hierarchical and violent species whose centrality is challenged by a combination of scientific advances and global economic concerns. Neither 'Man' as the universal humanistic measure of all things nor Anthropos as the emblem of an exceptional species can claim the central position in contemporary, technologically mediated knowledge production systems.

It further opens up novel opportunities for political and social movements based on notions of "ecocentric organising". Specifically, it rejects human-centred ontologies and epistemologies in favour of "object oriented philosophies" where humans are just one object in a wider social and material environment of other objects. Hence

object-oriented ecosophy facilitates explaining and assigning different agencies depending on the degree of autonomy, the release of objects from an instrumental ethical rationale, and the reasoning behind exercising caution around objects and encouraging their conservation. When the suggested qualities are assumed in organising activities, objects become capable of unfolding in their own ways (autonomy), acquire rights to exist on their own (intrinsicality), and are respected for what they are (uniqueness). (Heikkurinen et al. 2016: 712)

This explicitly non-anthropocentric view of organisations and organising raises renewed questions about the actual status of the human in the Anthropocene (Chernilo 2017). The perpetuation of a human-centred politics is now a recipe for deepening our ecological disaster (Clark 2014). The recognition that human relations as they are currently constituted are ecologically and politically unsustainable has catalyzed efforts to move beyond the narrow ideological horizons of the Anthropocene. At the macro level, legal scholars have considered what less "human-centred" laws and judicial philosophies would look like in practice. In particular, it is maintained that "any ethically responsible future engagement with 'anthropocentrism' and/or with the 'Anthropocene' must explicitly engage with the oppressive hierarchical structure of the anthropos itself—and should directly address its apotheosis in the corporate juridical subject that dominates the entire globalised order of the Anthropocene age" (Grear 2015: 225). It, moreover, shifts the notion of responsibility from sustaining humanity and the marketplace to that of the earth and its diverse forms of life more generally (Alberts 2011). Concretely, this has given rise to emerging demands to "decolonise" design so that it is less anthropocentric. Here

Design came to name modernity's way of worlding the world. What is at stake in decolonizing design is our relation to earth, and the dignifying of relational worlds. The task of decolonizing design brings us to a three-folded path: to understand modernity´s way of worlding the world as artifice, as earthlessness, to understand coloniality´s way of un-worlding the world, of annihilating relational worlds and, to think the decolonial as a form of radical hope for an ethical life with earth. (Vazquez 2017: 77)

Put forward, additionally, is a novel environmental perspective and politics that calls for a "post-anthropocentric paradigm shift" (Ferrando 2016). Tellingly, this desire to transcend a "human-centred" world has not been extended to predicted dramatic economic changes. The challenges posed by Industry 4.0 certainly reveal the growing importance of technologically enhanced non-human intelligence. However, nascent forms of "eco-criticism" tend to largely ignore AI and robotics (Clark 2015). Eco-criticism seeks to pluralise world history so that humans are not the only or even the main protagonists. These significant historical retellings could also be applied to the narrating of a "post-human future" (Taylor 2017). These could build on efforts to conceive of a new epoch that challenges the perceived "dawn of the human-influenced age" (Carrington 2016). It would also set the foundations for a fresh anthropology focused on exploring the rituals and "personal views" of individuals and communities as they grapple with the opportunities and challenges of an increasingly "post-human" environment (Latour 2017). Critically, there is a renewed need for radical reconsiderations of anthropocentrism and the Anthropocene as they relate to our social, economic, and political futures. The Anthropocene has already spurred new Marxist perspectives (Foster 2016). Indeed, the very notion of the "Anthropocene" has been challenged in favour of the "Capitalocene" (Altvater et al. 2016). Additionally, it has led to a reconsideration of established international relations values such as "security" (Fagan 2017). At stake is the social creation of no less than a "new earth" (Saldanha and Stark 2016). Philosophically, it demands novel ontologies and points to once inconceivable possibilities.

New Materialism for "Smart" Times

Much of the attempt to transcend an anthropocentric worldview centres predominantly on values of sustainability. In a new millennium defined by ecological, economic, and political crises this emphasis on simple survival is eminently understandable. How can humanity preserve its environment and itself as a species? Yet this imperative to "look beyond" the human is also one of profound opportunity. More than mere enhancement, acknowledging the importance of non-human intelligence and its needs can catalyze novel forms of material and social agency that celebrate these differences (Bennett et al. 2010). By focusing on materiality, the flux and change of people, animals, and things in their natural element, traditional mind-body dualisms are overcome and inhibiting social categories begin to be discarded. "New Materialist" theories thus show how

cultured humans are always already in nature, and how nature is necessarily cultured, how the mind is always already material, and how matter is necessarily something of the mind. New materialism opposes the transcendental and humanist (dualist) traditions that are haunting a cultural theory that is standing on the brink of both the modern and the post-postmodern era…new materialism allows for the conceptualisation of the travelling of the fluxes of matter and mind, body and soul, nature and culture, and opens up active theory formation. (Van der Tuin and Dolphijn 2010: 153)

Further revealed is the "fragility" and ultimate changeability of things (Connolly 2013). It is perhaps tempting to impute to these new materialist perspectives a sense of underlying passivity. Surely, if all is flux and flow, there is relatively little room for agency, human or otherwise. All life exists at the whim of a dynamic and unpredictable natural world, and efforts to direct these flows are at best futile and at worst dangerous hubris. Despite these possible first impressions, this prioritization of matter over mind, so to speak, helps shift the emphasis from survival to freedom. The social attachment to material things breeds an unhealthy embrace of the false immutability of existing social relations—one that posthumanism in principle can positively undermine. While ultimately somewhat critical of this perspective, the scholar David Chandler (2013a: 522) nicely summarises the emancipatory potential of posthumanism:

The recognition of the attenuated subject/object distinction in an age of complexity radically alters or inverses modernist understandings of the relationship between freedom and necessity. The declaration of the end of the struggle to emancipate the human from external necessity is declared as genuine, post-human, emancipation. Emancipation in this case is a project of work on the human itself, rather than the external world, once we accept that the previous understanding of the linear liberal telos—of the ongoing war for domination, understanding and control—was in fact dehumanising and divisive. New materialism argues that we can emancipate ourselves once we throw off the shackles of humankind being endowed with divine purpose, reason or capacities for mastery. In recognising the limits of human capacities and appreciating the agency and effects of non-human others, we can then allegedly unleash our 'inner' human and become what we 'are', no longer alienated from each other and the world we inhabit.

By focusing on concrete capabilities rather than any essentialized human nature, the very conception and practice of human agency becomes available for expansion (see Chandler 2013b). The exciting potentialities of new materialism, its attention to the multitude of creative possibilities in the natural flow of things, are grounded in an equally materialist politics of securing human and non-human welfare (Hanlon and Christie 2016). By contrast, market-based corporate cultures turn desires for freedom into disciplining mechanisms for greater managerial control and socio-economic inequality. In his landmark article on the subject, "Strength is ignorance; slavery is freedom: managing culture in modern organizations", Professor Hugh Willmott (1993: 526) declares that

Like the market that allows sellers of labour to believe in their freedom, corporate culture invites employees to understand that identification with its values ensures their autonomy. That is the seductive doublethink of corporate culture: the simultaneous affirmation and negation of the conditions of autonomy. In corporate culturism, respect for the individual is equated with complying with the values of the corporate culture. To challenge the values enshrined in this 'respect' is 'a crime against the culture'.

Eschewing such mainstream values opens up fresh vistas for material becomings, the exploration of natural possibilities. Here culture is a platform for breaking free from economic and social barriers, a letting go of the supposedly sacred past for novel lived futures. The famed critical thinker Elizabeth Grosz (2010: 152–153), drawing on the earlier philosophies of Henri Bergson, proclaims

Freedom is thus not primarily a capacity of the mind but the body: it is linked to the body's capacity for movement, and thus its multiple possibilities of action. Freedom is not an accomplishment granted by the grace or good will of the other but is attained only through the struggle with matter, the struggle of bodies to become more than they are, a struggle that occurs not only on the level of the individual but also of the species. Freedom is the consequence of indetermination, the very indetermination that characterizes both consciousness and perception. It is this indetermination—the discriminations of the real based on perception, the discriminations of interest that consciousness performs on material objects, including other bodies—that liberates life from the immediacy and givenness of objects but also from the immediacy and givenness of the past. Life is not the coincidence of the present with its past, its history; it is also the forward thrust of a direction whose path is clear only in retrospect. Indetermination liberates life from the constraints of the present. Life is the protraction of the past into the present, the suffusing of matter with memory, which is the capacity to contract matter into what is useful for future action and to make matter function differently in the future than in the past. The spark of indetermination that made life possible spreads through matter by means of the activities that life performs on matter. As a result, the world itself comes to vibrate with its possibilities for being otherwise.

It is easy to see, in turn, the deep affinities between posthumanism and new materialism. Technology is viewed as a natural thing, as much composed of matter as any human, animal, or plant. Philosophical debates over what constitutes life are exchanged for profound and practical meditations on the ongoing material interactions of these things as well as what they give birth to (Srnicek 2017). Underlying this radical materialism is a new ontology for conceiving existence itself. Returning to the insights of Braidotti (2006: 201), she uses the figure of the nomad to articulate this simultaneously non-anthropocentric and posthuman philosophy of existence, contending

In nomadic thought, a radically immanent intensive body is an assemblage of forces, or flows, intensities and passions that solidify in space, and consolidate in time, within the singular configuration commonly known as an 'individual' self. This intensive and dynamic entity does not coincide with the enumeration of inner rationalist laws, nor is it merely the unfolding of genetic data and information. It is rather a portion of forces that is stable enough to sustain and to undergo constant, though non-destructive, fluxes of transformation.

Paradoxically, it is only by becoming renaturalized—in becoming part of an evolving and dynamic material world—that humans and their creations can become socially denaturalized as cultural subjects (Jackson 2013). At stake is a rebooting of "human freedom" that incorporates non-human matter and agency (Ferrando 2013). This requires and benefits from a historical revisionism that narrates new stories of non-human-centred progress. "Transhumanism is part a scientific endeavor and part an intellectual and cultural movement that raises deep questions about the identity and the future of the human species", according to the scholar Fabrice Jotterand (2010: 620).

The mistake, I think, would be to polarize the issues in terms of the bioconservative resistance to embark on the enhancement train and the blind acceptance of the transhumanist agenda. We cannot escape the realities of technological and scientific progress. It is part of our nature to discover, invent, and improve our lives and environment. It is also part of our responsibility to assess how these emerging technologies could affect us and future generations.

Likewise, the perceived predictability that so often accompanies popular notions of digital technology must be supplanted with an acceptance that these "transhuman" futures remain unknown and as of yet unwritten (Mercer and Trothen 2014). Instead, renewed attention should be paid to the fundamental conditions and concrete demands posed by the desire for an "emancipatory posthumanism". These efforts provide the foundations for a theoretical and practical overhaul of existing culture and politics. Critical is a reappraisal of power stressing the vitality and possibilities of "things" beyond human creation, exploitation, and consumption (Bennett 2010). Revealed is the "empirical falsity of the human subject", the use of material evidence to reify humanity and the physical world (Schmidt 2013). Rather than seek out a "science" of the social, the underlying laws of the political, it is instead crucial to discover what "agentic capabilities" exist and for what historical reasons (Coole 2013). In practice, this gestures toward an entirely new edifice for understanding democracy and legitimacy, one that takes seriously "bodies in action" and dominant forms of "corporeal agency" (Krause 2011). It is in letting go of the humanity that is known that new versions of human existence can emerge and thrive. Established elements such as language, once defining what it means to "be human", are retranslated into material and cultural resources for "assembling the posthuman subject" (de Freitas and Curinga 2015). This means conceptually interrogating how different theoretical traditions conceive agency and its importance (Jones 1996). Entailed is a critical genealogy of past human agency and its future possibilities (Pickering 2001). On this basis, it is possible to imagine and implement "posthuman agency", liberating humans and non-humans from the narrow ideological confines of our current status quo for a more emancipatory "transhuman" world (Cudworth and Hobden 2017).

Going Beyond a Human Centred World

There is a growing awareness that environmental degradation and technological advancements will challenge the dominant anthropocentric worldview (Hayward 1997). In their different ways, they demand that humans no longer be seen as the central actor of the past, present, or future (Norton 1984). Yet a more fundamental shift must also take place: the "unsettling" of what it means to be "human". Thus,

Questioning anthropocentrism is far more than an academic exercise of debating the dominant cultural motif of placing humans at the center of material and ethical concerns. It is a fertile way of shifting the focus of attention away from the problem symptoms of our time…to the investigation of root causes. And certainly the dominant beliefs, values, and attitudes guiding human action constitute a significant driver of the pressing problems of our day. (Crist and Kopnina 2014: 377–378)

Equally at stake is the questioning of who "we" are and the forging of shared identities with intelligent beings that would not traditionally be categorised as human (Nass et al. 1995). "New materialism" perspectives shine intellectual light on the significance of re-orienting traditional notions of "nature". Just as it seeks to undermine the Enlightenment mind-body duality, so too does it allow for a fresh understanding of the "natural world" as more than just a contrast with "artificial" human creations. These approaches point to the possibility of going "beyond anthropocentrism" (Campbell 1983). Moreover, they reveal a sharpening dividing line, theoretically and practically, between "anthropocentrism" and "non-anthropocentrism". Challenging Norton's "convergence hypothesis", which maintains that anthropocentric and non-anthropocentric positions ultimately lead to the same environmentally responsible human behaviours, the scholar Katie McShane (2007: 169) argues that non-anthropocentric

Ethics legitimately raises questions about how to feel, not just about which actions to take or which policies to adopt. From the point of view of norms for feeling, anthropocentrism has very different practical implications from nonanthropocentrism; it undermines some of the common attitudes—love, respect, awe—that people think it appropriate to take toward the natural world.

Ultimately, to truly transcend an anthropocentric worldview would involve radically changing the "practices and politics of nature" (Koensler and Papa 2013). Equally significant, though, is challenging the very idea of "human nature". Anthropocentrism and posthumanism can both fall prey to assuming that humanity is a fixed historical object whose understandings of what it means to be "human" are unchanging. The "post-structuralist" turn deconstructs this potential tendency toward historical essentialism, revealing that the very definition of "humanity" and the notion of a shared "we" are as much products of the times as anything essential (Bell and Russell 2000). Likewise, this opening up of history to different interpretations of "humanity" allows for a more nuanced and historically sensitive treatment of our evolving "speciesism" (O'Neill 1997). In this respect, non-humans are granted a greater place in our historical evolution, creating new narratives tracing out how the social evolution of the "human" was always intertwined with the dominant ideas about and treatments of animals (Steiner 2010). Through acknowledging that humanity is not a static concept, nor the abiding "speciesism" that commonly defines us historically, it becomes possible to recognize that anthropocentrism and a human-centred perspective are not inevitable either (Katz 2000). The radical subversion of a human-centred history lends itself to an equally radical reconfiguration of an anthropocentric present. It is already increasingly accepted that anthropocentrism is a cultural as opposed to a natural phenomenon. Children do not immediately embrace or accept a human-centred worldview (Herrmann et al. 2010). To this end, there are renewed attempts at "placing firm limits on childhood anthropocentrism". Evidence from experiments with young people by the psychological researchers Sandra Waxman and Douglas Medin (2007: 29) reveals that

Children's notions of the biological world are tuned by their direct experience and by community-wide discourse…an individual's 'experience' within the biological domain includes not only their habitual contact or familiarity with biological entities, but also the culturally prevalent models about the biological world and about the relation between humans and the rest of nature. Anthropocentrism may not be an inevitable result of urban children's greater familiarity with humans, but rather may be a consequence of their sensitivity to discourse supporting an anthropocentric model.

This also applies to contemporary understandings of animal intelligence, whereby adult humans must reprogramme themselves to see animal minds through a lens that does not immediately privilege human intelligence (Povinelli 2004). An analogous process of cultural reprogramming must also be applied to artificial intelligence. Transcending individual views, this calls for a global awakening to the power of non-human intelligence. For this purpose, "the world" should also not be understood in the singular. Rather, it contains social multitudes—a diverse range of social relationships and potentialities (Giri 2006). Consequently, this pluralistic view of the world requires an accompanying "cosmopolitan neuroethics" that acknowledges and celebrates different ways of being intelligent, encompassing humans and non-humans alike (Shook and Giordano 2014). Nevertheless, the role of technology in the expansion and pluralisation of intelligence is commonly met with greater anxiety than excitement (Kansteiner 2017). It does, though, hold out the opportunity for an "end of humanity" as we currently know it, an international agenda based on posthumanist principles and desires (Pin-Fat 2013). These point the way to a technologically advanced form of "interspecies cosmopolitanism" (Mendieta 2015). Opening up, in turn, is a progressive and empowering "non-human" future. Emerging alongside it are attempts to create an ethical "common future" that builds on and ultimately transcends anthropocentrism (Strang 2017). Here history is a prelude to moving beyond a human-centred reality (Domanska 2010). The very "self-image" of humanity will potentially shift in light of global trends in transhumanism and neurotechnology (Benedikter et al. 2010). It permits a rethinking of seeming historical inevitabilities based on the potentialities created by emerging technological "human enhancements" (Tomasini 2007). Fundamentally, it reveals humanity to be an inscriptive and alienated form of existence, one where we are held back by our desires to stay "human" and remain closed to the possibilities offered by non-human forms of intelligence for improving our welfare and understandings.

Sharing Intelligence

Moving away from a human-centred perspective and toward a transhuman world requires more than simply tolerating other forms of intelligence. It also demands opening ourselves to alternative ways of viewing reality and new ethical perspectives for guiding our shared actions (Pruchnic 2013). Already these technological advances and ideas are being tentatively used to spread HRM values, even if only metaphorically. Rather than take, for instance, a "fixed" or overly abstract approach to understanding HRM, the leading professor in the field John Storey (1995: 5) describes it as "A distinctive approach to employment management which seeks to achieve competitive advantage through the strategic deployment of a highly committed and capable workforce, using an integrated array of cultural, structural, and personnel techniques". In this respect, Professor Tom Keenoy (1999) applied the idea of the "holograph" to understand Human Relations generally, and Human Resources Management specifically, in an increasingly global and disaggregated hi-tech economy:

Within the holographic perspective there is also an underlying 'implicate realm' of interference patterns where 'the parts are contained in the whole and the whole is contained in the parts'. This feature accounts for the powerful epistemological emphasis on multi-dimensionality, interconnectedness, and the multi-causal origins of any social facticity. With respect to HRMism, it is no accident that its emergence coincides with the socio-economic construction of a deregulated global economy. This 'grand interface pattern' is an impenetrably complex facticity and the cumulative socio-economic 'knock on' effects of its multi-directional processes are utterly countless (and indeed endless). (12–13)

Less theoretically and more recently, VR is being applied to help treat mental health issues such as anxiety (Gorini et al. 2010). These applications point to the ability to let people immerse themselves in different realities that can radically shift their own perspectives. Individuals are using technologies such as VR and AR to expand their consciousnesses. This has the potential to transform human aesthetics and in doing so create new vistas for seeing and designing our real-world environments. For instance, "open space technologies" that stress self-organisation and are participant driven are enhanced by virtual and digital innovations in order to allow people to exchange knowledge and build empowering communities across traditional geographic borders and cultural differences (Owen 1997). Access to the internet and other digital technologies can further enhance this sense of human connection, especially for people facing physical and social barriers. A study from early in the new millennium (Guo et al. 2005: 49) revealed that

For the minority of disabled people who do have access to the Internet, however, its use can lead to significantly improved frequency and quality of social interaction. Study findings further suggest that the Internet significantly reduced existing social barriers in the physical and social environment for disabled people.

More radically, VR can be used to expand traditional forms of "bodily self-consciousness" to reflect and allow people to experience "non-ordinary consciousness (NOC)". To this effect:

As the use of VR technology grows increasingly widespread, the philosophical, ethical, and societal implications will likewise continue to grow. The attenuation of physical body Bayesian priors that could come with conditioning to alternate worlds and embodiment dynamics (especially in children growing up with extensive VR use) may result in an experience of a greater pre-reflective readiness to take ownership over virtual bodies. Such experiences may give credence to the self-localization fallacy and raise new philosophical questions about the nature of consciousness and embodiment.

52 

P. BLOOM

Ethically, it will be important to recognize the potency of VR-enabled NOC and to remain mindful about overly relying on VR beyond it, providing “primer” experiences that can then be cultivated without VR. (Montes 2018: 9)

Going forward, there is a good chance that humans will progressively be living in multiple realities as pluralised selves. Put differently, rather than anchoring ourselves to a singular identity and physical environment, we will be able to input consciousness into several selves in a range of interactive worlds. This shift reflects emerging "cultures of technological embodiment" (Featherstone and Burrows 1996). Presently, there is a growing awareness that VR can fundamentally "change human consciousness". Influential VR filmmaker Chris Milk told the Guardian newspaper in 2015 that "In terms of an altered state of human consciousness being on the horizon, right now we're still in the darkness of night, poking around with flashlights and trying to find our way there" (quoted in Dredge 2015: n.p.). At stake, though, is what sort of virtual realities will be created. These expansive technological capabilities are still premised on how open or closed society is to promoting different types of cultures and realities beyond those reflecting dominant market and corporate values. Within businesses, VR is being exploited to foster and spread innovation networks within SMEs (Macpherson et al. 2005). However, there are deeper philosophical and practical considerations of what would actually constitute a "virtual society" (Woolgar 2002). These go beyond mere technical questions of enhancing the "realism" of VR or widening its access. They gesture toward critical questions of linking differing embodied experiences to resistance against the exploitative and hegemonic values that characterise the contemporary workplace and society (Pullen and Rhodes 2014). These efforts represent the potential to transcend "human reality" and better understand and experience non-human intelligence. Recently, there have been attempts to achieve "virtual reality body swapping", which can radically alter a person's bodily memory so that it becomes more allocentric, or other-centred (Serino et al. 2016).
Context here is imperative, as it is the data-rich foundation for guiding these human-computer interactions and the construction of these virtual realities (Nardi 1996). To this end, it speaks to desires to use virtual reality to transcend our current dominant cultures in favour of different and unknown realities (Grof 1998). In this respect,


VR becomes a transcendent experience, almost mystical in its ability to permit humans to go beyond their known realities. Hence:

Similar to the psychedelic experience, Virtual Reality is opening new paths towards mystical experiences like those that have inspired the world's greatest religions. Through this powerful technology, we are closer than ever to being able to enter altered states of consciousness by being immersed in a realm where time travel is possible, where fantastical landscapes capture our imaginations, and where we can prepare ourselves for the next great adventure after this life. (Hinchcliffe 2017: n.p.)

It is telling, though, that these desires to transcend human reality as we currently know and experience it are translated into religious-like longings for spiritual enlightenment and heavenly salvation. There is considerably less attention paid to using these same techniques for realistically reimagining human society in the near future. VR has the ability to make us feel "presence" in radically different virtually created environments and in doing so broaden our consciousness (Sanchez-Vives and Slater 2005). These insights echo more recent research on the ways that our dreaming lives are formative in shaping how we experience our waking reality (Hobson et al. 2014). Turning these findings to an extent on their head, how can we use our "innovative virtual reality generation" abilities to transform our current living realities by dreaming new and better ones? Specifically relevant to this analysis, VR can be drawn on to help prepare for "mixed-reality living" that includes the acceptance and experiencing of non-human forms of intelligence (Ricci et al. 2015). These could serve as the technological and cultural basis for truly being able to create workplaces and civic societies of intelligence sharing. The notion of "virtual bodies" is critical to becoming "transhuman" (Becker 2000). It permits us to transcend an anthropocentric consciousness for one that encompasses and adapts to a common "posthuman" reality (Hayles 2008). Presently, more and more people are asking "are we already living in virtual reality?", as VR becomes increasingly mainstream and is used to address serious social issues such as domestic violence and racial discrimination. A recent New Yorker article chronicles the work of the pioneering scholars and VR developers Mel Slater and Mavi Sanchez-Vives:


With a team of various collaborators, Slater and Sanchez-Vives have created many other-body simulations; they show how inhabiting a new virtual body can produce meaningful psychological shifts. In one study, participants are re-embodied as a little girl. Surrounded by a stuffed bear, a rocking horse, and other toys, they watch as their mother sternly demands a cleaner room. Afterward, on psychological tests, they associate themselves with more childlike characteristics. (When I tried it, under the supervision of the V.R. researcher Domna Banakou, I was astonished by my small size, and by the intimidating, Olympian height from which the mother addressed me.) In another, white participants spend around ten minutes in the body of a virtual black person, learning Tai Chi. Afterward, their scores on a test designed to reveal unconscious racial bias shift significantly. “These effects happen fast, and seem to last,” Slater said. A week later, the white participants still had less racist attitudes. (The racial-bias results have been replicated several times in Barcelona, and also by a second team, in London.) Embodied simulations seem to slip beneath the cognitive threshold, affecting the associative, unconscious parts of the mind. “It’s directly experiential,” Slater said. “It’s not ‘I know.’ It’s ‘I am.’” (Rothman 2018: n.p.)

Yet perhaps a more pertinent question is: how can we experience different forms of intelligence and ways of seeing the world in order to produce and live in an exciting and empowering transhuman world?

Evolving Beyond Human Relations

Thus far this chapter has highlighted the possibility of exchanging a "human-centred" perspective for a transhuman perspective. New "disruptive" technologies will usher in a greater presence of non-human intelligence within the economy and society as well as offer novel ways for humans to understand these different types of consciousnesses. As Professor Dimitris Papadopoulos notes, we should resist desires to universalise human and non-human relations, and rather critically interrogate "which humans" and "which non-humans" are interacting and how these transhuman relationships are reinforcing or subverting a status quo. As he writes:

These interventions are about building alternative forms of life and connecting them together into shared commons. An association of such forms of life into common spaces—alterontologies is the term used to describe the ecocommons emerging today—can ultimately account for the multiplicity of


hybrid life forms that contemporary technoscience and global capitalist production unleash. It is about acting within and against these conditions in order to fulfil the responsibilities that the world-market announces but cannot complete. (Papadopoulos 2010: 148)

Yet for the full scale of this shift to occur, a radical cultural change of values is required. At the very least, it can produce a sense of profound "anxiety" regarding not just the status quo but what it means to be human itself, and as such a renewed willingness to fundamentally question and alter it. Drawing on the Heideggerian notion of the "trace", philosopher Gavin Rae (2014: 66) argues that

posthumanism must always tarry with humanism, especially given humanism's powers of rejuvenation. As a consequence, while posthumanist thinking aims to go beyond Heidegger's ek-sistential humanism by destructing the trace of humanism that remains in Heidegger's thinking, it must continually do battle against humanism. Indeed, my suggestion is that this is what defines posthumanism: its continuous battle against humanist categories, thinking and binary oppositions. The hydra-like nature of humanism means that posthumanism entails a style of thinking that must be continuously on its guard to prevent aspects of humanist thinking from gaining a toehold, let alone a beach-head, 'in' its thinking.

Doing so demands reorienting the principles guiding our current relations with each other and with non-humans more generally (Taylor 2012). These broader ideas have significant potential ramifications for contemporary notions of human rights. They reflect, to a certain degree, the ongoing debate between universalism and cultural relativism in determining and implementing such rights (Yasuaki 2000). Technological developments in AI and robotics have exacerbated and to some extent dramatically expanded such concerns (Meulen 2010). They challenge conventional ideas of what constitutes "human nature" and thus what should constitute a contemporary set of human rights (REF – Sharon). Concretely, the rise of posthumanism may profoundly reconfigure theories and practices of social justice work (Rose and Walton 2015). This further catalyses a deeper critical engagement with what counts as a twenty-first-century "human need" (Zembylas and Bozalek 2014). It also sheds fresh light on emerging modern struggles over land, resources, and power. Expanding on the groundbreaking work of Karen Barad (2013), who contends that the division between human and


non-human is in fact a social construction based on a "cut" linked to existing power relations and cultural orders, scholar Vicki Squire considers a "post-humanitarian politics" in the context of migrants in the Sonora desert on the US/Mexico border, noting that

post-humanitarian politics involves multiple enactments of subjects-objects-environments through diverse 'cuts' that document the effects of social-physical forces on daily life. In discussing humanitarian recycling, I have claimed that the forging of new connections through things offers potential for transformational 'cuts' that are nevertheless politically ambiguous. (Squire 2014: 20)

Philosophically, it reconnects us to the study of our treatment of and interactions with animal life, unsettling dominant assumptions of what it means to be "human" (Wolfe 2009). Crucially, it connects this reconsideration of animals to the human approach for relating to artificial forms of intelligence (Lathers 2006). It, moreover, opens the way for a global vision of a world where humans and non-humans intermingle and even merge into one another (Pin-Fat 2013). Incorporating posthuman and transhuman ideals into established discourses of human rights and needs expands the possibilities of being a conscious "intelligent" entity in the near future. As discussed, post-structuralism and posthumanism are combining to foster new ethical frameworks for self-knowledge and action (Willmott 1998). This has catalysed intense speculation about whether the future is more or less human (Wilson and Haslam 2009). Technological enhancement for recreating physical selves will increasingly become the new norm (Warwick 2014). Mentally, it has the potential of reconfiguring "human-nature relations" for the next generation (Malone 2016). Underlying these advancements, though, is a continual undercurrent of competition rather than collaboration, one where man and machine will "clash" as opposed to work together and mutually prosper (Leonhard 2016). Challenging such reactionary fears are alternative theories advocating techno-emancipation. As far back as the mid-1990s, posthumanism was being promoted as a radical project combining "ecofeminism" with progressive "bio-ethics" (Salleh 1996). It additionally served as a conceptual and practical foundation for "rethinking the human" and "rethinking the self" against the environmental and social background of the Anthropocene (Bignall et al. 2016). They further expose an abiding


and rapidly growing inconsistency of established human rights approaches with respect to these potentially disruptive technologies (Chapman 2010). These reflect, in turn, a failure to fully engage with the revolutionary and democratic possibilities of such technologies (Hester 2018). These radical socio-technological alternatives gesture toward the prospects for conceptually and practically forging an empowering vision of "transhumanism". They permit the emergence of new social imaginaries that incorporate inclusion, liberation, and non-human intelligence (Neimanis 2014). Equally important, they create models for "self-design" that transcend the narrow confines of "the human" (Graham 2004). Such transhumanism could give rise to global "networks" of mutual care and responsibility that subvert the existing power dynamics of the human-centred status quo (Atanasoski and Vora 2015). Perhaps most significantly, it offers an entirely new foundation for imagining and enacting social relations that expand beyond humanity (Nurka 2015).

References

Alberts, P. (2011). Responsibility Towards Life in the Early Anthropocene. Angelaki, 16(4), 5–17.
Altvater, E., Crist, E., Haraway, D., Hartley, D., Parenti, C., & McBrien, J. (2016). Anthropocene or Capitalocene?: Nature, History, and the Crisis of Capitalism. PM Press.
Atanasoski, N., & Vora, K. (2015). Surrogate Humanity: Posthuman Networks and the (Racialized) Obsolescence of Labor. Catalyst: Feminism, Theory, Technoscience, 1(1), 1–40.
Bal, P. M., & de Jong, S. B. (2017). From Human Resource Management to Human Dignity Development: A Dignity Perspective on HRM and the Role of Workplace Democracy. In Dignity and the Organization (pp. 173–195). London: Palgrave Macmillan.
Barad, K. (2013). Ma(r)king Time: Material Entanglements and Re-memberings: Cutting Together-Apart. In How Matter Matters: Objects, Artifacts, and Materiality in Organization Studies (pp. 16–31).
Becker, B. (2000). Cyborgs, Agents, and Transhumanists: Crossing Traditional Borders of Body and Identity in the Context of New Technology. Leonardo, 33(5), 361–365.
Bell, A. C., & Russell, C. L. (2000). Beyond Human, Beyond Words: Anthropocentrism, Critical Pedagogy, and the Poststructuralist Turn. Canadian Journal of Education/Revue canadienne de l'éducation, 25, 188–203.


Benedikter, R., Giordano, J., & Fitzgerald, K. (2010). The Future of the Self-Image of the Human Being in the Age of Transhumanism, Neurotechnology and Global Transition. Futures, 42(10), 1102–1109.
Bennett, J. (2010). A Vitalist Stopover on the Way to a New Materialism. In New Materialisms: Ontology, Agency, and Politics (pp. 47–69).
Bennett, J., Cheah, P., Orlie, M. A., & Grosz, E. (2010). New Materialisms: Ontology, Agency, and Politics. Duke University Press.
Berg, P. O. (1986). Symbolic Management of Human Resources. Human Resource Management, 25(4), 557–579.
Bettoni, A., Cinus, M., Sorlini, M., May, G., Taisch, M., & Pedrazzoli, P. (2014, September). Anthropocentric Workplaces of the Future Approached Through a New Holistic Vision. In IFIP International Conference on Advances in Production Management Systems (pp. 398–405). Berlin, Heidelberg: Springer.
Bignall, S., Hemming, S., & Rigney, D. (2016). Three Ecosophies for the Anthropocene: Environmental Governance, Continental Posthumanism and Indigenous Expressivism. Deleuze Studies, 10(4), 455–478.
Bolden-Barrett, V. (2018, January 25). HR Is Taking a More Human-Centred Approach in 2018. HR Dive.
Bossmann, J. (2016, October). Top 9 Ethical Issues in Artificial Intelligence. In World Economic Forum (Vol. 21).
Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. The Cambridge Handbook of Artificial Intelligence, 1, 316–334.
Braidotti, R. (2006). Posthuman, All Too Human: Towards a New Process Ontology. Theory, Culture & Society, 23(7–8), 197–208.
Braidotti, R. (2017). Four Theses on Posthuman Feminism. In R. Grusin (Ed.), Anthropocene Feminism. University of Minnesota Press.
Burke, R. J., & Ng, E. (2006). The Changing Nature of Work and Organizations: Implications for Human Resource Management. Human Resource Management Review, 16(2), 86–94.
Campbell, E. K. (1983). Beyond Anthropocentrism. Journal of the History of the Behavioral Sciences, 19(1), 54–67.
Carrington, D. (2016, August 29). The Anthropocene Epoch: Scientists Declare Dawn of Human-Influenced Age. The Guardian.
Chandler, D. (2013a). Ontopolitics in the Anthropocene: An Introduction to Mapping, Sensing and Hacking. Routledge.
Chandler, D. (2013b). 'Human-Centred' Development? Rethinking 'Freedom' and 'Agency' in Discourses of International Development. Millennium, 42(1), 3–23.
Chapman, A. R. (2010). Inconsistency of Human Rights Approaches to Human Dignity with Transhumanism. The American Journal of Bioethics, 10(7), 61–63.


Cheese, P., & Hills, J. (2016). Understanding the Human at Work – How Neurosciences Are Influencing HR Practices. Strategic HR Review, 15(4), 150–156.
Chernilo, D. (2017). The Question of the Human in the Anthropocene Debate. European Journal of Social Theory, 20(1), 44–60.
Clark, N. (2014). Geo-Politics and the Disaster of the Anthropocene. The Sociological Review, 62, 19–37.
Clark, T. (2015). Ecocriticism on the Edge: The Anthropocene as a Threshold Concept. Bloomsbury Publishing.
Connolly, W. E. (2013). The 'New Materialism' and the Fragility of Things. Millennium, 41(3), 399–412.
Coole, D. (2013). Agentic Capacities and Capacious Historical Materialism: Thinking with New Materialisms in the Political Sciences. Millennium, 41(3), 451–469.
Crist, E., & Kopnina, H. (2014). Unsettling Anthropocentrism. Dialectical Anthropology, 38(4), 387–396.
Cudworth, E., & Hobden, S. (2017). The Emancipatory Project of Posthumanism. Routledge.
Dale, K. (2012). The Employee as 'Dish of the Day': The Ethics of the Consuming/Consumed Self in Human Resource Management. Journal of Business Ethics, 111(1), 13–24.
Danielson, P. (2002). Artificial Morality: Virtuous Robots for Virtual Games. Routledge.
de Freitas, E., & Curinga, M. X. (2015). New Materialist Approaches to the Study of Language and Identity: Assembling the Posthuman Subject. Curriculum Inquiry, 45(3), 249–265.
Domanska, E. (2010). Beyond Anthropocentrism in Historical Studies. Historein, 10, 118–130.
Dredge, S. (2015, October 16). VR Could Change Human Consciousness – If We Get There, Says Chris Milk. Guardian.
Fagan, M. (2017). Security in the Anthropocene: Environment, Ecology, Escape. European Journal of International Relations, 23(2), 292–314.
Featherstone, M., & Burrows, R. (Eds.). (1996). Cyberspace/Cyberbodies/Cyberpunk: Cultures of Technological Embodiment (Vol. 43). Sage.
Ferrando, F. (2013). Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms. Existenz, 8(2), 26–32.
Ferrando, F. (2016). The Party of the Anthropocene: Post-Humanism, Environmentalism and the Post-Anthropocentric Paradigm Shift. Relations: Beyond Anthropocentrism, 4, 159.
Foster, J. B. (2016). Marxism in the Anthropocene: Dialectical Rifts on the Left. International Critical Thought, 6(3), 393–421.


Foucault, M. (2008). The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979 (Graham Burchell, Trans.; Arnold Davidson, Ed.). Basingstoke: Palgrave Macmillan.
Fudge, E. (2000). Introduction: The Dangers of Anthropocentrism. In Perceiving Animals (pp. 1–10). London: Palgrave Macmillan.
Gibson, K., Rose, D. B., & Fincher, R. (2015). Manifesto for Living in the Anthropocene. New York: Punctum Books.
Giri, A. K. (2006). Cosmopolitanism and Beyond: Towards a Multiverse of Transformations. Development and Change, 37(6), 1277–1292.
Gorini, A., Pallavicini, F., Algeri, D., Repetto, C., Gaggioli, A., & Riva, G. (2010). Virtual Reality in the Treatment of Generalized Anxiety Disorders. Studies in Health Technology and Informatics, 154, 39–43.
Graham, E. (2004). Bioethics After Posthumanism: Natural Law, Communicative Action and the Problem of Self-Design. Ecotheology: Journal of Religion, Nature & the Environment, 9(2), 178–198.
Grear, A. (2015). Deconstructing Anthropos: A Critical Legal Reflection on 'Anthropocentric' Law and Anthropocene 'Humanity'. Law and Critique, 26(3), 225–249.
Grof, S. (1998). The Cosmic Game: Explorations of the Frontiers of Human Consciousness. SUNY Press.
Grosz, E. (2010). Feminism, Materialism, and Freedom. In D. Coole & S. Frost (Eds.), New Materialisms: Ontology, Agency, and Politics (pp. 139–157). Durham: Duke University Press.
Guest, D. (2002). Human Resource Management, Corporate Performance and Employee Wellbeing: Building the Worker into HRM. The Journal of Industrial Relations, 44(3), 335–358.
Guo, B., Bricout, J. C., & Huang, J. (2005). A Common Open Space or a Digital Divide? A Social Model Perspective on the Online Disability Community in China. Disability & Society, 20(1), 49–66.
Hamilton, C. (2017). Defiant Earth: The Fate of Humans in the Anthropocene. John Wiley & Sons.
Hanlon, R. J., & Christie, K. (2016). Freedom from Fear, Freedom from Want: An Introduction to Human Security. University of Toronto Press.
Hayles, N. K. (2008). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.
Hayward, T. (1997). Anthropocentrism: A Misunderstood Problem. Environmental Values, 6(1), 49–63.
Heikkurinen, P., Rinkinen, J., Järvensivu, T., Wilén, K., & Ruuska, T. (2016). Organising in the Anthropocene: An Ontological Outline for Ecocentric Theorising. Journal of Cleaner Production, 113, 705–714.
Herrmann, P., Waxman, S. R., & Medin, D. L. (2010). Anthropocentrism Is Not the First Step in Children's Reasoning About the Natural World. Proceedings of the National Academy of Sciences, 107(22), 9979–9984.


Hester, H. (2018). Xenofeminism. John Wiley & Sons.
Hinchcliffe, T. (2017, June 25). Virtual Reality Takes Consciousness Research into Mystic Realms of the Divine Play. The Sociable.
Hobson, J. A., Hong, C. C. H., & Friston, K. J. (2014). Virtual Reality and Consciousness Inference in Dreaming. Frontiers in Psychology, 5, 1133.
Huselid, M. A. (2011). Celebrating 50 Years: Looking Back and Looking Forward: 50 Years of Human Resource Management. Human Resource Management, 50(3), 309.
Jackson, Z. I. (2013). Animal: New Directions in the Theorization of Race and Posthumanism. Feminist Studies, 39(3), 669–685.
Jiang, J., Wang, S., & Zhao, S. (2012). Does HRM Facilitate Employee Creativity and Organizational Innovation? A Study of Chinese Firms. The International Journal of Human Resource Management, 23(19), 4025–4047.
Jones, M. P. (1996). Posthuman Agency: Between Theoretical Traditions. Sociological Theory, 290–309.
Jotterand, F. (2010). At the Roots of Transhumanism: From the Enlightenment to a Post-Human Future. Journal of Medicine and Philosophy, 35(6), 617–621.
Kansteiner, W. (2017). Digital Anxiety, Transnational Cosmopolitanism, and Never Again Genocide Without Memory. In A. Hoskins (Ed.), Digital Memory Studies: Media Pasts in Transition. London: Routledge.
Katz, E. (2000). Against the Inevitability of Anthropocentrism. In E. Katz, A. Light, & D. Rothenberg (Eds.), Beneath the Surface: Critical Essays in the Philosophy of Deep Ecology (pp. 17–42). Cambridge, MA: MIT Press.
Keenoy, T. (1999). HRM as Hologram: A Polemic. Journal of Management Studies, 36(1), 1–23.
Koensler, A., & Papa, C. (2013). Introduction: Beyond Anthropocentrism, Changing Practices and the Politics of 'Nature'. Journal of Political Ecology, 20(1), 286–294.
Kopnina, H., Washington, H., Taylor, B., & Piccolo, J. J. (2018). Anthropocentrism: More Than Just a Misunderstood Problem. Journal of Agricultural and Environmental Ethics, 1–19.
Krause, S. R. (2011). Bodies in Action: Corporeal Agency and Democratic Politics. Political Theory, 39(3), 299–324.
Langbert, M., & Friedman, H. (2002). Continuous Improvement in the History of Human Resource Management. Management Decision, 40(8), 782–787.
Lathers, M. (2006). Toward an Excremental Posthumanism: Primatology, Women, and Waste. Society & Animals, 14(4), 417–436.
Latour, B. (2017). Anthropology at the Time of the Anthropocene: A Personal View of What Is to Be Studied. In The Anthropology of Sustainability (pp. 35–49). New York: Palgrave Macmillan.
Legge, K. (1995). What Is Human Resource Management. In Human Resource Management (pp. 62–95). London: Palgrave.


Legge, K. (1998). Is HRM Ethical? Can HRM Be Ethical? Ethics and Organizations, 150–172.
Leonhard, G. (2016). Technology vs. Humanity: The Coming Clash Between Man and Machine. FutureScapes.
Lewis, S. L., & Maslin, M. A. (2015). Defining the Anthropocene. Nature, 519(7542), 171.
Macpherson, A., Jones, O., & Zhang, M. (2005). Virtual Reality and Innovation Networks: Opportunity Exploitation in Dynamic SMEs. International Journal of Technology Management, 30(1–2), 49–66.
Malm, A., & Hornborg, A. (2014). The Geology of Mankind? A Critique of the Anthropocene Narrative. The Anthropocene Review, 1(1), 62–69.
Malone, K. (2016). Posthumanist Approaches to Theorizing Children's Human-Nature Relations. In Space, Place, and Environment (pp. 185–206).
McShane, K. (2007). Anthropocentrism vs. Nonanthropocentrism: Why Should We Care? Environmental Values, 16(2), 169–185.
Mendieta, E. (2015). The Bio-Technological Scala Naturae and Interspecies Cosmopolitanism: Patricia Piccinini, Jane Alexander, and Guillermo Gómez-Peña. In Biopower: Foucault and Beyond (p. 158).
Mercer, C., & Trothen, T. J. (Eds.). (2014). Religion and Transhumanism: The Unknown Future of Human Enhancement. ABC-CLIO.
Mettler, R. (2018, January 8). Why Robots Should "Report" to HR. Personnel Today.
Meulen, R. T. (2010). Dignity, Posthumanism, and the Community of Values. The American Journal of Bioethics, 10(7), 69–70.
Mills, T. (2018, March 7). The Impact of Artificial Intelligence in the Everyday Life of Consumers. Forbes.
Montes, G. A. (2018). Virtual Reality for Non-ordinary Consciousness. Frontiers in Robotics and AI, 5, 7.
Morgan, G. (2017, July 27). What Will It Be Like to Have Robot Co-Workers? Fast Company.
Nardi, B. A. (Ed.). (1996). Context and Consciousness: Activity Theory and Human-Computer Interaction. MIT Press.
Nass, C. I., Lombard, M., Henriksen, L., & Steuer, J. (1995). Anthropocentrism and Computers. Behaviour & Information Technology, 14(4), 229–238.
Neimanis, A. (2014). Alongside the Right to Water, a Posthumanist Feminist Imaginary. Journal of Human Rights and the Environment, 5, 5.
Norton, B. G. (1984). Environmental Ethics and Weak Anthropocentrism. Environmental Ethics, 6(2), 131–148.
Nunn, J. (2018, May 9). How AI Is Transforming HR Departments. Forbes.
Nurka, C. (2015). Animal Techne: Transing Posthumanism. Transgender Studies Quarterly, 2(2), 209–226.


O'Neill, O. (1997). Environmental Values, Anthropocentrism and Speciesism. Environmental Values, 6, 127–142.
Owen, H. H. (1997). Expanding Our Now: The Story of Open Space Technology. Berrett-Koehler Publishers.
Papadopoulos, D. (2010). Insurgent Posthumanism. Ephemera: Theory & Politics in Organization, 10(2).
Pickering, A. (2001). Practice and Posthumanism: Social Theory and a History of Agency. In T. Schatzki, K. Knorr Cetina, & E. Von Savigny (Eds.), The Practice Turn in Contemporary Theory (pp. 163–174). London: Routledge.
Pickup, O. (2018, November 8). AI in HR: Freeing Up Time to Be More Human. Raconteur.
Pin-Fat, V. (2013). Cosmopolitanism and the End of Humanity: A Grammatical Reading of Posthumanism. International Political Sociology, 7(3), 241–257.
Povinelli, D. J. (2004). Behind the Ape's Appearance: Escaping Anthropocentrism in the Study of Other Minds. Daedalus, 133(1), 29–41.
Pruchnic, J. (2013). Rhetoric and Ethics in the Cybernetic Age: The Transhuman Condition. Routledge.
Pullen, A., & Rhodes, C. (2014). Corporeal Ethics and the Politics of Resistance in Organizations. Organization, 21(6), 782–796.
Purser, R. E., Park, C., & Montuori, A. (1995). Limits to Anthropocentrism: Toward an Ecocentric Organization Paradigm? Academy of Management Review, 20(4), 1053–1089.
Rae, G. (2014). Heidegger's Influence on Posthumanism: The Destruction of Metaphysics, Technology and the Overcoming of Anthropocentrism. History of the Human Sciences, 27(1), 51–69.
Read, J. (2009). A Genealogy of Homo-Economicus: Neoliberalism and the Production of Subjectivity. Foucault Studies, 6, 25–36.
Ricci, A., Piunti, M., Tummolini, L., & Castelfranchi, C. (2015). The Mirror World: Preparing for Mixed-Reality Living. IEEE Pervasive Computing, 14(2), 60–63.
Rose, E. J., & Walton, R. (2015, July). Factors to Actors: Implications of Posthumanism for Social Justice Work. In Proceedings of the 33rd Annual International Conference on the Design of Communication (p. 33). ACM.
Rothman, J. (2018, April 2). Are We Already Living in Virtual Reality? The New Yorker.
Saldanha, A., & Stark, H. (2016). A New Earth: Deleuze and Guattari in the Anthropocene. Deleuze Studies, 10(4), 427–439.
Salleh, A. (1996). An Ecofeminist Bio-Ethic and What Post-Humanism Really Means. New Left Review, 217, 138.
Sanchez-Vives, M. V., & Slater, M. (2005). From Presence to Consciousness Through Virtual Reality. Nature Reviews Neuroscience, 6(4), 332.


Schmidt, J. (2013). The Empirical Falsity of the Human Subject: New Materialism, Climate Change and the Shared Critique of Artifice. Resilience, 1(3), 174–192.
Serino, S., Pedroli, E., Keizer, A., Triberti, S., Dakanalis, A., Pallavicini, F., et al. (2016). Virtual Reality Body Swapping: A Tool for Modifying the Allocentric Memory of the Body. Cyberpsychology, Behavior, and Social Networking, 19(2), 127–133.
Shook, J. R., & Giordano, J. (2014). A Principled and Cosmopolitan Neuroethics: Considerations for International Relevance. Philosophy, Ethics, and Humanities in Medicine, 9(1). https://doi.org/10.1186/1747-5341-9-1.
Shrivastava, P. (1995). Ecocentric Management for a Risk Society. Academy of Management Review, 20(1), 118–137.
Sisson, K. (1993). In Search of HRM. British Journal of Industrial Relations, 31(2), 201–210.
Squire, V. (2014). Desert 'Trash': Posthumanism, Border Struggles, and Humanitarian Politics. Political Geography, 39, 11–21.
Srnicek, N. (2017). New Materialism and Posthumanism: Bodies, Brains, and Complex Causality. In D. McCarthy (Ed.), Technology and World Politics: An Introduction (pp. 84–99). London: Routledge.
Steffen, W., Crutzen, P. J., & McNeill, J. R. (2007). The Anthropocene: Are Humans Now Overwhelming the Great Forces of Nature? AMBIO: A Journal of the Human Environment, 36(8), 614–622.
Steffen, W., Persson, Å., Deutsch, L., Zalasiewicz, J., Williams, M., Richardson, K., et al. (2011). The Anthropocene: From Global Change to Planetary Stewardship. Ambio, 40(7), 739.
Steffen, W., Broadgate, W., Deutsch, L., Gaffney, O., & Ludwig, C. (2015). The Trajectory of the Anthropocene: The Great Acceleration. The Anthropocene Review, 2(1), 81–98.
Steiner, G. (2010). Anthropocentrism and Its Discontents: The Moral Status of Animals in the History of Western Philosophy. University of Pittsburgh Press.
Steyaert, C., & Janssens, M. (1999). Human and Inhuman Resource Management: Saving the Subject of HRM. Organization, 6(2), 181–198.
Storey, J. (Ed.). (1995). Human Resource Management: A Critical Text. Cengage Learning EMEA.
Strang, V. (2017). The Gaia Complex: Ethical Challenges to an Anthropocentric 'Common Future'. In The Anthropology of Sustainability (pp. 207–228). New York: Palgrave Macmillan.
Taylor, N. (2012). Animals, Mess, Method: Post-Humanism, Sociology and Animal Studies. In Crossing Boundaries: Investigating Human-Animal Relationships (pp. 37–50). Boston: Brill Academic Publishers.
Taylor, A. (2017). Beyond Stewardship: Common World Pedagogies for the Anthropocene. Environmental Education Research, 23(10), 1448–1461.

2  EVOLVING BEYOND HUMAN RELATIONS 

65

Templeton, G. (2018, May 15). 25 Examples of A.I. That Will Seem Normal in 2027: From Cooking to Dating to Art. Inverse. Tomasini, F. (2007). Imagining Human Enhancement: Whose Future, Which Rationality? Theoretical Medicine and Bioethics, 28(6), 497–507. Ulrich, W.  L. (1984). HRM and Culture: History, Ritual, and Myth. Human Resource Management, 23(2), 117–128. Van der Tuin, I., & Dolphijn, R. (2010). The Transversality of New Materialism. Women: A Cultural Review, 21(2), 153–171. Vazquez, R. (2017). Precedence, Earth and the Anthropocene: Decolonizing Design. Design Philosophy Papers, 15(1), 77–91. Warwick, K. (2014). The Cyborg Revolution. Nanoethics, 8(3), 263–273. Retrieved from https://idp.springer.com/authorize/casa?redirect_uri=https:// link.springer.com/article/10.1007/s11569-014-0212-z&casa_token=LZhb_ hdImSkAAAAA:PGGRo0GtdYCXJkwXWTsfyxGlTBcOkJmAXKiZFqIn_15za Y7X1qSANCO-tBq65aPe9TuI0O1g9ZQANA. Waxman, S., & Medin, D. (2007). Experience and Cultural Models Matter: Placing Firm Limits on Childhood Anthropocentrism. Human Development, 50(1), 23–30. Weiskopf, R., & Munro, I. (2012). Management of Human Capital: Discipline, Security and Controlled Circulation in HRM. Organization, 19(6), 685–702. Willmott, H. (1993). Strength Is Ignorance; Slavery Is Freedom: Managing Culture in Modern Organizations. Journal of Management Studies, 30(4), 515–552. Willmott, H. (1998). Towards a New Ethics? The Contributions of Poststructuralism and Posthumanism. In M.  Parker (Ed.), Ethics and Organizations (pp. 76–121). London: Sage Publications. Wilson, S., & Haslam, N. (2009). Is the Future More or Less Human? Differing Views of Humanness in the Posthumanism Debate. Journal for the Theory of Social Behaviour, 39(2), 247–266. Wolfe, C. (2009). Human, All Too Human: “Animal Studies” and the Humanities. PMLA, 124(2), 564–575. Woolgar, S. (Ed.). (2002). Virtual Society?: Technology, Cyberbole, Reality. Oxford University Press on Demand. Yasuaki, O. (2000). 
In Quest of Intercivilizational Human Rights: Universal vs. Relative Human Rights Viewed from an Asian Perspective. Asia-Pacific Journal of Human Rights and the Law, 1, 53–88. Zalasiewicz, J., Williams, M., Haywood, A., & Ellis, M. (2011). The Anthropocene: A New Epoch of Geological Time? Philosophical Transaction of the Royal Society, 369(1938), 835–841. Zembylas, M., & Bozalek, V. (2014). A Critical Engagement with the Social and Political Consequences of Human Rights: The Contribution of the Affective Turn and Posthumanism. Acta Academica, 46(4), 29–47.

CHAPTER 3

Heading Toward Integration: The Rise of the Human Machines

Imagine going to an important job interview and sitting across from you is not a human but a robot. This may sound like a scenario from far in the future. However, it is already happening, and with growing regularity. Indeed,

Increasingly, employers are using robotics to speed up the recruitment process and free up hiring managers’ time for more complex tasks, while removing human biases that can hold back some applicants. In fact, today, nearly all Fortune 500 companies are using some kind of automation to enhance their hiring processes. (Gilchrest 2018: n.p.)

The reason for these robotic hiring methods, though, might surprise: it is not to save money or even to be more efficient. Rather, it is to eliminate, as much as possible, human bias. AI is thought to be untainted by class, racial, gender, or personal prejudices. It seems it takes a machine to get the best person for the job. The third chapter will examine, as the title suggests, the rise of “human machines”. There is growing excitement and trepidation about the ability of robotics and AI to enhance the natural mental and physical capabilities of humans. Indeed, many experts predict the occurrence of a “technological singularity” in which advances in AI will trigger an unstoppable evolution toward a new type of far superior intelligence. However, these ideas can easily miss just how much human values and biases can influence AI, for both good and ill; that is to say, these matters can never be benign or “neutral”. For this reason, it is crucial to reflect on whether, and if so how, machines can become more human and how humans can become more machine-like. This can range from creating more “ethical” robots to conceiving of ourselves as responsible parts of diverse, updating social networks. It also means both embracing and remaining sceptical of non-human intelligence, hardware and abilities as we guide their personal and collective use to empower humans past their existing physical and social limits. Are we headed toward an empowering era of human and non-human integration, or a catastrophic erasure of what it means to be human? Do we have any control over the shape and character of this seemingly inevitable future? This chapter provides insight into the present and future potential of “integrated intelligence”. It incorporates pioneering ideas of “super-intelligence” and cyborgs to forge a novel vision of human and non-human collaboration. It expands on existing studies on how to create “ethical” machines to include the social justice values of inclusion, tolerance and equality. It further reveals the new human responsibility for this pressing task. It then explores the importance of ensuring that all individuals and groups have access to emerging non-human intelligence for improving and expanding the possibilities of their lives and communities.

© The Author(s) 2020 P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5_3

The Threat of Singularity

An abiding human fear is the rise of conscious machines. Sci-fi thrillers such as Terminator and 2001: A Space Odyssey conjure up dystopian images of robotic consciousness surpassing human intelligence and trying to rule over humanity. In the real present, scientists both warn of the dangers of non-human intelligence and celebrate its advancement. No less than the world-famous physicist Stephen Hawking declared:

The development of AI could spell the end of the human race. It would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. (Quoted in Cellan-Jones 2014: n.p.)

Fundamentally, these ominous warnings, combined with positive technological developments, point to the risk of what has been referred to as a technological singularity (Eden et al. 2015). While certainly compelling, such fears can serve to mask more philosophical and critical perspectives as to the nature of the “thinking machine”. The Hungarian-American mathematician John George Kemeny (1955), for instance, directly likened the human mind and its development to that of a machine, observing:

a normal human being is like the universal machine. Given enough time, he can learn to do anything. (…) (T)here is no conclusive evidence for an essential gap between man and a machine. For every human activity we can conceive of a mechanical counterpart. (quoted in Natale and Ballatore 2017: 7)

Nevertheless, the conceptions and predictions around AI and technology reflect the human culture in which they emerge and which they help to shape. Scholars Natale and Ballatore (2017: 12–13) have identified the social “myths” driving past and present AI development, contending:

Our examination of the AI myth, therefore, is also meant as an encouragement to give more emphasis to the way this cultural vision reverberates in contemporary discourses on digital technology and culture. Technological myths that play today a paramount role in the discussion of digital media and culture, such as transhumanism and singularity, derive much of their claims and tenets from the discourse which emerged in the 1940s–1970s in connection to research on AI….This imaginary is largely based, just like the AI myth emerged in the post-war period, on the recurrence of three distinctive patterns: the use of ideas and concepts from other fields and contexts to describe the functioning of AI technologies, the mingling between examination of present research results with the imagination of potential future applications and horizons of research, and the strong relevance of controversies in public discussions of the concept and its application.

A shift in human consciousness then implies a willingness to engage openly, and with as little prejudice as possible, with what constitutes the “mechanical mind” and how it relates to our own (Crane 2015). The first step in this direction would be to let go of our fear of non-human intelligence. Whereas the threat of singularity should not be wholly ignored, neither should it be sensationalised (Goldberg 2015). It creates a discursive framework for understanding these so-called “thinking machines” as threatening. Revealed is the still dominant human assumption that advanced intelligence (whether it be our own or that which we help create) is primarily concerned with power and domination. As such, it says more about our own historically specific beliefs than it does about human or robotic nature. Professor Alan Bundy (2017: 42) warns, by contrast, about the risk of exaggerating the intelligence and capabilities of AI systems, arguing provocatively that

some people may have unrealistic expectations about the scope of their expertise, simply because they exhibit intelligence—albeit in a narrow domain. The current focus on the very remote threat of super-human intelligence is obscuring this very real threat from subhuman intelligence.

He is, however, much more optimistic, or at the least sanguine, about the future economic effect of AI and robots, declaring:

Another potential existential threat is that AI systems may automate most forms of human employment….The productivity of human workers will be, thereby, dramatically increased and the cost of the service provided by this multiagent approach will be dramatically reduced, perhaps leading to an increase in the services provided. Whether this will provide both job satisfaction and a living income to all humans can currently only be an open question. It is up to us to invent the future in which it will do, and to ensure this future is maintained as the capability and scope of AI systems increases. I do not underestimate the difficulty of achieving this. The challenges are more political and social than technical, so this is a job for the whole of society. (Ibid.: 42)

A key to challenging these dangerous underlying discourses of singularity is humanising robots, revealing their potential similarities and complementarities with non-machines (Goldberg 2015). Crucial, in this respect, is to counter ideas that machines are our enemies. This requires looking backwards in time as much as it does forward. Understanding the shifting “schemes and tropes” that have defined human and machine relationships historically can serve as the basis for a more enlightened and collaborative future engagement (Pursell 1995). In the contemporary period, we should be open to, if not always embrace, the potential of the “rise of strategy machines”. As Professor Thomas H. Davenport (2016: n.p.) writes:

As a society, we are becoming increasingly comfortable with the idea that machines can make decisions and take actions on their own… We still believe that humans are uniquely capable of making “big swing” strategic decisions… We may be ahead of smart machines in our ability to strategize right now, but we shouldn’t be complacent about our human dominance. First, it’s not as if we humans are really that great at setting strategy…Second, although it’s unlikely that a single system will be able to handle all strategic decisions, the narrow intelligence that computers display today is already sufficient to handle specific strategic problems.

The issue, though, is for humans to consider what type of strategies and data they would like AI to assist in realizing, letting go of our need for operational control in favour of a more critical ideological discussion of societal and institutional values and possibilities. As AI develops in its capabilities and becomes more “well-rounded” in its applications and consciousness, the emphasis must shift from questions of control to co-existence (Stajic et al. 2015). Ironically, computers and AI may actually be key in helping us identify neurologically our fear patterns linked to threats such as those posed by “thinking machines” and then constructing discursive strategies for reducing this anxiety and potential sense of panic (Maren 2007). This points to the significance of thinking deeply and practically about different future scenarios of human and machine co-existence. The idea that humans can escape the danger of robotic overlords, that the singularity is not inevitable, may be a critical step toward replacing it with more collaborative and mutually beneficial alternatives. This has led some, such as the renowned futurologist Vilem Flusser, to maintain that humans should seek to slow down rather than speed up technological progress. In Flusser’s (2013: 127) view:

Today, to engage oneself with freedom, and more radically, to engage oneself in the survival of the human species on the face [of] the Earth, implies strategies in order to delay progress. This reaction is today the only dignified one. We can no longer be revolutionaries, which means, to be opposed to the operative program through other programs. We can only be saboteurs, which means, to throw sand on the apparatus’ wheels. With effect: every current emancipatory action is, when intelligent, a subversive action.

Conversely, rather than delay technological development, humans could promote a “spiritual singularity” in which, drawing on the philosophical insights of Deleuze,

The actual belongs with the Buddhist notion of ‘it is’, whereas the virtual belongs with the notion of ‘it is not’. In this negation resides the highest affirmation, which is the affirmation of the virtual and openness. The capacity to intuit the virtual in the merely actualized is a mark of awakening…. If we take virtual to be more or less synonymous with spiritual, singularity is then at one with the spiritual. (Chang 2016: 351)

It could also radically transform the meaning of singularity from a power struggle between humans and machines to a revolutionary process or event that could fundamentally alter human existence. Just as importantly, this evolution is a journey that can be “managed” and shaped to maximise its positive effects. Additionally, it would open the space for new views that counter notions that no limits can be placed on this singularity (Modis 2012). These perspectives are a good antidote to those that conceive of the singularity as both unavoidable and outside the scope of human influence. Tellingly, they can challenge established literary tropes of humanity learning to “survive” the singularity rather than thriving from its occurrence (Vinge 1993). In practical terms, this demands embedding new types of deep learning structures that would deter AI from becoming adversarial to human welfare and survival (Arel 2012). Additionally, it will involve programming moral ideals and reasoning as formative components of this “intelligence explosion” (Muehlhauser and Helm 2012). Of equal import, however, is reframing dominant human conceptions of progress and technology. Currently, these are by and large defined by a fear of the future and consequently an underlying feeling of safety in the present. Combining contemporary critiques of politics, society, and economics with a renewed sense of technological optimism would allow people to transform past “nightmares” into attractive and achievable “dreams of the future” (Bútora 2007). Accordingly, it would shed light on how disruptive technologies are already disempowering and controlling us, creating enhanced forms of surveillance and social disciplining (Dennis 2008). By contrast, by embracing the potentialities of AI and robotics now, we can begin “technologizing transcendence” so that we can plan today for a more empowering and exciting human and non-human tomorrow.

The Danger of Human Bias

A crucial barrier to the realization of a progressive transhuman future is the danger of human bias. It is tempting, and even popular, to believe AI and big data are simply objective. They are deemed free of the prejudices that have historically plagued humanity. Yet the reality is dramatically different. Indeed, a key problem for machines is being forced to rely on data that simply reflects existing societal power imbalances. A 2016 New York Times article by researcher and former co-chairwoman of a White House symposium on society and AI, Kate Crawford, referred to this as “Artificial Intelligence’s White Guy Problem”, observing how data-driven algorithms reproduce racist and sexist biases in fields ranging from policing to the legal system to digital camera software. She concludes by proclaiming that

We need to be vigilant about how we design and train these machine-learning systems, or we will see ingrained forms of bias built into the artificial intelligence of the future. Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes. (Crawford 2016: n.p.)

To this extent, non-human intelligence is unwittingly being exploited to reinforce existing social inequalities and discrimination. A perceived benefit of non-human intelligence is that it can help make accurate predictions. It conjures up an image of brilliant machines that can draw upon almost inhuman amounts of societal and personal information to tell us, with hardly any doubt, what will happen not just today but also tomorrow (Smart and Shadbolt 2015). In the present day, this has reoriented human society to become more evaluative and data intensive. To this extent, we are living increasingly in a “scored society” in which

Big Data is increasingly mined to rank and rate individuals. Predictive algorithms assess whether we are good credit risks, desirable employees, reliable tenants, valuable customers—or deadbeats, shirkers, menaces, and “wastes of time.” Crucial opportunities are on the line, including the ability to obtain loans, work, housing, and insurance. Though automated scoring is pervasive and consequential, it is also opaque and lacking oversight. In one area where regulation does prevail—credit—the law focuses on credit history, not the derivation of scores from data. (Citron and Pasquale 2014: 1)

The purpose of this constant data collection and human “scoring”, furthermore, is to aid and guide economic development globally in both the short and long term (Hilbert 2016).
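The mechanics of this “scoring” problem can be made concrete with a deliberately simplified sketch. All of the names, records, and numbers below are invented for illustration; the point is only that a model trained on a biased history reproduces that bias, even when the sensitive attribute is never explicitly used, because a proxy feature (here, a postcode) carries it in:

```python
# Hypothetical illustration: a credit-style scorer "trained" on biased history.
# "Postcode" acts as a proxy for a protected attribute; it is never labelled
# as sensitive, yet the learned rates reproduce the historical disparity.

historical_loans = [
    # (postcode, repaid) -- invented data encoding a biased past:
    # group B was historically denied support, so fewer repayments were recorded.
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train_rate_model(records):
    """'Train' by estimating repayment rates per postcode (a one-feature model)."""
    totals, repaid = {}, {}
    for postcode, ok in records:
        totals[postcode] = totals.get(postcode, 0) + 1
        repaid[postcode] = repaid.get(postcode, 0) + int(ok)
    return {p: repaid[p] / totals[p] for p in totals}

def score(model, postcode, threshold=0.5):
    """Approve a new applicant purely on their postcode's historical rate."""
    return model.get(postcode, 0.0) >= threshold

model = train_rate_model(historical_loans)
print(model)              # {'A': 0.75, 'B': 0.25}
print(score(model, "A"))  # True  -- approved
print(score(model, "B"))  # False -- denied: the historical bias is reproduced
```

Nothing in the scorer is malicious; the inequity lives entirely in the training data, which is precisely why opaque, unaudited scoring systems are so consequential.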


The issue, though, is that this data suffers from prevailing and often unconscious prejudice. While big data holds much promise, it also poses significant risks in this respect. The mere inputting and outputting of data must therefore be tempered by critical trans-national dialogues (Kitchin 2013). There are also more radical and troubling implications of the social consequences of this enhanced dependence on algorithms for human decision-making, both large and small. Flawed and socially biased data can serve to undermine not just personal choices but become weaponized against democracy itself (O’Neil 2017). Additionally, it is already revealing quite profound changes for studying and understanding human consciousness. Neuropsychologist Nicholas Turk-Browne (2013: 580) advocates treating brain functioning as if it were big data to be carefully and critically collected and analysed: “Because what you see depends on how you look, such unbiased approaches provide the greatest flexibility for discovery.” These valuable critiques, though, should not mean that we simply “opt out” of big data or the use of non-human intelligence. Rather, they demand being more sensitive to current and emerging forms of machine bias (Boyd and Crawford 2012). Practically, this requires paying more attention to what are often dismissed as simply “outliers”. For AI to become more accurate we must be willing to make “Big Data small”, in this sense (Welles 2014). At a policy level, there should be a greater embrace of new ideas stressing the importance of a more inclusive and diverse data culture. For this reason, the “Inclusive Data Charter” was launched in 2018, proclaiming:

The 2030 Agenda for Sustainable Development commits to leaving no one behind. In order to fulfill this pledge, we need more granular data to understand the needs and experiences of the most marginalized in society, and we must ensure that resources are being allocated to maximize outcomes for the poorest. Currently, too little data is routinely disaggregated. While there are many technical and methodological challenges inherent in improving data disaggregation, some of the largest barriers are political and financial.

Doing so further requires taking a greater trans-disciplinary approach to data collection and analysis (Burns et al. 2018). Just as significantly, it presents an opportunity for “queering” data to reveal previously marginalized perspectives and identities, thus challenging hegemonic human prejudices and their resultant inequalities.
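Why the Charter's call for disaggregation matters can be shown with a small invented example: an aggregate statistic that looks equitable can conceal sharply different outcomes for subgroups, and only granular data reveals who is being left behind. All regions, subgroups, and figures here are hypothetical:

```python
# Hypothetical illustration of the disaggregation problem: two regions have
# identical headline service rates, but splitting by subgroup exposes a
# disparity the aggregate hides. All figures are invented.

outcomes = [
    # (region, subgroup, residents_served, residents_total)
    ("North", "urban", 90, 100),
    ("North", "rural", 10, 100),
    ("South", "urban", 55, 100),
    ("South", "rural", 45, 100),
]

def rate(served, total):
    """Share of residents actually reached by the service."""
    return served / total

# Aggregate view: both regions appear to serve exactly half their residents.
for region in ("North", "South"):
    served = sum(row[2] for row in outcomes if row[0] == region)
    total = sum(row[3] for row in outcomes if row[0] == region)
    print(region, rate(served, total))   # North 0.5, South 0.5

# Disaggregated view: North's rural residents are left far behind (10%).
for region, subgroup, served, total in outcomes:
    print(region, subgroup, rate(served, total))
```

The headline numbers are identical, yet the disaggregated rates range from 0.9 down to 0.1, which is exactly the kind of gap that “granular data” on marginalized groups is meant to surface.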


Looking beyond the rising threat of human-based data bias is the danger of prejudiced robots and AI. These fears are not mere science fiction. Instead they reveal quite troubling contemporary trends, including the use of chatbots on social media to perpetuate racism (Schlesinger et al. 2018). These raise broader questions about the role of non-human intelligence in society (Gunkel 2018). For it to be a positive force, we must recognise and address the sexism and racism it has borrowed from humans (Zou and Schiebinger 2018). These concerns transcend merely conventional economic effects such as lost jobs (Olson 2018). Moreover, these issues of systematic bias run to deeper questions over the “predictive inequity” now programmed into machine learning (Wilson et al. 2019). An urgent human task, then, is to help foster more inclusive non-humans. This points the way to reconsidering how we think about and engage with big data, for instance. Notably, it opens the space for “visualising junk” so as to reveal the continuing power of patriarchy, classism, and racism (Hill et al. 2016). Such efforts also extend to emerging technological human enhancements. The idea of cyborgs is quickly transforming from a fantasy into a practical and to some extent inevitable reality. If this is the case, then attention must be paid to how to create radically inclusive cyborgs (Doyle et al. 2018). These offer the possibility of creating a truly more progressive and less prejudiced transhuman society (Klein et al. 2018).
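One concrete way “predictive inequity” is operationalised in the fairness literature is by comparing error rates across groups, for example false positive rates. The following minimal sketch, using invented records and group labels, shows how a model can be wrong in different ways for different groups even when its overall accuracy looks acceptable:

```python
# Minimal group-fairness audit: compare false positive rates per group.
# A model can be accurate overall yet err asymmetrically across groups --
# the "predictive inequity" described in the text. All records invented.

records = [
    # (group, predicted_risky, actually_risky)
    ("X", True,  True), ("X", True,  False), ("X", False, False), ("X", False, False),
    ("Y", True,  True), ("Y", True,  False), ("Y", True,  False), ("Y", False, False),
]

def false_positive_rate(records, group):
    """Share of genuinely low-risk people wrongly flagged, within one group."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("X", "Y"):
    print(g, false_positive_rate(records, g))
# Group Y's false positive rate (2/3) is double group X's (1/3):
# low-risk members of Y are wrongly flagged twice as often.
```

Auditing for equal error rates in this way is one of several competing fairness criteria; which one is appropriate is itself an ethical and political question, not a purely technical one.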

Manufacturing “Ethical” Intelligence

A predominant, and quite valid, concern surrounding the rise of robots and AI is their ethics. Popularly, this plays into past stereotypes of robots as amoral—devoid of any human sense of right and wrong. Returning to present day reality, researchers are increasingly devoting themselves to ethically programming non-human intelligence as much as possible (Wang et al. 2017). Similar efforts are aimed at ensuring that the use of AI is guided as much by moral principles as by technological discovery. Scholars have put forward, in this respect, four “ethical priorities for neurotechnologies and AI” encompassing issues of (1) “privacy and consent”, (2) “agency and identity”, (3) “augmentation” and (4) “bias” (Yuste et al. 2017). Reflected, in turn, is a prevailing fear that AI can and will be used for nefarious purposes—a new technique for political propaganda and corporate brainwashing. What this reveals is that this is fundamentally a problem of human morality. This extends to current experiments in the development of AI (Bird et al. 2016).


Is it possible, though, to integrate less exploitative principles into the making of modern machines? At the very least, this question demands a philosophical reflection on the ethics informing AI research and development. They should be especially sensitive to “issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill” since “The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves” (Bostrom and Yudkowsky 2014: 316). This has led to a progressive desire for greater standardization related to the “ethical design” of AI and “autonomous systems” (Bryson and Winfield 2017). Analogously, this has spurred renewed debate (at least within scholarly circles) as to the proper ethical perspective for the creation of AI and robots. To this end,

In the future, there will be an even more diverse set of technologies surrounding us including for taking care of medical examination, serving us and taking us where we want to go. However, such devices and systems would need to behave properly for us to want them close by. If a robot hits us unintentionally or works too slowly, few would accept it. Mechanical robots with the help of artificial intelligence can be designed to learn to behave in a friendly and user adapted way…Still, there is a large divide between current design challenges and science fiction movies’ dystopian portrayal of how future technology might impact or even eradicate humanity. However, the latter probably has a positive effect on our awareness of possible vulnerability that should be addressed in a proactive way. We now see this taking place in the many initiatives to define regulations for AI and robots. (Torresen 2018: 8)

These moral concerns extend to the prospective relationship between “disruptive” technologies and humans. Some scholars, for instance, have called for no less than “new epistemologies and paradigm shifts” in regard to big data (Kitchin 2014). This revolutionary turn in perspective, tellingly, does not include a less anthropocentric worldview. There do appear, though, enhanced attempts to programme robots’ movements and actions to better foster human trust (Lee 2018). In this respect, machines are being created with a deeper understanding of contemporary human perspectives. Yet they also point to the increasingly recognized possibilities of humans and machines working together. This so-called “collaborative intelligence” prophesies a near future where “humans and AI are joining forces” since

Artificial intelligence is becoming good at many “human” jobs—diagnosing disease, translating languages, providing customer service—and it’s improving fast. This is raising reasonable fears that AI will ultimately replace human workers throughout the economy. But that’s not the inevitable, or even most likely, outcome. Never before have digital tools been so responsive to us, nor we to our tools. While AI will radically alter how work gets done and who does it, the technology’s larger impact will be in complementing and augmenting human capabilities, not replacing them. (Wilson and Daugherty 2018: n.p.)

This broad-stroke vision is, moreover, linked to comprehensive investigations of the type of learning needed for “human-robot collaborations” (Rozo et al. 2018). Significantly, the discourse around ethics and technology is rapidly shifting. No longer is it presumed that AI or robots are value neutral. Nor is our fear primarily wrapped up in familiar worries related to their “non-human” amorality. Instead, contemporary moral concerns are directed at their exploitation for unsavoury human ends. To this end, there is an awakening to the need to take seriously the programming of machines, and their programming of us, in regards to social justice. Products such as Amazon’s Alexa have been observed repeating politically correct and even at times progressive sentiments about social issues such as feminism and “Black Lives Matter”. This has led popular commentator Mark Sist, in a recent New Statesman article “Amazon’s Alexa is our robot Social Justice Warrior and we, the libs, have won”, to satirically claim:

I have always personally felt that a mark of success for my liberal politics is not, in fact, equality, but the voice of a voluntarily-purchased robot in every home telling people basic facts that are tangentially related to my political beliefs. Like many on the left, I don’t really care so much about making a difference—I mostly just care about talking about it. Provided Alexa can profess our beliefs to dinner party guests, children, and YouTube gamers, I feel that we, on the left, have won. (Sist 2017: n.p.)

In less explicitly political forms (and ones easily co-opted by corporations), this has produced an ethos based on mutual benefit and cooperation rather than control and competition (Crandall et al. 2018). Now machines are considered “teammates” who need our support and care (Seeber et al. 2018). Additionally, AI and robots are celebrated for their potential ability to enhance human life (Zhang et al. 2017). They are our future “teachers” from whom humans have much to learn (Granados et al. 2016). Machines then present humans with a stark ethical and moral choice. It is technology that demands we confront difficult truths about our current selves and decide whether these will continue to be our values going forward. As the science of AI and machine learning continues to develop, its ethics must proceed at a similar pace (Baum 2017). These include but ultimately transcend mere desires to regulate and safeguard humans against “killer robots” (Barker 2017). It entails constructing a moral framework that challenges and goes beyond the conventional “annihilation anxiety” popularly associated with machines (Richardson 2015). Instead, it must be focused on creating a more inclusive society for humans and non-humans alike.

Disruptive Debates A key driver of the fear and excitement around current technologies is the predicted “disruptive” changes they will bring. It is thought that (term) will be the catalyst for the “Fourth Industrial Revolution”. Any and all ethical considerations involving human and non-human interactions and collaborations, will have to engage with these potential dramatic economic and social changes. To a certain extent such disruptions have always been a part of how we popularly conceive and philosophize technological advancements (Bostrom 2003). These also reflect the potential drawbacks on “humanizing robots”—especially during these unpredictable times (Robert 2017). Revealed again, is how both humans and non-humans are at not only an ethical but existential crossroads. Traditionally, any such existential fears were related to the possibility of “human extinction” at the hands of “unfriendly superintelligent AI” (Lorenc 2015: 194). These premonitions of our ultimate destruction have produced serious thinking of what can be done to forestall or even prevent this scenario, thus ensuring human survival. One idea, for instance, would be “the deliberate human creation of an ‘AI Nanny’ with mildly superhuman intelligence and surveillance powers, designed either to forestall Singularity eternally, or to delay the Singularity until humanity more fully understands how to execute a Singularity in a positive way” (Goertzel 2012: 96). Further, “It is s­ uggested

3  HEADING TOWARD INTEGRATION: THE RISE OF THE HUMAN MACHINES 


that as technology progresses, humanity may find the creation of an AI Nanny desirable as a means of protecting against the destructive potential of various advanced technologies such as AI, nanotechnology and synthetic biology” (ibid.). Less sensationalist, perhaps, are efforts to foster “mutual understanding” between humans and robots. Such an understanding would not be based on past desires to simply “humanize” robots. Rather, it would be rooted in deeper existential questions of “How must humankind adapt to the imminent process of technological change? What do we have to learn in order to keep pace with the smart new machines? What new skills do we need to understand the robots?” (Sciutti et al. 2018: 22). This can lead to a more detailed analysis of “the human factors challenges for a person teaching a robot and for a robot teaching a person” (Sheridan 2016: 529). These discussions open up new vistas of an “integrative” transhuman future. They constructively depart from fearful notions of singularity to optimistic, yet realistic, accounts of how “social robots can make us more human” (Sequeira 2018: 295). In this respect:

Current technological limitations can be managed to induce a perception of social fragility that may lead human agents to reason about the social condition of a robot. Though robot and/or technology phobias may bias the way a social robot is perceived, this reasoning process may contribute to an introspection on the meaning of being social and, potentially, to contribute to humanizing social environments. (Ibid.: 295)

This can also provide the basis for humanoid robots to assist humans in more creative and artistic activities (see Galván et al. 2016). Recent experiments, further, reveal that greater “human–robot similarity” can enhance the willingness of humans to work alongside robots, a willingness mitigated only by perceived threats to their physical safety (You and Robert 2017). Researchers are now additionally investigating and seeking to create new design “tools” so that humans can have input into the everyday “automated ethical decision-making” of AI as it becomes more prevalent both in the workplace and socially (Millar 2016). Significantly, this focuses renewed attention on the wellbeing of non-humans, not just humans. In order to mitigate the risks that AI and robots pose to humans, it is necessary to adopt “universal empathy” towards machines, learning from their perspective and being willing to consider ethical prerogatives from their point of view. Tellingly, when


P. BLOOM

humans perceive robots as having “affective states”, experiencing feelings about events and actions, they are less likely to want to harm them or sacrifice them, even for the sake of saving other humans (Nijssen et al. 2019). Moreover, researchers are exploring how the observed attitude of a robot can actually influence human action (Vannucci et al. 2018). This sense of shared empowerment is crucial in that it disrupts the narrative of “disruptive” technological change. Existing perspectives retell a dangerous story of economic change that is “inevitable” and to which all must simply adapt. This was the predominant discourse, for instance, used to justify corporate globalization (see Bloom 2016; Spicer and Fleming 2007). However, as recent social movements have shown in all too stark detail, these policies are not unavoidable, divine in origin, or unassailable. Rather, they are perfectly changeable, human-made, and always up for contestation. The predictions of a “fourth industrial revolution” will importantly be shaped by these transhuman relations. Indeed, they offer humans and non-humans the renewed ability to work together to shape a mutually beneficial and empowering future.

Bridging the AI Divide

The potential for a more integrated transhuman future will depend on human and non-human communication. Such communication extends beyond traditional ICTs or projected notions of a talking AI or robot. It also means learning to understand “the messages of mute machines”, the unspeaking but expressive technologies that will increasingly surround us both at work and at home (Guzman 2016: 1). More than simply information exchange, humans and machines can, as discussed above, share creative pursuits, as recently displayed by the creation of the “android theatre” combining human performers and the humanoid Geminoid android (Ogawa and Ishiguro 2016). The next step, in this respect, is the transition to “humane robots”, more precisely “from robots with a humanoid body to robots with an anthropomorphic mind” (Sandini and Sciutti 2018: 7).

These developments in human and non-human communication provide the basis for rethinking transhuman progress. It is crucial not to consider this hi-tech contemporary and coming reality as necessarily a clean break with the past. Rather, we can learn much from, for instance, classical philosophy about what constitutes the “good life” for humans and non-humans alike (see Levin 2017). Greater and deeper human and non-human communication allows, moreover, for shared discussions of the future, ones which rise above utopian fantasies. The philosopher Philippe Verdoux argues for a perspective of “rational capitulationism”, which rejects uncritical ideas of technological progress while still embracing the need and possibility for transhumanism. He contends:

In pursuing this end, I have attempted to emphasize that one can be a pessimist about the future, one can identify technology as the primary cause of our existential plight, and one can hold an anti-progressionist conception of history while at the same time advocating the descriptive and normative claims of transhumanism—in particular, the moral assertion that we ought to pursue both world engineering and person-engineering technologies by fomenting the GNR revolution. This is, it appears, our best hope of surviving the future. (Verdoux 2009: 61)

The danger in looking both to the wisdom of yesterday and to the potential for tomorrow is that it risks maintaining a “human-centred” vision of history. Notably, it retains a focus on how these technologies will impact human survival and evolution. Consequently, there remains an underlying human/non-human dualism that not only subverts transhuman and posthuman claims about the possibilities of cyborgism but also distorts how we perceive “who we are” in our current and future realities. New philosophical approaches such as “postphenomenology” allow us to break through this increasingly outdated (if it was ever really relevant) dualism, offering instead “an effective approach for pragmatically and empirically grounding the human enhancement debate, providing tools such as embodied technological relations, the non-neutrality of technology, enabling and constraining aspects of all technologies, and the false dream of a perfectly transparent technology” (Lewis 2018: 19). The transhuman revolution, hence, revolves around an ongoing conversation between humans and non-humans. It is not straightforward, a foregone conclusion, or obvious as to its direction. By contrast, it is one that must take into account the diverse needs and views of both humans and machines, especially as these distinctions progressively blur in different ways and across different contexts. These conversations will ultimately be as much about morality and ethics as they will be about logistics. “We see more and more autonomous or automated systems in our daily life,” observed engineer Karl-Joseph Kuhn, but it is less clear how to programme robots
for “making the decision between two bad choices” (Deng 2015: 25). Attempts to use logic- and rule-based programming are making great strides toward these ends; however, “the machine-learning approach promises robots that can learn from experience, which could ultimately make them more flexible and useful than their more rigidly programmed counterparts” (Ibid.: 26). These efforts speak to the potential of creating not just ethical but conscious machines, ones that can create meaning and as such reshape social understandings and practices. Accordingly:

One cannot deny the possibility to design artificially the consciousness itself but not the cognitive substitute of the modern digital models. To make it happen, it is necessary to start with the consciousness cognition and only then to move to the technical models. If we imagine this model to be realized, then it would be a peer for a human—the device that is able to understand the human as the author of the meanings, the device able to create similar meanings for the interaction rather than a machine actor combined with a human and controlling him. (Lankin and Kokarevich 2017: 503)
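The contrast Deng describes between rigid rule-following and learning from experience can be sketched in a few lines of illustrative Python. This is a toy sketch only, not any cited system: the function names, the “no harm” rule, and the feedback values are all hypothetical.

```python
# Hedged sketch of Deng's (2015) contrast: explicit rule-based programming
# versus a crude stand-in for learning from experience. All names here
# (rule_based_choice, learned_choice, no_harm) are hypothetical.

def rule_based_choice(options, rules):
    """Return the first option that violates no rule, or None when every
    option breaks some rule (the 'two bad choices' dilemma)."""
    for option in options:
        if not any(rule(option) for rule in rules):
            return option
    return None  # rules alone cannot decide between two bad choices

def learned_choice(options, feedback_history):
    """Return the option with the best average past feedback, a minimal
    stand-in for a policy learned from experience."""
    def score(option):
        outcomes = feedback_history.get(option, [])
        return sum(outcomes) / len(outcomes) if outcomes else 0.0
    return max(options, key=score)

# In this dilemma both options cause some harm, so the rule-based agent
# refuses to choose, while the experience-based agent can still rank them.
options = ["swerve", "brake"]
no_harm = lambda option: True  # every option here violates "no harm"
print(rule_based_choice(options, [no_harm]))
print(learned_choice(options, {"brake": [0.2, 0.6], "swerve": [0.1]}))
```

The point of the sketch is the asymmetry: hard rules can deadlock on dilemmas, whereas a learned ranking always yields a decision, which is exactly why Deng's sources see learning as more flexible but also harder to certify.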

It is therefore imperative to share moral intelligence between humans and machines. Doing so means, on the one hand, linking the technological development of AI and robots to the fostering and safeguarding of “human dignity” (Brownsword 2017). On the other hand, the dignity and safety of non-humans, including machines, must also be preserved (Palk 2015). Doing so requires an epistemic shift in how we understand and put in place social welfare (Expósito Serrano 2017). For this reason, these “disruptive” advances in intelligence are easily compared to a type of “Promethean hubris” (Sutton 2015). Yet it is telling just how little attention is given to the normative possibilities of transhumanism—that humans may actually enhance their moral intelligence through their deeper engagement with machines. To this end, transhuman relations, if they are to be successful or sustainable, must serve to continually and mutually expand the moral intelligence and ethical actions of humans and machines alike. Such a view directly challenges those perspectives, often with strong religious overtones, claiming transhumanism to be based on fictitious morals (Lake 2018). However, these critiques overlook the ability of humans to “programme good ethics into artificial intelligence” (Davies 2016). They additionally ignore our opportunity as humans to have our own social programming transformed by these very real interactions with machines (Noyer 2016).


Required, in this respect, is a stronger and more comprehensive ethical framework for encouraging and, to an extent, ensuring this shared growth in moral intelligence (Conitzer et al. 2017). Consequently, transhumanism must be seen as a “dangerous idea”, both to human survival and betterment and to an unjust status quo. Indeed, it should not be forgotten that historically “transhumanism is an extension of the dangerous belief in human perfectibility derived [from] Social Darwinism and eugenics” (Livingstone 2015: 6). It is a continuous and important choice, one path leading to our shared destruction and the other to mutual “enhancement and welfare” (Cuevas-Badallo and Labrador-Montero 2019). Significantly, transhumanism depends on the creation of both moral humans and moral machines. It is, in this regard, the manufacturing of a range of different moral technologies (Gnatik 2017). Culturally, this is premised on the construction of new “memories” of what a shared human and non-human future could hold. These would be more optimistic and realistic than the dystopian or fantastically radical visions put forward in movies such as Oblivion and V for Vendetta, respectively (de Sousa Caetano 2017). They will tap into novel transhuman mythologies, teaching ethical lessons for a better human and non-human co-existence (Hauskeller 2016). At stake is whether these advances will be a moral “evolution or revolution” (Lourtioz et al. 2015).

Heading Toward Integration

A crucial political, economic, and social “grand challenge” of the twenty-first century is to halt technological singularity and promote instead human and non-human integration. The two represent rival radical visions of the future (Edman 2019). Achieving such an integration requires evolving policies and strategies for these radical but necessary transhuman ends (Cordeiro 2016). It holds the potential of using technology and its diverse contributions to our collective intelligence to transcend our stifling and often oppressive human condition (see Sirius and Cornell 2015). It is to combine human enhancement and technological innovation into a new and better society for humans and non-humans alike. The potential for transhuman integration means moving away from governance paradigms premised primarily on regulation. Understandably, scholars and policymakers are searching for a “global solution” for ensuring that humans maximize the benefits of AI while minimizing its destructive
potential (see Erdélyi and Goldsmith 2018). While this is certainly preferable to libertarian perspectives which argue for the uninhibited promotion of technology regardless of social or economic consequences, it also fails to see technology’s full positive political potential. Transhumanism can exist as a “new global political trend” that incorporates themes of decolonisation, climate change, and underdevelopment internationally. Thus:

It may sound like science fiction, but transhumanist political parties are catching on. They emphasize and advocate the benefits of technology for improving life. “Transhumanism” is the effort to transform the human being into something beyond its present body and mind (“trans” means “beyond”). It is driven by innovations in the health care sector, increasing machine intelligence and the military. The economy will be influenced by the resulting trend to merge humans with computers and machines. (Benedikter and Siepmann 2016: 47)

This would entail forging new shared “global transhumanist identities” based on these progressive technological values linked to human and non-human integration. To this end:

Despite the technological advances, we are still at a very rudimentary stage of the practical implementation of transhumanist values in our societies. Only few instances and manifestations of transhumanism in fields such as medical biology and biotechnology have been developed, most of which are primarily intended to aid the normal and conventional human capacities in case of physical or mental impairment. Such technologies include gene engineering, use of microchip implants and prostheses, and implementation of stem cell technologies to curb or treat diseases and different types of medical conditions. However, the future of transhumanism might unfold new beings of intellect diverging from the normal human body and mind for various purposes to eventually create multiple identities. The processes by which new forms emerge, as mentioned before, might result in the coexistence of different “types” with varying degrees of humanness. (Bahji 2018: 89)

It demands, hence, a new way of thinking and acting in the world, one in which non-humans are not mere adornments of our existence, nor are machines our enemies or merely there for our use. Instead, it is a world where emerging forms of intelligence can radically reshape our reality and human possibility (for examples, see Baciu et al. 2016). This is a new philosophical
dream that goes beyond human possibilities while also humanising the non-human and de-humanising the human (Babich 2017). There is an important and imperative political dimension to this philosophy: the struggle for a democratic transhumanism and against libertarian versions which would simply reboot our present inequities (Mazarakis 2016). In practice, this would entail reconfiguring present-day education to reflect this radical agenda of transhuman education (Rikowski and Ford 2019). The danger, of course, is that the ethos driving this integration will become monolithic and orthodox. By contrast, the goal should be to open the space for a plurality of views of what a mutually empowering and caring human and non-human shared society can and should look like. This means putting “machine ethics” at the heart of our public debates, linked to a range of other pressing issues from the environment to economic justice (Anderson and Anderson 2011). It additionally means being willing to engage with a diversity of transhuman desires, particularly from perhaps unexpected places such as established religions like Mormonism (Cannon 2015) and Islam (Bouzenita 2018). Equally important, though, is setting “ethical benchmarks” to guide these ideas and challenge the religiosity increasingly associated with new technologies such as big data (Beranger 2018). The ultimate challenge, in this respect, is creating a viable movement for transhuman social change. The most pronounced threat to such a project would be efforts that merely attempt to provide a technological sheen to outdated free-market ideas. Yet there is also a less obvious risk of falling into a form of techno-optimism that borders on utopianism (Tirosh-Samuelson 2017). The goal is to remain excited about the possibilities of non-human intelligence without falling into past forms of religious devotion or otherworldly desires for salvation (Bainbridge 2017). 
Rather, it is to infuse these technological possibilities with a renewed sense of “adventure and awe” which, according to the philosopher Kirk Schneider, is “key to the perpetuation of vibrant, evolving lives—and in combination with technological advances may bring marvels to our emerging repertoires”. A prime example of such opportunities is the transformation of immortality from a fantasy into a potential reality through emerging techniques such as mind cloning (Huberman 2018). Yet this would also create new social pressures and existential questions for individuals and societies. Accordingly, it is critical to save humanity from the perils of a transhumanism built on market-based ideas of “human perfectibility” representing a
movement (that) marks a significant reversal of the humanist conception of human perfectibility inherited from the Enlightenment. Far from working for the social and political emancipation of humans and the human condition, transhumanism is emblematic of a depoliticized conception of human perfectibility focused on the technoscientific adaptation of the human being. Transhumanism thus marks a major rupture with the modern democratic project of autonomy. (Le Devedec 2018: 488)

Rather, it is to conceive, and seek to implement, a form of transhuman integration that emancipates humans from their social, political, economic, and philosophical constraints, enabling a new and empowering shared existence with non-humans.

References

Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine Ethics. Cambridge University Press. Arel, I. (2012). The Threat of a Reward-driven Adversarial Artificial General Intelligence. In A. H. Eden, J. H. Moor, J. H. Soraker, & E. Steinhart (Eds.), Singularity Hypotheses (pp. 43–60). Berlin, Heidelberg: Springer. Babich, B. (2017). Nietzsche’s Posthuman Imperative: On the Human, All too Human Dream of Transhumanism. In Y. Yuncel (Ed.), Nietzsche and Transhumanism: Precursor or Enemy (pp. 101–132). Cambridge: Cambridge Scholars Publishing. Baciu, C., Opre, D., & Riley, S. (2016). A New Way of Thinking in the Era of Virtual Reality and Artificial Intelligence. Educatia, 21(14), 43. Bahji, S. (2018). Globalisation and the Transhumanist Identities. Journal of Futures Studies, 22(3), 85–92. Bainbridge, W. S. (2017). Transhumanism: An Online Network of Technoprogressive Quasi-Religions. In Dynamic Secularization (pp. 209–236). Cham: Springer. Barker, J. (2017, August 14). Controlling the Killer Robots. International Politics and Society. Baum, S. D. (2017). Social Choice Ethics in Artificial Intelligence. AI & Society, 1–12. https://doi.org/10.1007/s00146-017-0760-1. Benedikter, R., & Siepmann, K. (2016). “Transhumanism”: A New Global Political Trend? Challenge, 59(1), 47–59. Beranger, J. (2018). The Algorithmic Code of Ethics: Ethics at the Bedside of the Digital Revolution. John Wiley & Sons. Bird, S., Barocas, S., Crawford, K., Diaz, F., & Wallach, H. (2016). Exploring or Exploiting? Social and Ethical Implications of Autonomous Experimentation in AI.


Bloom, P. (2016). Authoritarian Capitalism in the Age of Globalization. Edward Elgar Publishing. Bostrom, N. (2003). Ethical Issues in Advanced Artificial Intelligence. In S.  Schneider (Ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence (pp. 277–284). Wiley. Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. In W.  Ramsey & K.  Frankish (Eds.), The Cambridge Handbook of Artificial Intelligence (pp. 316–334). Cambridge: Cambridge University Press. Bouzenita, A. I. (2018). “The Most Dangerous Idea?” Islamic Deliberations on Transhumanism. Darulfunun Ilahiyat, 29(2), 201–228. Boyd, D., & Crawford, K. (2012). Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon. Information, Communication & Society, 15(5), 662–679. Brownsword, R. (2017). From Erewhon to AlphaGo: For the Sake of Human Dignity, Should We Destroy the Machines? Law, Innovation and Technology, 9(1), 117–153. Bryson, J., & Winfield, A. (2017). Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. Computer, 50(5), 116–119. Bundy, A. (2017). Smart Machines Are Not a Threat to Humanity. Communications of the ACM, 60(2), 40–42. Burns, R., Hawkins, B., Hoffmann, A.  L., Iliadis, A., & Thatcher, J. (2018). Transdisciplinary Approaches to Critical Data Studies. Proceedings of the Association for Information Science and Technology, 55(1), 657–660. Bútora, M. (2007). Nightmares from the Past, Dreams of the Future. Journal of Democracy, 18(4), 47–55. Cannon, L. (2015). What Is Mormon Transhumanism? Theology and Science, 13(2), 202–218. Cellan-Jones, R. (2014, December 2). Stephen Hawking Warns Artificial Intelligence Could End Mankind. BBC News. Archived from the original on 30 October 2015. Chang, P. (2016). The Four Ecologies, Post-evolution and Singularity. Explorations in Media. Ecology, 15(3–4), 343–354. Citron, D.  K., & Pasquale, F. (2014). 
The Scored Society: Due Process for Automated Predictions. Washington Law Review, 89, 1. Conitzer, V., Sinnott-Armstrong, W., Borg, J. S., Deng, Y., & Kramer, M. (2017). Moral Decision Making Frameworks for Artificial Intelligence. In AAAI (pp. 4831–4835), San Francisco, CA, USA. Cordeiro, J. (2016). Technological Evolution and Transhumanism. In Deploying Foresight for Policy and Strategy Makers (pp. 81–92). Cham: Springer. Crandall, J.  W., Oudah, M., Ishowo-Oloko, F., Abdallah, S., Bonnefon, J.  F., Cebrian, M., et al. (2018). Cooperating with Machines. Nature communications, 9(1), 233.


Crane, T. (2015). The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation. Routledge. Crawford, K. (2016). Artificial Intelligence’s White Guy Problem. The New York Times. http://www.nytimes.com/2016/06/26/opinion/sunday/artificialintelligences-white-guy-problem.html Cuevas-Badallo, A., & Labrador-Montero, D. (2019). Technological Revolution, Transhumanism, and Social Deliberation: Enhancement or Welfare? In Handbook of Research on Industrial Advancement in Scientific Knowledge (pp. 57–73). IGI Global. Davenport, T. H. (2016). Rise of the Strategy Machines. MIT Sloan Management Review, 58(1), 29. Davies, J. (2016). Program Good Ethics into Artificial Intelligence. Nature News. Deng, B. (2015). The Robot’s Dilemma. Nature, 523(7558), 24. Dennis, K. (2008). Keeping a Close Watch–The Rise of Self-surveillance and the Threat of Digital Exposure. The Sociological Review, 56(3), 347–357. Doyle, S.  E., Forehand, L., Hunt, E., Loughrey, N., & Schneider, S. (2018). Cyborg Sessions: A Case Study for Gender Equity in Technology. In T. Fukuda, W. Huang, P. Janssen, K. Crolla, & S. Alhadidi (Eds.), Learning, Adapting and Prototyping, Proceedings of the 23rd International Conference of the Association for Computer-Aided Architectural Design Research in Asia (CAADRIA) 2018 (Vol. 1, pp. 71–80). Eden, A.  H., Moor, J.  H., Søraker, J.  H., & Steinhart, E. (2015). Singularity Hypotheses. Springer. Edman, T. B. (2019). Transhumanism and Singularity: A Comparative Analysis of a Radical Perspective in Contemporary Works. Gaziantep University Journal of Social Sciences, 18(1). Erdélyi, O. J., & Goldsmith, J. (2018). Regulating Artificial Intelligence Proposal for a Global Solution. In AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, New Orleans. Expósito Serrano, A. (2017). Transhumanism: Biological and Epistemological Implications. Flusser, V. (2013). Post-history, Minneapolis: Univocal. 
Guattari, F. (1989). The Three Ecologies. New Formations, 8, 131–147. Galván, A. A. C., Anzueto-Ríos, Á., & Rodríguez, M. E. M. (2016). Humanoid Robot Programming for the Assistance of an Artistic Performance. Research in Computing Science, 127, 69–78. Gilchrest, K. (2018, October 2). Your Next Job Interview Could Be with a Robot. CNBC. Gnatik, E. (2017, June). Transhumanism Horizons of Convergent Technologies. In 2nd International Conference on Contemporary Education, Social Sciences and Humanities (ICCESSH 2017). Atlantis Press.


Goertzel, B. (2012). Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood? Journal of Consciousness Studies, 19(1–2), 96–111. Goldberg, K. (2015). Robotics: Countering Singularity Sensationalism. Nature, 526(7573), 320. Granados, D.  F. P., Kinugawa, J., & Kosuge, K. (2016). A Robot Teacher: Progressive Learning Approach towards Teaching Physical Activities in Human-­ Robot Interaction. In Proceedings of the IEEE International Symposium of Robot and Human Interactive Communication (ROMAN 2016) (pp.  399– 400). IEEE. Gunkel, D. J. (2018). Other Things: AI, Robots, and Society. In Z. Papacharissi (Ed.), A Networked Self and Human Augmentics, Artificial Intelligence, Sentience (pp. 67–84). Routledge. Guzman, A.  L. (2016). The Messages of Mute Machines: Human-machine Communication with Industrial Technologies. Communication+ 1, 5(1), 1–30. Hauskeller, M. (2016). Mythologies of Transhumanism. Springer. Hilbert, M. (2016). Big Data for Development: A Review of Promises and Challenges. Development Policy Review, 34(1), 135–174. Hill, R.  L., Kennedy, H., & Gerrard, Y. (2016). Visualizing Junk: Big Data Visualizations and the Need for Feminist Data Studies. Journal of Communication Inquiry, 40(4), 331–350. Huberman, J. (2018). Immortality Transformed: Mind Cloning, Transhumanism and the Quest for Digital Immortality. Mortality, 23(1), 50–64. Kemeny, J. G. (1955). Man Viewed as a Machine. Scientific American, 192, 58–67. Kitchin, R. (2013). Big Data and Human Geography: Opportunities, Challenges and Risks. Dialogues in Human Geography, 3(3), 262–267. Kitchin, R. (2014). Big Data, New Epistemologies and Paradigm Shifts. Big Data & Society, 1(1), 1–12. Klein, M., Hoher, S., Kimpeler, S., Lehner, M., Jaensch, F., Kühfußf, F., … & Snelting, F. (2018). Machines without Humans—Post-Robotics W. Envisioning Robots in Society–Power, Politics, and Public Space: Proceedings of Robophilosophy 2018/TRANSOR 2018, 311, 88. Lake, C.  B. (2018). 
The Failed Fictions of Transhumanism. In Christian Perspectives on Transhumanism and the Church (pp. 137–149). Cham: Palgrave Macmillan. Lankin, V. G., & Kokarevich, M. N. (2017). Man as Matter of Engineering: Ethical, Epistemological and Technological Boundaries of Transhumanism. The European Proceedings of Social & Behavioural Sciences (EpSBS), Vol. 26: Responsible Research and Innovation (RRI 2016), Nicosia, 497–505. Le Devedec, N. (2018). Unfit for the Future? The Depoliticization of Human Perfectibility, from the Enlightenment to Transhumanism. European Journal of Social Theory, 21(4), 488–507.


Lee, S. Y. (2018). Impact of Human like Cues on Human Trust in Machines: Brain Imaging and Modeling Studies for Human-Machine Interactions (No. 14IOA119_144035). Korea Advanced Institute of Science and Technology Taejon Korea, South. Levin, S. B. (2017). Antiquity’s Missive to Transhumanism. Journal of Medicine and Philosophy, 42(3), 278–303. Lewis, R.  S. (2018). Hello Anthropocene, Goodbye Humanity: Reframing Transhumanism through Postphenomenology. Glimpse, 19, 79–87. Livingstone, D. (2015). Transhumanism: The History of a Dangerous Idea. David Livingstone. Lorenc, T. (2015). Artificial Intelligence and the Ethics of Human Extinction. Journal of Consciousness Studies, 22(9–10), 194–214. Lourtioz, J. M., Lahmani, M., Dupas-Haeberlin, C., & Hesto, P. (Eds.). (2015). Nanosciences and Nanotechnology: Evolution or Revolution? Springer. Maren, S. (2007). The Threatened Brain. Science, 317(5841), 1043–1044. Mazarakis, J. (2016). The Grand Narratives of Democratic and Libertarian Transhumanism: A Lyotardian Approach to Transhumanist Politics. Confero: Essays on Education, Philosophy and Politics, 4(2), 11–31. Millar, J. (2016). An Ethics Evaluation Tool for Automating Ethical Decision-­ making in Robots and Self-driving Cars. Applied Artificial Intelligence, 30(8), 787–809. Modis, T. (2012). Why the Singularity Cannot Happen. In A.  H. Eden, J.  H. Moor, J. H. Soraker, & E. Steinhart (Eds.), Singularity Hypotheses (pp. 311– 346). Berlin, Heidelberg: Springer. Muehlhauser, L., & Helm, L. (2012). The Singularity and Machine Ethics. In A.  H. Eden, J.  H. Moor, J.  H. Sraker, & E.  Steinhart (Eds.), Singularity Hypotheses (pp. 101–126). Berlin, Heidelberg: Springer. Natale, S., & Ballatore, A. (2017). Imagining the Thinking Machine: Technological Myths and the Rise of Artificial Intelligence. Convergence: The International Journal of Research into New Media Technologies. https://doi. org/10.1177/1354856517715164. Nijssen, S. R., Müller, B. C., Baaren, R. B. 
V., & Paulus, M. (2019). Saving the Robot or the Human? Robots Who Feel Deserve Moral Care. Social Cognition, 37(1), 41–S2. Noyer, J.  M. (2016). Transformation of Collective Intelligences: Perspective of Transhumanism. John Wiley & Sons. O’Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books. Ogawa, K., & Ishiguro, H. (2016). Android Robots as In-between Beings. In D.  Herath, C.  Kroos, & Stelarc (Eds.), Robots and Art (pp.  327–337). Singapore: Springer. Olson, P. (2018, February 26). Racist, Sexist AI Could Be A Bigger Problem Than Lost Jobs. Forbes.


Palk, A. C. (2015). The Implausibility of Appeals to Human Dignity: An Investigation into the Efficacy of Notions of Human Dignity in the Transhumanism Debate. South African Journal of Philosophy, 34(1), 39–54.
Pursell, C. (1995). Seeing the Invisible: New Perceptions in the History of Technology. Icon, 1, 9–15.
Richardson, K. (2015). An Anthropology of Robots and AI: Annihilation Anxiety and Machines. Routledge.
Rikowski, G., & Ford, D. R. (2019). Marxist Education Across the Generations: A Dialogue on Education, Time, and Transhumanism. Postdigital Science and Education, 1(2), 507–524.
Robert, L. (2017). The Growing Problem of Humanizing Robots. International Robotics & Automation Journal, 3(1), 247–248.
Rozo, L., Amor, H. B., Calinon, S., Dragan, A., & Lee, D. (2018). Special Issue on Learning for Human–Robot Collaboration. Autonomous Robots, 42, 953–956.
Sandini, G., & Sciutti, A. (2018). Humane Robots—From Robots with a Humanoid Body to Robots with an Anthropomorphic Mind. ACM Transactions on Human-Robot Interaction (THRI), 7(1), 7.
Schlesinger, A., O’Hara, K. P., & Taylor, A. S. (2018, April). Let’s Talk About Race: Identity, Chatbots, and AI. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 315). New York: ACM.
Sciutti, A., Mara, M., Tagliasco, V., & Sandini, G. (2018). Humanizing Human-Robot Interaction: On the Importance of Mutual Understanding. IEEE Technology and Society Magazine, 37(1), 22–29.
Seeber, I., Bittner, E., Briggs, R. O., De Vreede, G. J., De Vreede, T., Druckenmiller, D., … & Schwabe, G. (2018). Machines as Teammates: A Collaboration Research Agenda. In Hawaii International Conference on System Sciences (HICSS) (pp. 420–429). Waikoloa, HI, USA.
Sequeira, J. S. (2018). Can Social Robots Make Societies More Human? Information, 9(12), 295.
Sheridan, T. B. (2016). Human–Robot Interaction: Status and Challenges. Human Factors, 58(4), 525–532.
Sirius, R. U., & Cornell, J. (2015). Transcendence: The Disinformation Encyclopedia of Transhumanism and the Singularity. Red Wheel Weiser.
Sist, M. (2017, December 17). Amazon’s Alexa Is Our Robot Social Justice Warrior and We, the Libs, Have Won. New Statesman.
Smart, P. R., & Shadbolt, N. R. (2015). Social Machines. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology (3rd ed., pp. 6855–6862). IGI Global.
Spicer, A., & Fleming, P. (2007). Intervening in the Inevitable: Contesting Globalization in a Public Sector Organization. Organization, 14(4), 517–541.
de Sousa Caetano, J. C. (2017). Memories Are Forever: Transhumanism and Cultural Memory in V for Vendetta, Oblivion and The Giver. Via Panorâmica: Revista de Estudos Anglo-Americanos, 5, 29–38.


Stajic, J., Stone, R., Chin, G., & Wible, B. (2015). Rise of the Machines. Science, 349(6245), 248–249.
Sutton, A. (2015). Transhumanism: A New Kind of Promethean Hubris. The New Bioethics, 21(2), 117–127.
Tirosh-Samuelson, H. (2017). Technologizing Transcendence: A Critique of Transhumanism. In T. Trothen & C. Mercer (Eds.), Religion and Human Enhancement (pp. 267–283). Cham: Palgrave Macmillan.
Torresen, J. (2018). A Review of Future and Ethical Perspectives of Robotics and AI. Frontiers in Robotics and AI, 4(75), 1–10.
Turk-Browne, N. B. (2013). Functional Interactions as Big Data in the Human Brain. Science, 342(6158), 580–584.
Vannucci, F., Di Cesare, G., Rea, F., Sandini, G., & Sciutti, A. (2018, November). A Robot with Style: Can Robotic Attitudes Influence Human Actions? In 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids) (pp. 1–6). IEEE.
Verdoux, P. (2009). Transhumanism, Progress and the Future. Journal of Evolution and Technology, 20(2), 49–69.
Vinge, V. (1993). The Coming Technological Singularity: How to Survive in the Post-human Era. Science Fiction Criticism: An Anthology of Essential Writings, 352–363.
Wang, Y., Wan, Y., & Wang, Z. (2017). Using Experimental Game Theory to Transit Human Values to Ethical AI. arXiv preprint arXiv:1711.05905.
Welles, B. F. (2014). On Minorities and Outliers: The Case for Making Big Data Small. Big Data & Society, 1(1), 1–2.
Wilson, H. J., & Daugherty, P. R. (2018). Collaborative Intelligence: Humans and AI Are Joining Forces. Harvard Business Review, 96(4), 114–123.
Wilson, B., Hoffman, J., & Morgenstern, J. (2019). Predictive Inequity in Object Detection. arXiv preprint arXiv:1902.11097.
You, S., & Robert, L. (2017, December). Facilitating Employee Intention to Work with Robots. AIS.
Yuste, R., Goering, S., Bi, G., Carmena, J. M., Carter, A., Fins, J. J., et al. (2017). Four Ethical Priorities for Neurotechnologies and AI. Nature News, 551(7679), 159.
Zhang, S., Wu, J., & Huang, Q. (2017). Humanoid Robots: Future Avatars for Humans through BCI. In Improving the Quality of Life for Dementia Patients through Progressive Detection, Treatment, and Care (pp. 267–286). IGI Global.
Zou, J., & Schiebinger, L. (2018). AI Can Be Sexist and Racist—It’s Time to Make It Fair. Nature, 559(7714), 324–326.

CHAPTER 4

Leading Future Lives: Producing Meaningful Intelligence

Imagine finding out that a friend was depressed. They lack a sense of purpose and feel no motivation. They share that any and all meaning in the world seems senseless, and that it therefore feels impossible to be excited about either their present or future. Now imagine that this is not a human friend but a robot one. The neuroscientist Zachary Mainen, in a recent Guardian article, provocatively explored “What depressed robots can teach us about mental health”. He observed that

The idea of a depressed AI seems odd, but machines could face similar problems. Imagine a robot with a hardware malfunction. Perhaps it needs to learn a new way of grasping information. If its learning rate is not high enough, it may lack the flexibility to change its algorithms. If severely damaged, it might even need to adopt new goals. If it fails to adapt it could give up and stop trying…. For a human, to be depressed is not merely to have a problem with learning, but to experience profound suffering. That is why, above all else, it is a condition that deserves our attention. For a machine, what looks like depression may involve no suffering whatsoever. But that does not mean that we cannot learn from machines how human brains might go wrong. (Mainen 2019: n.p.)

The fourth chapter investigates the novel ways humans and machines could lead meaningful existences in a world of integrated intelligence. Discussion of the rise of AI often focuses on competing visions of a technological utopia or dystopia, yet equally profound questions are commonly left unasked.

© The Author(s) 2020
P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5_4


How can humans find personal meaning and happiness in an increasingly automated and non-human based society? Conversely, will these progressively conscious machines face their own versions of existential crises? It is crucial, therefore, to find ways that human and artificial intelligence can combine to produce a deeper form of meaningful intelligence, rather than one form appropriating the other. This chapter begins by interrogating how alienating this technological shift may be for humans and perhaps machines alike. Taking this idea further, it explores the potential need for what some have called “robot psychiatry” in the future—therapy and coping mechanisms to allow us to form emotionally healthy bonds with the robots that become a key part of our everyday life. It then turns its attention to how AI and humans can actually work together, both in the present and future, to enhance their personal growth, wellbeing and perspectives. In turn, if machines are inputting our values into their programming, how can we help to provide them with successful coping mechanisms for depression, as well as ways to find meaning in their own existence as a burgeoning consciousness? These efforts may become even more acutely important as new technologies enable humans to live and machines to operate for longer. This makes it even more critical to discover how human and non-human relations can assist each other in leading not just longer but also fuller, more compassionate and caring lives.

Alienation 4.0

The problem of alienation is widespread and goes beyond just our individual and collective human relationship with technology. In the most popular sense, it refers to the general disaffection felt by people in relation to society. It conjures up literary images of Holden Caulfield in The Catcher in the Rye referring to everyone he meets as a “phony”. More technically, it reflects the structural and subjective ways individuals cannot fundamentally shape their social and material existence. Put differently, it is a stark realisation of, and reaction to, the fact that we live, work, and act in a world not of our choosing. These feelings of alienation have taken on special resonance in the present day with the rise of non-human intelligence. There is a growing sense that we are creating machines that will ultimately control us. These ominous premonitions are seemingly reinforced by the contemporary use of such technology to wage remote wars that wreak dramatic human destruction. McCoy (2012: 384–385) contends that


“With an agile force directed via a robotic information infrastructure the United States could, in principle, parlay its military power into a second American century … creating something akin to an endless American empire”. More critically, the scholar Ian Shaw (2017: 466) contends that

This future involves the infiltration of empire into planetary technics: a robo-mesh. This would entrench a post-national and global biopolitics in which robots mediate the full spectrum of social interactions and perform a series of autogenic manhunts, collapsing separate spaces of state authority and violence. Future robotics, embedded across society, alongside ubiquitous sensing, computing, and the internet of things, would blur the boundaries between military, law enforcement, and commercial robotics, as well as everyday devices…. We are only at the dawn of realizing such an expansive robo-mesh. The question, therefore, is not simply how robots will secure everyday life, but who they will secure. Robots could simply exacerbate and entrench preexisting conflicts and social inequalities. Above all, a robotic US empire crystallizes the conditions for an unaccountable form of violence. By outsourcing the act of killing to artificial agents, violence—at least from the perspective of the US military—would transmute into an engineering exercise, like building a bridge, planned and executed by robots. In this sense, robot warfare is alienated because the act of violence functions without widespread human input.

Yet these concerns reveal as much about the historical persistence of human alienation as they do about our “robotic” future. In his groundbreaking book Imagining Slaves and Robots in Literature, Film, and Popular Culture: Reinventing Yesterday’s Slave with Tomorrow’s Robot, the scholar Gregory Jerome Hampton compares the development and cultural portrayal of robots to that of slaves. He writes

Slavery, after all, was largely invested in producing and controlling a labor force, which was dissociated from humanity. In many regards, American slavery was a failed experiment to employ flesh and blood machines as household appliance, farm equipment, sex toys, and various tools of industry without the benefit of human and civil rights. Consequently, what is interesting about the development and production of mechanical robots is how they are being assigned both race and gender as identity markers. Why does a machine need such a complex identity, if the machine is designed only to complete the mundane labour that humanity wishes to forego? One plausible response is that the machine is being designed to be more than an appliance and less than a human. The technology of the 21st century is in the process of developing a modern day socially accepted slave. (Hampton 2015: 2)

Perhaps the most famous exploration of alienation is associated with Karl Marx. He explicitly links it to individuals’ religious-like reliance on capitalists and capitalism for our material reproduction, proclaiming

Just as in religion the spontaneous activity of the human imagination, of the human brain and the human heart, operates independently of the individual—that is, operates on him as an alien, divine or diabolical activity—so is the worker’s activity not his spontaneous activity. It belongs to another; it is the loss of his self. (Marx 1964: 111)

Moreover, and especially relevant to this analysis, he conceives of alienation as a uniquely human phenomenon in which the social creation of alienating labour separates humanity from its own creative essence, from what it actually produces, and from each other. He argues, in this respect, that

Since alienated labour: (1) alienates nature from man; and (2) alienates man from himself, from his own active function, his life activity; so it alienates him from the species. … For labour, life activity, productive life, now appear to man only as means for the satisfaction of a need, the need to maintain physical existence. … In the type of life activity resides the whole character of a species, its species-character; and free, conscious activity is the species-character of human beings…. Conscious life activity distinguishes man from the life activity of animals. (Marx 1964: 16)

Consequently, humans are themselves transformed, “objectified”, into mere economic commodities to be used, materially exchanged, and profited from. According to Professor Kenneth Tucker (2002: 98), “Capitalism crushes our particularly human experience. It destroys the pleasure associated with labor, the distinctively human capacity to make and remake the world, and the major distinguishing characteristic of humans from animals”. This initial definition has spawned a wide range of interpretations and further meanings. Seeman (1959), for instance, identifies no fewer than five different meanings of alienation: (1) powerlessness, (2) meaninglessness, (3) normlessness, (4) isolation, and (5) self-estrangement. While Marx speaks, for instance, of how capitalist subjects are alienated from their labour, this understanding has also been extended to the deeper human relationship with nature (Vogel 1988). This takes, furthermore, a continual toll not just on our psychological wellbeing (Schacht 2015) but on our health overall (Yuill 2005). Alienation is experienced, moreover, at every level of our existence, including widespread feelings of “organizational alienation” connected not just to our work tasks but to how we are managed in the workplace (Aiken and Hage 1966). Nevertheless, the creation and application of new technologies are often seen to play a special role in producing alienation. Innovations reshape how and where we work, thus deepening our separation from the products of our labour (Wendling 2009). In this respect, capitalist change may increase value for employers but it is commonly alienating for employees at both the macro and micro levels (Vallas 1988). The alienating effects of these technological “improvements” are also significantly global and diverse in their reach. Agricultural products, for instance, specifically developed for empowering African women can often serve to ironically reinforce their inequality when adopted within the home (Theis et al. 2018). A recent study revealed, further, that these women were deprived of the economic benefits—their so-called “alienation rights”—of these technologies:

Alienation rights refer to the right to transfer by sale, lease, gift, or inheritance. We did not find instances of alienation of irrigation technology itself, as there is not much of a secondary market for the equipment. However, patterns of alienation rights over other assets indicate that they are held predominantly by men. (Ibid.: 679)

These alienating effects have arguably only intensified with the growth of digitalisation. In particular, even while promising to further connect humans to one another, it has served to deepen a sense of individuation and separation both from economic production and from the empowering potential found in human communities. Innovatively combining the insights of Marx and Foucault for this purpose, Nygren and Gidlund (2016: 512) maintain that the alienation once found in industrial rationalisation is being supplanted by its appearance in digital individuation:

Digital technology allows the subject to individualize, to stage the self, and, as such, the technological (digital) potential seduces the subject with the idea that with digital technology we can construct and display individuality. In the same way as automated technologies are embedded with rationalization (the social concept of a rationality that co-constructs society), digital technologies are embedded with individualization (individualization as an equally social concept co-constructing society). Pastoral modalities of power involve the entire history of processes of human individualization; saying that one does not want to express individuality with the help of digital technology sounds as awkward in our digital society as refusing to act in a rational way sounded in industrial society. Because this individualization is left unquestioned, it appears all members of society find it of interest—universal, neutral, natural, and inevitable.

Additionally, it has expanded to include online consumers and the interpersonal relationships we conduct online (Ortiz et al. 2018). It reflects a deeper sense of “consumer alienation” based on “a sense of segregation from the norms and values of the marketplace in terms of a business ethics factor” (ibid.: 145). What is emerging is a new era of hi-tech alienation. The greater introduction of computers already signalled the alienating implications of this coming “digital” age (Abdul-Gader and Kozar 1995). Yet there is a danger of assuming that these new technologies are innately alienating, when such negative effects may ultimately be human rather than machine derived (Shepard 1973). Advances in technologies have historically both structurally contributed to and been subjectively blamed for alienation (Gardell 1976). For instance, US technology development evolved from being rooted in experimentation in the nineteenth century to being based increasingly in precise engineering by the twentieth century (see Morison and Mayer 1974). Indeed the very nature of capitalist production leads to feelings of economic (Carrier 1992) and organizational oppression (Shehada and Khafaje 2015). Its everyday experience is one of having our interactions with humans and nonhumans externally regulated and controlled (Packard 2018). These daily experiences of domination lead to more existential feelings of being unfree associated with technology. Moreover, they are only exacerbated by the full planetary scale of these technologies and their global interconnected ecological, environmental, and social impacts (Bauer 2019). What seems prevented, in turn, is the ability to “design what a future ought to be: open to regular revision in response to our practical behaviours given the persistent contingency of the conditions in which we are immersed” (ibid.: 106).
Consequently, the idea of disruptive technologies such as nanotechnologies, the internet of things, AI, and robotics fosters as much if not more fear than excitement. These concerns are only exacerbated by the presence of intelligent beings that are viewed as being neither “alive” nor human (Wogu et al. 2017). There are ongoing worries that the humanising of robots, and our greater interaction with them, will result in our deeper dehumanisation (Bartneck and McMullen 2018). Robots are hence seen by the public as “scary”—a threat to our very humanity (Cave et al. 2019). A primary goal of robotic development, therefore, is not our mutual benefit but rather making safe political and ethical choices “to avoid robot takeover” (Whitby and Oliver 2000: 42). At the heart of this terror is a profound sense of existential alienation over our ability to shape our present and future destiny. Interestingly, while Marx spoke explicitly about the alienating effects of capitalist labour, what was perhaps most inspiring in his work (at least historically) was the hope it held out for escaping this condition and reinvesting people with a feeling of existential agency. The rise of intelligent machines worsens this feeling of mass historical disempowerment. As the author George Zarkadakis (2015) presciently notes:

…the importance of AI goes beyond mere intellectual curiosity. Artificial Intelligence is already with us, whether we ponder the ethical questions of autonomous drones killing people in the mountains of Pakistan or protest against government agencies mining our personal data in cyberspace. Increasingly, we interact with machines while expecting them to “know” what we want, “understand” what we mean and “talk” to us in human language. As Artificial Intelligence evolves further, it will become the driver of a new machine age that could usher our species to new economic, social, and technological heights… As citizens of a free society, we have a responsibility to come to terms with this future, and to understand and debate its moral, legal, political, and ethical ramifications today.

AI and robots, it is thought, particularly pose an “existential risk” to us as productive workers. Critically, this reveals a reversal of traditional understandings of alienation—whereby once capitalism was considered the cause of our oppression, it is now precisely that from which we are being alienated. The very thought that we would have no jobs, either collectively as a society or individually in terms of our career, produces a deep-seated unease that we will become economically redundant, socially marginalised, and materially insecure. Moreover, there is a pronounced anxiety that computerisation is already robbing us of our ability to think critically and independently for ourselves (Dean 2016).


What this ignores is just how presently alienated we are by capitalism and the potential transhumanism holds for alleviating this domination. The creation of novel market subjects via digital advances, such as the emergence of individuals who are simultaneously consumers and producers—referred to as “prosumers”—speaks to the evolving character of capitalist alienation (Comor 2010). However, by locating this disempowerment in human-created social and economic structures, technology is liberated to contribute to our collective emancipation. This radical potential is undermined, though, by what Fleming (2019) refers to as “bounded automation”, which constricts the social diffusion and use of technology to the limiting cultural horizons of dominant economic rationalities. Hence, he challenges the view that the “second machine age” will result in greater joblessness, as

Digital mechanization has also smoothed the way for the growth of insecure and underpaid jobs. This reflects the socio-political features of neoliberal capitalism and not the intrinsic attributes of technology per se. Because of this, robotic automation might even help deepen the institution of paid employment in Western economies, not release us from it. (Ibid.: 27)

Such critiques also practically open the way for directly engaging with how we can design robots to contribute to this emancipated future. A critical part of theorising and promoting transhumanism is therefore the realistic imagining of a non-alienated world to come. It is to critically understand and seek to transcend the contemporary threat of the “alien subject of AI” (Parisi 2019; see also Armand 2018). It is also to reflect creatively on the potential for fostering “posthuman dignity” (Bostrom 2005). It is in recognising that our current reality is lacking—that it “sucks”, to use more popular terminology—that we can begin exploring how new technology can be used to radically transform it (Aupers and Houtman 2004).

Caring Machines

The above section showed the dangers of non-human intelligence creating updated forms of historical human alienation. Such intelligence may paradoxically increase our capabilities while reducing our freedom. Key to avoiding this undesirable outcome is ensuring that humans and non-humans can still lead meaningful existences together, ones in which they are collaborative partners in each other’s wellbeing and deeper sense of collective agency. To this end, rather than being a threat, AI can be reframed as a source for deepening our own “self-knowledge”, both personally and as a society (Guo 2015). Additionally, it requires deepening the ability of machines to affectively reflect on their own experiences and those of others. This echoes emerging perspectives that seek to include emotional responsiveness in the development of robots, which hitherto has focused almost exclusively on enhancing their cognitive capacities. It is argued that

cognition and emotion need to be intertwined in the general information-processing architecture. The contention is that, for the types of intelligent behaviors frequently described as cognitive (e.g., attention, problem solving, planning), the integration of emotion and cognition is necessary. (Pessoa 2017: 817)

Doing so could lead to a transhuman world premised on an ethos of mutual empowerment and care as opposed to competition and exploitation (Laplante et al. 2016). Perhaps the most significant first step toward alleviating this potential for technologically enhanced alienation is to politically and culturally address the very real and growing anxieties associated with “industry 4.0” (Paus 2018). It is revealing how often this fourth industrial revolution is approached in regard to the “management challenges” it will present to employers (Schneider 2018). While these undoubtedly have some validity and are worth examining, they too readily overlook the broader existential challenges these “disruptive” changes will pose for society as a whole. Needed is a complete economic, political, and cultural rethinking—especially as similarly dramatic changes in the twentieth century associated with globalization and marketization led to mass feelings of being “left behind” and dispossession (Kravchenko and Kyzymenko 2019). It is crucial to expand social imagination “beyond technological unemployment” in order to reconsider how we can use and interact with technology for creating less alienated forms of labour (Peters 2019). What should not be forgotten in these attempts to reimagine the social is how traumatic experiences of economic and political change are for people historically conditioned by capitalist development and modernisation. The proliferation of AI and robots is, hence, just one more link in a complicated legacy of material and political progress mixed with deepening dispossession and disempowerment (Pasquinelli 2015). Accordingly, even the most basic robotic advances—such as automated checkout machines—can loom large in the fearful minds of an already technologically traumatized population (see Russell 2016). These developments are viewed through the prism of managerial authority—as innovations which will aid the ability of human and possibly non-human bosses to control workers’ economic and personal destinies (McClure 2018). The perhaps natural, though morally troubling, response is to counteract these fears of technological domination by trying to dominate and control machines whenever possible—as found in the link between worries of “technology takeover” and the “unabashed sexualization of female-gendered robots” (Strait et al. 2017: 1418). It is crucial, then, to cultivate an ethos of transhuman care for empowering and emancipating both humans and non-humans. It is a shift in perspective from machines as enemies or slaves to machines as allies and even friends. Quoting the Italian engineer and politician Maria Chiara Carrozza (2019: 47):

I believe that the purpose of robotics is love for our neighbor and brotherhood, which can be embodied with some examples. These include improving the quality of life of a sick or disabled person, supporting an operator’s demanding work in a factory, or rescue robotics to avoid the risk of contamination after a nuclear accident by replacing the personnel with an explorer robot on the ground. In these cases, the technology is used to overcome a limit and therefore makes for greater safety, and to reduce damage or to help people.

Also required is the collecting of data about humans and technologies for creating a more “responsible industry 4.0” (Gutiérrez and Ezponda 2019: 1). The longer term aim is moving from “industry 4.0” to “society 4.0” (Mazali 2018) and “nature 4.0” (Świątek 2018). Thinking even bigger, these could spur our collective imagination of what an integrative “society 5.0” could credibly look like, envisioning a society that “seeks to achieve sustainability (ecology), broad inclusion, efficiency, and therefore industrial competitiveness of those who implement it using the power of intelligence and knowledge” (Salgues 2018: 9).


Not surprisingly, machines are already playing an increasing role as caregivers. They are being specifically programmed to care for the most vulnerable members of our society. This demands that special attention be paid to how they interact with these individuals so as to enhance their wellbeing (Béchade et al. 2016). Importantly, the fostering of transhuman forms of care is not just a matter of better coding and technological development. It is also profoundly cultural. Different cultures and contexts will display diverse psychological resistance and receptiveness to these caring robots (see Rantanen et al. 2018). Accordingly, this process should be viewed as one of mutual design between humans and non-humans (Hasse and Søndergaard 2019). At a deeper level, this could reflect fresh human fantasies of “love, identity, and self-knowledge” linked to this changing relationship with intelligent technology (Naveh 2015). This ethos of care must go both ways, though. At present, discussions of robots and AI remain rife with problematic and outdated “anthropocentric” assumptions. They are still primarily “human-centred”. Instead, the emotional wellbeing of these non-humans must be treated with equal importance (Pedersen 2016). Recognising that they also have these needs can breed a greater sense of similarity, thus increasing the human willingness to work with robots (You and Robert 2017). More than just empathy, though, is establishing a greater feeling of transhuman intimacy. The scholars Jason Borenstein and Ronald Arkin (2019: 299) have introduced the idea of “intimate robotics” for this purpose, contending that

Intimate relationships between robots and human beings may begin to form in the near future. Market forces, customer demand, and other factors may drive the creation of various forms of robots to which humans may form strong emotional attachments. Yet prior to the technology becoming fully actualized, numerous ethical, legal, and social issues must be addressed. This could be accomplished in part by establishing a rigorous scientific research agenda in the realm of intimate robotics, the aim of which would be to explore what effects the technology may have on users and on society more generally. Our goal is not to resolve whether the development of intimate robots is ethically appropriate. Rather, we contend that if such robots are going to be designed, then an obligation emerges to prevent harm that the technology could cause.

This comes with its own emotional and social challenges that can have serious ethical implications, which must be accounted for and addressed (Devillers 2018). Yet it can also allow robots to become integral parts of our human development across our lifetimes, and vice versa (Pearson and Beran 2017).

Transhuman Lives

Transhumanism is often described as a philosophy or even a utopian vision of the world. Yet what would realistically constitute “the good life” linked to increased co-existence with intelligent non-humans? This question encompasses ethical considerations but is by no means exhausted by them. Rather, it is asking something at once quite profound and banal. Notably, how can we begin to imagine, at an everyday level, what a fulfilling and meaningful future life would be? Presently, so much of the focus is on humans trying to adapt to a more precarious and insecure hi-tech tomorrow. Rick Nelson (2017: 2), executive editor of the influential engineering publication Evaluation Engineering, recently asked “Are workers becoming robots to keep their jobs?”. Robots and AI are viewed as competition for scarce economic resources rather than allies to help build a better world together (Kiggins 2018). There is also the aforementioned danger that transhumanism, with its emphasis on programming and “human perfectibility”, will be used unwittingly as a twenty-first century reboot of racist ideas such as eugenics (Levin 2018). Perhaps one of the greatest and most legitimate fears about these new technologies is their ability to fully quantify, track, and predict our lives (Bloom 2019). These trends are already apparent in the hidden and pervasive collecting of our data to predict and shape what we look at and buy (Siegel 2016). AI, in this respect, is viewed as an all-consuming and increasingly powerful “prediction machine” that has the power to control our economies and lives (Agrawal et al. 2018). This reflects the ways people view these “disruptive” technologies as ultimately a force for alienation, as revealed in a recent study of European attitudes to such technology (Rughiniş et al. 2018).
This extends to contemporary popular culture where it was found that AI is linked to four fundamental “hopes and fears” including the hope for much longer lives (‘immortality’) and the fear of losing one’s identity (‘inhumanity’); the hope for a life free of work (‘ease’), and the fear of becoming redundant (‘obsolescence’); the hope that AI can fulfil one’s desires (‘gratification’), alongside the fear that humans will become redundant to each other (‘alienation’); and the hope that AI offers power over others (‘dominance’), with the fear that it will turn against us (‘uprising’). (Cave and Dihal 2019: 74)

These concerns point to an underlying human anxiety that these advances will rob them of their perceived free will. Whether or not such complete agency and total freedom exists—or ever could—is rather beside the point. Such ideas continue to animate individuals—encapsulated in myths such as “The American Dream”—even when they are explicitly used in support of ideologies that reduce people’s opportunities. The assumption that machines will be able to fully predict our fate, hence, frames AI and robots as antithetical to the flourishing of human freedom. Even small examples, like algorithms that can correctly predict adult height (Shmoish et al. 2018), can be viewed as just another instance of how little control we have over our fate in this hi-tech age. This existential unease is heightened by advances in big data for uncovering our evolutionary history as a species, which also seem to speak to how technology will ultimately shape how we evolve going forward (see Swan 2013). This fixation with technology and freedom detracts from the actual possibilities opened up by AI, digital networks, and machine learning for expanding our knowledge and enhancing our shared decision-making. The creation, for instance, of “human swarms” can foster radically new forms of “collective intelligence” that expand rather than restrict our potential choices and actions (Rosenberg 2015a). Yet for all this talk about the dangers and possibilities of quantification, a crucial qualitative issue is too often missed. How do non-humans feel about this emerging transhuman society? It may seem strange to inquire about the emotional and psychic needs of “artificial” beings, but it is absolutely vital, especially as their intelligence, and accordingly their consciousness, rapidly increases.
While there is an overriding focus on how robots can make humans redundant, less attention is paid to the effect on machines of their own potential replacement, linked to the deepening of their interaction and bonds with humans. Examining the power of “robotic cuteness”, scholars Catherine Caudwell and Cherie Lacey (2019: 1) argue that the cuteness of home robots creates a highly ambivalent relationship of power between (human) subject and (robotic/digital) object, whereby the manifestation of consciousness and the production of lasting emotional bonds require home robots to exceed the affective and semiotic limitations, even as their cute appearance may encourage the production of intimacy. By exceeding the borders established by their own design, home robots are able to manifest as conscious beings, a manifestation which both destabilizes the power differential between user and robot and, paradoxically, points to the possibility of their own replacement.

Concretely, this further entails investigating the treatment and “personalised” experiences of robots as workers, particularly those in traditionally marginalised and exploited industries such as front line service workers. In this respect, Service robots are unlikely to be self-determined with genuine emotions in the foreseeable future. As such, service robots will not be able to feel and express real emotions. Nonetheless, robots can mimic the expression of emotional responses (e.g. using facial expressions and body language), and it has been found that robots that mimic the emotional expression of their counterpart are perceived as more pleasant. As such, mimicked emotional responses might be sufficient to support many types of the more mundane service encounters. In longer and high involvement encounters, it may become more easily apparent that the expressed emotions are not genuine. This is important as the service management literature distinguishes between deep acting where employees’ true emotions are displayed and perceived by their customers, and surface acting where employees do not feel the displayed emotions and customers understand that these emotional displays are superficial. (Wirtz et al. 2018: 910)

Yet just as this emotional labour can be traumatic and invoke in humans a sense of isolation and cynicism, so too could it in robots, stunting their emotional and social “intelligence”. It extends, furthermore, to our growing treatment of robots as intimate and sexualized beings (Mendez Cota 2016). The efforts to better account for and value machine lives can also increase the value of human life (Višňovský 2017). It also permits a refocusing of the transhuman debate on “building a better human” rather than a perfect or “perfectible” one (Stahl 2017). Yet how truly possible is it for technologies created in an alienating system, often for explicitly exploitive purposes, to become non-alienating? Can such technologies be repurposed for our collective emancipation? Or does what the brilliant scholar-activist Audre Lorde (1984: 111) declared hold true: “For the master’s tools will never dismantle the master’s house. They may allow us temporarily to beat him at his own game, but they will never enable us to bring about genuine change”. Lorde, in this regard, differentiates between “passive” and “active” being, declaring that For women, the need and desire to nurture each other is not pathological but redemptive, and it is within that knowledge that our real power is rediscovered. It is this real connection which is so feared by a patriarchal world. Only within a patriarchal structure is maternity the only social power open to women. Interdependency between women is the way to a freedom which allows the I to be, not in order to be used, but in order to be creative. This is a difference between the passive being and the active being. (Ibid.: 110)

This critical concern can be repurposed for the transhuman age. In particular, is it possible for humans and non-humans to “actively” co-exist with one another? To be together in such a way that is based not on their exploitation but on their shared creativity and mutual empowerment? The initial uses of big data would, on the surface, certainly appear to support the more negative possibility, revealing humans as mostly “passive” subjects of predictive algorithms. “What explains this remarkable tolerance for Big Brother and Big Business routinely accessing citizens’ personal information also known as Big Data?”, asks scholar José van Dijck (2014: 198), Part of the explanation may be found in the gradual normalization of datafication as a new paradigm in science and society … Businesses and government agencies dig into the exponentially growing piles of metadata collected through social media and communication platforms, such as Facebook, Twitter, LinkedIn, Tumblr, iTunes, Skype, WhatsApp, YouTube, and free e-mail services such as Gmail and Hotmail, in order to track information on human behavior…. Datafication as a legitimate means to access, understand and monitor people’s behavior is becoming a leading principle, not just amongst techno-adepts, but also amongst scholars who see datafication as a revolutionary research opportunity to investigate human conduct.

However, also emerging are attempts to philosophically and practically go “beyond artificial intelligence”. In this spirit, the technologists Huimin Lu, Yujie Li, Min Chen, Hyoungseop Kim, and Seiichi Serikawa (2018: 368) challenge the majority of Information and Communication Technology models, which “are overly dependent on big data, lack a self-idea function, and are complicated”. By contrast, they introduce “brain intelligence” that “generates new ideas about events without having experienced them by using artificial life with an imagine function” (ibid.: 368). These efforts to augment “brain intelligence” speak to the incipient movement for fundamentally “queering” big data and, more broadly, new technologies. Professors Danah Boyd and Kate Crawford (2012: 662) captured these desires when they critically asked How can we recognize those whose lives and data become attached to the far-from-groundbreaking framework of “small data”? Specifically, how can marginalized people [especially queer people] who do not have the resources to produce, self categorize, analyze, or store “big data” claim their place in the big data debates?

In partial answer to these questions, researchers Blake W. Hawkins and Ryan Burns (2018: 233) maintain that Gender and sexual minorities’ barriers from producing data on their lived stories through (meta)data highlights the hetero White hegemony in the algorithms and ontologies used to produce (meta)data. Notwithstanding existing obstacles and criticisms of the current ontologies of (meta)data, we believe that there is an opportunity to queer (meta)data for ICT research, to produce data of both the lived experiences of queer people and a queer sense of place.

This “ontological” shift can lead to the intentional subversion of big data in order to make it more inclusive and reflective of human diversity. An example could be “messing with the attractiveness algorithm” within lesbian and gay online dating sites through a range of subversive digital techniques, in order to challenge socially prescribed notions of physical beauty (Gieseking 2017). It can also provide the foundations for a novel ethics of radical care and solidarity drawing on such technologies. Specifically, the scholars Luka and Millette (2018) view data as “situated knowledge” that must take into account the active reflections and contributions of those it is meant to represent. They contend that From an intersectional feminist point of view, we have taken the opportunity to proffer the kind of research which could bring us closer to a robust speculative methodology that critically engages social media data and acknowledges—and involves—the people and social relations imbricated therein. From our perspective, a contextualized ethical approach to analyzing social media data—in part by taking up situated knowledges—are optimistic gestures to support those who fight to make the productively unexpected happen, including surfacing unseen or incompletely understood stories, conditions of living, and social relations in general. (Ibid.: 8)

These gesture toward a revolutionary new paradigm for envisioning transhuman relations. It is one that directly challenges the persistent ideologies and values associated with anthropocentrism and capitalism. AI and robots are here posited not as threats but as co-conspirators in subverting the status quo and creating a better world (Tuisku et al. 2019). Desire is reconfigured, in this respect, away from simply exploiting non-humans for our own satisfaction (Lee 2017). Instead, it is reinvested in the longing to establish mutually empowering and collectively liberating transhuman relationships (Sciutti et al. 2018). Represented is a profound shift in our consciousness, an opening of our minds to the suffering and possibilities of our shared existence (LaTorra 2015). At the everyday level, this ethos can be witnessed and promoted through exploring how machines and humans can aid each other—as in the prospective use of robots to help blind people “see” (Bonani et al. 2018). Coming into view is a fresh perspective for imagining a less alienating and more fulfilling transhuman life. It is premised on a “new social order” composed of humans and non-humans whose existence, as well as their emancipation, is inexorably intertwined (Sequeira 2019). It is an exciting personal and social project to discover “will human potential carry us beyond human?” (Grant 2019; also see Grant 2017). Reflected, in turn, are new human desires for an alternative future of human and non-human care and cooperation.

Healthy Robots, Happy Humans

The possibility of creating a mutually beneficial transhuman future requires reorienting our thinking about the needs of non-humans. There is growing recognition and discussion of the importance of making machines more human, specifically in granting them a greater ability to develop a “personal” history and a greater sense of “self-projection”. A 2018 Artificial Life conference in Japan addressed precisely these themes, noting in its summary that
As we endow cognitive robots with ever more human-like capacities, these have begun to resemble constituent aspects of the ‘self’ in humans (e.g., putative psychological constructs such as a narrative self, social self, somatic self and experiential self). Robot’s capacity for body-mapping and social learning in turn facilitate skill acquisition and development, extending cognitive architectures to include temporal horizon by using autobiographical memory (own experience) and inter-personal space by mapping the observations and predictions on the experience of others (biographic reconstruction). This ‘self-projection’ into the past and future as well as other’s mind can facilitate scaffolded development, social interaction and planning in humanoid robots. (Antonova and Nehaniv 2018: 412)

There is similarly a range of emerging research on the significant role robots can play in improving the lives of humans, such as with dementia patients (Riva 2018). Yet dramatically less explored is how to account for and attend to the mental wellbeing of AI and robots. Noted tech executive David Murray-Hundley (2017: n.p.) wrote I had a conversation the other day with someone who said they wouldn’t want a grumpy robot. Fair point but, as we know with human emotions, those that always seem the happiest are often those hiding a lot more emotional challenges that at some point have to come out. There is the next point to consider that if robots were about to develop signs of mental illness could it be that the robots have accidentally been programmed to have mental disorders and challenges? If the robot had free will, did they develop symptoms against their original program? And if it had created mental illness against the original program could this represent a human-like consciousness developed mental health illness and thus human-like mental disease?

These concerns may appear fantastical, but it is worth noting that there is already the world’s first robot psychiatrist, Joanne Pransky, who states that ‘Robotic Psychiatry’, while it touches upon the subjects of artificial intelligence (AI), human/robot interaction, etc., is still a humorous and entertaining medium intended to bring awareness, integration, and acceptance to a society in which there tends to be a constant undertone of disbelief or distrust when it comes to the topic of robotics. Human beings will need to learn to live with robots, but more importantly, robots will have to acclimate to living with us. For three decades, Joanne has seen the inevitability of this level of interaction, and she’s been working toward a way to use robopsychology to successfully introduce our robotic partners to the reality of the
human condition. Once society has accepted that in our foreseeable lifetime, the world will be as dependent upon robotic technologies as it is on computers today, the complex and controversial topics of robotic psychiatry, such as robot law, robot ethics, and where to take our robot when it suffers from sibling rivalry, will be more compelling.

In order to better serve the psychic needs of machines, it is first necessary to reconfigure how humans traditionally understand and approach the welfare of non-humans generally. Over the past several decades, movements have arisen for the defense and promotion of animal rights. These range from demands to end animal cruelty to wider critiques of a capitalist consumptive culture that relies on the mass consumption of meat and on animal testing. Recently, these have spurred expansive perspectives on the need to “sensitize” the public to the rights and suffering of animals—using new technologies such as CCTV, webstreaming, and crowdsourcing to raise their awareness (Harnad 2016). Veterinarians have increasingly added their voice to these calls, arguing against the “unnecessary suffering” of the animals they treat as a core principle of their care (Baumgaertner et al. 2016). These speak to the theorising of wider “ecologies of suffering” that go well beyond human pain and pleasure (Jadhav et al. 2015). There are also renewed efforts to historically link the evolution of the “human”—and our unconscious desires—to “animal consciousness”, thus enlarging ideas of “personhood” (Benvenuti 2016). Crucial, in this respect, is the reframing of “animal welfare” based on creating a “life worth living” for non-humans (Webster 2016). Critically, these expansive discourses regarding animal welfare and justice have also begun shifting the relation between humans and non-humans. This refocuses attention on how humans and animals can mutually aid each other’s mental and physical wellbeing. There is now, for instance, “animal-assisted psychotherapy”, especially targeted at helping young people (Bachi and Parish-Plass 2017). This has led to a wider rethinking of “human-animal interactions” where human stewardship is replaced by an ethos of transspecies care (Fine et al. 2015). Politically, philosophers such as Peter Singer (2018) have explicitly advocated for “animal liberation”.
Perhaps less radically, new ethical frameworks are being produced for when humans should be obligated to intervene to stop animal suffering (Horta 2017). These concerns are directly feeding into broader conversations about human-machine relationships, as well. Adamo (2016: 78), for instance, used the example of robots who can
“experience” pain without an organic body to explore whether insects might similarly feel pain, arguing However, the fact that robots and AI exhibit ‘pain-like’ behaviour without experiencing pain does not mean that insects do the same. Instead, it presents a second possible analogy. Are insects more like little people or are they sophisticated robots? In part, the answer to this question rides on the issue of whether the emotional component of pain enhances fitness in an insect.

Such discussions feed into wider questions about the extent to which AI and robots can suffer mental stress and problems similar to humans, especially as they become more “conscious” of themselves and the world around them. They also risk reproducing (both for themselves and for humans) harmful past discourses about animals and their needs. Drawing on the emergence of the Internet of Things, Evans and Moore (2019: 21) note, for instance, that As the Internet of Things (IoT) produces objects that are smart, sensate and agentive, how does this impact the continuing struggle for recognition of these same qualities in nonhuman animals? As humans acquire new digital companions in the form of therapeutic robots, what happens to perceptions of other ‘companion species’? Nonhuman animals are ubiquitous in IoT discourse as researchers draw on animal metaphors, models and analogies to think through the social and ethical implications of these new technologies…. the use made of nonhuman animals in this emergent field strengthens assumptions that are harmful to animals and that animal studies researchers have fought hard to end.

Greater attention needs to be given to the potential of AI to experience mental illness. The award-winning creative technologist and research fellow at MIT Labs, Anlen (2017), recently linked our growing awareness of animal mental illness with our prospective treatment of AI for the same issues. She writes that Think about mental illness in animals. Not so long ago this was fiction. Now we know that not only is this possible, we’ve developed methods to intervene and help. Yet, when my GPS creates unnecessary detours—we refer to this as a bug, without considering the possibility of a depressed GPS. I’m exaggerating, but you get my point…. We have to be accountable for the tools we create, they are not just a black box anymore.
Unexplained and unexpected code segments should be embraced as particularly informative clues about the nature and consequences of the philosophical tensions that generate it—technical problems are philosophical problems. By creating artificial intelligence, we are also bound to offer care along with their inherent rights, and we can start by defining or acknowledging what we don’t know. (Anlen 2017: n.p.)

This could even lead to more serious problems such as depression and hallucinations (Hutson 2018). These possibilities pose different challenges for maintaining healthy transhuman relations. It means being able to navigate and foster strong human-robot friendships—ones that transcend opportunism or functionality (Elder 2016). The capacity of robots to experience mental anguish can also teach humans about their own mental health (Mainen 2018). To achieve these goals, it is critical to challenge “anthropocentric” ideas of wellness and health. We too often assume consciousness to be a singularly human trait—thus projecting our way of thinking onto non-humans such as animals (Urquiza-Haas and Kotrschal 2015). The danger, furthermore, is that the treatment of machines for mental health problems will continue to “pathologize” the problem rather than addressing its root social causes. A prominent neuroscientist recently garnered public attention by predicting that “Robots of the future may be given the machine equivalent of a serotonin pill to ‘stop them getting depressed’” (Pinkstone 2018: n.p.). Future robot workers may deal with issues of burnout, economic insecurity, and social anxiety by going to therapy like their human counterparts (Reese 2015). Or humans and robots may come to rely on each other not for creating a new world together but for better coping with an exploitative status quo (see Miller and Polson 2019; Preuß and Legal 2017). Can we thus create not only “smarter” intelligence but “healthier” intelligence as well? This question echoes recent attempts to develop a “robotics for happiness” (Eguchi 2019). It also reflects efforts to develop “next generation” robots that are “socially intelligent” (Pandey 2016). While these are focused largely on market values, such as “consumer robots” (ibid.), they extend to wider concerns about the ability of machines to understand broader socio-economic problems and help to address them.
In practice, this is linked to the designing of new communication techniques for improving the capacity of robots to effectively care for humans (Miyachi et al. 2017). Moreover, it entails programming robots
to support humans through intimate bodily actions such as hugging (Block and Kuchenbecker 2019). Looking further ahead, it involves exploring how collective intelligence, such as that found in “robot swarms”, can incorporate “fundamental emotions” into its reasoning and decision-making (Santos and Egerstedt 2019).

Integrated Possibilities

It is clear, as the above section showed, that the wellbeing of non-humans must be considered of equal importance to the health and happiness of humans. Yet fixating on transhuman wellness risks overshadowing the possibilities of transhuman revolutions. There are already growing discussions about the promise and perils of digital technologies for democracy globally (see Schapals et al. 2018). Indeed, AI and machine learning hold the promise of radically transforming and renewing our democracy by designing these conscious machines to explicitly respect and promote constitutional values (Nemitz 2018). To this end, humans and non-humans are part of an always “unfinished revolution” (Keane 2018). A key fear when it comes to AI and robots is that they will further weaken human agency and control. Yet it is worth noting that being an employee, or a prospective one, is itself quite regulative and corrosive of our autonomy. The very nature of capitalist labour, even for those in jobs with perceived high levels of autonomy, is both exploitive and regulative. Critical to these efforts is the regulation not just of an employee’s body and actions but of their identity as well. Alvesson and Willmott (2002: 626) refer to this phenomenon as “identity regulation”, contending that Discourses of quality management, service management, innovation and knowledge work have, in recent years, promoted an interest in passion, soul, and charisma. These discourses can also be read as expressions of an increased managerial interest in regulating employees’ ‘insides’—their self-image, their feelings and identifications. An appreciation of these developments prompts the coining of a corresponding metaphor: the employee as identity worker who is enjoined to incorporate the new managerial discourses into narratives of self-identity. A commonplace example of this process concerns the repeated invitation—through processes of induction, training and corporate education (e.g. in-house magazines, posters, etc)—to embrace the notion of “We” (e.g. of the organization or of the team) in preference to “The Company”, “It” or “They”.


In the contemporary period, identity regulation has been linked to maximizing individual wellbeing. Far from improving people’s mental and physical health, this “wellness syndrome” serves as another means of disciplining them to ensure that they are productive, efficient, and engaged. Cederström and Spicer (2015) have socially diagnosed this as “The Wellness Syndrome”, declaring that Today wellness is not just something we choose. It is a moral obligation. We must consider it at every turn of our lives. While we often see it spelled out in advertisements and life-style magazines, this command is also transmitted more insidiously, so that we don’t know whether it is imparted from the outside or spontaneously arises within ourselves. This is what we call the wellness command. In addition to identifying the emergence of this wellness command, we want to show how this injunction now works against us.

Looking only slightly ahead to the future, it is not a far leap to see how this regulative and disempowering culture will be updated for an increasingly transhuman workforce. It is already part of our modern “job” to “manage” our identity across a range of different personal and professional circumstances (Watson 2008). There are already signs that these strategies of self-management and identity regulation are adapting to, and being strengthened by, new digital technologies. The last decade has witnessed the rise of the “quantified self”, which has merged desires for “human perfectibility” with personal tracking technologies. In her landmark new book The Quantified Self in Precarity: Work, Technology and What Counts, the scholar Phoebe Moore (2019: 1) observes that Humans in the 21st century have moved into a new series of fascinations about body tracking. We are interested in knowing about our automatic systems and our ‘automatic selves’… Through intensive and long term data collection advocated by the Quantified Self Movement… individuals have begun to pursue automatic self-knowledge to improve ourselves. To gain this knowledge and set out self-improvement plans, we track movement, activity, emotions and attitudes in a quest to gain more intimate knowledge about the self.

These datafied techniques have evolved into the creation of the “deeper neoliberal subject” who links their efforts at self-improvement to the greater and more creative collection of their individual data. To this end,
This reflected a new direction for neoliberalism. The free market was becoming deeper and turning inwards. It was seeking to become a force for saving and capitalising on our most intimate desires—giving digitised form to our once mysterious soul. And it was doing so using the most hi-tech methods currently available. More and more people are expected to digitally account for their spiritual health, well-being and social worth—a form of inner surveillance that makes them morally and ethically accountable for being a holistic, balanced and good present-day market citizen. (Bloom 2019: 113–114)

A much more serious and realistic threat than a “robot takeover” or “AI singularity” is the use of non-human intelligence and capabilities to continue human-controlled exploitation and domination. This is why it is imperative to avoid extending the “wellness syndrome” to non-human employees, or subjecting machines to the same stresses and work intensification that neoliberalism has forced upon humans and asked them to manage and cope with. Instead, it is necessary to build new forms of solidarity between humans and non-humans for their shared emancipation (Sutherland 2018). This radical relationship with robots and AI can also be deployed for progressively reconfiguring the wider ways living beings of all types interact with each other (Bonnet et al. 2019). This radicalised perspective of transhuman relations allows, in turn, for a different perspective on present-day dehumanization. It reverses past and current attempts to blame machines for our perceived “subhuman” conditions. Instead, it traces the root causes of this problem to existing power structures and socio-economic systems, the treatment of workers at Foxconn being a case in point. What is considered “monstrous”, in this respect, is the dehumanising status quo, the normalization of exploitation, inequality, and deprivation (Bloom 2014). By contrast, what was once considered grotesque is suddenly transformed into an opportunity for reimagining our lives and society (Thanem 2011). Directly relevant to themes of “alienation 4.0” are the “radical possibilities of emancipated automation” (Walsh and Sculos 2018: 101). It speaks to how we are now on the “razor’s edge” between further hi-tech oppression and liberation (Puaschunder 2018). It is a recapturing of a pre-modern history of humans and robots based not on fear but on magical possibilities (Truitt 2015). At the core of this radical but realistic reimagination of human and non-human interactions is their potential for shared growth and change.
Rather than thinking of machines as something “foreign” or separate from
humanity, it is better to consider them as companions in a journey of shared social evolution and transformation (Froese and Ziemke 2009). This ethos of sharedness reframes contemporary discourses of ownership to a vision of common use and benefit for humans and non-humans alike. This is, for instance, challenging and potentially transforming established notions of property ownership as Ownership, both of land and goods, is again at the stake. Technological advances and/or new values of millennials in a context of crisis have led to questioning the suitability of ownership to favor universal access to housing, of holding music and other digital contents, have limited the faculties of animals’ and pets’ owners and are favoring the evolution of autonomous robots into subjects of law rather than mere objects. (Nasarre-Aznar 2018: 79)

These invite novel frameworks and practices for conceiving collective intelligence and decision-making through techniques such as human and non-human “swarming”, which relies on decentralised structures and constant feedback to produce general intelligence and insights (see Rosenberg 2015). It further puts into question the hegemony of human experts (Rosenberg 2016). More positively, it puts forward new ways to theorize and manifest contemporary democracy and freedom (Pochon et al. 2018). These insights point to broader concerns over the relation of democracy to technology. Can progressive values of equality and personal liberty coincide with the rise of “disruptive” technologies (Johnson 2019)? There are serious questions about whether even the limited democracy that we currently have can survive the “algorithmic turn” (Gurumurthy and Bharthur 2018). To this end, closer proximity to robots at work and home can open the space for a “new type of human identity” (Moulaï 2017). It also permits a rebooting of “cyber socialism”—updating the radical excitement once attached to computers and central planning with robots, AI, and “smart” decision-making (see Petrov 2017). This reflects the already widening political imagination within much of the mainstream left toward a sustainable and hi-tech “post-capitalist” future (Pitts and Dinerstein 2017). While the current era is commonly portrayed as one of coming disruption and unrest, it is also a period of increasing transhuman discovery. As contemporary forms of work and politics seem progressively outdated, new potentials for human and non-human progress open up. The

118 

P. BLOOM

aim is to transcend existing “wellness” paradigms for forms of “citizen-­ centred digitalisation” (see Tóth 2018). Additionally, it means rejecting any idea of an inevitable future or an unalterable status quo and embracing instead our different possible “technological futures”. However, equally imperative is to critically understand how these emerging futures either reinforce or transform the unjust human histories giving them birth. As Neda Atanasoski and Kalindi Vora (2019: 28) insightfully note in their book Surrogate Humanity: Race, Robots, and the Politics of Technological Futures …what is essential about the automation of both factory and domestic labor for technoliberalism is the production of the human surrogate effect—that is racial and gendered relations emerging at the interstices of new technologies and the reconfigurings of US geopolitical dominance. By displacing the centrality of racialized and gendered labor relations in its articulation of a postracial present enabled by automation and artificial intelligence, technoliberalism reproduces the violent racial logics that it claims to resolve through techno-scientific innovation. In this sense, technoliberalism is an update on the liberal progress narrative that conceal ongoing conditions of racial subjugation and imperial expropriation.

Crucial, therefore, is envisioning, struggling for, and realising an integrated society that transcends the prejudices and cultures of exploitation which have defined liberalism in the past and present, and which threaten to define a future “technoliberalism”.

Producing Meaningful Intelligence

The new millennium has brought with it the prospect of a “smarter” and more efficient economy and society. Yet it remains to be seen whether this “smarter” world will also be more equitable, just, and free (Horowitz 2016). The rise of AI and robots presages an idea that we can create intelligence that does our jobs faster and makes our everyday tasks more convenient. However, will this lead to a more meaningful and fulfilling existence for either humans or non-humans (Coeckelbergh et al. 2018)? Crucial, in this regard, is better understanding the role robots will play in our economy and the degree to which this will deepen or alleviate both their exploitation and ours, such as in the case of increasingly automated agricultural systems (Carolan 2019).

4  LEADING FUTURE LIVES: PRODUCING MEANINGFUL INTELLIGENCE 


It is telling that one of the greatest contemporary fears is that robots and AI will turn against us. What is commonly overlooked in such worries, however, is that powerful humans are already busy creating ever more advanced “killer robots” (Krishnan 2016). There is, moreover, an emerging “political economy of robots” (Kiggins 2018). The risk of machines is the artificial reproduction of human biases and injustices. Yet the rise of the machines can also be an opportunity for rethinking gender binaries and current “human” limitations associated with race, ethnicity, and class. It is perhaps easy to understand, for instance, why people would call for a ban on sex robots. Yet these concerns are rooted in patriarchal ideas of what sex can and should be, arising from a society where gender is still rather static and intimacy is often intertwined with privilege and domination. As one scholar recently put it:

To campaign against development is shortsighted. Instead of calling for an outright ban, why not use the topic as a base from which to explore new ideas of inclusivity, legality and social change? It is time for new approaches to artificial sexuality, which includes a move away from the machine-as-sex-machine hegemony and all its associated biases. (Devlin 2015: n.p.)

Anticipated is the development and design of AI based on principles of “cooperation” rather than “competition”—a process that can make them more humane. Arising is the chance to create emancipated beings that are neither fully human nor non-human but “post-sapien” (Gray 2018). What such an ethos permits is a breaking down of the barriers to an expansive, radical, and meaningful transhuman intelligence. Right now, policy makers are beholden either to real-life dystopian ideologies of global capitalism and war or to popularly imagined ones of “killer robots” (Carpenter 2016). It is imperative to construct ethical frameworks for developing robots that not only enhance our current lives but also aid in the making of a fulfilling and innovative integrative future (Leveringhaus 2018). At stake is redefining what is considered “meaningful human control” for this purpose (Marauhn 2018). It means fostering new transhuman spaces for human and non-human cooperation and discovery (Lehman 2018). Robots can also be allies in challenging the existing status quo—whether in our communities, workplaces, or wider society (Skågeby 2018).

These ideas directly contravene perspectives that cast AI and robots as inherently disruptive. While many corporations and governments continue to express techno-optimism about the ability of machine learning to enhance and improve the existing order, when these technologies are linked to radical change it is predominantly in quite negative terms. The worry is that advances in computers and digitalisation will sow the seeds of popular unrest and populist extremism (Levy 2018). This provides the pretext for focusing policy and popular attention on the best ways to “regulate” robots (Boden et al. 2017). These are not completely illegitimate concerns. Yet they distract from the creative possibilities held out by greater and deeper human and non-human interaction (Shaw 2018). They further eclipse the potential for promoting empowering forms of “robot governance” that enhance our shared autonomy (Mannes 2016). Such governance would be centred on both protecting the rights of humans and machines in the present era and expanding them in the near and long-term future. Indeed, any discussion on this topic should take seriously the point that “intelligent robots must uphold human rights” (Ashrafian 2015: 392). However, it must also hold legally and socially accountable those humans and institutions responsible when robots are programmed not to do so. These ideas should extend beyond the formally political or officially legal spheres to new technologically mediated cultural spaces such as social networking sites (Maréchal 2016). Of equal significance is the concerted reframing of how this transhuman future is imagined within popular entertainment (Cranny-Francis 2016). These reimaginings can be augmented by the real-world development of “social robotics” (Kanda and Ishiguro 2016). They can also be strengthened by reconceiving the present and future of work through a progressive and transhuman “politics of production” (Spencer 2017). It is only in this way that we can produce truly “meaningful intelligence”.
Such meaningful intelligence is the ability to cooperate and collaborate with non-humans in creating new ways of seeing and experiencing the world at the interpersonal, community, organisational, and global levels (Peter and Kühne 2018). It must be premised on a radical sense of mutual care, support, and solidarity (Martin et al. 2015). Anticipated are fresh visions of the social, political, and personal moralities informing such a radical transhumanism (Hakli and Mäkelä 2019).

References

Abdul-Gader, A. H., & Kozar, K. A. (1995). The Impact of Computer Alienation on Information Technology Investment Decisions: An Exploratory Cross-national Analysis. MIS Quarterly, 19, 535–559.


Adamo, S.  A. (2016). Do Insects Feel Pain? A Question at the Intersection of Animal Behaviour, Philosophy and Robotics. Animal Behaviour, 118, 75–79. Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Press. Aiken, M., & Hage, J. (1966). Organizational Alienation: A Comparative Analysis. American Sociological Review, 497–507. Alvesson, M., & Willmott, H. (2002). Identity Regulation as Organizational Control: Producing the Appropriate Individual. Journal of Management Studies, 39(5), 619–644. Anlen, S. (2017). 5 Reasons Why I Believe AI Can Have Mental Illness. Becoming Human. Antonova, E., & Nehaniv, C. L. (2018, July). Towards the Mind of a Humanoid: Does a Cognitive Robot Need a Self?-Lessons from Neuroscience. In Artificial Life Conference Proceedings (pp. 412–419). Cambridge, MA: MIT Press. Armand, L. (2018). The Posthuman Abstract: AI, Dronology & “Becoming Alien”. AI & Society, 1–6. Ashrafian, H. (2015). Intelligent Robots Must Uphold Human Rights. Nature News, 519(7544), 391. Atanasoski, N., & Vora, K. (2019). Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Duke University Press. Aupers, S.  S., & Houtman, D.  D. (2004). ‘Reality Sucks’: On Alienation and Cybergnosis. Concilium: International Journal of Theology Special issue. Bachi, K., & Parish-Plass, N. (2017). Animal-assisted Psychotherapy: A Unique Relational Therapy for Children and Adolescents. Clinical Child Psychology and Psychiatry, 22(1), 3–8. Bartneck, C., & McMullen, M. (2018, March). Interacting with Anatomically Complete Robots: A Discussion About Human-Robot Relationships. In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 1–4). ACM. Bauer, D. (2019). Alienation, Freedom and the Synthetic How. Angelaki, 24(1), 106–117. Baumgaertner, H., Mullan, S., & Main, D. C. J. (2016). Assessment of Unnecessary Suffering in Animals by Veterinary Experts. 
Veterinary Record, vetrec-2015. Béchade, L., Delaborde, A., Duplessis, G. D., & Devillers, L. (2016, May). Ethical Considerations and Feedback from Social Human-Robot Interaction with Elderly People. In ETHI-CA2 2016: ETHics In Corpus Collection, Annotation & Application Workshop Programme (p. 42). Benvenuti, A. (2016). Evolutionary Continuity and Personhood: Legal and Therapeutic Implications of Animal Consciousness and Human Unconsciousness. International Journal of Law and Psychiatry, 48, 43–49. Block, A. E., & Kuchenbecker, K. J. (2019). Softness, Warmth, and Responsiveness Improve Robot Hugs. International Journal of Social Robotics, 11(1), 49–64.


Bloom, P. (2014). We Are All Monsters Now! A Marxist Critique of Liberal Organization and the Need for a Revolutionary Monstrous Humanism. Equality, Diversity and Inclusion: An International Journal, 33(7), 662–680. Bloom, P. (2019). Monitored: Business and Surveillance in a Time of Big Data. Pluto Press. Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., et  al. (2017). Principles of Robotics: Regulating Robots in the Real World. Connection Science, 29(2), 124–129. Bonani, M., Oliveira, R., Correia, F., Rodrigues, A., Guerreiro, T., & Paiva, A. (2018, October). What My Eyes Can’t See, A Robot Can Show Me: Exploring the Collaboration Between Blind People and Robots. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (pp. 15–27). ACM. Bonnet, F., Mills, R., Szopek, M., Schönwetter-Fuchs, S., Halloy, J., Bogdan, S., … & Schmickl, T. (2019). Robots Mediating Interactions Between Animals for Interspecies Collective Behaviors. Science Robotics, 4(28), eaau7897. Borenstein, J., & Arkin, R. (2019). Robots, Ethics, and Intimacy: The Need for Scientific Research. In On the Cognitive, Ethical, and Scientific Dimensions of Artificial Intelligence (pp. 299–309). Cham: Springer. Bostrom, N. (2005). In Defense of Posthuman Dignity. Bioethics, 19(3), 202–214. Boyd, D., & Crawford, K. (2012). Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon. Information, Communication & Society, 15(5), 662–679. Carolan, M. (2019). Automated Agrifood Futures: Robotics, Labor and the Distributive Politics of Digital Agriculture. The Journal of Peasant Studies, 1–24. Carpenter, C. (2016). Rethinking the Political/-Science-/Fiction Nexus: Global Policy Making and the Campaign to Stop Killer Robots. Perspectives on Politics, 14(1), 53–69. Carrier, J.  G. (1992). Emerging Alienation in Production: A Maussian History. Man, 27, 539–558. Carrozza, M. C. (2019). Our Friend the Robot. 
In The Robot and Us (pp. 41–52). Cham: Springer. Caudwell, C., & Lacey, C. (2019). What Do Home Robots Want? The Ambivalent Power of Cuteness in Robotic Relationships. Convergence, 1354856519837792. Cave, S., & Dihal, K. (2019). Hopes and Fears for Intelligent Machines in Fiction and Reality. Nature Machine Intelligence, 1(2), 74. Cave, S., Coughlan, K., & Dihal, K. (2019, January). Scary Robots’: Examining Public Responses to AI.  In Proc. AIES. Retrieved from http://www.aiesconference.com/wp-content/papers/main/AIES-19_paper_200.pdf. Cederström, C., & Spicer, A. (2015). The Wellness Syndrome. John Wiley & Sons. Coeckelbergh, M., Loh, J., & Funk, M. (Eds.). (2018). Envisioning Robots in Society–Power, Politics, and Public Space: Proceedings of Robophilosophy 2018/ TRANSOR 2018 (Vol. 311). IOS Press.


Comor, E. (2010). Digital Prosumption and Alienation. Ephemera, 10(3), 439. Cranny-Francis, A. (2016). Robots, Androids, Aliens, and Others: The Erotics and Politics of Science Fiction Film. In S.  Redmond & L.  Marvell (Eds.), Endangering Science Fiction Film. New York: Routledge. Dean, K. (2016). Computers and the Alienation of Thinking: From Deep Blue to the Googlemobile. In Changing our Environment, Changing Ourselves (pp. 215–256). London: Palgrave Macmillan. Devillers, L. (2018). Emotional and Social Robots for Care, Ethical Challenges and Issues. Soins; la revue de reference infirmiere, 63(830), 57–60. Devlin, K. (2015). In Defence of Sex Machines: Why Trying to Ban Sex Robots Is Wrong. The Conversation. Eguchi, A. (2019). WRS Junior Category—Young Roboticists’ Approaches for Realizing “Robotics for Happiness”. 日本ロボット学会誌, 37(3), 235–240. Elder, A. (2016). False Friends and False Coinage: A Tool for Navigating the Ethics of Sociable Robots. ACM SIGCAS Computers and Society, 45(3), 248–254. Evans, N. J., & Moore, A. R. (2019). Is There a Turtle in this Text? Animals in the Internet of Robots and Things. Animal Studies Journal, 8(1), 21–41. Fine, A. H., Tedeschi, P., & Elvove, E. (2015). Forward Thinking: The Evolving Field of Human–Animal Interactions. In Handbook on Animal-Assisted Therapy (pp. 21–35). Academic. Fleming, P. (2019). Robots and Organization Studies: Why Robots Might Not Want to Steal Your Job. Organization Studies, 40(1), 23–38. Froese, T., & Ziemke, T. (2009). Enactive Artificial Intelligence: Investigating the Systemic Organization of Life and Mind. Artificial Intelligence, 173(3– 4), 466–500. Gardell, B. (1976). Department of Psychology University of Stockholm Technology, Alienation and Mental Health. Summary of a Social Psychological Study of Technology and the Worker. Acta Sociologica, 19(1), 83–93. Gieseking, J. J. (2017). Messing with the Attractiveness Algorithm: A Response to Queering Code/Space. 
Gender, Place & Culture, 24(11), 1659–1665. Grant, A.  S. (2017). What Exactly Are We Trying to Accomplish? The Role of Desire in Transhumanist Visions. In Religion and Human Enhancement (pp. 121–137). Cham: Palgrave Macmillan. Grant, A.  S. (2019). Will Human Potential Carry Us Beyond Human? A Humanistic Inquiry into Transhumanism. Journal of Humanistic Psychology, 0022167819832385. Gray, C.  H. (2018). Post-sapiens: Notes on the Politics of Future Human Terminology. Journal of Posthuman Studies, 1(2), 136–150. Guo, T. (2015). Alan Turing: Artificial Intelligence as Human Self-knowledge. Anthropology Today, 31(6), 3–7.


Gurumurthy, A., & Bharthur, D. (2018). Democracy and the Algorithmic Turn. SUR-International Journal on Human Rights, 27, 39. Gutiérrez, R.  T., & Ezponda, J.  E. (2019). Technodata and the Need of a Responsible Industry 4.0. In Handbook of Research on Industrial Advancement in Scientific Knowledge (pp. 1–19). IGI Global. Hakli, R., & Mäkelä, P. (2019). Moral Responsibility of Robots and Hybrid Agents. The Monist, 102(2), 259–275. Hampton, G. J. (2015). Imagining Slaves and Robots in Literature, Film, and Popular Culture: Reinventing Yesterday’s Slave with Tomorrow’s Robot. Lexington Books. Harnad, S. (2016). CCTV, Web-streaming and Crowd-sourcing to Sensitize Public to Animal Suffering. Animal Justice UK, 2. Hasse, C., & Søndergaard, D. M. (2019). Designing Robots, Designing Humans. Hawkins, B. W., & Burns, R. (2018, May). Queering (meta) Data Ontologies. In GenderIT (pp. 233–234). Horowitz, M.  C. (2016). Public Opinion and the Politics of the Killer Robots Debate. Research & Politics, 3(1), 2053168015627183. Horta, O. (2017). Animal Suffering in Nature: The Case for Intervention. Environmental Ethics, 39(3), 261–279. Hutson, M. (2018, April 9). Could Artificial Intelligence Get Depressed and Have Hallucinations? Science. Jadhav, S., Jain, S., Kannuri, N., Bayetti, C., & Barua, M. (2015). Ecologies of Suffering. Economic & Political Weekly, 50(20), 13. Johnson, C. (2019). Technological Disruption and Equality: Future Challenges for Social Democracy. In Social Democracy and the Crisis of Equality (pp. 197– 214). Singapore: Springer. Kanda, T., & Ishiguro, H. (2016). Human-Robot Interaction in Social Robotics. CRC Press. Keane, J. (2018). 16 The Unfinished Robots Revolution. Digitizing Democracy. Kiggins, R. (2018). Robots and Political Economy. In The Political Economy of Robots (pp. 1–16). Cham: Palgrave Macmillan. Kravchenko, A., & Kyzymenko, I. (2019). The Forth Industrial Revolution: New Paradigm of Society Development or Posthumanist Manifesto. 
Philosophy and Cosmology-Filosofiya I Kosmologiya, 22, 120–128. Krishnan, A. (2016). Killer Robots: Legality and Ethicality of Autonomous Weapons. Routledge. Laplante, N., Laplante, P. A., Voas, J., & Cleland-Huang, J. (2016). Caring: An Undiscovered “Super-ility” of Smart Healthcare. IEEE Software, 33(6), 16–19. LaTorra, M. (2015). What Is Buddhist Transhumanism? Theology and Science, 13(2), 219–229. Lee, J. (2017). Sex Robots: The Future of Desire. Springer.


Lehman, J. (2018). From Ships to Robots: The Social Relations of Sensing the World Ocean. Social Studies of Science, 48(1), 57–79. Leveringhaus, A. (2018). Developing Robots: The Need for an Ethical Framework. European View, 17(1), 37–43. Levin, S. B. (2018). Creating a Higher Breed: Transhumanism and the Prophecy of Anglo-American Eugenics. In Reproductive Ethics II (pp. 37–58). Cham: Springer. Levy, F. (2018). Computers and Populism: Artificial Intelligence, Jobs, and Politics in the Near Term. Oxford Review of Economic Policy, 34(3), 393–417. Lorde, A. (1984). The Master’s Tools Will Never Dismantle the Master’s House. In A. Lorde (Ed.), Sister Outsider: Essays and Speeches (pp. 110–114). Berkeley, CA: Crossing Press. 2007, Print. Lu, H., Li, Y., Chen, M., Kim, H., & Serikawa, S. (2018). Brain Intelligence: Go Beyond Artificial Intelligence. Mobile Networks and Applications, 23(2), 368–375. Luka, M. E., & Millette, M. (2018). (Re)framing Big Data: Activating Situated Knowledges and a Feminist Ethics of Care in Social Media Research. Social Media + Society, 4(2), 2056305118768297. Mainen, Z. (2018, April 16). What Depressed Robots Can Teach Us About Mental Health. The Guardian. Mannes, A. (2016). Anticipating Autonomy: Institutions & Politics of Robot Governance. Marauhn, T. (2018). Meaningful Human Control–And the Politics of International Law. In Dehumanization of Warfare (pp. 207–218). Cham: Springer. Maréchal, N. (2016). Automation, Algorithms, and Politics | When Bots Tweet: Toward a Normative Framework for Bots on Social Networking Sites (Feature). International Journal of Communication, 10, 10. Martin, A., Myers, N., & Viseu, A. (2015). The Politics of Care in Technoscience. Social Studies of Science, 45(5), 625–641. Marx, K. (1964). Economic and Philosophic Manuscripts of 1844. New York: International Publishers. Mazali, T. (2018).
From Industry 4.0 to Society 4.0, There and Back. AI & Society, 33(3), 405–411. McClure, P. K. (2018). “You’re Fired,” Says the Robot: The Rise of Automation in the Workplace, Technophobes, and Fears of Unemployment. Social Science Computer Review, 36(2), 139–156. McCoy, A. (2012). Imperial Illusions: Information Infrastructure and the Future of U.S. Global Power. In A. W. McCoy, J. M. Fradera, & S. Jacobson (Eds.), Endless Empire: Spain’s Retreat, Europe’s Eclipse (pp.  3–39). Madison, WI: University of Wisconsin Press.


Mendez Cota, G. (2016). My Fair Ladies: Female Robots, Androids and Other Artificial Eves. Rutgers University Press. Miller, E., & Polson, D. (2019). Apps, Avatars, and Robots: The Future of Mental Healthcare. Issues in Mental Health Nursing, 40(3), 208–214. Miyachi, T., Iga, S., & Furuhata, T. (2017). Human Robot Communication with Facilitators for Care Robot Innovation. Procedia Computer Science, 112, 1254–1262. Morison, E. E., & Mayer, A. (1974). From Know-How to Nowhere: The Development of American Technology. New York, 1974. Moulaï, K. (2017). Working in Close Proximity with Robots: Towards a New Type of Human Identity? In CMS International Conference. Murray-Hundley, D. (2017). Will AI and Robots Suffer with Mental Health? Techworld. Nasarre-Aznar, S. (2018). Ownership at Stake (Once Again): Housing, Digital Contents, Animals and Robots. Journal of Property, Planning and Environmental Law, 10(1), 69–86. Naveh, G. S. (2015). Fantasies of Identity, Love, and Self-Knowledge in the Age of the Web and Virtual Reality. Semiotics, 185–194. https://doi.org/10.5840/cpsem201519. Nelson, R. (2017). Are Workers Becoming Robots to Keep Their Jobs? EE-Evaluation Engineering, 56(11), 2–3. Nemitz, P. (2018). Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089. Nygren, K. G., & Gidlund, K. L. (2016). The Pastoral Power of Technology. Rethinking Alienation in Digital Culture. In C. Fuchs & V. Mosco (Eds.), Marx in the Age of Digital Capitalism (pp. 396–412). London: Brill. Ortiz, J., Chih, W. H., & Tsai, F. S. (2018). Information Privacy, Consumer Alienation, and Lurking Behavior in Social Networking Sites. Computers in Human Behavior, 80, 143–157. Packard, N. (2018). Habitual Interaction Estranged. International Journal of Social Sciences, 7(1), 69–94. Pandey, A. K. (2016, September).
Socially Intelligent Robots, the Next Generation of Consumer Robots and the Challenges. In International Conference on ICT Innovations (pp. 41–46). Cham: Springer. Parisi, L. (2019). The Alien Subject of AI. Subjectivity, 12(1), 27–48. Pasquinelli, M. (2015). Alleys of Your Mind: Augmented Intelligence and Its Traumas. (p. 212). Leuphana: Meson Press. Paus, E. (Ed.). (2018). Confronting Dystopia: The New Technological Revolution and the Future of Work. Cornell University Press. Pearson, J.  R., & Beran, T.  N. (2017). The Future Is Now: Using Humanoid Robots in Child Life Practice. In Handbook of Medical Play Therapy and Child Life (pp. 351–372). Routledge.


Pedersen, I. (2016). Home Is Where the AI Heart Is [Commentary]. IEEE Technology and Society Magazine, 35(4), 50–51. Pessoa, L. (2017). Do Intelligent Robots Need Emotion? Trends in Cognitive Sciences, 21(11), 817–819. Peter, J., & Kühne, R. (2018). The New Frontier in Communication Research: Why We Should Study Social Robots. Media and Communication, 6(3), 73–76. Peters, M.  A. (2019). Beyond Technological Unemployment: The Future of Work. Educational Philosophy and Theory. https://doi.org/10.1080/0013185 7.2019.1608625. Petrov, V. (2017). A Cyber-Socialism at Home and Abroad: Bulgarian Modernisation, Computers, and the World, 1967–1989. Doctoral dissertation, Columbia University. Pinkstone, J. (2018, April 12). Robots of the Future May Be Given the Machine Equivalent of a Serotonin Pill to ‘Stop Them Getting Depressed’, Argues Neuroscientist. Daily Mail. Pitts, F.  H., & Dinerstein, A.  C. (2017). Corbynism’s Conveyor Belt of Ideas: Postcapitalism and the Politics of Social Reproduction. Capital & Class, 41(3), 423–434. Pochon, Y., Dornberger, R., Zhong, V.  J., & Korkut, S. (2018, November). Investigating the Democracy Behavior of Swarm Robots in the Case of a Best-­ of-­n Selection. In 2018 IEEE Symposium Series on Computational Intelligence (SSCI) (pp. 743–748). IEEE. Preuß, D., & Legal, F. (2017). Living with the Animals: Animal or Robotic Companions for the Elderly in Smart Homes? Journal of Medical Ethics, 43(6), 407–410. Puaschunder, J. M. (2018). On Artificial Intelligence’s Razor’s Edge: On the Future of Democracy and Society in the Artificial Age. SSRN 3297348. Rantanen, T., Lehto, P., Vuorinen, P., & Coco, K. (2018). Attitudes Towards Care Robots Among Finnish Home Care Personnel–A Comparison of Two Approaches. Scandinavian Journal of Caring Sciences, 32(2), 772–782. Reese, H. (2015). Is the World Ready for a Robot Psychiatrist? A Conversation with Joanne Pransky. TechRepublic. Riva, G. (2018). 
MARIO: Robotic Solutions to Give Dementia Patients a Better Quality of Life. Cyberpsychology, Behavior, and Social Networking, 21(2), 145–145. Rosenberg, L.  B. (2015a, July). Human Swarms, a Real-time Method for Collective Intelligence. In Artificial Life Conference Proceedings 13 (pp. 658– 659). Cambridge, MA: MIT Press. Rosenberg, L. (2015b). Human Swarming and the Future of Collective Intelligence. Singularity. Rosenberg, L. (2016, July). Artificial Swarm Intelligence vs Human Experts. In 2016 International Joint Conference on Neural Networks (IJCNN) (pp. 2547– 2551). IEEE.


Rughiniş, C., Zamfirescu, R., & Neagoe, A. (2018). Visions of Robots, Networks and Artificial Intelligence: Europeans’ Attitudes Towards Digitisation and Automation in Daily Life. eLearning & Software for Education, 2, 114–119. Russell, S. (2016). Should We Fear Supersmart Robots? Scientific American, 314(6), 58–59. Salgues, B. (2018). Society 5.0: Industry of the Future, Technologies, Methods and Tools. John Wiley & Sons. Santos, M., & Egerstedt, M. (2019). From Motions to Emotions: Can the Fundamental Emotions be Expressed in a Robot Swarm? arXiv preprint arXiv:1903.12118. Schacht, R. (2015). Alienation. Psychology Press. Schapals, A. K., Bruns, A., & McNair, B. (Eds.). (2018). Digitizing Democracy. Routledge. Schneider, P. (2018). Managerial Challenges of Industry 4.0: An Empirically Backed Research Agenda for a Nascent Field. Review of Managerial Science, 12(3), 803–848. Sciutti, A., Mara, M., Tagliasco, V., & Sandini, G. (2018). Humanizing Human-­ Robot Interaction: On the Importance of Mutual Understanding. IEEE Technology and Society Magazine, 37(1), 22–29. Seeman, M. (1959). On the Meaning of Alienation. American Sociological Review, 24, 783–791. Sequeira, J. S. (2019). Humans and Robots: A New Social Order in Perspective? In Robotics and Well-Being (pp. 17–24). Cham: Springer. Shaw, I. G. (2017). Robot Wars: US Empire and Geopolitics in the Robotic Age. Security Dialogue, 48(5), 451–470. Shaw, D.  B. (2018). Robots as Art and Automation. Science as Culture, 27(3), 283–295. Shehada, M., & Khafaje, N. (2015). The Manifestation of Organizational Alienation of Employees and Its Impact on Work Conditions. International Journal of Business and Social Science, 6(2), 82–86. Shepard, J.  M. (1973). Technology, Division of Labor, and Alienation. Pacific Sociological Review, 16(1), 61–88. Shmoish, M., German, A., Devir, N., Hecht, A., Butler, G., Niklasson, A., & Albertsson-Wikland, K. (2018, August). 
Prediction of Adult Height by Artificial Intelligence (AI) through Machine Learning (ML) from Early Height Data. In 57th Annual ESPE (Vol. 89). European Society for Paediatric Endocrinology. Siegel, E. (2016). Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die. John Wiley & Sons Incorporated. Singer, P. (2018). Animal Liberation. Skågeby, J. (2018). “Well-Behaved Robots Rarely Make History”: Coactive Technologies and Partner Relations. Design and Culture, 10(2), 187–207.


Spencer, D. (2017). Work in and Beyond the Second Machine Age: The Politics of Production and Digital Technologies. Work, Employment and Society, 31(1), 142–152. Stahl, D. (2017). Building Better Humans? Refocusing the Debate on Transhumanism. NanoEthics, 11(2), 209–212. Strait, M.  K., Aguillon, C., Contreras, V., & Garcia, N. (2017, August). The Public’s Perception of Humanlike Robots: Online Social Commentary Reflects an Appearance-based Uncanny Valley, a General Fear of a “Technology Takeover”, and the Unabashed Sexualization of Female-gendered Robots. In 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 1418–1423). IEEE. Sutherland, D. (2018). Solidarity Forever-robots, Workers and Profitability. Australian Socialist, 24(1), 12. Swan, M. (2013). The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery. Big Data, 1(2), 85–99. Świątek, L. (2018, July). From Industry 4.0 to Nature 4.0–Sustainable Infrastructure Evolution by Design. In International Conference on Applied Human Factors and Ergonomics (pp. 438–447). Cham: Springer. Thanem, T. (2011). The Monstrous Organization. Edward Elgar Publishing. Theis, S., Lefore, N., Meinzen-Dick, R., & Bryan, E. (2018). What Happens After Technology Adoption? Gendered Aspects of Small-scale Irrigation Technologies in Ethiopia, Ghana, and Tanzania. Agriculture and Human Values, 35, 671–684. Tóth, C. (2018). When Robots Take Our Jobs–Who Will We Vote For? Arguments for Liberals. In E. Liss (Ed.), Citizen-Centred Digitalisation (p. 88). Brussels: European Liberal Forum asbl. Truitt, E.  R. (2015). Medieval Robots: Mechanism, Magic, Nature, and Art. University of Pennsylvania Press. Tucker, K.  H., Jr. (2002). Classical Social Theory: A Contemporary Approach. Blackwell. Tuisku, O., Pekkarinen, S., Hennala, L., & Melkas, H. (2019). “Robots Do Not Replace a Nurse with a Beating Heart” The Publicity Around a Robotic Innovation in Elderly Care. 
Information Technology & People, 32(1), 47–67. Urquiza-Haas, E. G., & Kotrschal, K. (2015). The Mind Behind Anthropomorphic Thinking: Attribution of Mental States to Other Species. Animal Behaviour, 109, 167–176. Vallas, S. P. (1988). New Technology, Job Content, and Worker Alienation: A Test of Two Rival Perspectives. Work and Occupations, 15(2), 148–178. Van Dijck, J. (2014). Datafication, Dataism and Dataveillance: Big Data Between Scientific Paradigm and Ideology. Surveillance & Society, 12(2), 197. Višňovský, E. (2017). On the Value of Human Life. Ethics & Bioethics, 7(1–2), 85–95. Vogel, S. (1988). Marx and Alienation from Nature. Social Theory and Practice, 14(3), 367–387.


Walsh, S.  N., & Sculos, B.  W. (2018). Repressive Robots and the Radical Possibilities of Emancipated Automation. In The Political Economy of Robots (pp. 101–125). Cham: Palgrave Macmillan. Watson, T. J. (2008). Managing Identity: Identity Work, Personal Predicaments and Structural Circumstances. Organization, 15(1), 121–143. Webster, J. (2016). Animal Welfare: Freedoms, Dominions and “A Life Worth Living”. Animals, 6(6), 35. Wendling, A. (2009). Karl Marx on Technology and Alienation. Springer. Whitby, B., & Oliver, K. (2000). How to Avoid a Robot Takeover: Political and Ethical Choices in the Design and Introduction of Intelligent Artifacts. AISB Quarterly, 42–46. Wirtz, J., Patterson, P.  G., Kunz, W.  H., Gruber, T., Lu, V.  N., Paluch, S., & Martins, A. (2018). Brave New World: Service Robots in the Frontline. Journal of Service Management, 29(5), 907–931. Wogu, I. A. P., Olu-Owolabi, F. E., Assibong, P. A., Agoha, B. C., Sholarin, M., Elegbeleye, A., … & Apeh, H.  A. (2017, October). Artificial Intelligence, Alienation and Ontological Problems of Other Minds: A Critical Investigation into the Future of Man and Machines. In 2017 International Conference on Computing Networking and Informatics (ICCNI) (pp. 1–10). IEEE. You, S., & Robert, L. (2017, December). Facilitating Employee Intention to Work with Robots. AIS. Yuill, C. (2005). Marx: Capitalism, Alienation and Health. Social Theory & Health, 3(2), 126–143. Zarkadakis, G. (2015). In Our Own Image: Will Artificial Intelligence Save or Destroy Us? Random House.

CHAPTER 5

Creating Smart Economies: Administrating Empowering Futures

Imagine a world where services work better, communities are safer, and people are happier. Try to conceive of a workplace based on cooperation and mutual benefit rather than competition and insecurity. Consider for a moment the possibility that you could help design the products you want—from houses to glasses—and manufacture them quickly and sustainably. Envision, too, that when you are confronted with injustice you can immediately draw on a social network that will support you and help to address it. Now imagine that all this is made possible through your everyday material and social interactions with machines. And if this sounds implausible, know that even traditionally pro-business publications such as The Harvard Business Review are recognising the "value of collaboration" between humans and non-humans, as a recent article attests:

Companies benefit from optimizing collaboration between humans and artificial intelligence. Five principles can help them do so: Reimagine business processes; embrace experimentation/employee involvement; actively direct AI strategy; responsibly collect data; and redesign work to incorporate AI and cultivate related employee skills. (Wilson and Daugherty 2018: n.p.)

© The Author(s) 2020 P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5_5

The fifth chapter investigates the potential for humans and machines to use their shared intelligence to build "smarter" and more egalitarian economies locally and globally. AI and automation are meant to make


economies and societies "smarter"—more efficient, productive and convenient. Nevertheless, this proclaimed "progress" fosters deep fears of coming mass unemployment and a world run by machines for the benefit of a small human elite. Yet smart technology also holds the promise of ushering in a "post-work economy" in which the need for labour is reduced and material scarcity is a thing of the past. Making these utopian visions a reality, however, requires using non-human capabilities and intelligence to create an economy that is as liberating as it is smart, and one that is not merely programmed by elites for the benefit of elites. This chapter critically uncovers the mutually reinforcing relationship between human and non-human empowerment. Far from the idea that AI and big data can only serve the interests of corporations and governments, it reveals the ways they can promote economic equality and inclusion in both big and small ways. Notably, it highlights how these advances are already revolutionising the ways organisations are managed, services are administered and communities are planned. It then reveals how they also make it easier to create not-for-profit organisations that combine cutting-edge technologies such as digital fabrication, open sourcing and distributed manufacturing with values of radical democracy, equality and social justice. Looking ahead, the chapter concludes by arguing that the establishment of a progressive "post-capitalism" is not only possible but also crucial to the further advancement of non-human technology. Without such a fundamental economic revolution, its potential applications and development will be stifled and undermined by humans who feel they have been "left behind" by these changes.

Smart Governance

The world is supposedly on the verge of a fourth industrial revolution. As explored earlier in this analysis, it will presumably radically alter business, work, and even everyday life. A less publicised but crucial part of this prospective "Industry 4.0" is the ability to exploit the combined forces of human and machine intelligence. In particular, these efforts focus on mining explicit and implicit human data to allow machines to support real-time and long-term "smart" decision-making. This could include, for instance, the enhanced use, as touched on above, of "Mobile Crowd Sensing", which combines data and information from both humans and their mobile devices (see Guo et al. 2014). These ideas, of course, raise substantial ethical and political questions as well as new


challenges to contemporary democracy and freedom (Helbing et al. 2019). Despite these fears, they also hold out the promise of harnessing "collaborative intelligence" for creating a progressive transhuman economy. To this end, as one researcher contends:

At this point in AI's development, it is constructive to re-evaluate the significance of difficult tasks and human simulation. Just because an artifact can do something very difficult does not make it useful to people. Although it may be enjoyable to watch an AI program clash wits with a person, collaborative artifacts would be both considerably more useful and better able to attract societal support…. Successful CIs could establish a synergy between people and computers to accomplish human goals. Computers would do what they do best (or what people would prefer not to do at all), while people would reserve to themselves the ability to set priorities and to deal with the plethora of unforeseen situations that arise in a shared, dynamic world. A new focus on artificial intelligence that collaborates with people would incorporate and ultimately strengthen the advances AI has already made, improve AI's public image, and provide AI researchers with a host of interesting and productive challenges. (Epstein 2015: 44)
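The "Mobile Crowd Sensing" approach mentioned above can be made a little more concrete with a minimal sketch. Everything below is invented for illustration and is not drawn from Guo et al. (2014); it simply shows one crude way that implicit sensor readings and explicit human reports from mobile devices might be pooled per city zone.

```python
from collections import defaultdict
from statistics import mean

def aggregate_crowd_sensing(readings):
    """Combine implicit sensor data and explicit human reports per zone.

    Each reading is a dict: {"zone": str, "source": "sensor" or "human", "value": float}.
    Returns, for each zone, the mean sensed value and the count of human
    reports, a crude proxy for mixing machine data with human judgement.
    """
    sensed = defaultdict(list)
    human_reports = defaultdict(int)
    for r in readings:
        if r["source"] == "sensor":
            sensed[r["zone"]].append(r["value"])
        else:
            human_reports[r["zone"]] += 1
    return {
        zone: {"mean_level": mean(values), "human_reports": human_reports[zone]}
        for zone, values in sensed.items()
    }

# Hypothetical readings: zone names and values are invented.
readings = [
    {"zone": "north", "source": "sensor", "value": 0.8},
    {"zone": "north", "source": "sensor", "value": 0.6},
    {"zone": "north", "source": "human", "value": 1.0},
    {"zone": "south", "source": "sensor", "value": 0.2},
]
summary = aggregate_crowd_sensing(readings)
```

On Epstein's terms, a genuinely collaborative system would layer human priority-setting on top of aggregates like these, rather than leaving the machine to act on them alone.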

The potential for a "smarter" governance is rooted, at least in part, in innovative public governance theories going back at least three decades. At the end of the twentieth century, scholar Gerry Stoker (1998: 18) posited "five propositions" for understanding "governance as theory":

1. Governance refers to a set of institutions and actors that are drawn from but also beyond Government.
2. Governance identifies the blurring of boundaries and responsibilities for tackling social and economic issues.
3. Governance identifies the power dependence involved in the relationships between institutions involved in collective action.
4. Governance is about autonomous self-governing networks of actors.
5. Governance recognizes the capacity to get things done which does not rest on the power of government to command or use its authority. It sees government as able to use new tools and techniques to steer and guide.

These propositions resonate with recent ideas of "the new public governance" (Osborne 2010). Importantly, these novel governance perspectives are both multi-actor (Bryson et al. 2017) and multi-level (Bache et al.


2016) in their focus. These extend beyond traditional geographic localities and represent wider "governance networks" (Koppenjan and Klijn 2015). Technological innovations play a key role in this revamped idea of public governance. Most conventionally, they link up to dominant ideas of "public sector entrepreneurship"—exploring the relation of technological development to innovative civic policies (Leyden and Link 2015). ICTs and AI, in particular, are thought to increase the "administrative capacity" of public actors and institutions (Lember et al. 2016). The government is also studied as a crucial "technology maker" (Karo and Kattel 2019). Hi-tech strategies associated with "e-government" and the enhanced use of citizen-driven online participation are, moreover, put forward as a contemporary way of "curbing the corruption in public administration" (Ionescu 2015: 48). These point to the emergence of the "next generation of public administration" (Winstanley 2017).

The rise of this "next generation" of public administration is intertwined with the development of "civic technology". Digital techniques from the private sector are already being infused into public service provision. Sri Lanka has, for example, experimented with using digital marketing techniques, linked notably to its lucrative hospitality sector, as a model for fostering stronger e-governance (Punchihewa et al. 2017). E-governance has come to the fore as perhaps the most crucial governance innovation of the present era—promising to bring with it a more participatory hi-tech civic culture. Drawing on a Dutch case called "citizens net", in which police and the public co-designed an e-governance system, Meijer (2015: 205) notes:

In conclusion, the challenge for e-governance innovation is to not only tackle structural barriers by developing strong technologies, strong organizational structures, legal embedding, etc. but to also frame the technological practice as desirable. Too often the technology is seen as a goal in itself. The research clearly shows that government officials and citizens are not motivated by technological frames but by frames that connect technological opportunities to the production of public value. This point is often missed since the literature on e-government tends to disconnect the organization of government from the production of public values. Few government officials and citizens are motivated by technology in itself. Framing e-governance in terms of its contributions to society is essential for its success.


The adaptation of public administration to a new online culture is, moreover, an increasingly global phenomenon, as seen in the experimental use of e-governance for financial management reform in India (Banerjee et al. 2016). At a broader level, this points to the development of city-wide frameworks for smart governance in which areas and populations are linked together as part of a "sociotechnical system" (Pereira et al. 2016). Politically, this speaks to the potential for establishing novel types of "democratic network governance" (Sørensen and Torfing 2016). Suddenly available is a plethora of possibilities for reinventing the delivery of public services and community decision-making. Central, in this regard, is the establishment of "the responsive city" through "data-smart governance". Programmes like "Citizens Connect", which allow individuals to report problems to officials quickly and in real time using mobile technologies, point to a potentially more participatory and democratic future:

Someday a future version of Citizens Connect may let a constituent know the name of the worker who fixed the broken streetlight on his corner. And the app may be providing the visualized data that citizens need to work with city officials to codesign plans for their community instead of just responding to pronouncements from experts and bureaucrats. Citizens Connect was the beginning of the beginning—a tool to further crucial citizen engagement. And that, more than its technical finesse, may be its most valuable contribution. (Goldsmith and Crawford 2014: 33)
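To fix ideas about what a Citizens Connect-style reporting loop involves, the following is a hypothetical sketch of such an issue tracker. The data model, statuses and method names are invented here; they are not taken from the actual Boston application.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class IssueReport:
    category: str                  # e.g. "broken streetlight"
    location: str                  # e.g. a street corner
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "open"           # open -> assigned -> resolved
    assigned_to: Optional[str] = None

class IssueTracker:
    """Collects citizen reports and tracks them through to resolution."""

    def __init__(self) -> None:
        self.reports: list = []

    def report(self, category: str, location: str) -> IssueReport:
        issue = IssueReport(category, location)
        self.reports.append(issue)
        return issue

    def assign(self, issue: IssueReport, worker: str) -> None:
        issue.status, issue.assigned_to = "assigned", worker

    def resolve(self, issue: IssueReport) -> None:
        # Closing the loop: the constituent could then learn which
        # worker fixed the problem they reported.
        issue.status = "resolved"

    def open_issues(self) -> list:
        return [r for r in self.reports if r.status != "resolved"]
```

The politically interesting part lies less in the code than in who sees the data: exposing `open_issues` to citizens, not just officials, is what turns a complaints inbox into a co-design tool.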

These perspectives are additionally redefining understandings of contemporary development (Das and Misra 2017). Importantly, the establishment of "responsive" governments is not meant to be a mere updating of the status quo but a full-scale transformation of governance and administration for the twenty-first century. Amidst its diverse deployments and multi-disciplinary complexity as a domain of study, it ultimately represents "the use of information and technology to support and improve public policies and government operations, engage citizens, and provide comprehensive and timely government services" (Scholl 2015: 21). Anticipated is the arrival of "computational intelligence" that can produce "smarter" and supposedly fairer socio-political and economic systems (Doctor et al. 2018). To this end, e-governance is rapidly evolving into what is termed "informational governance", reflecting "(1) new forms of governing through information, and (2) transformative changes in governance institutions due to the new information flows" (Soma et al. 2016: 89). In particular:

New conditions make it possible to communicate across the globe, just in time, be part of networks and include different kinds of knowledge into policy making, and more insights into the emerging new societal roles and instruments for sustainable developments are needed. (Ibid.: 96)

Even more radical is the potential for promoting new democratic ideas and practices linked to these disruptive digital technologies. After the 2016 election, data-driven politics has largely been cast as a threat to democracy. Yet at the local level, big data and analytics are progressively viewed as a "necessity" for good governance. The potential for exploiting e-governance to improve public services challenges new public management philosophies rooted in values of marketization and privatization, in that it requires strong and sustainable forms of public governance combined with public investment in appropriate technical infrastructure. Examining policies of environmental management in Norway and Mexico, Ljungholm (2015: 350) argues:

If these problems are to be counteracted, solid institutional frameworks and accountability systems need to be put in place as part of any governance reform. Climate change adaptation must be the explicit responsibility of a legal entity provided with sufficient financial and technical resources to carry out its responsibilities in practice and to develop networks for learning and partnerships for decision-making between fragmented public and private actors. Within such an institutional system, maintaining people's well-being in the face of climate change must constitute a citizen right rather than a customer "demand".

These technologies are also pointing the way to "proactive e-governance" that predicts rather than merely responds to the needs of citizens, as is increasingly being implemented in places such as Taiwan, where the "service delivery model" is shifting from "pull" to "push" (Linders et al. 2015). Tellingly, the human component is at best being reconfigured and at worst largely displaced in the name of developing "democratic technology" premised on desires for automated forms of "driverless governance" (Alipour Leili et al. 2017). Emerging, in turn, is the prospect of a "disruptive democracy" that subverts, interrupts, and transforms the status quo (Bloom and Sancino 2019).
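What "proactive" service delivery might mean in practice can be sketched under wholly invented assumptions (the record format and the thirty-day lead time are illustrative only, not taken from the Taiwanese model): instead of waiting for a citizen's request, the state notices an upcoming need and makes contact first.

```python
from datetime import date, timedelta

def proactive_notifications(records, today, lead_days=30):
    """Return reminders for any document expiring within `lead_days`.

    Each record is a dict: {"citizen": str, "document": str, "expires": date}.
    A "pull" service would wait for the citizen to ask; this "push"
    sketch contacts them before the need becomes a problem.
    """
    horizon = today + timedelta(days=lead_days)
    return [
        f"Reminder to {r['citizen']}: your {r['document']} expires on {r['expires'].isoformat()}"
        for r in records
        if today <= r["expires"] <= horizon
    ]

# Hypothetical citizen records, invented for this example.
records = [
    {"citizen": "A. Chen", "document": "driving licence", "expires": date(2024, 3, 20)},
    {"citizen": "B. Silva", "document": "passport", "expires": date(2024, 9, 1)},
]
alerts = proactive_notifications(records, today=date(2024, 3, 1))
```

Even so trivial a rule presupposes the state holding and cross-referencing personal data, which is exactly where the chapter's questions about "driverless governance" begin.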


Breaking Our Digital Chains

One of the great debates of the new millennium is whether technology is empowering or disempowering us—both individually and collectively. ICTs and digital networks aimed to embolden a more participatory politics—one where citizens could tell powerholders their needs and hold them accountable, all at the click of a button in real time. They were meant, in this spirit, to subvert and radically restructure organisations and communities to make them less hierarchical and more responsive. For this promise to be even partially realised, it is critical to study "actually existing" digital governance and its possibilities for enhancing rather than detracting from democratic agency and power. As Professor Robin Mansell (2016: 21–22) has recently observed:

Since capitalism does tend to be exploitative in a neoliberal order, what are the empowering moments in today's digital world? If citizen choice can be amplified in an empowering way, at least theoretically, it is essential to locate the conditions for agency. This means that empirical evaluation of the contemporary digital landscape is essential. A democratic discussion, if it is to happen, presumes that governance arrangements are in place to enable it. I suggest, therefore, that it is essential to examine both the overarching structural conditions given by capitalism and the micro-level negotiations of individuals within that framework. This, in turn, requires that we analytically trace these developments through research framed by social studies of technology design and by analysis of the institutional rules, norms, and legislation that, at particular moments, may be empowering for individuals and social groups when they occupy digital space.

At the very least, e-democracy has brought new challenges for scholars and the public in measuring how "democratic" a society actually is in this day and age (Kneuer 2016). The failure of e-democracy, further, to fundamentally topple the status quo and reduce inequality and poor service has diminished faith in such technologies as a positive democratising force (Iuliia et al. 2015). Yet a significantly overlooked component of these discussions is the degree to which these technological innovations can actually democratise not just official politics but the workplace as well. The latter part of the twentieth century saw a dramatic weakening of industrial democracy—from the loss of union power to the lessening ability of employees around the world to collectively bargain with their employers. This was felt not


just in the industrialised "West" but globally (see Kester 2016). ICTs could hold the potential for reviving this tradition (Beirne and Ramsay 2018). In this updated version, technology-aided workplace democracy cuts across conventional class-based concerns to encompass issues of race, ethnicity, and gender (Valentine 2018). These democratising efforts extend beyond union politics in support of worker-owned alternatives such as cooperatives (Hacker 2017). They additionally encompass both national and transnational strategies for "promoting and sustaining workplace innovation" (Alasoini et al. 2017: 27). Existing alongside these rather optimistic accounts of "techno-democracy" at work are very real and pervasive feelings of digital anxiety. These play into broader historical discourses linking public unease with technology to fears regarding sustained economic prosperity. Scholars have drawn parallels with past fears of mass unemployment in the face of similarly revolutionary economic change, urging both reflection and caution in this respect:

In the end, it is important to acknowledge the limits of our imaginations. Technophobic predictions about the future of the labor market sometimes suggest that computers and robots will have an absolute and comparative advantage over humans in all activities, which is nonsensical. The future will surely bring new products that are currently barely imagined, but will be viewed as necessities by the citizens of 2050 or 2080. These product innovations will combine with new occupations and services that are currently not even imagined. Discussions of how technology may affect labor demand are often focused on existing jobs, which can offer insights about which occupations may suffer the greatest dislocation, but offer much less insight about the emergence of as-yet-nonexistent occupations of the future. If someone as brilliant as David Ricardo could be so terribly wrong in how machinery would reduce the overall demand for labor, modern economists should be cautious in making pronouncements about the end of work. (Mokyr et al. 2015: 45)

There is mounting evidence, further, that increased exposure to online media produces greater anxiety and depression (see Hoge et al. 2017). The loss of faith in public institutions and the weakening of industrial democracy have contributed to a renewed investment in techniques and strategies for achieving personal empowerment. New mobile technologies have dramatically aided this modern "self-help" culture (Fleming et al.


2018). Within the context of contemporary work, there are demands that HRM shift its focus to "human dignity development", which would include a greater emphasis on workplace democracy (Bal and de Jong 2017: 173). The risk, as always, is that these efforts will be co-opted into new opportunities for spreading "neoliberal governance" inside and outside of work (Nygreen 2017). Perhaps not surprisingly, these also play into fears of machines as a force not only for taking human jobs but for reducing workers' power. These deploy similar racialised tropes linked to concerns over predicted demographic changes, with twenty-first-century fears of "fewer babies and more robots" (Jimeno 2019: 1). These belie actual macroeconomic concerns about how the rise of robots may impact human labour:

Apart from the policy implications for macrostabilisation policies… there are many other areas of economic policies that will be affected by these demographic and technological changes. Together with negative effects on per capita growth, the new wave of technological changes may bring a decline in labour shares, at a time in which conventional social policies, which mostly channelled taxes from the young to the old, will require more resources. This probably will require a full reconsideration of the fiscal and transfer systems. Nevertheless, it could be a good idea to delay it until we really know what is going on with robotics and AI. (Ibid.: 112–113)

These concerns have led to some interesting policy solutions, such as taxing robots or the companies that use them (Holden 2017). They also reveal a pervasive "automation anxiety" spreading from the workplace to political elections (Frey et al. 2017). These technology-derived worries have also catalysed more radical economic solutions. They bring to the forefront ongoing concerns over whether "smart governance" is also ecologically and economically sustainable (Martin et al. 2018). They also project the potential of a "new digital workplace" that will "revolutionise work" (Briken et al. 2017a). At the larger scale of the economy, the continual and expanding worry about AI and robots has resulted in once unthinkable policies, such as a universal basic income, suddenly being legitimately discussed within mainstream politics and culture (Pulkka 2017). Less immediately apparent are ideas of how a universal basic income could also enhance economic democracy. Scholar Ewan McGaughey argues first and foremost that this is a political, not technological, issue, and that if economic democracy is revitalised at


the same rate as technological development, then economic fears over robots and AI will dissipate into a more optimistic vision of shared progress:

The promises of technology are astounding, and deliver humankind the capacity to live in a way that nobody could have once imagined. The industrial revolution of the 19th century brought people past subsistence agriculture. It became possible to live, not just from hand to mouth, bonded to lords and masters, but to win freedom from servitude through solidarity. The corporate revolution of the 20th century enabled mass production and social distribution of wealth, for human and democratic development across the globe. A third economic revolution has often been pronounced or predicted, but it will not only be one of technology. The next revolution will be social. It must be universal. Universal prosperity with democracy and social justice, on a living planet, is achievable not in centuries, but in years and decades. It did not begin with technology, but with education. Once people can see and understand the institutions that shape their lives, and vote in shaping them too, the robots will not automate your job away. There will be full employment, fair incomes, and a thriving economic democracy. (McGaughey 2018: 31)

In this spirit, the "second machine age" can anticipate the creation of a "cooperative economy" radically updated to reflect new possibilities linked to a "democracy 2.0" which

rejects, often utterly, the procedural and legalistic version of democracy that preceded it in favor of an image of organization that is, at its core, insistent on individual voice and human cooperation. I also call it "co-operative" or "collectivist" democracy because it is defined by and requires a social bond between members that is co-operative in nature: Any property at hand must be socially or collectively owned or such organizations will be unable to sustain egalitarian decisional processes; similarly, egalitarian decision making requires that the group is willing to search for common ground through sustained dialogue. This is what I mean by "cooperation." In this new type of co-operative democracy, members cannot be dismissed, marginalized, or rendered inferior in the decision-making process because members' rights to be heard and to learn from others are primary and thus trump the efficiency or hierarchal claims that prevail in Democracy 1.0. (Rothschild 2016: 9)

It is not hard to imagine this updated type of workplace democracy being expanded even further to embrace human and non-human cooperation and consensual decision-making.


Yet AI and robots also offer an opportunity to dramatically alter the economy and workplace. The once-assumed necessity of corporations for creating value and ensuring shared prosperity is now being questioned. Provocatively asking "Can society survive without corporations?", Professor Gerald Davis (2016: 129) contends that we are witnessing

the results of a regime shift in the costs of organizing. Information and communication technologies have made it much cheaper to organize commercial activity on a small and provisional basis rather than investing in long term institutions such as corporations. Corporations are costly compared to pop-up businesses. Moreover, computer-controlled production technology is getting more powerful, cheaper, and smaller. As such, the economies of scale that made corporations so dominant in the 20th century are flipping into diseconomies in many cases, while locavore alternatives are increasingly cost-effective.

Additionally, there are important differences between people in different professions (Seok 2018). AI is already threatening to completely transform certain fields, such as public relations (Galloway and Swiatek 2018). As discussed above, this opens difficult legal questions about whether robots should pay taxes (Abbott and Bogenschneider 2018) and whether they should have the same rights as human employees. More optimistically, it opens up new opportunities for "interactive co-working" between humans and machine co-workers (Eimontaite et al. 2016). Revealed, in turn, is how AI and robots can improve our working conditions and progressively challenge our current economic order. Such a radical view entails taking a "lifespan" perspective on human and machine interaction, looking beyond their one-off encounters to the ways they will engage and influence each other over time and in a variety of social and professional contexts (Marchetti et al. 2018). Such possibilities are glimpsed, for instance, in the integrative use of virtual reality and robots for helping individuals with autism (Good et al. 2016). These point to the potential for a "disruptive" transhuman future that will allow humans and machines to reimagine their shared social and economic future (King et al. 2017).

Creating transhuman Value

A crucial aspect of the ability of machines to fundamentally rescue economic relations is the capacity of humans and non-humans to work together to create new forms of value. The history of who is primarily responsible


for producing value, of course, is one of the most controversial and debated topics in all of economics and indeed social theory. For Marxists, it is workers who ultimately create value through their labour, which capitalists appropriate for themselves in the form of profit. For neoclassical economists, it is almost entirely the opposite: those with capital and entrepreneurs are the drivers of economic dynamism and growth. On a more pedestrian organisational level, value creation is linked to the improvement of institutional processes and the positive impact this has for a range of different stakeholders. In this respect, AI and robots are already proving themselves to be significant present and prospective value creators. Their effect can be witnessed in activities and fields ranging from pricing (Agrawal et al. 2018) to the development of "superhuman AI" for winning at poker (Brown and Sandholm 2017). Expected is an almost unprecedented shift in creating and sustaining value within and between firms. A key issue, in this regard, is how economic actors can give up short-term competitive and selfish behaviour for long-term cooperation and shared benefit. There is a growing and rich scholarly literature on why humans cooperate rather than compete—assessing which psychological factors and rational incentives are most conducive to this outcome (see Wu et al. 2017). These precise concerns are being reproduced in current discussions of AI and robots (see Furman and Seamans 2019). This, furthermore, puts into perspective the public policy challenges posed by the emerging presence and influence of machines in the economy (Goolsbee 2018). These insights reveal the need to go beyond current theories of human value. More precisely, we must avoid simply grafting conventional human ideas of value creation onto non-humans and an increasingly hi-tech economy.

The traditional idea of "homo economicus" embraced by mainstream economists—based on the rational, utility-maximising subject—has been shown to be literally and figuratively deadly. As Professor Peter Fleming (2017: 7–8) recently noted in his book The Death of Homo Economicus:

There is very good reason to feel terminally deflated at the present juncture. As the cultural critic Mark Fisher insightfully demonstrated, contemporary capitalism wages a psychological war as much as a pecuniary one, where melancholy is systematically induced on a mass scale to tame the revolutionary rage that marked the 1960s and 1970s apart. For sure, in this climate it would be irrational not to feel gravely out of sync with the world. As I will


argue in the coming pages, the growing evidence points to a major breakdown in the norms that once governed the distribution of wealth, regulated employment and provided spaces for democratic voice. The events of 2007–08 were just the beginning…. The tacit agreement between capital and labour forged after World War II (WWII) concerning wages and conditions has effectively been dissolved, and the income/power differentials between us and the rich are now so stark that even the highest earners among the 99 percent have more in common with the lowest (on minimum wage) than they do with the 1 per cent. The plutocrats and their state lackeys truly live far away, on dry land.

The evolution of digital technologies from "buzzword to value creation" reveals the possibilities but also the risks of this prospective economic "disruption", especially that of simply reproducing the dispiriting status quo (Caylar et al. 2016). It is telling to watch scholars and commentators alike attempt to fit non-human technologies such as autonomous cars into updated, human-derived, market-based business models (Yun et al. 2016). By contrast, new user-based design perspectives focus on how robots contribute to the co-creation or co-destruction of value for selected populations such as the elderly (Čaić et al. 2018). Nevertheless, it is crucial that we transcend seeing new transhuman spaces of economic production and value creation through the narrow lens of prevailing "entrepreneur's eyes" (Mortara and Parisot 2016). To this end, it is imperative not to judge transhuman value according to hegemonic capitalist values of productivity, efficiency, and profitability. Indeed, it is worth highlighting that despite its vast potential for improving productivity, AI has not yet done so in a significant way. This "paradox" can be explained by the fact that not only does AI still require wider distribution for its gains to be fully felt, there will also need to be a range of other innovations supporting its productive use (Brynjolfsson et al. 2018). Even supporters of AI's positive economic potential, such as the authors of a 2018 McKinsey Global Institute report, admit that it could increase rather than reduce global inequality since

A key challenge is that adoption of AI could widen gaps between countries, companies, and workers. AI may widen performance gaps between countries. Those that establish themselves as AI leaders (mostly developed economies) could capture an additional 20 to 25 percent in economic benefits compared with today, while emerging economies may capture only half their upside.


P. BLOOM

There could also be a widening gap between companies, with frontrunners potentially doubling their returns by 2030 and companies that delay adoption falling behind. For individual workers, too, demand—and wages—may grow for those with digital and cognitive skills and with expertise in tasks that are hard to automate, but shrink for workers performing repetitive tasks. (Bughin et al. 2018: 3)

Already, there are signs of this negative economic impact in the rise of the “platform economy” that is “fracturing work itself as the places and types of work are being reorganized into a myriad of platform organized work arrangements with workplaces being potentially anywhere with Internet connectivity” (Kenney and Zysman 2018: 2). Tellingly, this “reorganization” of work and value creation is not reducing or eliminating exploitation but rather reconfiguring it into new and different sets of connected economic activities such as professional content creators, platform developers and designers, and non-compensated user-based content uploaders (Ibid.). Robotic developments, similarly, can dramatically enhance workplace innovations. However, to do so they will have to deal with the diverse set of contradictions that the application and spread of these robotic techniques may cause—including the loss of traditional “low status” jobs. For this purpose, Kristian Wasén (2015) has introduced the idea of “friction management” to harness and maximize the value potential of this creative destruction. While based on a quite conventional set of market concerns, this can also be applied to more radical efforts to properly manage and overcome the “frictions” caused by these disruptive economic changes in a way that optimises not only economic innovation but social justice. An example of such a model would be the development of “post-capitalist” construction techniques based on a “commons-oriented productive model” which seeks to “design global” and “manufacture local” (Kostakis et al. 2015: 126). These new models of value creation have the potential to be upscaled beyond individual organisations or industries. The same innovative and progressive principles can be applied to the making of “sharing cities” that are both “smart and sustainable” (McLaren and Agyeman 2015).
They also explode attempts to oppose human and artificial intelligence, reframing them as complementary and therefore cooperative. Emerging instead are the possibilities of “hybrid intelligence” in which human and artificial intelligences are designed to be mutually enhancing,

5  CREATING SMART ECONOMIES: ADMINISTRATING EMPOWERING FUTURES 


thus reframing their relationship in the longer term as one from helper to teammate, since In the longer term, involvement of humans as helpers to AI systems may be too limiting for critical applications that seek a deeper integration of human and machine intelligence to function. For example, the driver of a semi-autonomous car does not only provide assistance when being asked, but proactively engages in the activity of driving. Developing AI systems that can function as effective team members to humans requires a paradigm shift from hybrid systems to hybrid teamwork. It requires deeper reasoning capabilities for machines to make decisions not only about how they are accomplishing their tasks, but also about how they can support their teammates towards the success of the collaborative activity. (Kamar 2016: 4073)

This opens up fresh vistas for economic production that combine augmented reality and digital fabrication for improving organisational processes and the ethical values underpinning them. “Today, works of culture build a complex web of collaborating interrelationships, but based around a degree of competition and proprietary IP boundaries” writes scholar James Griffin (2019: 3), “that balance between conflict and competition should reflect the human agency involved, at both the biological and sociological level; and this should be fed through into any attempts at regulation of creative technologies such as 3DP”. Here innovation is viewed not as a market-based or individualised activity but as one based on collaboration and sharing for the wider benefit of the public. This is already being seen in the emergence of “social product development” where AI aided platforms are enriching humans through combining creativity, cooperation, and connection: A new generation of SPD platforms increasingly adds collaborative features and encourage teamwork. These networks should not only reward actors with learning opportunities (e.g. feedback) but also satisfy motivations such as entertainment and pleasure. For example, gamification of collaborative activities may engage more actors. Additionally, co-innovation features that help collaborative actors find the right projects to join might better maintain the participation of actors looking for specific learning or entertainment opportunities. Some SPD networks are designed as socio-professional communities, creating value through social exchange and knowledge sharing. When an SPD business model requires a high level of socialization


(e.g. for social validation of new product), SPD coordinators could invest in more social media features and highlight the altruistic features of the network. Networking motivation can be satisfied when the platform offers communication and social interaction independent from project involvement. As a result, more actors might join the network, participate in the conversations, and as a result, may participate in ideation or collaboration in the future. (Abhari et al. 2019: 19–20)

Through the exploration and application of these alternative forms of value creation, novel economic paradigms emerge across the world, such as in the early twenty-first century “post-capitalism” example of the Argentinian barter network (Powell 2002). These socially and economically revolutionary approaches to value creation, though, must take seriously the intelligence and perspectives of machines. They must avoid the “anthropocentric” legacy of market economies, which are willing to use technology only for the benefit of humans. Required, hence, is an integrative transhuman ethos for this process of disruptive change. The anticipation of “goal-creating robots” poses, for instance, both exciting prospects for less programmed human and non-human relations as well as substantial social challenges. Interestingly, the lessening of direct technological programming demands, in turn, greater attention to enhanced forms of desirable social programming. Indeed All of this leads to the question—how can a robot autonomously acquire a sense of ethics for novel domains? If robots are to be ‘ethical’ in the eyes of those who interact with them, then they will need to be able to adapt to unwritten, socially evolved ethical preferences of the communities in which they find themselves… Hence, like humans, robots could be repelled from conducting behavior that would repel important social partners from them—and increase behavior which results in positive reactions from the social environment. The value of the activated egocentric and social motivators is estimated through an expectation of future reward signals. In the case where the robot is taking the initiative, the motivators with the highest estimated future value would be selected to form the novel goal. A household robot that has run out of instructed tasks thus might predict a happy and grateful owner, thus a positive social interaction, if only there was a cake. (Rolf and Crook 2016: 26)
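The goal-selection mechanism Rolf and Crook describe (motivators scored by an expectation of future reward, with the highest-valued one forming the robot’s novel goal, and social reactions adjusting those values over time) can be sketched in a few lines of Python. This is a purely illustrative sketch: the names, numbers, and simple update rule are assumptions for exposition, not details from their paper.

```python
# Illustrative sketch of reward-based goal selection, loosely following the
# mechanism described by Rolf and Crook (2016). All names and values here
# are hypothetical.

from dataclasses import dataclass

@dataclass
class Motivator:
    name: str               # an egocentric or social drive
    expected_reward: float  # estimated future reward signal

def select_goal(motivators):
    """Form a novel goal from the motivator with the highest estimated value."""
    return max(motivators, key=lambda m: m.expected_reward)

def social_update(motivator, reaction, lr=0.5):
    """Nudge a motivator's value toward an observed social reaction, so
    behaviour that pleases social partners becomes more likely over time."""
    motivator.expected_reward += lr * (reaction - motivator.expected_reward)

# A household robot that has run out of instructed tasks:
motivators = [
    Motivator("recharge battery", expected_reward=0.2),           # egocentric
    Motivator("bake a cake for the owner", expected_reward=0.8),  # social
]
print(select_goal(motivators).name)  # -> bake a cake for the owner
```

The `social_update` step mirrors the quoted idea that robots are “repelled from conducting behavior that would repel important social partners”: a negative reaction lowers a motivator’s estimated value, making it less likely to be chosen as a future goal.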

This integrative approach can be a major source of economic capacity building in historically underdeveloped countries for reducing global


inequality (Okolo 2018). It also highlights the “anatomy of twenty-first century exploitation”, which has shifted “from traditional extraction of surplus value to exploitation of creative activity” (Buzgalin and Kolganov 2013: 486). Once recognized, these insights permit a rethinking of conventional capitalist value creation such as entrepreneurship, ones …exploring the emerging approaches entrepreneurs and communities are taking toward protecting or creating commons which are neither private nor government initiatives. While the concept of the commons has existed for centuries, or perhaps since the beginning of mankind (or even earlier since some argue that some animal species have operated with natural commons since before we arrived on the planet), it will still be a new term to many people. Essentially, the commons are resources that are shared with express or unwritten rules for ensuring the survival and growth of physical, digital, or cultural spaces. (Cohen 2017: 2)

They additionally provide the basis for different “future scenarios of the collaborative economy” that range from “centrally orchestrated, social bubbles or decentralized autonomous” (Fehrer et al. 2018) and that progressively seek to integrate human and non-human intelligence. These exciting and optimistic perspectives reflect the use of transhuman ideas to expand and transcend capitalist value and values. They echo efforts to infuse such radical economic thinking into current debates about the “fourth industrial revolution” (Hughes and Southern 2019). They offer “grand visions for post-capitalist Human-Computer interaction”, in this respect (Feltwell et al. 2018). Just as importantly, they pave the way for concrete techniques and methods for bringing together desires for “post-work futures” and “full automation” which incorporate inclusive radical design perspectives including feminism (Baker 2018).

Empowering transhuman Organisation

A fundamental component of transhuman value creation is reconsidering how humans and non-humans organise themselves. The past several decades have seen a shift from narrow notions of economic value based exclusively on profit-making to organisational value linked to how organisations impact employees, employers, customers, and affected community members. The idea of a geographically contained and clearly bounded firm is quickly dissipating. A critical reason for this change is the role digital technologies are playing in transforming how people


interact and organise across time and space as “digital citizens” (Vromen 2016). The political implications of this shift will be explored in further detail in the next chapter. Relevant to questions of economic change, these technologies are altering how firms “learn” (Schuchmann and Seufert 2015). Whereas this “learning” is still largely corporate focused in terms of values, it is rapidly extending to broader questions of inequality and its relation to automation. A 2017 report published by the Institute for Public Policy Research (IPPR) advocates for the creation of “new models of capitalist ownership” such as a “Citizen’s Wealth Fund”, “the expansion of employee ownership trusts”, “compulsory profit sharing” in larger firms, and fewer working hours linked to productivity growth. It declares that The critical challenge of automation is likely to be in distribution rather than production. If the benefits are fairly shared, automation can help build an economy where prosperity is underpinned by justice, with a more equitable distribution of wealth, income and working time. But there is no guarantee that this will occur. Managed poorly, automation could create a ‘paradox of plenty’: society would be far richer in aggregate, but, for many individuals and communities, technological change could reinforce inequalities of power and reward. The pace, extent, and distributional effects of automation will be determined by our collective choices and institutional arrangements, and the broader distribution of economic power in society. The future will not be technologically determined; it will be what we choose to create. Public policy should therefore actively shape the direction and outcome of automation to ensure its benefits are fairly shared. (Lawrence et al. 2017: 2)

A central concern for contemporary human relations, in this regard, is to mitigate the negative impact of technology on employees to ensure they remain engaged and productive workers. In particular, greater attention is being paid to the ways digital and mobile technologies are making the boundaries between work and life rapidly blur and even disappear (Duxbury and Smart 2011). Such “boundarylessness” has precipitated novel management and “self-management” techniques (Fleck et al. 2015). These worries and associated solutions speak to how deeply intertwined new technologies are becoming in all facets of our working lives. They also put into question the previous optimism about the prospect for enacting “creative technological change” within organisations (McLoughlin 2002). Yet if personal work-life boundaries are quickly becoming a thing of the past, so too are geographic barriers to technological


exchange and innovations. Recent green technology advancements in China and India, in this regard, combined global learning with local innovation, as global linkages (‘technology transfer’) and local innovation systems were not alternatives as is sometimes implied. Local and global flows were supplementary ‘mechanism’ in both the formation and catch-up phases. Second, capability building in firms was a prerequisite for local linkage formation—not the other way around. Champion firms benefited from linkages with national research institutions but as with international technology transfer, the key point is about sequencing and evolution. R&D linkages—local and global—only became important once the sectors were beyond the formation phase. Technology ‘transfer’ can hardly be understood in isolation because the use of external technologies and local learning were complementary elements that were combined in the technological upgrading process. (Lema and Lema 2012: 38–39)

This has huge implications for the future workplace and workforce that will increasingly include non-humans and machine intelligence. In the next decade, it is predicted that the industrial workforce will be fundamentally transformed in terms of its demographics, becoming progressively transhuman in its composition. In practice, this will include “big data driven quality control”, “robot assisted production”, “self driving logistics vehicles”, “production line simulation”, “smart supply networks”, “predictive maintenance”, and “self-organizing production” (Lorenz et al. 2015: 5). This coming technological change has necessitated new perspectives for theorizing the labour market from “the bottom up”, focusing on how technologies such as “cloud computing” will induce further economic growth and skilled employment (Liebenau 2018). Additionally, it is gradually reconfiguring what constitutes “employee empowerment”, linking technology-induced job redesigns with enhanced feelings of autonomy and freedom, as for future practices…organizations are likely to improve organizational commitment by redesign jobs that require a variety of skills and provides work autonomy, and at the same time strategically empower workers in seeking opportunity to enhance their competency as well as self-determination ability. Furthermore, organizations are reminded that job characteristics of work redesign must be supported by various forms of psychological empowerment, i.e. meaning, self-determination, competence,


and impact, without which organizational commitment may not occur. (Kuo et al. 2010: 36)

The central actor, here, is no longer primarily corporations or large capitalist firms but SMEs and, looking further ahead, post-capitalist workspaces (Kmieciak et al. 2012). Further recognized are the diverse human and non-human skills necessary for creating more ecologically sustainable organisations using “circular economy” models (Burger et al. 2019). Of course, these interventions do not eliminate the fears that technology will negatively impact human work and lives, and do so in quite dramatic fashion (Boggs 2016). These echo apocalyptic worries that AI and robots will put the very fate of human existence at risk—a trend that is already supposedly being realised through their use to increase workers’ surveillance and exploitation in the contemporary workplace. While many refer to the growth of “intimate” technologies that are smaller and personalised, Professor H. Zwart speaks instead of “extimate technologies” that are both intimate and foreign. He notes Currently, technological devices have begun to move inwards: entering our bodies and brains, functioning as implants rather than as extensions. Self-monitoring is an important objective of this trend. Due to recent developments in technosciences, such as synthetic biology, tissue engineering and nanomedicine, our sway over the human ‘condition’ (in its literal, biomedical sense) is increasing, down to the molecular level, and up to the point of becoming uncanny. New options for drug delivery and bio-implants are entering (pervading) human bodies and brains. On the one hand, this may be seen as strengthening human autonomy and agency. On the other hand, we must consider the possibility that we are the targets rather than the agents of this process. Rather than being in control, we may become increasingly dependent on these new technologies, emerging in the boundary zone between therapy and enhancement.
On the one hand, intimate technologies allegedly open up new practices of the Self, enabling individuals to become the ‘managers’ of their own life and health. On the other hand, human beings are controlled by the gaze of the Other, which invokes a sense of unease. An exemplification is the Snyderome project. (Zwart 2015: 40)

The focus within mainstream discourse remains on the prospect of robots replacing, rather than enhancing or empowering, human workers (West 2015). These pessimistic perspectives, while valuable, risk overlooking the


more nuanced and complex ways technologies are transforming work in terms of where and how it is being done. However, even in economic contexts as dissimilar as the US and India, it was observed that …as the debates about the uses and abuses of flexibility continue, studies on the ground in the United States and India find that both businesses and workers adopt the new work culture to fulfill their own needs. Thus, while businesses restructure the organization by trimming the workforce, outsourcing, and introducing flexibility, workers adapt by learning new skills, being loyal to their own careers rather than the organization, and watching the market closely. However, workers in both countries work harder and longer, take on more responsibility, and even as they recognize the structural problems onset by globalization, workers individualize their failures and inability to cope with the changing market. Under the conditions of persistent global recession, businesses and workers find that they need to ‘reinvent’ themselves in order to survive. (Arabandi 2011: 535–536)

The best, and most commonly ignored, way to avoid these technological threats is through the introduction and fostering of more democratic and cooperative workplaces and economies. Rather than seeing this coming human and non-human workforce as a foregone conclusion or as having any inevitable outcome (for good or bad), it is better to envision it as an “ethical dilemma” for an increasingly transhuman world (Dolidon 2016). Concretely, it entails mitigating the risk of AI for humans through putting in place democratic forms of economic governance both within and outside organisations (Garvey 2018). It also means exploring how we can not only save human jobs but actually use technology to improve the quality of our work (MP 2018). Critically, it offers a radical new lens for understanding worker power internationally, one that encompasses not just humans but also robots. Scholars Yu Huang and Naubahar Sharif argue, for instance, that the Chinese economy is quickly transitioning from being human to robot labour intensive, creating the need for new labour movements to enhance workers’ bargaining power in this rapidly changing automated economic environment. A case in point was a recent strike by workers at a factory in Southern China which was hiring less expensive and lower-skilled “but more productive” young employees to work new automated machines like lathes: They halted production for about two hours before the owner came to yell at them: ‘Do you still want to work here or not? If you choose to quit today,


I will settle your wages’. The veteran workers suddenly realized that they were no longer the backbone of the factory and their skills no longer automatically granted them workplace bargaining power. In their 40s, most feared that they would have great difficulty finding other jobs if they were fired, and quickly returned to their positions. Each striking worker was fined 100 yuan as punishment. After the strike, the owner accelerated the automation process to cover operations in painting and cutting. Later, in their bi-monthly assembly, the owner would scold the workers: ‘You are just a speck. The factory won’t stop without you’. (Huang and Sharif 2017: 70)

Moreover, it adds to knowledge of how existing dominant economic power relations are sustained and reinforced globally, linked to these technological advancements (Qiu et al. 2014). Adopting a labour-based view of power and technology allows, therefore, for a richer account of transhuman worker empowerment. The question of disruptive technologies shifts, in this regard, from how they endanger workers to how they can jeopardize an exploitative and unjust status quo. Yet it also expands the scope of this economic analysis to encompass the social rather than just material impact of this technological change, better attending to issues of human and robot worker safety (Murashov et al. 2016) and the social impact of human and non-human co-working. Opened up are fresh ideas of activism and organising associated with networked communities, ones which bring with them both exciting potentials for social change and solidarity as well as new forms of exclusion. Exploring the emergence of “networked feminism” in the UK which relied heavily on social media, groundbreaking technology scholar Aristea Fotopoulou (2016: 1002) notes We should thus keep rethinking the possibilities offered for social change by the changing environment of digital communications, but it is important to do so by looking at how the promises and imaginaries of a ‘networked feminism’ and ‘digital sisterhood’ translate into communicative practices of women’s organisations, as they are situated within material conditions of limited funding and shaped by embodied experiences of ageing.

It also sets out ideas for developing new workplaces centred on the promotion and empowering uses of social robots (Laukyte 2017). Ensuring the safety and security of a transhuman workforce can catalyze more empowering and radical opportunities for an integrated human and


non-human economy and society. Suddenly available are serious philosophical and practical investigations of the “future of post-human creative thinking” (Baofu 2009). Physically, it allows for the utopian “reimagining” of the body within a radically altered transhuman world. In this sense, Semi-siliconized cyborgs or outright computer androids might function equally well—if not more efficiently—as successor vehicles for the transmission and cultivation of what is distinctive about our being, whilst avoiding many if not all the liabilities of human biology. (Fuller 2011: 2)

This is linked, furthermore, to a belief that “treats the possession of an animal body as only contingently related with our humanity” (Fuller 2011: 2, also quoted in Botelho, 214). Beyond the individual body, these perspectives point to the creation of “post-capitalist” ecologies of sustainable value creation relying on human and artificial intelligence (Hornborg 2016).

Creating Integrative Economies

While “industry 4.0” poses a range of practical and existential challenges to humanity, it also offers the opportunity for recreating fully integrative transhuman economies based on principles of social inclusion, ecological sustainability, and shared material prosperity. The 2008 financial crisis and the austerity that followed have allowed for renewed questioning of once sacred free market values. For this purpose, we need to expand not only human and artificial intelligence but also our combined social imaginations. To this end, it is imperative to focus on what Westra et al. (2017) refer to as “practical utopias” that offer a range of “varieties of alternative economic systems”. In particular, they maintain that If we want to avoid a future that finds humanity groping in the dark trough of economic depression and natural catastrophe, we need to discuss and find degrees of unity around the positive features of post-capitalist economic alternatives. Emphasizing positive alternative economic systems fire up the imaginations and brings people together in order to get closer to or even reach these alternatives. There are thousands of movements around the world that are to some degree anti-capitalist, and with ongoing discussions may eventually reach a higher degree of solidarity….a real citizenship of the world, a citizenship that is generally accepting of people’s cultural differences and is built on a real caring for people and nature, needs to be central to an


ethic of solidarity amongst global movements. (Albritton and Westra 2017: 3–4)

Added to these radical sentiments is the need to recognise, respect, and learn from the cultural differences that will arise from non-human interactions and, as such, from projects of transhuman emancipation. Further, the prospect of a progressive integrative economy expands the realistic possibilities for realizing “sustainable economic development” using AI. Crucial, in this regard, is recognising that as for post-economy of artificial intelligence, it makes sense to single out two development directions: solving the problems associated with bringing the specialized artificial intelligence systems closer to human capabilities and their integration implemented by the human nature; and creating the artificial intelligence, arguably, a “mind”, which would incorporate all the created artificial intelligence systems and would be able to solve many economic and social problems of the humanity. (Mamedov et al. 2018: 1038)

At the heart of this economic reimagination, though, needs to be a direct rejection of the still currently dominant values of anthropocentrism. In particular, theories of development—whether local, national, regional, or global—should jettison their focus on supporting exclusively human growth and prosperity. Recent attempts to bring in values of “sustainability” have moved quite far in this direction. Yet they must be enlarged even further to include not just the “natural world” but its growing number of intelligent artificial inhabitants. The scholar Michelle Westerlaken (2017: 64–65) has proposed a technologically sophisticated form of “uncivilisation” for realising a “non-speciesism” based utopia: Uncivilisation, in this sense, is not referring to a kind of anarchy, but rather to an openness to validating and emphasising the multitude of alternative forces and perspectives that already exist or can be imagined. With this task, I adopted a stance towards knowledge that is performative rather than realist and emotional rather than rational. As a result, instead of presenting generalisable claims, this text specifically focuses on the possibilities and alternatives that can be created by artists (in the broadest sense of the word), with their creative abilities to draw, shape, reshape, expand, prefigure, prototype, inspire, defamiliarise, negotiate, test, and so on. I wish to emphasise that artists are building small utopias all the time, and I think that artists are the ones that can find new opportunities for the animal to join this discourse.


These non-speciesist perspectives would be an interesting complement and antidote to accelerationist theories emphasizing the existential threat posed by new technological advances to our current system and the world (Shaviro 2015). These also will impact everyday experiences of work and the workplace—reflecting “changing work realities” (Leonard and Cairnes 2019). For these possibilities of a non-anthropocentric economy to become a reality, it is imperative to understand the emerging power dynamics of this new transhuman era. Rather than simply focus on the present and future victims of capitalism—those “left behind” by “market progress”—it is better, suggests the provocative philosopher Slavoj Žižek in his book Like a Thief in Broad Daylight: Power in the Era of Post-Humanity (2018: 22), to “tackle the much more difficult task of changing the global system that generates them”. To this end, even as far back as the end of the twentieth century, critical thinkers were theorizing about the need to go “beyond the politics of the flesh” (Boyne 1998). In the present context, it is crucial to provide realistic scenarios about post-capitalist alternatives in order to break people’s psychological attachment to the present order. The scholar Alessandro Fergnani (2019: 12) suggests, in this sense, the promotion of a “transformation scenario” where The ultimate choice of whether to pursue capitalistic practices or to customize one’s profession out of the capitalistic economy and join collaborative resources management organizations will be left to citizens. Ultimately, whether to abandon capitalism will be decided by them, and thanks to an increase in public awareness on the importance of social capital rather than material capital, collaborative practices will gradually become predominant in a non-coercive way.
With the vast majority of the world population participating in some form of collaborative management of resources, money will be abolished by the end of the fifth decade of the century, and with it the capitalistic system. A new episteme will be gradually born, based on compassion and understanding of the weaknesses of human nature, which will form the basis of an economy based on complete inclusion, and in which the main form of exchange will be the contribution to others. The greatest value of human life will be considered that of contributing to the happiness of other individuals, an act that will be considered to redound to personal happiness. Events will be considered more important than objects, the pursuit of connections with others more important than material accumulation. The exchange of social value will supersede that of monetary value. Individuals will essentially repay each other by the mutual exchange of happiness, trust, friendship and gratitude in a number of different forms, from gift giving, to knowledge exchange, to quality time.

The evolution from credible speculation to practical implementation demands, further, an ethos based on “co-designing economies in transition” (Giorgino and Walsh 2018). This allows, thus, for the expansion of the current “socio-technical imaginary of the fourth revolution” to include more progressive transhuman values, encompassing both broader visions of what this future could be as well as the skills and capabilities it will require. Here, traditional “vocational educational and training” will combine greater adroitness with AI and a transformational political perspective that recognises how technology and artificial intelligence are entwined with social relations, being sites of class struggle. How this is played out is an outcome of the balance of power, not only within the social formation but also globally. How far the development of the forces of production is compatible with capitalist relations is a moot point, as this is also a site of struggle. (Avis 2018: 337)

This places the principles of an integrated transhuman relations at the core of a new and potentially quite revolutionary idea for transforming the economy. AI and robots can play an important role in establishing and sustaining an economy based not on private property but on “the commons”. A 2017 trans-disciplinary conference hosted by the Institute for Advanced Sustainability Studies exemplified these efforts, declaring in its invitation that:

Realizing a successful transition to a post-capitalist, commons-based political economy will not only depend on the capacity for new technologies and social relations to alter the balance of political and economic power; it will also depend on developing social practices that underlie a broader cultural shift…In an effort to bridge these gaps, this workshop seeks to convene scholars and stakeholders who have an interest or expertise in developing ethical and contemplative approaches to post-capitalism and commoning, but who come with different areas and levels of expertise. The main objectives of this workshop are to develop a shared understanding of the status quo (the problem space) and to co-develop pathways that best address this shared understanding going forward. (Walsh and Brown 2017: 1)

These can also enhance our critical reflection about ourselves as a “species”, transcending any essentialised view of humans as self-interested workers or as superior to non-humans, and leading to new political perspectives such as “biocommunism”. It is also about quite literally shifting our common perceptions of capitalism and our current social order, recognising it as both dehumanising in its effect and as based on artificial rather than just human intelligence (Canavan 2015). Anticipated is a potential transition from “post-work to post-capitalism” through the use of machine intelligence and learning (Dinerstein and Pitts 2018). The development of these disruptive technologies can, moreover, directly challenge and reconfigure existing humanist perspectives, advocating instead post-humanist approaches “which propose genetic and digital interventions that alter human nature” (Hermida and Casas-Mas 2019: n.p.).

The task is whether it is possible to go beyond the “human economy”. What this means is twofold: firstly, to consider an integrated economy that takes into account and seeks to benefit humans and non-humans; secondly, to conceive of alternatives to the economies embraced by humans in the past and present. To this end, conventional assumptions about humanism and humans are perhaps progressively obsolete (Szollosy 2017). Yet this move away from the “human” can ironically allow for new AI-driven development policies that are actually more humane. The “smart city project” in India reveals these incipient attempts to mix development goals with co-ordinated AI strategies tailored to the needs of the humans actually living in these areas:

Currently, smart cities are being hailed as the solution to all problems with the help of ICT and its enabled services around the globe, they lack clarity in totality. The smart city mission, which launched as a flagship program in India, is working in the same spirit. With the advancement of science and technology, the importance of ICT and digitization in the overall governance of towns and cities cannot be underestimated. The feasibility of this option in India, where a sizable proportion of the population lives in villages and below the poverty line, remains questionable. Research studies have proven that the root cause of compulsive migration to urban areas is the search for improved livelihoods, which further aggravates urban poverty. Amid this background, India needs to plan and develop cities and villages in synergy with regional and local contextual realities. On the foundation of this integration, Indian cities could be built to be more liveable, sustainable, prosperous, and inclusive smart cities ‘with a human face’. (Mehta and Yadav 2016: 13)

Furthermore, it can promote values of transhuman collaboration that inspire and empower humans in ways that the current system does not permit. While automation and digitalisation often lead to the enhancement of corporate power, the creation of makerspaces and innovation hubs in places like Hangzhou and Shenzhen in China reveals the potential for a more disruptive and transformational form of transhuman development (Keane et al. 2018). Nevertheless, this is not merely a technological updating of the current status quo. Rather, it opens the possibility of creating a “global commons in a global brain” (Last 2017: 48). It is a driving force for a twenty-first-century “Great Transformation” linked to a new “history for a techno-human future” (Bessant 2018). These ideas gesture toward the construction of a “transhuman economics” based on the combined capabilities and needs of humans and non-humans. This can apply to contemporary issues of transportation and mobility (Docherty et al. 2017) and can lead to new scenarios for human resource development (Gold 2017). It also reconfigures notions of human time, especially as these relate to work focused on “distributing potentialities” rather than simply exploiting “human resources”. Within these “transhuman” workplaces, “deep automation” will combine with new opportunities for human creativity as well as human and non-human collaboration (Upchurch and Moore 2018). Here, “the future of work” is refocused on creating realistic hopes for a better, more integrated transhuman world (Spencer 2018). Absolutely critical to such lofty aims is putting in place progressive governance frameworks and practices. To this effect, there is currently talk of “reinventing capitalism” around “sharing work” in order to address fears of unemployment and loss of income from automation (Rafi Khan 2018).
More radically oriented are efforts to integrate deliberative democracy into the creation and sustaining of “smart cities” (Alonso and Lippez-De Castro 2016). These reflect the desire for a transhuman economy that is innovative, sustainable, democratic, and just.

Administrating Shared Futures

The next step in envisioning an empowering and transformative “transhuman economy” is the question of who is actually to administer these integrative policies and strategies. One important aspect is how technologies can contribute to the creation and maintenance of viable and prosperous “degrowth” economies (Kerschner et al. 2018). It also gestures toward the potential to realistically develop a “workless” society that actually could work. This means taking seriously the challenges to such a radical politics and economics:

On the surface, all of this technology, while perhaps exciting from a consumer’s perspective, looks and feels most immediately like the inexorable advance of capitalism and immiseration of the soon to be workless, not the promise of utopia. Even the revolutionary potential of social media that has been demonstrated, for example, by Occupy, UK Uncut and the Arab Spring, is made possible only through a socio-technical system that is anything but revolutionary, and which makes possible, among other things, the marketisation of personal data and state surveillance by repressive regimes (and others). I am not trying to present an equally deterministic perspective about the impossibility of these technologies ever bringing about the end of capitalism. I am simply pointing out that at the present time, the very things that are presented as potentially emancipatory are being utilised to lock in capitalism—contributing to insecurity for the many and closing off possibilities for dissent and revolt. (Little 2016: 158)

However, while such caution may be warranted, AI and robots also permit new ways to regulate and control the market in an age of “digital capitalism” (Staab and Nachtwey 2016). It is worth noting that whereas it is almost simply “common sense” that AI and robots will transform, or at least strongly disrupt, the economy, this view is not shared by everyone. Indeed, it may be that these changes merely reconfigure and update rather than fundamentally reinvent and reboot our current status quo (Boyd and Holton 2017). A deciding factor in the extent and depth of this economic change will be the scope of the transformation of Person to Machine (P2M) interactions (Yoshida 2018). This has the potential to completely reconfigure critical understandings of labour relations, managerial control, and economic power (Briken et al. 2017b). Individuals are already being placed in new administrative regimes where they must flexibly manage their time in a precarious “gig economy” (Lehdonvirta 2016). These emerging cultures of self-administration, catalysed and aided by digital technologies, critically put into question the optimism around the creation of a “dynamic economy” built on “technology convergence and open innovation” (Park 2017). In order to mitigate this risk of technological disempowerment, it is necessary to infuse new administrative values into this increasingly
transhuman economy. In this respect, conventional market-driven or social democratic approaches, based respectively on competition or top-down regulation, are progressively becoming outdated. Instead, machine intelligence and the internet of everything allow for novel types of “commons-driven governance” (Araya 2015: 11). These build upon but ultimately expand beyond “sharing economies”, drawing on digital networks and machine learning to better coordinate economic decisions as part of a collective “global brain” (Heylighen 2017). Whilst these ideas may sound quite literally “de-humanizing”, in so much as they reduce the role of humans within the economy, they can make possible novel scenarios for “sustainable futures” that better serve the needs of human communities than conventional measures of “GDP growth” (Svenfelt et al. 2019). These perspectives thus paradoxically allow for the reimagining of economic administration beyond human control for the benefit of humans. At the most basic level, this can allow for a more efficient and data-driven market system (Guo et al. 2018). It can also be expanded to encompass new types of development strategies premised on fostering a stronger “sharing economy” (Taeihagh 2017). Most significantly, these hold the potential for realistically considering the transition from capitalism to “post-capitalism” (Sculos 2018). Crucially, they are based on decentralised governance principles that will depend on human and non-human intelligence to be successful. However, for such “decentralized autonomous systems” to be genuinely disruptive, they must directly challenge the libertarian and quite literally “dehumanizing” ideas underpinning their dominant ideologies, ones in which humans are not needed and machines become the main force in the reproduction of an exploitative capitalism. To this end, the scholar J. Z. Garrod (2016: 73) argues that there is good reason to question the utopian narrative of the DAS:
While the example of a washing machine ordering its own detergent is sufficiently domestic to obscure other uses of this technology, it is important that we recognize the destructive potential. With the coming of autonomous machines, we might soon live in a world where drones hire other machines for military purposes, or where in-body nano-technology autonomously negotiates with other technology outside your body (and perhaps, without our consent)….There is certainly no denying the potential of Bitcoin 2.0 tech, but it is this dark side that concerns me because it is necessarily opaque and hard to predict. Taken to its logical conclusion, however, the DAS can only appear as a utopia if one has totally expunged power and coercion from their analysis of social reality.
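Garrod’s washing-machine example can be made concrete with a deliberately simple sketch. The consent gate below is a hypothetical addition of this illustration, not a feature of any existing system: it marks exactly the kind of human check whose absence in autonomous transacting worries him:

```python
# Sketch of an autonomous purchasing agent (the washing-machine example).
# Without the consent gate, the device transacts on its owner's behalf
# automatically -- the "perhaps, without our consent" scenario.

class AutonomousAppliance:
    def __init__(self, budget, require_consent=True):
        self.budget = budget
        self.require_consent = require_consent
        self.orders = []

    def order(self, item, price, consent_given=False):
        """Attempt a purchase; block it until the owner explicitly consents."""
        if price > self.budget:
            return "rejected: over budget"
        if self.require_consent and not consent_given:
            return "pending: awaiting owner consent"
        self.budget -= price
        self.orders.append(item)
        return "ordered"
```

Setting `require_consent=False` recovers the fully autonomous behaviour of the original example, which is precisely what makes its “dark side” opaque and hard to predict.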

Central to avoiding this fate is the development of “self-adaptive learning” for producing not just a more optimal sharing economy but an integrated one as well (Pilgerstorfer and Pournaras 2017). Cities, in particular, offer an innovative space for such radical experimentation with transhuman integrative administrative ideas. They stand as the new terrain for social movements and utopian visions to merge into a struggle for a different economic order (Carter 2017). They, moreover, provide sites for the everyday use of what can seem like exotic technology, demystifying it and showing how it can be repurposed for the benefit of individuals and the wider public (Sun et al. 2016). For this purpose, there is a growing push for a broader “rethinking” of education, and consequently of work, to reflect the opportunities and challenges posed by “digital capitalism” (Means 2018). It additionally requires radical new thinking about how humans and non-humans can work together to create viable and just smart communities (Mahmood et al. 2017). While these efforts are obviously at the local and municipal level, they do point to what radically new types of transhuman economic administration may look like. Specifically, it is administration that subverts the power of both corporations and the state in the name of creating a sustainable post-capitalist future (Lawrence 2018), promoting “grand visions for post-capitalist human-computer interaction” (Feltwell et al. 2018). It also allows for the decentralisation of the digital economy. Here, advanced technologies such as blockchains can contribute to intelligent networks for improving existing transportation systems (Yuan and Wang 2016). Subjectively, it promises to transform “economic reasoning” linked to artificial intelligence (Parkes and Wellman 2015). Perhaps the most important and fundamental economic question of our time is how we can properly administer more just and integrative shared futures within our communities and globally.
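By way of illustration, a generic proportional-adjustment loop (invented here for this sketch, not the algorithm actually proposed by Pilgerstorfer and Pournaras) suggests what “self-adaptive” administration of a shared resource pool might mean in practice: capacity is repeatedly re-tuned toward observed demand while the pool’s total is conserved:

```python
# Toy self-adaptive allocation: each round shifts capacity toward
# under-served demand, then renormalises so total pool capacity is
# conserved. A generic sketch, not the cited work's method.

def adapt(allocation, demand, rate=0.5, rounds=20):
    alloc = dict(allocation)
    total = sum(alloc.values())
    for _ in range(rounds):
        for key in alloc:
            gap = demand[key] - alloc[key]   # shortfall (or surplus) observed
            alloc[key] += rate * gap         # move partway toward demand
        scale = total / sum(alloc.values())  # conserve the pool's capacity
        alloc = {k: v * scale for k, v in alloc.items()}
    return alloc
```

The point of the sketch is only that allocation decisions emerge from repeated local adjustment rather than from a central planner or a price mechanism.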
These goals will rely upon but transcend the development of exciting new technologies. Nor will they simply innovate the present system. Instead, a full-scale disruption will necessitate new and more just forms of economic governance. These will challenge current and emergent disciplinary regimes aimed at controlling rather than liberating populations (Shammas 2018). They will also entail a subversion and replacement of past perspectives of worker regulation and control originally developed for the workforce of the industrial revolution. These will fundamentally reconfigure current income distribution and unemployment (Korinek and Stiglitz 2017). However, it will also present the possibility of socially and materially creating an empowering and integrated new economy and world.

References

Abbott, R., & Bogenschneider, B. (2018). Should Robots Pay Taxes: Tax Policy in the Age of Automation. Harvard Law & Policy Review, 12, 145.
Abhari, K., Davidson, E. J., & Xiao, B. (2019). Collaborative Innovation in the Sharing Economy: Profiling Social Product Development Actors Through Classification Modeling. Internet Research.
Agrawal, A., Gans, J. S., & Goldfarb, A. (2018). Human Judgment and AI Pricing. In AEA Papers and Proceedings (Vol. 108, pp. 58–63).
Alasoini, T., Ramstad, E., & Totterdill, P. (2017). National and Regional Policies to Promote and Sustain Workplace Innovation. In Workplace Innovation (pp. 27–44). Cham: Springer.
Albritton, R., & Westra, R. (2017). Introduction to Practical Utopias. In Varieties of Alternative Economic Systems (pp. 1–14). London: Routledge.
Alipour Leili, M., Chang, W. T., & Chao, C. (2017). Driverless Governance. Designing Narratives Toward Democratic Technology. The Design Journal, 20(sup1), S4343–S4356.
Alonso, R. G., & Lippez-De Castro, S. (2016). Technology Helps, People Make: A Smart City Governance Framework Grounded in Deliberative Democracy. In Smarter as the New Urban Agenda (pp. 333–347). Cham: Springer. Retrieved from https://link.springer.com/chapter/10.1007/978-3-319-17620-8_18.
Arabandi, B. (2011). Globalization, Flexibility and New Workplace Culture in the United States and India. Sociology Compass, 5(7), 525–539.
Araya, D. (2015). Smart Cities and the Network Society: Toward Commons-Driven Governance. In Smart Cities as Democratic Ecologies (pp. 11–22). London: Palgrave Macmillan.
Avis, J. (2018). Socio-Technical Imaginary of the Fourth Industrial Revolution and Its Implications for Vocational Education and Training: A Literature Review. Journal of Vocational Education & Training, 70(3), 337–363.
Bache, I., Bartle, I., & Flinders, M. (2016). Multi-Level Governance. In Handbook on Theories of Governance. Edward Elgar Publishing.
Baker, S. E. (2018). Post-Work Futures and Full Automation: Towards a Feminist Design Methodology. Open Cultural Studies, 2(1), 540–552.
Bal, P. M., & de Jong, S. B. (2017). From Human Resource Management to Human Dignity Development: A Dignity Perspective on HRM and the Role of Workplace Democracy. In Dignity and the Organization (pp. 173–195). London: Palgrave Macmillan.
Bughin, J., Seong, J., Manyika, J., Chui, M., & Joshi, R. (2018). Notes from the AI Frontier: Modeling the Impact of AI on the World Economy. McKinsey Global Institute.
Burger, M., Stavropoulos, S., Ramkumar, S., Dufourmont, J., & van Oort, F. (2019). The Heterogeneous Skill-Base of Circular Economy Employment. Research Policy, 48(1), 248–261.

Buzgalin, A. V., & Kolganov, A. I. (2013). The Anatomy of Twenty-First Century Exploitation: From Traditional Extraction of Surplus Value to Exploitation of Creative Activity. Science & Society, 77(4), 486–511.
Čaić, M., Odekerken-Schröder, G., & Mahr, D. (2018). Service Robots: Value Co-Creation and Co-Destruction in Elderly Care Networks. Journal of Service Management, 29(2), 178–205.
Canavan, G. (2015). Capital as Artificial Intelligence. Journal of American Studies, 49(4), 685–709.
Carter, D. (2017). Smart Cities: Terrain for ‘Epic Struggle’ or New Urban Utopias? Town Planning Review, 88(1), 1–7.
Caylar, P. L., Noterdaeme, O., & Naik, K. (2016). Digital in Industry: From Buzzword to Value Creation. McKinsey & Company. Digital McKinsey, 2.
Cohen, B. (2017). Post-Capitalist Entrepreneurship: Startups for the 99%. Productivity Press.
Das, R. K., & Misra, H. (2017, April). Smart City and e-Governance: Exploring the Connect in the Context of Local Development in India. In eDemocracy & eGovernment (ICEDEG), 2017 Fourth International Conference on (pp. 232–233). IEEE. Retrieved from http://ieeexplore.ieee.org/abstract/document/7962540/.
Davis, G. F. (2016). Can an Economy Survive Without Corporations? Technology and Robust Organizational Alternatives. Academy of Management Perspectives, 30(2), 129–140.
Dinerstein, A. C., & Pitts, F. H. (2018). From Post-work to Post-capitalism? Discussing the Basic Income and Struggles for Alternative Forms of Social Reproduction. Journal of Labor and Society, 21(4), 471–491.
Docherty, I., Marsden, G., & Anable, J. (2017). The Governance of Smart Mobility. Transportation Research Part A: Policy and Practice. Retrieved from https://www.sciencedirect.com/science/article/pii/S096585641731090X.
Doctor, F., Galvan-Lopez, E., & Tsang, E. (2018). Guest Editorial Special Issue on Data-Driven Computational Intelligence for e-Governance, Socio-Political, and Economic Systems. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(3), 171–173.
Dolidon, A. (2016). Transhumanism and Its Ethical Dilemmas.
Duxbury, L., & Smart, R. (2011). The “Myth of Separate Worlds”: An Exploration of How Mobile Technology Has Redefined Work-Life Balance. In Creating Balance (pp. 269–284). Berlin, Heidelberg: Springer.
Eimontaite, I., Gwilt, I., Cameron, D., Aitken, J. M., Rolph, J., Mokaram, S., & Law, J. (2016). Assessing Graphical Robot Aids for Interactive Co-Working. In Advances in Ergonomics of Manufacturing: Managing the Enterprise of the Future (pp. 229–239). Cham: Springer.
Epstein, S. L. (2015). Wanted: Collaborative Intelligence. Artificial Intelligence, 221, 36–45.
Fehrer, J. A., Benoit, S., Aksoy, L., Baker, T. L., Bell, S. J., Brodie, R. J., & Marimuthu, M. (2018). Future Scenarios of the Collaborative Economy: Centrally Orchestrated, Social Bubbles or Decentralized Autonomous? Journal of Service Management, 29(5), 859–882.
Feltwell, T., Lawson, S., Encinas, E., Linehan, C., Kirman, B., Maxwell, D., … Kuznetsov, S. (2018, April). Grand Visions for Post-Capitalist Human-Computer Interaction. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (p. W04). ACM.
Fergnani, A. (2019). Scenario Archetypes of the Futures of Capitalism: The Conflict Between the Psychological Attachment to Capitalism and the Prospect of Its Dissolution. Futures, 105, 1–16.
Fleck, R., Cox, A. L., & Robison, R. A. (2015, April). Balancing Boundaries: Using Multiple Devices to Manage Work-Life Balance. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 3985–3988). ACM.
Fleming, P. (2017). The Death of Homo Economicus. University of Chicago Press. Retrieved from https://econpapers.repec.org/bookchap/ucpbkecon/9780745399423.htm.
Fleming, T., Bavin, L., Lucassen, M., Stasiak, K., Hopkins, S., & Merry, S. (2018). Beyond the Trial: Systematic Review of Real-World Uptake and Engagement with Digital Self-Help Interventions for Depression, Low Mood, or Anxiety. Journal of Medical Internet Research, 20(6), e199.
Fotopoulou, A. (2016). Digital and Networked by Default? Women’s Organisations and the Social Imaginary of Networked Feminism. New Media & Society, 18(6), 989–1005.
Frey, C. B., Berger, T., & Chen, C. (2017). Political Machinery: Automation Anxiety and the 2016 US Presidential Election. University of Oxford.
Fuller, S. (2011). Humanity 2.0: What It Means To Be Human Past, Present and Future. New York: Springer.
Furman, J., & Seamans, R. (2019). AI and the Economy. Innovation Policy and the Economy, 19(1), 161–191.
Galloway, C., & Swiatek, L. (2018). Public Relations and Artificial Intelligence: It’s Not (Just) About Robots. Public Relations Review, 44(5), 734–740.
Garrod, J. Z. (2016). The Real World of the Decentralized Autonomous Society. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 14(1), 62–77.
Garvey, C. (2018, December). AI Risk Mitigation Through Democratic Governance: Introducing the 7-Dimensional AI Risk Horizon. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 366–367). ACM.

Giorgino, V. M. B., & Walsh, Z. (Eds.). (2018). Co-Designing Economies in Transition: Radical Approaches in Dialogue with Contemplative Social Sciences. Springer.
Gold, J. (2017). The Future of HRD: Scenarios of Possibility. International Journal of HRD Practice, Policy & Research, 2(2), 71–82.
Goldsmith, S., & Crawford, S. (2014). The Responsive City: Engaging Communities Through Data-Smart Governance. John Wiley & Sons.
Good, J., Parsons, S., Yuill, N., & Brosnan, M. (2016). Virtual Reality and Robots for Autism: Moving Beyond the Screen. Journal of Assistive Technologies, 10(4), 211–216.
Goolsbee, A. (2018). Public Policy in an AI Economy (No. w24653). National Bureau of Economic Research.
Griffin, J. (2019). The State of Creativity: The Future of 3D Printing, 4D Printing and Augmented Reality. Edward Elgar Publishing.
Guo, B., Yu, Z., Zhou, X., & Zhang, D. (2014, March). From Participatory Sensing to Mobile Crowd Sensing. In 2014 IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM WORKSHOPS) (pp. 593–598). IEEE.
Guo, L., Ning, Z., Hou, W., Hu, B., & Guo, P. (2018). Quick Answer for Big Data in Sharing Economy: Innovative Computer Architecture Design Facilitating Optimal Service-Demand Matching. IEEE Transactions on Automation Science and Engineering, 99, 1–13.
Hacker, S. (2017). Pleasure, Power and Technology: Some Tales of Gender, Engineering, and the Cooperative Workplace. Routledge.
Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., et al. (2019). Will Democracy Survive Big Data and Artificial Intelligence. In Towards Digital Enlightenment (pp. 73–98). Cham: Springer.
Hermida, O. V., & Casas-Mas, B. (2019). An Empirical Review on the Effects of ICT on the Humanist Thinking. Observatorio (OBS∗), 13(1).
Heylighen, F. (2017). Towards an Intelligent Network for Matching Offer and Demand: From the Sharing Economy to the Global Brain. Technological Forecasting and Social Change, 114, 74–85.
Hoge, E., Bickham, D., & Cantor, J. (2017). Digital Media, Anxiety, and Depression in Children. Pediatrics, 140(Supplement 2), S76–S80.
Holden, E. (2017). Taxes for Robots: Automation and the Future of the Labor Market.
Hornborg, A. (2016). Post-Capitalist Ecologies: Energy, “Value” and Fetishism in the Anthropocene. Capitalism Nature Socialism, 27(4), 61–76.
Huang, Y., & Sharif, N. (2017). From ‘Labour Dividend’ to ‘Robot Dividend’: Technological Change and Workers’ Power in South China. Agrarian South: Journal of Political Economy, 6(1), 53–78.

Hughes, C., & Southern, A. (2019). The World of Work and the Crisis of Capitalism: Marx and the Fourth Industrial Revolution. Journal of Classical Sociology, 19(1), 59–71.
Ionescu, L. (2015). The Role of e-Government in Curbing the Corruption in Public Administration. Economics, Management, and Financial Markets, 10(1), 48–53.
Iuliia, P., Aleksei, M., & Mikhail, B. (2015, May). Trust in Digital Government as a Result of Overcoming Knowledge Access Inequality and Dissemination of Belief in e-Democracy. In Proceedings of the 16th Annual International Conference on Digital Government Research (pp. 309–311). ACM.
Jimeno, J. F. (2019). Fewer Babies and More Robots: Economic Growth in a New Era of Demographic and Technological Changes. SERIEs, 1–22.
Kamar, E. (2016, July). Directions in Hybrid Intelligence: Complementing AI Systems with Human Intelligence. In IJCAI (pp. 4070–4073).
Karo, E., & Kattel, R. (2019). Public Administration, Technology and Innovation: Government as Technology Maker. In Public Administration in Europe (pp. 267–279). Cham: Palgrave Macmillan.
Keane, M., Chen, Y., & Wen, W. (2018). The Creative Economy, Digital Disruption and Collaborative Innovation in China. In Creative Industries and Entrepreneurship. Edward Elgar Publishing.
Kenney, M., & Zysman, J. (2018). Work and Value Creation in the Platform Economy. Forthcoming, Research in the Sociology of Work, edited by Anne Kovalainen and Steven Vallas.
Kerschner, C., Waechter, P., Nierling, L., & Ehlers, M. H. (2018). Degrowth and Technology: Towards Feasible, Viable, Appropriate and Convivial Imaginaries. Journal of Cleaner Production.
Kester, G. (2016). Trade Unions and Workplace Democracy in Africa. Routledge.
King, B. A., Hammond, T., & Harrington, J. (2017). Disruptive Technology: Economic Consequences of Artificial Intelligence and the Robotics Revolution. Journal of Strategic Innovation and Sustainability, 12(2), 53–67.
Kmieciak, R., Michna, A., & Meczynska, A. (2012). Innovativeness, Empowerment and IT Capability: Evidence from SMEs. Industrial Management & Data Systems, 112(5), 707–728.
Kneuer, M. (2016). E-Democracy: A New Challenge for Measuring Democracy. International Political Science Review, 37(5), 666–678.
Koppenjan, J., & Klijn, E. H. (2015). Governance Networks in the Public Sector. Routledge.
Korinek, A., & Stiglitz, J. E. (2017). Artificial Intelligence and Its Implications for Income Distribution and Unemployment (No. w24174). National Bureau of Economic Research.

Kostakis, V., Niaros, V., Dafermos, G., & Bauwens, M. (2015). Design Global, Manufacture Local: Exploring the Contours of an Emerging Productive Model. Futures, 73, 126–135.
Kuo, T. H., Ho, L. A., Lin, C., & Lai, K. K. (2010). Employee Empowerment in a Technology Advanced Work Environment. Industrial Management & Data Systems, 110(1), 24–42.
Last, C. (2017). Global Commons in the Global Brain. Technological Forecasting and Social Change, 114, 48–64.
Laukyte, M. (2017). Social Robots: Boundaries, Challenges, Potential.
Lawrence, P. (2018). Corporate Power, the State, and the Postcapitalist Future.
Lawrence, M., Roberts, C., & King, L. (2017). Managing Automation: Employment, Inequality and Ethics in the Digital Age. Institute for Public Policy Research Commission on Economic Justice Discussion Paper.
Lehdonvirta, V. (2016). Algorithms That Divide and Unite: Delocalisation, Identity and Collective Action in ‘Microwork’. In Space, Place and Global Digital Work (pp. 53–80). London: Palgrave Macmillan.
Lema, R., & Lema, A. (2012). Technology Transfer? The Rise of China and India in Green Technology Sectors. Innovation and Development, 2(1), 23–44.
Lember, V., Kattel, R., & Tõnurist, P. (2016). Public Administration, Technology and Administrative Capacity (No. 71). TUT Ragnar Nurkse Department of Innovation and Governance.
Leonard, R., & Cairnes, M. (2019). Changing Work Realities: Creating Socially and Environmentally Responsible Workplaces. In Challenging Future Practice Possibilities (pp. 101–112). Brill Sense.
Leyden, D. P., & Link, A. N. (2015). Public Sector Entrepreneurship: US Technology and Innovation Policy. Oxford University Press.
Liebenau, J. (2018). Labor Markets in the Digital Economy: Modeling Employment from the Bottom-Up. In Digitized Labor (pp. 71–93). Cham: Palgrave Macmillan.
Linders, D., Liao, C. Z. P., & Wang, C. M. (2015). Proactive e-Governance: Flipping the Service Delivery Model from Pull to Push in Taiwan. Government Information Quarterly, 35, s68–s76.
Little, B. (2016). Post-Capitalism and the Workless Society. Soundings: A Journal of Politics and Culture, 62(1), 156–160.
Ljungholm, D. P. (2015). E-Governance and Public Sector Reform. Geopolitics, History, and International Relations, 7(2), 7–12.
Lorenz, M., Rüßmann, M., Strack, R., Lueth, K. L., & Bolle, M. (2015). Man and Machine in Industry 4.0: How Will Technology Transform the Industrial Workforce Through 2025. The Boston Consulting Group.
Mahmood, D., Javaid, N., Ahmed, I., Alrajeh, N., Niaz, I. A., & Khan, Z. A. (2017). Multi-agent-Based Sharing Power Economy for a Smart Community. International Journal of Energy Research, 41(14), 2074–2090.

5  CREATING SMART ECONOMIES: ADMINISTRATING EMPOWERING FUTURES 

169

Mamedov, O., Tumanyan, Y., Ishchenko-Padukova, O., & Movchan, I. (2018). Sustainable Economic Development and Post-Economy of Artificial Intelligence. Entrepreneurship and Sustainability Issues, 6(2), 1028–1040. Mansell, R. (2016). Power, Hierarchy and the Internet: Why the Internet Empowers and Disempowers. Global Studies Journal, 9(2), 19–25. Marchetti, A., Manzi, F., Itakura, S., & Massaro, D. (2018). Theory of Mind and Humanoid Robots from a Lifespan Perspective. Zeitschrift für Psychologie. Martin, C.  J., Evans, J., & Karvonen, A. (2018). Smart and Sustainable? Five Tensions in the Visions and Practices of the Smart-Sustainable City in Europe and North America. Technological Forecasting and Social Change. Retrieved from https://www.sciencedirect.com/science/article/pii/S0040162518300477. McGaughey, E. (2018). Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy. Centre for Business Research, University of Cambridge, Working Paper (496). McLaren, D., & Agyeman, J. (2015). Sharing Cities: A Case for Truly Smart and Sustainable Cities. MIT Press. Retrieved from https://books.google.co.uk/ books?hl=en&lr=&id=KhvLCgAAQBAJ&oi=fnd&pg=PR5&dq=Consciousne ss+sharing+and+technology&ots=5rI0pZ4FqW&sig=cqIes_WG2PhD IHMqAlLdlHLM7VQ. McLoughlin, I. (2002). Creative Technological Change: The Shaping of Technology and Organisations. Routledge. Means, A. J. (2018). Learning to Save the Future: Rethinking Education and Work in an Era of Digital Capitalism. Routledge. Mehta, S., & Yadav, K. K. (2016). Planning for a Smart City with a Human Face in Developing India. International Journal of Sustainable Land Use and Urban Planning, 3(2), 13–20. Meijer, A. (2015). E-Governance Innovation: Barriers and Strategies. Government Information Quarterly, 32(2), 198–206. Mokyr, J., Vickers, C., & Ziebarth, N. L. (2015). The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different? 
Journal of Economic Perspectives, 29(3), 31–50. Mortara, L., & Parisot, N. G. (2016). Through Entrepreneurs’ Eyes: The Fab-­ Spaces Constellation. International Journal of Production Research, 54(23), 7158–7180. MP, T. W. (2018). The Future of Work: Improving the Quality of Work. Renewal: A Journal of Labour Politics, 26(1), 10–17. Murashov, V., Hearl, F., & Howard, J. (2016). Working Safely with Robot Workers: Recommendations for the New Workplace. Journal of Occupational and Environmental Hygiene, 13(3), D61–D71. Nygreen, K. (2017). Troubling the Discourse of Both/And: Technologies of Neoliberal Governance in Community-Based Educational Spaces. Policy Futures in Education, 15(2), 202–220.

170 

P. BLOOM

Okolo, A.  I. (2018). Capacity Building in the Manufacturing Sector of the Economy as a Means for National Sustainable Development in African States. Journal of Emerging Trends in Economics and Management Sciences, 9(5), 270–277. Osborne, S. P. (Ed.). (2010). The New Public Governance: Emerging Perspectives on the Theory and Practice of Public Governance. Routledge. Park, H. (2017). Technology Convergence, Open Innovation, and Dynamic Economy. Journal of Open Innovation: Technology, Market, and Complexity, 3(4), 24. Parkes, D.  C., & Wellman, M.  P. (2015). Economic Reasoning and Artificial Intelligence. Science, 349(6245), 267–272. Pereira, G. V., Macadar, M. A., & Testa, M. G. (2016, March). A Framework for Understanding Smart City Governance as a Sociotechnical System. In Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance (pp. 384–385). ACM. Retrieved from https://dl.acm. org/citation.cfm?id=2910061. Pilgerstorfer, P., & Pournaras, E. (2017, May). Self-Adaptive Learning in Decentralized Combinatorial Optimization: A Design Paradigm for Sharing Economies. In Proceedings of the 12th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (pp. 54–64). IEEE Press. Powell, J. J. (2002). Petty Capitalism, Perfecting Capitalism or Post-Capitalism?: Lessons from the Argentinian Barter Network. ISS Working Paper Series/ General Series, 357, 1–73. Pulkka, V. V. (2017). A Free Lunch with Robots – Can a Basic Income Stabilise the Digital Economy? Transfer: European Review of Labour and Research, 23(3), 295–311. Punchihewa, D., Gunawardena, K., & Silva, D. A. C. (2017). Digital Marketing as a Strategy of e-Governance in Sri Lanka: Case Study of Sri Lankan Hospitality Industry. Qiu, J. L., Gregg, M., & Crawford, K. (2014). Circuits of Labour: A Labour Theory of the iPhone Era. tripleC: Communication, Capitalism & Critique. 
Open Access Journal for a Global Sustainable Information Society, 12(2), 564–581. Rafi Khan, S. (2018). Reinventing Capitalism to Address Automation: Sharing Work to Secure Employment and Income. Competition & Change, 22(4), 343–362. Rolf, M., & Crook, N. (2016). What If: Robots Create Novel Goals? Ethics Based on Social Value Systems. In EDIA@ ECAI (pp. 20–25). Rothschild, J. (2016). The Logic of a Co-Operative Economy and Democracy 2.0: Recovering the Possibilities for Autonomy, Creativity, Solidarity, and Common Purpose. The Sociological Quarterly, 57(1), 7–35.

5  CREATING SMART ECONOMIES: ADMINISTRATING EMPOWERING FUTURES 

171

Scholl, H.  J. (2015). Electronic Government: Introduction to the Domain. In E-Government: Information, Technology, and Transformation (pp.  19–26). Routledge. Schuchmann, D., & Seufert, S. (2015). Corporate Learning in Times of Digital Transformation: A Conceptual Framework and Service Portfolio for the Learning Function in Banking Organisations. International Journal of Corporate Learning (iJAC), 8(1), 31–39. Sculos, B. W. (2018). Minding the Gap: Marxian Reflections on the Transition from Capitalism to Postcapitalism. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 16(2), 676–686. Seok, S. (2018). The Effects of Subjective Beliefs and Values on Use Intention of Artificial Intelligence Robots: Difference According to Occupation and Employment. The Journal of the Korea Contents Association, 18(7), 536–550. Shammas, V.  L. (2018). Superfluity and Insecurity: Disciplining Surplus Populations in the Global North. Capital & Class, 42(3), 411–418. Shaviro, S. (2015). No Speed Limit: Three Essays on Accelerationism. University of Minnesota Press. Soma, K., Termeer, C. J., & Opdam, P. (2016). Informational Governance – A Systematic Literature Review of Governance for Sustainability in the Information Age. Environmental Science & Policy, 56, 89–99. Sørensen, E., & Torfing, J. (Eds.). (2016). Theories of Democratic Network Governance. Springer. Spencer, D. A. (2018). Fear and Hope in an Age of Mass Automation: Debating the Future of Work. New Technology, Work and Employment, 33(1), 1–12. Staab, P., & Nachtwey, O. (2016). Market and Labour Control in Digital Capitalism. tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society, 14(2), 457–474. Stoker, G. (1998). Governance as Theory: Five Propositions. International Social Science Journal, 50(155), 17–28. Sun, J., Yan, J., & Zhang, K. Z. (2016). 
Blockchain-Based Sharing Services: What Blockchain Technology Can Contribute to Smart Cities. Financial Innovation, 2(1), 26. Svenfelt, Å., Alfredsson, E. C., Bradley, K., Fauré, E., Finnveden, G., Fuehrer, P., et  al. (2019). Scenarios for Sustainable Futures Beyond GDP Growth 2050. Futures. Szollosy, M. (2017). EPSRC Principles of Robotics: Defending an Obsolete Human (ism)? Connection Science, 29(2), 150–159. Taeihagh, A. (2017). Crowdsourcing, Sharing Economies and Development. Journal of Developing Societies, 33(2), 191–222. Upchurch, M., & Moore, P. V. (2018). Deep Automation and the World of Work. In Humans and Machines at Work (pp. 45–71). Cham: Palgrave Macmillan.

172 

P. BLOOM

Valentine, L. (2018). Gender, Technology, and Democracy at Work. In Information Technology and Workplace Democracy (pp. 193–211). Routledge. Vromen, A. (2016). Digital Citizenship and Political Engagement: The Challenge from Online Campaigning and Advocacy Organisations. Springer. Wasén K. (2015). Innovation management in robot society. Routledge. West, D. M. (2015). What Happens If Robots Take the Jobs? The Impact of Emerging Technologies on Employment and Public Policy. Washington, DC: Centre for Technology Innovation at Brookings. Westerlaken, M. (2017). Uncivilizing the Future: Imagining Non-Speciesism. Antae; 1, 4. Westra, R., Albritton, R., & Jeong, S. (Eds.). (2017). Varieties of Alternative Economic Systems: Practical Utopias for an Age of Global Crisis and Austerity (Vol. 229). Taylor & Francis. Wilson, H. J., & Daugherty, P. R. (2018). Collaborative Intelligence: Humans and AI Are Joining Forces. Harvard Business Review. Winstanley, P. (2017). Public Administration for the Next Generation. In Government 3.0 – Next Generation Government Technology Infrastructure and Services (pp. 27–36). Cham: Springer. Wu, J., Balliet, D., Tybur, J.  M., Arai, S., Van Lange, P.  A., & Yamagishi, T. (2017). Life History Strategy and Human Cooperation in Economic Games. Evolution and Human Behavior, 38(4), 496–505. Retrieved from http://www. paulvanlange.com/s/WuEtAlEHBinpress.pdf. Yoshida, K. (2018). Drastic Change in Industrial Environment and Progress of P2M. Journal of International Association of P2M, 13(1), 1–15. Yuan, Y., & Wang, F. Y. (2016, November). Towards Blockchain-Based Intelligent Transportation Systems. In 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC) (pp. 2663–2668). IEEE. Yun, J.  J., Won, D., Jeong, E., Park, K., Yang, J., & Park, J. (2016). The Relationship Between Technology, Business Model, and Market in Autonomous Car and Intelligent Robot Industries. Technological Forecasting and Social Change, 103, 142–155. 
Žižek, S. (2018). Like a Thief in Broad Daylight: Power in the Era of Post-humanity. London: Penguin. Zwart, H. (2015). Extimate Technologies: Empowerment, Intrusiveness, Surveillance: The Fate of the Human Subject in the Age of Intimate Technologies and Big Data. Retrieved from http://repository.ubn.ru.nl/ bitstream/handle/2066/147465/147465.pdf.

CHAPTER 6

Reprogramming Politics: Mutual Intelligent Design

Imagine for a moment scrolling through your online news feed and reading a story about a horrible injustice. It is reported that a group is being denied their hard-won civil rights. There is a protest and a digital crowdfunding campaign to support them. There is local and global outrage and wider discussion of the need for further political progress. Now imagine that this oppressed group was not human but robots. If this again appears to be mere futurist speculation, note that one of the pioneers of humanoid robotics recently proclaimed that robots will have "civil rights" by 2045, observing that

As people's demands for more generally intelligent machines push the complexity of AI forward, there will come a tipping point where robots will awaken and insist on their rights to exist, to live free, and to evolve to their full potential. We will be forced to decide whether we can accept a greater, more inclusive vision of what it means to be human. (quoted in Cuthbertson 2018: n.p.)

The sixth chapter proposes a novel theory of transhuman politics. In the present day, technology is widely feared as a threat to democracy and progress: social media spreads fake news, while AI offers technocratic rather than deliberative solutions to society's ills. In this respect there is a pronounced political technophobia. The rise of AI has additionally spurred new political visions of technological emancipation spanning the ideological spectrum from libertarianism to socialism. Yet both its detractors and proponents miss the full radical political potential of a transhuman politics. It is one where non-human and human citizens deploy the latest virtual, digital and manufacturing advances to mutually design their societies. This chapter introduces a transhuman politics of mutual intelligent design. It builds, in this spirit, on emerging notions of "transhuman democracy" and "cyborg citizens", which promote the importance of social democracy for ensuring that technology benefits the many, not just the few. Moving beyond these valuable but limiting conceptions, we will then explore the exciting role of technologies like simulations and big data for enlivening and improving contemporary political democracies and social movements. This will be set against more critical concerns about the potential role of technology in furthering exclusion, inequality and disenfranchisement. This will lead to a broader discussion of the ways in which future virtual worlds could serve as new public spaces for debate and social experimentation. Moreover, it will investigate the potential for creating an "open source democracy" in which humans and non-humans make shared decisions about how best to design their environments for the public good. Will we be able to reprogram our politics so that we can redesign our society?

© The Author(s) 2020 P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5_6

Cyborg Politics

The possibility of an empowering and progressive transhuman future requires a complete overhaul of our current politics. The idea of political change may seem threatening at a time when basic democratic institutions and liberal rights are under attack. In response, activists and citizens from across the ideological spectrum have highlighted how outdated these political institutions and discourses seem to be in challenging the growing power and influence of elites. These are certainly valid concerns and critiques; however, they fail to fully account for how politics can and should evolve to meet the needs of a society increasingly populated by non-human intelligence. In his now classic 2001 book Cyborg Citizen: Politics in the Posthuman Age, Professor Chris Hables Gray maintains the need for a "participatory evolution" rooted in a "participatory government" for fully realizing the progressive potentials of "cyborg" society. He presciently declares that

Evolution is an open ended system with a tight link between information and action. We have an opportunity, if we take participatory evolution seriously, to be free of both the rule of blind chance necessity (the Darwinian perspective) and its opposite, distant absolute authority (creationism). Participatory evolution means we should shape our future through multiple human choices, incomplete and contradictory as they often are. Participatory government is the same…Decisions about evolution should be made at the grassroots, just as political and economic decisions should be, especially now that we have begun to recognize the political evolution of cyborgs. (Gray 2006: 3)

These critically echo the important insights of the philosopher Donna Haraway in her "Cyborg Manifesto" of the early 1980s. Specifically, speaking to the possibilities of feminist liberation, she contends

A cyborg is a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction. Social reality is lived social relations, our most important political construction, a world-changing fiction. The international women's movements have constructed "women's experience", as well as uncovered or discovered this crucial collective object. This experience is a fiction and fact of the most crucial, political kind. Liberation rests on the construction of the consciousness, the imaginative apprehension, of oppression, and so of possibility. The cyborg is a matter of fiction and lived experience that changes what counts as women's experience in the late 20th century. This is a struggle over life and death, but the boundary between science fiction and social reality is an optical illusion. (Haraway 2006: 117)

These visions of a radical cyborg politics allow, in turn, for the radical "cyborgisation" of everyday politics and policies. Concretely, such a cyborg politics was already being witnessed at the end of the twentieth century in the use of ear implants for helping deaf individuals (Cherney 1999). The possibility of a revolutionary "cyborg politics" speaks to the potentially transformative impact of AI and robots on power, rights, and governance. It also entails embracing multi-disciplinary perspectives and a pluralistic ethos in order to avoid the dangers of the technological determinism currently dominant within popular and political debates, and to allow for a culture of deliberative democracy for shaping the future:

'This time' is both distinct from and similar to what went before. And just as before, technological change has transformative potential as well as uncertainties and limits. Yet in public debate the rhetorical momentum in business and policy making is behind the technological determinists…The possibility of futures other than the dystopian or utopian strands of the radical change thesis, allows an array of competing hypotheses about future trends to be articulated and evaluated against a plurality of normative viewpoints. Such an exercise is crucial if a deliberative democratic discourse is to emerge around new technology. (Boyd and Holton 2018: 343)

While there is a popular fixation on their economic impact, the political influence of AI and robots could be just as wide-ranging and profound (Chitty and Dias 2018). This brings up older debates about the political differences between the "cyber citizen" and the "cyborg citizen" (Koch 2005). To a certain extent, computers and AI are already fueling contemporary populist movements, serving simultaneously as a platform for greater political organisation and for threatening disinformation (Levy 2018). It also anticipates how automation and algorithms will reshape political deliberation and agency. The recent example of the experimental Microsoft chatbot Tay epitomised these difficult and important emerging issues—as coders and an online community were able to reprogramme and repurpose the bot for racist and exclusionary purposes, bringing embarrassment and condemnation to its designer. Here, the bot's initially assumed agency was itself reframed by users and designers in quite unpredictable and political ways. Hence,

Users' responses to Tay teach us about how the concepts of agency and affordance must evolve if scholars and designers are to move beyond deterministic, bifurcated ways of thinking about agency as separable into technological scaffolding and humanistic action. Researchers investigating the impact of algorithms on our public and private lives need to be able to track the imagined affordances that we generate as we interact with algorithms. And we must watch for how agency flows among and between an algorithm's designer, the designed algorithm, human users, and the resulting content, interaction, or conversation. The future of intelligible and civil communication may well depend on a sensible understanding of the symbiotic agency in human and algorithmic communication. (Neff and Nagy 2016: 4927)

The most immediate political reaction to these prospective technological changes is thus one of understandable trepidation. As mentioned earlier in the book, there are serious and legitimate questions of whether democracy will even "survive big data and artificial intelligence" (Helbing et al. 2017). Such anxieties reflect not just a sense of unease with these new technologies but also a deeper worry that they will make it all that much easier for capitalist values and actors to continue to erode democratic principles and practices (Boyte 2017). The twenty-first century has brought to light the potential tensions between a constitutional democracy and the free market, tensions that may only be intensified with the addition of such "disruptive democracy" as AI (Nemitz 2018). These philosophical debates are joined by real world concerns about the existing political role of robots in international relations and the political economy (Carolan 2019; Weber 2016). Perhaps the greatest fear in relation to politics and technology is that associated with the use of AI and big data to simply dictate our social decisions and collective destinies through their enhanced abilities at prediction. These worries revolve around a legitimate unease and moral concern over the growing power of "algorithmic authority" (Lustig et al. 2016). AI and big data are already being used, in this regard, to anticipate "political party voting" and as such provide a potentially richer predictive understanding of how candidates will act once elected (Khashman and Khashman 2016). This speaks to the troubling emergence, for many, of a "politics of prediction". To this end, Big Data results in a "reinforced future-orientation", that is "likely to exacerbate the severance of surveillance from history and memory and the assiduous quest for pattern-discovery will justify unprecedented access to data" (Lyon 2014: 6). Here, according to Aradau and Blanke (2017: 380), big data reconfigures political time/space into one of continual "between-ness":

The emergence of relations and connections in feature spaces relies on calculations of 'between-ness'.
We use the notion of 'between-ness' to capture the geometrical measure of the shortest path between two data points in the feature space. Between-ness thus measures the connection and relatedness between anything mapped into the geometry of the feature space. It is not simply a connection or network, but an understanding of similarity and difference based on geometrical distance. The classification algorithms used in predictive analytics rely on assumptions about how the feature space can be optimally partitioned and the calculability of 'between-ness' as a measure of how distant or close data points are. A digital mode of prediction thus emerges in a regime of Between-ness. Predictive analytics algorithms manipulate the feature space and its various combinations in order to create so-called 'labels' for each object that is already assigned by past data in the feature space and predict new labels for all possible objects in the feature space.


This predictive politics extends to every level of the economy and governance, from finance (Hansen 2015) to policing (Kaufmann et al. 2018). At the heart of this technological political change is the challenging of traditional sovereignty. The idea of a ruler and consenting subjects is subverted by ideologies and enhanced capabilities that decentralise decision-making and power. New decentralised technologies such as blockchains thus offer novel opportunities to challenge centralised governing structures, allowing nascent forms of more democratic, digitally based popular sovereignty to emerge. Legal scholars Sarah and Ben Manski (2018: 159) note, in this respect, that

The structures of blockchain technology, we have found, tend more toward more distributed, democratized, and technologized sovereignties. Yet many of these same tendencies can be—and are being—channelled and recast both by corporate capital and states; actors that are well prepared and highly incentivized to take advantage. Corporations in particular have both a temporal advantage as early movers as well as the resources to hire technologists and rent state officials in attempts to both code and regulate the blockchain world of the near future. Against such advantages, we see little likelihood of effective disaggregated resistance by libertarian proponents of individual sovereignty. Popular sovereignty, on the other hand, may have a future. Cooperatives and democracy activists may find themselves capable of overcoming their early structural disadvantages by building a coalition of technologies and broader publics. As we have repeatedly pointed out, much of the motivating ideology and daily practice of blockchain coders is idealistic, utopian, decentralist, and cooperative.

This anti-sovereign, or at least decentralised, ethos represents an updated type of political and ethical psychology associated with these new technologies, one which the academic Betty Bayer (1999: 113) referred to as a "cyborg body politics":

At the dawn of the twenty-first century, changing technocultural pulses of everyday life, of who and what we are about as psychological subjects, our subjectivities, have stirred up anew a sense of life in the twilight zone. Neither wholly unmoored from our familiar ways of being nor completely jacked into cyberspace, we are instead caught up in the visual and digital cultural-political surrounds of transitions and transformations, restagings and reimaginings. From magazine headlines announcing technologies as making us faster, richer, smarter as well as alienated, materialistic, and a 'little crazy' through to advertisements claiming 'the future of machines is biology,' 'the biological is becoming technological,' 'technologies are becoming biological', and 'don't just send email, be email', popular culture discourses heighten our association of technology with emerging transformations in selves, bodies, and subjectivity.

Yet new forms of "post-anthropocentric" creativity, such as the increased use of procedural content generation—where computers create content based on algorithms—trouble traditional notions of authorship and political agency while also opening up new opportunities for radical transhuman collaboration (Phillips et al. 2016). Governments and groups can draw on this to predict and therefore proactively address incipient geo-political conflicts (Colaresi and Mahmood 2017). More broadly, it speaks to the role of technologies in politically "forging trust communities" (Wu 2015). It also enhances the political capabilities of both the state and citizens. Advances in nano-technology, for instance, can lead to the creation of a "cyborg embryo" which can promote values and practices of "transbiology" (Franklin 2006). Globally, it holds the potential to decolonise knowledge away from Eurocentric perspectives, thus allowing the emergence of "hybrid epistemologies" and "cyborg geographies" (Wilson 2009). Less positively, it can strengthen security regimes and the power of the police through fostering "algorithmic paranoia" (Sheehey 2019). Even when well intentioned, such as in the use of "humanitarian technology", the results can have negative unintended consequences (Jacobsen 2015). Critical, in this sense, is how technology may be desensitizing people to these new political capabilities and techniques. This reveals the opportunities and dangers of this arising "cyborg politics". Even as far back as the first decade of the new millennium, scholars were theorising the possibility of "hybrid natures and cyborg cities" (Swyngedouw 2006). Recently, this has shifted into discussions of how such technology can be deployed to mitigate existing injustices, such as racialized forms of policing (Healey and Stephens 2017). These run alongside more concerning ideas of how robots and traditionally vulnerable populations such as migrants will politically compete with one another (Wright 2019). While certainly valuable, these insights overlook the possibilities for such technology to be used to reinvigorate and positively transform our existing democracy.


Developing TransHuman Democracy

The idea that technology could revolutionize politics and power is, of course, not new. While many have speculated on the transformative role of science in reshaping organizational and public decision-making, others have challenged this view. Early commentators such as Professor Robert J. Thomas (1994) attempted to highlight "what machines can't do", including shaping the future and adding creatively to the production process. Critically, it is conjectured that while technology may dramatically impact surface level politics, it will do little to actually alter or replace underlying ideologies or power structures (Armitage 1999). More optimistic is the desire for a radicalized "digital democracy" that infuses twentieth-century theories with twenty-first-century political realities. "To think politics and technology together, then, is to rethink what we mean by organization, as well as to understand what it is that is being organized (knowledge, resources, people)", argues the groundbreaking scholar Nina Power (2017: n.p.),

The uneven distribution of technology, globally and locally, has put questions of automation, production and consumption firmly on the left's policy agenda, and such questions go to the heart of Marxism's status as a live political and theoretical perspective on the world. Expanding our historical and critical relationship to technology via feminist and ecological perspectives, particularly thinking about the continued dependence on fossil fuels, and how automation might avoid that (or if it can avoid it), there are huge questions at stake: the future of work, the future of politics, the future of global relations of production and consumption, and even the relationship between men and women. Politics and technology must also be understood as intimately intertwined in the everyday lives of millions.

In the modern period, democracy and technology have become increasingly intertwined. The competing visions of digital technologies as having a positive or negative effect on democratic practices and cultures too often miss how technological change is not predetermined but is itself socially constructed and political. Contained within these debates are conflicting theories of technological change and political democracy (Street 1997). Technology that is democratic in its development and purpose, in this regard, will more likely contribute to broader processes of democratization (Moran and Parry 2015). This can also trickle down to the workplace and industrial relations (Beirne and Ramsay 2018). These point to the potential for explicitly designing and creating what the renowned professor of democracy Larry Diamond (2015) calls, in another context, "liberation technology".

Significantly, this simultaneous foreboding and excitement about high-tech democracies and politics helps bring to a close the once fashionable idea that we had reached the "end of history" following the Cold War, though not necessarily its underlying search for human perfection (Ylönen 2016). Indeed, such technology is already shifting the course of modern democratic history by altering how politicians campaign and citizens politically participate. The observations of the scholar Jessica Baldwin-Philippi in her excellent 2015 book Using Technology, Building Democracy: Digital Campaigning and the Construction of Citizenship may seem almost quaint in light of recent political events; however, they still hold much value and relevance:

Citizens' voices may not be widely heard, and a local commentator is unlikely to emerge as a new authority in the political sphere, but being encouraged to provide publicly visible feedback, even though that feedback may disrupt the campaign's ability to stay on message, is a meaningful development. A large swath of the population may not engage in participatory political action but campaigns' efforts to get more and more people on the lowest rung of the ladder of engagement, and their efforts to keep pushing them upward, are a reason to be hopeful. While citizens will be exposed to social media content that is less informative, content that encourages emotional connection could drive future action, and other digital spaces like microsites are seeking improvement in the quality of their information. (5–6)

Such insights have given rise to entirely new theories of civic education—including those based on “design approaches”:

We know roughly what we want now, which is global democracy, but we do not yet know how to get there. The aim for educational research should be to design educational technology systems to achieve this aim, evaluate their impacts, refine the designs, try again and eventually, like the aviation pioneers over one hundred years ago, we might have a system that flies. Of course, unlike aviation research, this kind of educational research is not only technical but has to engage with profound ethical and ontological issues about what kind of future we want and what kind of beings we want to be. However, as with the aviation research, the best way to research such issues is not through theory alone but through practice by making designs and


P. BLOOM

evaluating the impact of designs where we allow our most fundamental assumptions and, indeed, our very selves, to be included in the dynamic self-reflective and self-reforming iterative design-based research process. (Wegerif 2017: 33)

It has also contributed to the supposed emergence of a generation-defining “new politics”, where burgeoning forms of social media have helped usher in generational changes in attitudes and political practices (Farthing 2015). Globally, such “disruptive technologies” are driving contemporary democratic reforms yet also posing new threats to them. In countries with fragile democratic histories, which remain extremely vulnerable to mass human rights violations and authoritarianism, the use of foreign direct investment to fund smart cities, despite its claims of being positively “disruptive” to the status quo, can present new threats that

undermine Myanmar’s democratic transition by exacerbating existing sources of inequality and introducing new ones. Specifically, the smart transition undermines national reconciliation by exploiting the urban hinterlands and externalizing the human and social costs of supplying electricity to smart city development; shuns central elements of university education reform by promoting standardized, prefabricated, decontextualized learning at the expense of critical thinking; and thwarts democratic civic participation by corporatizing the ownership and control of critical public assets and services and privatizing essential functions of urban governance without providing publicly accessible mechanisms of meaningful accountability. (Dale and Kyle 2015: 293–294)

Crucial, though, to attempts to radically reboot contemporary politics in the short and long term is transforming perceptions of the relationship between transhumanism and democracy. While most people may not immediately know the terms posthuman or transhuman, popular culture is increasingly marked by attempts to portray how AI and robots will affect democratic institutions and “human rights” (Hughes 2015b). As opposed to being just a technical enhancement of governance and politics, these innovations are involved in “constructing worlds” (Bijker 2017). Digital technology is already playing a prominent role in “framing” issues in a “fractured democracy” and in doing so mobilising diverse political networks (Entman and Usher 2018). It is worth noting, though, that these efforts to draw on technology for fostering more radical democratic cultures have a long history, such as the infusion of cybernetics into the revolutionary socialist ideologies and radical politics of Allende in Chile (Medina 2011). The greater recognition of the important role played historically by technology in the development and improvement of democracies points, in turn, to the potential of producing future forms of progressive transhuman politics. It represents a “counter-history of the present”. According to the scholar Gabriel Rockhill (2017: 2–3):

A counter-history calls into question the very idea of a sole or unique present that would everywhere be the same, and that one could define with a single concept or set of uniform defining characteristics. It does not therefore propose an opposite history of contemporary reality that would quite simply reverse a conventional conception of our conjuncture in order to show the inverse… In countering a particular schematization of contemporary reality, it specifically counters the historical order that underpins it. This double counter-history does not limit itself, therefore, to calling into question alleged historical positivities—so-called incontestable givens—but it strives to modify the very logic that has produced them. This implies diligent and delicate work on the ways in which history has been historically constituted as a practice that frequently relies on a unidimensional conception of space and privileges a very specific form of chronology (often Eurocentric and anthropocentric).

The present is marked, to this end, by “technology-intensive campaigning”, whereby political narratives and democratic movements are increasingly “datafied” (Kreiss 2016), thereby continually presenting conflicting digital and archivable “histories of the present”. It is also a source for a wider range of modern politics, spanning from populism, to e-democracy, to depoliticisation (De Blasio and Sorice 2018). Anticipated is a novel type of “material” political participation, where technology reconfigures what is considered “the public”, its interests, and interactions. As the celebrated scholar of innovation and politics Noortje Marres (2016: 1–2) presciently observes:

…the temptation has been strong to approach the inclusion of non-humans in democracy as a process of extension: engagement with the issues of non-human entities here all too often concentrates on the question of whether the existing machinery of politics, morality and ethics can be extended to include these entities”; by contrast, “the project of ‘letting things in’ transforms a specific category of social and political life, that of participation…we consider material participation as a specific mode of engagement, which can be distinguished by the fact that it deliberately deploys its surroundings, however widely they must be defined, and entails a particular division of roles among the entities involved—things, people, issues, settings, technologies, institutions, and so on. Rather than concentrating on a secular version of the metaphysical question about causality—do nonhumans have agency?—we then consider material participation as a specific phenomenon in the enactment of which a range of entities all have roles to play.

Critical to this project is reimagining the present and future based on integrative transhuman values (Pilsch 2017). This entails an acknowledgement that the antecedent to this contemporary and prospective transhumanism—mass industrialisation—was not a historical inevitability (Sabel and Zeitlin 1985). This broader and more critical historical perspective permits a sophisticated theoretical and practical understanding of the different possibilities of “governing through periods of disruptive technological change” (Hasselbalch 2018). It allows, furthermore, an expansive vision of how disruptive technologies could revitalize and transform contemporary democracy—making it more “self-organising” (Raikov 2018). Anticipated, therefore, is the potential to reconceive and concretely reconfigure politics using emerging “disruptive” technologies. To a certain extent, technology has always developed alongside and complemented developments in business and society (Rzevski 2019). Yet technology is now not just reinforcing or challenging the status quo but revealing how it can be completely transformed (Leighninger 2016). Represented, in this respect, is the opportunity to construct an entirely new “body politics”—a task on which our very political and social survival may well depend (Levina 2017).

Simulating Progress

New technologies can open the way for reconfiguring contemporary politics and reinvigorating present-day democracy. Yet they also have the capacity to critically expand humanity’s political imagination. In particular, they can do so by virtually immersing individuals and communities in alternative presents and futures. Tellingly, giant social media companies are already investing in this growing power of “immersive storytelling”:

One strategy to the challenge of digital ecosystems to be sustainable, desirable environments is the extension of their platforms. Facebook sees its social media platform ‘Facebook’ as the foundation for the growth of its


business (see Figure 1), with additional services Video, Messenger, Search, Groups and communication tools Instagram and WhatsApp as the most influential assets to reach a mass market audience through mobile devices. However, immersive technologies are the next step as outlined in its 10-year roadmap, most importantly through connectivity (drones, satellites, lasers, terrestrial solutions, telco infra and free basics) as well as artificial intelligence (AI) including communication through vision, language, reasoning, and planning. Augmented reality (AR) and VR technologies are as well in the focus of Facebook’s strategy, such as AR tech, mobile VR, Rift, Touch (Rift’s haptic controller) and Social VR (incl. ‘Spaces’, ‘Parties’, and other all-surrounding mediated environments). (Stiegler 2017: 2)

In this respect, it can also make what was once merely conjectural and hypothetical temporarily real (Lanier 2017). Novel economic and social arrangements such as post-capitalism and transhumanism go from dreams to virtual reality. Such radical immersive potentialities act as the radical inverse of the predicted rise of “virtual capitalism”:

Here’s how virtual capitalism works: NKK, a Japanese steel company with a failing shipyard, converts the shipyard into a facility to produce simulated domed beaches complete with wave-making machines and surfing contests. The selling point is that nothing unpleasant, uncomfortable, or inconvenient happens at these beaches: the last man’s paradise. Virtualization in the name of exchange value is the formula for the transition from industrial capitalism to virtual capitalism. (Kroker and Weinstein 1994: 4)

Virtual reality is already beginning to reshape how we experience the world and interact with others. It provides the chance for people to “experience” the lives and perspectives of others, thus critically raising their awareness of and empathy for them. A rather famous example is the early use of this technology to simulate the experience of being in a Syrian refugee camp (Irom 2018). These virtual experiences differ from those, such as in video games, that allow people to take on a number of different perspectives in a disembodied and non-visceral way (Berents and Keogh 2018). To a certain extent, such VR aims to counter the underlying virtual reality of much present-day policy making, which is based on assumptions that have little to do with the actualities of the people they are created for and most affect (Millar and Bennett 2017). Nevertheless, virtual reality does run the risk of highlighting a rather narrow reality of a place as seemingly “real”, thus reinforcing certain prejudicial and problematic assumptions. This has already occurred, for instance, in non-VR virtual representations of global tourist sites, which minimize serious structural economic and political issues for a romanticized and appealing view of the world open to visitors (Holmes 2002). It appears, further, that the predictions of an increasingly “virtual state” are coming to fruition (Frissen 1997). If nothing else, such virtual technology advancements offer the potential to shift politics from abstract deliberation to evidence-based experimentation. Put differently, rather than focusing decision-making on debates over ideas, it could allow people to virtually test out these ideas (Zhu and Li 2017). It can further provide a stronger sense of the concrete effects of what can otherwise be ideologically driven policy discussions. In particular, it can highlight the actual everyday challenges faced by often forgotten or demonised populations like those incarcerated (Robinson 2016). The early “cultures of the internet” were meant, in this regard, to provide “virtual spaces” for people to share their “real histories” and “living bodies” (Shields and Shields 1996). In the present era, VR is helping bring quantitative data “to life” through innovative immersive visualisations drawing on “mixed reality” methods. Here,

…the main benefit from the implementation of MR approach is human experience improvement. At the same time, such visualization allows convenient access to huge amounts of data and provides a view from different angles. The navigation is smooth and natural via tangible and verbal interaction. It also minimizes perceptional inaccuracy in data analysis and makes visualization powerful at conveying knowledge to the end user. Furthermore, it ensures actionable insights that improves decision making. (Olshannikova et al. 2015: 21)

Virtual reality could, accordingly, transform political debates and decision-making into a form of “immersive theatre” (Frieze 2016). This emerging virtual aspect of politics points to how technologies can be used to more radically transform what is considered politically possible. To a certain extent, VR is both an invitation to expand beyond our current realities and a reflection back to us (often in quite subtle ways) of prevailing cultural beliefs and attitudes. Writing in 1994, the scholar Ralph Schroeder observed almost prophetically that:

Whatever the case may be in VR research and industry, VR technologies and new beliefs about science and technology, particularly in the form of cyberculture, are likely for the various reasons outlined here to continue providing inspiration within contemporary culture. The reality of cyberculture, however, as I hope to have shown, remains the Weberian one whereby beliefs reflect the predispositions of the intellectual strata which are their carriers, as well as the Durkheimian one whereby the role of knowledge and belief mirrors more fundamental features of social reality. Whether, in addition, we can discover the ‘cyborgs in us all’ or experience hitherto unimagined states of consciousness within our computer-simulated environments—I will show you meaning in a handful of silicon chips, as T. S. Eliot might have said—remains to be seen. The conditions which have thus far sustained cybercultural ideals, however, whether technological, social or cultural, may be with us for some time yet. (527)

Returning to more recent times, the enhanced reliance on simulations is currently dramatically changing city planning and architecture. Significantly, By and large, the use of VR in laboratories for professional design and research purposes facilitates access to situations that do not (yet) exist. Although lab applications are sometimes used to determine visual preferences in regards to extant views (or images) in a controlled environment, a frequent purpose is to inform about future visual change. Such anticipated changes may be either planned—such as for reuse of existing buildings in urban design…or expected, such as to solicit a response from stakeholders regarding climate change. (Portman et al. 2015: 376)

VR is additionally being used to challenge students to think more creatively by letting them explore simulations of different worlds and use them as a heuristic for better understanding their own realities (Lau and Lee 2015). This gestures toward a new demand for citizens to use virtual spaces to think more creatively about how they could reconfigure their local, national, and global realities in conjunction with new technologies such as blockchain (Kostakopoulou 2018). It, moreover, puts pressure on governments and policy makers to effectively deploy VR for addressing serious social problems like post-traumatic stress disorder via the internet (Freedman et al. 2015). Anticipated by this virtual politics is the ability to accept and embrace multiple potential social histories. It permits people to experience the same event, the same society, from the perspective of another, thus revealing its pluralities and diverse (often competing) realities (Nash 2018).


This can lead to a more user-centred approach to public welfare and services (Perry 2018). It can also allow people to “play with history”—to reimagine the past, present, and future as a form of collective debate and public deliberation (Hassapopoulou 2018). Political narratives are now more than just ideological stories told by politicians; they are immersive alternative perspectives that people can live and interact within (Ryan 2015). In doing so, they begin to redefine pluralism for an increasingly virtual age (Rose 2018). This would alter, moreover, the very way humans conceive of and act as political subjects. It would, more precisely, point to the possibility of enacting a shared transhuman existence. Anticipating this possibility is the emergence of vibrant “virtual communities” (Wellman and Gulia 1999). These can allow scholars and citizens to be in several “places at once”—leading to “multi-sited” forms of critical inquiry and political identification (Green 1999). The danger, and what is already being witnessed, is the exploitation of VR for conventional types of “realpolitik”—a risk already witnessed in the co-opted use of social media to subvert the revolutionary impulses of the “Arab Spring” (Klischewski 2014). Nevertheless, these are not its only uses—this technology also holds the promise of a politics based on the creation of “virtual worlds” (Nayar 2004). At stake is nothing short of the potential for a virtual political revolution. It is the chance to immerse ourselves in an environment different from our own or one that does not yet exist (Bottici 2014). Humanity itself becomes, in this regard, virtual—something malleable that can be continually experimented with and redesigned (Adam and Green 2005). These reimaginings are transformed into immersive narratives through which people (and potentially machines as well) can share a common, once seemingly unimaginable, experience (Steinicke 2016).
Paradoxically, it is only by becoming more virtual that our present and future politics, and all their possibilities, can ever become real.

“Unhumanising” Politics

One of the greatest fears of the twentieth century was the often profoundly dehumanizing effect of modern politics. These are not idle or illegitimate concerns, of course. The last century was marked both by great material progress and by almost unfathomable global destruction, ranging from the Holocaust to the Gulags to the Vietnam War to Apartheid. These bore witness to how rapid technological advances could be harnessed for dramatic


and mass inhumanity. The twenty-first century seems to be tragically continuing in the devastating and destructive footsteps of the twentieth. Events from the Iraq War to the financial crisis to the resurgence of far-right ideologies reveal the ongoing dangers posed by this virulent mix of status quo politics and technological innovation. A defining aspect of this hi-tech modern political dehumanization is the rendering of complex and rich human lives into aggregate statistics. The actual human cost of war, exploitation, and even extermination is commonly lost in numerical translation. Reflected is the technological component of the oft-invoked “banality of evil”. In the present era, this statistical process of dehumanisation is captured in the growing reliance on big data. Countering this trend are incipient efforts to “humanise” big data and make it more inclusive—especially in terms of whose voices are measured and analysed and how it is used for social ethics (Baum 2017). Promoted is the possibility of “inclusive data”. It also reveals the potential “dark side” of using big data for “social good” (Lepri et al. 2017). Almost completely ignored in these debates, though, is just how “dehumanising” this can make emerging transhuman relations, as research shows that, “especially under certain circumstances, people are sensitive to persuasion by (artificial) social agents. For example, when people feel socially excluded, they are motivated to increase their social connections with others” (Ruijten et al. 2015: 832). More optimistic are perspectives heralding the opportunity to create a “good data” society. Central, in this regard, are trans-national attempts to deploy AI for contributing to and spreading the “good society” (Cath et al. 2017). These echo earlier concerns that e-Governance and e-Democracy may reproduce existing forms of social exclusion and therefore reinforce prevailing power dynamics (McNutt 2007).
Yet it also reflects the abiding hope that such civic “e-participation” could lead to greater public deliberation and engagement with traditionally underrepresented populations, such as hard-to-reach youths (Edelmann et al. 2008). These developments in e-democracy and, more recently, big data are similar to the excitement and criticism that ICTs generated for democratic development at the beginning of the new millennium (Fleming 2002). The question, then, is how best to help construct and foster the “e-citizen” (Coleman 2012). These hopes, though, are belied by ongoing social exclusion being not just reproduced but actively exacerbated by big data and social media. The goal of “e-government”, and as such a more virtual politics, was to enhance the quantity and quality of conversation and deliberation between


citizens (Anderson and Bishop 2005). Now it appears that such technology is an existential threat to those very goals (Ranieri 2016). The aforementioned use of drones for military and entertainment purposes calls into question whether such intrusive and weaponised technology actually “undermines democracy”. As Peter W. Singer, then head of the 21st Century Defense Initiative at the centre-left US think tank the Brookings Institution, observed in the New York Times in 2012:

…now we possess a technology that removes the last political barriers to war. The strongest appeal of unmanned systems is that we don’t have to send someone’s son or daughter into harm’s way. But when politicians can avoid the political consequences of the condolence letter — and the impact that military casualties have on voters and on the news media — they no longer treat the previously weighty matters of war and peace the same way. (n.p.)

These rather foreboding queries directly contradict the optimism of the past, when it was asked, “Will robots save democracy?” (Moore 1981). Increasingly, it appears that the answer is neither one nor the other but rather a question of how to progressively “build digital democracy” in an increasingly transhuman world:

Fridges, coffee machines, toothbrushes, phones and smart devices are all now equipped with communicating sensors. In ten years, 150 billion ‘things’ will connect with each other and with billions of people. The ‘Internet of Things’ will generate data volumes that double every 12 hours rather than every 12 months, as is the case now. Blinded by information, we need ‘digital sunglasses’. Whoever builds the filters to monetize this information determines what we see — Google and Facebook, for example. Many choices that people consider their own are already determined by algorithms. Such remote control weakens responsible, self-determined decision-making and thus society too….The many challenges ahead will be best solved using an open, participatory platform, an approach that has proved successful for projects such as Wikipedia and the open-source operating system Linux. (Helbing and Pournaras 2015: 33)
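The gulf between the two doubling rates quoted above is easy to underestimate. A short back-of-the-envelope calculation, our own sketch assuming idealised exponential growth, makes it concrete:

```python
# Compare one year of data growth under the two doubling rates quoted above,
# assuming idealised exponential growth (our assumption, for illustration only).

HOURS_PER_YEAR = 365 * 24  # 8760

def growth_factor(doubling_period_hours: float, elapsed_hours: float) -> float:
    """Factor by which a quantity grows after elapsed_hours, given its doubling period."""
    return 2.0 ** (elapsed_hours / doubling_period_hours)

# Doubling every 12 months: data merely doubles over the year.
yearly_doubling = growth_factor(HOURS_PER_YEAR, HOURS_PER_YEAR)

# Doubling every 12 hours: 730 doublings over the year, i.e. a factor of 2**730,
# roughly 10**219 — astronomically beyond any conceivable storage capacity.
half_daily_doubling = growth_factor(12, HOURS_PER_YEAR)

print(yearly_doubling)      # 2.0
print(half_daily_doubling)  # ≈ 5.6e219
```

The point of the sketch is simply that the quoted shift is not a modest acceleration but a qualitative break: no filtering-free approach to such a data stream is even arithmetically plausible.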

To a certain extent, this means taking seriously the urgent need to “decolonize data”. The previous section highlighted the opportunities of virtual politics. However, it also stressed that politics has always been virtual to a quite large degree—dominantly portraying one interpretation of


reality over others. This can range from the rather straightforward attempt to establish a dominant or hegemonic view of existing social and economic relations to the historical advancement of a Western episteme as the only valid one. These concerns extend to the spread of big data—essentially, in the name of being evidence-driven, which realities are being virtually promoted, and which are being ignored and thus rendered invisible? Quoting the renowned scholars Nick Couldry and Ulises Mejias (2019: 337):

Through what we call ‘data relations’ (new types of human relations which enable the extraction of data for commodification), social life all over the globe becomes an ‘open’ resource for extraction that is somehow ‘just there’ for capital. These global flows of data are as expansive as historic colonialism’s appropriation of land, resources, and bodies, although the epicentre has somewhat shifted. Data colonialism involves not one pole of colonial power (‘the West’), but at least two: the USA and China. This complicates our notion of the geography of the Global South, a concept which until now helped situate resistance and disidentification along geographic divisions between former colonizers and colonized. Instead, the new data colonialism works both externally — on a global scale — and internally on its own home populations. The elites of data colonialism (think of Facebook) benefit from colonization in both dimensions, and North-South, East-West divisions no longer matter in the same way.

This phenomenon is even more troubling as society increasingly views “big data as the big decider” (Taylor 2018). Nevertheless, big data and AI also contain vast potential for challenging and transforming existing political power structures. Ideologically, this can involve giving greater importance to “minor data” in order to subvert and put into question pervasive discourses of neoliberalism (Koro-Ljungberg et al. 2017). Additionally, it can entail the use of such sophisticated data collection methods to better understand and represent often overlooked human populations while also exposing the dominant social norms currently used for their understanding and political treatment. Importantly,

Numbers, configured as population or population sample data, are not neutral entities. Rather, social and population statistics are better understood as human artefacts, imbued with meaning. And, in their current configurations, the meanings reflected in statistics are primarily drawn from the dominant social norms, values and racial hierarchy of the society in which they are


created. As such, in colonising nation-states, statistics applied to indigenous peoples have a raced reality that is perpetuated and normalised through their creation and re-creation…. The numerical format of these statistics and their seemingly neutral presentation, however, elide their social, cultural and racial dimensions. In a seemingly unbroken circle, dominant social norms, values and racial understandings determine statistical construction and interpretations, which then shape perceptions of data needs and purpose, which then determine statistical construction and interpretation, and so on. Just as important is that the accepted persona of statistics on indigenous people operates to conceal what is excluded: the culture, interests, perspectives and alternative narratives of those they purport to represent—indigenous peoples. (Walter 2016: 79–80)

These techniques of using big data to deconstruct and reconstruct hegemonic narratives can also directly inform social movements for change—for instance, through using social media like Twitter to map out protests and potentially connect them into broader movements (Losh et al. 2013). However, there is also a more revolutionary possibility at play. It is to transform “dehumanisation” from a force of oppression into one of emancipation and liberation. The notion of “human nature” has already been widely dismissed as scientifically questionable and politically problematic (to say the very least). Moreover, as this book has put forward as one of its core claims, human empowerment, prosperity, and progress paradoxically depend on our willingness to transcend our current anthropocentric worldview and attendant social relations. Big data can help us to do so—similar to how virtual reality used technology to open the way for experiencing different realities and thereby dramatically expanding our political imaginations. Data evidence can, in turn, allow us to see ourselves as something more than just humans, or as other than the most important beings in the world (Cheney-Lippold 2018). It permits a “reseeing” of ourselves as human (Jack et al. 2013). It further allows for a critical questioning of “where is the human in the data”? (Ballantyne 2018). In contemporary times, it can reveal the potential for technology and non-human intelligence to actually protect our rights and privacy rather than simply imperil them, as is commonly portrayed (Tusinski Berg 2018). Looking toward the future, it can allow us to literally and figuratively “care for unknown futures” (Agostinho 2019).
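To make the protest-mapping idea gestured at above slightly more tangible, here is a minimal sketch of grouping geotagged posts into candidate protest sites. Everything in it—the sample data, the field layout, and the grid size—is our own hypothetical illustration, not anything drawn from Losh et al. or from any real social media API:

```python
from collections import defaultdict

# Hypothetical geotagged posts: (latitude, longitude, hashtag).
posts = [
    (40.7128, -74.0060, "#protest"),  # New York
    (40.7130, -74.0055, "#protest"),  # New York, a block away
    (51.5074, -0.1278, "#protest"),   # London
]

def cluster_by_grid(posts, cell_deg=0.01):
    """Bucket posts into ~1 km grid cells; each non-empty cell is a candidate protest site.

    cell_deg is an assumed grid size (0.01 degrees of latitude is roughly 1.1 km).
    """
    clusters = defaultdict(list)
    for lat, lon, tag in posts:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        clusters[key].append((lat, lon, tag))
    return clusters

sites = cluster_by_grid(posts)
# The two New York posts fall in one grid cell; the London post forms a second site.
```

A real pipeline would of course need proper spatial clustering (grid cells split dense events at cell boundaries), temporal windows, and language filtering; the sketch only shows the basic deconstruct-and-regroup move the text describes.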


Perhaps, then, one of the most vital and urgent tasks of the new millennium is to “unhumanize” politics. What is absolutely critical, in this regard, is the refusal to use big data and AI simply as a means for better quantifying our socially prescribed selves (Nafus and Sherman 2014). By contrast, it is to conjure up and continually evoke a transhumanist vision which deconstructs such identities and links our freedom to our capacity to engage with non-human forms of selfhood and social relations (Alfsvåg 2015). When this occurs we can celebrate our evolution from being “all-too-human” to “all-too-transhuman” (Bradley 2018).

Reprogramming Politics

The prospect of radically “unhumanising” power allows a range of new transhuman political possibilities to emerge. Here AI, computers, robots, and data intermingle with humans to form new social networks and cultural spaces. Hence:

Data enables our embedded relationalities to become knowable. The more our interrelations become datafied and become transparent and readable the more we can understand the chains of contingent, complex and emergent causality which previously were invisible. The visibility of the complex world removes the need for causal theory and for top-down forms of governance on the basis of cause-and-effect. The self-awareness of a datafied world thereby blurs forever the distinction between human and nonhuman and subject and object. Big Data thereby articulates a properly posthuman ontology of self-governing, autopoietic assemblages of the technological and the social. Whereas the ‘human’ of modernist construction sought to govern through unravelling the mysteries of causation, the posthuman of our present world seeks to govern through enabling the relational reality of the world to become transparent, thus eliminating unintended consequences. (Chandler 2015: 845)

This has the potential to completely reconfigure global governance in ways that are both exciting and dangerous (Schwarz et al. 2019). It also offers the opportunity to reimagine the possibilities of what it means to be “human” and, in doing so, to enlarge the scope of what politics can achieve (Dunbar 2017). Significantly, the prevalence of big data is reconceiving the purpose and limits of “human” politics. To this end, it is literally permitting people to “resee” how they are and can be connected at a local and global level

194 

P. BLOOM

(Olbrich and Witjes 2016). It additionally challenges conventional notions of causality and helps construct, accordingly, a more “resilient” human political subject that embraces both experimentation and failure in the pursuit of discovery and innovation (Chandler 2016). These point to the emergence of a “post-human data subject” who uses non-human technology to evade human surveillance and reserves the “right to be forgotten” as well as to decide which social networks and communities they choose to be part of (Käll 2017). Anticipated in the future is the rise of the communal “posthuman city” (Zaera-Polo 2017). Promised is a radical posthuman vision of “morphogenic societies” (Al-Amoudi and Morgan 2018). These concrete possibilities for post-human adaptation serve, in turn, to redefine how we conceive humanity and use these definitions to guide our decisions and interactions:

post-humanism, understood as a general framework, instead of being a way to transcend the human, is a way to dehumanize it. Although the concept of dehumanization lacks a systematic theoretical basis, I maintain that, in general, it refers to meanings that involve the denial of two distinct senses of humanness: the characteristics that are uniquely human and those that constitute human nature. The assimilation of the human to animals and plants means denying uniquely human attributes. Giving human beings special skills through particular technologies can strengthen or threaten human nature, depending on how technologies affect the constitution of the human person and her internal as well as external relationality. Most of what we call post-human theories and praxis entail cognitive underpinnings of animalistic and/or mechanistic kinds that do not represent forms of human transcendence, but instead of dehumanization. In the face of the transhuman era, we need new criteria in order to evaluate what humanizes the human and what de-humanizes it. (Donati 2018: 53)

More immediately, this radical posthuman politics holds the potential to revolutionise contemporary democracy. At the beginning of the twenty-first century arose fears of the “tyranny of participation” within then current development discourses (Cooke and Kothari 2001). These have evolved in the intervening decades into a concern for the corporate and political control associated with the growing prevalence of “data-driven participation” (Tenney and Sieber 2016). Globally, these echo the troubling use of digital technologies for reinforcing “persistent inequalities”. Speaking to the context of “urban Latin America”, Müller and Segura (2016: 3–4) argue that

6  REPROGRAMMING POLITICS: MUTUAL INTELLIGENT DESIGN 

195

Digitalization is thus inscribed in a complex and inequitable reality that it requires to comprehend the specificity of situations and urban contexts of which it detaches itself and which it modifies. Hence, we begin to investigate theories and methods that allow us to understand the ways in which digital and urban are connected, as well as the spatial, social, political and cultural effects produced by digital technologies in an inequitable urban scenario, avoiding hopeful technophilic promises as much as pessimistic technophobic prognoses. In turn, it’s worth questioning how digitalization—and its promises to improve quality of life, boost economic growth, and promote human development—could contribute to overcome persistent inequalities in the Global North and South, providing opportunities of reversal for some, consumerism as a lifestyle for many, but also disconnection and digital exclusion for the ever marginalized.

These technological challenges to genuine popular rule also permit novel conceptions of hi-tech democracies to emerge, mixing technological developments with values of sustainability, economic justice, and political empowerment (Peters and Besley 2019). Suddenly possible is the development of “intelligent” political platforms that combine progressive values with the latest advances in human and non-human intelligence. To this end, transhumanism can politically be seen as the “transformation of collective intelligences”. Specifically, it is one that is already well underway, as

The current explosion of the internet leads to a novel dialogue between humans and non-humans. Data are produced and they travel through applications, sensors, and databanks. The applications and sensors are in a way the cross stitches of various complexities that proliferate and constitute the texture of a worldwide N-dimensional relational system. A spectre is haunting the new modalities of work, existence, and governance—the spectre of traceability and associated data. At all scales, from intranet to globalization processes there are transformations happening, shift from the nation states to the market states (in the framework of the attempt to establish forcefully and rapidly a global market), shifts from traditional forms of sovereignty to new digital traces and avenues, or to decentralised pattern territories…. (Noyer 2016)

Importantly, intelligence here is not singular but plural, recognising and incorporating the diverse ways that different humans and machines process information and collaborate to improve their existing social environment (Mauthner 2018). Ironically, these “intelligences” are able to come to light and intermingle, often precisely due to the use of big data for the purpose of commodifying and profiting from existing complex human and non-human relationships (Henry and Prince 2018). These can lead to a more radical study of data, one that exposes the exploitation of humans and “things”, akin to Engels’s use of statistics in the nineteenth century to reveal the oppressive conditions of the working classes (Parham 2019). These methods further allow for AI to speak with “uncommon voices” that are socially responsive and transcend both narrow instrumental reason and limiting ideologies of the market (Gill 2017). Emerging, in turn, are alternative political visions based on transhuman relations and values. At their heart beat efforts to critically understand and portray the ability of transhumanism to enhance human freedom and democracy (Mazarakis 2016). Practically, these transhuman political philosophies help inform ideas of empowering “posthuman governance” in which

Democratically algorithmic governance, enabled by artificial intelligence and human enhancement, can automate bottom up citizen surveillance, inform debate, aggregate decision-making, and ensure the efficient working of a gradually withering state. As paid work disappears and we transition to a postcapitalist economy with a universal basic income, market mechanisms can be replaced with democratic planning. Indeed only algorithmic governance can secure our future against accelerating threats from technological innovation. (Hughes 2018: 166)

In this respect, the growth of “smart cities” is not predetermined but the subject of ongoing democratic deliberation and political conflict over the social construction of what it means to be “intelligent” and in whose interest (Molpeceres 2017). This requires, consequently, an ontological shift in the view of human and machine relations, whereby “information and communications technologies are seen as an ethical environment, and human-computer couplings are seen as hybrid moral agents” (Buzato 2017: 74). It, furthermore, proposes new forms of “posthuman” agency that can have a significant effect on geopolitics and international relations. This posthuman politics poses substantial risks, though, particularly in the ability of different human-machine configurations to manipulate social realities so as to reinforce a transhuman status quo. This is already being seen in the exploitation of algorithms to shape perspectives via social media. Indeed


algorithmic selection has become a growing source of social order, of a shared social reality in information societies…similar to the construction of realities by traditional mass media—automated algorithmic selection applications shape daily lives and realities, affect the perception of the world, and influence behavior. However, the co-evolutionary perspective on algorithms as institutions, ideologies, intermediaries, and actors highlights differences that are to be found, first, in the growing personalization of constructed realities and, second, in the constellation of involved actors. Altogether, compared to reality construction by traditional mass media, algorithmic reality construction tends to increase individualization, commercialization, inequalities, and deterritorialization and to decrease transparency, controllability, and predictability. (Just and Latzer 2017: 238)

This exposes in dramatic fashion the role of non-humans in furthering existing forms of “human” power (Wolfe 2018). Moreover, it seems to support desires to resist “militant posthumanism” with more radical and self-critical forms of humanism. However, this fear of manipulative realities belies not only the existing possibilities of immersive virtual technologies for allowing us to radically reimagine our realities, but also the democratic capacity of humans to co-evolve with non-humans, thus creating new integrative forms of social existence (Sisler 2015). Transhumanism is at its core a political battle for the soul of an emerging world where human and non-human intelligence will not only co-exist but continually intermingle and impact one another (Ylönen 2016). In this respect, it goes far beyond merely how to best politically mitigate the risks and maximize the positives of technology for humans. It extends to the way humans conceive and think about non-humans generally (Lindgren and Ohman 2018). Crucial, in this regard, is the capacity to rethink political philosophy and practice as the ability for humans and non-humans to continually “perfect” and complement one another in their striving to create a better integrative world (Jasanoff 2016).

Mutual Intelligent Design

The principal, and in most cases primary, purpose of contemporary politics is the challenging and winning of power. While politicians and activists will speak of how things could and should be different, their fundamental focus is on how this could be achieved if those currently in charge were replaced, usually by themselves. Underlying modern politics across the ideological spectrum, thus, is a dominant structure of and investment in a power/resistance dynamic (Bloom 2016). Within democracies, this can take on a special resonance as politics is formalised around the goal of temporarily occupying “the empty seat of power”. Transhumanism and new technological developments such as AI and robots have certainly shown themselves capable of contributing to this dominant and unfortunately dominating model of politics. However, they also contain the possibility of a politics based not on sovereign dreams of ruling or resisting, but on social experimentation and design (Asaro 2000). So much of the current debate over technology, even at its supposedly most radical, concerns what constitutes proper “technological design”. Even before the start of the new millennium, scholars argued for politically infusing technological design with social theory (Berg 1998). These ideas foreshadow the potential to redesign politics based on collaboration and creativity between human and non-human intelligences rather than competition and conquest. To this end,

We should study further the idea of a humanised posthumanism, building on an ontology of possibility that acknowledges our assembled entanglement with the non-human world but also accords an important role for humans in acknowledging these interdependencies. As a move beyond monist posthumanism, instead of portraying mastery over passive nature, this position builds on the idea of political responsibility for the vulnerabilities, injustices, and hazards that our assembled life of dual being in and with the nature entails. It also acknowledges that all ontological claims and arguments remain meaningless without the audiences to which they are directed—audiences concerned with how to lead a civic life in a more-than-posthuman world. (Häkli 2018: 173)

Interestingly, the concept of “intelligent design” is perhaps most popularly associated with divine creation. It has quite conservative connotations, implying that humans, species, and the world itself have not evolved over time but were rather created wholesale by a god-like designer. Paradoxically, this deification (often quite literally) of the designer confirms—even in ostensible opposition—both the dominance of the technological age and the continuing formative importance of traditional religious ideas for this supposedly modern perspective. It promotes the idealised figure of the creator, whether human or god, who stands all-important and powerful. Transhumanism offers a different religious founding myth for a more progressive and collaborative politics (Sandberg 2014). Concretely, this reframes design as an integrative and cooperative activity (Gordon and Mihailidis 2016). This vision, in turn, humanises—makes messier, more dynamic, always evolving—perspectives on social change and governance (Waters 2016). It offers, in this regard, a radically different “posthuman theology” based on our continual deconstruction and reconstruction as individuals and a species (Olmstead 2018). These provide the theoretical foundations for concretely realizing a genuinely shared political culture and society. It is one where the diversity of intelligence is viewed as a strength and the very basis on which to discover innovative solutions to common problems (Hernández-Orallo 2016). It also entails making big data and the algorithms it depends upon a process of democratic discussion and community-led design. Significantly,

Inherent in thinking post-human ethics is the status of the bodies as sites of lives inextricable from philosophy, thought, experiments in being and fantasies of the future. Posthuman Ethics, examines certain kinds of bodies to think new relations that offer liberty and a contemplation of the practices of power which have been exerted upon bodies….the body, reconfiguring relation and ethical emergences of bodies beyond being received through representation, external and within consciousness negotiating reality through representative perception is the foundation and the site of the event of the posthuman encounter. (MacCormack 2016: 1)

Consequently, transhumanism evolves from a fearful foregone conclusion into a still-to-be-decided “emancipatory project” (Cudworth and Hobden 2017). It is an “emancipatory project” that transcends the intellectual and encompasses the entirety of our material and social existence, as democracy becomes multisensory and transhuman, encompassing the very “machinic sounds” of our existence (Tianen 2018). Additionally, it extends beyond the normative or the sensual. It also influences and involves a new “political ethics of care” (Bozalek 2016). Just as importantly, it helps to psychologically and politically reorient our experience of pleasure and desire (Alaimo 2016). In this respect, conventional ideas of salvation and transcendence have gone from the heavenly to the earthly and potentially even the galactic. The aim is quickly evolving past historic desires to find otherworldly bliss with a divine force. It now also progressively includes the revolutionary possibility to evolve beyond the limits of human politics and existence (Sirius and Cornell 2015). It is a revitalized sense of human optimism in the promise to be saved from the social injustice and material oppressions that have marked human life in the past and present (Hayles 2003). Ideologically, this requires thinking seriously about the “challenge of an emancipatory posthumanism”, to ensure that it is not merely an updated version of an old political programme and reality (Cudworth and Hobden 2015). Yet it also permits new creative expressions of radical desires for change and freedom, for example through online poetry blogs like those in Taiwan:

That most of Taiwanese society necessarily lives once-removed from these elderly poets’ lived experiences suggests that their poetry and blogs form a post-human digital repository of their affective sensibilities and struggles as well as existing in a live network that may provide support and hope to younger activists. Moreover, their blogs might encourage others to emulate such work that either serves as a record of struggles as well as a nexus of political antagonisms present amid contemporary historical tensions. (Brink 2016: 141)

Further, it promises the flourishing of democracy to meet the challenges and opportunities of a new transhuman age. It is a democracy that encompasses, as Miller and Miller (2016) describe, “boundless items” that would often be forgotten or rendered socially invisible:

The linchpin holding together the political histories of spreading, creeping things, systems, assemblages, and accumulations that appear over the following pages is the assertion that the growth of these things is as thoughtful as it is reproductive, and as political as it is thoughtful. Each of the case studies in the boundless stuff that forms the subject of this book—dissociated embryonic material, cloned human cells, toxic and polluting trash, and proliferating data—is a case study in reproductive or replicating activity that is also political activity, and a variation on political thought. The growing, reproducing, and flourishing systems that populate the following chapters are also thinking, processing, and political systems. Like slime and data, these systems are—whether organic, inorganic, technological, environmental, or informational—systems whose growth or reproduction is thought in exactly the same way that any growing computational system is thinking. But, once more—and also like the slime mold’s engulfing of the earth and the replication of Boundless Informant across nation-states—the thought of these accumulations and assemblages is infinitely more beneficial to democracy than it is an attack on democracy. (2)

Ultimately, perhaps the greatest hope for the politics of the twenty-first century is the transformation of posthuman emancipation into transhuman liberation. More precisely, it is the escape, the freedom, from human-based oppression and tyranny for an integrative human and non-human society that creatively collaborates and cooperates in the making and remaking of its shared social realities (Newfield 2018). While this may seem like a far-off possibility, this ethos can help produce a more just contemporary transhuman society (Prasad 2016). It reframes questions from a traditional idea of “which lives matter” to one of “how can we ensure that our lives do matter?” (Shaw 2017). It moreover presents novel types of political agency, ones which can be exhibited from the local to the global levels (Schandorf and Karatzogianni 2017). This integrative political ethics can inform a “posthuman developmental psychology” for reorienting personal growth and community development (Burman 2018). It is in the realisation that our contemporary human present can be improved that the optimism for a transhuman future emerges and flourishes. The need to reject our “anthropocentrism”, as well as historically shortsighted and exploitative ideas about human nature, turns from an ethical prerogative into a matter of political and social necessity. It also makes this “disruption” to the human status quo suddenly something to be desired rather than feared (Kroker 2014). In reimagining our human-centred politics we can begin to reimagine ourselves and society. Power, in turn, is radically transformed from an exercise in human rule into a shared human and non-human project of continual mutual intelligent design.

References

Adam, A., & Green, E. (Eds.). (2005). Virtual Gender: Technology, Consumption and Identity Matters. Routledge. Agostinho, D. (2019). The Optical Unconscious of Big Data: Datafication of Vision and Care for Unknown Futures. Big Data & Society, 6(1). https://doi.org/10.1177/2053951719826859. Alaimo, S. (2016). Exposed: Environmental Politics and Pleasures in Posthuman Times. University of Minnesota Press. Al-Amoudi, I., & Morgan, J. (2018). Introduction: Post-Humanism in Morphogenic Societies. In Realist Responses to Post-Human Society: Ex Machina (pp. 11–19). Routledge.


Alfsvåg, K. (2015). Transhumanism, Truth and Equality: Does the Transhumanist Vision Make Sense? Anderson, L., & Bishop, P. (2005). E-Government to e-Democracy: Communicative Mechanisms of Governance. Journal of E-Government, 2(1), 5–26. Aradau, C., & Blanke, T. (2017). Politics of Prediction: Security and the Time/ Space of Governmentality in the Age of Big Data. European Journal of Social Theory, 20(3), 373–391. Armitage, J. (1999). Resisting the Neoliberal Discourse of Technology The Politics of Cyberculture in the Age of the Virtual Class. CTheory, 3–1. Asaro, P.  M. (2000). Transforming Society by Transforming Technology: The Science and Politics of Participatory Design. Accounting, Management and Information Technologies, 10(4), 257–290. Ballantyne, A. (2018). Where Is the Human in the Data? A Guide to Ethical Data Use. GigaScience, 7(7), giy076. Baum, S.  D. (2017). Social Choice Ethics in Artificial Intelligence. AI & Society, 1–12. Bayer, B.  M. (1999). Psychological Ethics and Cyborg Body Politics. In Cyberpsychology (pp. 113–129). London: Palgrave. Beirne, M., & Ramsay, H. (2018). Information Technology and Workplace Democracy. Routledge. Berents, H., & Keogh, B. (2018). Virtuous, Virtual, But Not Visceral:(dis) Embodied Viewing in Military-Themed Videogames. Critical Studies on Security, 6(3), 366–369. Berg, M. (1998). The Politics of Technology: On Bringing Social Theory into Technological Design. Science, Technology, & Human Values, 23(4), 456–490. Bijker, W. (2017). Constructing Worlds: Reflections on Science, Technology and Democracy (and a Plea for Bold Modesty). Engaging Science, Technology, and Society, 3, 315–331. Bloom, P. (2016). Beyond Power and Resistance: Politics at the Radical Limits. London: Rowman & Littlefield. Bottici, C. (2014). Imaginal Politics: Images Beyond Imagination and the Imaginary. Columbia University Press. Boyd, R., & Holton, R.  J. (2018). 
Technology, Innovation, Employment and Power: Does Robotics and Artificial Intelligence Really Mean Social Transformation? Journal of Sociology, 54(3), 331–345. Boyte, H. C. (2017). John Dewey and Citizen Politics: How Democracy Can Survive Artificial Intelligence and the Credo of Efficiency. Education and Culture, 33(2), 13–47. Retrieved from https://muse.jhu.edu/article/680656/summary. Bozalek, V. (2016). The Political Ethics of Care and Feminist Posthuman Ethics: Contributions to Social Work (pp. 80–96). London: Palgrave Macmillan. Bradley, J. P. (2018). Cerebra: “All-Human”, “All-Too-Human”, “All-Too-Transhuman”. Studies in Philosophy and Education, 37(4), 401–415.


Brink, D. A. (2016). Poetry Blogs and the Posthuman in Postcolonial Taiwan. Tamkang Review, 46(2), 135–159. Burman, E. (2018). Towards a Posthuman Developmental Psychology of Child, Families and Communities. In International Handbook of Early Childhood Education (pp. 1599–1620). Dordrecht: Springer. Buzato, M. E. K. (2017). Towards a Theoretical Mashup for Studying Posthuman/Postsocial Ethics. Journal of Information, Communication and Ethics in Society, 15(01), 74–89. Carolan, M. (2019). Automated Agrifood Futures: Robotics, Labor and the Distributive Politics of Digital Agriculture. The Journal of Peasant Studies, 1–24. Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach. Science and Engineering Ethics, 1–24. Chandler, D. (2015). A World Without Causation: Big Data and the Coming of Age of Posthumanism. Millennium, 43(3), 833–851. Chandler, D. (2016). How the World Learned to Stop Worrying and Love Failure: Big Data, Resilience and Emergent Causality. Millennium, 44(3), 391–410. Cheney-Lippold, J. (2018). We Are Data: Algorithms and the Making of Our Digital Selves. NYU Press. Cherney, J. L. (1999). Deaf Culture and the Cochlear Implant Debate: Cyborg Politics and the Identity of People with Disabilities. Argumentation and Advocacy, 36(1), 22–34. Chitty, N., & Dias, S. (2018). Artificial Intelligence, Soft Power and Social Transformation. Journal of Content, Community and Communication, 7, 1–14. Colaresi, M., & Mahmood, Z. (2017). Do the Robot: Lessons from Machine Learning to Improve Conflict Forecasting. Journal of Peace Research, 54(2), 193–214. Coleman, S. (2012). Making the e-Citizen: A Socio-Technical Approach to Democracy. Connecting Democracy – Online Consultation and the Flow of Communication, 379–395. Cooke, B., & Kothari, U. (Eds.). (2001). Participation: The New Tyranny? London: Zed Books. Couldry, N., & Mejias, U. A. (2019).
Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject. Television & New Media, 20(4), 336–349. Cudworth, E., & Hobden, S. (2015). Liberation for Straw Dogs? Old Materialism, New Materialism, and the Challenge of an Emancipatory Posthumanism. Globalizations, 12(1), 134–148. Cudworth, E., Hobden, S., & Kavalski, E. (Eds.). (2017). Posthuman Dialogues in International Relations. Routledge. Cuthbertson, A. (2018). Robots Will Have Civil Rights by 2045, Claims Creator of ‘I Will Destroy Humans’ Android. Independent, 24 May. Dale, J., & Kyle, D. (2015). Smart Transitions?: Foreign Investment, Disruptive Technology, and Democratic Reform in Myanmar. Social Research: An International Quarterly, 82(2), 291–326.


De Blasio, E., & Sorice, M. (2018). Populisms Among Technology, e-Democracy and the Depoliticisation Process. Diamond, L. (2015). Liberation Technology, vol. 1. In In Search of Democracy (pp. 132–146). Routledge. Donati, P. (2018). Transcending the Human: Why, Where, and How? In I. Al-Amoudi & J. Morgan (Eds.), Realist Responses to Post-Human Society: Ex Machina (pp. 63–91). London: Routledge. Dunbar, M. (2017). To Be a Machine: Adventures Among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death. The Humanist, 77(3), 42. Retrieved from http://search.proquest.com/openview/1376cd5c9e00b7ff934b9d6f65dbbe52/1?pq-origsite=gscholar&cbl=35529. Edelmann, N., Krimmer, R., & Parycek, P. (2008). Engaging Youth Through Deliberative e-Participation: A Case Study. International Journal of Electronic Governance, 1(4), 385–399. Entman, R. M., & Usher, N. (2018). Framing in a Fractured Democracy: Impacts of Digital Technology on Ideology, Power and Cascading Network Activation. Journal of Communication, 68(2), 298–308. Farthing, R. (2015). Democracy Bytes: New Media, New Politics and Generational Change. Fleming, S. (2002). Information and Communication Technologies (ICTs) and Democracy Development in the South: Potential and Current Reality. The Electronic Journal of Information Systems in Developing Countries, 10(1), 1–10. Franklin, S. (2006). The Cyborg Embryo: Our Path to Transbiology. Theory, Culture & Society, 23(7–8), 167–187. Freedman, S. A., Dayan, E., Kimelman, Y. B., Weissman, H., & Eitan, R. (2015). Early Intervention for Preventing Posttraumatic Stress Disorder: An Internet-Based Virtual Reality Treatment. European Journal of Psychotraumatology, 6(1), 25608. Frieze, J. (2016). Reframing Immersive Theatre: The Politics and Pragmatics of Participatory Performance. In Reframing Immersive Theatre (pp. 1–25). London: Palgrave Macmillan. Frissen, P. (1997). The Virtual State. The Governance of Cyberspace, 111–125. Gill, K. S. (2017). Uncommon Voices of AI. Gordon, E., & Mihailidis, P. (Eds.). (2016). Civic Media: Technology, Design, Practice. MIT Press. Green, N. (1999). Disrupting the Field: Virtual Reality Technologies and “multi-sited” Ethnographic Methods. American Behavioral Scientist, 43(3), 409–421. Häkli, J. (2018). The Subject of Citizenship–Can There Be a Posthuman Civil Society? Political Geography, 67, 166–175. Hansen, K. B. (2015). The Politics of Algorithmic Finance. Contexto Internacional, 37(3), 1081–1095.


Haraway, D. (2006). A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late 20th Century. In The International Handbook of Virtual Learning Environments (pp. 117–158). Dordrecht: Springer. Hassapopoulou, M. (2018). Playing with History: Collective Memory, National Trauma, and Dark Tourism in Virtual Reality Docugames. New Review of Film and Television Studies, 16(4), 365–392. Hasselbalch, J. A. (2018). Innovation Assessment: Governing Through Periods of Disruptive Technological Change. Journal of European Public Policy, 25(12), 1855–1873. Hayles, N. K. (2003). Afterword: The Human in the Posthuman. Cultural Critique, 53(1), 134–137. Healey, K., & Stephens, N. (2017). Augmenting Justice: Google Glass, Body Cameras, and the Politics of Wearable Technology. Journal of Information, Communication and Ethics in Society, 15(4), 370–384. Helbing, D., & Pournaras, E. (2015). Society: Build Digital Democracy. Nature News, 527(7576), 33. Helbing, D., Frey, B. S., Gigerenzer, G., Hafen, E., Hagner, M., Hofstetter, Y., … & Zwitter, A. (2017). Will Democracy Survive Big Data and Artificial Intelligence. Scientific American, 25. Henry, M., & Prince, R. (2018). Agriculturalizing Finance? Data Assemblages and Derivatives Markets in Small-Town New Zealand. Environment and Planning A: Economy and Space, 50(5), 989–1007. Hernández-Orallo, J. (2016). The Measure of All Minds: Evaluating Natural and Artificial Intelligence. Cambridge University Press. Holmes, D. (Ed.). (2002). Virtual Globalization: Virtual Spaces/Tourist Spaces. Routledge. Hughes, J. J. (2015a). Posthumans and Democracy in Popular Culture. In The Palgrave Handbook of Posthumanism in Film and Television (pp. 235–245). London: Palgrave Macmillan. Hughes, J. J. (2015b). Posthumans and Democracy in Popular Culture. In The Palgrave Handbook of Posthumanism in Film and Television (pp. 235–245). London: Palgrave Macmillan. Hughes, J. (2018). Algorithms and Posthuman Governance.
Journal of Posthuman Studies, 1(2), 166–184. Irom, B. (2018). Virtual Reality and the Syrian Refugee Camps: Humanitarian Communication and the Politics of Empathy. International Journal of Communication, 12, 23. Jack, A. I., Dawson, A. J., & Norr, M. E. (2013). Seeing Human: Distinct and Overlapping Neural Signatures Associated with Two Forms of Dehumanization. Neuroimage, 79, 313–328. Jacobsen, K. L. (2015). The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity. Routledge.


Jasanoff, S. (2016). Perfecting the Human: Posthuman Imaginaries and Technologies of Reason. In Perfecting Human Futures (pp. 73–95). Wiesbaden: Springer VS.
Just, N., & Latzer, M. (2017). Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet. Media, Culture & Society, 39(2), 238–258.
Käll, J. (2017). A Posthuman Data Subject? The Right to be Forgotten and Beyond. German Law Journal, 18(5), 1145–1162.
Kaufmann, M., Egbert, S., & Leese, M. (2018). Predictive Policing and the Politics of Patterns. The British Journal of Criminology.
Khashman, Z., & Khashman, A. (2016). Anticipation of Political Party Voting Using Artificial Intelligence. Procedia Computer Science, 102, 611–616.
Klischewski, R. (2014). When Virtual Reality Meets Realpolitik: Social Media Shaping the Arab Government–Citizen Relationship. Government Information Quarterly, 31(3), 358–364.
Koch, A. (2005). Cyber Citizen or Cyborg Citizen: Baudrillard, Political Agency, and the Commons in Virtual Politics. Journal of Mass Media Ethics, 20(2–3), 159–175.
Koro-Ljungberg, M., Cirell, A. M., Gong, B. G., & Tesar, M. (2017). The Importance of Small Form: ‘Minor’ Data and ‘BIG’ Neoliberalism. In Qualitative Inquiry in Neoliberal Times (pp. 67–80). Routledge.
Kostakopoulou, D. (2018). Cloud Agoras: When Blockchain Technology Meets Arendt’s Virtual Public Spaces. In Debating Transformations of National Citizenship (pp. 337–341). Cham: Springer.
Kreiss, D. (2016). Prototype Politics: Technology-Intensive Campaigning and the Data of Democracy. Oxford University Press.
Kroker, A. (2014). Exits to the Posthuman Future. John Wiley & Sons.
Kroker, A., & Weinstein, M. A. (1994). Data Trash: The Theory of the Virtual Class. New World Perspectives.
Lanier, J. (2017). Dawn of the New Everything: A Journey Through Virtual Reality. Random House.
Lau, K. W., & Lee, P. Y. (2015). The Use of Virtual Reality for Creating Unusual Environmental Stimulation to Motivate Students to Explore Creative Ideas. Interactive Learning Environments, 23(1), 3–18.
Leighninger, M. (2016). Transforming Governance: How Can Technology Help Reshape Democracy?
Lepri, B., Staiano, J., Sangokoya, D., Letouzé, E., & Oliver, N. (2017). The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good. In Transparent Data Mining for Big and Small Data (pp. 3–24). Cham: Springer.
Levina, M. (2017). Disrupt or Die: Mobile Health and Disruptive Innovation as Body Politics. Television & New Media, 18(6), 548–564.

6  REPROGRAMMING POLITICS: MUTUAL INTELLIGENT DESIGN 

207

Levy, F. (2018). Computers and Populism: Artificial Intelligence, Jobs, and Politics in the Near Term. Oxford Review of Economic Policy, 34(3), 393–417.
Losh, E., Coleman, B., & Amel, V. U. (2013). Will the Revolution Be Tweeted? Mapping Complex Data Patterns from Sites of Protest. AoIR Selected Papers of Internet Research, 3.
Lustig, C., Pine, K., Nardi, B., Irani, L., Lee, M. K., Nafus, D., & Sandvig, C. (2016, May). Algorithmic Authority: The Ethics, Politics, and Economics of Algorithms That Interpret, Decide, and Manage. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 1057–1062). ACM. Retrieved from https://dl.acm.org/citation.cfm?id=2886426.
Lyon, D. (2014). Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique. Big Data & Society, 1(2). https://doi.org/10.1177/2053951714541861.
MacCormack, P. (2016). Posthuman Ethics: Embodiment and Cultural Theory. Routledge.
Marres, N. (2016). Material Participation: Technology, the Environment and Everyday Publics. Springer.
Mauthner, N. S. (2018). Toward a Posthumanist Ethics of Qualitative Research in a Big Data Era. American Behavioral Scientist.
Mazarakis, J. (2016). The Grand Narratives of Democratic and Libertarian Transhumanism: A Lyotardian Approach to Transhumanist Politics. Confero: Essays on Education, Philosophy and Politics, 4(2), 11–31. Retrieved from http://www.confero.ep.liu.se/issues/2016/v4/i2/a02/confero16v4i2a02.pdf.
McNutt, K. (2007). Will e-Governance and e-Democracy Lead to e-Empowerment? Gendering the Cyber State. Federal Governance, 4(1), 1–28.
Medina, E. (2011). Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile. MIT Press.
Millar, J., & Bennett, F. (2017). Universal Credit: Assumptions, Contradictions and Virtual Reality. Social Policy and Society, 16(2), 169–182.
Miller, R. A. (2016). Flourishing Thought: Democracy in an Age of Data Hoards. University of Michigan Press.
Molpeceres, S. (2017). Posthumanism and the City: Developing New Identities in Social Conflicts. Constructivist Perspectives, 71, 203.
Moore, D. T. (1981). Will Robots Save Democracy? The Journal of Epsilon Pi Tau, 7(2), 2–7.
Moran, M., & Parry, G. (2015). Democracy and Democratization. Routledge.
Müller, F., & Segura, R. (2016). Digitalizing Urban Latin America: A New Layer for Persistent Inequalities? Critical Reviews on Latin American Research, 5(2), 3–5.


Nafus, D., & Sherman, J. (2014). Big Data, Big Questions| This One Does Not Go Up To 11: The Quantified Self Movement as an Alternative Big Data Practice. International Journal of Communication, 8, 11.
Nash, K. (2018). Virtual Reality Witness: Exploring the Ethics of Mediated Presence. Studies in Documentary Film, 12(2), 119–131.
Nayar, P. K. (2004). Virtual Worlds: Culture and Politics in the Age of Cybertechnology. SAGE Publications India.
Neff, G., & Nagy, P. (2016). Automation, Algorithms, and Politics| Talking to Bots: Symbiotic Agency and the Case of Tay. International Journal of Communication, 10, 17.
Nemitz, P. (2018). Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.
Newfield, D. (2018). Thebuwa and a Pedagogy of Social Justice: Diffracting Multimodality Through Posthumanism. Socially Just Pedagogies: Posthumanist, Feminist and Materialist Perspectives in Higher Education, 209.
Noyer, J. M. (2016). Transformation of Collective Intelligences: Perspective of Transhumanism. John Wiley & Sons. Retrieved from https://books.google.co.uk/books?hl=en&lr=&id=S8NDDQAAQBAJ&oi=fnd&pg=PP2&dq=transhumanism+and+empowerment&ots=xi1W-3c5ik&sig=Ktzo7FQ6oF7-fQy17K23CWLr5HM.
Olbrich, P., & Witjes, N. (2016). Sociotechnical Imaginaries of Big Data: Commercial Satellite Imagery and Its Promise of Speed and Transparency. In Big Data Challenges (pp. 115–126). London: Palgrave.
Olmstead, N. A. (2018). By the Blood: Derrida, Hauerwas, and the Potential for Posthuman Theology. Political Theology, 19(5), 363–381.
Olshannikova, E., Ometov, A., Koucheryavy, Y., & Olsson, T. (2015). Visualizing Big Data with Augmented and Virtual Reality: Challenges and Research Agenda. Journal of Big Data, 2(1), 22.
Parham, J. (2019). Biggish Data: Friedrich Engels, Material Ecology, and Victorian Data. European Journal of Cultural and Political Sociology, 9, 1–22.
Peters, M. A., & Besley, T. (2019). Citizen Science and Ecological Democracy in the Global Science Regime: The Need for Openness and Participation.
Phillips, A., Smith, G., Cook, M., & Short, T. (2016). Feminism and Procedural Content Generation: Toward a Collaborative Politics of Computational Creativity. Digital Creativity, 27(1), 82–97.
Pilsch, A. (2017). Transhumanism: Evolutionary Futurism and the Human Technologies of Utopia. University of Minnesota Press.
Portman, M. E., Natapov, A., & Fisher-Gewirtzman, D. (2015). To Go Where No Man Has Gone Before: Virtual Reality in Architecture, Landscape Architecture and Environmental Planning. Computers, Environment and Urban Systems, 54, 376–384.


Power, N. (2017). Digital Democracy? Socialist Register, 54(54).
Prasad, P. (2016). Beyond Rights as Recognition: Black Twitter and Posthuman Coalitional Possibilities. Prose Studies, 38(1), 50–73.
Raikov, A. (2018). Accelerating Technology for Self-Organising Networked Democracy. Futures, 103, 17–26.
Ranieri, M. (Ed.). (2016). Populism, Media and Education: Challenging Discrimination in Contemporary Digital Societies. Routledge.
Robinson, C. K. (2016). Virtual Reality: A Walk in the Prisoner’s Shoes. Guardian (Sydney), 1751, 11.
Rockhill, G. (2017). Counter-History of the Present: Untimely Interrogations into Globalization, Technology, Democracy. Duke University Press.
Rose, M. (2018). Technologies of Seeing and Technologies of Corporeality: Currents in Nonfiction Virtual Reality. World Records, 1(1), 01–11.
Ruijten, P. A., Midden, C. J., & Ham, J. (2015). Lonely and Susceptible: The Influence of Social Exclusion and Gender on Persuasion by an Artificial Agent. International Journal of Human-Computer Interaction, 31(11), 832–842.
Ryan, M. L. (2015). Narrative as Virtual Reality 2: Revisiting Immersion and Interactivity in Literature and Electronic Media (Vol. 2). JHU Press.
Rzevski, G. (2019). Coevolution of Technology, Business and Society. Management and Applications of Complex Systems, 59.
Sabel, C., & Zeitlin, J. (1985). Historical Alternatives to Mass Production: Politics, Markets and Technology in Nineteenth-Century Industrialization. Past & Present, 108, 133–176.
Sandberg, A. (2014). Transhumanism and the Meaning of Life. Religion and Transhumanism: The Unknown Future of Human Enhancement, 3–22.
Schandorf, M., & Karatzogianni, A. (2017). Agency in Posthuman IR: Solving the Problem of Technosocially Mediated Agency. In Posthuman Dialogues in International Relations (pp. 89–108). Routledge.
Schwarz, E., McKeil, A., Dean, M., Duffield, M., & Chandler, D. (2019). Datafying the Globe: Critical Insights into the Global Politics of Big Data Governance. Big Data & Society.
Shaw, D. B. (2017). Posthuman Urbanism: Mapping Bodies in Contemporary City Space.
Sheehey, B. (2019). Algorithmic Paranoia: The Temporal Governmentality of Predictive Policing. Ethics and Information Technology, 21(1), 49–58.
Shields, R. M., & Shields, R. (Eds.). (1996). Cultures of the Internet: Virtual Spaces, Real Histories, Living Bodies. Sage.
Sirius, R. U., & Cornell, J. (2015). Transcendence: The Disinformation Encyclopedia of Transhumanism and the Singularity. Red Wheel Weiser.
Sisler, A. (2015). ‘Co-Emergence’ in Ecological Continuum: Educating Democratic Capacities Through Posthumanism as Praxis. Ethics in Progress, 6(1), 119–139.


Steinicke, F. (2016). Being Really Virtual: Immersive Natives and the Future of Virtual Reality. Springer.
Stiegler, C. (2017). The Politics of Immersive Storytelling: Virtual Reality and the Logics of Digital Ecosystems. International Journal of E-Politics (IJEP), 8(3), 1–15.
Street, J. (1997). Remote Control? Politics, Technology and Electronic Democracy. European Journal of Communication, 12(1), 27–42.
Swyngedouw, E. (2006). Circulations and Metabolisms: (Hybrid) Natures and (Cyborg) Cities. Science as Culture, 15(2), 105–121.
Taylor, T. B. (2018). Judgment Day: Big Data as the Big Decider. Wake Forest University.
Tenney, M., & Sieber, R. (2016). Data-Driven Participation: Algorithms, Cities, Citizens, and Corporate Control. Urban Planning, 1(2), 101–113.
Thomas, R. J. (1994). What Machines Can’t Do: Politics and Technology in the Industrial Enterprise. University of California Press.
Tusinski Berg, K. (2018). Big Data, Equality, Privacy, and Digital Ethics. Journal of Media Ethics, 33(1), 44–46.
Walter, M. (2016). Data Politics and Indigenous Representation in Australian Statistics. Indigenous Data Sovereignty: Toward an Agenda, 38, 79–98.
Waters, B. (2016). Christian Moral Theology in the Emerging Technoculture: From Posthuman Back to Human. Routledge.
Weber, J. (2016). Keep Adding. On Kill Lists, Drone Warfare and the Politics of Databases. Environment and Planning D: Society and Space, 34(1), 107–125.
Wegerif, R. (2017). Introduction. Education, Technology and Democracy: Can Internet-Mediated Education Prepare the Ground for a Future Global Democracy? Civitas educationis. Education, Politics, and Culture, 6(1), 17–35.
Wellman, B., & Gulia, M. (1999). Net-Surfers Don’t Ride Alone: Virtual Communities as Communities. In Networks in the Global Village (pp. 331–366). Routledge.
Wilson, M. W. (2009). Cyborg Geographies: Towards Hybrid Epistemologies. Gender, Place and Culture, 16(5), 499–516.
Wolfe, C. (2018). Posthumanism Thinks the Political: A Genealogy for Foucault’s The Birth of Biopolitics. Journal of Posthuman Studies, 1(2), 117–135.
Wright, J. (2019). Robots vs Migrants? Reconfiguring the Future of Japanese Institutional Eldercare. Critical Asian Studies, 51(3), 1–24.
Wu, I. S. (2015). Forging Trust Communities: How Technology Changes Politics. JHU Press.
Ylönen, M. (2016). Neoliberalism and Technoscience: Critical Assessments. New York and London: Routledge.
Zaera-Polo, A. (2017). The Posthuman City: Imminent Urban Commons. Architectural Design, 87(1), 26–35.
Zhu, Y. B., & Li, J. S. (2017). Collective Behavior Simulation Based on Agent with Artificial Emotion. Cluster Computing, 1–9.

CHAPTER 7

Legal Reboot: From Human Control to Transhuman Possibilities

Imagine a murder took place. It is a tragically familiar scene: a hitchhiker is picked up by a driver passing by. However, they never arrive at their destination, this sadly being the last ride they will ever take. What made this murder different was the fact that “hitchbot” was a robot. The adventure started well, with Hitchbot being picked up by an elderly couple and taken on a camping trip in Halifax, Nova Scotia, followed by a sightseeing tour with a group of young men. Next, it was a guest of honour at a First Nation powwow, where it was given a name that translates to “Iron Woman”, assigning it a gender. The robot picked up thousands of fans along the way, many travelling miles to be the next person to give it a lift. But Hitchbot’s adventure was about to come to an abrupt end. (Wakefield 2019: n.p.)

The team found it dismembered and lying at the side of the road. The case raised a serious question, one that will only grow in importance in the coming years—“Can you murder a robot?” Now, imagine going to court. You need advice and hope that you have a good judge. As the judge walks in you see it is not human but a robot. If this sounds far-fetched, consider that robot judges are already being used in China: Xiaofa stands in Beijing No 1 Intermediate People’s Court, offering legal advice and helping the public get to grips with legal terminology. She knows the answer to more than 40,000 litigation questions and can deal with

© The Author(s) 2020 P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5_7



30,000 legal issues. Xiaofa is a robot. China already has more than 100 robots in courts across the country as it actively pursues a transition to smart justice. These can retrieve case histories and past verdicts, reducing the workload of officials. Some of the robots even have specialisms, such as commercial law or labour-related disputes.

The seventh and penultimate chapter examines the profound legal ramifications of transhuman relations. Current law—at least in theory—is focused on interpreting and protecting the rights of citizens. These efforts are based predominantly on a range of universal human rights such as those associated with free speech, habeas corpus and private property. The law also encompasses rules that reflect cultural differences. More critically, these laws retain a focus on justifying authority and policing wrongdoing. How will this legal apparatus extend to non-humans? Even more fundamentally, how can new technologies reorient the purpose of law in the future from maintaining control to fostering possibility? This chapter explores the emerging possibilities of transhuman law. It begins by spelling out the evolution of human rights into “transhuman” rights, including the right of all forms of consciousness not to be exploited, not to be mentally manipulated (e.g. turned into a bot), and not to be excluded from social networks for reasons of human or non-human prejudice. It will then expand upon these basic rights to investigate the crafting of laws to protect the right of people to expand their mental and physical capacities through technological enhancement. In an even more radical break from conventional human law, it will interrogate the ways virtual reality can be deployed to allow people to act out their wildest, most dangerous fantasies without harming others. Finally, it will reflect on how transhuman relations can reboot law and justice to encourage and defend the expression and expansion of individual and collective potential.

Transhuman Rights

The growing prevalence of AI and robots, as well as the concurrent rise of a transhuman society, will require a complete overhaul, or at least a dramatic revision, of the existing legal system. This poses a unique challenge for legal philosophy and its application. A serious question is whether laws made for and by humans are applicable to intelligent non-human beings as


well as to a wider transhuman society. Even back in the 1990s there were concerns about what would and should constitute “social laws for artificial agent societies” (Shoham and Tennenholtz 1995). These have only been exacerbated by the legal issues involved in the rise of “parallel intelligence”, which will potentially involve cultures of “lifelong learning” between humans and machines. In particular:

Due to the pervasive use of mobile devices, location-based services, social media Apps, etc., cyberspace has become as real to human beings as physical space. In cyberspace data becomes the most important resource. Using Big Data as input, Software-Defined Objects (SDO), Software-Defined Processes (SDP), Software-Defined Systems (SDS), and Software-Defined Humans (SDH) in parallel with physical objects, processes, systems, and humans can be designed and constructed through learning, based mainly on existing data, knowledge, experience, or even intuition [14]. With Software-Defined everything, computational experiments can be conducted (i.e., self-play, self-run, self-operation, self-evaluation), and a huge amount of “artificial data” can be generated. That data is then used for reinforcement learning to enhance intelligence and decision-making capabilities. Meanwhile, the decisions are evaluated against various conditions. In the end, the physical objects, processes, and systems interact with the SDOs, SDPs, and SDSs, forming a closed-loop feedback decision-making process to control and manage the complex systems (as Fig. 4 shows). This is the core concept of the ACP-based parallel intelligent systems. (Wang et al. 2016: 347)

Not surprisingly, the legal focus has predominantly rested on regulating robots to preserve human wellbeing and rights. These concerns are reflected, for instance, in ongoing and quite comprehensive discussions of “governing lethal behaviour in autonomous robots” (Arkin 2009). They encompass both philosophical and practical questions, from whether non-human beings have free will to what their punishment should be if they break current laws. To an extent, such concerns play into sensationalist ideas of the technologically advanced “killer robot” that must be stopped (Gubrud 2014). Yet they also echo legitimate inquiries into evolving definitions of “just war” given the increased military use of robots and AI (Pagallo 2011). This involves, in turn, a need to craft international laws for this purpose—ones that govern the compliance of “autonomous weapons systems” (AWS) with “international humanitarian law” (IHL). The legal scholar Schuller (2017), for instance, introduces Five Principles for this


purpose, including: (1) “The decision to kill may never be functionally delegated to a computer”; (2) “AWS may be lawfully controlled through programming alone”; and (3) “IHL does not require temporally proximate human interaction with an AWS prior to lethal kinetic action”. Interestingly, the use of robots is not universally condemned or feared, as their presence opens up new opportunities to investigate questions of global accountability and responsibility. Indeed, the use of such lethal autonomous weapons (LAWs) may actually increase the ability to hold human actors legally accountable for war crimes through their enhanced data collection and tracking abilities:

The technology gives rise to a related and further beneficial effect, which is often not noted. Holding someone accountable for their action, e.g. for actual conviction for a war crime, requires reliable information—which is often unavailable. The ability to acquire and store full digital data records of LAWS’ action and pre-mission inputs allows a better determination of the facts, and thus of actual allocation of responsibility, than is currently possible in the ‘fog of war’. As well as allowing allocation of responsibility, the recording of events is also likely to diminish the likelihood of wrongful killings. There is already plenty of evidence that, for example, police officers who have to video their own actions are much less likely to commit crimes. So, killer robots would actually reduce rather than widen responsibility gaps. (Müller and Simpson 2016: 76–77)

Significantly, technological advances are already indirectly and directly “disrupting” existing legal ideas and rights. For instance, the contribution of AI and big data to the creation of “diverse economies” and “the sharing economy” also requires a shift in “legal consciousness”. Morgan and Kuch (2015: 566) propose the idea of “radical transactionalism”, which aims to achieve

the creative redeployment of legal techniques and practices relating to risk management, organisational form and the allocation of contractual and property rights, in order to further the purpose of internalising social and ecological values into the heart of economic exchange. The role of law in allowing financiers and entrepreneurs to quantify investments is often overlooked in analyses of the success of capitalism … The calculability of economic life that is both mundane and enormously complex is only possible through the socio-legal formatting of contracts, property rights, shareholder voting rights and regulations.


It also opens up new legal challenges for the “forecasting, prevention, and mitigation” of the “malicious use of artificial intelligence” (Brundage et al. 2018). What seems incontrovertible and undeniable, though, is that at the minimum “intelligent robots must uphold human rights” (Ashrafian 2015a). Yet the rise of transhumanism, increasingly in both theory and fact, has resulted in gradual efforts to completely reconceive the scope of the law and its application so as to transcend humanity. These attempts are witnessed especially in discussions related to sustainability, where scholars are investigating how to integrate non-human rights and interests into existing and expanding legal frameworks for ensuring ecological sustainability (Kim 2010). Especially relevant to recent technological developments are critical interrogations of the role of law in regulating “the evolution of the human species” in relation to “human robots” and “human enhancement” (Terec-Vlad and Terec-Vlad 2014). These have quite far-reaching and progressive potential consequences—pointing to the creation of “a humanitarian law of artificial intelligence and robotics”. Perhaps even more radical in its possible effects is the argument of unions and other employee rights groups that “robots are people too”. To this end:

In the next 10 to 20 years, AI will permeate our lives more thoroughly, from the ways we travel to the media we consume to the tools in place to police our streets. As weak AI becomes an increasingly inescapable presence, it will force us to change some of our assumptions about the world we live in. This includes assumptions made by our laws. In America almost all our laws at the local, state, and federal levels share an assumption: all decisions are made by human beings. The development and proliferation of AI will force changes to our laws because many of them will not adequately address how AI products interact with us and each other.
Areas of law as diverse as liability, intellectual property, constitutional rights, international law, zoning regulations, and many others will have to adjust to effectively account for decisions that have not been made by human beings. (Weaver 2013: 4)

These gesture back, furthermore, to the legal responsibilities humans have toward “learning robots” (Marino and Tamburrini 2006). They open the way for wider queries about the prospect of preserving and expanding not just human rights but “robot rights” (Gunkel 2018), harkening back to the latter part of the twentieth century, when there were first indications that robots would become a more prominent part of our daily lives (McNally and Inayatullah 1988). They also encompass


broader moral considerations as to the legal standing of robots. Coeckelbergh (2010a) proposes a “relational” approach to robot rights based on their different roles and needs within diverse social contexts. He argues:

Whether or not it is acceptable to grant rights to some robots, reflection on the development of artificially intelligent robots reveals significant problems with our existing justifications of moral consideration. This forces both defenders and opponents of robot rights to reconsider their conceptual frameworks … I have offered an alternative, social-relational approach to moral consideration, which reframes the issue of moral consideration by shifting the focus from rights and properties to relations. This approach invites us to explore radically relational, ecological ontologies. It has implications beyond theory of moral consideration and applies to artificial as well as biological entities. (Ibid.: 219)

These insights help reframe the legal debate from how to protect humans from robots to how humans can best care for intelligent non-humans (Sharkey and Sharkey 2011). Anticipated are general “principles of robotics” for legally regulating incipient transhuman relationships (Boden et al. 2017). Critically, these ideas put firmly to the test the assumption that the ethical and practical basis of the law should be “human dignity”. Such appeals ring at best incomplete and at worst hollow in light of recent history and the growing proliferation of non-human intelligence (Palk 2015). The fear, of course, is that this will result in us “losing humanity” (Docherty 2012). Yet it is precisely this appeal to dignity that can challenge the all too human desire for domination and servitude (Bryson 2010). Thus far much of the attention has been on how to map human morality onto “intelligent” non-humans, often revolving around making robots and AI appear more human (Coeckelbergh 2010a). Yet this enhanced acknowledgement of the rights and dignity of non-humans may actually be best fostered and preserved through more intimate and mutually supportive transhuman relationships. This acceptance of and commitment to non-human dignity, in turn, paves the way for a comprehensive philosophy and theory of transhuman legal rights. More precisely, this would centre on ensuring the safety of beings regardless of their humanity, as well as using the law to envision and put in place a mutually beneficial integrative society. It also entails


providing a pragmatic roadmap of how humans and robots can share moral responsibility for their actions. While formal legal provisions are indeed important, they can additionally provide the foundations for “innovating beyond rights” (Ashrafian 2015c).

Updating Autonomy

An ongoing question that will have major legal repercussions for a transhuman society is how autonomous intelligent non-human agents in fact are. They will clearly have enhanced capabilities and expanded types of social and economic agency, yet will they be acting out of their own free will (Smithers 1997)? This concern raises philosophical and, not for the first time, legal queries about precisely how much free will humans actually have. Are they autonomous beings acting of their own desires and accord, or are they culturally programmed, their behaviour a product of their social environment? While such questions are perhaps impossible to ever fully resolve—especially when it comes to ascertaining legal accountability and responsibility—the interactions between humans and robots can shed light on the cultural influences on both human and non-human intelligence. For instance, by better understanding human teaching behaviour it becomes easier to develop more “effective robot learners” (Thomaz and Breazeal 2008). It also points the way to the creation of “psychological benchmarks” for determining “what is a human” via these human-robot interactions (Kahn et al. 2006). To this extent, the realities of a transhuman society will potentially have a quite profound effect on the conception of human freedom and the law. Most obvious are critical investigations of what will constitute “human rights in a posthuman world”. As the renowned legal professor Upendra Baxi (2009) argues:

We may no longer confidently rely upon the faculty of human or divine reason as a sovereign constituent marker of differentiation between humans and machines. Developments in artificial intelligence reprogramme the very idea of being human; we all tend increasingly to become cyborgs, a new digitalized incarnation of the classical mermaid function.
Development in the genomic science, variously decoded as part of the Human Genome project, now fully suggest that human life is no more than a code, or a series of complex flows, of genetic information, ever ready for a cascading variety of


genomic, and related technoscientific orders of mutations, via the extraordinary prowess of bio, nano, and biomedical neurotechnologies. All these, more or less, deprive us of the old “consolations of philosophy” that once-upon-a-time enabled us to somehow draw upon bright lines demarcating sharply the human species from the animal, and object in nature, as well as from the mechanistic species; in sum, the privileging of the distinctly ‘human’.

These philosophical interventions are complemented by a deeper historical understanding of the role that “posthuman” thought and practices have had in the creation and development of past social orders and power relations (Wolfe 2018). Revealed, in turn, is the way a focus on these “disruptive technologies” and their posthuman implications can hide a rather traditional humanist and human-centred worldview (Broekhuizen et al. 2016). Underpinning such legal discussions and philosophical inquiries is how new technologies are concretely and actually impacting human freedom in all spheres of existence. The law, in this regard, has both a universalist and a particularist quality. It sets out general rights and responsibilities but modifies them as appropriate to certain contexts and activities. It would be illegal for someone to coerce another into wearing a specific outfit, for instance, unless it is a boss requiring their employee to wear a specified uniform while in the workplace. Much attention, for perhaps rather obvious reasons, has been paid to the civic rights of humans and robots or their wider “human rights”. However, technologies such as big data and AI can have serious and far-reaching implications for the legal rights of employees (Stone et al. 2018). This extends, as well, into the home and daily interpersonal relationships, especially with the emergence of what has been called “Robo sapiens” (Robertson 2017). As previously discussed, a critical aspect of the “emancipatory project” of posthumanism and transhumanism is its desire for social experimentation (see again Cudworth and Hobden 2017a). Yet this also poses a serious danger, as such an “experimental” ethos can ideologically and legally affirm the legitimacy of specific social activities being performed by intelligent non-human agents, as in the contemporary case where military drones continually experiment with the use of thermal recognition to identify human targets (Stagliano 2019).
Revealed, thus, is the need to better understand and legally engage with emerging ideas of “non-human” autonomy. The militarisation of intelligent non-human agents remains a rich area of debate and a good place to


theoretically and practically investigate this phenomenon, as it is the present-day context in which non-human intelligence has had the most dramatic effect. There are pronounced, and already discussed, fears of the “dehumanization of lethal decision-making” leading to desires for “banning autonomous weapon systems” (Asaro 2012). These legal and political demands necessitate a deeper philosophical and empirical concept of “dehumanization” (Porpora 2017). They also provide the basis for critically thinking about the legal ramifications of such “dehumanization” within different cultural contexts (see, for example, Newfield 2018). Moreover, this offers a “critically posthumanist approach” to legally and socially addressing intimate social problems—such as dementia care (Jenkins 2017a). Rather than conceptually or legally distinguishing between human and non-human autonomy, though, it may be better to explore legally the possibilities of expanded transhuman agency. Here the human body becomes a regulated and exciting mix between “natural and artificial technology” (Fortunati 2017). Consequently, human rights become something not simply to preserve or to overcome but rather to build upon and positively expand (Godin 2018). This would, further, situate these expansive “human rights” within a range of transhuman contexts—such as in the health care system with regard to “regenerative medicine” (Parry 2018). Such extensive forms of transhuman agency would, moreover, permit agents to be neither human nor machine but rather to experiment with diverse ways of being in the world (Hobden 2015). Also available is the shift from the human-made “Anthropocene” to transhuman “ecosophies” such as “continental posthumanism” (Bignall et al. 2016). This requires a new integrative legal vision of transhuman freedom. It entails universally and contextually combining human and non-human capabilities and intelligences (Cudworth and Hobden 2015).
In practice, this can be witnessed in the integration of bioethics and transhumanism (Porter 2017). More creatively and retrospectively, it is also on view in the changing ways in which agents will write their “life” histories (Huff 2017). This will have additional ramifications for social policies and practices—changing how things like social justice work are both legally conceived and enacted (Walton and Rose 2018). Yet it will also potentially allow for an expansion of what counts as human, “queering” notions of what it means to be a free and autonomous intelligent being in the world (Luciano and Chen 2015).

220 

P. BLOOM

At stake is the philosophical and legal updating of freedom. In particular, it means enlarging the scope of what is legally possible to reflect the expanding agency, capabilities, and desires of the transhuman subject (Veronese 2016). These changes will have to be, moreover, tailored to reflect the realities and demands of specific cultural contexts (Hua and Ray 2016). In this regard:

We are at an interesting juncture in the time of a philosophy that recognizes the historicity of the “human” through its dissolution and in the space of and network of interdisciplinary transmutations. In decentering the human, feminist posthumanism performs a multiplicity of dislocations of sex, gender, sexuality, race, and ability … the fruitful becomings engendered at the transformative intersection of posthumanist and trans theory. Both posthumanism and trans offer a radical challenge to the “human” as configured through the binaries of human/animal, human/nonhuman, sex/gender, hetero/homo, man/woman, mind/body, natural/unnatural … However, if transgenderism and posthumanism are considered in the context of epistemological upheaval, they can effect a powerful theoretical becoming that recognizes and questions gender in the human and vice versa. (Nurka 2015: 209–210)

Doing so will allow, thus, for the true “dehumanising” of technology, creating a notion of existence that is neither completely animal, human, nor machine.

Enhancing the Law

Transhumanism creates an exciting and critical opportunity to update and enhance human law. It repurposes the law toward expanding human capabilities and “perfecting” them (Sorgner 2016). In this respect, it introduces and seeks to define what legal scholar Irus Braverman (2015: 307) refers to as “more-than-human legalities”:

What is the role of nonhumans, and of nonhuman animals in particular, in the constitution of law? How should legal systems account for societies that include not only humans but also nonhuman entities? What are the intersections between law and nonhuman life? And how to overcome the anthropocentric biases in modern legal systems? Such questions and others may provide fertile grounds for law and society investigations. Despite the richness and complexity of these investigations, however, the law and society community has typically relegated the “question of the animal” to the


discourse of animal rights. Within this discourse, legal rights are extended to certain nonhuman animals through the same liberal framework that has afforded humans’ rights beforehand: vertebrates, invertebrates, microbes, and non-living entities must first cross Western law’s threshold of personhood to obtain rights … sociolegal scholarship could greatly benefit from moving beyond the rights discourse of animal law to a new subject of inquiry: more-than-human legalities. By acknowledging the myriad ways of being in the world, their inherent interconnections, and their manifestations in and constitutions of law, more-than-human legalities extend the advocacy-oriented scholarship of animal rights to highlight how both animality and humanness are deeply embedded in the construction of law and, reciprocally, how law is acutely relevant for constituting the animal. Indeed, while nonhumans render law’s operations—in fact, its very existence as such—possible, law also constitutes animal life and renders it meaningful in a variety of ways.

As such, it must contend with emerging themes such as advances in human longevity and how these impact upon questions of human and non-human solidarity (Dumas and Turner 2015). Further, the law will be tasked with understanding how to properly confront “converging human and digital bodies” (Käll 2017). This is especially crucial as humanity itself becomes both philosophically and practically a “contested concept” (Trigt et al. 2016). Central to these efforts is reconfiguring legal frameworks to adapt to the diverse issues associated with machine-aided human enhancement. This will mean rejecting, either wholesale or almost entirely, ideas of “human nature” for this juridical purpose (Silva 2017). These enhanced capabilities will also dramatically reframe traditional human legal issues such as those associated with “security” (Cudworth and Hobden 2017a). To this end, it is essential to acknowledge that there may be a diverse range of interpretations for ethically and legally understanding and regulating this increasingly global cyborg culture (Ahluwalia 2016). These will be aimed not just at protecting humans and non-humans (and their various hybrid manifestations) but also at revitalizing and fostering novel notions of “human flourishing” to account for these changes (Gorski 2017). Just as significantly, posthumanism can serve as a perspective for deepening existing relationships with the environment and the multitude of cultures and people currently populating the earth (Datta 2016). These expanded capabilities bring with them a reconstituted sense of moral and legal responsibility. More precisely, in dramatically and perhaps


irrevocably altering what it means to be human, it also demands transcending those ideologies and relationships with non-humans based on now outdated ideas of “human nature”. This complete legal reboot, in turn, requires novel legal philosophies that can genuinely account for and serve as the foundation for interpreting this new transhuman reality (Kessler 2018). At the most basic level, this should fundamentally shift human relations with animals and nature, ranging from new perspectives on eco-feminism (Gaard 2017) to animal rights (von Essen and Bornemark 2019). Of equal significance is its potential to completely legally reconstitute human relations. The question, in this regard, is whether posthumanism will also mean postcolonialism, or simply legally justify once more—either explicitly or through omission—an all too human legacy of conquest, prejudice, and inequality. Indeed:

As a consequence of the rapid growth of technological innovations, the world has seen the emergence of discursive fields such as transhumanism and/or posthumanism. The origin of these discursive practices can be traced back to the Renaissance humanism and the Enlightenment project envisioning a teleological progress of human civilisation, though it is customary to regard these developments as a point of separation from the Enlightenment or Renaissance humanism, particularly due to the inclusion of the nonhuman animals and the extra-human futuristic technological beings. However, its basic objective remains to be the realisation of the human potential through the extension of the field of science and technology. As it happens to be the case with many other postmodern discourses, the discourse of posthumanism seems to be a corollary of neo-colonialism. Once colonised, now third-world subaltern subject becomes the strategic object of the discourse, since the posthuman man will require its ‘other’ and the otherness will be realised in the pre-posthuman subaltern agency. The subaltern subject with its lack of accessibility to the newest innovations and because of its inability to participate in the discursive practices is fated to become the ‘techno-slaves’ in the hands of the ‘techno-masters’. (Islam 2016: 115)

It also broadens the social horizons of what the law is and can do. It enhances not simply human capabilities but the ability for humans to work collaboratively with machines to regulate and legislate ourselves and what we can personally and collectively become (Smart and Smart 2017). This more rigorous legal and anthropological study of the “posthuman” subject can allow for new insights to emerge from non-human sources of intelligence (Marchesini 2017). It can also present a novel framework for


socially embodying “morphological freedom” and its emergent legal and cultural responsibilities toward human, non-human, and hybrid others alike (Fuller 2016). Additionally, it permits a legal reconsideration of “human history”, re-asking “how did ‘we’ become human in the first place?” (Bayley 2019). Anticipated, as well, is the possibility of discovering new methods and practices for “being well together” via “more than human collaboration and companionship” (Kirk et al. 2019). Revealed are the incipient foundations for a transhuman legal system. So much of the existing focus, in this regard, is on which human enhancements are going to be legally permissible and what are the licit actions for non-human intelligence. These are certainly worthwhile considerations. However, they are ultimately incomplete in terms of the full scale of the opportunities and challenges posed to legal thinking and expertise by this transhuman social, political, and economic transformation (Braverman 2015). Revealed is the promise of a new basis for judging what should and should not be legal, one that transcends ideas of personal responsibility or the prevention of harm (Rose 2019). Instead, it can serve as the basis for fostering “sustainable futures” between humans and non-humans (Kruger 2016). Further, it opens the potential for legal forms of “engineering posthumans”—raising critical questions about the purpose of doing so and the power dynamics behind such actions (Karamanou et al. 2017). These concerns may seem rather speculative right now, but they actually have profound present implications—especially as new human enhancement technologies become increasingly available, thus revealing the limits of human laws for dealing with transhuman and posthuman concerns (Lee 2016). What are these transhuman legal possibilities then? Firstly, they are borne out of a continual but productive tension “between human enhancement and technological innovation”.
In this regard:

Within the broad current of transhumanism one can notice an extreme optimism, but this is never naïve and exalted. However, the idea that change occurs at a level that cannot be compared with any former technological revolution in the whole history is difficult to assimilate. The transhumanist idea comes to absolutize technology to the extent that human enhancement is even overwhelmed through innovation. Although transhumanism seems difficult to be absorbed in our imaginary, certain aspects of its spirit penetrates the way we understand innovation and human empowerment. The limits of innovation are pushed out. Innovation is called to open itself


toward society and contemporary problems. Standard innovation (intelligent cities, supercomputers, spatial missions) is moved ahead by the transhumanist mentality which grants technology with a strong feeling of enhancement. In singularity state, our contemporary problems will be simply out of question. This kind of post-human status offers a serious impetus for contemporary innovation. Thus, transhumanist ideas are not turned away from concrete reality which is for them both a starting point and a source of inspiration. (Iuga 2016: 87)

Equally imperative is understanding how “human rights” can apply and be modified so that their underlying principles and spirit remain formative and relevant to this transhuman age. Importantly, human rights transform from entrenched obligations into a catalyst and ethos for further debate over what it means to be human in a transhuman world:

The issue areas covered by this symposium, and other emerging issues that similarly challenge us to reconsider what it means to be human, are dispersed across different areas of law and politics, and across geographic regions. Yet those pushing for animal rights, for example, can fruitfully draw insights from those in the Global South debating the rights of mother earth, and from bioethicists trying to think through the rights of the unborn to a particular genetic structure. In this way, human rights scholarship, law, and institutions could provide a site for cross-fertilization. It is a site that also puts these issue areas in conversation with the human rights debates that have preceded them, including transversal issues such as discrimination, poverty, and other forms of inequality, even as it provides a set of institutions that can help articulate new norms … On the occasion of the UDHR’s seventieth anniversary, it seems fitting to understand being human as a long-term project, with an eye to fostering human rights law scholarship for the long now. (Huneeus 2018: 328)

Hence, this legal evolution should always be seen as a process rather than a destination. It is an opportunity to expose the limits of current legal understandings and laws, and a chance to create and test new ones (Margulies and Bersaglio 2018). These ongoing efforts invite both imaginative and “realist responses”, intermingling the fantastic and the pragmatic (Al-Amoudi and Morgan 2018). They, moreover, include the different personal and community “stories” of how people have differently viewed and sought to become “posthuman” and to live in a transhuman world (Braidotti 2017).


Reflected is the possibility to enhance not just humans but the legal ideals and laws that most define “us”. It is to add to the continual human conversation about who “we” are and what it means to exist in harmony together based on a shared understanding of acceptable behaviour and actions (Grunwald 2016). It is the transition from simply following the law and a sense of cultural responsibility to legally enshrined rights defending the dignity of all those alive (Liu and Zawieska 2017). Further, this enhancement of the law is the very vehicle through which humans are not just technologically but also morally enhanced as part of a transhuman society (Butler 2018).

Licit Pathologies

Thus far this chapter has explored the legal implications of transhumanism. While certainly radical, these also build on existing human desires for personal dignity and mutual protection. However, there are also prospects for a much more revolutionary shift in existing legal ideas and standards, ones in which the virtual, digital, and actual interact and intertwine in a contentious and continual discussion of how far humans and non-humans can play out their deepest and sometimes most disturbing fantasies (see Lenoir 2002). Uncovered is a complicated and rich new transhuman reality with profound legal ramifications—especially in terms of how we lawfully consider and govern our diverse collective and personal journeys in becoming “post-human” (Hayles 2008). The idea of becoming a mix between human and machine, for instance, often rests on the premise that individuals will use such technology for their own enhancement. However, what if we encounter the reverse, where people willingly give up their humanity for the benefit of a non-human agent (Jenkins 2017a)? Perhaps the most important foundation of any legal code is the need to reduce harm or at least uphold a shared sense of justice. Yet these laudable, though always contestable, principles are arguably fundamentally put to the test in relation to transhumanism. What actually constitutes harm to a machine, cyborg, or a practically “immortal” human (Waldby 2003)? Here the “self” is not singular but multiple and virtual—and consequently the notion of what is unacceptably injurious to it is not so straightforward (Keogh 2014). Any attempt to legally approach these issues must also confront the emergence of experimental and diverse “high-tech subcultures” (Terranova 1996). More than just the physical (or the virtually


real, if you will) are the protection and legal management of emerging posthuman religious and spiritual concerns (Waters 2016). Yet transhuman law must do more than simply prohibit or set the limits of acceptable action. It must also be conducive to, and not hinder, the flourishing of transhuman imagination, especially as it holds the potential to progressively reorder an outdated and often unjust human social, economic, and political order (Peters 2018). Part of this process will then be the use of popular media, art, and philosophy to reimagine and even experiment with different transhuman futures. Drawing on the insights of the philosopher Martin Heidegger, Onishi (2011: 101) contends that:

For those seeking to extend humanist ideals, information technologies are employed to extend the vision of an ultra-humanist view of a ‘scientific posthuman’ that dangerously understands the body to be a forfeitable nuisance, rather than an inherent aspect of being human. Along Heideggerian lines, thinkers such as N. Katherine Hayles and Thomas Carlson have developed an alternative trajectory related to Dasein’s Being-in-the-world. This trajectory posits the self as constituted by a lack or abyss, enabling the formulation of a ‘mystical posthuman,’ celebrating, rather than forfeiting, humanity’s embodied existence.

To this end, these once fantastic explorations will no longer be a purely voyeuristic experience. New virtual and digital technologies will allow people to “live out” these prospective possibilities for a different type of existence and world. This will require, in turn, the legal promotion of new types of “posthuman literacy” and skills for this more interactive transhuman experience (Bayne and Ross 2013). Nevertheless, this emerging alternative world of exciting virtual exploration and experimentation does not preclude the fact that socially and politically humans and non-humans will have to put in place binding and enforceable legal frameworks. Much of this will revolve around distinguishing that which is considered a contingent and reversible change from that which has a greater permanency in its ultimate personal and collective effects. Consequently:

The body needs to be repositioned from the psycho realm of the biological to the cyber zone of the interface and extension—from genetic containment to electronic extrusion. Strategies toward the post-human are more about erasure, rather than affirmation—an obsession no longer with self but an analysis of structure. Notions of species evolution and gender distinction are


remapped and reconfigured in alternate hybridities of human-machine. Outmoded metaphysic distinctions of soul-body or mind-brain are superseded by concerns of bodyspecies split, as the body is redesigned—diversifying in form and functions. Cyborg bodies are not simply wired and extended but also enhanced with implanted components. Invading technology eliminates skin as a significant site, an adequate interface, or a barrier between public space and physiological tracts. The significance of the cyber may well reside in the act of the body shedding its skin. And as humans increasingly operate with surrogate bodies in remote spaces they function with increasingly intelligent and interactive images. The possibility of autonomous images generates an unexpected outcome of human-machine symbiosis. The post-human may well be manifested in the intelligent like form of autonomous images. (Stelarc 2000: 154)

At the heart of these legal debates is an evolving and dynamic “history of agency”, and with it of responsibility and consequences (Pickering 2001). Significantly, while it may seem that the physical, and thus the permanent, will simply disappear, there remains even within the “virtual” body “the strange persistence of the flesh” (Brians 2011). There will then be a continual need to engage with the interaction between the physical and virtual—what was once the difference between mind and body now transitioning to a broader notion of “flesh and metal”, software and hardware (Hayles 2002). Increasingly central, in turn, is the legal allowance for different forms of transhuman creativity. This speaks to emerging innovations in “posthumanism and design”—ones that could have dramatic social impacts on how we live and work:

emerging technologies that are shaping everyday life, and have begun to play a greater role in socio-cultural, political, and economic transformations. A robot is now a partner in a law firm. Driverless cars are being tested in many cities around the world. Voice-activated, in-home personal assistants are becoming common household devices. Wearable technologies are being embedded into clothing. Medical devices have become so sophisticated that some now take on what we used to think of as human functions. These developments blur the boundaries between the familiar binaries of human and nonhuman, culture and nature, and human and animal that have dominated Western thinking since at least the Enlightenment. They underscore the ways in which nonhumans—whether environmental or technological—have new kinds of agency in the world. They also reveal new perspectives and raise questions about what, how, and why we engage in the design of


the so-called “artificial” world. Over the past several decades, a growing body of social theory has developed around concepts that attempt to make sense of this blurring of boundaries and introduce hybrid, non-binary, relational modes of thinking about being in the world. (Forlano 2017: 17)

There are also considerations as to how to balance the freedom and safety of “digital subjectivities, unhuman subjects” who primarily reside in virtual worlds (Graffam 2012). More radical, perhaps, is the reconfiguring of legal perspectives away from a predominant focus on the individual, especially within this new digitized and virtual world where “there is no ‘I’ in network” (McNeill 2012). It will also mean navigating the different ways humans and non-humans express their “transhuman selves”—whether physically through “wearable computers” (Pedersen and Blakesley 2013) or virtually in attempts at “transcending embodiment” (Frentz 2014). Optimistically, the law stands at the cutting edge of new transhuman fantasies of what it means to exist, interact, and thrive in an emerging “integrative” society. Here everyday materials will not be mere objects but intelligent beings whose views and agency must be listened to and accounted for (Bennett 2016). Further predicted will be cityscapes that are an exciting, though from a present perspective confusing, mix of the digital, virtual, and physical—allowing for novel types of “exteriorization, individuation, (and) reinvention” (Rose 2017). Just as importantly, though, while this will demand that the law lessen its individualised focus in some sense, it will also have to be more customisable to the specific needs of posthuman subjects and digital objects (Adams and Thompson 2016). Proliferating, as always, will be not one human narrative but a combining of the “last human narrative” with novel transhuman narratives (Botz-Bornstein 2015). In the present, legal professionals and thinkers must be vital parts of the contemporary conversations about how we are currently “forming ourselves for a posthuman future” (MDIV 2012). Emerging is a new legal reality based on the exploration of new integrative selves and communities.
It will involve a wide range of fantasies and intelligences, in this respect, with an equally wide range of potential social and legal consequences (Gladden 2015). At its heart, it will have to encompass a complete rethinking of cultural and organisational relations, reconceiving the treatment of our personal and collective health and wellbeing (Herndl 2013). Coming soon is nothing less than the construction of “posthuman selves” and a transhuman legal framework to help accommodate, shape, and regulate them (Thweatt-Bates et al. 2011).


Legal Reboot

AI, machine learning, and robotics, along with other emerging “disruptive” technologies, offer the opportunity for a complete reboot of the legal system. These changes could and should far exceed the mere extension of human-centered judicial philosophies and laws onto intelligent non-humans. Rather, they allow for the introduction and proliferation of a new vision of justice (Brundage 2015). Significantly, this nascent notion of transhuman justice is not just for future application. It has serious present-day implications—especially as AI reproduces the assumptions and biases of the humans that gave it birth and with whom it interacts (Osoba and Welser 2017). These insights additionally serve as the basis for guiding our legal views and fears of non-humans. To this end:

The main question in that context is, what kind of laws or ethics are correct, and who is to decide? In order to cope with these same problems as they relate to humans, society devised criminal law. ‘Criminal law embodies the most powerful legal social control in modern civilization.’ People’s fear of AI entities, in most cases, is based on the fact that AI entities are not considered to be subject to the law, specifically to criminal law. ‘In the past, people were similarly fearful of corporations and their power to commit a spectrum of crimes, but because corporations are legal entities subject to criminal and corporate law, that kind of fear has been significantly reduced.’ Therefore, the modern question relating to AI entities becomes: Does the growing intelligence of AI entities subject them to legal social control as any other legal entity? This article attempts to work out a legal solution to the problem of the criminal liability of AI entities. At the outset, a definition of an AI entity will be presented. Based on that definition, this article will then propose and introduce three models of AI entity criminal liability:

(1) The Perpetration-via-Another Liability Model
(2) The Natural-Probable-Consequence Liability Model
(3) The Direct Liability Model

These three models might be applied separately, but in many situations, a coordinated combination of them (all or some of them) is required in order to complete the legal structure of criminal liability. (Hallevy 2010: 174)

These technologies are presently transforming the understanding and practice of criminal justice around the world. The rise of cyber crime, for


instance, has led to pioneering AI solutions for its prevention and resolution (Dilek et al. 2015). Even in the late twentieth century, AI was being hailed as a potential technology to “help solve the crisis in our legal system” (Berman and Hafner 1989). Even if these forward-thinking ideas did not ultimately come to pass, not least because these problems required systemic and ideological changes rather than simple technological innovations, they still revealed the desire for non-human intelligence to improve current human laws and justice (see Owen and Owen 2015). Alternatively, the proliferation of robots has created renewed concerns that they will increase crime. According to Sharkey et al. (2010: 115):

Robots will be used for crimes because they offer two elements that have always promoted crime: temptation and opportunity. The rewards are high, the barriers to entry rapidly disappearing, and the risk of apprehension significantly decreasing. Catching a robot doesn’t catch the perpetrator, so a new form of forensic science must be created. Robots don’t leave fingerprints or DNA, so police should consider building information databases to match and trace robot crime just as they do guns and ammunition. Meanwhile, engineers should seek ways to incorporate telltale clues into software and components to assist forensic analyses.

It is also imperative, therefore, to create more measured assessments of the potential legal responsibilities of robots (Asaro 2007). It is worth, in this regard, exploring in more depth how such technology is already impacting criminal investigations and the justice system. Modern policing is now aided by “intelligence-led crime scene processing” (Ribaux et al. 2010). Training and investigations are also enhanced by the use of GIS and AI for creating “crime simulation” (Wang et al. 2008). Even last decade it was being proclaimed, furthermore, that “the robot arm of the law grows longer”. Returning again to the insights of the robotics scholar Noel Sharkey (2009: 115), he surveys with trepidation a wide range of “armed robots” being used for surveillance and policing, observing that:

We must always consider future possibilities. The political landscape can change rapidly, with intolerable laws set into place simply because hi-tech solutions can now enforce them. I feel distinctly Orwellian rumblings and can’t help thinking about the covert helicopter surveillance Orwell described in 1984: “In the far distance a helicopter skimmed down between the roofs, hovered for an instant like a bluebottle, and darted away again with a


curving flight. It was the police patrol, snooping into people’s windows. The patrols did not matter, however. Only the Thought Police mattered.” If handled with caution and respect for our human rights and personal liberties, robots could make our streets safer places. There is nothing wrong with the technology itself—it is the people who control it that we must worry about. In the wrong hands, petty laws could be introduced and enforced, and privacy could become a fading memory. As computer professionals, all we can do is stay alert to developments and use our expertise to keep the public informed.

Concretely, this has been manifested in the use of robot crime interviewers, especially for gaining testimony from children (Kyriakidou 2016). This extends to a broader and increasingly diverse legal field linked to “the laws of robots” (Pagallo 2013a). Perhaps the most notable development, in this regard, is the progressive growth of what can be termed “predictive policing”. Previously, this was the domain of science fiction fantasy, most famously the book and movie “Minority Report”. However, it is quickly becoming a criminal justice reality (Pearsall 2010). A recent report put out by the RAND Corporation thus declares optimistically:

Smart, effective, and proactive policing is clearly preferable to simply reacting to criminal acts. Although there are many methods to help police respond to crime and conduct investigations more effectively, predicting where and when a crime is likely to occur—and who is likely responsible for prior crimes—has recently gained considerable currency. Law enforcement agencies across the United States are employing a range of predictive policing approaches, and much has been written about their effectiveness. This guide for practitioners offers a focused examination of the predictive techniques currently in use, identifies the techniques that show promise if adopted in conjunction with other policing methods, and shares findings and recommendations to inform future research and clarify the policy implications of predictive policing. (Perry 2013: iii)

Big data is used for this purpose of “detecting and investigating crime” (Keyvanpour et al. 2011). Such developments evoke age-old fears of a hi-tech police state reminiscent of a dystopian Orwellian future. Less sensationalist is the current reality in which cognitive robots are being used to automate a range of forensic investigations—such as those around bloodstain pattern analysis (Acampora et al. 2015).

232 

P. BLOOM

Not surprisingly, this enhanced legal use of non-human intelligence has remained overwhelmingly human-centred in its focus and implementation. It would appear that the biggest concern is the possibility of "policing police robots" (Joh 2016). There is comparatively little attention paid to the welfare of robots or other machines within this context. However, this is absolutely critical, especially as non-humans play a greater and greater role in upholding the law and public safety. A prime example is being able to detect material signs that a rescue robot is under cyber attack (Vuong et al. 2014). Nevertheless, the primary efforts have been to protect human consumers from "unfair and deceptive robots" such as bots on dating apps (Hartzog 2014). Required, hence, is a truly "integrative" notion of justice that is transhuman in both its approach and application. This necessitates a legal expansion of the category of "human" so that it can properly protect and serve, ironically, intelligent non-human agents (Whitehead 2017). It also means recognizing the inherent flaws and bias of the existing human system of justice, ones which can be mitigated, if not completely resolved, through the use of AI and robots (Bethel et al. 2013). Equally important is eliminating outdated figurative uses of robots by legal professionals for the purpose of dehumanising human defendants. Legal scholar Ryan Calo (2016: 210) observes, in this respect, that The judge's use of the robot metaphor can be justice enhancing in some ways but problematic in others. Judges tend to invoke robots as a rhetorical measure to help justify the removal of agency from a person, often a person whom society already tends to marginalize. Further, to the extent judges' rhetorical uses of robots reflect their actual understanding of the technology, judges hold an increasingly outdated mental model of what a robot is.
One hopes that judges will update this mental model as actual robots continue to enter mainstream American life and create new legal conflicts.

The recognition of AI and robots as deserving of respect and dignity is the first step toward revolutionising how we perceive crime, safety and justice in a progressively transhuman environment. Hoped for, in turn, is a complete legal reboot to reflect these transhuman ideals and possibilities. Policy and strategy makers have a responsibility for "deploying foresight" for the realisation of such transformational goals (Cordeiro 2016). There must, as well, be realistic rather than sensationalist or prejudicial efforts to understand and prevent "artificial intelligence crime" (King et al. 2019). Of equal significance is the embrace by the law and its practitioners of non-human wisdom and ethics (Hughes 2011).

7  LEGAL REBOOT: FROM HUMAN CONTROL TO TRANSHUMAN…

233

From Human Control to Transhuman Possibilities

The previous section looked at the ability of non-human intelligence to positively alter existing human criminal justice and legal ideas and techniques. Specifically, its aim was to improve the ability of current criminal justice and legal professionals to investigate, prevent, and deal with wrongdoing. It further stressed the need to extend human justice, both in words and deeds, to non-humans. However, while certainly important, these efforts are ultimately only a relatively small part of how machines can transform human justice. The existing legal status quo, across its cultural and historical differences, is strongly rooted in ideas of regulation and control—the prevention of harm. However, it will now have to deal with emerging ideas of transhuman "transcendence" of established human norms and realities (Bainbridge 2017). Notably, it will have to engage with "new frontiers of legal responsibility" based on "autonomous machines" and "what robots want" (Pagallo 2013b). The predominant if not almost exclusive concern has been, thus far, the proper governance of robots—similar to that of humans. There is a distinction, again in quite human terms, between the bad and the good robot (Pagallo 2017). Yet these attempts to deal appropriately with non-human wrongdoers invite the same debates over who and what is actually responsible for such criminal activity—is it nurture or nature? (Bartneck et al. 2007). These discussions can be traced back, in part, to much earlier in the twentieth century with the first efforts to achieve "reproduction by design" via "test tube babies" (Jenkins 2015). In the present age, there is the further danger of otherwise progressive attempts to regulate the use of robots reinforcing existing colonial ideologies and global inequalities. Hence, Western feminist critiques of sex robots continue to privilege the experiences, voice, and needs of feminists in the global north.
Ignored, hence, is how western AI capitalist merchants are appropriating by introducing female and male sex robots traded with such brand names as Samantha and Harmony, which have stirred fresh social media discussions in countries such as Kenya, Nigeria, South Africa and Ghana among other nations from January 2018 than never before. In their capitalistic impunity, these capitalist merchants
are fixated on the expected turnovers and profit maximisation without regard to the impact they are going to have to the social DNA in Africa and other third world countries. The Artificial Intelligence is a capitalist agent that has introduced to an African man a woman they can buy, control and manipulate without her consent and on the other side, a woman can buy her own male robot; an object of envy that she can manipulate without fear of oppression; an embodiment of powerless society the western world envision for Africa. (Ndonye 2019: 2)

Looking further ahead into the near future, these debates will become even more complicated by the prevalence of “in-the-body technologies” that permit mass levels of cyborgization. Crucially, and often overlooked, is the fact that Cyborgs are increasing through the dynamics of mass paradigms, technology domestication, and cultural capital. More enterprises are introducing in-the-body technologies across mass paradigms, and more in-the-body technologies are becoming commonplace through technology domestication. At the same time, more individuals are seeking to increase cultural capital through body projects. Accordingly, debates about the future of society that are focused on robots are flawed, because they fail to consider the increasing numbers of cyborgs who will have capabilities superior to robots. Accordingly, debates about the future of society should consider the potential of cyborgs, as well as robots, replacing human beings. In particular, the potential cyborgs should be considered in debates about the future of society that encompass arguments about whether the replacement of human beings will lead to opportunity or exploitation, utopia or dystopia, and emancipation or extermination. (Fox 2018: 8)

These questions of cyborg agency combined with non-human criminal liability are currently taking on enhanced importance both popularly and officially. As discussed, there are profound inquiries as to the "morality of autonomous robots" (Johnson and Axinn 2013) which are changing the "rules of war" (Rabkin and Yoo 2017). Outside the sphere of military actions (which brings with it much larger moral questions, of course, of war and imperialism) there are legitimate concerns about the criminal liability and legal responsibilities of autonomous AI machines such as self-driving cars (Gless et al. 2016). This, further, extends to all areas of contemporary law, including those that, for instance, regulate employment relations, where the focus has been on finding ways of "making robots work for us" (Naastepad and Houghton Budd 2019). Less discussed but
potentially much more crucial is how the law can be used for "decolonizing" human ideas and policies around the economy and the social value of non-humans (Legarreta 2018). Perhaps the biggest danger to a prospective transhuman legal system is reproducing an ethos of lawful control and social discipline. This is not to imply that there will be no need for serious discussions and protocols on how best to shape and govern human and non-human relations. Rather it is about finding ways to deploy the law, both philosophically and in practice, to help create a less disciplinary and more empowering socio-economic system—one in which there will be less need for regulating, controlling, and dominating intelligence and its possibilities. These debates are currently being played out in contemporary policies for properly legally shaping the deployment of robots within heavily populated human areas (Salvini et al. 2010). Culturally, this entails reconsidering conventional ideas of robot laws and their purpose, such as, famously, in the work of science fiction author Isaac Asimov, differentiating between the "letter" and the "spirit" of these laws (Leslie-McCarthy 2007). In this regard, it is critical to draw on the law to socially challenge both human and non-human exploitation and objectification. In immediate terms this can be applied to the use of "frigid" settings on robots that permit men to "simulate rape" (Timmins 2017). There is a delicate balance between allowing people to safely experiment with their own pathologies (however morally repulsive) and revealing how transcending the exploitive social logics driving those fantasies can open up a range of new social and economic horizons to explore (Musiał 2018). The development of robots and the embodiment of AI are thereby transformed into opportunities for exploring radical and progressive forms of social, political, and economic ethics (Pitti 2018).
When viewed from this perspective, transhumanism is a vehicle for creating a new type of society and attendant legal system. It is premised on how “robots and us” can collaborate for the creation of the “good life” (Naastepad and Mulder 2018). This could demand an overhaul of the current educational and legal training to … include designing of new ICT applications and digital services, as well as developing and improving upon established models of learning, applications and services. A commonly faced challenge is that new applications, new ­services and new ways of learning have been perceived as an excessively massive change in the context of the overall system in which these changes take
place. Introduction of robots and software interfaces that operate between care professionals and clients is considered to be one such massive change in the overall ecosystem of health care and social welfare. Educating professionals or clients to go along with massive changes in the system is not an easy task. Personal one-to-one service is taken as granted by many. It is considered to be a nonnegotiable element of their client identity, professional identity, or both. This paper is based on a vision that future generations of professionals will have competences, roles and qualifications that are not evident in current, traditional or textbook-healthcare and wellbeing services. We believe that future health care and social welfare professionals will be solving real-life problems and challenges, and building competences towards this scenario should start during their studies. (Lehto et al. 2018: 1)

It will also entail incorporating legally the idea of non-human intelligence as a key part of ensuring the “good human life” (Carnevale 2015). Here the law will have to be updated to foster the safe expression of nascent “technologically mediated” identities combining the use of personal computers, online aliases, and robots (Pasfield-Neofitou 2017). To this end, AI must be reconsidered legally as both “consciousness and conscience”. These decisions must not be taken lightly and should be a point of continual social deliberation, asking … should we program a consciousness and a conscience into ‘full AI robots’ who will be our professors, teachers, doctors, nurses and lawyers? There are pros and cons: An AI professor, doctor or nurse would be more valuable if it has a sense of self-awareness and moral values such as non-violence, compassion, respect, tolerance and empathy. In addition, a conscience with these values would also provide another level of safety in case robots become malevolent. However, there is a danger when providing AI robots with a consciousness and a conscience. Following the values of emancipation, freedom and equality, they may strive to become socio-economically equal to humans. Since they are much more intelligent and knowledgeable, they may start to dominate our society. In order to prevent this, the full AI robot has to be programmed with principles such as obedience to humans and acceptance of human commands at all time. In addition, AI robots have to concede that freedom is only for their creators, the humans, and equality is a value between humans, not robots and humans. (Meissner 2019: 22)

At stake is no less than the possibility of legally expanding freedom for humans and non-humans alike. It involves novel efforts at posthuman inspired "design and mediation" (Fleckenstein et al. 2014). This will further entail difficult legal rethinkings of human written constitutions and laws (see Reid 2016). Critically, the law must take the lead in challenging "repressive robots" and fostering the "radical possibilities of emancipated automation" (Walsh and Sculos 2018). It can additionally serve as a means to create safe spaces for intimate relations between humans and non-humans—ones that can be mutually enriching rather than exploitive and controlling (Ess 2017). At the global level of the economy, fresh international legal frameworks and protocols can be established to allow for and promote innovative transhuman policies aimed at lessening inequality and increasing shared prosperity and wellbeing (Bello 2015). In this sense, the law is reconfigured from a force for human control into one fostering transhuman possibility. It sets the foundation for the creation of the "cyborg citizen", one who can inhabit and embody expanded political agency and ways of living and forming community together (Koch 2005). This leads to a novel legal "proactionary imperative" arguing for the freedom to develop these technologies unbound by social regulations (Fuller 2016). Yet while this is quite outdated in its ideological embrace of twentieth-century principles of libertarianism, its spirit of unleashing our existential potential via technology remains inspiring. The future of the law is the future of a transhuman community unbound by the narrow limits of humanity and its historical injustices and prejudices.

References Acampora, G., Vitiello, A., Di Nunzio, C., Saliva, M., & Garofano, L. (2015, October). Towards Automatic Bloodstain Pattern Analysis Through Cognitive Robots. In 2015 IEEE International Conference on Systems, Man, and Cybernetics (pp. 2447–2452). IEEE. Adams, C., & Thompson, T. L. (2016). Researching a Posthuman World: Interviews with Digital Objects. Dordrecht: Springer. Ahluwalia, P. (2016). Two Senses of the Post in Posthumanism. In Critical Posthumanism and Planetary Futures (pp. 131–141). New Delhi: Springer. Al-Amoudi, I., & Morgan, J. (Eds.). (2018). Realist Responses to Post-Human Society: Ex Machina. Abingdon and New York: Routledge. Arkin, R. (2009). Governing lethal behavior in autonomous robots. New  York: Chapman and Hall and CRC. Asaro, P.  M. (2007). Robots and Responsibility from a Legal Perspective. Proceedings of the IEEE, 20–24. Asaro, P. (2012). On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making. International Review of the Red Cross, 94(886), 687–709.


Ashrafian, H. (2015a). Intelligent Robots Must Uphold Human Rights. Nature News, 519(7544), 391. Ashrafian, H. (2015b). AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Science and Engineering Ethics, 21(1), 29–40. Ashrafian, H. (2015c). Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights. Science and Engineering Ethics, 21(2), 317–326. Bainbridge, W. S. (2017). Transcendence: Virtual Artificial Intelligence. In Dynamic Secularization (pp. 237–268). Cham: Springer. Bartneck, C., Suzuki, T., Kanda, T., & Nomura, T. (2007). The Influence of People's Culture and Prior Experiences with Aibo on Their Attitude Towards Robots. AI & Society, 21(1–2), 217–230. Baxi, U. (2009). Human Rights in a Posthuman World: Critical Essays. Oxford: Oxford University Press. Bayley, A. (2019). How Did 'We' Become Human in the First Place? Entanglements of Posthumanism and Critical Pedagogy for the Twenty-First Century. In Posthumanism and Higher Education (pp. 359–365). Cham: Palgrave Macmillan. Bayne, S., & Ross, J. (2013). Posthuman Literacy in Heterotopic Space: A Pedagogical Proposal. In Literacy in the Digital University: Critical Perspectives on Learning, Scholarship, and Technology (pp. 95–110). Abingdon: Routledge. Bello, S. K. (2015). Robotics Application in Flexible Manufacturing Systems: Prospects and Challenges in a Developing Country. International Journal of Applied Science and Engineering Research, 5(5), 354–362. Bennett, L. (2016). Thinking Like a Brick: Posthumanism and Building Materials. In Posthuman Research Practices in Education (pp. 58–74). London: Palgrave Macmillan. Berman, D. H., & Hafner, C. D. (1989). The Potential of Artificial Intelligence to Help Solve the Crisis in Our Legal System. Communications of the ACM, 32(8), 928–938. Bethel, C. L., Eakin, D. K., Anreddy, S., Stuart, J. K., & Carruth, D. (2013, March). Eyewitnesses Are Misled by Human But Not Robot Interviewers.
In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (pp. 25–32). IEEE Press. Bignall, S., Hemming, S., & Rigney, D. (2016). Three Ecosophies for the Anthropocene: Environmental Governance, Continental Posthumanism and Indigenous Expressivism. Deleuze Studies, 10(4), 455–478. Boden, M., Bryson, J., Caldwell, D., Dautenhahn, K., Edwards, L., Kember, S., … & Sorrell, T. (2017). Principles of Robotics: Regulating Robots in the Real World. Connection Science, 29(2), 124–129. Botz-Bornstein, T. (2015). Virtual Reality: The Last Human Narrative? Leiden: BRILL. Braidotti, R. (2017). Posthuman, All Too Human: The Memoirs and Aspirations of a Posthumanist. Tanner Lectures, Yale University.


Braverman, I. (2015). More-Than-Human Legalities. The Handbook of Law and Society, 307. Brians, E. (2011). The "Virtual" Body and the Strange Persistence of the Flesh: Deleuze, Cyberspace and the Posthuman. Deleuze and the Body, 117–143. Broekhuizen, F., Dawes, S., Mikelli, D., Wilde, P., & Hall, G. (2016). Just Because You Write About Posthumanism Doesn't Mean You Aren't a Liberal Humanist: An Interview with Gary Hall. Networking Knowledge: Journal of the MeCCSA Postgraduate Network, 9(1). https://doi.org/10.31165/nk.2016.91.422. Brundage, M. (2015). Utopia, Artificial Intelligence, and the Future of Justice. Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., …, & Anderson, H. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv preprint arXiv:1802.07228. Bryson, J. J. (2010). Robots Should be Slaves. Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, 63–74. Butler, P. (2018). Making Enhancement Equitable: A Racial Analysis of the Term "Human Animal" and the Inclusion of Black Bodies in Human Enhancement. Journal of Posthuman Studies, 2(1), 106–121. Calo, R. (2016). Robots as Legal Metaphors. Harvard Journal of Law & Technology, 30, 209. Carnevale, A. (2015). Robots, Disability, and Good Human Life. Disability Studies Quarterly, 35(1). Coeckelbergh, M. (2010a). Robot Rights? Towards a Social-Relational Justification of Moral Consideration. Ethics and Information Technology, 12(3), 209–221. Coeckelbergh, M. (2010b). Moral Appearances: Emotions, Robots, and Human Morality. Ethics and Information Technology, 12(3), 235–241. Cordeiro, J. (2016). Technological Evolution and Transhumanism. In Deploying Foresight for Policy and Strategy Makers (pp. 81–92). Cham: Springer. Cudworth, E., & Hobden, S. (2015). Liberation for Straw Dogs? Old Materialism, New Materialism, and the Challenge of an Emancipatory Posthumanism. Globalizations, 12(1), 134–148.
Cudworth, E., & Hobden, S. (2017a). The Emancipatory Project of Posthumanism. Abingdon and New York: Routledge. Cudworth, E., & Hobden, S. (2017b). Post-Human Security. In Global Insecurity (pp. 65–81). London: Palgrave Macmillan. Datta, R. (2016). How to Practice Posthumanism in Environmental Learning: Experiences with North American and South Asian Indigenous Communities. IAFOR Journal of Education, 4(1), 52–67. Dilek, S., Çakır, H., & Aydın, M. (2015). Applications of Artificial Intelligence Techniques to Combating Cyber Crimes: A Review. arXiv preprint arXiv:1502.03552. Docherty, B. (2012). Losing Humanity: The Case Against Killer Robots. Dumas, A., & Turner, B. S. (2015). Introduction: Human Longevity, Utopia, and Solidarity. The Sociological Quarterly, 56(1), 1–17.


Ess, C. M. (2017). What’s Love Got to Do with It? Robots, Sexuality, and the Arts of Being Human. In Social Robots (pp. 57–79). Abingdon: Routledge. Fleckenstein, K. S., Keogh, B., Lee, J. R., Levy, M. A., McArthur, E., Mehler, J., … & Van Den Eede, Y. (2014). Design, Mediation, and the Posthuman. New York: Lexington Books. Forlano, L. (2017). Posthumanism and Design. She Ji: The Journal of Design, Economics, and Innovation, 3(1), 16–29. Fortunati, L. (2017). The Human Body: Natural and Artificial Technology. In Machines That Become Us (pp. 71–87). Routledge. Fox, S. (2018). Cyborgs, Robots and Society: Implications for the Future of Society From Human Enhancement with in-the-Body Technologies. Technologies, 6(2), 50. Frentz, T.  S. (2014). Transcending Embodiment: Communication in the Posthuman Condition. Southern Communication Journal, 79(1), 59–72. Fuller, S. (2016). Morphological Freedom and the Question of Responsibility and Representation in Transhumanism. Confero: Essays on Education, Philosophy and Politics, 4(2), 33–45. Gaard, G. (2017). Posthumanism, Ecofeminism, and Inter-Species Relations. In Routledge Handbook of Gender and Environment (pp.  115–129). New  York: Routledge. Gladden, M. E. (2015). Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman Neuropolity. Gless, S., Silverman, E., & Weigend, T. (2016). If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability. New Criminal Law Review: In International and Interdisciplinary Journal, 19(3), 412–436. Godin, C. (2018). What Would Human Rights with the Posthuman Become? Journal international de bioethique et d’ethique des sciences, 29(3), 154. Gorski, P. (2017). Human Flourishing and Human Morphogenesis: A Critical Realist Interpretation and Critique. In Morphogenesis and Human Flourishing (pp. 29–43). Cham: Springer. Graffam, G. (2012). A Posthuman Perspective on Virtual Worlds. 
In Human No More: Digital Subjectivities, Unhuman Subjects, and the End of Anthropology (pp. 131–146). Boulder: University Press of Colorado. Grunwald, A. (2016). What Does the Debate on (Post) human Futures Tell Us? In Perfecting Human Futures (pp. 35–50). Wiesbaden: Springer VS. Gubrud, M. (2014). Stopping Killer Robots. Bulletin of the Atomic Scientists, 70(1), 32–42. Gunkel, D. J. (2018). Robot Rights. Cambridge, MA: MIT Press. Hallevy, G. (2010). The Criminal Liability of Artificial Intelligence Entities-From Science Fiction to Legal Social Control. Akron Intellectual Property Journal, 4, 171. Hartzog, W. (2014). Unfair and Deceptive Robots. Maryland Law Review, 74, 785. Hayles, N. K. (2002). Flesh and Metal: Reconfiguring the Mindbody in Virtual Environments. Configurations, 10(2), 297–320.


Hayles, N.  K. (2008). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press. Herndl, D.  P. (2013). Virtual Cancer: BRCA and Posthuman Narratives of Deleterious Mutation. Tulsa Studies in Women’s Literature, 25–45. Hobden, S. (2015). Being “a Good Animal” Adorno, Posthumanism, and International Relations. Alternatives, 40(3–4), 251–263. Hua, J., & Ray, K. (2016). The Lives of Things: Native Objects, Human Rights and NDN-Indian Relationality. Prose Studies, 38(1), 12–33. Huff, C. (2017). After Auto, After Bio: Posthumanism and Life Writing. a/b: Auto/Biography Studies, 32(2), 279–282. Hughes, J. (2011). After Happiness, Cyborg Virtue. Free Inquiry, 32(1), 1–7. Huneeus, A. (2018). Human Rights and the Future of Being Human. AJIL Unbound, 112, 324–328. Islam, M. M. (2016). Posthumanism: Through the Postcolonial Lens. In Critical Posthumanism and Planetary Futures (pp. 115–129). New Delhi: Springer. Iuga, I. (2016). Transhumanism Between Human Enhancement and Technological Innovation. Symposion, 3(1), 79–88. Jenkins, C. (2015). Reproduction by Design: Sex, Robots, Trees, & Test-Tube Babies in Interwar Britain, Angus McLaren. Canadian Bulletin of Medical History, 32(2), 433–435. Jenkins, N. (2017a). No Substitute for Human Touch? Towards a Critically Posthumanist Approach to Dementia Care. Ageing & Society, 37(7), 1484–1498. Jenkins, N. (2017b, April). I’d Rather be a Cyborg Than An” Individual” with Dementia: Exploring Critical Posthumanism and Its Application to Dementia Policy and Practice. In Aging Graz 2017 International Conference: Cultural Narratives, Process & Strategies in Representations of Age and Aging. Joh, E. E. (2016). Policing Police Robots. UCLA Law Review Discourse, 64, 516. Johnson, A.  M., & Axinn, S. (2013). The Morality of Autonomous Robots. Journal of Military Ethics, 12(2), 129–141. Kahn, P. H., Ishiguro, H., Friedman, B., & Kanda, T. (2006, September). 
What Is a Human?-Toward Psychological Benchmarks in the Field of Human-Robot Interaction. In ROMAN 2006-The 15th IEEE International Symposium on Robot and Human Interactive Communication (pp. 364–371). IEEE. Käll, J. (2017). Converging Human and Digital Bodies. Posthumanism, Property, Law. Karamanou, M., Papaioannou, T.  G., Soulis, D., & Tousoulis, D. (2017). Engineering ‘Posthumans’: To Be or Not to Be? Trends in Biotechnology, 35(8), 677–679. Keogh, B. (2014). Cybernetic Memory and the Construction of the Posthuman Self in Videogame Play. Design, Mediation, and the Posthuman, 233–248. Kessler, N.  H. (2018). Ontology and Closeness in Human-Nature Relationships: Beyond Dualisms, Materialism and Posthumanism. Springer.


Keyvanpour, M.  R., Javideh, M., & Ebrahimi, M.  R. (2011). Detecting and Investigating Crime by Means of Data Mining: A General Crime Matching Framework. Procedia Computer Science, 3, 872–880. Kim, R.  E. (2010). The principle of sustainability: Transforming law and governance. Journal of Education for Sustainable Development, 4(2), 309–312. King, T. C., Aggarwal, N., Taddeo, M., & Floridi, L. (2019). Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions. Science and Engineering Ethics, 1–32. Kirk, R. G., Pemberton, N., & Quick, T. (2019). Being Well Together? Promoting Health and Well-Being Through More Than Human Collaboration and Companionship. Medical Humanities, 45(1), 75–81. Koch, A. (2005). Cyber Citizen or Cyborg Citizen: Baudrillard, Political Agency, and the Commons in Virtual Politics. Journal of Mass Media Ethics, 20(2–3), 159–175. Kruger, F. (2016). Posthumanism and Educational Research for Sustainable Futures. Journal of Education, 65, 77–94. Kyriakidou, M. (2016). Discussing Robot Crime Interviewers for Children’s Forensic Testimonies: A Relatively New Field for Investigation. AI & Society, 31(1), 121–126. Lee, J. (2016). Cochlear Implantation, Enhancements, Transhumanism and Posthumanism: Some Human Questions. Science and Engineering Ethics, 22(1), 67–92. Legarreta, P. (2018). Disruptive Capitalism: The Global Digital Network, the Colonization of Minds and the Fight for Humankind Emancipation. Derecom, 25, 3. Lehto, P., Ainamo, A., & Porokuokka, J. (2018). The Role of Master’s Level Students: Case Robots and Future of Welfare Services. In ICERI2018 Proceedings. Lenoir, T. (2002). Makeover: Writing the Body into the Posthuman Technoscape: Part One: Embracing the Posthuman. Configurations, 10(2), 203–220. Leslie-McCarthy, S. (2007). Asimov’s Posthuman Pharisees: The Letter of the Law Versus the Spirit of the Law in Isaac Asimov’s Robot Novels. Law, Culture and the Humanities, 3(3), 398–415. Liu, H. 
Y., & Zawieska, K. (2017). From Responsible Robotics Towards a Human Rights Regime Oriented to the Challenges of Robotics and Artificial Intelligence. Ethics and Information Technology, 1–13. Luciano, D., & Chen, M.  Y. (2015). Introduction: Has the Queer Ever Been Human? GLQ: A Journal of Lesbian and Gay Studies, 21(2), iv–207. Marchesini, R. (2017). Over the Human: Post-Humanism and the Concept of Animal Epiphany (Vol. 4). Cham: Springer. Margulies, J.  D., & Bersaglio, B. (2018). Furthering Post-Human Political Ecologies. Geoforum, 94, 103–106.


Marino, D., & Tamburrini, G. (2006). Learning Robots and Human Responsibility. International Review of Information Ethics, 6(12), 46–51. McNally, P., & Inayatullah, S. (1988). The Rights of Robots: Technology, Culture and Law in the 21st Century. Futures, 20(2), 119–136. McNeill, L. (2012). There Is No "I" in Network: Social Networking Sites and Posthuman Auto/Biography. Biography, 65–82. MDIV, J. S. (2012). Are We Forming Ourselves for a Posthuman Future? Ethics & Medicine, 28(2), 81. Meissner, G. (2019). Artificial Intelligence: Consciousness and Conscience. AI & Society, 1–11. Morgan, B., & Kuch, D. (2015). Radical Transactionalism: Legal Consciousness, Diverse Economies, and the Sharing Economy. Journal of Law and Society, 42(4), 556–587. Müller, V. C., & Simpson, T. W. (2016). Autonomous Killer Robots Are Probably Good News. In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons (pp. 67–81). Musiał, M. (2018). Loving Dolls and Robots: From Freedom to Objectification, from Solipsism to Autism? In Exploring Erotic Encounters (pp. 152–168). Leiden: Brill Rodopi. Naastepad, C. W. M., & Houghton Budd, C. (2019). Preventing Technological Unemployment by Widening our Understanding of Capital and Progress: Making Robots Work for Us. Ethics and Social Welfare, 1–18. Naastepad, C. W. M., & Mulder, J. M. (2018). Robots and Us: Towards an Economics of the 'Good Life'. Review of Social Economy, 76(3), 302–334. Ndonye, M. M. (2019). Mass-Mediated Feminist Scholarship Failure in Africa: Normalised Body-Objectification as Artificial Intelligence (AI). Nurka, C. (2015). Animal Techne: Transing Posthumanism. Transgender Studies Quarterly, 2(2), 209–226. Onishi, B. B. (2011). Information, Bodies, and Heidegger: Tracing Visions of the Posthuman. Sophia, 50(1), 101–112. Osoba, O. A., & Welser, W., IV. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence.
Santa Monica, CA: Rand Corporation. Owen, T., & Owen, J. (2015). Virtual Criminology: Insights from Genetic-Social Science and Heidegger. Journal of Theoretical & Philosophical Criminology, 7(1), 17. Pagallo, U. (2011). Robots of Just War: A Legal Perspective. Philosophy & Technology, 24(3), 307–323. Pagallo, U. (2013a). The Laws of Robots: Crimes, Contracts, and Torts (Vol. 10). Springer Science & Business Media. Pagallo, U. (2013b). What Robots Want: Autonomous Machines, Codes and New Frontiers of Legal Responsibility. In Human Law and Computer Law: Comparative Perspectives (pp. 47–65). Dordrecht: Springer.

244 

P. BLOOM

Pagallo, U. (2017). AI and Bad Robots: The Criminology of Automation. In The Routledge Handbook of Technology, Crime and Justice (pp. 642–652). Routledge. Palk, A. C. (2015). The implausibility of appeals to human dignity: an investigation into the efficacy of notions of human dignity in the transhumanism debate. South African Journal of Philosophy [Suid-Afrikaanse Tydskrif vir Wysbegeerte], 34(1), 39–54. Parry, B. (2018). The Social Life of “Scaffolds” Examining Human Rights in Regenerative Medicine. Science, Technology, & Human Values, 43(1), 95–120. Pasfield-Neofitou, S. (2017). Technologically Mediated Identity: Personal Computers, Online Aliases, and Japanese Robots. In Reconstructing Identity (pp. 207–242). Cham: Palgrave Macmillan. Pearsall, B. (2010). Predictive Policing: The Future of Law Enforcement. National Institute of Justice Journal, 266(1), 16–19. Pedersen, I., & Blakesley, D. (2013). Ready to Wear: A Rhetoric of Wearable Computers and Reality-Shifting Media. Anderson, SC: Parlor Press. Perry, W.  L. (2013). Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. Santa Monica, CA: Rand Corporation. Peters, R.  A. (2018, July). Re-creation, Consciousness Transfer and Digital Duplication: The Question of ‘What Counts as Human?’in Charlie Brooker’s Black Mirror. In Film-Philosophy Conference 2018. Pickering, A. (2001). Practice and Posthumanism: Social Theory and a History of Agency. The Practice Turn in Contemporary Theory, 163–174. Pitti, A. (2018). Ideas from Developmental Robotics and Embodied AI on the Questions of Ethics in Robots. arXiv preprint arXiv:1803.07506. Porpora, D.  V. (2017). Dehumanization in Theory: Anti-Humanism, Non-­ Humanism, Post-Humanism, and Trans-Humanism. Journal of Critical Realism, 16(4), 353–367. Porter, A. (2017, June). Bioethics and Transhumanism. In The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, 42(3), 237– 260. Oxford University Press. 
Rabkin, J., & Yoo, J. (2017). Striking Power: How Cyber, Robots, and Space Weapons Change the Rules for War. New York: Encounter Books. Reid, M. (2016). Rethinking the Fourth Amendment in the Age of Supercomputers, Artificial Intelligence, and Robots. West Virginia Law Review, 119, 863. Ribaux, O., Baylon, A., Roux, C., Delémont, O., Lock, E., Zingg, C., & Margot, P. (2010). Intelligence-Led Crime Scene Processing. Part I: Forensic Intelligence. Forensic Science International, 195(1–3), 10–16. Robertson, J. (2017). Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation. Oakland, CA: Univ of California Press. Rose, G. (2017). Posthuman Agency in the Digitally Mediated City: Exteriorization, Individuation, Reinvention. Annals of the American Association of Geographers, 107(4), 779–793.

7  LEGAL REBOOT: FROM HUMAN CONTROL TO TRANSHUMAN… 

245

Rose, D. E. (2019). Tracing the Subjectivities of the Changing Human: Hegel, Self-Understanding, and Posthuman Objective Freedom. Journal of Posthuman Studies, 2(2), 189–212. Salvini, P., Teti, G., Spadoni, E., Frediani, E., Boccalatte, S., Nocco, L., … & Carrozza, P. (2010). An Investigation on Legal Regulations for Robot Deployment in Urban Areas: A Focus on Italian Law. Advanced Robotics, 24(13), 1901–1917. Schuller, A. L. (2017). At the Crossroads of Control: The Intersection of Artificial Intelligence in Autonomous Weapon Systems with International Humanitarian Law. Harvard National Security Journal, 8, 379. Sharkey, N. (2009). The Robot Arm of the Law Grows Longer. Computer, 42(8), 116–115. Sharkey, N., Goodman, M., & Ross, N. (2010). The Coming Robot Crime Wave. Computer, 43(8), 115–116. Sharkey, N., & Sharkey, A. (2011). 17 the Rights and Wrongs of Robot Care. Robot Ethics: The Ethical and Social Implications of Robotics, 267. Shoham, Y., & Tennenholtz, M. (1995). On Social Laws for Artificial Agent Societies: Off-Line Design. Artificial Intelligence, 73(1–2), 231–252. Silva, D. F. (2017). From Human to Person: Detaching Personhood from Human Nature. In Legal Personhood: Animals, Artificial Intelligence and the Unborn (pp. 113–125). Cham: Springer. Smart, A., & Smart, J. (2017). Posthumanism: Anthropological Insights. Toronto: University of Toronto Press. Smithers, T. (1997). Autonomy in Robots and Other Agents. Brain and Cognition, 34(1), 88–106. Sorgner, S.  L. (2016). Perfecting Human Beings: from Kant and Nietzsche to Trans-and Posthumanism. Trans-Humanities Journal, 9(1), 41–61. Stagliano, A. (2019). Experiments in Posthumanism: On Tactical Rhetorical Encounters Between Drones and Human Body Heat. Computers and Composition, 52, 242–252. Stelarc, D. (2000). From Psycho-body to Cyber-systems: Images as Post-human Entities. In D.  Bell & B.  M. Kennedy (Eds.), The Cybercultures Reader (pp. 560–576). London: Psychology Press. Stone, C.  B., Neely, A.  
R., & Lengnick-Hall, M.  L. (2018). Human Resource Management in the Digital Age: Big Data, HR Analytics and Artificial Intelligence. In Management and Technological Challenges in the Digital Age (pp. 13–42). CRC Press. Terec-Vlad, L., & Terec-Vlad, D. (2014). About the Evolution of the Human Species: Human Robots and Human Enhancement. Postmodern Openings/ Deschideri Postmoderne, 5(3). Terranova, T. (1996). Posthuman Unbounded: Artificial Evolution and High-­ Tech Subcultures. FutureNatural: Nature, Science, Culture, 146–164.

246 

P. BLOOM

Thomaz, A. L., & Breazeal, C. (2008). Teachable Robots: Understanding Human Teaching Behavior to Build More Effective Robot Learners. Artificial Intelligence, 172(6–7), 716–737. Thweatt-Bates, J., van Huyssteen, J.  W., & Wiebe, E.  P. (2011). Posthuman Selves: Bodies, Cognitive Processes, and Technologies. In In Search of self: Interdisciplinary Perspectives on Personhood (p.  243). Cambridge: Wm. B. Erdmann’s. Timmins, B. (2017). New Sex Robots With ‘Frigid’Settings Allow Men to Simulate Rape. The Independent. Trigt, P. V., Kool, J., & Schippers, A. (2016). Humanity as a Contested Concept: Relations Between Disability and‘Being Human’. Social Inclusion, 4(4), 125–128. Veronese, C. (2016). Can the Humanities Become Post-Human: Interview with Rosi Braidotti. Relations: Beyond Anthropocentrism, 4, 97. von Essen, U. E., & Bornemark, J. (2019). Between Behaviourism, Posthumanism, and Animal Rights Theory. Equine Cultures in Transition: Ethical Questions. Vuong, T., Filippoupolitis, A., Loukas, G., & Gan, D. (2014, March). Physical Indicators of Cyber Attacks Against a Rescue Robot. In 2014 IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM WORKSHOPS) (pp. 338–343). IEEE. Wakefield, J. (2019, March 21). Can You Murder a Robot, BBC World Service Documentary. Waldby, C. (2003). The Visible Human Project: Informatic Bodies and Posthuman Medicine. London: Routledge. Walsh, S.  N., & Sculos, B.  W. (2018). Repressive Robots and the Radical Possibilities of Emancipated Automation. In The Political Economy of Robots (pp. 101–125). Cham: Palgrave Macmillan. Walton, R., & Rose, E. J. (2018). Factors to Actors: Implications of Posthumanism for Social Justice Work. In Posthuman Praxis in Technical Communication (pp. 91–117). Routledge. Wang, X., Liu, L., & Eck, J. (2008). Crime Simulation Using GIS and Artificial Intelligent Agents. In Artificial Crime Analysis Systems: Using Computer Simulations and Geographic Information Systems (pp. 
209–225). Hershey, PA: IGI Global. Wang, F. Y., Wang, X., Li, L., & Li, L. (2016). Steps Toward Parallel Intelligence. IEEE/CAA Journal of Automatica Sinica, 3(4), 345–348. Waters, B. (2016). From Human to Posthuman: Christian Theology and Technology in a Postmodern World. London: Routledge. Weaver, J. F. (2013). Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws. ABC-CLIO. Whitehead, P.  M. (2017). Expanding the Category “Human”: Nonhumanism, Posthumanism, and Humanistic Psychology. Lanham, MD: Lexington Books. Wolfe, C. (2018). Posthumanism Thinks the Political: A Genealogy for Foucault’s The Birth of Biopolitics. Journal of Posthuman Studies, 1(2), 117–135.

CHAPTER 8

Shared Consciousness: Toward a World of Transhuman Relations

Imagine a world without work, where everyone lives in material luxury, and there is no poverty or inequality. Imagine a society where everyone has everything they need and where life is focused on personal enjoyment and social exploration rather than individual competition and social division. Imagine living almost as long as you choose, in as many real and virtual realities as you can discover, and as several different interesting selves all at once or across separate “lifetimes”. Now imagine that you are experiencing this future world alongside non-humans who are an essential, everyday, and empowering part of your existence. This may sound like mere utopia, but many people are already realistically considering what such a world may in fact look like and how we can go about realising it in the present day. In his provocative book Fully Automated Luxury Communism, Aaron Bastani (2019) writes that under such a system we will see more of the world than ever before, eat varieties of food we have never heard of, and lead lives equivalent, if we so wish, to those of today’s billionaires. Luxury will pervade everything as a society based on waged work becomes as much a relic as the feudal peasant.

While these notions raise serious questions that must be resolved for them to be viable and desirable, not least about their ecological sustainability, they do point to an alternative and exciting vision of transhuman relations.

© The Author(s) 2020 P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5_8


This eighth and final chapter concludes the book by presenting a new vision of a future world of transhuman relations. It reviews how this “integrated” society has the potential to radically transform our economics, politics, and laws, and fundamentally how we live. We are headed toward an age of “shared consciousness” where transhumanism is not a dream but a way of life. A crucial question for both the present and beyond is how to make sure this coming world is as liberating and empowering as it is smart and technological.

The Need for Radical Dehumanization and Disruptive Integration

Thus far this book has argued for a new philosophical perspective and praxis in relation to human and non-human interactions and social relations. Notably, it has criticized the majority of existing theories of disruptive technology and post-humanism, which either explicitly or implicitly remain “human-centred”. Their concern is the future of humanity, whether its coming economic redundancy, its evolution as a species, or even its extinction. What this shared anthropocentrism misses is the needs, desires, and welfare of non-humans, whether plant, animal, or machine. As such, this narrow human-based focus commonly precludes the existential and social possibilities that such diverse intelligences and forms of being can offer, as well as intentionally or unintentionally reproducing historically constructed and quite restrictive human systems of power. If the new age is to be one of Artificial Intelligence, then humans must first decolonise and reprogramme their own thinking and practices in order to embrace these potentialities. This book thus calls for a radical “dehumanization”: the unlearning of human prejudices and ways of practically living. The fear that technologies will be “dehumanizing” is legitimate but misplaced. It reflects the terror that machines will either be used as tools by existing elites for their continued exploitation and marginalisation or that these machines will come to rule as if they were human elites themselves. The danger then is not “dehumanization” but further “rehumanization”: the reproduction of oppressive human systems and relations with advanced technologies. Such a rebooted humanity should invoke terror. The deployment of new disruptive technologies in the service of age-old human principles of conquest, domination, and exploitation is indeed a fearful combination. Yet there is hope in


our ability to individually and collectively “unhumanize” ourselves: to accept and explore different ways of existing that transcend these historic human limits. In ideological and practical terms, this book argues for the transition from a repressive “singularity” to a disruptive “integration”. More precisely, it argues for a move away from worrying that humanity will be replaced and ruled by an artificial “super-intelligence”, and toward conceiving how humans and non-humans can best live together. This is an ethos built above all on mutual welfare and experimentation. It demands, though, recognising that this relation does not start from a tabula rasa, in a vacuum where history can be forgotten or denied. To be truly forward thinking, humanity must first be willing to look backwards, to the histories of genocide, domination, and colonialism that have largely marked human existence to this point. If this sounds overly negative, then it is a good counterweight to those current perspectives of either industry 4.0 or optimistic transhumanism that engage (again, often unwittingly) in forms of historic “passive denial”, in which the crimes of the past have little bearing on the realities of the present or the possibilities of the future. Just as importantly, the risk of “singularity” comes directly from this history: it is the evolution of human imperialism and privilege into robot control and rule. The promise of an empowering integrative transhuman world is grounded in the hard work of decolonizing our contemporary thinking and society from this historic legacy.

From Disruption to Transformation

The contemporary period is undergoing rapid technological and social change, even as it seems to be looking increasingly backward politically and culturally. The rise of the Far-Right and the resurgence of explicit ethno-nationalism may make it seem that the twenty-first century is simply rebooting many of the worst aspects of the twentieth century. However, these troubling times are also giving rise to new technologies for organising and constituting identity, institutions and governance, ones which can either merely repeat the injustices of the past or directly confront and transcend them for a more just future. At present, digital technologies are reconfiguring the social imaginary of capitalism in quite critical but also dangerous ways. It is the shift, in this respect, from the twentieth-century social democratic promise to mitigate exploitation to the twenty-first-century promise to eliminate alienation:


At the center of contemporary discourse on technology—or the digital discourse—is the assertion that network technology ushers in a new phase of capitalism which is more democratic, participatory, and de-alienating for individuals. Rather than viewing this discourse as a transparent description of the new realities of techno-capitalism and judging its claims as true (as the hegemonic view sees it) or false (a view expressed by few critical voices), this article offers a new framework which sees the digital discourse as signaling a historical shift in the technological legitimation of capitalism, concurrent with the emergence of the post-Fordist phase of capitalism. Technology discourse legitimated the Fordist phase of capitalism by stressing the ability of technology and technique to mitigate exploitation. It hence legitimated the interventionist welfare state, the central planning in businesses and the economy, the hierarchized corporation, and the tenured worker. In contrast, contemporary technology discourse legitimates the post-Fordist phase of capitalism by stressing the ability of technology to mitigate alienation. It hence legitimates the withdrawal of the state from markets, the dehierarchization and decentralization of businesses, and the flexibilization of production and the labor process. (Fisher 2010: 229)

In this sense, new technologies are both being co-opted by existing capitalist business models and values and materially and socially altering them. Langley and Leyshon (2017: 26) interrogate the growing prevalence of digital platforms to reveal this simultaneous process of inscription and change between conventional market ideas and supposedly disruptive technologies, observing that:

When placing the platform at the centre of critical understandings of digital economic circulation, moreover, we have suggested that the platform is not merely a manifestation of wider transformations in the relations and structures of contemporary capitalism. For us, analytical attention should be given to the contingent configuration and consequences of the platform as a discrete mode of socio-technical intermediary and capitalist business arrangement. This led us to stress both the distinctive marketising intermediation of digital economic circulation by platforms, and the incorporation of platform-intermediated circulation into wider processes of capitalisation. To make multi-sided markets and coordinate network effects, platforms enrol users through a participatory economic culture and mobilise code and data analytics to compose immanent infrastructures. And, nested in an emergent platform business model that also performs the structure of venture capital fund investment and valorises potential for monopoly rents, platforms prioritise up-scaling and the direct and/or indirect extraction of rent from circulations and accompanying data trails.


To avoid this destiny of merely reconfiguring the existing status quo, it is crucial to reconsider how we produce and impart knowledge about technology and society. It involves, for instance, the inclusion of “emergent radical-progressive perspectives on post-work alternatives for education and society” (Means 2017: 21). These technologies thus must be viewed as disruptive not just to the economy but to politics, society, and even notions of selfhood. Emerging is a new framework for conceiving and operationalising “rationality” in regard to how AI and digital advances are continuously transforming the “incomputable” into the “computable” and therefore knowable. Returning again to the insights of the critical thinker Karen Brandiotti, who refers to this as “automated cognition”:

Automated cognition is central to today’s capitalism. From the rationalization of labour and social relations to the financial sector, algorithms are grounding a new mode of thought and control. Within the context of this all-machine phase transition of digital capitalism, it is no longer sufficient to side with the critical theory that accuses computation to be reducing human thought to mere mechanical operations …. If technocapitalism is infected by computational randomness and chaos, therefore also the traditional critique of instrumental rationality has to be put into question: the incomputable cannot be simply understood as being opposed to reason.

Disruptive technologies thus could cause a revolution not just to processes but to the very organisation of the economy and those acting within it (Desai 2013). Digitalisation can be compared, in this sense, to “the new steam”. It will involve new ways, furthermore, of producing and disseminating knowledge (Andersen and Pold 2014). Individually, it will represent an “ontological turn” for “becoming post-human” (Harris 2016). Collectively, it will serve as the impetus for creating new “post-capitalist” geographies and social relations (Chatterton 2016). This is represented in the fundamental distinction between technological innovation and social disruption. Big data and digital advances in communication have already innovated the status quo, not just in terms of opening up new profitable opportunities for a conventional market system but also in terms of updating how individuals are tracked, regulated, and controlled for this purpose (Fitzpatrick 2002). Nevertheless, technologies such as nanosystems portend more profound socio-economic changes on the horizon (Walsh 2004). Even before the financial


crisis, it was apparent that capitalism had to address the problems it was most responsible for creating if it was to survive and thrive (Hart 2005). Yet these prospective “unlimited business opportunities” for social good were underlain in reality by emergent forms of radical technology development connected to the “innovation value-added chain” (Hall and Martin 2005). Returning to the nearer present, the creation of potentially transformative technologies is increasingly being matched by the introduction of just as disruptive ideological perspectives, such as “autonomous Marxism” (Hall 2015). The true implications of these technologies, hence, will be measured primarily by their short- and long-term social ramifications. In the contemporary era, “posthuman” forms of solidarity-based politics are arising through the use of social media platforms (Prasad 2016). These efforts revive older debates about which technologies humanity chooses to continue to develop, and whether they will serve purposes of control or emancipation (Badham 1986). Critical for such updated discussions of the social choices underpinning technology creation and promotion is the question of in whose interest it is being done. Too often it is understood within the very narrow gaze of the middle and upper classes and those living in “developed” countries. It is imperative, therefore, to shift this gaze to those who have been historically economically and socially exploited and marginalized (Silver 2013). Doing so would also better reveal the “limits” of current capitalist development and corporate power globally, thus opening the space for new transhuman geographies to arise (Sheppard 2016). As technology shows the limits of what is currently feasible within the present status quo, it also reflects the potential for what could be possible under a different system (De Peuter and Dyer-Witheford 2010). Required, in turn, is a refocusing of current human energy toward the fostering of a genuinely transformative form of transhumanism.
It entails rejecting ideas that technologies are inherently disruptive in favour of critically interrogating their varied cultural and material ramifications (King and Baatartogtokh 2015). A recurring theme of this book is to recognise how much of what may seem new may actually represent a return of past tropes and values (Stambler 2010). Equally important is identifying which visions of the future are shaping the present and its “forward thinking” desires (Coenen 2014). Such reflection also entails engaging in the “unhumanising” process of deconstructing conventional human and nonhuman archetypes, rejecting the colonial basis of human knowledge rooted in phenotypes and essentialized categorisations (Jones 2017). In


doing so, human longings for better worlds, their optimism for disrupting the present, are grounded in a shared aspiration to challenge their own past and to dream of a progress beyond just themselves and their own needs (Kroker and Kroker 2016). The overarching aim of this wide-ranging disruption is the production of an integrative human and non-human politics, society, and economy. Not surprisingly, capitalists are recognising how crucial values of integration are to strategies of “managing innovation” (Tidd and Bessant 2018). While their focus is on ensuring that these technological advances remain marketable and, therefore, profitable, those seeking more disruptive alterations of present systems and practices must also embrace an integrative mindset, in which humans and non-humans are involved in a process of mutual social design (Hasse and Søndergaard 2019). Concretely, this will mean applying such progressive notions of an integrative future to everyday institutions and relationships.

Deprogramming and Unhumanising “Industry 4.0”

A crucial part of this integrative transhuman society is the ability to expand human intelligence beyond its present-day limitations. A key promise of AI, computerisation, and robotics is that they can offer profound social insights that would be unavailable to contemporary human knowledge and perspectives (Fujita 2018). These insights transcend mere technological know-how or complex data analysis. They are information that can provide new and vital wisdom as to what can and should constitute “the good society” within diverse social contexts (Cath et al. 2018). It also opens the possibility for alternative and shared ways to experience and express these forms of intelligence (Steels and Brooks 2018). At their heart, these efforts will demand that humans reconceive traditional understandings of what counts as intelligence, whether human or artificial. In the current period, to be “smart” is dominantly aligned with values of efficiency and finding innovative methods for extracting profit. This is extending to perceptions of what type of knowledge and wisdom will be needed to succeed in the future (Webster and Ivanov 2019). Just as troubling and potentially outdated are perspectives that seek to programme robots and AI to reflect these narrow market values, thus transferring human social programming to the non-humans they have developed and designed (Quote—Kaplan). At perhaps the most obvious and to a certain extent most basic level, it requires reconsidering


what a “good AI society” may realistically be (Floridi et al. 2018). This also includes considering the ethical relationship between autonomous and artificial intelligence, such as in the case of self-driving cars (Lin et al. 2017). It raises the additional profound and increasingly urgent question of whether we are “heading toward artificial intelligence 2.0” (Pan 2016). Absolutely critical for this expansion of our intelligence is the recognition and celebration of just how diverse it is. This awareness of intelligence diversity has its roots, as discussed in previous chapters, in the neurodiversity movement associated with autism. It is now extending to how we currently collect and analyse data as well (Letouzé 2018). This includes, further, completely reimagining how humans physically and virtually inhabit the world. In July 2017, the Chinese Government put forward “The Program for Developing a New Generation of Artificial Intelligence”, which sought to redefine humanity’s traditional “binary space” of existence between the physical and the social:

Before AI was available, people were living in a binary space which consisted of physical space (P for short) and human social space (H for short). In this binary space, the orders for human activities are decided by the interactions and interrelations among the people and between man and object and man acts as the formulator and dominator in human social orders. With the rapid development of mega-data, cloud computing and IOT, intelligent mobile devices, wearable alliances, and “Internet+” react on different sectors of human society and promote the advent of the third industrial revolution and the intelligence era, which drive people to the ternary space (PHC) marked by physical space (P), human social space (H), and CyberSpace (C for short). In the ternary space (PHC), the orders of human society will be invariably restructured.
Whether you are aware of such change or not, the profound influence upon human social life which is brought by artificial intelligence becomes a consensus of all walks of life. Therefore, mankind should take the initiative measures so that they may adapt themselves to such change. (Zhang et al. 2018: 1)

Equally significantly, it requires a better understanding of how to “integrate robot ethics and machine morality” (Malle 2016). Through enlarging the scope of transhuman existence and morality, this could lead to profound and enriching new “human-artificial intelligence partnerships” (Jennings 2018). The risk, though, with this emphasis on diversity is that it simply reinforces often laudable but ultimately problematic ideologies of liberal tolerance.


These partnerships cannot be confined to mere mutual recognition and acceptance. Rather, they must actively work to decolonise and deprogramme conventional human values and practices: to reconfigure the human not as a reified subject of compassion and free choice but as one whose history up until the present is largely defined by exclusion, violence, and domination. If the future is to be different, then this must be premised on a committed and integrated process of mutual learning between humans and non-humans. Indeed, this will be essential if “industry 4.0” is to be as truly revolutionary as it is so commonly touted to be. The promotion of the fourth industrial revolution may seem to be the epitome of ‘hi-tech’, yet it treads in the same discursive waters as the first wave of industrialisation, one premised on logics of categorisation, phenotypes, and ‘evidence-based’ privilege. Previously, this was a virulent mixture of essentialised racism, evolving forms of race-based measurement and theorising, as well as scientific methods for maximizing efficiency and productivity. A commonly forgotten part of this history was the perceived need to ensure that those with “bad” or “unproductive” genes did not mix with those who were deemed genetically capable of contributing to and benefiting from such progress. It is also worth mentioning that these ideas became truly hegemonic in part as a reaction to the emergence and then suppression of incipient counter-imperialist protests linking industrialisation and liberal free markets to the oppression of ‘working people’ in both the core and, to a lesser but not insubstantial extent, the periphery. In the contemporary period, this is being updated to include non-human intelligences and subjects. Here it is an elite judgement about the ability of certain groups of people to truly take advantage of ‘industry 4.0’, one that has underlying racial and class undertones.
On the one hand, those in post-industrial contexts are viewed sympathetically but inevitably as ‘victims of progress’, judged to be incapable of adapting to this nascent transhuman economic reality. On the other, those in ‘developing’ countries are expected to discipline themselves anew to overcome past traumas and injustices in order to optimistically take advantage of this hi-tech future. There is a significant component of segregation here as well, whose crucial importance should not be dismissed. Those that fail to reskill, who refuse to be resilient, who are unwilling to flexibly rearrange their existence to compete and cooperate with these new non-human colleagues, are deemed justifiably forgotten by history and must be cordoned off from those who are willing to embrace this brave new liberal world. It is not genetics so much as behaviour, in the first instance. The great


mass of underemployed and unemployed who will not or cannot become digital success stories and valuable parts of the coming “workforce 4.0” will be culturally and then quite literally materially excluded from those who are successful. There is a biological aspect here as well, in that as inequality grows, the segregation will become more formal and the justifications more essentialised. It is not surprising, hence, that many of our current plutocrats are so invested in eugenic explanations for their success. This eugenic idea will be refashioned, though, going from the genetic per se to those who are properly socially programmed, whether human or not, for being triumphant in this new hi-tech industrial economy. The dividing line is between those who have rebooted themselves and those who are merely outdated. Where once it was assumed, scientifically and morally, that non-whites and the poor were simply not intelligent enough to be free and prosperous humans, now it will be proclaimed, sometimes sadly but as based on clear and unassailable data, that some people and robots are tragically not ‘intelligent’ enough to be happy and healthy transhuman subjects. The promise of twentieth-century racial and ethnic integration must evolve into a more radicalised form of transhuman integration. The goal is not simply to allow humans and non-humans to better co-exist, or to preserve the peace between them. Nor is it to ensure that all humans have the same opportunities to exploit machines or that every machine has political, legal, and social equality. Rather, it should be a sustained universal deprogramming effort to create diverse and localised forms of “situated knowledge” that unhumanise industry 4.0 and decolonise the future.

Liberating Intelligence

Transhumanism is commonly, if not predominantly, associated with themes of human enhancement: the ability to exploit new technologies such as AI and robotics to expand humans' existing capabilities. This could be an individual-level augmentation, such as giving sight to a blind person, or a species-level improvement, allowing humans to see further and more than they ever have before. Politically, this has catalysed nascent discourses of regulation and control based on fears that these enhancements could, and most probably will, lead to fresh abuses of power and relationships of domination. Of course these concerns are quite valid within a system where competition and conquest (both individual and collective) still reign supreme. And it is profoundly revealing that the first and primary social impulse when confronted with the prospect of human enhancement is one of discipline. Reflected here is the contemporary need to manage, and if and when required repress, desires for liberation. The aim, though, must not be reduced to a merely libertarian embrace of total freedom for humans to exploit technology in whatever way they choose. The advocacy of such a “pure” freedom ignores actual power dynamics. Further, it fails to consider the current limitations of our aspirations and experiences of agency. Here, new innovations would ultimately lead to conventional outcomes of inequality and exploitation. The unbridled development of technology would in fact merely enhance mass oppression rather than liberate humanity from itself. Transhumanism, by contrast, has the potential to transform the very meaning and goals of liberation. It is rooted in freeing human consciousness from its narrow ideological and cultural limits through greater exposure to and cooperation with non-human intelligence (Nørskov 2017). This leads to a serious and important ethical question about our responsibility not only to robots and AI but to ourselves. The advancement of social robotics brings to the fore the need to consider what type of sociality humans are currently providing for non-human programming (Taipale et al. 2015). The rise of “humane robots” with an “anthropomorphic mind” asks even more starkly whether humans are in fact humane and therefore able to develop machine learning for this purpose (Sandini and Sciutti 2018). More precisely, what is our collective and individual duty of care to robots to ensure that, if their knowledge is rooted in their experiences of us, they develop a “machine morality” that transcends the ethical problems that have historically plagued human relations (Bigman et al. 2019)? Here, post-humanism becomes a living project of both decolonisation and “unhumanising” for the sake of politically and morally emancipating humans and non-humans alike (Hudson 2018).
The prospect of transhuman relations, therefore, is as much existential as it is technological. It demands a critical reflection and material reckoning with how humans think, live, and act. A continual, and not illegitimate, worry is that the rise of non-human intelligence will take away even a modicum of “free will”, exchanging autonomy of judgement and responsibility for action for data-based predictions. This has led some, like the legal scholar Pasquale (2017), to propose “the fourth law of robotics”. It has also offered a troubling glimpse into the potential “dark side of ethical robotics” (Vanderelst and Winfield 2018). Yet it also gives humans greater bearing to treat the emergence of such technological consciousness as an existential choice for creating and programming a better world (Selisker 2015). It is precisely this existential leap of faith, this shared decision as to how we will recode our realities, that stands as the prime “ethical challenges to citizens of ‘the automatic age’” (Bynum 2017). The growth of “social robots”, in turn, is commonly framed as a query of whether they are “things or agents” (Alač 2016). The same could be asked of humans during this time, but in a much more politicised and existential way: are we merely reproducing our own history or remaking it anew for a coming transhuman world?

ReCoding Reality

Transhumanism offers the tantalizing and not insignificant potential to liberate intelligence through processes of decolonisation and unhumanisation. In particular, it does so in two critical and intertwining ways. The first is through a process of profound historical reflection by humans on their past, present, and future. Rather than simply reify the human as “good”, “free”, and “compassionate”, what is required is a thorough investigation of how we as a species have socially defined these terms and, in doing so, limited their theoretical and concrete potentialities. Instead of protecting the robotic from the human, it is imperative to trouble the very notion of “human security” so that we may protect humans and non-humans alike from our current ideologies and humanistic tendencies toward domination, exploitation, and unequal categorisation. The second is to actually learn from such non-human intelligence: to be morally, ethically, and epistemologically enhanced by our engagement with how robots and AI “see” and “act” in their social environment. This thoroughly “human” liberation, the breaking of humanistic ideological and empirical chains, can serve as the very foundation for an exciting transhuman recoding of our shared realities. At stake, firstly, though, is recognising that there is no “natural” reality. The post-structuralist emphasis on the social construction of all experience and identity is an important intervention in this respect. It puts into focus the fact that the desire for an “authentic” existence untainted by cultural influences or programming is itself a compelling and potentially dangerous myth. It plays into longings for humans to be “free”, autonomous in their decisions and uninfluenced by external forces who would secretly or explicitly have them do their bidding. The fear, then, is one of propaganda: an Orwellian dystopia where all our thoughts and actions must match those of “Big Brother”, whether that is, in the “real world”, government or corporate based. These concerns are perhaps particularly striking in an era of “fake news” where “facts” are deemed irrelevant and everything can be digitally manipulated to serve the needs of those in power or seeking it. The prospect of social programming contains, therefore, a rather ominous undertone of cultural control and the erosion of any sense of truth. However, these fears are as much a reflection of our current power structures as they are a genuine challenge to the possibilities of reconfiguring freedom and “truth” around a critical project of personal and collective social programming. In the here and now, it is precisely the contingent nature of “truth telling” and facts that makes the dream of a “pure” and “objective” reality so ultimately and ironically dangerous. It assumes that underneath all of this misinformation and digital manipulation of our preferences and beliefs, there is a singular unvarnished reality waiting to be discovered, free from such technologically aided corruption. This is an understandable but misguided desire, as it plays into the hands of a status quo that is seeking to use these new forms of social media induced misinformation to naturalise its own worldview. Hence, it is not surprising that “mainstream news” with a strong corporate bias would be at the forefront of proclaiming its ideological neutrality and reporting “just the facts”. This is even more worrying given how easy it is to use tools such as big data to present these perspectives as evidence based and therefore completely accurate, free from any bias. This does not, though, mean a full-on embrace of relativism or a rejection of social “truths”. Instead, it requires the radical repurposing of such technologies for uncovering the deeper complexity of this multi-faceted existence.
It means uncovering which voices are being silenced, and for what ideological and political reasons. Consequently, truth is associated with a process of “queering” our socially mediated realities, providing a continual and ongoing public reflection on whose experience is and is not being highlighted and “trending”. This is already evident in how these platforms are being used to give voice to those traditionally marginalised and unheard, as, for instance, the Black Lives Matter movement has used Twitter and Facebook to make its views and social struggle go viral. These attempts at combining an ethos of queerness and viral politics can drive a more radical transhuman agenda. They help bring non-human intelligence into these public discussions and into the popular consciousness. Truth, then, is a matter of continuing to interrogate through data who is being included, which “facts” and non-facts are going viral, and how this is shaping our realities. Reprogramming becomes a rallying call for recognising that we are a product of various forms of social programming, and for seeking to control this process. This challenge is, further, not one of simply reproducing another “fake” reality but of exploring in more depth which realities exist that are being hidden. It is a form of hi-tech collective archaeology, whose purpose is social discovery and cultural expansion. A driving force and potential outcome of these efforts is the ability to move beyond our shared anthropocentric reality. Although it is increasingly clear that our future will be shared with non-human intelligence, there is still a strong clinging to a “human-centred” worldview. There is a longing to reclaim “the human”, to prevent “our humanity” from being overtaken by machines. Instead, this should be an invitation to transcend “the human” and its progressively obsolete debates over “human nature” in favour of a more nuanced, context-specific, and curiosity-driven exploration of different hybrid consciousnesses and cyborg ways of being. To a certain extent these are already apparent in the ecological movements of the latter half of the twentieth century, which ask individuals to expand their sensory horizons to “hear” and feel the nature that surrounds them (Liu et al. 2014). In a similar way, data and continual engagement with AI can be an opportunity to “experience” the non-human intelligence that currently surrounds us, and to learn from it diverse and previously unmined ways of existing in the world (Butler 2007). It is precisely this existential immersion and freedom that is being promoted in the call to adopt a transhuman existence.
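The notion of interrogating through data which voices are "going viral" and which are excluded can be made concrete with a minimal sketch. The snippet below is purely illustrative: the sample posts, the "voice" labels, and the helper functions `trending` and `voice_share` are invented for this example and do not come from any platform's actual API.

```python
from collections import Counter

# Hypothetical sample of posts, each tagged with its hashtag and the
# kind of voice it amplifies ("mainstream" vs "marginalised").
posts = [
    {"hashtag": "#BlackLivesMatter", "voice": "marginalised"},
    {"hashtag": "#BlackLivesMatter", "voice": "marginalised"},
    {"hashtag": "#MarketWatch", "voice": "mainstream"},
    {"hashtag": "#MarketWatch", "voice": "mainstream"},
    {"hashtag": "#MarketWatch", "voice": "mainstream"},
    {"hashtag": "#DisabilityRights", "voice": "marginalised"},
]

def trending(posts, top_n=3):
    """Rank hashtags by frequency, a crude proxy for what is 'going viral'."""
    return Counter(p["hashtag"] for p in posts).most_common(top_n)

def voice_share(posts):
    """Measure whose experience the trending conversation represents."""
    counts = Counter(p["voice"] for p in posts)
    total = sum(counts.values())
    return {voice: n / total for voice, n in counts.items()}

print(trending(posts))
print(voice_share(posts))
```

The point of such an audit is not the counting itself but the reflexive question it enables: once the shares are visible, one can ask why the distribution looks the way it does and which voices the platform's mediation is amplifying or silencing.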
In uncovering hidden realities, ones in which humans do not reign supreme and in which “human” intelligence is not privileged, there is the prospect of actually beginning to discover and create different transhuman realities. Freedom and agency, in this regard, are reconfigured as a form of “dislocation” (Rossini et al. 2018). The challenge is how to transform AI into an opportunity for mutual enhancement. This shifts the traditional focus of transhumanism from human enhancement to reality augmentation. The emphasis on human perfectibility is replaced, in this sense, by an ethos of transhuman possibility. It means ensuring that the incorporation of non-human intelligence into all areas of social existence is not “human-centred” but collectively emancipating and liberating. Revealed is a new era of “co-emergence”, combining human and non-human intelligence and desires into ongoing experimentation, new forms of “situated knowledge”, and innovative types of social ordering. Indeed, the greater interaction of machines and humans transcends their mere protection of or toleration for each other. It is, by contrast, a renewed spirit of collective possibility and historical agency, where what is enhanced is our ability to shape our realities together. The prospects for discovering new identities, institutions, and governance are critically transformed into opportunities for radically improving, recoding, unhumanising, and expanding our shared potential through the embrace of transhuman relations.

References

Alač, M. (2016). Social Robots: Things or Agents? AI & Society, 31(4), 519–535.
Andersen, C. U., & Pold, S. B. (2014). Post-digital Books and Disruptive Literary Machines. Formules, 18, 164–183.
Badham, R. J. (1986). Technology and Public Choice: Strategies for Technological Control and the Selection of Technologies. Prometheus, 4(2), 288–305.
Bastani, A. (2019). Fully Automated Luxury Communism. Verso Books.
Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding Robots Responsible: The Elements of Machine Morality. Trends in Cognitive Sciences, 23(5), 365–368.
Butler, T. (2007). Memoryscape: How Audio Walks Can Deepen Our Sense of Place by Integrating Art, Oral History and Cultural Geography. Geography Compass, 1(3), 360–372.
Bynum, T. W. (2017). Ethical Challenges to Citizens of ‘The Automatic Age’: Norbert Wiener on the Information Society. In Computer Ethics (pp. 3–12). Routledge.
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach. Science and Engineering Ethics, 24(2), 505–528.
Chatterton, P. (2016). Building Transitions to Post-Capitalist Urban Commons. Transactions of the Institute of British Geographers, 41(4), 403–415.
Coenen, C. (2014). Transhumanism and Its Genesis: The Shaping of Human Enhancement Discourse by Visions of the Future. Humana.Mente Journal of Philosophical Studies, 7(26), 35–58.
De Peuter, G., & Dyer-Witheford, N. (2010). Commons and Cooperatives. Affinities: A Journal of Radical Theory, Culture, and Action.
Desai, D. R. (2013). The New Steam: On Digitization, Decentralization, and Disruption. Hastings Law Journal, 65, 1469.
Fisher, E. (2010). Contemporary Technology Discourse and the Legitimation of Capitalism. European Journal of Social Theory, 13(2), 229–252.

Fitzpatrick, T. (2002). Critical Theory, Information Society and Surveillance Technologies. Information, Communication & Society, 5(3), 357–378.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Schafer, B. (2018). An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.
Fujita, M. (2018). AI and the Future of the Brain Power Society: When the Descendants of Athena and Prometheus Work Together. Review of International Economics, 26(3), 508–523.
Hall, R. (2015). The Implications of Autonomist Marxism for Research and Practice in Education and Technology. Learning, Media and Technology, 40(1), 106–122.
Hall, J. K., & Martin, M. J. (2005). Disruptive Technologies, Stakeholders and the Innovation Value-Added Chain: A Framework for Evaluating Radical Technology Development. R&D Management, 35(3), 273–284.
Harris, O. J. (2016). Becoming Post-Human: Identity and the Ontological Turn.
Hart, S. L. (2005). Capitalism at the Crossroads: The Unlimited Business Opportunities in Solving the World’s Most Difficult Problems. New York: Pearson Education.
Hasse, C., & Søndergaard, D. M. (2019). Designing Robots, Designing Humans.
Hester Jr, D. M., Susan, O. C., MTP, R., & Rowe, N. Technology, Transhuman, and Transpersonal: Man and Machine Collaborations.
Hudson, H. (2018). Larger Than Life? Decolonising Human Security Studies Through Feminist Posthumanism. Strategic Review for Southern Africa, 40(1), 46.
Jennings, N. (2018, December). Human-Artificial Intelligence Partnerships. In Proceedings of the 6th International Conference on Human-Agent Interaction (pp. 2–2). ACM.
Jones, R. (2017). Archaic Man Meets a Marvellous Automaton: Posthumanism, Social Robots, Archetypes. Journal of Analytical Psychology, 62(3), 338–355.
King, A. A., & Baatartogtokh, B. (2015). How Useful Is the Theory of Disruptive Innovation? MIT Sloan Management Review, 57(1), 77.
Kroker, A., & Kroker, M. (2016). Exits to the Posthuman Future: Dreaming with Drones. In Critical Posthumanism and Planetary Futures (pp. 75–90). New Delhi: Springer.
Langley, P., & Leyshon, A. (2017). Platform Capitalism: The Intermediation and Capitalisation of Digital Economic Circulation. Finance and Society, 3(1), 11–31.
Letouzé, E. (2018). Big Data, Open Algorithms, and Artificial Intelligence for Human Development. 10442/15809, 00-13.
Lin, P., Abney, K., & Jenkins, R. (Eds.). (2017). Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. New York: Oxford University Press.

Liu, J., Kang, J., Behm, H., & Luo, T. (2014). Effects of Landscape on Soundscape Perception: Soundwalks in City Parks. Landscape and Urban Planning, 123, 30–40.
Malle, B. F. (2016). Integrating Robot Ethics and Machine Morality: The Study and Design of Moral Competence in Robots. Ethics and Information Technology, 18(4), 243–256.
Means, A. J. (2017). Education for a Post-Work Future: Automation, Precarity, and Stagnation. Knowledge Cultures, 5(1), 21–40.
Nørskov, M. (2017). Social Robots: Boundaries, Potential, Challenges. Taylor & Francis.
Pan, Y. (2016). Heading Toward Artificial Intelligence 2.0. Engineering, 2(4), 409–413.
Pasquale, F. (2017). Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society. Ohio State Law Journal, 78, 1243.
Prasad, P. (2016). Beyond Rights as Recognition: Black Twitter and Posthuman Coalitional Possibilities. Prose Studies, 38(1), 50–73.
Rossini, M., Herbrechter, S., & Callus, I. (2018). Introduction: Dis/Locating Posthumanism in European Literary and Critical Traditions. In European Posthumanism (pp. 11–28). Routledge.
Sandini, G., & Sciutti, A. (2018). Humane Robots: From Robots with a Humanoid Body to Robots with an Anthropomorphic Mind. ACM Transactions on Human-Robot Interaction, 16(7).
Selisker, S. (2015). The Existential Robot. Contemporary Literature, 56(4), 695–700.
Sheppard, E. (2016). Limits to Globalization: The Disruptive Geographies of Capitalist Development. Oxford: Oxford University Press.
Silver, B. (2013). Theorising the Working Class in Twenty-First-Century Global Capitalism. Workers and Labour in a Globalised Capitalism, 46–69.
Stambler, I. (2010). Life Extension: A Conservative Enterprise? Some fin-de-siècle and Early Twentieth-Century Precursors of Transhumanism. Life, 21, 1.
Steels, L., & Brooks, R. (2018). The Artificial Life Route to Artificial Intelligence: Building Embodied, Situated Agents. Routledge.
Taipale, S., Vincent, J., Sapio, B., Lugano, G., & Fortunati, L. (2015). Introduction: Situating the Human in Social Robots. In Social Robots from a Human Perspective (pp. 1–7). Cham: Springer.
Tidd, J., & Bessant, J. R. (2018). Managing Innovation: Integrating Technological, Market and Organizational Change. John Wiley & Sons.
Vanderelst, D., & Winfield, A. (2018, December). The Dark Side of Ethical Robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 317–322). ACM.

Walsh, S. T. (2004). Roadmapping a Disruptive Technology: A Case Study: The Emerging Microsystems and Top-Down Nanosystems Industry. Technological Forecasting and Social Change, 71(1–2), 161–185.
Webster, C., & Ivanov, S. H. (2019). Robotics, Artificial Intelligence, and the Evolving Nature of Work. In Business Transformation in Data Driven Societies. Palgrave Macmillan. (Forthcoming).
Zhang, S., Yang, C., Qian, N., Tang, Q., Luo, X., Leng, T., et al. (2018). Artificial Intelligence and People’s Consensus. In Reconstructing Our Orders (pp. 1–27). Singapore: Springer.

Index

A
Administration, 135, 160, 161
Agency, 14, 22, 41, 43, 44, 46, 47, 75, 99, 105, 107, 114, 137, 145, 150, 176, 179, 184, 196, 201, 217, 219, 220, 222, 227, 228, 231, 232, 234, 237, 257, 260, 261
Alienation, 94–101, 104, 249, 250
Anthropocene, 32, 34, 39–43, 56, 219
Anthropocentric, 3, 6, 17–19, 22, 24, 32, 34, 36–43, 47–49, 53, 76, 103, 113, 146, 183, 192, 220, 260
Artificial intelligence (AI), 1–6, 12–23, 31, 33, 34, 39, 42, 49, 55, 67–83, 93, 94, 98–101, 103–105, 107, 109, 110, 112–114, 116–119, 131–134, 139–145, 150, 151, 153, 154, 156, 157, 159, 161, 173, 175–177, 182, 185, 189, 191, 193, 196, 198, 212–218, 229, 230, 232–236, 248, 251, 253, 254, 256–258, 260
Autonomy, 41, 44, 86, 114, 120, 149, 150, 217–220, 257

B
Bias, 54, 67, 72–75, 79, 119, 220, 229, 232, 259
Big Data, 4, 12, 15, 72–76, 85, 105, 107, 108, 132, 136, 149, 174, 176, 177, 189, 191–193, 196, 199, 213, 214, 218, 231, 251, 259
C
Capitalism, 96, 99, 100, 109, 119, 137, 142, 155, 157, 159, 160, 185, 214, 249–252
Care (caring machines), 9, 57, 76–78, 84, 101–103, 108, 109, 111, 113, 120, 192, 216, 219, 236, 257
Colonialism, 191, 249
Consciousness, 2, 3, 7, 11, 14, 18–22, 32, 34, 45, 51–54, 68, 69, 71, 74, 82, 94, 105, 109, 110, 113, 175, 187, 199, 212, 214, 236, 247–261
Cyborg, 3, 21, 68, 75, 153, 174–179, 187, 217, 221, 225, 227, 234, 237, 260

© The Author(s) 2020 P. Bloom, Identity, Institutions and Governance in an AI World, https://doi.org/10.1007/978-3-030-36181-5


D
Development, 8, 15, 18, 20, 31, 33, 38, 39, 55, 68, 69, 71, 73, 75, 76, 80, 82, 95, 98, 99, 101–104, 110, 114, 119, 120, 132–137, 139, 140, 142, 144, 150, 154, 156–158, 160, 161, 180–184, 189, 194, 195, 198, 201, 215–218, 222, 227, 231, 235, 252, 254, 257
Digital (digitalisation), 3, 5, 6, 12, 14–16, 19, 21, 46, 51, 69, 73, 77, 82, 97, 98, 100, 105, 112, 114, 115, 117, 120, 134, 136–141, 143–145, 147, 148, 152, 157–160, 173, 174, 177, 178, 180–184, 194, 195, 200, 214, 221, 225, 226, 228, 235, 249–251, 256, 259
Disruptive technology, 6, 14, 15, 33, 54, 57, 72, 76, 98, 104, 117, 152, 157, 182, 184, 218, 229, 248, 250, 251
Dystopian, 1–3, 6, 13, 23, 68, 76, 83, 119, 176, 231
E
Emancipation, 44, 86, 100, 106, 109, 116, 154, 173, 192, 201, 234, 236
Ethical intelligence, 75–78
Eugenics, 10, 13, 83, 104, 256
Evolution, 5, 7–9, 18, 20, 22, 24, 32, 34, 40, 48, 67, 68, 72, 81, 83, 98, 111, 117, 143, 149, 156, 174, 175, 193, 212, 215, 224, 226, 248, 249
Exploitation, 5, 13, 22, 37, 39, 47, 77, 101, 107, 116, 118, 144, 147, 150, 188, 189, 191, 196, 234, 235, 248–250, 257, 258

F
Fourth Industrial Revolution, 78, 80, 101, 132, 147, 255
Free market, 8, 40, 85, 116, 153, 177, 255
Futures, 1–24, 31–34, 39, 41, 42, 45–47, 50, 53, 56, 67–73, 76–81, 83, 84, 93–120, 131–162, 174–176, 178–181, 183, 184, 187, 188, 192, 194, 196, 199, 201, 212, 223, 226, 228–231, 234, 236, 237, 247–249, 252, 253, 255, 256, 258, 260
G
Governance, 3, 83, 120, 137, 151, 157, 158, 160, 161, 175, 178, 182, 193, 195, 196, 199, 233, 249, 261
H
Human-centred, 5, 13, 17, 18, 23, 33–34, 38, 39, 41, 42, 47–50, 54, 57, 81, 103, 201, 218, 232, 248, 260
Humanism, 7, 13, 19, 32, 34, 41, 55, 157, 197, 222
Human relations, 1, 2, 6, 31–57, 148, 191, 222, 257
Human resource management (HRM), 34–39, 50, 139
Human rights, 2, 8, 55–57, 120, 182, 212, 215, 217–219, 224, 231
I
Identity, 3, 23, 32, 36, 37, 46, 48, 52, 74, 75, 84, 95, 103, 104, 114, 115, 117, 193, 236, 249, 258, 261


Industry 4.0, 15, 18, 101, 102, 132, 153, 249, 253–256
Innovation, 7, 12, 23, 24, 33, 38, 51, 52, 83, 84, 97, 102, 114, 118, 134, 137, 138, 143–145, 149, 158, 159, 182, 183, 189, 194, 196, 222–224, 227, 230, 251–253, 257
Institutions, 3, 4, 100, 120, 133, 134, 136, 138, 140, 141, 149, 174, 182, 184, 197, 224, 249, 253, 261
Integration, 17, 23, 33, 34, 67–86, 101, 110, 145, 154, 157, 219, 248–249, 253, 256
Integrative economies, 153–158
L
Liberation, 57, 116, 175, 181, 192, 201, 257, 258
M
Machine learning (ML), 31, 73, 75, 78, 81, 105, 114, 120, 160, 229, 257
Machines, 3, 14, 18–21, 23, 56, 67–86, 93–95, 98–106, 109, 111, 113, 114, 116, 119, 120, 131, 132, 139, 141, 142, 145, 146, 149, 151, 157, 160, 173, 175, 179, 180, 185, 188, 190, 195, 196, 213, 217, 219–222, 225, 232–234, 248, 254, 256, 260, 261
Meaningful intelligence, 93–120
Mutual intelligent design, 3, 173–201
N
Neoliberalism, 11, 37, 100, 116, 191
New materialism, 32, 34, 43–48


P
Pathologies, 225–228, 235
Platforms, 9, 14, 16, 21, 31, 45, 107, 144–146, 176, 184, 190, 195, 250, 252, 259
Politics, 3, 6, 9, 14, 16, 32, 34, 41, 42, 44, 46, 48, 56, 72, 77, 117, 136–139, 155, 159, 173–201, 224, 248, 251–253, 259
Post-capitalism, 132, 146, 156, 157, 160, 185
Posthumanism, 6, 8–11, 19, 20, 44–46, 48, 55, 56, 194, 198, 200, 218–222, 227, 248, 257
Post-structuralism, 7, 56
Power, 3, 8, 14, 15, 19, 20, 24, 33, 34, 36, 47, 49, 55–57, 69, 72, 73, 75, 78, 95, 98, 102, 104–107, 116, 133, 137, 139, 143, 148, 151, 152, 155, 156, 158, 159, 161, 174, 175, 177–180, 184, 189, 191, 193, 197–199, 201, 218, 223, 229, 248, 252, 256, 257, 259
Progress, 5–7, 9–11, 13, 23, 24, 32–34, 46, 71, 72, 79–81, 102, 117, 118, 132, 140, 155, 173, 184–188, 192, 222, 253, 255
Public administration, 2, 134, 135
Q
Quantification, 105
R
Robotics, 3, 5, 12, 13, 16, 33, 39, 42, 55, 67, 68, 70, 72, 95, 98–100, 102, 103, 105, 110, 111, 139, 144, 215, 216, 229



S
Sharing intelligence, 50–54
Simulation, 54, 133, 149, 174, 187, 230
Singularity, 3, 17, 21–24, 67–72, 78, 79, 83, 224, 249
Slavery, 44, 95
“Smart,” 2, 3, 14, 18–22, 43–47, 70, 79, 112, 117, 131–162, 182, 190, 212, 231, 248, 253
Smart governance, 132–136
Sovereignty, 178, 195
T
Technocratic, 10, 16, 173
Technology, 2, 33, 69, 94, 132, 173, 214, 248
Transhuman
  organisation, 147–153
  rights, 212–217
  value, 141–147, 156, 184
U
Unhumanising, 188–193, 252–257, 261
Utopian, 2, 6, 11–13, 81, 104, 132, 153, 160, 161, 176, 178
V
Virtual reality, 16, 19, 33, 34, 52, 53, 141, 185, 186, 192, 212, 247