Evaluating Education: Normative Systems and Institutional Practices
Steve Fuller
Back to the University’s Future: The Second Coming of Humboldt
Evaluating Education: Normative Systems and Institutional Practices

Series Editors:
Sharon Rider, Department of Philosophy, Uppsala University, Uppsala, Sweden
Michael A. Peters, Beijing Normal University, Beijing, China
This book series examines and theorizes the historical development, socio-economic context, conceptual framework and implicit conceptual assumptions of different regulatory and evaluative regimes. It investigates the values embedded in policies and their implications in practice, proposing and developing alternative perspectives on how to conceive the role of public higher education and the mission of the university in the twenty-first century. It also surveys developments arising out of the current regime, such as the “metrics industry” that has emerged to rank and measure the performance of institutions in secondary, tertiary and postgraduate education, as an important management tool in the implementation of institutional, national and international policies. The series investigates the multiple ways in which assessment has become a standardized function of governments and funders, and examines the consequences of the shifting line between private and public ownership. The series encourages relevant contributions from all disciplines, including, inter alia, philosophy, sociology, media studies, anthropology, political science, history, legal studies, and economics in order to foster dialogue and deepen our understanding of the complex issues involved. Although the emphasis is on the university, the series addresses the diversity of evaluation criteria and techniques on a broad scale, covering not only secondary and postgraduate education but also adult and continuing education. Focusing especially on areas of potential contention, the series explores how the standards of quality posited and the tools of measurement employed resolve, engender or conceal conflicts of values, goals or interests.

Book proposals are to be submitted to the Publishing Editor: Claudia Acuna ([email protected]).
Steve Fuller
Sociology, University of Warwick
Coventry, UK
ISSN 2570-0251   ISSN 2570-026X (electronic)
Evaluating Education: Normative Systems and Institutional Practices
ISBN 978-3-031-36326-9   ISBN 978-3-031-36327-6 (eBook)
https://doi.org/10.1007/978-3-031-36327-6

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Dedicated to Sharon Rider, Muse and Mensch
Preface and Acknowledgements
This book has been more difficult than usual to write because the subject matter – the future of the university – is close to me, which serves to exaggerate my general tendency to write in an ‘orthogonal’ manner, meaning that the text cuts across everything I happen to be thinking, regardless of what I am officially writing about. This has a couple of implications worth noting. The first is that while I have drawn on my past writings, nothing is reproduced intact, as this work cuts across them differently to reflect the new context. And context is king when ascribing meaning to texts. The second implication concerns the book’s style, which often deals with history in a ‘flashback’ and ‘flashforward’ fashion. (The Greeks would say analepsis and prolepsis, respectively.) I have proceeded this way before, often to the annoyance of readers who long for an intellectual landscape akin to that in Edwin Abbott’s Flatland. Nevertheless, whatever we might wish to say in the present is ultimately about stabilizing our relationship to the past and the future, which is normally in flux.

I would like to thank the editors of the following journals for enabling me to present texts that ended up serving as raw material for this one: Synthese, Postdigital Science and Education, Philosophy of the Social Sciences, Social Epistemology (and SERRC), Analyse und Kritik, British Journal of the Sociology of Education and Ludus Vitalis (Mexico).

In terms of individuals who have directly or indirectly influenced my thinking about the issues discussed in these pages, let me thank Thomas Basbøll, Adam Briggle, Jim Collier, Bob Frodeman, Jenna Hartel, Petar Jandric, Ian Jarvie, Anton Leist, Aleksandra Lukaszewicz, Bill Lynch, Alfred Nordmann, David Budtz Pedersen, Sharon Rider, René von Schomberg, Frederik Stjernfelt, Georg Theiner and, last but not least, Cheryce von Xylander.

Most of this book was written when I was one of the inaugural senior research fellows at the Käte Hamburger Kolleg in RWTH-Aachen, Germany, in 2021–22. It was completed at my home institution, the University of Warwick, UK. Both suffered from being on the frontline of the post-pandemic, post-Brexit world, for which I am most grateful – and I hope this text redeems that investment!

Coventry, UK
Steve Fuller
Contents

1 Introduction: Relaunching the Humboldtian University

2 Deviant Interdisciplinarity as Philosophy Humboldt-Style
   2.1 Normal and Deviant Interdisciplinarity as Alternative Ways of Organizing Inquiry
   2.2 How Expertise Appears to the Deviant Interdisciplinarian
   2.3 Normal and Deviant Interdisciplinarity as Alternative Ways of Organizing the University
   2.4 Deviant Interdisciplinarity after Philosophy: From Naturphilosophie to Biological Science
   2.5 The Fate of the Deviant Interdisciplinarian: The Case of Jean-Baptiste Lamarck
   2.6 Epilogue: The Fate of Philosophy in a Multi-Disciplinary World

3 Judgement as the Signature Expression of Academic Freedom
   3.1 Academic Freedom as a Form of Positive Liberty
   3.2 The Fate of Judgement in the Hands of Bureaucracy from Germany to America
   3.3 The Genealogy of Judgement from Germany Back to Greece
   3.4 The Logic of Judgement: Of Truth and Other Values
   3.5 The Religious Sources of Academic Freedom: The Decision to Dissent

4 Hearing the Call of Science: Back from Max Weber to Francis Bacon
   4.1 The Religious Roots of Max Weber’s Secular Calling: Luther Versus Calvin
   4.2 The Purposiveness of the Academic Calling: Weber Pivoting Between Kant and Popper
   4.3 The Call of a Voice that Few Hear: The Problem of Minorities in Organized Inquiry
   4.4 What Would Francis Bacon Have Said?
   4.5 Bacon Redux: The University as a Revolutionary Constitution

5 The University as the Site of Utopian Knowledge Capitalism
   5.1 Marxism as a Casualty in the Fight to Reclaim Progressivism from Neoliberalism
   5.2 Utopian Socialism as Post-Rentier Capitalism: Saint-Simon as Social Epistemologist
   5.3 Saint-Simon Deconstructed and the Promise of Proudhon
   5.4 Towards a Truly Open Science and the Prospect of Academic Georgism
   5.5 Utopian Knowledge Capitalism as Philosophy of Science: Revisiting the Popperians

6 Prolegomena to a Political Economy of Knowledge beyond Rentiership
   6.1 The Cognitive Economy of Gestalt Shifts: Plato and Aristotle Redux
   6.2 Modal Power and the Defeat of Knowledge as a Public Good
   6.3 Learning from Plagiarism: Knowledge as an Artworld
   6.4 Academic Rentiership as a Tale of Swings and Roundabouts
   6.5 The American Hegemony and its Protscience Digital Afterlife
   6.6 Postscript: Are Neoliberals Right to Want to Reduce Knowledge to Information?

7 Appendix: Towards a Theory of Academic Performance
   7.1 To Speak or to Write? That Is the Question
   7.2 Improvisation as the Wellspring of Human Creativity
   7.3 The Three ‘Rs’ of Academic Performance: Roam, Record and Rehearse
   7.4 Ave Atque Vale! My Own Back to the University’s Future

References
Index
Chapter 1
Introduction: Relaunching the Humboldtian University
Abstract The Introduction identifies the innovative features of Wilhelm von Humboldt’s ‘modernist’ reinvention of the university as a concept and an institution in the early nineteenth century, especially with an eye to how these features might be highlighted and extended today and into the future. The core Humboldtian principle is the unity of research and teaching in the person of the ‘academic’, understood as a free inquirer who serves as an exemplar for students to emulate in their own way as part of their personal development (Bildung). Unlike prior and subsequent conceptions of the university, the Humboldtian one was not about ‘credentialing’ people as ‘experts’, but about giving a focus and direction to their humanity. Six themes, which recur throughout this book, capture the Humboldtian spirit: futurism, historical consciousness, judgement, translation, knowledge as a public good, and professionalism.

Universities epitomize ‘legacy institutions’ that continue to exist mainly because they have already existed. However, it is not clear that their existence is required for their major functions – teaching and research – to be performed. That these functions should be performed by the same set of people working in the same place – indeed, arguably as the same activities – was the idea of Wilhelm von Humboldt, an early nineteenth-century liberal philosopher, linguist and Prussian Minister of Education. By merging the two functions he reinvented the university as a dynamic engine of nation-building and, more generally, the flagship vehicle of ‘Enlightenment’. Today we underestimate the significance of Humboldt’s achievement because it occurred when the university was widely regarded by the avant-garde thinkers of his youth – the French philosophes – as a lumbering medieval throwback. This book aims to update and defend Humboldt’s vision for the twenty-first century, fully realizing that not only has the character of teaching and research changed but also that the nation-state no longer holds the same promise that it did in Humboldt’s day.

If I told you that the Humboldtian university nowadays lives ‘virtually’, you might think that I mean that it has migrated online. But that is too charitable.
The Humboldtian university has barely got a grip – let alone imposed its logic – on the internet’s endless frontiers. On the contrary, I mean ‘virtually’ in its pre-cyber, deflated sense: Humboldt’s spirit continues to haunt the self-justifications given by both academics and their administrators for their activities, without justifying the activities themselves. It is almost as if latter-day Humboldtians, ranging from Juergen Habermas to the average university administrator, continue to sing the same lyrics, regardless of the melody – let alone the instruments – being played.

Jean-François Lyotard (1983) made great sport of this fact in his 1979 report on the state of higher education, published as The Postmodern Condition. He highlighted the false consciousness of these latter-day Humboldtians. They talk as if the modern university has been the natural home of free inquiry, even though the most important innovations of the modern period – from industrial chemistry in the nineteenth century to more recent advances in information technology and the biomedical sciences – have required interdisciplinary if not transdisciplinary knowledge typically developed outside of traditional academic settings. In each case, the university was initially resistant, with many of the original researchers either lured away from academia altogether or supported there only with the funding of private foundations. The university itself became a serious player in this process only by dedicating degree programs to emerging forms of knowledge (e.g., molecular biology, computer science). When it has failed to do so, argued Lyotard, it has resembled its medieval forebear in being primarily in the business of reproducing epistemic authority – a ‘diploma mill’ or ‘paper belt’, as it is put in justifiably disparaging terms today (Gibson, 2022; Noble, 2001).

Admittedly, Lyotard’s critique of the Humboldtian myth may have reflected France’s own non-Humboldtian modern higher education history, whereby research and teaching remain divided between two sorts of institutions. But it also reflected the more general global weakening of the state as a driver of mass social change. Thus, Lyotard adumbrated ideas that are now widely taken for granted about the affinity between postmodernism and neoliberalism, whereby the epistemic agency of transnational capitalism shapes the overall direction of knowledge production.

So, how does the university make its way in the future, especially when the world’s knowledge is increasingly not deposited in dedicated academic buildings and other public spaces (i.e., libraries) but distributed across the internet, to which everyone has great but variable access? My overall strategy is simple: The Humboldtian university should plant its original seed in this new, partly virtual soil. The original Humboldtian academics curated and contextualized different claims to knowledge by a public exercise of judgement about what, how and why various things should be placed in the curriculum. This was done in the spirit of setting an example, in the sense of a ‘paradigm’. What academics do for the students, the students will need to do for themselves once they leave the university – and face the internet as their primary source of knowledge.

Humboldt’s own immediate context was Prussia, especially in its potential role as unifier of the German peoples.
Prussia was an aspiring player on a European scene that was dominated by two countervailing forces: On the one hand, there were realms still under the sway of the Roman Catholic Church’s version of natural law theory, which served to legitimize inherited forms of epistemic and political authority; on the other, there was the increasingly alluring presence of English liberalism, with its loosening of traditional legal strictures to enable the free mobility of capital, labour and goods, all increasingly underwritten by forms of non-academic knowledge that we now associate with the ‘Industrial Revolution’.
In this continent-wide struggle, the universities – not least Oxford and Cambridge – were still very much aligned with the Church. Indeed, they were primarily about reproducing the sort of knowledge that maintained the social order. Against this backdrop, the French Enlightenment philosophers called for the state to establish research institutions that were not constrained by the need to perpetuate the status quo. On the contrary, these new institutions might provide the basis for a new form of elite ‘technical’ training suited to the emerging scientific-industrial order – what became Les Grandes Écoles.

But the French philosophes were not Humboldtians avant la lettre. While they famously publicized the innovative research of their day, especially through the subscription-based L’Encyclopédie, they never envisaged mainstreaming it into general education, let alone that the teaching and research roles might be synthetically realized in the same people. They were of the opinion – still common today – that, on the one hand, the credentialing demands of education might retard and compromise the natural course of inquiry and, on the other, genuinely innovative research might disrupt the patterns of societal reproduction required for political order. Thus, the carnage and terror unleashed by the philosophe-inspired 1789 French Revolutionaries eventually settled into Napoleon’s division of academia into distinct sectors for teaching and research, with academics typically holding dual appointments, often to the detriment of teaching. This is now regarded as the distinctive French alternative to Humboldt, which has had its emulators worldwide (e.g., Russia).

However, the modern French segregation of research from teaching concedes defeat in the struggle to democratize knowledge. Thus, there is none of Humboldt’s Romantic presentation of educators as reporters from the frontlines of inquiry; instead, the medieval repositories of received wisdom have morphed into purveyors of technical training at the highest level. Behind this difference in visions rested a difference in the relationship between academia and the state. Napoleon ultimately saw academia as a stabilizing force, in which research would feed into policy making and education would be designed, Plato-style, for the level of decision-making that people would need to take, given their place within society. The results are well known, and they have often flown under the banner of ‘Positivism’ (Fuller, 2006a: Chaps. 14 and 15). Most French political leaders have been graduates of one of the Grandes Écoles, the French were the pioneers of intelligence testing for purposes of streaming students through the education system, and Pierre Bourdieu (1988) became a leading critical sociologist in the late twentieth century by imagining that, for better or worse, the whole world might be run in this distinctly French way.
And as universities have come to be increasingly seen as training grounds and credential mills, while research has come to depend more explicitly on external funding, a case can be made that even institutions formally dedicated to the Humboldtian spirit have devolved into something more Napoleonic in practice – that is, ‘management schools’.

Humboldt took the basic Enlightenment idea of education as Bildung, a process of self-development, and projected it onto what he envisaged as the emerging nation-state of Germany. Nowadays, we would say that Humboldt wanted to turn universities into incubators of human capital. In either case, it would require a level of harmonization of the activities of the state and academia, but without either party seeing the maintenance of the existing social order as the goal. Students would be educated in a future-facing way, based on the research of their teachers. They would be expected to learn things that their parents – even if they attended the same universities – did not learn, since the state would be presenting society as always en marche (apologies to Emmanuel Macron). ‘Academic freedom’ became such a ‘hot button’ issue in the Humboldtian context because these newly empowered teacher-researchers were prone to undermine the political conditions under which such licensed vanguardism was maintained. This was the concern that Max Weber (1958) expressed in his legendary introductory speech to graduate students, ‘Science as a Vocation’: How can you not bite the hand that feeds you, as your appetite for knowledge grows?

Philosophy was central to the Humboldtian university just as theology was to its medieval predecessor. But it was a certain kind of philosophy, which we nowadays call ‘German idealism’. All the signature figures of this movement – Fichte, Schelling and Hegel – were central to the early days of the University of Berlin, the epicentre of the Humboldtian revolution in higher education. Their philosophies have been described as ‘systematic’ and ‘encyclopaedic’ in style. They certainly aimed to match the comprehensiveness of, say, Thomas Aquinas and some of the academic followers of Leibniz who dominated German universities before Humboldt – that is, those who taught the man whose pushback against his own education provided philosophical inspiration for both Humboldt and the idealists: Immanuel Kant. Philosophy in the Humboldtian lecture theatre offered more a prospectus for future exploration than a map of already conquered domains. It was a course of study more for romantic heroes than rationalistic experts (Fuller, 2020a: Chap. 9).

Over the past two centuries, Humboldt’s romance of endless inquiry has been institutionally domesticated. In this book, that lost future is represented by what I call deviant interdisciplinarity. But the loss of that future happened gradually. It perhaps began in nineteenth-century German academia, when philosophy and history started to self-segregate in a manner analogous to grammar and rhetoric in the medieval Trivium. ‘Intellectual history’ remains the orphaned child of this rupture, evidence of the modern academy’s intolerance of Hegel’s view that history is philosophy teaching by examples. In the twentieth century, there were flickering attempts to recover the original idealist vision of philosophy through a proactive approach to librarianship that still travels under the name of ‘social epistemology’ (Zandonade, 2004). Nowadays the spirit survives outside of academia with the centrality increasingly accorded to curation in the digital infosphere (Bhaskar, 2016).
Indeed, it was the spirit in which ‘epistemology’ was introduced into English by the Scottish metaphysician James Ferrier (1854) in the mid-nineteenth century. His basic point was that anything that we deem ‘unknown’ is knowable; otherwise, it would never have become part of knowledge-talk (Fuller, 2007a: 32). This core intuition of German idealism, which testifies to the power of naming, may be understood as the complement of the famous Wittgenstein aphorism, ‘Whereof one cannot speak, thereof one must be silent’. Its ‘apophatic’ theological basis – which cannot be explored here – is the extent to which the Abrahamic deity can be named, described or otherwise captured in language. In its secular Humboldtian guise, the path is clear: The academic imperative is to translate the ‘unknowable’ into the ‘knowable but unknown’ and ultimately the ‘known’, all in aid of a secular salvation narrative (aka ‘progress’). ‘Faustian’ would not be an inappropriate epithet for this attitude.

In practical terms, students entering the Humboldtian university would be introduced to the state of play regarding all knowledge, in terms of which they would then need to position themselves as they chart their own course through life. Unlike the medieval curriculum, whose Trivium and Quadrivium were largely about training the mind, voice, eye, ear, etc. of the whole person (i.e., ‘skills-based’, in today’s lingo), Humboldt’s modern curriculum presumes that students will have already received such training at Gymnasium. The university’s task then would be to open the imagination, a capacity that all this prior training is meant to have prepared, resulting in more specific pursuits and the more advanced training (aka ‘methods’) associated with particular fields of inquiry. The success of this approach in its original incarnation can be measured by the role that German idealism played as the seedbed for disciplinary specialization in the nineteenth century, albeit often producing outcomes that stood in opposition to the teachings of the original idealist philosophers (Schnädelbach, 1984). Karl Marx vis-à-vis Hegel is an extreme but hardly the only case in point. In retrospect, the relative neglect of German idealism today may reflect its having provided the intellectual husk or scaffolding (or ‘ladder’ à la Wittgenstein) for everything that we recognize as organized conceptual and empirical inquiry in the academy: Once it has done its job, it is discarded. To repurpose a turn of phrase that Marx had adapted from Fichte, the Humboldtian revolution has unwittingly resulted in the withering away of philosophy.

Of course, Humboldt, Fichte and Marx agreed on the withering away of the state in its most excessive form, namely, as an administrative apparatus that arrogates to itself the powers of the authorizing sovereign. (Political scientists nowadays call this the ‘principal-agent’ problem.) Humboldt and Fichte thought that the university might provide the model for this ultimate political ‘withering’, which helps to explain why philosophers were made rectors: They inspired rather than directed. Yes, universities are institutions – among the very oldest in the West – but their primary purpose is the perpetuation of free inquiry, something above and beyond sheer corporate survival. Academic administration is solely in service of that end, not the ‘end in itself’ that it sometimes seems to have become over the years.
Humboldt’s model was eighteenth-century Deism, according to which humans come to inhabit the home that God had created for them, thereby rendering God redundant, resulting in what Voltaire satirized as deus absconditus. It opened the door to ideas of building ‘Heaven on Earth’, in which humans come to complete material creation (Becker, 1932). It became the calling card of ‘progress’ in the nineteenth and twentieth centuries, as Europe’s historically Christian culture increasingly criticized clerical authority for its grounding in parental authority. It also helps to explain the appeal to the Enlightenment by the 1960s student revolutionaries, who saw it as a collective outworking of the Oedipus Complex: Kill the father!

In this respect, the Humboldtian university has stood opposed to the tradition of Oxbridge colleges operating in loco parentis. Indeed, a strict adherence to Kant’s classic definition of ‘Enlightenment’ as a movement would delegitimize paternalism altogether, extending well beyond the Catholic papacy and more broadly Christian appeals to the ‘divine right of kings’. One would no longer simply grow into the role of one’s ancestors but rather would use their legacy as capital, a protean basis for investment into projects potentially unrelated to their source. Indeed, capitalism’s spread of exchange relations to an unprecedented, if not unlimited, extent across society has normalized this way of thinking about all forms of inheritance.

From this standpoint, it is patronizing to use ‘applied’ to refer to knowledge created outside of academic settings yet somehow ending up bearing an academic signature. That is to reverse the causal order of knowledge production – indeed, in a way that motivated Nietzsche’s deconstruction of Western morals (Culler, 1982). Of course, everything had to come from somewhere, which means that anyone could be the source of anything. But that’s just the logic of probabilities speaking. It follows that whoever does something first is simply a matter of chance; hence the logical positivist contempt – shared by Karl Popper – for the very idea of a ‘logic of discovery’. The Humboldtian university aims to oppose the entrenchment of that chance discovery moment in all its guises, be it the ‘cult of genius’ or intellectual property rights.

Implied here is the principle that for knowledge to be universally valid, it must be universally available. Pace Miranda Fricker (2007), I have called this principle epistemic justice (Fuller, 2007b: 24–29). Its roots go back to the mid-seventeenth-century Port-Royal Logic, a widely used textbook in what we now call ‘critical thinking’ for two centuries, which was inspired by Descartes. It featured a contrast between ‘demonstration’ and ‘persuasion’ in terms of the range of audiences – more vs. less – that are likely to be brought round to the speaker’s conclusion (Arnauld & Nicole, 1996). Clearly, the rhetorical power of Galileo’s experiments was in the back of the authors’ minds. The US philosopher of science Thomas Nickles (1980) updated this sensibility for our own times when he defined a ‘justified’ scientific knowledge claim in terms of its ‘discoverability’, suggesting that the claim’s validity is intrinsically tied to the availability of multiple paths to its realization. This is not so different from Frege’s account of how the identity of Venus was established by coordinating data about the ‘Morning Star’ and the ‘Evening Star’.

Even today the modern university pays at least lip service to Humboldt’s original vision, if only to mimic the watered-down version that remains society’s default understanding of what a university is.
Most people presume that academics engage in teaching and research, and that the two are somehow related to each other in ways from which both students and the wider public benefit. Thus, people are often surprised – and increasingly scandalized – when they learn that some academics in fact do not teach but rather spend most of their time working for private contractors. At the same time, from the start of their modern Humboldtian reinvention, universities have been subject to competing pulls that have threatened their coherence and raison d’être. After all, the modern era has also witnessed the rise of both vocational training centers and dedicated research institutes as mutually exclusive, client-driven bodies. Nevertheless, the Humboldtian university’s claim to uniqueness has rested on its capacity to ‘manufacture knowledge as a public good’, a phrase that I have long used to capture a simple but profound idea: Knowledge enlightens only if the advantage enjoyed by its original creators is removed through mass dissemination.

The core Humboldtian intuition championed in this book is epitomized in the slogan: Research is elite, but teaching is democratic. Humboldt converted the proselytizing mission of the university’s monastic past (the source of ‘discipline’ in ‘discipleship’) into the more generically ‘universalist’ aspirations that arose from the secular Enlightenment, whereby research and teaching represent, respectively, the ‘private’ and ‘public’ faces of reason, in Kant’s (1999) sense. (Here one needs to think of the ‘privacy’ of reason as lying in the training and credentials that first license one to assert knowledge claims, which are then ‘publicized’ once others lacking that background can nevertheless understand and potentially accept them.) To be sure, his strategy appealed to several principles that do not sit easily with each other, resulting in the modern university being at once the home of both free expression and the redistribution of privilege.

It is telling that Noam Chomsky (1971) made Humboldt the linchpin of his famous 1970 lecture, ‘Language and Freedom’, where he explicitly tied his linguistic theory to his left-libertarian brand of politics. His aim was to defend ‘academic freedom’ in the broadest sense, whereby the academic exemplifies in the classroom the sort of freedom that might be emulated in society at large. Like Max Weber (1958) fifty years earlier, Chomsky appreciated that the Humboldtian academic self-presents as a paradigm of personal conduct, which means that his or her future-facing orientation is manifested in both content and manner. Specifically, the academic challenges students in a language sufficiently close to their own that they feel capable (at least in principle) of responding as equals to the professor. In Chomsky’s own case, his great learning has normally been hidden behind an approachable style, in both academic and public settings.

In the second half of the twentieth century, people came to expect that a university degree could secure lucrative employment, which has turned out to be a poisoned chalice from which universities have nevertheless gladly drunk (Fuller, 2016a: Chap. 1). On a geopolitical level, it reflected global capitalism’s erosion of the project of nation-building, which had been the original frame of the modern reinvention of the university. Universities have tried to adapt, albeit with decidedly mixed results. The overall effect has been to blur ‘university’ as a brand of knowledge organization. In that sense, my aim is to remind readers of what made the ‘university’ such a distinctive yet exportable brand for more than two centuries.
Here one shouldn’t underestimate the role that both the idea and the institution of the ‘university’ played in Germany’s rise from a disorganized central European backwater to the top flight of global knowledge and industry by the end of the nineteenth century. Nowadays nation-building isn’t as central to the narrative of humanity as it was in the nineteenth and twentieth centuries. Moreover, universities have become entangled in matters that have extended and/or compromised the Humboldtian mission. Thus, this book does not promise that every institution that currently bears the name ‘university’ would survive my Humboldt-inspired sense of renewal. But then, as critics of the university rightly say, there is no special reason for universities to exist indefinitely. We therefore need to reaffirm the distinctiveness of the brand.

In these pages, I propose that we go ‘meta’ on Humboldt, a move that suits our ‘post-truth’ times. The Humboldtian mission should not depend exclusively on the state. It should find other vehicles. Indeed, one of the great hidden strengths of the US ecology of higher education is that it does not constitute a ‘system’ at all. Rather, it is a patchwork of institutions, whose leading members have been private foundations, often dating from before the founding of the nation itself. The origins of the Ivy League are not so different from Oxbridge’s, including their clerical and paternalistic trappings. Nevertheless, by the early twentieth century – and especially after the First World War raised America’s global ascendancy to full self-consciousness – these institutions quickly adopted and scaled up the Humboldtian model. It turned the US into the world’s cynosure for higher education, a status that a century later it still holds – and is likely to hold into the foreseeable future, even if the US suffers more general geopolitical decline. In this respect at least, the US continues to follow in the UK’s footsteps.

My argument for relaunching the Humboldtian vision of the university – ‘Humboldt 2.0’, if you will – is predicated on a certain understanding of Humboldt’s ideal of the ‘unity of teaching and research’ as embodied in the person of the academic, one that I have been pursuing for the past quarter-century (Fuller, 2000a). It involves regarding the classroom as the crucible for forging this vaunted unity, as researchers engage in what I have dubbed the ‘creative destruction of social capital’ through the process of teaching. If by ‘research’ we mean the production of new knowledge (even if it is about what we already know), then it is by definition an elite achievement. Someone is always recognized as having done something first. To be sure, some have followed philosopher Charles Sanders Peirce and sociologist Robert Merton in suggesting that the scientific spirit is animated by the desire for such recognition. But even if that is a fact of human psychology, if knowledge is to be understood as a public good, let alone an agent of democracy, then these initially esoteric achievements need to be made more generally available. This means that teaching is the necessary complement to research. Thus, the Humboldtian aims to ensure that priority of discovery does not lay the groundwork for entitlement, a version of rentiership that effectively forces future researchers to traverse the same path – typically requiring the same credentials – as their predecessors to make further progress.
Such belief in ‘path dependency’, as economists say, is routinely taken for granted by commentators both inside and outside of academia who declare that our world’s increasing complexity means that more people need to spend more time to make sense of much less of what there is to know. Thus, we are told that the only way forward is ‘trust in experts’. It is against this modern superstition that our ‘post-truth condition’ justifiably rails. And on this matter, the Humboldtian is on the side of the ‘post-truthers’, especially as we live in the most educated time in history. Deference to cognitive authorities that sufficed even fifty years ago no longer carries the same intuitive force.

The path from Humboldt to post-truth had been quietly paved by philosophers in the nineteenth and twentieth centuries, as the aura surrounding ‘discovery’ in science was eclipsed by the need for any such discovery to be ‘justified’. Invariably, this meant arriving at a teachable process – typically associated with ‘logic’ in some sense – that would have enabled anyone to draw the same conclusion as the original ‘genius’ discoverer did. Indeed, in the 1980s the artificial intelligence pioneer Herbert Simon developed a family of computer programmes called ‘BACON’ that were designed to reduce the discovery process to a mechanical routine (Langley et al., 1987). While this project may seem to be a far cry from the Humboldtian mission, it shares an interest in the demystification of cognitive authority. For some, though not all, the step too far that Simon and his colleagues took was to suggest that scientists in the future might derive the ideas they pursue from just such a machine. But would this be so different from being inspired by reading a book?

In any case, for the Humboldtian, education is the primary instrument of democratization. It works by making it easier for the next generation to make its own what previous generations had struggled to acquire – and even then, with only a few in full possession of it. One should not underestimate the extent to which most cultures saddle each generation with the burden of ‘honouring’ their ancestors, which often leaves them little time to do much else. In contrast, Humboldt’s future-facing, research-led orientation to education capitalized on the increasing physical presence of books, journals and other written material – not to mention increased literacy in the population. The presence of libraries on university campuses symbolized this tendency. It turned teaching into an art of compression, whereby students no longer filled their minds with long passages of past authors learned by rote; rather, their teachers provided them with streamlined versions suited to the world yet to unfold in which they would spend the remaining and largest portion of their life. This explains the rise of mathematical formulas, first in the natural and later the social sciences, as well as the preoccupation with prosody and logical form in more humanistic fields. Students would be taught these as paradigms for organizing their thoughts into various forms of expression as they go forward; whatever else they might need to complete their thoughts would be supplied by the university library.

In the early twentieth century, anthropologist Alfred Kroeber (1917) appeared to have the Humboldtian university in mind when he described humanity’s emerging ‘superorganic’ nature. Kroeber, a first-generation American of German parents, is now perhaps mainly known as the father of the feminist science-fiction writer Ursula K(roeber) Le Guin.
Writing at a time when John Dewey’s influence was ascendant among American educators, Kroeber argued that Homo sapiens is set apart from other animal species by intergenerational efficiency savings, which enable us to exert greater mastery over an ever-expanding part of our environment, not least that indefinite temporal horizon called ‘the future’. Kroeber’s intellectual infrastructure today extends to the technological noösphere whose public face is the internet, nowadays sometimes metaphysically glossed as the ‘Technium’ (Kelly, 2010). Most interestingly, it has facilitated a move against complexity in the growth of knowledge, which I have called ‘disintermediation’ (Fuller, 2000a: 114). While it may be tempting to dismiss this movement as ‘vulgarisation’, especially when summaries replace original texts or dead metaphors supplant living arguments, it is better to think of it as a stage in the ongoing economization of thought, whereby the maintenance of any distinction must be justified by the material difference it makes. For those of the generation before Dewey and Kroeber, such as Peirce and Ernst Mach, this was how science was generally seen to have transformed biological evolution into social progress, thereby raising humans above their animal origins. For them the epitome of such ‘economy’ was the mathematical formula, which compressed a potential infinity of observations into a short string of symbols that could be manipulated in a system operating with a finite number of rules.

At the same time, it’s easy to forget the ancient roots of this way of conceptualizing our relationship to knowledge, which are crucial for understanding education’s dual modern role in individual emancipation and social progress. The Greek rhetoricians originally spoke of topoi, the projection of memory onto physical locations for purposes of recall in a speech. The speaker would thus come to associate delivering a speech with navigating a space. The Sophists economized on this process by introducing the Athenians to systematic writing, or grammar, which enabled speakers to put their memories – and thoughts more generally – on paper, a more portable space that would be repeatedly consulted and elaborated. In Plato’s Phaedrus, Socrates famously railed against this practice, which he believed might falsify thought altogether. His argument masked Plato’s deeper concern that mass literacy would make it possible for anyone to script their own course of action, a suspicious art linked to the theatre, which routinely blurs the line between what is real and fake – at least in Plato’s mind (Fuller, 2020a: Conclusion).

As it turns out, Plato was worrying two millennia ahead of his time, as these private writing practices triggered the process that eventuated in Kroeber’s ‘superorganic’ world, populated by numerous external memory stores and ‘informatized’ virtual spaces. Indeed, a measure of the extent to which the traditional distinction between the intellectual resources proper to the individual and to the individual’s environment has changed is that cognitive scientists are increasingly happy to take us to possess ‘extended minds’, if we have not turned into ‘cyborgs’ altogether (Clark, 2003).

All of this makes for challenging times for education generally – and for the university as a specific educational institution. Elon Musk’s implantable brain-computer interface ‘Neuralink’, though itself still more promise than product, looms as a spectre on the horizon. It raises the following question, which runs as a red thread through the pages that follow: What is the added value of university life – for both academics and students – at a time when not only is the research agenda dictated by extramural funding but also employers are turning to alternative means of training, qualification and selection?
The answer proposed here involves replaying the university’s history in a new key that focuses on the institution’s centrality to the translation of humanity’s collective knowledge to reduce the gap between its producers and its users. Yet, social media has already achieved the material side of this ambition. So, what’s academia’s distinctive contribution? The Humboldtian answer lies in the cultivation of what Kant called judgement, an attitude of mind that sits between the religious (Jesuit) capacity for discernment and the secular (aesthetic) capacity for connoisseurship. This attitude cuts across all fields of inquiry in aid of organizing what has been known into what will need to be known, by separating what is worth and not worth knowing. Judgement in this sense is exercised by both students and their teachers as they repurpose the past in the present with an eye to the future. In today’s terms, it’s all about curation.

As a guide to the argument in these pages, the distinct spirit of the ‘Humboldtian’ university is encapsulated in the following six themes:

1. Futurism as the Soul of the Humboldtian University: The original genius of Humboldt’s vision lay in its focus on the ‘future’ as the unoccupied terrain in which the aspirational German peoples might take the lead by incorporating the best and deleting the worst elements of, on the one hand, traditional modes of authority (represented by King + Church) and, on the other, the modern radicalism of French republicanism and English liberalism. To a certain extent, this tension had already been brewing within the university itself, as Immanuel Kant (1996) had made clear in his 1798 essay, ‘The Conflict of the Faculties’, where he proposed that ‘philosophy’ (understood as a modern discipline) should adjudicate between the conflicting claims to authority advanced by the medieval disciplines of law, theology and medicine. This background explains the ‘dialectical’ and ‘synthetic’ character of the German idealist philosophers who were the original standard bearers of the Humboldtian vision at the University of Berlin. Research and teaching were ‘unified’ in that the teacher did research to resolve such cross-disciplinary conflicts in his own mind and then presented a rationalized version of that journey to the students, offered in the spirit of a model that they might adapt as they make their own personal journeys through life. Put another way, students were given a prospector’s guide to the open vistas of knowledge available for them to pursue. It amounted to a ‘deviant interdisciplinarity’ approach to knowledge, whereby existing academic disciplines are understood to be no more than way stations for thought in humanity’s collective inquiries.

2. Historical Consciousness as the Source of Humboldtian Futurism: The Humboldtian approach to history sees the past as a repository of potential futures that have yet to be fully realized but may still come to pass under the right circumstances. It marked a sea change from the medieval and early modern universities’ stress on the continuity of knowledge and practice over time, punctuated by occasional eccentric (‘miraculous’) incidents, followed by a restoration of the ‘natural order’. In the modern era, ‘tradition’ was the word that came to be used to capture this pre-Humboldtian sensibility, which was projected onto cultures worldwide that had yet to undergo ‘Enlightenment’.
Put bluntly, the non-West came to be the site of what the West was trying to leave behind from its own history (Said, 1978). In its place, two different but equally ‘modern’ approaches to history arose: one steadfast in staying the course set by the original Enlightenment by not reverting to tradition, and the other more open to the past, but as a basis for an alternative future – ‘Another Enlightenment’, so to speak. The success of the Industrial Revolution and the failure of the French Revolution fueled these alternative visions, respectively. The former has been seen as more ‘pragmatic’ and broadly aligned to liberalism, the latter more ‘utopian’ and broadly aligned to socialism. The Humboldtian university has catered to both sensibilities. Indeed, as my coinage of ‘utopian knowledge capitalism’ in this book suggests, they are not so distinct: the capitalist slogan ‘creative destruction’ and its socialist counterpart ‘permanent revolution’ are two sides of the same coin (Fuller, 2021a). We can neither revert to the presumed past nor assume that the future will be an extended version of the present. Past and future are co-produced in the present. Understood this way, education is the student’s version of what their teachers routinely undergo in the research process – both in the name of Bildung, aka life as inquiry. Moreover, it counteracts the blindness that comes from thinking that we live in the most advanced period in history, which makes it hard to imagine how people in the past managed to bring us to where we are now. Yet, such an imagination may come in handy if the world as we know it were suddenly to end.

3. Judgement as the Key Humboldtian Attitude: The primary quality of the Humboldtian academic is judgement. It lies behind notions as exalted as academic freedom and as mundane as marking students’ work. It is rooted in the legal power to dispose of a case by deciding its normative standing (i.e., acquitted/convicted, pass/fail, etc.). By the late eighteenth century, it had become a generalized feature of perception, following the invention of ‘aesthetics’ as a distinct science. This extended notion of judgement figured prominently in Kant’s later work, which inspired the young Humboldt. It involves relating the composition of a work to the larger concept that informs it. In this regard, both teacher and student are ‘free’ to compose matter into form, each allowing space for the other, but in the end, to varying degrees of success. The modern idea of the ‘critic’ in the arts (extending to the ‘public intellectual’ as critic of society) derives from this sensibility. ‘Criticism’ in this sense aims to liberate form from matter by saying how the created work facilitates or impedes the realization of the creator’s idea. This implies an approach to education that joins the performing arts and the laboratory sciences in common cause. Both frame the pedagogical problem-space as ‘demonstrations’ rather than ‘examinations’, whereby students’ knowledge is tested through personal participation in an activity that allows many ways to succeed and fail. Critical judgement of these demonstrations involves treating them more in the spirit of completing an ambiguous Gestalt figure than matching a fixed template. In this respect, Humboldtian pedagogy goes against the spirit of Thomas Kuhn’s (1970) famous account of ‘normal science’ as puzzle solving. Students are expected to exercise judgement to the same extent as their teachers. But this then introduces an element of chance into the evaluation process.
4. Translation as the Key Humboldtian Activity: A striking way to think about the difference between the medieval and the modern university is that the former presumed that the universe of knowledge was distributed, which in turn justified deference to expertise, whereas the latter presumed that the educated person could know whatever needs to be known ‘in translation’, which would then minimize his or her reliance on experts. This marks the difference between the university operating as a training ground for administrators and for autonomous individuals. That Humboldt was a linguist is not an accidental feature of his vision. The central problem of translation – as translators themselves often put it – is to convey the ‘source’ to the ‘target’, i.e., to render in a second language what has already been rendered in the first (Bassnett, 1980). This is the task of teaching, where the ‘source’ language is the research world, and the ‘target’ language is the student world. The difference is that teaching is, so to speak, embodied translation, in that the teacher performs the translation so that the students might at first understand the imparted knowledge and then later use it for their own purposes. This helps to explain the original significance of the lecture in the modern university as a performance that went beyond an opportunity for students to copy the texts on which they would be examined. Among the many parallels between teaching and translation is the disparity between the contexts in which translated knowledge is produced and consumed – what philosophers of science call, respectively, the ‘context of discovery’ and the ‘context of justification’. It is through teaching that research is ‘justified’ in this sense – and, in the Kantian sense, the privacy of reason is rendered public. The contemporary phrase ‘translational research’, which is usually limited to the application of biomedical research, also aptly captures Humboldtian pedagogy.

5. Knowledge as a Public Good as the Humboldtian Ideal: Let us now shift focus from the character of the knowledge producer to that of the knowledge produced – namely, the university as the site where knowledge is manufactured as a public good. Key to understanding this phrase is what I have dubbed anti-rentiership. It goes back to the Humboldtian principle that knowledge empowers by being made as widely available as possible, which in turn drives the imperative to translate research into teaching. However, this general sensibility – core to the university’s modern democratizing mission – goes against the increasing prominence of intellectual property claims, which nowadays extend beyond industrial patents into teaching materials. But I also mean to include long-standing academic practices, including the credit-oriented academic citation system, the toll-like character of the peer-review process, the institutional incentives for academic work to self-organize around ‘strong attractors’ that then become ‘paradigms’ (in Kuhn’s sense) and, last but not least, the moral panic surrounding plagiarism, understood as a crime committed routinely by both academics and students. As all these policies shore up academic authority in the wider society, they also prevent academic knowledge from becoming a public good. Overcoming this impasse in the Humboldtian spirit requires a decoupling of academia’s processes of internal reproduction and of external dissemination.
14
1 Introduction: Relaunching the Humboldtian University
becoming a dispersed regulatory mechanism for the entire knowledge system, even outside of academia. The Humboldtian would restrict peer review to determining whether what is claimed in, say, an article is testable on its own terms, regardless of its relationship to the dominant paradigm’s research agenda. For its part, plagiarism would be reduced to judgements about whether a new piece of work ‘adds value’, notwithstanding the amount or character of the appropriation of the work of others that it involves. 6. Academic Professionalism in the Humboldtian Vision: Max Weber characterized the Humboldtian sense of professionalism as ‘vocation’, a self-subordination to the voice of another, which amounts to accepting a role in a drama not of one’s own creation and which continues after one has left the stage. The original model was monastic discipline, a lifestyle inspired by God. Academic disciplines descend from this idea. However, discipline was always regarded as a means to an end, not an end in itself. Whereas discipline rendered the monks as instruments of God, in the modern era, this idea was secularized as the search for Truth, wherever it may lead, whatever the consequences. The etymology of ‘truth’ in words meaning faith, trust and loyalty is not trivial. In practice, it meant pushing the discipline’s epistemic constraints to their limits, which in turn underwrote an ethic of ‘testability’, the most popular and self-conscious version of which was Karl Popper’s ‘falsifiability’ principle. In this respect, science normalizes heresy. However, in religion, this tradition has been countered by ‘priests’, whose job is to maintain the churches that serve the needs of those who have not received such a direct call from God and, more generally, to stabilize the institutional framework common to the monks and the pastorate. In the modern era, the administrative class has served this role for universities, catering to the needs of both academics and students. But for Humboldt, this role was quite specific: namely, simply to enable the freedom of both academics and students to pursue the truth. Put more provocatively, academic administration should be in the business of publicizing not policing what transpires on campus. However, this vision of university administration has been hard to realize for various reasons, not least because academics and students can become too materially invested in university governance, which in turn diverts them from the search for Truth. For this reason, Humboldt thought that the state (given its overarching power and relative detachment) should be the ultimate underwriter of university administration. However, the morphing and long-term decline of the state have thrown this feature of Humboldt’s vision into serious doubt and needs to be rethought.
Chapter 2
Deviant Interdisciplinarity as Philosophy Humboldt-Style
Abstract This chapter introduces two types of interdisciplinarity relevant to the organization of inquiry: normal and deviant. They differ over their attitudes to the history of science, underlying which are contrasting notions of the ‘efficient’ pursuit of knowledge. The normal/deviant distinction was already inscribed in the university’s medieval origins in terms of the difference between Doctors and Masters, the legacy of which remains in the postgraduate/undergraduate degree distinction. The prospects for deviant interdisciplinarity in the history of the university were greatest from the early sixteenth to the early nineteenth century – the period that we now call ‘early modern’. Towards the end of that period, due to Kant and the generation of idealists who followed him, philosophy was briefly the privileged site for deviant interdisciplinarity. After Hegel’s death, philosophy fell into decline and normal interdisciplinarity began to take hold, resulting in today’s fixation on ‘expertise’. Prospects for a post-philosophical, deviant interdisciplinary vision are explored, largely through the lens of what after Jean-Baptiste Lamarck is now called ‘biology’. The chapter concludes with an Epilogue that considers contemporary efforts to engage philosophy in interdisciplinary work, in which William James figures as a polestar.
2.1 Normal and Deviant Interdisciplinarity as Alternative Ways of Organizing Inquiry

Wilhelm von Humboldt made philosophy the modern university’s foundational discipline. And in many respects, it has remained that way, notwithstanding its increasing professionalization and specialization in the twentieth century. Philosophy may relate to interdisciplinarity in two distinct ways: On the one hand, philosophy may play an auxiliary role in the process of interdisciplinarity through a conceptual analysis of knowledge based on the special disciplines, which are presumed to be the main epistemic players. I characterise this version as normal because it captures the more common pattern of the relationship, which in turn reflects an acceptance of the division of organized inquiry into disciplines. On the other hand, philosophy may itself be the site to produce interdisciplinary knowledge, understood as a kind of second-order understanding of reality that transcends the sort of knowledge that the disciplines provide when left to their own devices. This is my own position, which I dub deviant.

My old teacher Nicholas Rescher used to describe his research as ‘interdisciplinary’, which puzzled those who saw him as a solitary producer of books of systematic philosophy. He quickly explained that his own mind was the site of the interdisciplinary work, which left the audience amused by the apparent irony of the remark. In fact, Rescher was playing it straight, but this moment of unintended humour reveals two rather incommensurable views about the animus behind interdisciplinary work, each of which is tied to a conception of philosophy’s role in the organization of knowledge. Normal interdisciplinary work occurs in research teams or, if it does occur in a single mind, one’s mental space is arranged as a factory with a clear division of labour amongst the disciplines, each of which contributes a discrete task to an overarching epistemic enterprise. But Rescher was talking about interdisciplinary work as a blending process whose distinctly ‘interdisciplinary’ character is more clearly present in the style of work than in the finished product. Rescher’s view represents what I call the deviant interdisciplinarian perspective, which has been both persistent and persistently abnormal in Western intellectual history, where nowadays it is integral to the post-truth condition (Fuller, 2018a: Chap. 4). It is also the form of interdisciplinarity that I champion.

The difference between normal and deviant interdisciplinarity turns on contrasting attitudes to the history of the rise of disciplines in organized inquiry. Both the normal and the deviant versions accept that the need for interdisciplinarity arises from the gradual specialisation of inquiry, which is typically portrayed as a division of labour from some primordial discipline – call it ‘theology’ or ‘philosophy’ – that asks the most general and fundamental questions about the nature of reality. But this common image is then interpreted in two radically different ways, which are fairly called ‘normal’ and ‘deviant’ in two distinctive senses. Normal interdisciplinarity is not only the more widely subscribed view in our time (i.e., empirically ‘normal’) but it also portrays interdisciplinary inquiry as a natural outgrowth of disciplined inquiry (i.e., normatively ‘normal’). Correspondingly, deviant interdisciplinarity is not only the less subscribed view but also sees interdisciplinary inquiry as, in key respects, reversing epistemically undesirable tendencies inherent in disciplined inquiry.

The normal interdisciplinarian gives cognitive specialisation a positive, even a naturalistic spin, say, by associating it with arboreal exfoliation, as in Kuhn (1970), in which the specialisation attending a paradigm shift is explicitly associated with the ‘tree of life’ image used by Darwinists after Ernst Haeckel to capture biological speciation as a whole. However, this can be misleading, since Darwin’s own use implied that the fruits produced by the branches are eventually consumed (i.e., rendered extinct).
Not surprisingly, then, it has been more common to envisage specialisation in terms of the functional differentiation of the developing individual organism, which may better fit with what Kuhn had in mind (Fuller, 2007a: Chap. 2). In any case, normal interdisciplinary inquiry is easily likened to the gathering of ripe fruit from the various branches to produce an intellectually satisfying dish. The value of the dish is entirely dependent on the quality of the fruits from which it is prepared. This suggests that interdisciplinary inquiry is subordinate to – if not parasitic on – discipline-based inquiry.

In contrast, the deviant interdisciplinarian treats the ‘division of labour’ identified by his normal counterpart as a dispersion of effort, such that an overall sense of the ends of inquiry is lost – specifically, how the various disciplines contribute to the full realization of the human being. The recovery of this loss is then the deviant interdisciplinarian’s task (Fuller & Collier, 2004: Chap. 2). The task is biblically rooted in the Tower of Babel, a ‘second Fall’ consisting in humanity’s self-arrogation of its divine entitlement for its own particular purposes (symbolised by the proliferation of tongues) without sufficiently attending to the deity’s overarching design. Such was the theological pretext for the modern idea that science should aim to fathom this unifying intelligence, what Newton, following Renaissance scholars, originally called prisca sapientia (‘pristine wisdom’) but which after Einstein has been popularised as the search for a ‘Grand Unified Theory of Everything’ (Harrison, 2007). We shall return to this difference in attitude towards specialisation below, as it bears directly on the image of the philosopher in interdisciplinary inquiry.

One of the most striking differences between normal and deviant interdisciplinarity – already present in my Rescher story – is the locus and character of interdisciplinary work. Normal interdisciplinarity is designed for teamwork, as each disciplinary expertise is presumed to make a well-defined contribution to the final project, whereas deviant interdisciplinarity assumes that the differences in disciplinary expertise themselves pose an obstacle to the completion of the project. Of course, at one level, the normal interdisciplinarian could hardly disagree – namely, about the prima facie difficulties in translating and then integrating forms of knowledge that are tied so intimately to distinct technical jargons and skills. But for her these difficulties are surmountable because they correspond to well-defined domains, such that each expert knows (at least in principle) when to defer to a more adequately informed colleague (Kitcher, 1993: Chap. 8; Lamont, 2009). In contrast, the deviant interdisciplinarian takes this culture of deference to reflect something more sinister, epistemic rent-seeking, which in this book is referred to as academic rentiership (Fuller, 2002: Chap. 1; cf. Tullock, 1966). Here the distinctive skills associated with disciplinary expertise are portrayed as conspiracies against the public interest: They give the misleading impression that the most reliable, if not only, means of arriving at valid conclusions in a given domain is by trusting someone who has undergone a specific course of training, regardless of its empirical track record on relevant past cases or the availability of less costly means likely to reach the same ends.
Despite the best efforts of modern philosophers of science to warn against committing the genetic fallacy by distinguishing the contexts of discovery and justification, such a deep trust in expertise would seem to reproduce the very problem that the distinction was meant to solve, since whatever epistemic virtue is displayed by ‘expertise’ pertains to how knowledge was acquired rather than how it is used. At least, this would be the start of the deviant interdisciplinarian’s critique of expertise, which will be pursued in more detail in the next section.

Normal interdisciplinarians are unlikely to be fazed by what I have said so far. Citing Kuhn (1970), they would note that ‘normal science’ is most clearly marked by the default inclination to solve today’s problems by seeing them as versions of past problems, courtesy of their textbook representation. Normal science’s sense of ‘path dependency’, as economists say, characterises each discipline’s sense of methodological rigour, a capacity to abstract the essential elements of a problem and resolve them in a principled fashion. From this standpoint, the deviant interdisciplinarian may seem ‘eclectic’ and ‘arbitrary’, very much as upstart entrepreneurs appear to managers in established firms, where the former wish to ‘creatively destroy’ and the latter to ‘monopolize’ markets (Schumpeter, 1950).

Keeping with the business analogy, normal and deviant interdisciplinarians differ in their understanding of ‘efficiency’. On the one hand, normal interdisciplinarians appeal to efficiency in terms of the path-dependent nature of disciplinary specialisation – the fact that specialisation only increases over time and through sub-divisions in already existing specialities. This is a sense of ‘efficiency’ dictated by the environment – that is, the most economic means of dealing with real complexity discovered in the world. On the other hand, deviant interdisciplinarians appeal to environmental pressures that encourage the creative combination of previously distinct expertises into an all-purpose technique. This is a sense of ‘efficiency’ dictated by the inquirer – that is, the most economic means of dealing with the need to retain a unified sense of purpose in the face of centrifugal forces in the environment. Both glosses on ‘efficiency’ can lay claim to the title of ‘more evolved’, the one in a divergent Darwinian and the other in a convergent Lamarckian sense (cf. Arthur, 2009; Fuller, 2015: Chap. 6).
2.2 How Expertise Appears to the Deviant Interdisciplinarian

While expertise certainly possesses a socially situated character, the ‘situation’ is largely defined by the ‘expert’, whose expertise is grounded not in the situation itself but in the expert’s professional accreditation. This typically involves specialised training, but not necessarily prior experience with the situation. In this respect, the etymological root of ‘expert’ in ‘experienced’ sends an equivocal signal. The ‘prior experience’ may consist of no more than the expert’s education and/or direct acquaintance. Thus, the knowledge on which experts draw is alternatively cast as based on ‘templates’ or ‘precedents’, allowing for both a rationalist and an empiricist basis for the epistemology of expertise, respectively. Moreover, this ambiguity strengthens the expert’s hand in justifying his or her own practice. Even if the expert has never directly encountered the current situation, s/he may claim that it belongs to a kind of situation that others in the expert’s field have previously encountered. In this way, the unfamiliar is rendered familiar to the expert in a way that secures the confidence of the client.

Moreover, this is no mere social constructivist nicety. Anyone who contracts the services of an expert effectively licenses the expert to exercise discretion over the matter of concern to the client, which remains vaguely defined yet no less urgent prior to the expert’s intervention. Someone seeking the services of a physician for an ailment is a paradigm case (Fuller, 1988: Chap. 12). The client expects the expert to define the situation in a way that addresses the client’s concern. But this entails approaching the situation in ways that differ from the client’s default modes of epistemic access. This shift in perspective aims to convert the situation of concern into a ‘soluble problem’. The ‘success’ of the expert-client transaction is judged primarily in terms of the client’s acceptance that the expert has made a ‘good faith’ attempt to solve the client’s problem. Whether the client’s problem is actually solved – or the client simply comes to understand the nature of his or her situation better – is a secondary concern. Indeed, if the client wishes to contest the expert’s handling of the client’s problem, then other experts of the same kind need to be engaged in the ensuing litigation to determine the occurrence of any ‘malpractice’. Malpractice is not something that clients can judge for themselves without additional expert input. However, if the expert is found guilty of malpractice, then the expert may be formally expelled from the professional peer group. In that case, the knowledge possessed by this defrocked expert would no longer count as expertise, even though the content of the defrocked expert’s knowledge would not have changed.

This account of expertise does not fit comfortably within the competing stereotypes of knowledge in modern epistemology: knowing how and knowing that, the former associated with practices and the latter with propositions. The difference lies in the ontology of knowledge and hence the mode of access properly called ‘epistemic’ with regard to the relevant objects. Expertise involves a rather different approach to epistemology – a different ontology of knowledge, if you will – one that is perhaps most familiar from the religious sphere. Here we need to recall that when ‘expert’ started to be used regularly in legal proceedings (i.e., ‘expert witnesses’) in Third Republic France, the non-expert was called a ‘lay’ person, a word whose implications philosophers appear to be curiously oblivious to, even though it is still regularly used in public discussions of expertise. The original suggestion was that experts constituted a secular clergy, perhaps even a replacement for the Roman Catholic version in terms of their authority. To be sure, this characterisation has not always been made to the advantage of the experts over the clergy, but it is worth dwelling on why the comparison has stuck over the years to understand the distinctive epistemology of expertise. Perhaps the most intuitively straightforward way to get to the heart of the matter is to compare a priest and, say, a physician or psychiatrist, focusing on their epistemically relevant similarities. Much of what the secular experts say and do could be – and has been – done by priests in the past and, in some countries, even to this day.
Moreover, in terms of ‘knowing how’ and ‘knowing that’, there is no reason to think that the reliability of one group has been better than that of the other, with regard to the efficacy or truth of what they have done or said. We simply lack a consistent track record of what might be called ‘client satisfaction’, which takes seriously the client’s own judgements of their transactions with experts. After all, alongside the many homoeopathy patients who end up accepting their death from cancer are the many chemotherapy patients who similarly accept their fate. Both groups may be satisfied with the choices they made, even if third-party social epistemologists are not. Moreover, it is not clear whether there is a specifically ‘epistemological’ problem here – or a more straightforward ‘cultural’ problem about how people should conduct their lives.

In any case, we live in a world in which a wide variety of drugs and treatments can be administered only by qualified medical practitioners, even though others not so qualified may display the same level of competence in ‘knowing how’ and ‘knowing that’ with regard to such matters. Here I mean people who act just as the experts would in the relevant conditions, but who lack expert authorization. In a sense, it is the complement of the ‘placebo effect’, whereby people claim to be cured with ‘fake drugs’ because someone they regard as an expert has prescribed them. In contrast, I mean non-experts who prescribe what the relevant experts would but are not trusted (as much) because they are not licensed as experts. To be sure, sometimes these people successfully masquerade as experts, at least until they overplay their hand by making a serious practical error or seeking to leverage their pseudo-expertise. This ‘overplaying’ amounts to the pseudo-expert provoking an investigation into credentials that s/he would otherwise have been presumed to possess. At this point, recall our earlier observation that even when an expert makes errors that are so serious that the client files a claim of malpractice, the expert’s peer community plays a significant role in determining the liability of the charged expert. In effect, the possession of expert credentials may serve to shield the practitioner from forms of punishment to which the pseudo-expert is automatically liable, which may then be amplified by any deception involved. Yet the cognitive error and the material harm may be the same. What accounts for the difference – and should there be a difference?

Here social epistemologists have been inclined to appeal to something called ‘trust’, which functions mainly as a euphemism for a kind of risk-taking, whereby a portion of one’s own sphere of judgement is forfeited to someone else who is presumed to be more capable of acting on that person’s behalf. ‘Forfeited’ is used deliberately to convey the fact that the person taking the risk recognizes their own incapacity to address a matter of concern to them. Political scientists and economists, closely following the conceptual framework of the law, characterize the client-expert relationship as one of ‘principal-agent’, which captures well the voluntary subordination of the will that is involved in clients’ ‘trust’ of experts (Ross, 1973). In religious times, people trusted priests because of their faith in ‘God’, however defined. Nowadays people trust medical practitioners because of their faith in ‘Science’, however defined. In both cases, ‘truth’ is the philosophical term of art used to cover the object of the faith shared by the principal and the agent, with the proviso that the relationship between the principal’s trusted agent and that larger object of faith is bound to be imperfect, yet epistemically superior to the principal’s own relationship to the object, especially with regard to the matter of immediate concern to the principal.

The epistemically distinctive feature of expertise, then, is the distributed nature of the process of knowing, whereby the principal knows through the agent, in both main senses of ‘knowing’ in modern epistemology: ‘knowing that’ something is true and ‘knowing how’ to apply the truth to reach a desirable practical outcome. The original context for ‘principal-agent’ theory is relevant for understanding what exactly is ‘distributed’ here. Clearly agency is distributed, which may invite thoughts of an ‘extended mind’. But materially speaking, risk is distributed, such that the principal aims to minimize the cost of personal misjudgement by placing a bet on the agent’s chance of making a better judgement on the principal’s behalf. In effect, what social epistemologists substantively mean by ‘trust in experts’ is a version of ‘risk pooling’ in the insurance and financial trades. Both Descartes and Pascal, in their different ways, were arguing against engaging in such activities: One should bear the risks for oneself, with Descartes being somewhat more bullish than Pascal about the likely outcome. Hence, while both were avowed Christians, they were widely seen in their day as anti-clerical: They would rather place their faith (or, in Descartes’ case, ‘thinking’) in God directly than in a priest who then exercises that faith on their behalf.

In this sense, the expert stands in a double-sided relationship of ‘representation’: on the one hand, to the client for whom the expert is a trusted surrogate and, on the other hand, to the object of knowledge to which the expert must remain loyal. I do not wish to argue conclusively here whether ‘representation’ is used univocally or equivocally in these two contexts. Suffice it to say, social constructivists (me included) follow in Hobbes’ footsteps in treating the two uses univocally, which in turn reflects the long historic drive toward increasing literalism in language, starting with Augustine and the Franciscan scholastics (Duns Scotus, Ockham, etc.), running through the Protestants, Bacon, Descartes, Pascal and their secular progeny, not least Rousseau. In practice, contrary to its contemporary connotations, this ‘literalism’ has fostered the view that one’s language should reflect what one thinks because speaking is an act of self-authorization: You should not say it unless you mean it. Here words like ‘belief’ and ‘representation’ function as euphemisms for this more potent idea. All the above philosophers and theologians are haunted by the creativity of God’s Word (logos), which ‘literally’ applies to humans as having been created imago dei (in Augustine-speak), notwithstanding the Fall recounted in Genesis. The modern preoccupation with the use of language to authorize control in both the legal and scientific spheres is its secular descendant, ‘positivism’ being its most self-conscious expression (Turner, 2010: Chap. 3; cf. Passmore, 1961). The arc of Anglo-German thought from Mill and Russell to Wittgenstein and Kripke made this concern central to what became ‘analytic philosophy’ in the twentieth century. It also helps to make sense of the persistent controversies surrounding expertise as a form of knowledge.
The expert’s discretion to define the client’s situation for purposes of ministering to it amounts to a power to name, understood as a pretext for actions of a certain sort that the expert advises or takes on behalf of the client in an otherwise indeterminate situation. More to the point, the expert purports to resolve this indeterminacy by replacing what the client has identified through a ‘proper name’ (i.e., something unique) with a ‘definite description’ (i.e., a specific complex of repeatable properties). The question then is whether that definite description – or even some succession of such descriptions – is likely to be exhaustive of the situation originally identified by the client. Practically speaking, the answer is bound to vary from case to case. Theoretically speaking, resistance to expert advice may be seen as akin to the refusal to reduce proper names to definite descriptions. In the philosophy of science, this sensibility is reflected in historically informed doubts that the truth about some ‘rigidly designated’ (in Kripke-speak) part of reality is likely to be reached by successively improved versions of the descriptors used in current authoritative accounts of that reality – Hilary Putnam’s ‘pessimistic meta-induction’ (Putnam, 1978). Philosophers routinely call this position ‘realism’, but it is more Platonic than Aristotelian in inspiration: It presumes a conception of reality to which we may gain access but not necessarily by our currently established means, which include those whom we now deem ‘expert’. Truth be told, it is also the implied metaphysics of ‘the end justifies the means’, which licenses even substantial deviations from established expertise if one – and perhaps more importantly, one’s ‘society’, however defined – is willing to absorb the risks involved.

The very idea that expertise should be seen as something above and possibly beyond competence arose in the 1980s and was associated with the drive to automate complex decision-making in so-called ‘expert systems’, which remain a staple in the ‘knowledge management’ field in business schools (Fuller, 2002: Chap. 3). It was a response to the long-proven failure of human experts – starting with Paul Meehl’s research in the 1950s – to perform as well as mechanical counterparts in diagnosing various medical and psychiatric disorders, at least as judged by professional handbook standards. In epistemological terms, it had been demonstrated that human expert judgement is ‘unreliable’ in real-world settings (Faust, 1984). This led ethnographers to interview human experts to understand how they would process cases as they made their decisions under a variety of hypothetical conditions. Implied in this strategy was the assumption that the experts’ basic approach was sound but that it ran into difficulties when they reached their ‘natural limits’, understood here as limits to both one’s professional training and one’s processing capacities, including hot and cold cognitive biases. The intended result of this research was an ‘expert system’, which consisted of a user-friendly computer interface informed by a decision-tree-styled algorithm, the design of which was based on the expert interviews. It inspired many cognitive scientists and philosophers of science, me included, to countenance that a sufficiently advanced form of artificial intelligence may be the true reference class of the various qualitative and quantitative accounts of ‘rationality’ that philosophers have historically proposed (Fuller, 1993).
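To make the architecture concrete, consider the following minimal sketch, written in present-day Python purely for illustration. It shows the kind of decision-tree ‘expert system’ just described: judgements elicited from expert interviews are hard-coded as a branching questionnaire that the machine then walks through with a user. The questions and verdicts are invented for this example, not drawn from any actual system of the period.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        """One point in the decision path distilled from expert interviews."""
        question: Optional[str] = None   # what the interviewed expert would ask next
        yes: Optional["Node"] = None     # branch taken on a 'yes' answer
        no: Optional["Node"] = None      # branch taken on a 'no' answer
        verdict: Optional[str] = None    # leaf: the codified expert judgement

    # Hypothetical diagnostic rules, standing in for interview-derived knowledge
    TREE = Node(
        question="Does the patient have a fever?",
        yes=Node(
            question="Has the fever lasted more than three days?",
            yes=Node(verdict="Refer for further tests"),
            no=Node(verdict="Advise rest and fluids"),
        ),
        no=Node(verdict="No action indicated"),
    )

    def consult(node: Node) -> str:
        """Walk the tree by querying the user, reproducing the expert's fixed path."""
        while node.verdict is None:
            answer = input(node.question + " (y/n) ").strip().lower()
            node = node.yes if answer.startswith("y") else node.no
        return node.verdict

    if __name__ == "__main__":
        print(consult(TREE))

The sketch makes plain what is at stake in the debate that follows: everything such a system ‘knows’ is frozen at design time, so it can only rehearse the interviewed experts’ pre-specified paths of judgement.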
The backlash against this entire strategy, philosophically inspired by Hubert Dreyfus and still informing Harry Collins’ approach to expertise, was to argue that any such ‘expert system’ would always be insufficient to replace the human expert (Collins, 1990; Dreyfus & Dreyfus, 1986). The backlashers claimed that human performance in the relevant domain minus the above ‘natural limits’ would always be better than the ‘debugged’ human represented in the computer algorithm programming the expert system. Their stance reflected a larger background concern – namely, that an advanced form of artificial intelligence might significantly supersede human performance in the sorts of complex cognitive tasks traditionally seen as the exclusive preserve of humans. Unsurprisingly perhaps, it resulted in a sharper distinction being drawn between ‘competence’ and ‘expertise’, almost as a proxy for ‘machine’ and ‘human’. The intended contrast was between relatively routine domain-specific judgements, which a well-programmed machine might deliver, and more ‘creative’ judgements that may suspend some of the problem-solving constraints governing the domain, but without completely undermining the domain’s epistemic framework.

Nevertheless, this did not deter artificial intelligence enthusiasts following the lead of Herbert Simon (myself included), who believed that the competence-expertise distinction could be deployed to capture the difference between, say, on the one hand, Kuhn-style ‘normal scientists’ in classical physics who were incapable of thinking outside of their paradigm to solve the long-standing problems facing Newtonian mechanics, and, on the other hand, Einstein, Heisenberg and the other early twentieth-century revolutionaries who managed to radically transform physics without destroying it altogether (Langley et al., 1987; Nickles, 1980). Subsequent research drew the implied distinction in less world-historic terms, but the intuition guiding it is clear enough. It is one thing to acknowledge that Einstein and his comrades dealt with outstanding physics problems in a different frame of mind from their more classical colleagues, and another to say that the success of their approach amounted to their ‘knowing something more’ than their classical colleagues, in some univocal sense of ‘know’. It would be reasonable to grant the former but deny the latter. Indeed, a classical physicist such as Henri Poincaré was probably more competent than Einstein by the academic standards of the day, yet that did not prevent Einstein from proving more expert in accounting for relative motion. These are good prima facie grounds for concluding that competence and expertise bear some sort of orthogonal relationship to each other. Let me briefly try to tease out the nature of this ‘orthogonality’.

What unites this world-historic case of ‘expertise’ with more ordinary cases involving, say, doctor and patient is an acceptance that the decision-making context is open. In other words, the ‘normal’ ways of making sense of the situation are suspended – to an extent that remains to be specified – so that certain ‘abnormal’ ways of dealing with it are licensed. Due to the normalization of expertise in contemporary society, it is easy to forget the alien – perhaps even ‘incommensurable’ – character of how a doctor typically approaches a patient. Nevertheless, patients license that alien treatment because they have come not to trust their own judgement on matters of health, notwithstanding their personal nature. Similarly, what Kuhn (1970) called ‘revolutionary science’ is made possible once normal scientists take an estranged stance toward their ‘normal science’ practices because of their failure to solve long-standing puzzles of central concern to them.
Kuhn calls this shift in orientation a ‘crisis’ – and it reflects a recognition of the limits of what heretofore had passed for ‘competence’. To be sure, the intuitiveness of Einstein’s superior expertise depends on a retrospective evaluation of the different problem-solving approaches taken by him and Poincaré – that is, after their uptake by the relevant peer community in physics. By that standard, Einstein succeeded where Poincaré had failed. I shall return to this point shortly.

What the above suggests is that expertise, rather than being an incremental or even a step-change advance on competence, operates in a radically different cognitive space in which context is king. Competence is about knowledge in a closed system, expertise about knowledge in an open system. Imagine Einstein travelling back just a bit in time – to, say, 1850, around when his parents were born. Would his approach to problem-solving have been seen as expert or crazy by his physics colleagues? The answer would depend on the likelihood that the corresponding peer community would consolidate around Einstein’s treatment of the speed of light as a constant in the understanding of motion. Alternatively, fast forward into the future that Simon and other AI enthusiasts have envisaged – one in which an Einstein-like computer could not only adjust the parameters of the variables in its programme but also change variables into constants, and vice versa, resulting in a substantially different programme. Would such a ‘superintelligent’ machine, capable of projecting paradigm shifts out of whole cloth, be regarded as a salutary revolutionary agent or a threat to the entire scientific enterprise, if not the human condition generally? Much will depend on both first-order views about the state of science and second-order views about the conduct of science at the time such a machine becomes available.

The temporal character of expertise evaluations points to the inappropriateness of ‘reliability’ as a standard for judging experts. As a methodological concept associated with what is sometimes called ‘internal validity’, reliability is about the regularity with which the same conditions bring about the same effects. The term ‘mechanism’ is often used both literally and metaphorically to describe something that encompasses the ‘reliable’ relationship. However, if the conditions are not fully specified, then it is not possible to establish that relationship. Yet that is precisely the sort of situation in which a client would engage an expert – and part of that engagement would involve granting a license to the expert to complete the specification of the conditions, which in turn will circumscribe the interpretation and treatment that constitute the response.

Insofar as a specific level of competence is required for expertise, its evaluation occurs far from the typical context of use. I refer here to the process by which experts acquire professional credentials, which may involve passing specific academically and practically oriented examinations. In addition, matters of competence may be central to the adjudication of a malpractice suit against an expert – but again, with the expert peer community playing a crucial determining role. However, if the expert receives no formal complaints in the aftermath of an engagement with a client, then the expert is presumed to have been competent, however s/he acted. In sum, what passes for competence in expertise is really the background support of the expert peer community, which, through its own mechanisms and independent of any context of use, has come to invest its trust in the expert in question.
Their willingness to vouch for the expert in what are typically open-ended conditions of practice can make all the difference in determining the appropriateness of an expert’s actions. The expert community is effectively the corporate underwriter of expertise. Its power is regularly revealed these days by Pope Francis, who has allowed high-ranking members of the clergy to be tried for various alleged forms of malpractice in civil courts after having withdrawn the immunity of sacred office. This allows the court to judge whether the acts in question would have been appropriate, had they been committed by someone unqualified in the spiritual expertise associated with holding sacred office. Comparable secular examples might involve enquiries into the treatment of (human or animal) research subjects in laboratory experiments without presuming the privilege of the scientific vocation.

That ‘competence’ turns out to mean little more than the corporate underwriting of the expert community is obscured by a confused conception of ‘reliability’ in epistemology, especially its ‘naturalistic’ forms. The confusion comes from trying to capture at once two senses of ‘reliable’ in ordinary language: on the one hand, the methodologically relevant idea of regular occurrence, and on the other, the morally relevant idea of trustworthiness that the previous discussion highlighted. As we have seen, these are quite different ideas that normally inhabit different cognitive spaces. But it would require another paper to examine how this confused conception of reliability has wreaked havoc in the recent epistemology literature. Nearly three decades ago, I characterised this confusion as phlogistemology, in homage to that hallowed pseudo-substance, phlogiston (Fuller, 1996).
2.3 Normal and Deviant Interdisciplinarity as Alternative Ways of Organizing the University

If normal interdisciplinarity aims to exfoliate the complexity of reality and its deviant counterpart aims to recover some lost unity of knowledge, then how is the historic relationship between philosophy and science cast in the two cases? In normal interdisciplinarity the philosopher recedes from the first-order field of epistemic play in order to referee jurisdictional disputes between disciplines, which typically turn on the need for conceptual clarification and logical analysis of evidence. She acts as an honest broker between their competing epistemic claims but without imposing an epistemic regime of her own. This is close to the ‘underlabourer’ role first self-ascribed by John Locke (vis-à-vis the ‘master builder’ Newton) and advocated by many of today’s analytic philosophers of science (Fuller, 2000b: Chap. 6; Fuller, 2006a: Chap. 3). In contrast, deviant interdisciplinarity would have the philosopher use her own understanding of the goal of disciplined inquiry – roughly, an epistemic super-universalism that aims to have all people know all things – as implying standards against which to judge the adequacy of any given disciplinary configuration.

This is how philosophy has laid claim to being the foundational discipline of the university (or ‘queen of the sciences’, when directly challenging theology) since Wilhelm von Humboldt re-launched the university as an institution dedicated to the resolution of what Kant (1996) memorably called ‘the conflict of the faculties’, more about which below (Fuller, 2007b: 208–13; Fuller, 2009: Chap. 1). From this standpoint, both German idealism and logical positivism – though not normally seen as philosophical bedfellows – turn out to be exemplars of deviant interdisciplinarity. Indeed, each provided the paradigmatic gloss for the nineteenth and twentieth centuries, respectively, on what it means for ‘all people to know all things’. It is different from the sense of ‘unity of knowledge’ originally promoted by Auguste Comte under the guise of ‘positivism’ and by his English contemporary William Whewell as ‘consilience’. These would lie between the normal and deviant poles of interdisciplinarity, as both granted a strong integrationist role to a philosophically inspired discipline (sociology for Comte, natural theology for Whewell) but without presuming that everyone should or can have access to such a unified understanding of reality (Fuller, 2007a: Chap. 2; Fuller, 2010b: Chap. 3).

Given that the university is the only institution explicitly dedicated to the indefinite pursuit of knowledge, it should come as no surprise that the normal-deviant divide in interdisciplinary horizons was already present in its original medieval formation. The division was marked in two degrees of equal status that matriculants could receive: Master and Doctor. The former is the prototype for the deviant interdisciplinarian, the latter for the normal one. My old teacher Rescher followed the way of the Masters in seeing himself as the self-sufficient site of interdisciplinary integration, whereas today’s networks of distributed expertise follow the way of the Doctors, for whom interdisciplinarity always implies knowledge beyond the competence of any given individual. It is worth observing – though it cannot be followed up here – that the policy of treating the ways of the Master and the Doctor as sequential in a course of study from ‘undergraduate’ to ‘postgraduate’ training in today’s universities sends profoundly mixed messages about the ends of higher education, since ideally each of the two ways calls into question, if not outright obviates, the need for the other. Put starkly: If one could know for oneself all that is worth knowing, then there would be no need to defer to others; but if there really is too much of value for any one person to know, then the very idea of training self-sufficient individuals is futile. Thus, the Masters would be rid of the Doctors, and vice versa.

In terms of the mendicant orders that staffed the original universities, Franciscans were prominent amongst the Masters, Dominicans amongst the Doctors (Fuller, 2011: Chap. 2). Included in the ranks of the former were John Duns Scotus and William of Ockham, in the latter Albertus Magnus and Thomas Aquinas. Perhaps not surprisingly, the Franciscans inspired heretics, while the Dominicans trained their inquisitors in the period leading up to the Reformation (Sullivan, 2011). In the medieval universities, a Master’s degree empowered one to read, write, speak, listen, observe, calculate and measure – the so-called liberal arts. These skills equipped one with the self-mastery needed to deal with others as equals. This is the basis of ‘humanistic’ education, which by the nineteenth century had morphed into a standard of democratic citizenship.
Students were expected to integrate the liberal arts with their personal experience into a synthetic whole that expressed their unique public persona. In today’s terms, we would say that the Masters located the value of discipline-based knowledge not in its inherent pursuit but in the ‘transferable skills’ it provided for living one’s life. In this regard, the Master literally embodied the life of the mind in the world. For Humboldt, this amounted to training people to be exemplars of Enlightenment, something that Max Weber’s 1918 lecture ‘Science as a Vocation’ tried to recapture amid the uncertainty and ambivalence that then faced Europe.

In contrast, the Doctors were trained to administer specific aspects of reality – the body (medicine), the soul (theology) or the land (law) – that were understood in markedly geographic terms, remnants of which linger in the idea of ‘domains of knowledge’ separated by ‘disciplinary boundaries’. Doctoral training required a demonstration of one’s competence in the management and productive use of what academics still refer to as a ‘field’. The academic degree effectively granted a license to work on that metaphorical plot of land with an expected yield on investment for one’s professional colleagues, the field’s shareholders. Anyone who wished to apply or simply pass through that field of knowledge had to acquire their own license or pay a toll (i.e., at least make formal reference) to a relevant expert. In that respect, the Doctors interpreted the idea of ‘interdisciplinary exchange’ economistically, such that one specialist traded the fruits of his original labours with those of another so that both specialists were mutually enriched in the process, with the overall effect of facilitating societal governance. Abstracting a bit from this characterisation of the distinction between the Masters and Doctors, we can see the pedagogical basis for, respectively, the coherence and correspondence theories of truth, the former focused on the self as the source of epistemic unity and the latter on an ‘external reality’ conceptualised as a mappable terrain (cf. Fuller, 2009: 62–68).

For its first five centuries, the university was mostly dominated by the Doctors, whose conservative bias began to receive uniformly unfavourable publicity during the Protestant Reformation. As we shall see below, what might be called the ‘long early modern period’ – say, the three centuries from the Protestant Reformation (early sixteenth century) to the Age of Romanticism (early nineteenth century) – was a time when the precedence of the Doctors over the Masters was increasingly challenged, though the Doctors would eventually regain the upper hand in the nineteenth century, as universities came to be organized along disciplinary lines (Merz, 1965). In retrospect, the figures of René Descartes and Francis Bacon stand out for their innovative ways of re-asserting the Masters’ prerogative against the scholastic Doctors. I shall now take each in turn.

Descartes’ declaration ‘cogito ergo sum’ was understood, at least by younger contemporaries such as Nicolas Malebranche, as re-asserting humanity’s epistemic overlap with God’s point of view, in line with the biblical claim of our having been created imago dei. What philosophers after Kant still call ‘a priori knowledge’ has carried forward this interpretation – namely, that regardless of the actual history of science, every human is endowed with the wherewithal to reason for themselves from first principles about any point in space and time, which implies the capacity to conclude that knowledge could have developed more efficiently.
(Logical positivism eventually got the most rhetorical mileage from this possibility.) To be sure, as godlike but not full-fledged deities, we may reason falsely, but that does not take away from our godlike ambition and perhaps even its ultimate success, as long as we are not complacent about our epistemic foundations. This image of Descartes was expressly promoted by the French Enlightenment figures Condorcet and Turgot, who were attracted to the idea that the intellectual revolutionary need not recover the meaning of ancient Scriptures because the capacity for receiving and responding to God’s word is ‘always already’ constitutive of humanity’s birthright (Cohen, 1985: Chap. 9).

The Enlightenment followed up Francis Bacon’s particular brand of anti-scholasticism by trying to make good on his avowed desire to re-plant ‘the tree of knowledge’ by requiring that disciplines reflect the logical outgrowth of specific mental faculties – as opposed to knowledge being simply allowed to flourish as esoteric, path-dependent traditions of scholarship (Darnton, 1984: Chap. 5). (Nowadays it might be cast in terms of a curriculum that capitalizes on the brain’s spontaneous operations [Fuller, 2012a: Epilogue].) Such traditions only served to create sectarian differences that, in the case of England, eventuated in civil war in the generation after Bacon’s. As the Masters would have it, Bacon returned the quest for knowledge to the refinement of the entire mind to realize its full potential – that is, well beyond the narrow matter of learning which specific expert is entitled to deference when one lacks the relevant expertise. To Bacon’s Enlightenment followers, this shift in the ground of epistemic legitimation provided a scientific basis for supposing that any individual, simply by virtue of having been born with a fully functioning mind, could learn all that is worth knowing. This radical interpretation of Bacon, which probably came closer to his original intent than what became the Royal Society of London, eventuated in the 1789 revolutionary signature of ‘Liberty, Equality, Fraternity’.

Against such Baconianism stood the clerics who controlled the universities but operated with what les philosophes regarded as confused or obscure conceptions of how the human mind worked, which in turn reflected a superstitious attitude toward how certain ideas have come to dominate the human condition (i.e., what has been is mistaken for what ought to be). The concrete symbol of this revisionist sentiment was L’Encyclopédie, a multi-volume work that was designed to be read not in service of specialised research but in the leisure of bourgeois salons, which would generate conversations closer to what symbolic interactionists call ‘perspective-taking’ than the mutually beneficial exchanges of information favoured by the Doctors.

For Descartes, Bacon and their Enlightenment followers, the ‘scientific method’ functioned as a self-imposed discipline of lifelong learning, a generalisation and extension of the ‘arts of memory’ that checked the superstitious associations we are prone to make between the contingent and necessary features of knowledge, as epitomised in our deference to clerical experts for reasons having more to do with the relatively exclusive means by which they have acquired their knowledge (i.e., the contingency of their professional training) than any proof that they possess the requisite competence to address a given problem. In this superstitious frame of mind, to give it a theological spin, we too quickly read signs of the eternal in the temporal.
Thus, the clergy, God’s self-appointed representatives, end up being trusted as if they were the deity himself. Galileo’s reckless precursor, Giordano Bruno, had been martyred in 1600 for insisting that everyone could realize their divine potential by comprehending their current beliefs as deductions from nature’s first principles, presumably as laid down by a creative deity indifferent to time, for whom the ‘order of knowing’ coincides with the ‘order of being’. Put bluntly, we might re-enact in our own minds how God unfolded Creation, and the coherence of that thought process – we would now say the successful execution of that simulation – would secure its validity (Yates, 1966). What eventually became the ‘scientific method’ – and later the Enlightenment ‘sapere aude’ and still later, and more prosaically, ‘Think for yourself!’ – effectively routinized Bruno’s dangerous charisma. While Francis Bacon envisaged that a state-funded ‘House of Solomon’ would regularly perform experiments in the name of the scientific method, that would have been only the specialist wing of a mental regime that would ultimately set the standard of right thinking in everyday life, as J.S. Mill’s canons of inductive proof have sometimes been treated (Lynch, 2001). In this respect, Bruno was the first ‘democratic intellect’ who openly disputed authoritative opinion from a standpoint that he took to be available to everyone capable of mastering the arts of memory. His heroic status simply reflects the fact that there was hardly anyone quite like him in his day. Galileo at once learned from but kept his distance from Bruno, whom he had beaten for the chair in mathematics at the University of Padua. The closest Protestant equivalent to Bruno’s style of epistemic self-assertion was Michael Servetus, whom John Calvin had burned in Geneva about fifty years earlier.

By the twentieth century, epistemologists had come to recast the distinction between the contingent (aka ‘psychological’) and necessary (aka ‘logical’) dimensions of inquiry in terms of the contexts of ‘discovery’ and ‘justification’, the latter functioning as the secular residue of the divine standpoint sought by Bruno, but without the administrative apparatus envisaged by Bacon (cf. Laudan, 1981: Chap. 11). While in the UK it is common to cite William Whewell as the key nineteenth-century figure in this sublimation of science’s divine aspirations, pivotal in Germany was Hermann Lotze, the physician-philosopher often credited with having canonised both the fact-value and the truth-validity distinctions that served to revolutionise methodology in the early twentieth century (Schnädelbach, 1984: 107; Heidelberger, 2004: Chap. 8).

As the Scientific Revolution wore on, it became more widely accepted that a properly organized mind might know, if not everything, at least the limits of its own knowledge, both empirically and conceptually. In this context, Kant’s ‘transcendental’ method may be understood as an attempt to define philosophy as the second-order discipline dedicated to the pursuit of deviant interdisciplinarity – that is, the discipline tasked with providing unity and purpose to the other disciplines, which left to their own devices would pursue their own parochial knowledge interests to the exclusion of the others, and thereby fail to realize their synergistic potential. Kant (1996) made this point most concretely in his late essay, The Conflict of the Faculties.
This polemical essay fully modernised the Master’s standpoint as a call for university reform, one that inspired Wilhelm von Humboldt, in his capacity as Prussian Minister of Education, to re-dedicate the university as the environment for students to learn to think for themselves. In the meanwhile, the image of the autonomous thinker had received a Romantic makeover – less the heretical Bruno and more the polymathic Johann Wolfgang von Goethe – that is, someone who integrated several fields within himself to great effect, leaving an indelible trace on politics, science and art, but without turning himself into a martyr in the process. Indeed, the Humboldtian university was designed to generate just such individuals on a regular basis, each a unique integrator (a ‘genius’) of all that is worth knowing.
2.4 Deviant Interdisciplinarity after Philosophy: From Naturphilosophie to Biological Science Without denying the extent to which Humboldt’s vision was realized in nineteenth century Germany, philosophy itself fell into pronounced decline after Hegel’s death in 1830, as the discipline was seen as having slid into theology’s old dogmatic role of pre-emptively restricting the development of empirical knowledge in the name of ‘anticipating’ it (Collins, 1998: Chap. 12). Nevertheless, arguably the two most innovative sciences of the nineteenth century were attempts to operationalize the post-Kantian idealist take on the natural world, Naturphilosophie. These sciences, psychophysics and thermodynamics, were concerned with the conversion of material differences of degree to those of kind (i.e., ‘quantity into quality’ in Hegel- speak), or as Ernst Cassirer (1923) memorably put it, the conversion of ‘substance’ into ‘function’. In this context, idealism’s Holy Grail, the ‘unity of subject and object’ was made experimentally tractable in terms of control over the physical conditions (the object pole) needed to alter sensory experience (the subject pole). Activities that in the eighteenth century would have been seen as tracking the ‘life force’ were thus increasingly subject to mathematical formulation as exchanges in ‘energy’, even as the latter term stood equivocally – and, after Einstein, unsuccessfully – for both a phenomenological and a physical aspect of nature (Rabinbach, 1990). The spirit of deviant interdisciplinarity travels most clearly after Hegel through this ‘energeticist’ tradition, which over the past two centuries has aimed for a ‘re-enchanted’ sense of science, which is to say, an epistemic state that is accountable to empirical reality yet in terms meaningful to the development of the human condition (Harrington, 1999: Chap. 1). Two of the most notable works in this vein from the mid-twentieth century were Koehler (1938) and Bertalanffy (1950). The prospect of epistemic unity through translation principles was advanced from the 1850s to the 1950s – at first quite literally between forms of energy but increasingly between discourses subsumed under a common semantics (or ‘metalanguage’). Whereas the neutral monist Hermann von Helmholtz focused on the human body itself as the ultimate transducer of caloric to psychic energy, the logical positivist Otto Neurath aspired to a kind of pan-disciplinary Esperanto through which the knowledge content of any specialised discourse could be communicated to any ordinary person (cf. Holton, 1993; Mendelsohn, 1974). In both cases epistemic unity was presented as potentially resolving metaphysically inspired political
differences, in effect updating Bacon's juridical understanding of science alluded to earlier – indeed, now often extended to the cause of international diplomacy, a function that philosophers since Leibniz had dreamed of science serving (Berkowitz, 2005; Schroeder-Gudehus, 1989). A key transitional figure between these two translation-based paths to unity is the last great defender of the universal energy principle, the chemist Wilhelm Ostwald, whose final (1921) edition of Annalen der Naturphilosophie included Wittgenstein's Tractatus Logico-Philosophicus.

However, these philosophical efforts to unify the sciences by finding a point neutral to their operation succeeded more at keeping increasingly disparate forms of inquiry under the umbrella term 'science' for purposes of public legitimation than in steering the conduct of inquiry. For the latter purpose, we may turn to two more interventionist ways of pursuing deviant interdisciplinarity, each turning on a distinct image of the natural philosopher. She may be seen as either alchemist (i.e., someone who could make the most with the least through skilful recombination of elements) or architect (i.e., someone who could design the conceptual blueprint for others to fill in empirically). In the balance hangs whether the quest for unified knowledge is interpreted as aiming for reduction or integration.

Let us consider briefly the rather different ways in which 'unity' is conceptualised in the two cases. Reduction refers to the relations among the objects of knowledge, whose ideal state involves minimising their possible interactions to achieve maximum effects. Thus, one is simultaneously trying to understand both the fundamental principles of matter – what the medieval alchemists called minima naturalia – and the prospects of their outworking. In contrast, integration is a process unique to the individuals whose idealist-style education equips them with a conceptual cartography of permissible and preferred interdisciplinary traffic. What they make of this may lead to further social conflict but it will be among people who can be presumed to be knowledgeable of all that might be realized. At this point, politics takes over from science as the clash of worldviews. The specific differences in the disciplinary structure of the national university systems of Europe forged over the nineteenth century were largely institutionalised negotiated settlements of this sort (Merz, 1965).

Behind these two versions of deviant interdisciplinarity are alternative images of the deviant's claims to 'godlike genius'. Whereas integrationists aim to 'imitate' God in the strict sense of retaining an ontological distance from the deity as they contest the best way to interpret the divine plan (i.e., their mental maps aspire to be a copy of the conceptual structure of the divine plan), reductionists would lay claim to occupy the divine standpoint for themselves, on the basis of which they would reconstruct (a possibly improved) nature from scratch, a 'second creation', to recall the phrase used equally for the twentieth century revolutions in nuclear physics and biotechnology (Crease & Mann, 1986; Wilmut et al., 2000). In terms of general approaches towards inquiry, reductionists are recognisable as practitioners of the Naturwissenschaften, integrationists of the Geisteswissenschaften. We normally distinguish the two types of sciences in terms of their extreme cases – say, atomic physics and historical criticism.
However, in what follows I focus on biology as the borderland discipline, or 'liminal field': Would biology be the ultimate synthetic molecular science, as revealed by the Naturwissenschaften, or
the foundational 'science for life' underpinning the Geisteswissenschaften? As the astute practitioner/historian of molecular biology Michel Morange (1998) has observed, this dual aspect of biology's identity remains in the relatively independent research trajectories of, on the one hand, the mechanical sense of 'life' fostered by biotechnology and, on the other, the more holistic sense retained by modern evolutionary theory. The former is reductionist, the latter integrationist.

On the one hand, the reductionist proposes to assist scientists to arrive at the foundational principles of nature that are responsible for everything, both as they are and as they could be. As suggested above, this project was associated with alchemy, which always existed in the interstices of what we would now call chemistry and political economy (e.g., the conversion of lead to gold), which in turn explained the threat it posed to both secular and sacred authorities. But after Erwin Schrödinger's (1955) famous 1943 Dublin lectures, 'What Is Life?', the alchemical ambition was decisively transferred to the interface of chemistry and biology, motivating physical scientists to fathom the structural-functional character of genes, the field that the Rockefeller Foundation itself had christened a decade earlier as 'molecular biology', which eventuated a decade after Dublin in the discovery of DNA's double helix structure (Morange, 1998: Chaps. 7 and 8; Fuller, 2021b).

From the DNA epiphany came three interdisciplinary styles of reductionism that persist to this day: (1) synthetic biology, closest in spirit to Schrödinger, which tests various molecular combinations for their genetic consequences, a project academically first championed by Walter Gilbert (1991) and popularised by Craig Venter; (2) strategic chemical interventions to regulate gene expression, following the lead of François Jacob and Jacques Monod and now a mainstay of the pharmaceutical industry; (3) most ambitiously, the project promoted by Eric Drexler (1986) under the name of 'nanotechnology', which would realize the alchemical dream of creating new beings from the fundamental re-organization of matter. This reductionist project has always had a spiritual side, harking back to what the historian of science Alastair Crombie (1994) called 'maker's knowledge', the idea that motivated the hypothesis-testing style of scientific reasoning: namely, that as creatures 'in the image and likeness of God', we are capable of divining the grammar of life so as someday to become capable of reverse engineering the process, at which time we might be able to improve, complete or even divert the course of creation – depending on the brand of heretical theology to which one subscribed.

On the other hand, the integrationist projects the prospects for knowledge by imaginatively transcending the empirical limitations of the special disciplines. From this perspective, the emergence of qualitative or subjective difference may be seen as marking an increase in matter's self-consciousness, again operationalised as increased powers of discrimination, culminating in the reflective philosopher-scientist as (in his/her person) the ultimate register of differences. Thus, Gustav Fechner (1801–1887), the most intellectually adventurous student of Schelling, the greatest of the original Naturphilosophen, spent his career trying to wed a mathematical psychophysics with a metaphysical panpsychism, as if Leibniz had undergone a Romantic makeover (Heidelberger, 2004).
Fechner had a vitalist view of the world according to which Newton’s inertial bodies and Goethe’s self-legislating
individuals appear as polar states of matter-in-motion: The former constitute potential yet to be realized at the level of both knowing and being (i.e., 'dumb matter'), the latter potential fully realized at both levels (i.e., 'enlightened genius'). Put bluntly, what Newton knew was radically distinct from his being, whereas what Goethe knew was integral to his being. This view attracted practitioners of the Geisteswissenschaften, who saw in it the culmination of the entire scientific enterprise. Its end product would be not only a 'science of life' but also a 'science for life' (Veit-Brause, 2001). In their own distinctive ways, Ernst Mach, Charles Sanders Peirce and William James tried to follow Fechner's lead. The same spirit also drove their contemporary, Friedrich Nietzsche, who was a student in Leipzig when Fechner held the chair in philosophy (Heidelberger, 2004: Chap. 7). But looming in the background of all these integrationist efforts was the spectre of the original evolutionist Jean-Baptiste Lamarck, whose reception has been all too typical of that of deviant interdisciplinarians.
2.5 The Fate of the Deviant Interdisciplinarian: The Case of Jean-Baptiste Lamarck

Deviant interdisciplinarians can be hard to identify and trace in intellectual history because in the fullness of time much of their radical challenge comes to be accommodated and/or distorted by mainstream disciplinary knowledge. From the standpoint of normal science historiography, the deviant interdisciplinarian can be made to look like someone who simply landed on the wrong side of such a wide range of debates that his or her seriousness and sanity may come into question. Such has been the fate of Jean-Baptiste Lamarck (1744–1829), the first curator of the invertebrates section of the French Natural History Museum, who is normally credited with the first explicit theory of the evolution of life forms. The theory was proposed in the first decade of the nineteenth century as the cornerstone of a new discipline that he called 'zoological philosophy' or 'biology'. The latter term stuck, though Lamarck's original scope was much broader than that of today's biological science. For Lamarck, biology aspired to what we would now call a 'grand unified theory of everything', where everything is presumed to have been endowed by the deity with a primitive life force that develops of its own accord (Packard, 1901: Chap. 19).

In the century prior to the Enlightenment, say, in figures such as Hobbes and Spinoza, the preferred term of art for this life force was conatus, which suggested that God's will worked its way through otherwise unruly matter, issuing in some divinely sanctioned cosmic order (Fuller, 2012b). Deist tendencies in the Enlightenment detached God from any direct involvement in the process, resulting in a generic concept of besoin ('need'), the term favoured by the French physiocratic school of political economy in the generation before Lamarck's to describe what two traders had to satisfy mutually in order to constitute a 'just exchange' (Menudo, 2010). Lamarck added a further conceptual distinction: besoin vs. désir,
or 'need' vs. 'want'. The physiocrats, for whom the goal of political economy was a sustainable ecology, treated the two words interchangeably, whereas Lamarck clearly meant désir as a specification of besoin in terms of the particular sensory apparatus through which an organism finds that certain objects and environments enable it to flourish and develop (Bowler, 2005). While I may have no choice over the sorts of needs that are necessary for my survival (besoin), exactly how I satisfy those needs is an open question, the answers to which are provided by an account of my wants (désir).

Lamarck's deviant interdisciplinarian status is complicated by uncertainty over exactly how he would have had the disciplines integrated in aid of a master science of biology. Clearly, for Lamarck, what begins as the expression of divine will ends up as organic functionality, and in that respect, theology eventually yields to biology, corresponding to the increased specification and complexification of the life force over the course of natural history. But this process, while recognisably evolutionary, did not suggest any overarching image of its path. Nowadays it is common to represent Lamarck's intentions in terms of a 'convergent' (as opposed to Darwin's own quite clearly 'divergent') sense of the overall shape of natural history. That image is helped along by Lamarck's view that the continued existence of relatively simple organisms marks them not as atavisms but as the latest spontaneously generated life forms, all destined to ascend what Gillispie (1958) dubbed the 'escalator of being', whereby life re-incorporates or re-organizes itself as it learns to exert greater control over its environment. At the same time, Lamarck regarded freestanding chemical substances of the sort that his compatriot Antoine Lavoisier had begun to call 'elements' in a manner much closer to that of the last great phlogiston theorist, Joseph Priestley – namely, as the fossilised residues of the spent life force. In short, Lamarck inhabited an intellectual universe that was almost inside out from our own. Nevertheless, to his more ambitious later admirers, typically heretical Christians like Pierre Teilhard de Chardin (1955), Lamarck's theory implied nothing less than the history of God's self-realization in the world through a suitably 'escalated' (or 'enhanced', as we would now say) version of Homo sapiens.

Lamarck's project did not prevail, perhaps unsurprisingly, for a host of reasons ranging from charges of empirical disconfirmation and conceptual confusion to outright theological anathema. In his own day, Lamarck's conception of life went against the relatively static notions of health and illness associated with biology's closest disciplinary rival, medicine. He did not begin his inquiries with a specimen of an existing healthy organism but with some imagined (divine?) origin from which the organism was hypothesized to have evolved. (The word 'organism' is ambiguous between individual and species because what might be ordinarily called a 'species' was for Lamarck an extended phase in the generic life process.) Indeed, Lamarck famously asked his students to imagine what might have been the original complex of needs that came to be realized in the succession of forms taken by an organism in a given environment (Gillispie, 1958).
In that case, norms of health are to be found not in an organism’s statistically normal behaviours but in the vector revealed by examining the long-term history of its behaviours in a common environment. Although no less than the great positivist Auguste Comte was impressed by
these insights during his medical studies at Montpellier, and arguably based the modus operandi of his own historical epistemology on them, Lamarck's reception had fallen foul of priests and positivists alike by the second half of the nineteenth century (Burkhardt, 1970). After all, Lamarck was effectively claiming that organisms contain powers that come to the fore only on a 'need-to-respond' basis that are then inherited by their offspring, making it very difficult – if not impossible – to draw a sharp distinction between normal and abnormal behaviour, given that today's abnormality may anchor a new norm for future generations.

Moreover, once Darwin's theory of evolution by natural selection came on the scene in the mid-nineteenth century, Lamarck's overarching vision was distilled into a point of empirical disagreement with Darwin over the nature of hereditary transmission. Lamarck was presented as the champion of what is nowadays known as the 'inheritance of acquired traits', whereby changes to an organism's body can be passed to the organism's offspring. This thesis is normally seen as having been discredited experimentally by the work of August Weismann, who in 1889 showed that cutting off the tails of five generations of mice had no effect on the length of the tails of subsequent generations. On this basis, the famed 'Weismann barrier', i.e., the impassable wall separating the somatic and germ cell lines, became a central dogma of Darwinian evolution – albeit several decades before the genetic nature of the germ cell line was properly understood (Meloni, 2016). This last point is significant because greater genetic knowledge has actually provided potential opportunities for re-asserting the by-now stereotyped Lamarckian doctrine of the inheritance of acquired traits. Nevertheless, the Weismann barrier is still routinely brandished in high school biology textbooks as the silver crucifix designed to ward off any remaining Lamarckian vampires.

Yet, Lamarck may be vindicated in the end. The Weismann barrier appears to apply only for the period of evolutionary history that began with the emergence of organisms with a hard cell membrane (i.e., eukaryotes) and has begun to end with the cracking of the genetic code (Dyson, 2007). Between these two moments, the dominant means of transmitting genetic information has been 'vertical', that is, through lines of descent. But both before and after that period in evolutionary history, more Lamarck-friendly 'horizontal gene transfer' may be dominant – facilitated in the ancient case by porous cell membranes (something regularly evidenced in the spread of viruses), in the contemporary case by targeted biotechnological interventions. But in both cases, a change to the organism in its lifetime leaves a clear trace in the offspring, which may itself be enhanced or reversed in subsequent horizontal transfers of genetic information. Interestingly, this hypothesis – epitomised in the slogan 'Evolution itself had to evolve' – was inspired by a microbiologist, Carl Woese, whose origins in biophysics and repeated run-ins with Darwinian taxonomists mark his own career as that of a deviant interdisciplinarian. Nevertheless, Woese eventually succeeded in establishing the existence of a form of life more primitive than any previously acknowledged, now gathered in the kingdom of Archaea, whose character is reminiscent of Lamarck's take on spontaneous generation (Sapp, 2008).
Viewed in the broadest theoretical terms, the Darwinian definition of evolution as ‘common descent with modification’ may come to be seen
as no less limited than Newtonian mechanics is in physics today. In other words, Darwin's and Newton's theories cover their respective target realities at the meso-level but not at the extremes.

Various other Neo-Lamarckian revivals were attempted in the twentieth century – and continue to be mounted today. In the mid-twentieth century, some geneticists – not least a principal craftsman of the Neo-Darwinian synthesis (and Teilhard de Chardin enthusiast), Theodosius Dobzhansky – saw irradiated genes as a potential evolutionary accelerator that could produce Lamarckian effects by Darwinian means. However, the pioneer of radiation genetics, Hermann Muller, warned against this wishful thinking in the impending Nuclear Age as more likely to result in maladaptive offspring (Ruse, 1999: 109–110). A more promising offspring of the same period is 'epigenetics', a term coined by the animal geneticist Conrad Waddington to capture the fact that at least some genes are not inherited in their final form but are shaped in gestation and even early childhood (e.g., by diet or stress levels) in ways that can then be transmitted to offspring. Waddington saw the potential in the concept for explaining and remedying the tendency for class distinctions to harden into biological ones over successive generations of, say, poor nutrition (Dickens, 2000: Chap. 6). Of course, epigenetics in this strict sense is not classically Lamarckian in that the organism's intentionality is not involved. However, Neo-Freudian theorists quickly seized upon the idea in a spirit that may have been truer to Lamarck's (e.g., Erikson, 1968).

While many lessons may be drawn from the treatment that both Lamarck and Lamarckism have suffered at the hands of history, let me conclude by highlighting issues that are of more general relevance to the pursuit of deviant interdisciplinarity. First, in the spirit of the deviant enterprise, Lamarck made a point of calling his epistemic practice 'zoological philosophy', implying that he was aiming for something much more metaphysically ambitious and normatively freighted than, say, Darwin, who presented himself as a natural historian, not a natural philosopher. (Keep in mind that our sense of 'scientist' was hardly available to either of them.)

One telling criticism of Lamarck, typically attributed to the early US developmental psychologist James Mark Baldwin and popularised by the anthropologist Gregory Bateson (1979), bears on the deviant interdisciplinary project as such – namely, a tendency to conflate first- and second-order perspectives on reality (Richards, 1987: Chap. 10). In particular, Lamarck presumed that if a shift in the distribution of traits in a population over several generations brought about adaptive improvement, then it was something that the individuals involved had been trying to achieve. Put that way, Lamarck appears to have fallen victim to the fallacy of division, as he seemed to presume that something present in the whole was also present in the parts. However, if the role of historical order in epistemic progress is taken more seriously, Lamarck may have effectively attributed an emergent second-order awareness to members of a population once they learned the consequences of what might well have been originally fortuitous adjustments. Thus, the insiders come to incorporate the standpoint of the outsider to accelerate their progress, as they render intentional what had been previously unintentional.
(To speak in purely logical terms, the ‘Lamarckian trick’ may be characterised as rendering the ‘extensional’ in
'intensional' terms – that is, what is presented as a property common to the aggregate turns out to have been a property of each of the constitutive individuals.) Nowadays, after Bertalanffy (1950), this is called the 'systems-level' perspective, arguably the secular residue of divine creativity. At least that is one way to gloss the idea raised by Bertalanffy and others who have gone down the deviant interdisciplinarity route in the recent past, namely, that humans, as the only animals without a natural habitat, are compelled to turn everywhere into their home. And the first step involves learning to adopt the 'view from nowhere', whereby every seemingly isolated event is understood as a means to a larger end.
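The logical point can be put schematically. In the following sketch the notation is mine, not Lamarck's, Baldwin's or Bateson's: let F stand for an arbitrary trait and P for a population. The fallacy of division treats the move from aggregate to individuals as a valid inference at a single time, whereas the 'Lamarckian trick' names a process unfolding across generations:

$$\text{Fallacy of division (invalid inference):}\quad F(P) \;\nvdash\; \forall x \in P\; F(x)$$

$$\text{`Lamarckian trick' (diachronic process):}\quad F(P)\ \text{at}\ t_0 \;\leadsto\; \forall x \in P\; F(x)\ \text{at}\ t_1 > t_0$$

Read this way, what is ascribed to the aggregate at one moment becomes, through collective learning, a property intentionally possessed by each member at a later moment.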
2.6 Epilogue: The Fate of Philosophy in a Multi-Disciplinary World

I have presented a rather polarised fate for philosophy in a multi-disciplinary world: either an underlabourer for the various disciplines that have developed from its root (the 'normal' way) or an overlord who treats such variation as itself a problem for which philosophy provides the disciplinary solution (the 'deviant' way). The institutional history of knowledge production over the past two centuries has tended against the latter position, which I nonetheless champion. In that time, pretenders to the role of philosophical overlord have been attracted to a rather general sense of 'biology' from which the discipline bearing that name has increasingly distanced itself, a fact most pointedly illustrated by the chequered reputation of Jean-Baptiste Lamarck.

I have omitted the prospect that philosophy might inform the other disciplines as an equal operating on a level playing field. It might even 'go viral' in the spirit of horizontal gene transfer, which is suited to the flattening of epistemic hierarchies brought about by the internet in our post-truth times (Fuller, 2018a, 2020a). One might think of this as adding a missing ethical ingredient (Tuana, 2013) or providing a facilitating service (O'Rourke & Crowley, 2013). In both cases the point would be that philosophy enables disciplines to become truly 'interdisciplinary', in the sense of collaborating on a common epistemic project that would be otherwise unachievable in their individual capacities.

However, this idea of philosophy as a discipline on par with other disciplines has always been justified more on institutional than intellectual grounds, the unsatisfactory consequences of which are routinely enacted in the so-called 'analytic-continental' divide in Anglophone philosophy, which may be seen as reproducing within the discipline of philosophy an orthogonal version of the disciplinary distinctions that exist at large. In this respect, a maxim typically attributed to Quine rings true: People study philosophy out of interest in either the history of its doctrines or the logic of its arguments – the former veering towards hermeneutics, the latter towards mathematics. Yet somehow the 'Department of Philosophy' needs to accommodate both interests. In that context, which presumes the institutional standing of philosophy as a discipline, the 'interdisciplinary turn' may be seen at the
same time as an outward-looking turn away from frustrating internecine disputes within the discipline of philosophy.

However, there is a more positive way out. It involves taking a page from the Bildungsroman of William James, now normally seen as the first 'professional' US philosopher – which is to say, someone ensconced in an academic chair, and not a popular lay preacher à la Ralph Waldo Emerson or a rogue scientist à la Charles Sanders Peirce. Like many nineteenth-century secular middle-class people seeking a career that provided for both spiritual fulfilment and material security, James originally studied medicine. But out of dissatisfaction with the curriculum, he travelled to Germany where he witnessed the interesting cross-disciplinary hybrids emerging in the wake of the institutional meltdown of idealist philosophy – most importantly 'psychology'. This left James with a metaphysical worldview, called 'neutral monism', which enabled him to see the various academic disciplines as simply alternative ways of organizing experience, which through routinisation solidified into habits of thought. However, these habits could easily generate neuroses that would render the trained disciplinarians socially dysfunctional outside of peer-oriented contexts. Into this context the 'philosopher' would step to break or prevent such habits of thought.

James took this idea quite literally, leading him to oppose the construction of a dedicated building for philosophy on the Harvard campus, what became Emerson Hall (Bordogna, 2008). James believed that the establishment of such a site would inhibit philosophers in their nomadic (not peripatetic!) function, which indeed it did over the years. But interestingly, James did not think of philosophy's role as merely an auxiliary 'research support service'. Rather, his understanding of how specific disciplinary patterns of thought should be broken was informed by an overarching sense of the self-fulfilled person as a 'rugged individualist', a phrase popularised by Theodore Roosevelt, which led James to see philosophy as an especially verbally aggressive form of therapy. In this respect, James never really rejected the deviant interdisciplinary way of the medieval Masters or the German idealists but tried to adapt it to the early twentieth century American context. Perhaps he failed, but that should not stop us from trying to do something comparable.
Chapter 3
Judgement as the Signature Expression of Academic Freedom
Abstract The chapter begins with a discussion of academic freedom as 'positive liberty', which is a form of mutual recognition whereby other people provide the conditions for one's own freedom. Academic freedom – and the corresponding faculty of judgement – is a specific institutionalized version of positive liberty. In the original Humboldtian context, it served as an exemplar for overall social emancipation. However, the exercise of academic freedom in the nineteenth and twentieth centuries was a mixed bag, resulting in much dynamism and innovation but also controversy and social disorder. The role of 'bureaucracy' (a term popularized by Max Weber) is subsequently discussed as a potentially facilitative second-order form of judgement, though it has arguably contributed to a version of academic expertise that threatens to undermine the comprehensive and person-defining sense of 'judgement' promoted by Humboldt. This topic is explored especially in relation to developments in Germany and the United States, in both philosophy and psychology. The chapter ends with a genealogy of 'judgement' as the site of values-based reasoning, starting with the Stoic logical formalization of reasoning in the Athenian courts and extending to religious and secular settings in the modern period.
3.1 Academic Freedom as a Form of Positive Liberty

Academic freedom refers to the complementary rights and obligations that bind teachers and students together as 'free inquirers' in the broadest sense. In the classic Humboldtian sense, which Max Weber refashioned for the twentieth century, students would be free to inquire into the life they might lead, while their teachers would be free to inquire into the life (or 'vocation') that they have chosen for themselves (Fuller, 2009: Chap. 1). In both cases, academic freedom has been historically associated with a distinctive sense of judgement as a mental faculty. The fates of 'academic freedom' and 'judgement' as concepts rise and fall together. The roots of this sense of 'judgement' can be found in the judicial practices of ancient Athens and modern Europe, Stoic philosophy and heretical Christendom, as well as the
aesthetic origins of German idealism and formal logic's emergence from normative psychology. The institutional preconditions for academic freedom were laid down in the late Roman legal classification of universities with guilds, churches, monastic orders and city-states as indefinitely self-reproducing, self-governing social entities dedicated to activities whose value transcends the interests of their current practitioners (Fuller, 2000a: Chap. 4). Even in the thirteenth century the idea was controversial because members of such corporate bodies would be largely immune from laws to which they would have been normally subject as individual family members or royal subjects. The level of controversy was raised further when academic freedom was made the cornerstone of the university in the modern nation-state by Wilhelm von Humboldt in 1810.

As I shall elaborate shortly, this move enabled theology in the German states to be pursued as Wissenschaft – systematic knowledge for its own sake – without specifically pastoral concerns. The distinction is routinely enshrined today in the difference between a largely secular religious studies department in the arts or social sciences faculty and a free-standing divinity school that provides professional training for clerics of particular faiths. However, in its day, the distinction quickly became a source of divisiveness and ultimately civil revolt, the legacy of which is arguably felt in the ongoing 'culture wars'. A flavour of the discontent may be gleaned from Karl Marx's early preoccupation with Ludwig Feuerbach, a theologian who migrated to the natural sciences to produce the cornerstone work in the anthropology of religious belief and ritual, The Essence of Christianity (1841). This general academic tendency toward what was often called naturalism (with a nod to Spinoza and Schelling) and materialism (with a nod to the French Enlightenment followers of Epicurus) fuelled the wave of 'liberal' revolutions in 1848 that set down a clear marker for the separation of Church and State as a general political principle – that is, beyond the separation that had already occurred in the German theology faculties.

It is worth recalling the original context for Humboldt's work, which both John Stuart Mill and Karl Popper later cited as formative in their own thinking. Nearly two decades before founding the University of Berlin, while still in his twenties, Humboldt published a defence of liberalism, The Limits of State Action, in 1792 (Humboldt, 1969). It was a Kant-inflected response to Friedrich Wilhelm II, who four years earlier had made Lutheranism the state church of Prussia, shortly after ascending to the throne on the death of his uncle Friedrich II (aka 'Frederick the Great'), whose famed patronage of the Enlightenment included a defence of religious diversity as serving a strategic nation-building function. More generally, Friedrich Wilhelm's establishment of a state church contravened the separation of church and state that many Protestants – especially the 'dissenters' – had argued was essential to the sort of post-Catholic Christendom that had turned the Enlightenment into a full-blown European cultural movement in the eighteenth century. In this respect, such 'universalising' ideas as treaty-based international law, cosmopolitanism and even the 'republic of letters' were attempts to find secular replacements for various Catholic natural law-based doctrines.
Against this backdrop, Friedrich Wilhelm's actions suggested to Humboldt and other thinkers inspired by Kant's 'What Is Enlightenment?' that the state was trying to replace the role of the church in expressing the will of the people – instead of devolving that role to a generalised freedom of individual conscience, as Spinoza, John Locke and John Toland had originally argued (Israel, 2001). In retrospect, we might say that Humboldt and his allies were concerned that a state-church merger would eventually convert religious authoritarianism into political totalitarianism. That Friedrich Wilhelm's son subsequently consolidated all the Protestants under state control, rendering himself head of a United Church, confirmed this ideological direction of travel. Arguably Hitler's ascendancy in the face of the established clergy's relative quiescence realized even more strongly the fears that Humboldtians had been expressing a century earlier.

Nevertheless, the seeds of a countermovement to Friedrich Wilhelm's centralising tendency had been already sown in Humboldt's day – namely, the division of the theology faculty into scientific (aka 'critical-historical') and pastoral duties (Collins, 1998: Chap. 12). This distinction would provide an institutional microcosm for Max Weber's (2004) landmark thinking about the radical separateness of science and politics. The distinction was originally drawn by Friedrich Schleiermacher, the founder of modern hermeneutics, who happened to be one of the main supporters of the consolidation of the Christian churches under the Prussian monarchy. Yet, at the same time he insisted that theology performed two functions that were best performed separately. The pastoral mission is about kindling the religious impulse, which Schleiermacher saw in terms of our personal sense of finitude generating a desire to re-merge with an infinite being from which we came and have since then been separated (via Original Sin, in the Christian context). In contrast, theology's scientific mission is no different from the scientific mission of any other discipline, which in this case is to study the conditions under which the Christian message has been discovered and developed, with an eye to distinguishing what is essential and inessential to the message's continued promotion. While Schleiermacher himself believed that the two branches of theology worked symbiotically, in practice the scientific side – very much inspired by Enlightenment rationalism – produced an increasingly stripped down ('demythologised') version of Christianity that called into question the institutional legitimacy of the 'church' and even the divinity of Jesus himself. An unintended consequence of Schleiermacher's division was that the two sides of theology worked against rather than with each other, at least from the standpoint of the Prussian monarchy.

To cut a long story short, this dissonance reverberated throughout post-Napoleonic Europe, where Christian churches were increasingly forced into specific political arrangements with various secular rulers. It culminated in the failed 1848 liberal revolutions, on the eve of which appeared The Communist Manifesto, the path to which Marx and Engels had charted two years earlier in the posthumously published The German Ideology. The English translation of this work in the 1960s is normally credited with having launched the 'humanist' Marx, the prequel to the better-known economic determinist version that had dominated twentieth century Marxist interpretation.
The work itself is a series of
critical reflections on the leading scientific theologians of the day, the so-called 'Young Hegelians', who were accused of sowing the seeds of political dissent without considering how to channel that dissent into something that might result in a productive radical transformation of society. As Marx and Engels saw it, these 'liberal theologians', the most philosophically consequential of whom was Ludwig Feuerbach, either lacked the courage of their convictions or simply erred in supposing that 'free' people would spontaneously self-organize into a new social order.

Lest we forget, Humboldt's original liberal vision predates all the above. Moreover, Humboldt was envisaging a more ambitious sense of the 'devolution' of authority than simply the freedom of conscience that the two subsequent generations of Prussian monarchs institutionally suppressed. The Humboldtian vision is epitomized in the resonant phrase that Marx and Engels adapted from the idealist philosopher Fichte, the person whom Humboldt appointed as first Rector of the University of Berlin: the withering away of the state (Kriegel, 1995: Chap. 8). The sort of state that Humboldt and Fichte envisaged as withering away was the paternalistic state, including the 'benevolent despotism' of the likes of Frederick the Great that had enabled the Enlightenment's brand of liberalism to flourish in the first place. To be sure, as the metaphor of withering suggests, the process would take some time but in the long term it would even overtake 'parliamentary democracy', understood as a system where the will of the people is not directly expressed but rather is mediated by elected representatives who largely decide matters by cutting deals amongst themselves, which may benefit their own interests more than those of their constituents. Instead, the people would be collectively sovereign in the sense of an absolute monarch, Rousseau-style, whose decisions would then be implemented by a dedicated civil service. Humboldt held that those who govern best, govern least, in the sense that the governors allow the governed to act in accordance with their will, which includes allowing them to bear the consequences of those actions. It was against this backdrop that John Stuart Mill popularised the word 'responsible' in the mid-nineteenth century to refer to a person's recognition and acceptance of how they are being held to account (McKeon, 1957).

There are many ways to envisage a society organized on such a principle. For their part, Humboldt and Fichte appealed to Kant's 'Kingdom of Ends', a society whose members treat each other not as means to their own ends but as ends in themselves. The key to this idea is that in order to treat others as ends in themselves, one must enable them to pursue their ends, responsibility for which they can then fully take. This means in practice that as one pursues one's own ends, one must become a means to another's ends. The result looks not so different from the sort of 'commercial' society envisaged by Adam Smith (McCloskey, 2006). If that idea seems counterintuitive, it is only because we underestimate the educative role of markets, something that was stressed by Smith's French counterpart in the promotion of commercial society, the Marquis de Condorcet (Rothschild, 2001). For Condorcet, the prospect of multiple producers and consumers flourishing in a market with low or no entry costs forces producers to be more sensitive to consumer needs and consumers to be more discriminating about producers' claims about their
products (Fuller, 2006c). In effect, I learn more about who I am by being forced to discriminate between options. A market provides the range of opportunities necessary for that learning experience, based on which everyone can take a more effective sense of self-ownership precisely by engaging in mutual recognition.

Such a society would arguably be liberalism on steroids, especially when compared with the view of John Locke, who defined the just society in terms of the maximum jointly realizable level of freedom of its members. Whereas Locke had in mind the mutual non-interference of society's members, Humboldt and Fichte thought in terms of the mutual facilitation of members. In Locke's society, you have a right to be left alone; in Humboldt and Fichte's society you have a duty to be recognized. For Humboldt and Fichte, the state would be more proactive than simply the permanent protector of individual liberty, the view that Locke had inherited from Hobbes and adapted for his own purposes. Indeed, for these late German Enlightenment thinkers, the state would be the outright enabler of individual liberty, whose long-term success would be judged by its unobtrusiveness; hence the state's 'withering away'. The idea is that after a while, an appropriately educated ('enlightened') people would not need to be told what is in their best interest, let alone be forced to do something that they do not want to do. They would achieve the Kantian ideal of freely doing what they ought to do. The state would then be in the business of periodically offering a range of choices to the citizenry based on which they can act in concert by delivering a collective vote.

At first, the contrast implied here looks like Isaiah Berlin's (1958) famous distinction between 'negative' and 'positive' liberty. For Berlin, positive liberty is largely about people coming to realize their objective potential, in the sense of the old Stoic aphorism, 'Freedom is the recognition of necessity'. But based on the political experience of the first half of the twentieth century, Berlin saw positive liberty as capable of legitimizing authoritarian or even totalitarian regimes of the sort associated with 'Big Brother' in George Orwell's 1984. Notwithstanding these more authoritarian implications of positive liberty, especially in twentieth century politics, it is striking how little the actual discourse of German idealism bears out this interpretation. Truer to the spirit of positive liberty is what the original neoliberals dubbed 'liberal interventionism', which positioned the state – just as the early twentieth century US Progressives had – as the 'trust-busters' who unblock the flow of capital and labour to ensure their free association in the market, which the German idealists held to be the crucible of what they called 'civil society' (Jackson, 2009).

In academia, the labour-capital nexus pertains specifically to knowledge. Indeed, this is how to think about Friedrich Althoff's radical reforms to the German university system while he was Bismarck's Minister of Higher Education, which produced the world's most dynamic academic culture by the first decade of the twentieth century (Fuller, 2016a: 88–93). One lesson that Althoff learned from the unfolding of the Humboldtian university in the nineteenth century was that academics themselves are not necessarily the most reliable guardians of academic freedom.
The temptation for professors to indoctrinate, if not outright anoint, the successors to their chairs, matched by a willingness of their students to succumb to such treatment in order to secure and enhance
professional status, always threatened to create bottlenecks in the flow of knowledge. These bottlenecks, which continue to travel under such names as 'disciplinary traditions' and 'schools of thought', already struck Althoff as all too redolent of medievalism. Althoff's pointed critique of 'Humboldt in practice' perhaps reflected his own failure to complete the doctorate and his subsequent refusal to accept a professorial chair. (A similar trajectory applies to the principal of a US venture capital firm that nowadays invests in bright students who forgo university to become innovators [Gibson, 2022].) Althoff's solution was to decide all professorial appointments himself, in the spirit of a connoisseur, drawing on a largely confidential network of international advisors. The resulting decisions often went against local wishes, forcing ambitious academics to cash in whatever cultural capital they had amassed at their home institutions to play in Althoff's national lottery (Backhaus, 1993).

Althoff's younger contemporary Max Weber regarded the 'Althoff System' as simply a government attempt to abrogate the university's historic right to self-determination – the naked exercise of power by a 'bureaucracy', a term Weber himself popularized (Shils, 1974). But with the benefit of hindsight, it has become clear that Althoff's policies resulted in an unprecedented cross-fertilization of ideas and approaches that enriched the emerging special sciences and provided a nationwide conception of intellectual competition that raised academic standards across institutions, a model that was in turn emulated by emerging world powers, not least the United States (Fuller, 2016a: Chap. 3). Indeed, Althoff's exercise of ministerial power is best seen as an instance of the liberal interventionist state in action.

Perhaps Weber, and later Isaiah Berlin, failed to make the connection between positive liberty and liberal interventionism because they missed a crucial assumption of the original German idealists – namely, the fallibility of the agents who would be treated as ends in themselves: Even at their most calculating and instrumental, agents cannot anticipate the full consequences of their actions. In this respect, to maximize freedom is by no means to eliminate error. Georg Lukacs (1948) drew attention to Hegel's early reliance on Adam Smith's 'invisible hand' as the source for his own (and Marx's) 'cunning of reason'. Informing this line of thinking – also shared by Kant, Humboldt and Fichte – was the idea that agents who engage in mutual facilitation, be it understood in terms of a 'Kingdom of Ends', a 'civil society' or simply a 'market', unwittingly generate consequences that go beyond the sum of their original ends. Indeed, the consequences may ultimately overwhelm those ends. Lukacs saw Marxism as positioned to be the long-term beneficiary of such an outcome in the case of capitalist civil society. However, a liberal interventionist state might equally benefit in the spirit of what the great late twentieth-century social science methodologist Donald Campbell (1988) called the 'experimenting society', namely, a state that treats society as a living laboratory, replete with feedback loops to allow for collective learning over time.
3.2 The Fate of Judgement in the Hands of Bureaucracy from Germany to America

In the end, 'positive liberty' is best understood as Humboldt and Fichte having followed in Kant's footsteps to reinvent classical republicanism in modern guise – but this time on a much larger scale and allowing for more individuality. Citizens in past republics normally represented themselves, but citizenship carried high entry costs, typically tied to disposable 'wealth' in a sense that approaches the modern sense of capital. Possessing a material stake in the future of your society – 'skin in the game', as the risk theorist Nassim Taleb (2018) would say – made you worth listening to in the first place, which then enabled the audience to discount anything seemingly self-serving in what the speaker says, while continuing to believe that the speaker is trying to speak on behalf of the collective.

The great Enlightenment defender of republicanism, Jean-Jacques Rousseau, believed on historical grounds that republics had to be small and populated by rough equals to maintain the value consensus needed for majority rule to be accepted as binding on everyone. Rousseau's guiding intuition, which informed Kant's formulation of the 'Kingdom of Ends', was that our antagonists are sufficiently like us that we can imagine them as alternative – and occasionally better – versions of ourselves, which would then make it rational to abide by their collective judgement. In short, the cultivation of judgement requires learning how to both deliver and receive judgement. Had Rousseau not died in 1778, he might have supported the American and French Revolutions as blows against tyranny in the name of self-governance, while disowning their constitutional aftermaths, mainly for pretending to do too much because the differences between people – in terms of both socio-economic standing and spatial distance – were too great to command a spontaneous unity of will. In contrast, Kant, Humboldt and Fichte managed to live through both the American and French Revolutions to completion, which they took to mark a step-change in history – namely, that republicanism could indeed be writ large in an increasingly liberal direction.

Notwithstanding their disappointing results, the 1848 revolutions provided additional ballast to an idea behind the 1789 French Revolution, namely, that the nation-state replaces God as the object of faith in secular society, with 'patriotism' supplanting the 'paternalism' of the Christian deity. This helps to explain the strategic longing that Germany developed toward the United States in the nineteenth century as the prospective vehicle for the rational unfolding of universal freedom, what Hegel dubbed the 'world-historic spirit'. It is worth recalling some of the key moments of this relationship.

A convenient starting point for tracing this sentiment is the émigré political economist Friedrich List's application of Fichte and Schelling to understand the Jacksonian era, circa 1830 – which was also when Alexis de Tocqueville visited America. The period marked the first time that the US was governed by people outside of the founding generation – or their offspring, notably John Quincy Adams, whom Andrew Jackson defeated for the presidency in 1828. List contributed a vision of economic protectionism that would bring the states closer together in the
'national interest', understood at once as domestic and foreign policy. The vision was galvanized in the wake of the War of 1812, when the brief British occupation of Washington DC demonstrated that even after nearly two generations of formal independence, the US could not take its national sovereignty for granted. And while Jackson's implementation of this vision expedited overall economic growth, it also exacerbated the nation's internal social differences (i.e., the factory-based North vs. the plantation-based South), which eventually led to the Civil War. But that didn't stop List's vision being reimported to Germany as the Zollverein, the customs union that provided the basis for Bismarck's unification of Germany in 1871.

The German-US connection deepened when Ernst Kapp, who had migrated to Texas in the wake of the failed 1848 revolutions, sided with the Union in the Civil War. He was captivated by Ralph Waldo Emerson, who had given German idealism an American makeover as 'Transcendentalism', a philosophy of self-empowerment that was expressed in humanity's harnessing of the powers of nature for its own ends. Indeed, over a century later, Marshall McLuhan simply replaced Emerson's telegraph with television when he declared that 'media' (understood as extended sense organs) are 'the extensions of man'. Thus, Kapp founded what we now call the 'philosophy of technology' (Mitcham, 1994: Chap. 1).

But of greater institutional import was the American adoption of the post-Humboldtian 'graduate school' model of education of the Bismarckian era, following considerable late nineteenth century scholarly exchange between the US and Germany. One especially astute observer of this general development was Woodrow Wilson, himself an early graduate of the 'made for export' German-style Johns Hopkins University – and to this day, the only US President to hold a PhD. The young academic Wilson, a founder of political science as a discipline in the US, argued for a Humboldt-style civil service to execute the decisions taken by the presidency, as informed by the deliberations of Congress, which would be reduced to a glorified talking shop rather than a proper legislative body. Like the UK Fabian Society at the time, Wilson regarded a professional civil service as a bulwark against the potentially capricious character of judgements taken in parliamentary democracies (Hilderbrand, 2013).

Of course, Wilson and the Fabians were not alone in their suspicions of parliaments. Notoriously, Carl Schmitt (1988) preferred to address the problem more directly by having the state embodied in a charismatic leader, who says and does what everyone has been already thinking and wanting. (Weber had also introduced 'charisma' into the sociological lexicon [Baehr, 2008].) To be sure, such a literal 'dictator' would forge exactly the link between positive liberty and authoritarianism that Isaiah Berlin feared. Indeed, Wilson's Progressive predecessor Theodore Roosevelt had many of the charismatic qualities that aspiring dictators in the Schmittian mould – not least Donald Trump – have subsequently emulated. The point is made in Hawley (2008), written by a law student who subsequently became a Trump loyalist in the US Senate.

For his own part, Wilson took a more 'behind the scenes' approach, whereby the state accrues power by the civil service becoming a superordinate administrative class, a 'bureaucracy' in Weber's dreaded sense. In Wilson's day, it was symbolized
by the introduction of a national income tax, which was intended for interstate infrastructure but was soon redeployed to finance US involvement in the First World War, even though the nation had not been directly attacked and enthusiasm for the war had to be manufactured by the likes of Walter Lippmann and Edward Bernays, who then drew very different lessons from this experience (Fuller, 2018a: Chap. 2; Fuller, 2020a: Chap. 5). To this day, Wilson is routinely vilified by American libertarians as the originator of 'Big Government', which has resulted in more than a century of American taxpayers upholding Wilson's vision of the US as the underwriter of global security (Goldberg, 2007). Readers can judge for themselves whether this reveals Wilson to have been a hero or a villain.

Bureaucracy may be understood as positive liberty's counterpart to the security function of the police and armed forces that proponents of negative liberty have claimed to be central to the state's role in maintaining its core value of interpersonal non-interference. Positive liberty differs from negative liberty in extending the state's mandate from protecting citizens against harms to their own sphere of freedom that result from decisions taken by others to protecting them against such harms resulting from their own decisions. Thus, a litmus test of whether a nominally 'liberal' regime includes an element of positive liberty is whether its citizens can voluntarily sell themselves into slavery, since that would mean freely restricting one's own future sphere of freedom, perhaps indefinitely. Positive liberty would prohibit this from happening, whereas negative liberty might allow it. In the modern period, this difference in attitude has marked a fork in the road between liberalism in its 'classical' sense, which potentially allows for self-enslavement, and republicanism, which treats the maintenance of the conditions for freedom as seriously as the very exercise of freedom (cf. Nozick, 1974; Pettit, 1997).

I have argued that the university constitutes a 'republic' in this sense, a point on which Weber and I could probably agree (Fuller, 2000a: Chap. 1). In this context, the administrative class – Weber's bureaucracy – may play a salutary role in the maintenance of academic freedom, be it personified by the university rector or the higher education minister. In that case, Weber's objections to Althoff reflect Weber's own confusion about the source and the function of Althoff's decisions.

The very idea of such an administrative class can be traced to the secularisation of angelic messengers in medieval Christendom (Agamben, 2011: Chap. 6). Once bureaucracy acquired a life of its own in late eighteenth-century Prussia, it was quickly exported around the world. Indeed, the German idealist sensibility was instrumental even in the formulation of the famed British civil service code, which effectively popularized the teachings of Hegel's Oxford and Cambridge followers, starting with T.H. Green and later epitomized in F.H. Bradley's Ethical Studies of 1876, especially the essay, 'My Station and Its Duties' (O'Toole, 1990). Key to the code was the idea that the civil servant clarifies what Karl Popper (1957) called the 'logic of the situation', namely, the options available to an agent having to make a decision. This encounter – what in angelic discourse is called an 'address' – was meant to expand the agent's sphere of freedom and responsibility from what it might otherwise be, were the agent left to their own devices.
Weber saw the scientist’s relationship to the politician in just these terms, as a way of ensuring the integrity of both vocations.
On the one hand, the scientist is free to lay out the possible consequences of the courses of action available to the politician without having to choose one; on the other hand, the politician is free to pursue whichever prospect he or she wishes, but now under the burden of knowledge and hence responsibility for the outcomes. In the 1980s, a satirical version of this sensibility, whereby the ‘angelic’ civil servant is portrayed as the consummate ‘nudge’ artist, became the hit UK television series Yes Minister.

However, such bureaucratic nudging can result in its own ‘cunning of reason’ effects. Consider the emergence of psychology as an academic discipline in Germany in the final quarter of the nineteenth century. In a nutshell, ambitious people trained in medicine, a highly competitive academic field, were encouraged to migrate to philosophy, a less competitive field, to secure professorships. Once ensconced, they then rendered traditional philosophical problems of metaphysics and epistemology tractable by the latest medical instruments and techniques, often under the guise of solving the ‘mind-body problem’ (Ben-David & Collins, 1966). Moreover, this trajectory was not unique to Germany. William James in the US had a somewhat similar personal history. However, an ironic long-term consequence of this development, which arose from the gradual separation of philosophy and psychology as academic departments in the twentieth century, was that judgement in its full-blown Kantian (and Humboldtian) sense came to lose salience as an object of systematic inquiry (Kusch, 1999).

On the one hand, philosophy came to be concerned with the purely logical side of judgement, namely, the adoption of pro- and con-attitudes to propositions, which by the early twentieth century, under the mathematician Gottlob Frege’s influence, had evolved into the formal assignment of truth values, which in turn could be mechanically manipulated as algebraic formulae, resulting in ‘truth tables’. On the other hand, psychology initially focused more on the phenomenology of thinking, in which judgement came to be reduced to the process of directing one’s attention to a task set by the experimenter. But as experimental psychology became methodologically more rigorous, doubts were raised as to the very existence of an experience of thinking distinct from the contents of thought. Meanwhile, what had dropped out from this split conception of judgement was the idea of an autonomous subject deciding to, say, assert a proposition or solve a problem. Both philosophers and psychologists, in their newly professionalized guises, seemed concerned only with objects of thought and sites of thinking – not with the integrity of the thinker as such, the judge who exercises judgement (Fuller, 2015: Chap. 3). In that respect, ‘judgement’ is arguably among the relatively few terms that over the past century have lost, rather than acquired, technical meaning. Its historic associations with logic, law and art have largely faded, especially in academic culture.

Yet, it would be false to say that interest in judgement as an object of inquiry completely disappeared. Rather, it went underground, especially once behaviorism became ascendant in psychology. Edward Lee Thorndike’s original interest in getting organisms to reach a prescribed goal with least effort effectively displaced judgement onto the experimenter, in the manner of ‘principal-agent theory’ in political science and economics, whereby the experimenter functions as ‘principal’ and the subject as ‘agent’
in the completion of some task (Ross, 1973). In this context, judgement happens behind the scenes of the experiment, as the experimenter selects the task that the subject then tries to execute as efficiently as it can, a process that may be improved over time through focused instruction. Thorndike’s research design, originally applied to cats, was extended to humans in Frederick Winslow Taylor’s influential approach to ‘scientific management’, which caught the eye of President Wilson (Hofstadter, 1955: Chap. 6). But in the end, Thorndike and Taylor were really training their own judgement, teaching themselves how to get cats and humans, respectively, to achieve specific ends, of which Thorndike and Taylor were the ultimate judges. However, it remained unclear whether or how their subjects improved their own capacity for judgement. On the surface, it seemed that the subjects were just learning how to take orders.

However, already by the First World War, Wolfgang Koehler’s (1925) pioneering work on primate problem-solving had begun to address this matter by situating the animal subject in an open problem space that could be resolved in various ways, some of which might surprise the experimenter. Koehler’s experimental design became the calling card of the ‘Gestalt’ movement in psychology, which re-entered behaviorism through Edward Chace Tolman and Egon Brunswik, who collaborated at Berkeley in the 1930s–50s and were associated with the extended logical positivist community. In this context, Tolman (1948) coined the phrase ‘cognitive maps’ for the process whereby ‘rats and men’ code the physical space of, say, a maze as a conceptual space, akin to a decision tree, such that certain paths in the maze are associated with certain outcomes. Tolman and Brunswik’s students included not only Donald Campbell, mentioned earlier, but also Kenneth Hammond (1978), who perhaps did the most to relocate the study of judgement to the policy sciences. However, a signature feature of this research, which cuts across social and cognitive psychology and whose distinguished recent contributors have included Lee Ross, Richard Nisbett, Amos Tversky and Daniel Kahneman, is a fixation on errors and biases in judgement that persist even among expert reasoners. In this respect, the latter-day study of judgement has continued to question the prospect for rational autonomy, which had been the basis on which Kant and Humboldt set such great store by judgement as a mental faculty (Fuller, 1993: Coda).

Philosophy’s gradual loss of salience as the foundational discipline for all university subjects, starting after the death of Hegel and fully realized in the aftermath of the First World War, provided an institutional backdrop for the decline of judgement (Schnädelbach, 1984). This movement also extended from Germany to America, but the outcome was somewhat different. Regarding himself as carrying on the work of his recently deceased friend Max Weber, Karl Jaspers famously argued that academic specialization meant that the sense of free inquiry promoted by ‘science as a vocation’ had to be pursued within – not beyond – disciplinary boundaries (Ringer, 1969: 105–7).
To be sure, Weber’s methodological writings can be understood as implicitly staking out this claim as he defended the borders of sociological inquiry from the ‘higher order’ encroachments of Lebensphilosophie, dialectical materialism and social energeticism, which harked back to the pre- or meta-disciplinary conception of philosophy that had prevailed before Hegel’s death.
Nowadays a vulgarized version of this Jasperized Weberian argument has become standard in defining the limits of academic freedom – namely, that one is free to speak publicly within one’s expertise, but not beyond it. However, when the argument was first made in the 1920s, it served to destroy the soul of the classical German university, what Fritz Ringer has poignantly called ‘the decline of the German mandarins’. Those who refused to accept the Jasperian re-specification of the scope of academic freedom found themselves increasingly cast – sometimes willingly, sometimes not – as reactionaries who defended a nostalgic image of undergraduate liberal education against the alienated learning represented by the specialist doctors of the modern graduate school.

In the case of the United States, a version of the Weber-Jaspers thesis was invoked by the American Association of University Professors in the early twentieth century to protect the tenure of social scientists whose theories and methods openly contradicted the political and religious beliefs of their academic employers (Fuller, 2009: Chap. 3). The results were mixed for the social scientists directly concerned, but a long-term constructive consequence of this encounter was, starting in the 1930s, a rededication to general education at the undergraduate level as a counterbalance to the increasing significance of specialized postgraduate education.

A bellwether figure here was University of Chicago President Robert Maynard Hutchins, a major US public intellectual of the mid-twentieth century. A self-styled ‘modernist’ about natural law theory, Hutchins interpreted the theological and philosophical sources of academic freedom syncretically – specifically, blending Aquinas and Locke – as a ‘natural right’. Though no political radical himself, Hutchins (1953) was instrumental in keeping alive the broader ‘trans-disciplinary’ sense of academic freedom that originally animated Humboldt’s vision and continues to inform the self-understandings of both American teachers and students, arguably to this day. He called it ‘perennialism’. Hutchins remained unafraid to pass judgement on the scholarship of his social science colleagues (which he found too specialized) and larger American society (which he found too conformist).

Indeed, upon retirement from Chicago, Hutchins established the Center for the Study of Democratic Institutions, where he briefly made common cause with Students for a Democratic Society (SDS), whose 1962 Port Huron Statement quickly became the manifesto for student movements across much of the world for the rest of the decade. SDS’s signature approach was to pass judgement on their elders over a range of issues, including the content and conduct of university education, the research activities of their teachers and, most importantly, their generation’s own life prospects, given the looming threat of a third world war. Hutchins not only invited the student radicals to his Center in 1967, but he also commissioned ex-New Deal intellectual Rex Tugwell to propose a new US Constitution that would be at once more participatory and more accountable (Lacy, 2013: Chaps. 5 and 6). Of course, Tugwell’s proposed constitutional convention never came to pass, but in the next chapter we shall see that the idea of the university as itself a ‘revolutionary constitution’ should continue to hold attraction.
3.3 The Genealogy of Judgement from Germany Back to Greece

Humboldt’s Kantian word for the exercise of judgement that characterizes academic freedom is Urteil. Philosophers are most familiar with Urteil as the guiding concept of Kant’s third critique, which is concerned with aesthetic and teleological judgement. As we shall see in the next chapter, Kant somewhat inverted the intuitive understanding of ‘aesthetic’ and ‘teleological’. We would normally think of their relationship as akin to that of a snapshot and an ongoing film. However, Kant thought more in terms of the generative capacity of the snapshot to inspire multiple films. In that respect, art is ‘purposive without being purposeful’. More to the point, art is inherently judgemental. Humboldt (1988) himself took seriously Kant’s claim that the mind’s spontaneous capacity for judgement provides the backdrop against which formal reasoning develops: Initially we think logically because we need to judge between alternative configurations in the manifold of experience that attract our attention. However, our power of discrimination is not free unless we can adopt a position from which to think about the alternatives that does not reflect our dependency on them: In other words, we can judge the alternatives in their own terms rather than based on how they impact on us.

In ordinary German, Urteil is associated with the issuance of a courtroom verdict, which serves as the foundation for passing a sentence. Many modern legal intuitions remain grounded in this sense of judgement. Of special significance for the recent history of epistemology is the deep ‘facticity’ of the normative order, most closely associated with Hans Kelsen’s and Herbert Hart’s legal positivism, Scandinavian legal realism and arguably even Niklas Luhmann’s systems-theoretic sociology (Turner, 2010: Chap. 3). What all these perspectives share is the idea that a decision taken at a particular point serves to demarcate what subsequently is and is not permitted. In a specific courtroom setting, this judgement is delivered as a verdict, but in a larger setting it may serve as the constitutional basis for an entire legal system. Such judgements function as ‘axioms’, ‘rules’ or ‘precedents’ that must remain in place for all following actions (including later judgements) to enjoy legitimacy within the system. The three highlighted terms suggest somewhat different aspects of constraint. However, they do not obviate the deep facticity of the normative order promoted in modern legal theory – what US Founding Father John Adams called ‘an empire of laws, not men’. In twentieth-century epistemology, they were increasingly called ‘foundations’ in the sense that Richard Rorty (1979) eventually deconstructed. But we should never forget that early in that century, there had been a very active debate over the metaphysical status of such ‘foundations’ within logic, mathematics and physics – three fields that were drawing ever closer together (Collins, 1998: Chap. 13). Some regarded these foundations as arbitrary conventions and others as reflective of some pre-existent reality. Nevertheless, both these extremes agreed that what really matters is the derivability of any further conclusions from the original judgement. It is what made those conclusions ‘principled’ in a sense that Kant could have appreciated,
regardless of their ultimate source. Karl Popper and John Rawls were philosophers whose work, in their somewhat different ways, absorbed the full measure of this conception.

The grounding of Urteil in the law helps to explain its later theological, philosophical and scientific import. In ancient Athens, the judges were called hairetikoi, those delegated to decide the outcome of a case. The very need for such figures drew attention to a schism in the community, based on which plaintiffs and defendants were then licensed to mobilize witnesses and advocates. However, the hairetikoi were not professional lawyers like today’s judges but more akin to jurors – that is, citizens in good standing chosen for their neutrality with respect to the case. They were trusted to set the standard of what is ‘fair’ and ‘reasonable’ in the case. In effect, the hairetikoi had the power to ‘name the game’ in terms of which one of the adversaries might prove victorious (cf. Fuller, 2020a). This second-order sense of ‘judgement’ is still evident in how trial lawyers address juries today, often taking the form, ‘If I were in your position, this is how I would understand the case and weigh the evidence brought before you…’.

After the fall of Athens, the Stoics sublimated this juridical style into a general theory that was inherited by modern epistemology. Their signature contribution was the idea of a criterion, or the standard by which judgement can resolve the ambiguities of a case into a determinate outcome, or verdict. Following the school’s founder Zeno, the Stoics presented this process as passing through three stages: prolepsis (a survey of the relevant possibilities), hypolepsis (a selection of one such possibility for further scrutiny) and catalepsis (the decision to accept that possibility as true). Zeno used the analogy of a hand: first, a palm with fingers apart; second, a palm with fingers together; third, fingers drawn together into the palm as a fist. However, the stages are reversible. In particular, the second stage may revert to the first, whereby the possibility formerly held in the palm slips through the fingers. That is tantamount to the falsification of a hypothesis (Hankinson, 2003).

The Stoic ‘sublimation’ of judgement that followed the fall of Athens involved slowing down its pace and disentangling its parts. Arguably, the high value that the Athenians placed on rhetorical prowess led to a ‘rush to judgement’, which made the city-state at once exciting and volatile and resulted in the downfall detailed in Thucydides’ Peloponnesian War. Here credit is due to Aristotle, who first divided the juridical style into apagoge and epagoge. More than a century ago, Charles Sanders Peirce captured the distinction as abduction and induction. The cognitive psychologist Daniel Kahneman (2011) has recently dubbed them ‘fast’ and ‘slow’ thinking. Peirce’s elaboration of the distinction – as concerning how hypotheses are formed and tested, respectively – is perhaps the more pertinent. The original Athenian way of passing judgement tended to conflate the two in haste. In this context, apagoge was about quickly catching a criminal suspect and extracting a confession. The absence of alternative accounts (including suspects) of the harm amounted to conviction. As Aristotle observed, this is proof by showing that the opposite of what is presented as the case is highly unlikely, if not impossible.
But have the alternatives really been given a chance to appear, or has the court simply capitulated to the rough justice of the lynch mob? The principle of habeas corpus,
which only came into general force in the late seventeenth century, was partly designed to inhibit apagoge’s impulsiveness. In contrast, epagoge meant inference of a universal from a set of particulars. But it too might contribute to rough justice by encouraging judgements based on social stereotypes of the individual on trial. Even if there are alternative suspects, one individual may fit the stereotype of people who tend to cause such harms. In Kahneman’s terms, now part of popular discourse, this would constitute ‘confirmation bias’.

The Stoics importantly departed from Aristotle on the role of contingency in judgement. Aristotle is normally credited with having introduced the concept of contingency (endechomenon) while arguing that our relationship to the past and to the future is radically different: The past is knowable because it has already happened, whereas the future is unknowable because it has not yet happened. In response, the Stoics argued that what is really at stake is not the unrealized nature of the future itself but the indeterminacy of the basis on which the future will be determined. In other words, logically speaking, what Aristotle treated as a first-order problem, the Stoics treated as a second-order problem. It was on this point of disagreement that the project today called ‘modal logic’ was launched (Fuller, 2018a: 139–140). The Stoics held that once the standard of judgement is decided for an unresolved case (aka ‘the future’), the outcome exists on a continuum whose poles are ‘impossibility’ and ‘necessity’: that is, it may fail to meet the standard and thereby be excluded, or it may meet the standard and thereby be included. The law calls the former ‘prohibition’ and the latter ‘obligation’. On this account, ‘contingency’ is whatever passes for the time being as ‘not impossible’ yet also ‘not necessary’ – a relation sketched below, at the end of this passage, in modern modal notation. This mode of reasoning clearly conforms to the logic of a verdict, which typically appears as a quasi-deduction from the legal principles and precedents that the judge has selected, combined with the evidence that the judge has found most salient in disposing of the case. It follows that an ‘acquitted’ defendant is simply deemed innocent of the charges as presented in the trial, not necessarily exonerated of the crime itself. After all, judicial decisions can in principle be overturned – and often are in the course of time. This feature of the law is institutionalized in ‘courts of appeal’, the ultimate of which is called a ‘supreme court’.

The Stoic approach to judgement was designed to rationalize a strenuous life in the public eye, where the reversal of fortunes is an ever-present prospect. Thus, the leading Stoics have been politicians (Cicero, Seneca), emperors (Marcus Aurelius) and people whose great personal life transformations led them to teach others (Epictetus). In more recent times, Stoicism has been associated with entrepreneurs. Unsurprisingly, it is the favorite philosophy of Silicon Valley (Holiday, 2014). The underlying moral psychology anticipates our post-truth condition. Contrary to conventional ‘truth-oriented’ epistemologies, which treat ‘reason’ and ‘emotion’ as discrete mental faculties and require the former to dominate the latter, ‘reason’ is simply the post facto name we give to an ordered soul, and ‘emotion’ to a disordered soul. They do not constitute the mind but are judgements about the mind. ‘Subjectivity’ is a crucial term in this context, but not for its connotations of ‘partiality’ or even ‘fallibility’.
Rather, what matters is the sense of ownership of judgements about oneself and the world. It was this sense that was behind the coinage of ‘aesthetic’ in the Enlightenment to refer to a composite perception, or
‘worldview’, whereby coherence is brought to one’s identity by a drawing together of the often contradictory sensory inputs received from the world into a unique personal perspective. This is what Kant called ‘autonomy’, again a nod to Stoicism. Thus, education should be designed to foster autonomy. Indeed, the very concept of the ‘aesthetic’, which carries both strong empirical and normative connotations, emerged as part of a general reassessment of the Stoics in the early modern period. The Stoics had previously been seen as closer to nihilists or atheists due to their refusal of Christian conversion even in the face of St Paul’s early ministrations. However, once John Locke incorporated Stoic ideas to formulate his influential account of personal identity as ‘self-ownership’, the tide began to turn (Golf-French, 2022). In particular, the idea of the soul as a ‘blank slate’ acquired new meaning. While the metaphor had always involved regarding experience as a kind of inscription, Locke interpreted it to imply that the process was self-applied, as in Marcus Aurelius’ Meditations, which were mainly written as post facto reflections on deeds done (Hill & Nidumolu, 2021). The result was a conception of mind as an articulated response to the world, resulting in a judgement on whatever comes before it.

Kant’s elaborate architectonic of the mind should be understood in light of this Lockean transformation of the Stoic legacy. Its closest analogue in recent times is not what passes for the ‘blank slate’ in Steven Pinker’s (2002) notorious take-down of social science and utopian politics (which he conflates) from the standpoint of evolutionary psychology. Rather, and perhaps surprisingly, it is the theory of generative syntax proposed by his teacher, Noam Chomsky, especially as Chomsky himself originally understood it – namely, as a recursively applied, sui generis organ. Here one needs to imagine, say, Marcus Aurelius literally writing his ‘self’ into existence by discovering its governing principles (aka ‘grammar’) through encounters with the world that compel him to speak his mind on paper, even after having done so in deeds. This compulsion to produce a second-order reflection on first-order action captures the poiesis (‘productivity’) of language as creating another level of reality – a ‘metaverse’, if you will – in which the autonomous self comes into being (Chomsky, 1971; cf. Fuller, 2022). Put in terms closer to those of Humboldt, whose own contributions to linguistics Chomsky holds in high esteem, this process facilitates Bildung – that is, life as a ‘work in progress’, if not a ‘work of art’. More precisely, the quest for self-discovery is an endless struggle against the forces that threaten to pull apart one’s sense of identity. True to the etymology of ‘erring’ as ‘straying’, these threats are often portrayed in terms of one’s being steered off course. The protagonists of the great epic poems of the Classical world – the Iliad, Odyssey and Aeneid – underwent just this process at various points in their respective narratives. These heroes were placed in difficult situations where they had to make decisions that altered the course of both their own and others’ lives, sometimes for many generations – as in Aeneas’ founding of Rome.
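To fix ideas, the impossibility–necessity continuum described earlier in this section can be given a minimal rendering in the notation of present-day modal logic – the project that, on this account, the Stoic–Aristotelian dispute launched. The following schema is an editorial gloss rather than the author’s own formalism, using the standard operators \(\Box\) (‘necessarily’) and \(\Diamond\) (‘possibly’):

\[
\text{Impossible}(p) \equiv \neg\Diamond p \qquad \text{Necessary}(p) \equiv \Box p \qquad \text{Contingent}(p) \equiv \Diamond p \wedge \Diamond\neg p
\]

Read against the legal analogy in the text: prohibition corresponds to exclusion by the chosen standard (\(\neg\Diamond p\)), obligation to inclusion (\(\Box p\)), and the contingent is whatever the standard leaves, for the time being, neither excluded nor required – always subject to revision on appeal.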
The Humboldtian university can be understood as a relatively sheltered environment for simulating those sorts of ‘crossroads’ experiences, not only in the classroom, where students encounter intellectually challenging lecturers, but also in more self-organized settings, especially ‘student unions’. From that standpoint, seminars may be regarded as occupying a middle space between these instructor-led and student-led settings. In a similar spirit, two centuries earlier, King
James I’s personal lawyer, Francis Bacon, developed what we now call the ‘scientific method’, the centerpiece of which was a similar crossroads experience, only now applied to the natural world: that is, the ‘crucial experiment’, whereby a proposed explanation of some phenomenon is subjected to stiff interrogation, with only two possible outcomes: either be allowed to carry on or be forced to change. Even if the hypothesis survives the trial, it is simply ‘unfalsified’ but not necessarily ‘confirmed’, just like the acquitted defendant. To his credit, Karl Popper picked up on this nuance of Bacon’s method in a way that his logical positivist colleagues had not, hence Popper’s preference for Kant over Hume as a philosopher of the empirical world. For while Hume woke Kant from his dogmatic slumbers, the Critique of Pure Reason is dedicated to Bacon.

We shall return to Bacon in the next chapter, but the point here is that experience in its fully alive, self-conscious sense is tantamount to reflective judgement. It lives in the modern understanding of ‘perception’ as a kind of second-order experience – that is, experience judged. Thus, ‘prejudice’ amounts to perception before the presentation of the object of experience. Here one might imagine a continuum extending between unconscious and reflective judgement, one pole of which is Kant’s ‘anticipations of experience’, which provide the categorical structure of the mind. In the middle is what experimental psychology now calls ‘cognitive biases’ and Bacon originally called ‘idols of the mind’. The opposite pole is defined by Bacon’s crucial experiments and Popper’s falsifiable hypotheses. In this respect, universities are meant to provide a living laboratory for the cultivation of judgement, whereby prejudices and biases are subjected to sustained critical reflection, the outcome of which is left for students to decide for themselves in their extramural pursuit of Bildung.

From that standpoint, the sort of examinations that students routinely sit to determine their degree class are little more than gatekeeping exercises to ensure university attendance, a bit like glorified primary and secondary school ‘report cards’ that were more concerned with such pseudo-moral qualities as ‘discipline’ and ‘deportment’ than with the enterprise of Bildung. This point has been increasingly overlooked in recent times as universities have become more explicitly ‘credentials mills’, whereby examination results are tied to employment opportunities, rendering them somewhat closer in spirit to the Chinese civil service examinations to which Max Weber devoted so much attention in his efforts to fathom the ‘Oriental’ sense of rationality. To be sure, the seeds of an ‘Occidental’ counterpart were sown with the introduction of ‘intelligence tests’ (later ‘aptitude tests’) in the early twentieth century as a means to track children through the educational system so that they could realize what psychometricians of the day had defined as their ‘full potential’.
3.4 The Logic of Judgement: Of Truth and Other Values

When Urteil figures in logic, it is about classifying something as belonging to a category for purposes of drawing a conclusion about it. In metaphysics, to issue a judgement is to subsume a particular under a universal. For example, in the famous
Aristotelian syllogism, the judgement ‘Socrates is mortal’ is based on Socrates being categorized as a human being. Socrates can be categorized in any number of ways, each accessing properties that he shares with other beings. But to judge Socrates as mortal, he must be classed as ‘human’, in the sense of Homo sapiens, a member of the animal kingdom, all of whose members are mortal. The example highlights the peculiar combination of chance, freedom and necessity that constitutes judgements. It recalls the Stoic disentangling of Athenian legal reasoning. First, one is thrown into an unresolved situation – namely, defining the ‘suspect’ Socrates for the purpose of further consideration. Second, it is clear that Socrates can be understood in many different ways. Third, it is equally clear that if Socrates is to be understood as mortal, it must be by virtue of his ‘humanity’ understood in the animal sense of beings that undergo a certain life cycle.

This way of linking logic and judgement was dominant until the dawn of the twentieth century. It was a kind of normative psychology that overlapped with fields that today’s logicians would treat as parts of rhetoric (aka ‘informal reasoning’) or the scientific method (aka ‘non-deductive inference’). Here it is worth recalling that John Stuart Mill and George Boole were mid-nineteenth-century contemporaries who equally saw themselves as ‘logicians’ in this older, broader sense, albeit operating with somewhat different standards of validation. However, Gottlob Frege – and then more strikingly, Bertrand Russell and Alfred North Whitehead – repositioned Boole as the founder of a new ‘symbolic logic’ that revealed the analytic power of subjecting reasoning to algebraic notation.

An implication of this potted history is that the Humboldtian university was conceptualized in the older spirit of logic as a theory of judgement. It was probably the only legacy of German idealism that the original American pragmatists – Peirce, James and Dewey – never tried to officially disown in their quest to establish a world-historic sense of US philosophical originality. In all other respects, in relation to the idealists, the pragmatists suffered from what Harold Bloom (1973) called an ‘anxiety of influence’, which happens when an aspiring poet eclipses his or her predecessors by repeating them to more brilliant effect, so as to overshadow the predecessors’ original contributions. A case in point is John Dewey’s magnum opus, Logic: The Theory of Inquiry, which is now read as a treatise on the metaphysical foundations of problem-solving, but might be usefully read alongside Robert Brandom’s endlessly heroic efforts to turn Hegel into a precursor of Wilfrid Sellars (Passmore, 1966: Chap. 7; cf. Brandom, 1994).

A vestige of Dewey’s older way of thinking about logic remains in the concept of soundness, which in ordinary language is still regularly applied to judgements, as when we say someone ‘exercises sound judgement’. Formal logic tries to capture this by applying ‘soundness’ specifically to arguments that are not only deductively valid but also contain true premises. But what makes the premises of an argument true? In one sense, today’s logician would answer just as, say, Mill or Dewey would, namely, that the premise corresponds to the facts. But the meaning of ‘fact’ somewhat differs. Mill regarded a fact as an expression of one’s possible experience of the world – that is, of all the things one could have experienced, what one actually experienced is the fact. Mill’s sense
of fact captures the spirit in which a lawyer approaches a witness during a trial. Dewey moves closer to the formal logician’s view by saying that a fact contributes to a solution to a problem, which provides a second-order frame of reference for determining the kind of facts one needs. This helps to explain why the original Gestalt psychology experiments on thinking – which were of Dewey’s vintage (i.e., the 1920s) – had subjects approach the task in the spirit of problem-solving, sometimes simulating a task that past scientists had faced (Fuller, 2015: Chap. 3).

Alfred Tarski (1943), who did the most to reconceptualize ‘truth’ in terms of the new symbolic logic, took ‘correspondence’ to the next level by treating the world of facts as a metalinguistic construction that gives meaning to first-order statements. The logical positivists quite fancied this move because it suggested that science might be the fact-bearing metalanguage in terms of which the meaning of ordinary propositions might be resolved, resulting in a conceptually unified understanding of all reality. In Tarski’s terms, ‘Socrates is human’ (the basis on which he is judged mortal) is true if and only if Socrates is indeed human (Passmore, 1966: Chap. 17). And what exactly is the nature of the mutual conditional at the heart of Tarski’s definition? After all, to someone still wedded to judgement-based logic, the definition is too expansive; it fails to respect the judge’s discretion concerning the standpoint from which Socrates is to be understood for purposes of issuing a judgement. To accommodate the judge as free agent, the first ‘if’ condition needs to be dropped from the Tarski definition of correspondence. In other words, the bare fact that Socrates is human would not necessitate an assertion of his humanity, unless the judge had decided it was necessary to the context in which Socrates was under judgement. (And here ‘necessary’ should be understood as connoting a strong sense of relevance.) Both Tarski’s biconditional and this one-directional variant are sketched schematically at the end of this discussion, below. One suspects that this concern also lurked behind the preoccupation with ‘analyticity’ in late twentieth-century Anglophone philosophy. Nearly all analytic philosophers of the period felt the need to say something about it, usually when talking about the role of ‘necessary truth’ in science.

In many respects, by endorsing Tarski, the logical positivists had reinvented the problem that Aristotle and the Stoics were trying to solve by disentangling the reasoning behind making a judgement to avoid the ‘rush to judgement’ that ultimately doomed Athens. The point of distinguishing apagoge and epagoge was to show that any sense of ‘necessary truth’ that attached to the premises – as opposed to the conclusion – of an argument was a cognitive illusion born of a failure of the counterfactual imagination compounded by confirmation bias. Nevertheless, Aristotle’s definition of ‘science’ kept open the prospect that there might be statements that are true for all cases in a given universe of discourse in a way that no other statements are. These would form the major premise of scientific syllogisms and be reasonably regarded as ‘necessary truths’. Following the Protestant Reformation, the Catholic concept of ‘natural law’ was reappropriated for this purpose by the emerging class of ‘natural philosophers’ whom we now call ‘scientists’. Today’s default understanding of ‘laws of nature’ or ‘scientific laws’, courtesy of Isaac Newton, is its most notable legacy. Such ‘laws’ capture both empirical regularities (cf.
epagoge) and counterfactual conditionals (cf. apagoge). Taken together, they constitute ‘necessary truths’.
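Before turning to Kant, the logical notions in play over the last few pages can be compressed into a single schema. This is an editorial illustration rather than the author’s own notation; it renders the Aristotelian syllogism with the soundness condition, then Tarski’s biconditional and the one-directional, judgement-based variant proposed above:

\[
\frac{\text{All humans are mortal} \qquad \text{Socrates is human}}{\text{Socrates is mortal}}
\quad \text{(valid; sound if both premises are also true)}
\]

\[
\text{Tarski:}\quad \mathrm{True}(\ulcorner p \urcorner) \leftrightarrow p
\qquad\qquad
\text{Judgement-based variant:}\quad \mathrm{True}(\ulcorner p \urcorner) \rightarrow p
\]

On the variant, the dropped direction \(p \rightarrow \mathrm{True}(\ulcorner p \urcorner)\) is precisely the first ‘if’: the fact that Socrates is human licenses, but does not compel, the judgement that he is human.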
Indeed, Kant was so impressed by Newton’s achievement, yet at the same time struck by its limitations, that he invented a so-called ‘transcendental’ style of argumentation that granted Newton’s laws a special epistemological significance. Basically, the metaphysical presuppositions of the laws – absolute space, time and causation, all understood in Euclidean terms – defined the limits of human thought and hence our understanding of reality. We simply could not think our way out of them. All apparent attempts to do so – such as Leibniz’s rationalistic project of ‘theodicy’, which aimed to justify why the world had to be as it is – were fantasies that wildly extrapolated from what we can reasonably know to the mind of a God from whom we have been supposedly alienated since Adam’s Fall. To be sure, Kant’s own highly restricted view of human cognitive capacities delayed the acceptance of nineteenth-century innovations in mathematics that would have expedited what eventually became the early twentieth-century revolutions in physics and logic (Fuller, 2019b).

However, Fichte, Schelling and Hegel – the so-called ‘German idealists’ who followed in the wake of Kant and led the Humboldtian university in its early years – repurposed the transcendental style to recover some of the intellectual ground that Kant conceded to our supposed ‘cognitive limitations’. In effect, the idealists privileged apagoge over epagoge. They took Kant to have mistakenly bought into Newton’s premise that the world can be exhaustively understood as ‘matter in motion’. In contrast, the general idealist strategy was to exfoliate the logical consequences of the conceptual presuppositions of the various sciences. The sciences themselves were understood not as different levels in the organization of matter, but as orthogonal paths to the realization of ‘mind’ (Geist) in the broad sense of ‘spirit’ and ‘consciousness’ as well as ‘intellect’, the embodiment of which at any given time is what we call ‘humanity’.

Here it is worth reflecting on the sense in which the idealists privileged apagoge over epagoge. Many of the sciences – such as what became chemistry, biology and psychology – were not as empirically well-grounded as physics in the early nineteenth century. In short, they lacked epagoge. Nevertheless, each of their fundamental conceptions of the world possessed a certain prima facie plausibility that suggested a certain ‘logic of inquiry’, which in turn could be pursued as what we now call a ‘research program’. In short, they possessed apagoge. In this regard, Stephen Pepper’s (1942) ‘metaphysics as world hypotheses’ thesis captures very much the spirit of the idealist approach. Indeed, it is often overlooked that this style of presenting humanity’s epistemic horizons – which was integral to philosophy’s centrality in the Humboldtian curriculum – did inspire the development of the special sciences throughout the nineteenth century, by the end of which self-styled ‘Neo-Kantian’ philosophers had begun to propose alternative schemes for reintegrating the sciences to comprise a unified image of humanity. Here Ernst Cassirer (1944, 1950) deserves honorable mention as the most wide-ranging and sophisticated contributor to this side of the idealist project, which was already being eclipsed in his own day by the logical positivist project of unifying the sciences through a kind of ‘reductionism’ that in some respects revived Kant’s original fixation on Newton.
What does any of this have to do with today’s issues concerning academic freedom? Notwithstanding its abstraction, the above discussion highlights the ways in which truth, validity, soundness and relevance are bound together not only with so-called facts of the matter, over which we might exert relatively little control, but also with how we organize and respond to them, over which we can exert much more control. Academic freedom relies on the concept of judgement to sharpen the difference between these two perspectives, which in turn defines a domain in which academics may legitimately exercise their freedom. Put in contemporary terms, the focus on judgement makes it possible to assert both one’s recognition of the brute facts and one’s openness to alternative theories by which they may be understood. The contrasting state of affairs would be one in which facts are seen as already bearing their own interpretation, such that to recognize them properly is ipso facto to draw the appropriate conclusion. One would expect this latter stance to be the official epistemology of a society dominated by religious orthodoxy or political ideology. The contrast brings out the amount of free play effectively allowed under the rubric of ‘academic freedom’ and perhaps helps explain its controversial character both in and out of academia.

A train of thought that has migrated from the philosophy of science to the larger intellectual culture over the past sixty years has obscured the significance of what is at stake here. It turns on the idea, originating in Hanson (1958) but popularized in Kuhn (1970), of ‘theory-laden observation’. For many years it has been common to treat this phrase as an implicit license for relativism, reading it to mean that people will see only what their background beliefs allow them to see. No doubt this reading was influenced by Kuhn’s notorious incommensurability thesis, which implied that scientists are so wedded to the theories in which they were trained that a generational change – that is, a cohort of scientists trained in the next theory – is necessary before a scientific revolution fully succeeds. In other words, scientists never change their minds simply based on the facts. But that is only the negative side of theory-laden observation. The positive side is that the facts do not speak for themselves but require a theory for their articulation. The license to produce such a theory is what academic freedom protects, as it requires passing judgement on what are the essential and accidental features in a given domain.

Here it is worth recalling the trivialization of the idea of ‘value judgement’ as it migrated from German to English academic discourse in the twentieth century. Nowadays value judgements are seen as aspects of one’s personal experience that are brought with mixed feelings to the research and teaching environment. They both enable and disable the observation of certain things. Value judgements are a sign of the researcher’s subjectivity and a potential contaminant to an objective appraisal of what is under investigation. Attitudes towards value judgements vary wildly in research methods textbooks: Positivists fear bias and distortion, while postmodernists celebrate the giving of ‘voice’. Without denying any of these points, the original German context placed the affirmation of value judgements in a more positive light – that is, as a defensible extension of academic freedom.
I refer here to the widely misunderstood Werturteilstreit (‘value-judgement dispute’) that occurred in the first decade of the twentieth century, which resulted in the
institutionalized separation of sociology from social policy in German academia (Proctor, 1991: Chap. 7). It was in the wake of the Werturteilstreit that ‘value-judgement’ and ‘value-freedom’ started to acquire international currency as constitutive of the academic ethos. In this context, Max Weber served as the point of reference for promoting these concepts, especially as his works were translated into English, courtesy of Edward Shils and Talcott Parsons. However, lost in translation from Weber’s original context, yet still relevant today, is the question of who should supply the values that inform the theoretical standpoint from which academic research is conducted. Weber and his sociology colleagues wanted that power to remain in the hands of the academics themselves – not the government. In this respect, the Werturteilstreit may have been the first organized academic protest against what we now call ‘evidence-based policy’, whereby academics simply act as sophisticated data collectors and analysts for projects whose intellectual horizons have been set by their political paymasters. This was certainly how Weber and sociology’s other early German founders sarcastically cast the German ‘social policy’ academics – namely, as purveyors of Kathedersozialismus (‘socialism of the chair’). To be fair, the social policy academics tended to be worthy social democrats who wanted to ameliorate the conditions of industrial labor to prevent the proletarian revolution that Marx had predicted from breaking out in Germany. Indeed, Bismarck’s grand preemptive strike on Marx’s prediction was a national insurance scheme that set in train the modern welfare state (Rueschemeyer & Van Rossem, 1996). Nevertheless, it was the principle of leaving value decisions about research focus in the hands of academics that Weber and his colleagues meant to uphold.

But this did not mean that Weber and his like-minded ‘sociologists’ were apolitical. On the contrary, they were, generally speaking, liberals, which for them was less an ideology than a ‘meta-ideology’, in the sense of being a general attitude toward the relationship between value commitments and the pursuit of inquiry – namely, that it should never be dogmatic. Very much like Karl Popper, Weber took human fallibility and open-mindedness to be two sides of the same coin. But this had a specific sociological implication, namely, that the sphere of free inquiry – what I have called ‘the right to be wrong’ (Fuller, 2000a) – approximates a secular sanctuary (cf. Sassower, 2000). Thus, a remarkable feature of Weber’s many public engagements is that, whether pro or con current government policy, his comments were framed so as not to jeopardize the continued protection of academics’ right to set their own theoretical horizons. At the same time, Weber conjoined that right with the duty to permit ‘value-free’ tests of those horizons – that is, by evidence collected from a standpoint independent of the theorist. In other words, ‘value-freedom’ refers specifically to freedom from the values that inform the theory subject to validation.

Another way to capture this sensibility is the distinction drawn by philosophers of science between the ‘context of discovery’ and the ‘context of justification’. The latter is ‘value-free’ in that the mark of a ‘scientific’ knowledge claim is the extent to which its validity can be established without one’s having undergone the same set of experiences and sharing the same value assumptions as the person who originally made the claim.
This is what makes scientific validation different from, say,
religious validation, which requires a sense of personal witnessing and existential commitment that psychologically inhibits the ‘hypothetical’ frame of mind in which one should entertain knowledge claims.
3.5 The Religious Sources of Academic Freedom: The Decision to Dissent

Let us now return to the theological legacy of hairetikos, a member of an Athenian jury, which provides the root of the modern word ‘heretic’. Two substantial changes in Western intellectual culture enabled this transition. First, whereas Athenian jurors were picked randomly in response to a similarly random event that had created a serious division in society, ‘heretics’ in the medieval and modern senses self-select: They volunteer to stand in judgement over their religion or society more generally – and to be themselves judged on those terms. Thus, even though ‘heretics’ in this sense typically regard themselves as revealing a long-standing tension, they are normally seen as themselves the source of the tension. For this reason, heretics are often turned into scapegoats, who may be crucified, burned at the stake or, as in the case of Ibsen’s An Enemy of the People, subjected to public humiliation and ostracism. Often the most effective strategy under these conditions is simply to walk away, as Martin Luther did after making his stand at the Diet of Worms in 1521.

The second change was the presence of fixed written law that served as a basis to judge the heretic’s pronouncements. In the Middle Ages, this was provided by the Roman Empire’s Justinian Code, which was supported by the body of Christian theological commentary that we now call ‘natural law theory’. Since this corpus of writing was presumed to underwrite the taken-for-granted order of the world, heretical pronouncements easily triggered a sense of existential risk to the community to which they were addressed. A modern equivalent is the concept of ‘hard cases’ in the law, whereby a court hears a case whose complexity suggests that it might become a precedent for future cases. In this context, US Supreme Court Justice Oliver Wendell Holmes warned that hard cases make bad law because the striking character of such cases – typically due to idiosyncrasies of the time, place and parties involved – can easily lead the judge to rule in a way that undermines the spirit of the law. In effect, hard cases are modern heresies delivered in situ rather than in person. But of course, notwithstanding such admonitions, there may still be reasons to alter the status quo in the direction of the heretics and the hard cases (Dworkin, 1975).

I earlier noted that the classical conception of academic freedom, with its focus on the exercise of independent judgement, was a product of the modern German university. I also observed that the first test of this conception came with the separation of scientific (i.e. ‘critical-historical’) from pastoral theology. But another aspect of the religious origin of academic freedom is specifically denominational, reflecting its Protestant – not Catholic – roots. Consider that in the university’s original Catholic environment, the relation between teacher and student was at least as much
pastoral as ‘instructional’ in the strict academic sense. The teacher was the surrogate parent – in loco parentis – of the student who lived away from home on university grounds. However, Humboldt reformulated the relationship between teachers and students as a system of complementary rights and duties, known as Lehrfreiheit (freedom to teach) and Lernfreiheit (freedom to learn). Teachers were obliged to open the minds of students to learning without indoctrinating them into a particular worldview (Metzger, 1955). The focus here was the very Protestant one of clearing a space – Heidegger’s Lichtung – so that students could freely decide what to believe – or at least what is important to learn (cf. Fuller, 2016c).

An important waystation between the old Catholic-style pastoral view of the teacher-student relationship and the stark Protestant-style ‘decisionism’ of the modern university was the Jesuit idea of cura personalis, in which the teacher serves as the student’s ‘companion’, trying to awaken a ‘life in Christ’ by appealing to the student’s own intellectual and psychic resources, but making it clear that in the end it is the student’s decision how they take the encounter. The hope was that the student would develop discernment. In our explicitly secular times, the pedagogical route of least resistance has been for the teacher to present the student with several options on a given matter, including their strengths and weaknesses, but without drawing an authoritative conclusion. But as Max Weber (1958) would be the first to observe, this focus is always tricky to maintain in practice. Indeed, an academic especially influential with students in the post-Weber generation, Martin Heidegger, spoke of Gelassenheit (‘equanimity’), which was understood in the 1930s to license an indifference to conventionally opposed positions in the search for a deeper, more personal truth – an attitude on which Nazism perversely fed.

The second feature of the classical conception of academic freedom that marked a shift from Catholicism to Protestantism is that the modern university professor claimed to find the spark of the divine within, rather than in the demigods and demagogues without. The corresponding mode of institutional being was charismatic rather than collegial: German professors attracted students and created fiefdoms that might evolve into disciplines, understood as secular ‘denominations’. This contrasted with, say, their Oxbridge counterparts, who were preoccupied with collective self-governance and displayed little interest in proselytism (Clark, 2006).

Set against this institutional context, ‘judgement’ acquires an additional depth of meaning, which relates to what is ideally transmitted from teacher to student. The answer implied in the idea of Lernfreiheit is not mastery of a doctrine but the capacity to judge what is worth learning. It consists of adopting the attitude, though not necessarily the content, exemplified by the professor. Students who were incapable of ‘striking the pose’ of professorial authority for themselves remained in a state of ‘nonage’ – that is, waiting for others to tell them what to think, as Kant famously characterized the pre-Enlightenment mind. In this respect, the sense of ‘judgement’ promoted by academic freedom is tied to the promotion of subjectivity that first came into its own with early nineteenth-century Romanticism. Schelling and Hegel set precedents for the Romantic expression of academic freedom in the natural and social sciences, respectively.
Their most lasting legacy to this discussion is the idea that the point of the university is to unify knowledge in a way that renders
it comprehensible to a single human mind, an aspiration of the curriculum that is underwritten by our having been created in the image and likeness of God. Humboldt’s Kant-inspired innovation here was to replace dogmatic theology with critical philosophy as the secular source of pedagogical unity. Both naturalistic and humanistic traces of this ‘unitarian’ impulse remain in contemporary academic life and, perhaps predictably, test the limits of academic freedom.

On the humanistic side, there have been perennial and invariably polemical debates over ‘the canon’, that is, the finite number of ‘great books’ that anyone should master in order to be ‘liberally educated’. In the United States, where students generally are not required to select a major subject until halfway through their undergraduate education, debates over the canon have become totemic rituals, whereby academics bear witness to their calling and challenge others who appear to bear false witness. Elsewhere in the world, where students are expected to specialize much earlier in their academic careers, debates over the canon have turned into a staple of the public intellectual literature, in which academics contest the merits of various would-be classics with journalists and politicians who might once have been their students.

On the naturalistic side, there is the equally controversial – and popular – preoccupation with ‘grand unified theories of everything’. Although this quest derives its impetus from physics, the same ambition has applied to the ‘Neo-Darwinian synthesis’ in biology since the 1940s (Fuller, 2007a: Chap. 2). Here the sense of unity consists in the ability to understand the widest range of phenomena by appealing to the smallest set of general principles. The striking point here – which makes the German idealists and the logical positivists strangely natural bedfellows – is the significance of reduction in both the humanistic and naturalistic instances of unity.

An interesting transitional figure between the idealists and the positivists was Emil DuBois-Reymond, a leading late nineteenth-century physiologist and intellectual descendant of Schelling, who popularized the idea of natural scientists as exemplars of academic freedom in his 1882 inaugural address as Rector of the University of Berlin, Humboldt’s original base (Veit-Brause, 2002). Where an idealist like Hegel exhorted academics to embody the ‘world-historic spirit’ (Geistesweltgeschichte) by determining what from other times and places belongs to the emerging self-consciousness of humanity, DuBois-Reymond spoke of their dedicated search for causal explanations as the driving force of history (Causalitätstrieb).

In any case, it is unlikely that unity would have become such an esteemed epistemic value had not the individual human being qua free inquirer been its locus of concern. After all, the value of unity lies not in the sheer recognition of reality as inherently unified (because that is not possible – and here the post-Kantians defer to the deity) but in the active construction of a unified vision of reality as indicative of one’s autonomy. In this context, the shibboleth ‘reductionism’ should be understood somewhat more charitably as referring to the autonomous person’s capacity to discriminate the wheat from the chaff in the phenomena he or she faces: What is causally relevant or explanatorily deep, as opposed to merely a striking accident or persuasive observation?
One takes responsibility for the answers given to such questions by articulating a Weltbild (literally ‘world picture’) that attempts to make sense of them.
Of course, it is to be expected that other autonomous individuals operating from different Weltbilder will contest these judgements – and in the context of these trials the significance of ‘value-freedom’ becomes clear. However, matters got complicated at the dawn of the twentieth century, when for various intellectual and political reasons physicists following Max Planck started to advocate that all members of the same academic discipline should adopt the same Weltbild, what Kuhn later popularized as ‘paradigm’ (Fuller, 2000b: Chap. 2). Thereafter, the distinction between being ‘original’ and being ‘crackpot’ started to acquire a salience within academia that it had previously lacked, as peer-based intellectual conformity came to be seen as a strength rather than a weakness. Although Schelling and Hegel left stronger imprints on the development of the special sciences, the self-expressive ideal of academic freedom highlighted above was most explicitly embodied in their older contemporary, Johann Gottlieb von Fichte. In Fichte’s spoken and written work the various strands supporting this ideal were brought together – including the original philosophical defense of authorial copyright in publishing. This he presented as the legal recognition of intellectual freedom – a radical move in a country that lacked the generalized freedom of expression underwritten in the recently enacted US Bill of Rights. In effect, Fichte argued that authors added something unique that deserved legal protection, beyond the sheer physical labor of writing about a topic – namely, the distinctive stance they adopt towards the topic, as reflected in their unique mode of expression (Woodmansee, 1984). Behind this conception lay the sense that self-assertion is by no means natural. On the contrary, it is an endless struggle (Sturm und Drang) against conformity to nature, especially submission to animal appetites. Speaking one’s mind and finding one’s voice is hard work – and deserves to be rewarded accordingly – because one is always navigating between the Scylla of the herd mentality and the Charybdis of perverse contrarianism. Both are ‘instinctive responses’ that let others determine one’s beliefs – in the latter case, one simply believes the opposite of whatever most others believe. Either way, one abandons the rational self-determination that is at the heart of academic freedom. Of course, as the histories of both Protestant Christianity and German academia demonstrated all too well, the price of encouraging such independence is fractiousness, as students are effectively encouraged to outgrow their teachers, especially at the postgraduate level, where the stress is placed on ‘originality’ as the mark of research worthy of academic certification. Indeed, students’ own sense of charisma may extend to believing that they are better positioned to take forward a common body of knowledge than the professors from whom they learned it. The result is a proliferation of ‘schools’, academia’s version of churches, which make it difficult to maintain the transgenerational, depersonalized sense of ‘discipline’ associated with monasticism. To be sure, academics continue to be reminded of this suppression of academic subjectivity in, say, the restricted use of personal names to refer to experimentally generated ‘effects’ – but not general principles of nature – and, more generally, in the hermeneutically casual, if not irresponsible, stance that especially natural
scientists are encouraged to adopt towards the ‘literature’ to which they contribute. The ease with which articles that have never been read, let alone read in context, attract huge citation counts speaks to an academic culture that is trying its hardest to erase any interest in the author when assessing knowledge claims (Fuller, 1997: Chap. 4).
Chapter 4
Hearing the Call of Science: Back from Max Weber to Francis Bacon
Abstract This chapter looks at the many issues raised by Max Weber’s suggestion that the academic life is best understood as the secular equivalent of a religious calling. In his day, it was understood as an argument to keep scientific inquiry independent of political demands. But at a deeper level, it reflected the conflicted Protestant legacy of ‘calling’ (Beruf), which may be understood in terms of submission (Luther) or autonomy (Calvin). Weber clearly takes the Calvinist line, which is aligned with Kant’s conception of ‘purposiveness’, a sense of freedom that embraces the uncertainty (and risk) of where inquiry might lead. The aesthetic dimension of this idea, integral to Kant’s conception of judgement, is explored, especially against the role of ‘productivity’ in modern academia. Perhaps the most practical expression of academia’s Calvinist calling is the expression of reasoned dissent in the face of a majority opinion, which is linked to Francis Bacon’s effort to make inquiry as game-like as possible, especially via the design of ‘crucial experiments’, a practice championed in the twentieth century by Karl Popper. The chapter ends by considering the Humboldtian university as governed under what Bruce Ackerman dubs (with reference to the US Constitution) ‘revolutionary constitutionalism’ in a way that might satisfy Bacon’s original and still unrealized vision of organized inquiry.
4.1 The Religious Roots of Max Weber’s Secular Calling: Luther Versus Calvin
The original 1917 context of Weber’s (1958) address to graduate students, ‘Science as a Vocation’, was notable in several respects, not least that it coincided with Weber’s appointment to one of the first German chairs in ‘sociology’. Weber himself had been trained in law and migrated to economics, which followed the pattern of the establishment of economics as a discipline in the German-speaking world. (This also explains the distinctiveness of the so-called Austrian School of Economics.) But Weber migrated still further to ‘sociology’, a discipline that emerged in the German context in explicit opposition to ‘social policy’, a field to
which many economists were already contributing as Kathedersozialisten (‘socialists of the chair’), a polemical phrase for academics whose research aimed to promote national advancement (Proctor, 1991: Part II). They functioned in a capacity that is nowadays associated with think-tanks (Rueschemeyer & Van Rossem, 1996). While generally aligned with the Social Democratic Party, their bottom line was ‘Germany First’. The ideological fluidity of this stance is perhaps most noteworthy in the case of Weber’s great lifetime rival, Werner Sombart, whose tenure as a ‘socialist of the chair’ eventually extended to an embrace of National Socialism. The sense of ‘sociology’ to which Weber thought he was contributing – and which he developed as a more general thesis in ‘Science as a Vocation’ – was, logically speaking, less contradictory than orthogonal to ‘social policy’. Weber wanted a social science that was free not only to challenge official interpretations of social phenomena but also to say things about them that did not use policy-relevant categories – as when a social scientist crafts and tests an ‘ideal type’ that purports to capture a wide range of social phenomena but which does not happen to conform to the conceptual framework of the state’s administrative apparatus. Indeed, Weber was envisaging a social science that conducts ‘original research’ that generates conceptualisations and data that policymakers are not normally afforded. Of course, the state may contract academic sociologists to do policy-relevant research, but Weber’s point was that they should not go beyond presenting policymakers with options for possible courses of action based on the available data. In short, not only should academics retain their independence from the state but also the state should be compelled not to hide behind academics to justify whatever it decides to do. The autonomy of both science and politics needs to be upheld for both to operate as effectively as they can, which in turn explains the stress on ‘conviction’ that Weber places in his follow-up lecture on ‘Politics as a Vocation’ to a similar graduate student audience (Weber, 2004). Weber’s emphasis on the mutual independence of science and politics rested on his distinct sense of ‘vocation’. The original audience for Weber’s lecture would have understood ‘vocation’ (Beruf) to mean one of two things, representing a significant parting of the ways in Protestant theology; the two senses are usefully called ‘Lutheran’ and ‘Calvinist’. In terms of Weber’s specific discursive context, they correspond to the vocation of, respectively, the academic practising social policy and the one practising sociology. Let us make the analogy explicit at the outset. The social policy academic professes the Lutheran vocation, insofar as s/he is willing to be fully incorporated into the state’s conceptual framework, effectively becoming a mouthpiece of the state. In contrast, the sociologist professes the Calvinist vocation, insofar as s/he engages in a mutually binding contract, each side of which promises to do the best they can, notwithstanding the consequences of their actions. Thus, the state allows the sociologist free rein in investigating a matter while it stays free to draw on the sociologist’s findings selectively for policy purposes. In this way, science and politics remain distinct vocations – and not the merged single vocation that the Lutheran approach suggests.
The Lutheran calling occurs when God or his angelic messenger comes unexpectedly to say that you must radically re-orient your worldview, and your
recognition of the source of that message leads you to follow dutifully. St Paul’s conversion on the road to Damascus is the paradigm case. In his former life, ‘Saul’ was a leading Pharisee who was more concerned with maintaining the purity of the Jewish community in its state of Roman captivity than with fully realizing Judaism’s messianic promise. Thus, he persecuted both Jews who sold out to the Romans and those who followed ‘prophets’ such as Jesus who disturbed the Jews’ limited autonomy. Similarly, Martin Luther as professor of moral theology had been more concerned with identifying the corruption of the Roman Catholic Church due to its secular entanglements than with improving the lot of practicing Christians, either in this life or the next. Luther’s conversion, though frequently romanticized, amounted to his seeing a way to turn secular authority to divine advantage without leaving the impression that the king calls the shots. Rather, the king expresses the will of the people, the euphemism for which is ‘church’. However, the Calvinist calling is closer to Weber’s intent. Whereas the Lutheran vocation involves subordinating oneself to a recognized authority to follow a determined path, the Calvinist vocation is freely chosen, yet its ultimate destination – while equally preordained by God – remains unknown to those who are receptive to the call. The Calvinist conversion experience lies in the realization that you are free to give yourself over to God, even as you understand that your fallen nature means that you are still likely to go astray. Moreover, whatever you do, you eventually perish in this world, and your posthumous fate is ‘always already’ known only to God. The mentality here is close to that of someone who willingly undertakes a risky venture to change their life, be it in the guise of a ‘leap of faith’ (e.g., the wager with his life that Blaise Pascal, a Calvin-leaning Catholic, claimed to have made) or an overseas investment (e.g., the persecuted English Puritans who paid to migrate to America to start a new life). The model is St Augustine’s account of his own Christian conversion in the Confessions, which takes the form of a gradual elimination of alternative philosophies and religions to satisfy a deep spiritual yearning. When he comes by chance to a passage in Paul’s Letter to the Romans, the deal is sealed because he reads it as encapsulating his life and offering a future course. Put brutally, the difference between the Lutheran and the Calvinist is that the former would have you submit your will to God, who is the vehicle to your salvation (assuming the sincerity of your conversion), while the latter would have you own a fate that God has ‘always already’ decided, effectively turning your life into a quest to figure out the rules of this ultimate game at which you may be either a winner or a loser. The alternative senses of ‘humble’ implied in the Latin paulus, the source of the name ‘Paul’, capture the difference here. ‘Humble’ in the Lutheran sense suggests obedience to a known superior, whereas in the Calvinist sense it connotes taking a risk on an unknown superior. Such words as ‘submission’ and ‘trust’ are used in both contexts, since your fate is in someone else’s hands – but they are inflected differently. The difference is illustrated in the life of St Paul himself. His own moment of conversion fits the Lutheran sense of paulus, whereas the acts of conversion that he subsequently made possible for others (e.g., St Augustine) by his Epistles fit the Calvinist sense.
Analogues from the secular world may prove helpful. Consider the difference between servitude (Luther) and gambling (Calvin). In
both cases, your fate is in someone else’s hands, but only in the latter case do you continue to ‘own’ that fact, whereas in the former case you have forfeited ownership to the master. Thus, Luther held that faith is the key to salvation, whereas Calvin held that most – but not all – believers (as well as unbelievers) are damned. In many ways, this division in sensibility represents a schism that strikes at the heart of Christianity’s development after the death of Jesus, which in doctrinal history is captured by the adjectives ‘Petrine’ and ‘Pauline’, respectively, named for the two early disciples, who promoted the Christian message in rather different ways. In a play on the Greek origin of his name (petros, ‘rock’), Jesus anointed ‘Peter’ as the rock on which the ‘Church’ would be built, establishing an authoritative path to salvation through the proliferation of individual churches, each staffed with people authorized to spread what came to be known as ‘Christian doctrine’. In terms of ‘vocation’, the stress is placed on the sending of the call and remaining faithful to the original intention upon its receipt. The fact that Peter knew Jesus personally as a human being is significant. Indeed, the Roman Catholic Church has always portrayed papal succession as a virtual dynasty descending from Peter. Before Luther broke from the Church of Rome, he had been very much part of it, which added weight to his dissent. Not surprisingly perhaps, Lutheranism did not break with all the institutional structures and liturgical practices of the Catholic Church; rather, it established independent versions in parallel to the Church of Rome. In light of that history, it is perhaps unsurprising that Frederick the Great’s successors were able to co-opt the Lutherans in much the same way as earlier secular European monarchs had co-opted the Catholics. In contrast, Calvin began life not as a theologian but as a lawyer with Humanist sympathies. He focused more on the receiving of the call, which necessarily involves interpretation and hence the prospect of error, however faithful one wishes to be to what has been heard. For the Calvinist, the key feature of Paul’s conversion is that he encounters Jesus only as an apparition: He never meets the human Jesus. The uncertain nature of that initial encounter colours all that follows: Paul takes a chance by reorienting the rest of his life on the assumption that it was the Messiah who had addressed him on the road to Damascus. But he always remained free to change course, which made his conversion his alone. The strength of his faith was thus measured by its durability under various challenging conditions. Paul’s Epistles, which were addressed to a wide range of Mediterranean audiences, can be read as a set of test cases for the universality of Jesus’ message. The challenge facing Paul himself was twofold: Not only must he remain faithful to the Christian message, but he also had to persuade each audience in terms that were true to them. This task of varying the presentation while retaining the essence would be the hallmark of another Protestant lawyer, Francis Bacon, who had been raised as a Calvinist and supervised the translation of the authorized English (‘King James’) version of the Bible. Modern philosophy of science calls this ‘hypothesis testing’. And in both the religious and the scientific context, the extent to which one needs to change course – if at all – after having undergone such a ‘trial of faith’ has remained a central practical concern.
4.2 The Purposiveness of the Academic Calling: Weber Pivoting Between Kant and Popper
I have dwelt on these alternative theological interpretations of ‘vocation’ because Weber’s clear preference for Calvin over Luther carries significant epistemological and political implications about the source of ‘meaning’ in the academic life. To be sure, his student audience might have expected that preference, given Weber’s unique focus on Calvinism as the spiritual source of the capitalist economy, bureaucratic governance and other signature ‘rationalizing’ projects across all sectors of modern society that presume the prospect of indefinite progress, subject to endless measurement and calculation. However, it would have been equally well known that Weber saw these developments as placing humanity in an ‘iron cage’ (Mitzman, 1969: Chap. 3). This is due to the so-called means-ends reversal, which from the mid-nineteenth century onward has been frequently invoked to explain a variety of unintended system-level social phenomena. The obvious precedent is provided by Marx’s account of commodification under capitalism, which resulted once the status of money changed from a medium of exchange to something pursued in its own right. In effect, both the intrinsic and the use value of products were replaced by their ‘exchange value’, as defined by the money they can command in the market, where the accumulation of money was the name of the game. While Marx saw commodification as a negative development in the history of humanity, his judgement is not obviously correct (Fuller, 2010a, 2013). Indeed, the emergence of art has often been explained in terms of a practice of originally instrumental value that over time comes to be pursued for its own sake. This may be simply because the practice was superseded by more efficient technology, in which case ‘art’ is the name given to technology’s past lives. Alternatively, the original practice itself may have been rendered more efficient, leaving time and energy to study its intrinsic, or ‘formal’, character. This latter explanation was extended by Karl Popper (1972) to account for the existence of ‘objective knowledge’, notably mathematical objects. Popper clearly regarded this sense of means-ends reversal as a good thing. But how to tell whether a means-ends reversal is healthy or pathological? Popper and Weber were on the same page about most things, but here they parted company. Weber was the more sceptical of the two about the means-ends reversal, especially when it came to the pursuit of knowledge. This is the context for understanding Weber’s apparent ambivalence to ‘positivistic’ vis-à-vis ‘hermeneutical’ approaches to the emerging social sciences. Weber welcomed the probabilistic-causal thinking that his home field of law was importing from physics at the end of the nineteenth century mainly because it encouraged a mode of inquiry that was indefinitely revisable in light of new evidence (Gaaze, 2019). It follows that each scientific paper should be treated as no more than a rough draft of some final statement of a truth that will probably be penned by someone else. In contrast, the humanities seemed to encourage a kind of textual fetishism, perhaps reflecting a theological logos-like identification of the ‘spirit’ with the ‘word’, in which certain human works
effectively supplement if not supersede the Bible. Stated in such bald terms, the view courted charges of idolatry in religious circles. However, the inclusion of religiously inspired works by Augustine, Aquinas and Dante in the ‘humanistic’ canon as it developed in the nineteenth century proved to be an adequate response. Still, one countermove to theology’s default textual fetishism is worth mentioning. It treats the Bible as a revisable work of science. This move was urged upon Thomas Jefferson by his friend, the great Unitarian minister and pioneering chemist Joseph Priestley. The result is now called ‘The Jefferson Bible’, a radically streamlined version of the New Testament designed to eliminate the superstition, error and useless historical detail that impeded the contemporary reception of the universal message of Jesus. Notwithstanding its late eighteenth-century sense of the contemporary, the Jefferson Bible continues to resonate sufficiently to be sold today in Washington DC museum shops. The Jefferson Bible helps explain the emphasis that Weber’s The Protestant Ethic and the Spirit of Capitalism places on the role of ‘self-help’ works in translating Biblical teaching into capitalist practice. These works, such as Poor Richard’s Almanack by Jefferson’s fellow US founding father Benjamin Franklin, similarly highlighted what remains of value in the Biblical message after removing its explicitly ‘biblical’ character. Operating here was not only a desire to make the Bible relevant but also a keen – perhaps even ‘reflexive’ – awareness of the ever-present potential for error in not only our reception of God’s word but also its original transmission through human authors who were no less fallible. This profound sense of human fallibility was formative in modern translation studies, which took as its cornerstone the diverse strategies used to render the Bible relevant to people’s lives across broad expanses of space and time. This in turn fed into late twentieth century secular philosophical debates about the ‘indeterminacy’ and ‘incommensurability’ of meaning, in which both Quine and Kuhn drew on the Biblical scholar Eugene Nida (Fuller, 1988: Chap. 5). This sense of fallibility also helps to explain Weber’s rather self-sacrificial and even fatalistic comments in ‘Science as a Vocation’ that one’s own research is bound to be superseded over time by others, perhaps leaving one’s own contributions forgotten. In this respect, although Weber was generally hostile to the capitalisation of academic life, one capitalist concept that he could accept was ‘planned obsolescence’. Popper’s falsificationist scientific ethic should have made him sympathetic to this line of argument, but over time Popper came to warmly embrace means-ends reversal as a defining feature of knowledge production. Indeed, the warmth of his embrace was one that only metaphysics could provide. Popper came to believe in a ‘world three’ beyond matter (‘world one’) and mind (‘world two’) that contains ‘knowledge’ in the sense of the contents of some ultimate library. Here Popper’s intuition is not so different from that of Bacon, Galileo and other champions of the seventeenth century Scientific Revolution, who read nature and the Bible as alternative routes to divine truth. Knowledge is possible because the world is inherently knowable, but that requires understanding the language in which it is written – be it Hebrew, Greek, Aramaic or, for that matter, mathematics.
Indeed, both before and after the First World War, this world of ‘universal documentation’ had acquired renewed vigour as a bulwark for world
peace and civilizational maintenance in the hands of such transdisciplinary visionaries as Paul Otlet, Wilhelm Ostwald, Otto Neurath and perhaps most notably H.G. Wells (Wright, 2014). Wells famously imagined – as if anticipating today’s algorithmically driven integrations of big data sets – that interaction among the contents of Popper’s world three would spontaneously generate a world four, which Wells dubbed the ‘world brain’ (Wells, 1938; Rayward, 1999). While Weber might have understood the sentiment behind Popper’s ‘world three’, he was loath to fetishize the finished text, which for him epitomized the idolatry of modern academic culture. Thus, whereas Popper spent most of his life trying to come up with the definitive edition of his first book, The Logic of Scientific Discovery, Weber never really seems to have oriented himself that way, which perhaps explains his failed herculean attempt to complete the ersatz magnum opus we now call Economy and Society. Popper insisted on conducting arguments on his own terms (e.g., ‘open vs closed societies’), which meant that, unlike his famous students Imre Lakatos, Paul Feyerabend and Joseph Agassi, he engaged in relatively few polemical exchanges. Weber was quite the opposite, as much of his work was developed in dialogue if not outright disagreement with various interlocutors. An implication of this difference in modus operandi is that Weber’s exact position on topics is much more difficult to pin down than Popper’s, since Weber dialectically pivoted in relation to the specific interlocutor. He may appear more ‘positivist’ when debating a hermeneutician and more ‘hermeneutical’ when debating a positivist. This reflects a deeper difference in sensibility, which is related to what may be called an anti-monumentalism that Weber took to be central to the pursuit of ‘science as a vocation’. ‘Anti-monumentalism’ alludes to Hegel’s aesthetics, according to which art progresses as more of the human spirit is liberated from its material encumbrance. In that case, the most primitive stage of art is the ‘monument’, understood as art simply standing as proxy for the person, event or idea that it represents. Hegel associated this stage with the Egyptian pyramids, which for him literally buried the human spirit. The pyramids epitomize a form of art that lacks a life of its own but impresses simply as a reminder of the pharaonic power that originally inhabited it. This was the sentiment that Percy Shelley ironized in his poem ‘Ozymandias’, an updated version of which might take a Borgesian turn and be about a scholar whose works fill the shelves of libraries that have fallen into disrepair and turned into food for worms. In terms of the famous dichotomy that Kant developed in The Critique of Judgement, the pyramids are purposeful, as opposed to purposive. Their purpose is transparently available to the observer, who simply needs to appreciate how the functioning parts contribute to the overall whole. In that respect, the thing’s purpose has already been fulfilled. It has become a ‘take it or leave it’ proposition, which marks art at its most primitive stage, a self-contained organism whose life and death are two sides of the same coin. In contrast, ‘purposive’ implies that the thing’s purpose is not yet – and may never be – fulfilled, yet nevertheless the thing displays a sense of purpose, indeed, one that could be resolved in various ways.
I shall return to this point at the end of this book, when discussing academic performance as an artform, whereby ‘purposeful’ and ‘purposive’ in Kant arguably
correspond to what Nelson Goodman (1968) called ‘autographic’ and ‘allographic’ forms of art, respectively. In any case, the distinction is related to an ambiguity that the Oxford philosopher Gilbert Ryle (1949) noted in the meaning of verbs, many of which may be understood as either implicitly referring to an outright achievement (cf. ‘purposeful’) or simply to a task in process that may never be achieved (cf. ‘purposive’). In my original work on social epistemology, I discussed the implications of this ambiguity for the verb ‘translate’, the objective of which has too often been presumed to be achieved when in fact it has been merely attempted. Deconstruction’s radical break from hermeneutics in the final quarter of the twentieth century can also be understood as deriving from this awareness – a reassertion of the purposive over the purposeful (Fuller, 1988: Chap. 5). And the sense of academic freedom defended by Weber in ‘Science as a Vocation’ is similarly predicated on academic work being purposive without being purposeful. In the end, I am inclined to split the difference between Popper and Weber on the means-ends reversal, granting Popper the ontological point about expanding the domain of inquiry by generating new knowledge objects and Weber the epistemological point about inquirers’ deep ignorance of the ultimate significance of their inquiries. The intellectually honest inquirer never really knows the ultimate nature of his or her contribution to knowledge, since its fate is ultimately in the hands of his or her successors, who will determine whether it was ahead of or behind the curve of history, or simply running with the pack. The image of collective inquiry implied here is not of new contributors ‘standing on the shoulders of giants’, as Newton put it, but of their being drawn into trying to complete an unfinished work still open to multiple resolutions, as in the principle of Prägnanz in Gestalt psychology. Moreover, instead of embracing fatalism, Weber took this fundamental uncertainty to license a proto-Popperian attitude to one’s own knowledge claims – namely, that one should not endlessly shore them up but test them to their limits. Needless to say, all this has potentially explosive consequences for the normative structure of modern academic life, which has come to fetishize the finished product – be it book, article or patent. To be sure, the ‘finished product’ is nowadays understood as a commodity whose value is determined by national accounting systems, such as the UK’s Research Excellence Framework. However, even in Weber’s day, sheer academic industriousness was valorized to the extent that academics were expected to produce a ‘big book’ in their careers. In the Germanized world, such a work was necessary simply to qualify for a regular professorship. This book was supposed to be something ‘solid’, in the sense of a building block or foundation that would operate as a constraint on what future inquirers can do. At the time of Weber’s famous lecture, the most knowing satire of this sense of academic productivity was penned by his younger law colleague, the notorious Carl Schmitt (1918). In Schmitt’s Swiftian tale of the tribe of ‘Buribunks’, the typewriter – nowadays we would say ‘word processor’ – is venerated as having revolutionized knowledge production just as the factory system did in the Industrial Revolution.
Schmitt had in mind all the qualities that Marx associated with commodification, only now given a positive spin as values of knowledge: efficiency, definitiveness, cumulativeness. Thus, Buribunkian monumentalism is reflected in
the tribe’s core epistemological tenet – namely, that the securest grounds for thinking that something exists is the continual increase in the number of scientific studies about it. In recent research policy, a more sophisticated version of Buribunkianism has prevailed, albeit without Schmitt’s sense of irony. It focuses on the rate of increase in the number of studies, such that a falling rate of scientific productivity means that a field is diminishing in utility as a lens for focusing inquiry (De Mey, 1982: Chap. 7). A consequence of sticking to such a principle is that science turns into an elaborate pyramid scheme, whereby fields of inquiry survive only through never-ending campaigns to recruit new contributors via doctoral training programs (Fuller, 2020a: Chap. 7). Weber argued that this research-driven monumentalism could be especially diabolical if leveraged into the pedagogical arena, where the lecturer may be tempted to present his opinions as carrying the weight of history that will propel thinking into the future. Here the lecturer’s academic freedom threatens to dominate the student’s. In Weber’s day, the most likely suspects of monumentalist pedagogy were Marxists, given their claims to a ‘science’ of historical materialism that predicted capitalism’s imminent collapse as a result of increasing class warfare and the falling rate of profit. Against that backdrop, students who might wish to question the inevitability of capitalism’s demise risked being charged with ideologically motivated resistance. Nevertheless, one academic from the political right, who was coming to prominence at the time of Weber’s admonition, pursued a similarly monumentalist pedagogy to arguably greater effect on the emerging generation of German-speaking students. I refer here to the Viennese political economist Othmar Spann, whose academic lectures were public events that invoked the historic resources of Germanic culture to project a radical rebirth of a society that had just been crushed in the First World War. Moreover, he actually did what many imagined Heidegger to have done – namely, conduct seminars in the forest to foster a holistic sense of oneness with others and nature in a common state of being in the world. Somewhat surprisingly, given these arguably demagogic gestures, Spann supervised the doctoral dissertations of such ideologically diverse and independent figures as the pioneering game theorist Oskar Morgenstern, the neoliberal economist Friedrich Hayek and the neoconservative political scientist Eric Voegelin. Let us collect the strands of this ‘deep reading’ of Weber’s modus operandi. I have focused on the nature of his ‘vocation’, which could mean two different things in the wake of the Protestant Reformation. Luther’s meaning stressed a call to obey, whereas Calvin’s stressed a call to decide. The former was about submitting oneself to a recognised authority, the latter about risking oneself on an unknown outcome. Although the Lutheran vocation was arguably the existentially safer option, the Calvinist one was truer to the sense of free will that the Bible said humans retained even after the Fall of Adam – and which would be instrumental (but in a way that only God knows with certainty) in whether one is ultimately saved or damned. In ‘Science as a Vocation’, Weber consistently associated the pursuit of free inquiry with the Calvinist call.
It was pitted against what I have identified as the ‘monumentalism’ of more Luther-driven approaches to the academic vocation, whose respect for authority is manifested in an overriding desire to build on it. In this context,
Weber feared that the state had not only become the facilitator but also the guarantor of academic knowledge production, a proxy for the truth itself. This had been behind the decision taken by Weber and several of his German colleagues in 1909 to style themselves ‘sociologists’ in contrast to the explicitly policy-oriented ‘socialists of the chair’. Moreover, such monumentalism had downstream effects in both the conduct of inquiry and the inquirer’s self-understanding. Academia came to reproduce the bureaucratic structure of the state’s administrative apparatus, the civil service, in the form of a system of well-defined disciplines that constitute ‘domains of knowledge’, to which one contributed in the orderly manner satirised by Schmitt in ‘The Buribunks’ and later expressed more piously by Kuhn (1970) as ‘normal science’. Moreover, this ‘epistemology recapitulates bureaucracy’ mentality was largely justified by the ‘Neo-Kantian’ school of German philosophy that was dominant in Weber’s time (Collins, 1998: Chap. 13; Fuller, 2009: 81). Weber’s disquiet over academic monumentalism reflects his antipathy to the ‘means-ends reversal’ explanation for the conversion of social practices from tools for survival to activities pursued for their own sake. In academia, it involves what would otherwise be waystations on the path of inquiry morphing into pieces of intellectual real estate – a process I call academic rentiership, which I explore in more detail in the next two chapters. In Weberian terms, it amounts to the revenge of the Kathedersozialisten, now in the name of ‘expertise’. Experts perform a ‘bait and switch’ on the public, invoking their own powers of clarity and control to replace the public’s genuinely messy existential concerns with the conceptually neater but empirically inadequate cognitive resources they can bring to the problem at hand. Thus, the public interest comes to be colonized by unelected representatives (Fuller, 1988: Chap. 12). In what follows, I explore the extent to which this de facto ‘cognitive authoritarianism’ inhibits dissent, which is the most obvious expression of freedom in any collectivized form of inquiry. My touchstone is William Lynch’s (2021) Minority Report, a recent comprehensive and fair-minded attempt to give epistemic dissent its due in science. Lynch thinks there is something fundamentally wrong with how science currently incorporates dissenting voices, yet he does not question the legitimacy of the scientific establishment as such. However, as I argue below, his version of ‘dialectically engaged science’ is not sufficient to meet the challenges posed by the current ‘post-truth condition’, in which distrust in expert authority is at an all-time high and is expressed by the most scientifically educated and best-informed people in history.
4.3 The Call of a Voice that Few Hear: The Problem of Minorities in Organized Inquiry
While Lynch (2021) believes that scientific inquiry tends to move in the right direction, he argues that it needs safeguards so that it is neither distorted by inappropriate private interests nor prejudiced against appropriate group interests. Moreover, he recognizes
that today’s calls for increased ‘openness’ – that is, allowing more dissent – are related to science’s closure of alternative paths of inquiry in its past. In other words, the cognitive authoritarianism that characterizes science today is, to a large extent, the continuation of past practice. Indeed, a notable contribution of Lynch’s book is to revive and extend Paul Feyerabend’s (1979) proposal that the scientific community has a meta-level social responsibility to prevent its own disagreements from polarizing into ‘incommensurable’ forms of life that end up feeding into existing social divisions. Without this sense of responsibility, Popper’s dream of the ‘open society’ gets perverted in the ways we see today, whereby the ‘scientific consensus’ on a whole host of issues is under siege by dissenters who, having been refused a hearing by the establishment, find their views diverging increasingly from those of the establishment, even on matters of fact. Feyerabend realized that the vaunted open-minded character of scientific inquiry can easily fold in on itself and become a source of cognitive authoritarianism simply by a refusal to communicate with dissenters. The excommunicated are now redefined as the ‘ignorant’. They are ignorant in two senses, one direct and the other indirect:
1. They are formally ignored. Since they no longer count as members of the same community of inquirers, their work is no longer cited as contributing to the field’s collective body of knowledge. Over time, this stance may be reciprocated, resulting in a systematic mutual incomprehensibility that is experienced as ‘incommensurability’. While Kuhn and Feyerabend drew attention to this process in both science and art (albeit drawing somewhat different conclusions), the historic precedent was set by the ostracism of the Protestant Reformers from the Church of Rome in the sixteenth century.
2. A long-term consequence of this incommensurability is that the two sides diverge still further as they develop independently from each other. It results in a deformation of perspective on both sides, as the expelled are deprived of access to the resources needed to conduct original research, while also being deprived of the right to appropriate their shared history to legitimize their inquiry.
Consider an example that Lynch raises and to which I have devoted some attention: ‘intelligent design theory’ (IDT) as an alternative to the Neo-Darwinian synthesis (Lynch, 2021: Chap. 3; Fuller, 2007c; Fuller, 2008). IDT is basically a scientifically updated version of creationism, insofar as it attempts to pursue the line of inquiry that is most opposed to the spirit, if not the letter, of Darwin’s theory of evolution – namely, that in some sense an overarching ‘intelligence’ is responsible for the origin and direction of life. The two theories of evolution competing with Darwin’s in the late nineteenth century – Jean-Baptiste Lamarck’s and Alfred Russel Wallace’s – were as different from each other as they were from Darwin’s, but both would have been friendlier to today’s IDT. Notwithstanding their metaphysical differences, Lamarck and Wallace inscribed the human into their accounts of nature as a proxy for the divine. In contrast, the feature of Darwin’s theory that has always most disturbed scientists and non-scientists alike is its presentation of nature as ultimately blind to human concerns, control and perhaps even comprehension. This concern ran like a red thread through the otherwise consistently robust early defense that
Thomas Henry Huxley provided for Darwin’s theory. Nowadays, ‘species egalitarianism’ is Peter Singer’s positive spin on this resolute removal of humanity’s cosmic privilege (Fuller, 2006b: Chap. 11). And so, when Richard Dawkins claims that Darwin’s theory licenses atheism, he means that evolution by natural selection implies that there is no cosmic meaning to any form of life, including our own. Of course, atheism cannot be logically deduced from the premises of Darwin’s theory, yet Dawkins has never said what would lead him to reverse his judgement – that is, to countenance an explanation that makes room for (if not explicit reference to) nature’s ‘intelligent design’, let alone a divine agency informing the process. In short, Dawkins seems to imply that his atheism is unfalsifiable. And even though Dawkins would not be able to point to the moment when Darwinism already had or will have decisively refuted its IDT rivals, he would simply observe à la Laplace that life can be explained without invoking any IDT-style hypotheses – albeit only in principle. And that much appears to be true – at least for the time being. Dawkins’ Laplacian move speaks to the Realpolitik of institutional science, whereby not only are certain theories dominant at a given time but those theories also determine which minority voices are heard in the scientific conversation. Such is the totalizing import of what Kuhn (1970) originally called a ‘paradigm’, which had so greatly offended Popper and his followers, whose work Lynch substantially revisits. It is as if science were the sort of game in which the top teams are permitted to interpret their game’s rules in ways that allow them to dictate which other teams are fit to play them – and who referees each match (aka ‘peer review’). Larry Laudan (1977) tried to put a brave face on this point by equating paradigm shifts with a statistical shift in allegiance over a fixed period, about which scientists acquire ‘pre-analytic intuitions’, a kind of sixth sense for where the epistemic trail is leading. And so, while by no means did all physicists accept Einstein’s relativistic physics when he first proposed it in 1905, twenty years later one could not be a front-line practitioner without knowing it (Fuller, 1988: Chap. 9). In a similar vein, examining the original reception of Darwin’s theory, David Hull suggested à la Kuhn that the twenty-year timeframe corresponded to generational change within the community of biologists (Hull et al., 1978). However, the people who performed the much vaunted ‘Gestalt switch’ in both cases were simply successors to the chairs, editorships and other positions of institutional authority previously held by their elders. Their relative youth meant that they lacked a personal investment in the old way of thinking and could see certain strategic advantages to adopting the new way. But in the end, the story is one of the ‘circulation of elites’, as Vilfredo Pareto would say, within a strongly self-contained scientific enterprise, the favored euphemism for which is ‘autonomous’. Lynch’s general response to this problem is to develop Lakatos’ (1978) strategy to expand the pool of competitors in science beyond the dominant paradigm’s natural allies and anointed successors. But it requires that potential competitors abide by the maxim, ‘No anomaly mongering!’ In other words, alternatives must not simply try to undermine the dominant paradigm by pointing to its persistent failure to solve certain crucial puzzles.
In addition, they should also provide testable proposals of their
own that aim either to solve or to dissolve the puzzles. In that spirit, Lynch opens the door to an ‘extended evolutionary synthesis’ that includes such controversial notions as group selection and epigenetic inheritance – but not IDT, which he sees as a version of anomaly mongering, whereby Neo-Darwinism’s deficiencies are turned, ‘God of the gaps’ style, into an argument for an ‘intelligent designer’ behind nature’s daunting complexity. As it turns out, Lynch’s demarcation of worthy and unworthy anti-Darwinists seems to be shared by the Royal Society of London, judging by the conspicuous presences and absences at its latest attempt to define the frontiers of the life sciences – the November 2016 conference ‘New Trends in Evolutionary Biology: Biological, Philosophical and Social Scientific Perspectives’. Nevertheless, the label ‘God of the gaps’ is just as unfairly applied to IDT as it was to Newton’s attempt to justify divine interventions in an otherwise lawlike world. To be sure, political proscriptions against referring to divine agency in scientific argument have informed such irritating indirectness on the part of both Newton and today’s IDT enthusiasts. Those proscriptions had already been written into the Charter of the Royal Society, of which Newton was himself an early member. We shall return to that foundational scientific institution in the next section, but let us first consider the long-term consequences of this particular form of repression for today’s scientific world. It has helped that the people contributing to the so-called extended evolutionary synthesis, whom Lynch presents as exemplary ‘minority reports’ to Darwinism, were never so totally expelled from the scientific establishment as to be effectively alienated from their own history. Thus, the 2016 Royal Society evolution conference, while relatively ecumenical in composition, did not rise above the level of a troubleshooting exercise for Neo-Darwinism (Zimmer, 2016). While that prospect may be attractive to ‘minorities’ who seek acceptance from the ‘majority’, the real losers in the scientific game, such as IDT advocates, still did not get a hearing. In practice, it means that the losers remained disinherited from the past they share with the winners. It illustrated Kuhn’s (1970) perceptive observation that a paradigm’s ascendancy is marked by a winner-take-all approach to the field’s history, which is codified in the textbooks used to recruit the next generation of scientists. This happens through a kind of revisionism, in which that common heritage is laundered of whatever elements of divergence had contributed to the original schism. Thus, in the case of Darwinism’s ascendancy, the fact that many – if not most – of those who provided the evidence that Darwin used to forge his original theory, as well as the evidence for today’s Neo-Darwinian synthesis, were more metaphysically open than Darwin himself to the idea of a cosmic intelligence (often personalized as God) is erased from the historical presentation of their contributions – or else treated as an idiosyncrasy that perhaps even impeded the correct interpretation of their findings. (Michael Ruse’s historically informed philosophical underlabouring for Neo-Darwinism has been quite explicit on this point.)
In short, theology is retrospectively written out of the official historical record of science, and students are left with the impression that all the scientists who contributed to the Neo-Darwinian synthesis thought about the world in more or less the same way Darwin did. Kuhn
wasn’t kidding when he compared this process to the work of the Ministry of Truth in George Orwell’s 1984. Moreover, the highly technical and largely ahistorical (aka ‘state of the art’) character of the terms on which scientific disputes are normally settled has enabled the scientific community to protect this history from interlopers who would offer alternative histories that draw attention to these moments of closure – as well as the repressed alternative futures that might still be worth recovering now. Such repressed ‘alt-histories’ are the stuff out of which ideologies have been forged in the political sphere, sometimes to revolutionary effect, as when these lost futures provide the framework for the revolutionary party’s utopian horizons. Indeed, in the nineteenth century, the history of science seemed to be contributing to an alternative vision of secularism to that being forged by the emerging nation-states of the Europeanized world. Here I mean to include the followers of such disparate thinkers as Auguste Comte, Karl Marx and Herbert Spencer. Each in his own way tried to leverage the old trans-national framework of Christendom to envisage a world order governed by scientific principles instead of ‘natural law’. Of the three, only Comte seemed to think that this sense of global governance would still require a ‘superstate’ of the sort to which Roman Catholicism had aspired. Marx remained ambivalent, and Spencer did not think that a state would be required at all in a world where humanity’s scientific impulses were channeled into industry rather than warfare. However, as ‘science’, now understood as open yet disciplined inquiry, came to be integrated into the teaching regimes of national universities, science’s threat to the nation-state as an alternative secular regime gradually receded. By 1870, Comte’s ‘positivism’ had become part of the emerging modern academic establishment; Marxists were still outside the establishment, but as Marxism’s political fortunes waned, they too became part of it; and Spencer’s followers have always remained somewhere in between, with an increasingly strong footing both inside and outside of academia. To be sure, there were various efforts, especially after the First World War, to position science as an honest broker of international differences, if not an outright consolidator of human interests. The quest for a ‘universal language’ in the twentieth century – flying under such banners as Esperanto, Ido, Logical Positivism and General Semantics – largely tracks these failed attempts (Gordin, 2015: Chaps. 4, 5, and 6). And as science became academically domesticated, and ‘scientism’ became a pejorative for the very idea of science as a political movement, the history of science itself became just one more discipline sitting next to the other scientific disciplines, a challenge neither to the sciences themselves nor to the larger society that sustains science and that science shapes. In short, the history of science as a discipline exemplifies the Orwellian world that Kuhn believed has enabled science to thrive. But is it the only world in which science could thrive? And is science thriving even in this world? Here is a way to consider these questions: If science itself is to be founded on a scientific basis, shouldn’t its own foundations be subject to the same kind of scrutiny as the objects of scientific inquiry? In that case, a ‘science of science’ should be as scientific as the science it studies.
At the very least, it means that one is concerned about the disposition of the evidence that science uses to legitimize itself. And that
includes historical evidence. This was the spirit in which Ernst Mach’s Science of Mechanics, with its ‘critical-historical’ approach, challenged the continued dominance of the Newtonian paradigm in physics in the late nineteenth century (Fuller, 2000b: Chap. 2). In effect, Mach claimed that the Newtonians had engaged in a largely successful two-century campaign to suppress countervailing evidence to their most fundamental principles. Mach’s larger point – made more by his own practice than by explicit argument – was that this sort of critical historiography was as much part of the methodology of testing scientific claims as controlled experiments in a laboratory. Although Mach expressed his view at a time when epistemologically and ontologically sharp differences were being drawn between the sciences of Geist and Natur, it would still have been recognizable to his readers as re-enacting the prehistory of modernity. After all, the modern theory of evidence, with its strong forensic orientation, had developed in the Renaissance and the Reformation out of concerns that the modern Latin-based understandings of a range of authoritative texts from the Bible to the Donation of Constantine were false, if not deliberately falsified. Francis Bacon and others with legal training applied this critical approach to evidence to testimony more generally, resulting in the ‘method’ associated with the nascent experimental sciences in the seventeenth century (Franklin, 2001). Moreover, a quick survey of Bacon’s ‘idols’ reveals that the mental liabilities targeted by this sense of critique cut across the modern science/theology divide. At one level, this should be obvious, because the distinction was only in the process of being drawn in Bacon’s day. But at a deeper level, Bacon had observed that the ‘idols’ persist, notwithstanding one’s formal disciplinary training. In this respect, Daniel Kahneman (2011) and his followers reincarnate the spirit of Bacon in today’s psychology laboratories. Indeed, an interesting history of the topic of cognitive limitations and biases could be written, starting with Bacon, passing through Marx’s early application of Biblical criticism to classical political economy, and ending with the early ‘social constructivist’ phase of Science and Technology Studies (STS). On the more strictly scientific side of this trajectory, Joseph Priestley’s The History and Present State of Electricity, which inspired Faraday and Maxwell, might be inserted alongside Mach’s book, which influenced Einstein and Bohr. A couple of books of mine offer some substantial clues about how such a history might be executed (Fuller, 1993; Fuller, 2015).
4.4 What Would Francis Bacon Have Said?
Although he presided over a legal system that was gradually disentangling itself from the strictures of Roman law, Francis Bacon remained an admirer of its inquisitorial mode of delivering justice, whereby the judge – not the plaintiff – sets the terms of the trial. Indeed, this feature of Bacon’s thought attracted Kant, the logical positivists and Popper. They liked the inquisitorial mode’s studied neutrality towards the contesting parties, even in cases where one side had a prima facie conceptual or empirical advantage over the other. As Bacon himself had astutely realized, such
neutrality requires a purpose-made agency – in the case of the ‘new science’, a sort of epistemic judiciary capable of settling disputes by virtue of its independence from other authorities yet also with the Crown’s backing for its rulings. Bacon’s ‘science court’ vision was probably inspired by his role as personal lawyer to King James I, who famously justified his own ‘free monarchy’ (aka ‘absolutism’) on similar grounds: The King encouraged officials to collect grievances from the people and discuss them in Parliament, but the King’s word would be final on the disposition of cases. In this respect, Bacon conceived of the scientific method as a legal procedure for executing the ‘royal prerogative’ regarding epistemic matters (James VI and I, 1995). Here it is worth observing that at least since Thomas Aquinas, the role of the monarch had been associated with status in a sense more related to ‘state of play’ (cf. status quo) than social class. The so-called ‘divine right of kings’, on which both Aquinas and James agreed, was more about the power to make binding decisions than about any mysterious papal sanctity that might hang over them. Bacon’s understudy Thomas Hobbes understood this point very well: It is about ending the argument for now, drawing a line under the decision taken by the Crown, thereby establishing a fact that may be overturned in the future, pending a new challenge. This line of thought has provided the metaphorical basis for the modern idea of the state, which is responsible for the authoritative readout of the facts concerning its jurisdiction (aka ‘statistics’). Thus, the state became the entity in terms of which various social groups and political parties jockeyed for position, driving the vectors of policymaking in the process. In Bacon’s vision, the state would evaluate competing knowledge claims by testing them under fair – what we now would call ‘experimental’ – conditions and keeping track of the outcomes. Bacon held that officers of the state (‘civil servants’) should be dedicated to this project because he thought that an ongoing forensic inquiry into all domains of knowledge would enhance society’s wealth-generating capacity – perhaps in the spirit of a stock market, whose price fluctuations are read as a signal of investor confidence in an array of prospects. Unfortunately, both Bacon and James died only a few years after developing their respective ideas about science and politics. The next generation saw England engulfed in civil war, resulting in Oliver Cromwell’s brief but potent ‘republic of virtue’, which in many respects was more authoritarian than James’ absolutism. That train of events greatly reduced the appetite for the sort of ‘absolutism’ originally entertained by the Lord Chancellor and his royal patron. The Royal Society of London, supposedly Bacon’s institutional legacy, captures this aversion well. Its charter constitutes a mutual non-interference pact between science and the state, in return for which science makes itself available to the state. Lynch’s (2001) first book was devoted to the considerations behind this metamorphosis – if not perversion – of Bacon’s vision. In any case, the Royal Society has set the benchmark for ‘autonomous’ science over the past 350 years, not least in the work of Robert Merton and Thomas Kuhn – and in so doing, has anchored modern science policy thinking (Proctor, 1991).
Note the difference between the Bacon/James vision of the state’s role in knowledge production and two alternative visions that have become dominant in the
modern era, each of which is intimated in the Royal Society’s charter: (1) The state itself is in the business of funding original research. (2) The state lets research rivals decide matters amongst themselves so long as they don’t challenge the state. Both violate what might be called the state’s ‘concerned neutrality’ that was central to the Bacon/James vision, and which John Rawls famously updated in terms of ‘justice as fairness’. The political scientist Roger Pielke’s (2003) phrase ‘honest broker’ for the science policymaker is worth revisiting in this context. The point is often missed because the popular image of ‘absolutism’ is one of monarchical willfulness. And while the image may fit absolutism’s more decadent expressions, absolutism was meant to position the monarch as the ultimate referee, the final court of appeal. Thus, when King James wrote his own job description in 1598 as a ‘free monarch’, he had in mind settling differences between parliamentary factions that would otherwise never be settled for very long because none of the factions could command the enduring respect and trust of the others. Note that a match settled by an independent referee is settled neither by mutual agreement of the teams (that would be ‘match fixing’) nor by someone who might be a player in a subsequent match (as in ‘peer review’). Moreover, the referee’s declaration applies only to the match in question and the season of play of which it is a part. In that respect, a ‘sunset clause’ is built into the relative standing of the competing teams based on a set of referee decisions: The team that tops the league table in one season still needs to prove itself from scratch next season. And while it is common to speak of consistent team performance over several seasons in ‘dynastic’ terms (e.g., Manchester United Football Club under Alex Ferguson), those terms are strictly metaphorical, given that match-based tournament games such as football are designed to encourage rivals to study their own and their opponents’ vulnerabilities in order to improve their own performance in future matches. Now, couldn’t science operate this way? If so, what would that entail? There are certainly enough precedents in the history of politics and law to enable it to happen. I alluded to the ‘sunset clause’ in much legislation (also much favored by Plato), which in the context of science would remove the presumption of indefinite continuation from an established knowledge claim (Kouroutakis, 2017). In other words, a knowledge claim’s epistemic status would expire unless it were somehow renewed. Theories of democratic politics that take seriously the fallibility of collective decision-making have come up with various ways of institutionalizing this idea. The most familiar and most game-like procedure involves the constitutional requirement of periodic elections, regardless of the current regime’s state of play. Yes, this would mean that Newton’s laws would come up for regular review. But that is not as unreasonable as it may first sound, considering the extent to which scientific research design has changed over time; hence the difficulties that arise when one tries to replicate an ‘established’ result from many years ago. Often it seems that scientists today would never have arrived at significant findings made in the past, had they been working with today’s methods and standards. This is an epistemologically deeper issue than the one surrounding the increasingly publicized cases of replication failures in science. In those cases, the
problems turn on fraud and other malfeasances associated with ‘research ethics’ (Fuller, 2007a: Chap. 5). However, here we’re not talking about assertions at variance with the scientist’s own actions, to which they might be held accountable in the present, but rather about assertions at variance with the actions of previous scientists, who are no longer around to participate in a reckoning in which the standards of scientific evaluation themselves have shifted since the actions were first taken. While not obviously right or wrong, this slippage in meaning and perhaps even verifiability routinely happens under the radar of most scientists and the society supporting them. After all, Newtonian mechanics was ultimately overturned, at least in part, as a result of improved experimental techniques that brought the theory’s anomalies into such focus that they could no longer be ignored or deferred. While spokespersons for the dominant scientific paradigm are quick to admit their ‘fallibility’, such declarations often seem hollow, if not passive-aggressive, without regular testing of the paradigm’s main findings and assumptions. In this regard, science might learn from organized team sports on matters of methodological vigilance, in order to ensure that minority voices truly have their day in court.
4.5 Bacon Redux: The University as a Revolutionary Constitution
Explaining the emergence and duration of the world religions, Max Weber (1963) famously argued that the great problem of institutionalization is how to manage the transition from prophet to priest, or what he called the ‘routinization of charisma’. In other words: How does a body of diverse followers stay true to the message once the inspirational leader holding them together has left the world-historic stage? One way, of course, is for some of the leader’s closest followers to take control and then impose ‘rule of law’, which streamlines the chain of command from the rulers to the ruled and allows this newly formed corporate entity greater scope for expansion. Weber believed that over time this process came to be rationalized as ‘bureaucracy’, which itself became self-reproducing, raising all the familiar questions about ‘spirit’ vs ‘letter’ of the law, understood as the original leader’s message. In the history of Christianity, the process just described fits best the history of the Roman Catholic Church – and perhaps explains the inevitability of the Protestant Reformation. The Protestants offered a different vision of institutionalization, based on an alternative history of Christianity. Here Jesus’ death is marked by his followers fully embodying his message as a kind of personal discipline, without any need for an elaborate clerical hierarchy. To be sure, such a broadly ‘democratic’ understanding of the faith resulted in the fractious ‘denominational’ history of Protestantism, whereby churches formed simply based on common readings of the Bible, typically as a result of having broken away from some other church. In this context, the difference between prophet and priest is completely erased; hence the adventurous nineteenth-century US attempts to ‘rewrite’ the Bible by, say, Mormons,
Christian Scientists – and even Thomas Jefferson, as we saw earlier. But this policy of ‘endless renewal’ or ‘permanent revolution’ tends to result not in an improved reorganization along an agreed standard but in the proliferation of organizations, each operating by its own standard. And so, what began as an attempt to purify Christianity of its worldly corruption arguably only served to erect a Tower of Babel. It is against this backdrop that Yale law professor Bruce Ackerman (2019) proposes to reconcile the above two visions of institutionalization – one elitist, the other populist – in terms of revolutionary constitutionalism. According to Ackerman, the US exemplifies a successful revolutionary constitution. I shall explore the extent to which the modern university might be understood in a similar light. But Ackerman himself aims to capture the attitude of the US Founding Fathers towards the remarkably resilient form of republican government they designed. Other revolutionary movements across the world have tried to follow its example, with varying degrees of success. In Ackerman’s telling, key to the Founding Fathers’ success was that they planned for their own obsolescence. Unlike the great prophets (or, for that matter, absolute monarchs), they did not, so to speak, ‘die in office’. The Founders tried to ensure that the people identified neither with them nor their anointed successors but with the Constitution as such, resulting in a ‘government of laws, not men’, to recall John Adams’ resonant phrase. George Washington’s refusal to continue as President after having served two terms in office symbolized this sensibility at the start of US political history. In today’s terms, the people had to ‘own’ the Constitution. The Constitution was not a dispensable document that could be easily changed by the ‘will of the people’, but more like the rules of a game that outlasts its players, no matter how illustrious they might be. Moreover, the players derive their own luster from having agreed to play by those rules, albeit in their own distinctive ways. The theological model for the planned obsolescence of the supreme leader is Deism, a kind of post-Protestant secular religion that animated Enlightenment thinkers generally, including the US Founding Fathers. To be sure, Deism was treated with a certain amount of derision in its day by Voltaire for conceiving of the deity as deus absconditus, God fleeing from the scene of Creation, as if it were a crime. But truer to the spirit of Deism is to say that it is the death of God by his own hand – a thought that no doubt had inspired Nietzsche. Put in cooler academic terms, Deism was about a radical revision of the idea of ‘God the Father’, a revolt against ‘paternalism’ in the strongest possible terms. Keep in mind that the default legal position cast the father as ‘lord of the manor’, typically based on having built the family home or extended it significantly, with presumptive authority over all in his domain, especially offspring, until he dies; hence the prima facie legitimacy of the ‘divine right of kings’. Eighteenth-century legislation made it easier for at least male offspring to escape paternal authority by leaving home and effectively establishing a new identity based on their own actions. Thus, Kant’s famous definition of Enlightenment focused on humanity’s release from ‘nonage’ – or ‘infantilism’, as we now say.
However, Deism’s conception of divine paternity reversed the polarities of this liberal legal tendency: Instead of humans leaving the divine household, which could invite atheism, God himself vacates the premises, leaving his human offspring to occupy it in perpetuity.
This point helps to explain the repeated appeals that the Founding Fathers made to ‘natural law’, or the ‘higher law’, as it is sometimes still called in US constitutional circles. As the late Harvard science historian I.B. Cohen (1995) observed, the relevant conception of ‘natural law’ was Newton’s, not Aquinas’. In other words, the emphasis was on the ‘law’ rather than the ‘natural’ part of ‘natural law’. The Founding Fathers understood ‘natural law’ not simply as a normative order that conforms to the nature of things as they normally are, but more expansively as God’s great infrastructure project, the comprehensive outworking of a supreme will for the purpose of habitation by others who have yet to exist: the realm of the possible rather than the actual. The design of the US Constitution was meant to mimic the Newtonian world-system, understood as an autonomous machine that can function no matter who occupies the various designated roles. We normally think about this feature in terms of the Constitution’s intricate construction, with its famed ‘separation of powers’ and ‘checks and balances’, which is designed to withstand the vast range of interests that might drive not only those who occupy the offices of state but also the citizenry at large. And in a democratic republic, the differences between these two groups are meant to dissolve over time, as the people as a whole develop a greater capacity for self-representation. In this respect, the US has made a great success of the Founding Fathers’ vision, insofar as the dispositional difference between members of the US Congress and those they represent is now much narrower than that between members of the UK Parliament and those they represent. However, more subtle but equally important is the complementary effect that the Constitution has in disciplining people’s interests, which is not so different from how the rules of a well-designed game discipline the players, in terms of molding their ‘native’ capacities to the ‘artificial’ field of play. There are several ways of thinking about this. In a Freudian vein, we might think in terms of sublimation. Thus, the Founding Fathers themselves placed great store by hypocrisy as a political virtue, a point driven home in recent years by Jon Elster (1998) and David Runciman (2008). A more provocative way of thinking about the matter is to consider ‘method acting’, whereby an actor draws on his or her own personal experience to play an assigned role in a drama. Konstantin Stanislavski, who introduced this approach to theatre in the closing years of the Russian Empire, came to be celebrated in the Soviet Union for implying that, in principle, anyone could play any role if they could pour themselves entirely into the world dictated by the dramatic script. In practice, it is about the actor adopting a frame of mind that enables him or her to speak the words in the play ‘authentically’, so that the audience believes that the actor could have been the author of the playwright’s words (Benedetti, 1982). If this entire way of putting matters seems strange, it is because revolutionary constitutionalism drives a wedge between authenticity and sincerity in our moral psychology (cf. Fuller, 2020a: Conclusion). Epistemologically, the difference boils down to whether we believe what we say (authenticity) or we say what we believe (sincerity). The Deist shift occurs when authenticity is privileged over sincerity. Sincerity presupposes that belief formation happens by some means other than our capacity for self-representation.
It is anti-hypocritical. Thus, a sincere person’s belief tends to be seen as spontaneous and self-certifying, that is, as something they
are compelled to believe. ‘Evidence’ has functioned in modern epistemology as the catch-all name for that independent source of belief formation. On this view, you can’t intentionally make yourself believe something, or even simply decide to believe something. On the other hand, one might see this whole way of understanding belief formation as little more than alienating people from their capacity for judgement. Thus, there is equally a line of modern thinkers – Blaise Pascal and William James, most notably – who predicate belief formation on the capacity for self-representation (Fuller, 2020a: Chap. 12). It is pro-hypocritical. These thinkers are about ‘walking the talk’, which is close to the normal understanding of ‘authenticity’. If you act like you believe something, you end up believing it – ‘owning’ it, in that sense. Here the human is envisaged not as a receptacle but as a forger of belief – an imperfect version, if you will, of the deity who demonstrates his own divinity by creating through the word (logos). To be sure, many senses of ‘imperfection’ are implied here. Unlike God, humans are not in full control of the consequences of what they say, due to their plurality, natural equality and individual finitude. Conflict of wills is therefore inevitable. This is the context in which a ‘government of laws, not men’, both in the natural and social worlds, truly matters. In the history and philosophy of science, the most obvious analogue to revolutionary constitutionalism is the scientific method, especially in the spirit originally articulated by Francis Bacon. As personal lawyer to King James I, Bacon regarded universities not simply as standing apart from the Protestant monarchy but as arguably more faithful to the Pope than to the Crown, which might inhibit knowledge production in the emerging ‘British’ national interest. Thus, a state agency should be entrusted as supreme neutral arbiter. It would be a kind of judiciary, but on the model of an inquisitorial rather than an adversarial legal system. In other words, this scientific judiciary would actively prosecute cases on its own initiative rather than simply wait for plaintiffs and defendants to self-identify (Fuller, 2007a: Chap. 5). In any case, intellectual litigants would need to translate their contesting knowledge claims – which were often informed by fervently held alternative readings of the Bible – into the regime imposed by a decision procedure that would itself be neutral to the contestants but capable of resolving their dispute to the temporary satisfaction of all sides – at least until the next contest. Bacon’s original proposal still defines the baseline understanding of the scientific method that science students are taught and about which philosophers of science continue to theorize. But more relevant for our purposes is that Bacon wanted his neutral procedure to exert supreme epistemic authority, so that everyone would know the standing of their respective epistemic claims at any given moment, as well as the terms under which that standing could be contested, presumably with the hope of improvement. It is surprising that commentators have not made more of the game-like character of Bacon’s original proposal, though more modern attempts to flesh it out in terms of probability theory, such as Bayes’ Theorem, capture its flavor. For his own part, Bacon spoke of the moments of contestation as ‘crucial experiments’. This was at a time when ‘experiment’ simply meant ‘experience’.
The overall meaning of Bacon’s phrase was carried by ‘crucial’, which suggested a crossroads, understood as a metaphor of Jesus’ Crucifixion. The idea was that one
had to take a stand, regardless of the consequences. In the case of science, the need to take a stand is regularized. The paths presented at any given crossroads may be mutually exclusive, but the journey itself involves many crossroads. There is an interesting ‘deep theology’ about this way of thinking about the science-religion relationship, especially in light of Bacon’s sponsorship of an ‘authorized’ (‘King James’) English version of the Bible and his suspicions about a university sector that was still oriented towards Rome, notwithstanding the Crown’s separation from the Pope. In this context, one might regard crucial experiments as the mechanization of divinity, insofar as the prospect of genuine collective epistemic advance is tied to the adjudication of specific knowledge claims (Fuller, 2021c). (Think about the role that computers often played in early science fiction as arbiters of competing human claims and resolvers of human doubt more generally.) But more relevant to the current argument is exactly how Bacon envisaged the enforcement of the supposedly ‘binding’ nature of the results of these crucial experiments. Part of the answer lay in the reasoning that Bacon himself offered to justify what we continue to recognize as the scientific method. Nowadays we associate this reasoning with a kind of ‘empirical’ or even ‘inductive’ rationality – though Karl Popper, the Second Coming of Bacon, rightly balked at ‘inductive’, given what the term had come to mean by the time he wrote in the twentieth century. In any case, clearly more was required – but Bacon himself didn’t provide it. Death spared Bacon a mass revolt against the son of the monarch he had served. This ‘civil war’ had been fueled by warring Protestant factions, all suspicious of each other and of foreign Catholic designs. However, Bacon’s private secretary Thomas Hobbes survived the conflict and witnessed the Royal Society of London’s attempt to institutionalize Bacon’s legacy. Hobbes famously was refused membership after his various objections to its founders’ interpretation of Bacon’s legacy, which together amounted to an outright rejection of the terms of the Royal Society’s famed Charter (Shapin & Schaffer, 1985). Hobbes objected not only to the self-certification of the Royal Society’s knowledge claims – what we now call ‘peer review’ – but also to the Royal Society’s corporate autonomy from the newly restored Crown. Notwithstanding its official eschewal of religion, the Royal Society was setting itself up as a ‘new Rome’, on the model of the Catholic Church. Hobbes specifically feared that the Royal Society would establish an alternative source of authority to the Crown that occluded the absolute power that is required to enable a society of free individuals to flourish as a whole. For his own part, Hobbes notoriously dramatized the requisite ‘absolute power’ as a collectively manufactured deity – the ‘Leviathan’ – to whom the society’s members grant a monopoly of force in exchange for the requisite sense of order for free play. Of course, Hobbes’ successors, including Locke, Rousseau and the US Founding Fathers, went on to embody this absolute power in various forms. The distinctiveness of the US Constitution lay in the explicitness of the Founding Fathers, who clearly wanted to make the ‘social contract’ binding for future generations in perpetuity. Theirs was a bureaucratically distributed Leviathan.
The phrase ‘popular sovereignty’ captures only part of the underlying idea of revolutionary constitutionalism, which helps to explain the default feeling among
American citizens that they ‘own’ the Constitution. Yes, the people are the Constitution’s ultimate upholders and underwriters. However, the Constitution’s specification of their permissible interactions in various individual and collective roles suggests something closer in conception to a dramatic script. (The original counting of a slave as 3/5 of a person should be considered in this light.) Of special note for our purposes is the care taken to specify how one might enter and exit the roles of designated characters in the constitutional script. Of course, in normal theatre, the same script may be performed by different actors in different venues, but the script does not specify how the actors are chosen or replaced – simply that they should perform their roles according to the brief character sketches in the play’s notes. To be sure, such non-scripted factors as the popularity or virtuosity of particular actors may determine their longevity in theatrical roles. However, the US Constitution is written to ‘internalize’ these ‘externalities’, so to speak. In effect, you are meant to fully inhabit the role to which you are assigned by the Constitution as both citizen and potential office holder. It is a game you play for life with your fellow co-habitants, passing in and out of different roles. Thinking about this self-transformative character of the US Constitution as an endlessly unfolding drama (a ‘soap opera’?) on the world-historic stage has been relatively easy for successive waves of immigrants motivated to lose their old identities in favor of new ones. This sensibility was epitomized in the Progressive-era image of the US as the world’s ‘melting pot’. Two countervailing considerations must have weighed on Hobbes’ appraisal of the Charter of the Royal Society. On the one hand, there were Plato’s original objections to the dramatists, given that their words have the potential to generate an alternative reality that may result in social unrest, especially if the plays are performed well. Indeed, this was what Plato meant by calling the playwrights and actors ‘poets’, after poiesis (i.e., ‘produce out of nothing’). A former student of the Cambridge science historian Simon Schaffer, Adriano Shaplin, vividly drew out this point in his 2008 play The Tragedy of Thomas Hobbes, which portrays the Royal Society as a theatre that gave the King private showings of its latest productions. In this context, Hobbes appears as a carping critic incapable of appreciating experimental demonstrations as entertainment of potentially universal appeal. On the other hand, in a more Calvinist vein, Hobbes may have alighted on the fact that God deals most directly with humans as independent agents in both the Old and New Testaments through a series of ‘covenants’, which Hobbes himself secularised in his own political philosophy as the ‘social contract’. Indeed, he went further, turning the social contract into the crucible in which the deity is forged. From this standpoint, the Royal Society’s Charter camouflages drama’s potency in the guise of a covenant, very much in the style of an established church that remains formally separate from, yet unaccountably influential over, the Crown. As a point of jurisprudence, Hobbes’ strategy for dealing with the threat posed by the Royal Society was straightforward. Its pseudo-covenant with the Crown would be replaced by a super-covenant in which the Crown is formally legitimized as supreme authority, by virtue of which the Royal Society would then be licensed.
In short, England (now the UK) would be endowed with what it still lacks, namely, a
written constitution from which the rights and duties of individual and collective bodies are derived. To be sure, the US Constitution aimed to fill that void for the original thirteen British colonies in North America. In Hobbes’ view, more justice would have been done to Bacon’s legacy had Bacon’s prototype for what became the Royal Society, ‘Solomon’s House’, been incorporated as a state agency in the social contract. Such an agency would serve two main functions. One would be to incubate vanguard projects that promote society’s long-term security and prosperity. The closest contemporary analogue is DARPA, a product of the US Cold War defense strategy (Fuller, 2020b). The other function would be to serve as a kind of ‘supreme court’ of knowledge claims, which in the late twentieth century was floated on the international level as a ‘science court’. The Royal Society turned out to be neither of these, yet its Charter has had an anchoring effect on the subsequent institutionalization of science, especially once science was incorporated in the university curriculum in the nineteenth century. The key normative principle of the Royal Society’s Charter that troubled Hobbes was its declaration of mutual non-interference with the affairs of state, what is nowadays regarded as one of the earliest modern statements of the ‘neutrality’ of science vis-à-vis matters of politics and morals. In practice, it left the door open for the Crown to be influenced by the Royal Society, should the Crown be so moved. In The Tragedy of Thomas Hobbes, Shaplin presents this prospect by casting the members of the early Royal Society as a new species of actors and dramaturges, who are keen to persuade the King that experimental demonstrations are the supreme form of entertainment. In other words, when the King is not directly attending to the affairs of state, his mind should be preoccupied with the diversions of the laboratory. For Hobbes, this amounted to an incentive for the King to become beholden to an alternative reality, which had also characterized his understanding of the Church’s influence. Three components of a revolutionary constitution are relevant to the reconfiguration of the university: (1) The Constitution reigns supreme, resulting in a ‘government of laws, not men’. (2) The Constitution is a social contract across generations, which the parties collectively ‘own’ in the strong sense of its being constitutive of their identities and the source of their sovereignty. (3) The Constitution is elaborated as an all-encompassing exercise in role-playing – as in a game or a drama – that potentially never ends but nevertheless possesses a clear sense of how one enters and exits a role, and one’s standing within it at any given time. In meeting these three conditions, the revolutionary constitution is ‘purposive’ without being ‘purposeful’ in Kant’s sense. It is more about perpetuating certain productive attitudes and relations in the citizenry than organizing them to produce a specific end, let alone a utopian social order. The key to regarding the university as a revolutionary constitution is to take seriously Francis Bacon’s attempt to de-center the medieval universities with an alternative institution of higher learning and knowledge production. Of course, the version of Bacon’s vision that succeeded, the Royal Society of London, anchored the peer-reviewed, paradigm-driven sense of science that remains dominant today.
However, Bacon’s private secretary Thomas Hobbes envisaged this alternative
institution arguably more in Bacon’s own spirit – namely, as one directly embedded in the terms of the social contract. He saw the nascent Royal Society as potentially providing an alternative source of authority to the Crown, very much in the pernicious spirit of the established church that would sow the seeds of revolt in the future. Hobbes’ objections to the Royal Society can be turned into the basis for thinking about the university as a ‘revolutionary constitution’, which is exactly what Humboldt did.
Chapter 5
The University as the Site of Utopian Knowledge Capitalism
Abstract ‘Utopian knowledge capitalism’ signals a re-engagement with the spectrum of ‘utopian socialism’ from Saint-Simon to Proudhon that Marx disparaged to make room for himself. The US economist Philip Mirowski correctly observes that Marxists fail to realize that capitalism taken to its logical conclusion – as neoliberalism does – deconstructs the ontological status of ‘knowledge’, just as it does ‘labour’. Both are just forms of capital in what neoliberals imagine to be a dynamic market environment, whereby technological innovation plays a key role as ‘creative destroyer’ of capital bottlenecks that arise from the ‘inheritance’ of wealth at various levels, resulting in capital being concentrated, sometimes even monopolized. The pre-Marxist ‘utopian’ forms of socialism differed amongst themselves over whether the relevant sense of ‘organization’ needed to ensure capital’s endless mobility should come from above (Saint-Simon) or below (Proudhon). This chapter explores what is worth reviving in these relatively suppressed strands of socialism’s history, concluding with a reassessment of the Popperian tradition in the philosophy of science, which can be understood as exploring a second-order version of the same intellectual bandwidth as utopian socialism. In this context, the significance of the ‘Gestalt switch’ is highlighted.
5.1 Marxism as a Casualty in the Fight to Reclaim Progressivism from Neoliberalism
In the spirit of pruning the tree by sawing off the limb on which one sits, US economist Philip Mirowski (2019) has subtly eviscerated the many left-inspired critiques of neoliberalism for misconstruing the movement’s true epistemological basis, which has effectively ‘rendered socialism unthinkable’ – at least as far as Mirowski is concerned. Of course, Mirowski continues to regret neoliberalism’s apparent triumph just as much as the critics that he criticizes. Characteristically, much of his argument is conducted by ‘persuasive definition’, in which he couches in explicitly pejorative terms quite incisive accounts of how neoliberalism works,
perhaps to convey a greater sense of conspiracy on the part of the ‘neoliberal thought collective’, as Mirowski likes to say, than is necessary. Indeed, one might recast his account in a way that makes the neoliberals appear closer to how they have envisaged themselves, namely, as consistent defenders of human freedom, taking the battle to the final frontier of unwarranted restriction of access: knowledge itself. Drawing on the full range of twentieth-century capitalist economists – from the Austrian school to the neo-classical and even Keynesian schools – Mirowski (2002) had earlier identified a tendency to treat the ‘market’ as an all-purpose information processor. The motivating intuition is something like the following. Say I produce a good – by whatever means, for whatever purpose. What exactly is its value? The market answers this question in terms of the price at which I’m willing to part with it, which happens when I assume the role of seller and then attract a buyer. At the same time, such transactions contribute to a deeper discovery process, as their pattern reveals the true purpose of the good. After all, some goods are directly consumed, some contribute to other transactions, and some are simply ‘banked’ or in some other way treated as an asset. Moreover, some goods have staying power in the market, whereas other goods are soon superseded. The various capitalist schools differed over the market’s reliability in performing these functions when left to its own devices. While the Austrians trusted the market’s reliability, the Keynesians did not. Indeed, the latter justified the need for the state as the producer of certain ‘public’ goods based on the unlikelihood that the market would naturally produce them. And while the Keynesian account remains a popular justification for state control over health, education and utilities, it is nevertheless telling that it trades on a circular definition of ‘public’, which is simply whatever cannot be produced by the privately driven forces of the market. Strictly speaking, ‘public’ in the Keynesian lexicon is a residual term, on the basis of which the state is then leveraged as a ‘God of the gaps’ to make up for ‘market failure’. Thus, like the other capitalist schools of economics, Keynesianism treats the market as foundational to the ‘business of life’, as Alfred Marshall, the English founder of neo-classical welfare economics, had already said in 1890. ‘True socialists’, as Marxists have always regarded themselves, are sensitive to this point, which helps to explain their increasing suspicion of the ‘welfare’ preoccupations of ‘social democrats’ as the twentieth century wore on. The question of the market’s default standing in economic life arguably came to a head in what came to be known as the ‘socialist calculation debate’. This debate served to define neoliberalism’s steadfast opposition to a conception of ‘socialism’ that always threatened to spill over into Marxism (Steele, 1992). The debate originated in two politically opposed circles – one socialist and one liberal – both somewhat outside the academic establishment in post-First World War Vienna. The socialist circle, associated with logical positivism, was led on socio-economic matters by Otto Neurath, while the liberal circle, associated with Austrian economics, was led by Ludwig Mises, though a more junior member, Friedrich Hayek, ended up having a more enduring impact.
Neurath held that a comprehensive scientific grasp of society’s central tendencies allowed the state to minister to people’s needs more accurately and efficiently
than a simple reliance on the fallible judgements of individuals subject to market forces. Neurath called his position ‘Neutral Marxism’ to indicate that he accepted Marx’s socio-economic analytic framework while distancing himself from Marxist revolutionary politics (Proctor, 1991: Chap. 9). For his part, Mises updated the Austrian school of economics for the twentieth century by stressing science’s inability to capture in any overarching fashion the spontaneously formed intersubjective agreements that constitute social relations generally – and which the market epitomizes. Mises and his circle – including Hayek – came into economics through law, and so were mindful of the role that privately undertaken contracts played in the modern era as a countervailing force to the classical idea of law imposed through legislation, either by a sacred or secular sovereign. Contracts (aka ‘prices’ in a strictly economic context) historically generated greater levels of freedom, and in turn productivity, to which legislators had sometimes responded with authoritarian measures, resulting in resistance if not outright revolution. The Mises Circle did not want science to be the basis of a ‘New Leviathan’, which is what they feared was happening in the newly formed Soviet Union (Hayek, 1952). As this brief sketch suggests, the two sides of the socialist calculation debate read the lessons of the modern era as presenting rather opposed routes to improving the human condition. Whereas Neurath saw science as coming to dominate individual decision-making, once people appreciated the improvements to their lives that resulted from such delegation to expertise, Mises saw the law’s increasing liberalization as granting freedom in ever more aspects of life, resulting in people being more comfortable with taking risks. While Neurath believed that the state could pre-empt unnecessary error, Mises regarded the state as potentially pre-empting necessary liberty. Knowledge is implicated in both cases as enabling the state to be a vehicle of what I have called modal power (Fuller, 2018a). This theme is of course familiar from the work of Foucault, albeit in his characteristically disengaged way. However, for our purposes, the socialist calculation debate highlights the original Platonic point that an emphasis on ‘knowledge’ leads to a concentration and asymmetry of modal power – that is, the state is allowed to restrict individual freedom to keep the collective efforts of society focused – whereas a de-emphasis leads to a more symmetrical distribution of modal power. Plato regarded the latter state as doomed to chaos. However, I shall present it below more positively as ‘liberal interventionism’. When Mirowski rails against neoliberalism’s appropriation of ‘deconstructive’ and ‘postmodernist’ epistemological tropes, he is referring to just that sort of levelling, which, even without the benefit of these trendy French movements, Mises had already proposed was better than Neurath’s top-down approach to organizing the economy. Here too Mirowski is wise to observe that Marxism’s blindness to this move results from its more ontologically grounded, or ‘realist’, conception of ‘truth’, which sees knowledge as aligned not merely with specific power relations but with material conditions that pre-exist the intentions of the particular agents who constitute a field of power.
Mirowski also correctly understands Marx’s great stress on human labour as a factor of production in terms of this heightened metaphysical orientation, one shared by Aquinas and even Locke – but which Mises and
the neoliberals, as true moderns, resolutely do not share. But as we shall see below, contra Mirowski, to acknowledge the metaphysical blinders on Marxism’s understanding of capitalism is not to indict all forms of socialism. In fact, Marxism is the only form of socialism that has openly engaged in ideological warfare against capitalism, even though – in theory at least – it too regards socialism as somehow ‘completing’ the industrial revolution begun by capitalism. My own point in all this is that we should reverse the polarity of Mirowski’s value judgements by arguing that neoliberalism’s epistemological horizon is in fact more ‘realistic’, but at a meta-level, in the sense of Realpolitik – namely, it is concerned with, as Bismarck memorably defined politics, the ‘art of the possible’. In that respect, the state has one clear superordinate role in neoliberalism, which is to increase the realm of possibility. The legal means for this, once again, is to break default entitlements, which came to be known, in the early days of neoliberalism’s intellectual cradle, the Mont Pèlerin Society, as ‘liberal interventionism’ (Jackson, 2009). According to this doctrine, the state breaks up monopolies and other rent-seeking socio-economic entities that impede the flow of capital. In the case of intellectual property, this means ‘knowledge capital’, or more to the point, ‘human capital’. ‘Liberal interventionism’ in this sense was the calling card of self-described ‘Progressive’ political movements on both sides of the Atlantic in the early twentieth century, including the Fabian movement, which helped found the UK Labour Party (Fuller, 2018b). Although Theodore Roosevelt and Woodrow Wilson, on the one hand, and Sidney and Beatrice Webb, on the other, may seem unlikely comrades in arms, they shared a profound aversion to what was passing as laissez-faire capitalism, which amounted to the default accumulation of advantage over time without any concern for its long-term social value, given the impediments that would be increasingly imposed on new market entrants (Fried, 1998). At the very least, this would entail an enormous waste of talent, if one operated with an open-minded attitude toward the contributions of future generations. Here one needs to recall the supposedly scientific basis for the laissez-faire doctrine in ‘Social Darwinism’. The slogan ‘survival of the fittest’, a phrase adapted from Herbert Spencer (not Darwin), epitomized the spirit of the movement, which seemed to accept the inevitability of wasted talent given a world of scarce resources. But the more politically salient point was that Social Darwinism had elided the difference between biological and financial advantage by its acceptance of the Lamarckian doctrine of the ‘inheritance of acquired traits’, which was intuitively helped along by ambiguity in the colloquial usage of ‘inheritance’ and such associated words as ‘dynasty’. Under the circumstances, the Progressives feared that capitalism would turn into a high-tech and globalized form of feudalism – that is, unless the state intervened to disrupt what amounted to an unlimited license for monopoly formation. Knowledge itself soon came into the sights of the Progressives, insofar as restricted access to both education (via selective schools) and research (via intellectual property) structurally reproduced the spread of advantage across the population.
Theodore Roosevelt came up with a novel solution to part of the problem, which would have the big corporate monopolies – Rockefeller, Carnegie, Ford,
etc. – channel much of their profits into ‘foundations’ dedicated to research and education for public benefit, in lieu of paying taxes. These predated publicly funded bodies in the US by two generations, and arguably made the biggest contribution to both natural and social scientific innovation in the twentieth century (Fuller, 2018a: Chap. 4). The more general Progressive solution was to foster mass education and minimize – if not outright eliminate – intellectual property rights (e.g., Nye, 2011: Chap. 5). Of course, this was easier said than done. Indeed, over the years several thinkers associated with neoliberalism’s ascendancy to power in the late twentieth century came to embrace monopoly formation, rendering ‘liberal interventionism’ in practice a policy for corporate tax relief and market de-regulation, policies associated with the Reagan-Thatcher years (Davies, 2014: Chap. 3). I shall return to this curious turn in neoliberalism’s intellectual trajectory when considering Saint-Simon’s brand of ‘socialism’.
5.2 Utopian Socialism as Post-Rentier Capitalism: Saint-Simon as Social Epistemologist
Once socialism outgrows its nostalgia for Marx, it will regroup around a polarity defined by Saint-Simon and Proudhon – the corporatist and the libertarian, respectively – and ‘socialism’ will thereby be seen as simply the name for alternative capitalist futures. Indeed, given that both Saint-Simon and Proudhon completed their most important work before Marx, one can read what follows as an updated version of what socialism might have been, had Marx never existed. Basically, the versions of ‘socialism’ proposed by Saint-Simon and Proudhon were alternative strategies for harnessing capitalism’s full productive potential for overall social benefit. As a first approximation, the one calls for greater concentration of capital and the other for greater dispersion of capital. Both have their appeal, and as I have suggested, neoliberalism has flipped from a more ‘Proudhonian’ to a more ‘Saint-Simonian’ orientation. However, unlike Marx, both Saint-Simon and Proudhon celebrated capitalism’s entrepreneurship and risk-taking. The two simply differed over the kind of society in which this should happen: Should it be governed top-down or bottom-up? Is entrepreneurship best done by one on behalf of all or by everyone as a whole? These are profound metaphysical disagreements, to be sure. Moreover, the associated policy differences between Saint-Simon and Proudhon to some extent track the socialist calculation debate. Both were concerned with minimizing waste and maximizing productivity, realizing that various sorts of trade-offs had to be made between the two. In response to these, Saint-Simon generally favoured scaling up and Proudhon scaling down: incorporation vs subsidiarization (cf. Cahill, 2017). Nevertheless, despite these profound differences, their attitude toward labour was much more ontologically flexible than Marx’s – and in that respect, showed a greater appreciation for the implicit metaphysics of capitalism. Here it is worth recalling that by the time the labour theory of value had made its way from Aquinas to Marx,
it had undergone a curious metamorphosis. The appeal of this medieval doctrine in the early modern era had rested on its clear opposition to the value regime supporting slavery and other forms of indentured servitude. Thus, for Locke and Adam Smith, the labour theory of value defined the proper reward of an individual’s labour. To be sure, Marx went on to re-specify ‘labour’ as a collective factor of production. However, Marx retained a metaphysical residue from the original theory. Although Marx officially dissociated the value of labour from any theological conceptions of human exceptionalism, he still regarded it as qualitatively distinct from the value of nature and other forms of capital. Indeed, the Marxist critique of capitalism’s ‘exploitative’ character depends on this point. Nevertheless, as greater liberalisation in the nineteenth century enabled people to deploy their labour in a wider arena and in various combinations with others, it became natural to see labour as itself another shapeshifting form of capital. David Ricardo pioneered this awareness by explicitly theorizing the terms – which he took to be inevitable – on which the drudgery of labour might be replaced by more efficient technology. And whereas Marx denounced Ricardo for justifying – if not encouraging – unemployment as dictated by the ‘logic of capital’, Ricardo had himself anticipated later neoliberal boosters of the ‘knowledge economy’, whereby ‘technologically unemployed’ workers would adapt to changing market conditions by acquiring new skills (Fuller, 2019a). Thus, for Ricardo, the real enemy of labour is not technological innovation but rent-seeking practices – including trade unions and professions – that restrict these renovated workers from entering new markets where they might be competitive with more established players. After all, if one thinks of labour as a shapeshifting form of capital – indeed, perhaps the most protean of all such forms – then the real problem is not that you might lose your job but that you might not find another job because the market is dominated by ‘closed shops’, the American expression for businesses that hire only union members. Both Ricardo and Marx were notorious foes of rent as a source of capital. However, the above discussion suggests a difference in the diagnosis of rent’s failures, which in turn reflects a difference in commitment to the labour theory of value. Marx was the stronger adherent to the theory, ultimately anchoring the value of labour in the work that people actually do, independently of how many other people could do the same job or its exchange value in the market. Rent is a problem from this standpoint primarily because it is unearned wealth: Income is accrued from sheer ownership without the owner adding productively to what is owned. It amounts to a legally protected power grab that gives the owner free rein over how to dispose of others. Social justice-based arguments against ‘worker exploitation’ can be easily mounted on this basis. In contrast, Ricardo saw rent primarily in terms of what it means for other market players – namely, it restricts their access to resources that they might use more productively than the current owners themselves. Thus, the circulation of capital is impeded, and the economy loses dynamic capacity to deliver prosperity for all. In short, the moral force of the critique of rent shifts from freedom arrogated (Marx) to freedom obstructed (Ricardo). In a liberal society, the latter is of greater normative concern. 
It is also closer to what I earlier identified as the ‘progressive’ position, including what I shall later characterize as ‘Georgism’.
Interestingly, some Marxist-inspired economic historians have begun to come round to the Ricardian position, albeit in their own tortured way (e.g., Christophers, 2019). Before turning to Saint-Simon, it is worth observing that historically the normative case for allowing rentiers absolute control over access to their property rested on the fact that, contra Marx, rentiers do indeed contribute to the value of what they own by preventing its degradation, thereby setting a standard of maintenance that future leaseholders are obliged to uphold. In his renewed defence of this case, Roger Scruton (2012) rightly associates what we would now regard as an ‘ecologically sustainable’ orientation with the economic horizon of classical conservatism. Moreover, much the same argument is implied in today’s discussions about the value of universities as ‘custodians’ of knowledge, notwithstanding the barriers that this entails in terms of various ‘entry costs’, whether it be in terms of acquiring credentials or simply understanding the content of academic texts. Here we need to imagine that the ‘turnover’ of both private land and technical language through repeated and extended use generates concerns about the maintenance of quality control, which in turn justify the existence of said ‘custodians’. In contrast, Saint-Simon saw the advantage of monopolies from a much more dynamic perspective, one that nowadays would be most naturally cast in terms of ‘economies of scale’. Moreover, unlike both the eco-conservative and the Social Darwinist perspectives, he clearly saw nature as an opponent that can be conquered. Indeed, Saint-Simon may be credited with having conceptualised the market economy in the sort of explicitly ‘political’ – that is, ‘friend versus foe’ – terms that could win over Carl Schmitt (1996). He held that the step change in human evolution – from ‘military’ to ‘commercial’ societies – occurred once we shifted from exploiting each other in wasteful conflict to exploiting nature together productively. Here nature is cast, in decidedly anti-ecological terms, as humanity’s ultimate foe, the conquest of which is necessary for humanity to fulfil its species potential. Saint-Simon’s hostile view of nature – which he inherited from Francis Bacon, and which Marx in turn took over whole cloth – is a secular holdover of the doctrine of Original Sin, whereby Adam’s Fall amounts to our species’ descent from divinity to animality. Perhaps unsurprisingly, Saint-Simon dubbed his philosophy the ‘New Christianity’, which did not sound quite as strange in the early nineteenth century as it does today – though the advent of transhumanism may bring this theological dimension of Saint-Simonianism back in fashion (cf. Fuller & Lipinska, 2014). The Saint-Simonian vision had already been justified by the medieval jurists who invented the universitas, a very interesting Latin concept that is normally – and rightly – translated as ‘corporation’, of which ‘universities’ and ‘incorporated communes’, or ‘cities’, were among the original exemplars (Kantorowicz, 1957). The universitas is truly an ‘artificial person’ in the sense that Hobbes later appropriated for the secular state and is nowadays associated with androids, à la ‘artificial intelligence’.
What unites these materially quite different cases is a statement of founding principles, a ‘charter’ or ‘constitution’, that – in the manner of computer algorithms today – serves to assemble initially distinct agents into ‘parts’ that constitute a ‘whole’ whose purposes transcend the particular ends of those who happen to be its members at any given time. (One is reminded of the frontispiece of the first
edition of Hobbes’ Leviathan.) Unlike, say, a mutual aid society, the sort of military expeditions that fuelled the Crusades or even the joint-stock companies of early modern Europe, the universitas is not a local and temporary arrangement. Rather, it enjoys a life of its own, perhaps even in perpetuity. Thus, the current members of a universitas are disposable based on their functionality. In sociological terms, these functions – the ‘parts’ that constitute the ‘whole’ – are called ‘offices’, or simply ‘roles’, each of which is associated with a procedure for selecting and replacing individuals in a line of succession, say, via examination or election. This situation poses special challenges for the Humboldtian university, given, on the one hand, the indefinite need to reproduce the universitas, which is largely an administrative function and may involve the university responding symbiotically to other similarly positioned ‘corporate’ players (e.g., the state, business), and, on the other hand, the need to allow the academics and students who constitute the university at a given time the indefinite freedom to pursue inquiry. As far as Saint-Simon was concerned, Adam Smith half-understood what was needed to turn the market economy into an engine of human progress – if not human redemption. Smith realized that markets allowed people to freely pursue their talents in concert with their fellows without the artificial legal restrictions of birth rights and monopolies, which serve only to allow future action to be overdetermined by past settlements. Moreover, Smith was right to think that this liberalization would result in a self-organizing ‘division of labour’, whereby everyone plays to their strengths and is rewarded by others for doing just that. Such is the moral basis of the market economy, a side-effect of which is the increased prosperity of all. Thus, it is no accident that The Wealth of Nations was preceded by The Theory of Moral Sentiments. That’s the right way round to understand the moral argument for capitalism, as opposed to how Smith’s ‘free market’ defenders interpret him, which would have the tail of increased wealth wag the dog of mutual respect. Deirdre McCloskey (2006) is one free market economist who actually understands this point. However, Saint-Simon observed that Smith’s model does not automatically scale up. Historically, markets were centres of action that operated according to the needs of the communities housing them. They emerged ‘spontaneously’ as people traded their surpluses to satisfy their needs. However, markets were neither continuously operating nor coordinated across communities, which could prove problematic if the surpluses and/or needs exceeded local market capacities. This was the problem that Saint-Simon attempted to solve under the rubric of ‘socialism’. Indeed, by ‘socialism’ Saint-Simon meant at least corporate capitalism, perhaps also involving a corporate state. Instead of people spontaneously organizing themselves, they would be explicitly organized by a change agent – a ‘catalyst’, as chemists would say – into what came to be known as an ‘organization’, which is reasonably understood as an artificial organism. These catalysts are Saint-Simon’s ‘captains of industry’, the entrepreneurs of social innovation that we nowadays call ‘knowledge managers’ (Fuller, 2002: Chap. 3). Moreover, the science of chemistry itself is strongly implicated in these developments, well into the twentieth century.
Wilhelm Ostwald, who had discussed ‘organizing’ in the context of how scientists produce a chemical synthesis in the laboratory, also championed a second-order version of this
process in terms of the conduct of science itself (Fuller, 2016b). His French contemporary Pierre Duhem (1991) disparagingly dubbed this ‘German science’, but it came to be normalized as large-scale, team-based laboratory work led by a ‘principal investigator’ who organizes the talent such that the team’s scientific output is much greater than what any of them could have accomplished individually, aka ‘Big Science’ (Price, 1963). As I mentioned above, Saint-Simon spoke of humanity’s adversarial relationship to nature as requiring ‘exploitation’, a term that Marx later threw back at capitalists who, as he saw it, continued to exploit fellow humans just as they exploited nature: Both were rendered ‘capital’. Nevertheless, the desire to exploit human nature lingers – albeit in domesticated form – in the postwar welfare state extension of the idea of public goods from roads and utilities to health and education, notwithstanding the long history of private provision for the latter pair of services. The underlying intuition was rather different from, say, that of Bismarck, who saw public health and education primarily in national security terms. In contrast, welfare economists of the mid-twentieth century regarded them in the spirit of human capital development for purposes of increasing national productivity (‘GNP’). This was much more in Saint-Simon’s original spirit, whereby ‘peaceful’ economic competition would replace warfare. It also helps to explain the backlash that began in the 1960s with the University of Chicago economist Gary Becker (1964) and colleagues who became the vanguard of ‘neoliberalism’. They questioned whether the return on investment justified the elevation of health and education to public goods: Does state intervention add value beyond what would result if the market were left to sort it out? The same questioning was also extended to state provision of research – but less vociferously, since the ongoing Cold War was being fought largely on the battlefield of scientific and technological prowess. This last point is relevant to the US government’s increasingly critical posture toward Silicon Valley tech giants, whose internet-based social media platforms are founded on infrastructure originally provided by the US Defense Department in the Cold War as an alternative channel of communication in the event of a nuclear war (Fuller, 2020b). However, as throughout the modern period, once the exigencies of war were removed (and state debts started to be repaid), such ‘next generation’ innovation was redeployed to the private sector, resulting in a few enterprising individuals taking advantage of the new and potentially very lucrative markets that had been opened. They largely still dominate the market more than a quarter-century later. In the case of Silicon Valley, the Congressional hostility is interestingly bipartisan. Consider recent best-sellers by left-leaning Democratic Senator Amy Klobuchar (2021) and right-leaning Republican Senator Josh Hawley (2021), two Ivy League-trained lawyers who want ordinary Americans to understand the role of antitrust legislation in underwriting America’s legacy of liberty and productivity, in order to appreciate the injustice now being perpetrated by companies whose modus operandi they have come to take for granted in the name of convenience. Fuelling this animus is the sense that public goods have been effectively stolen, or at least that Silicon Valley’s stewardship of them is open to serious question (cf. Morozov, 2013).
The argument’s moral force is informed by the fact that the
internet had been manufactured to be a public good – albeit one designed in wartime for a post-apocalyptic society. In contrast, most public goods – from roads to radio – have come under state stewardship and provision after having had a patchwork history under private control. Nevertheless, the fundamental principle that animates the two paths to public goods is the same – and very much in the spirit of Saint-Simon: namely, that the full expression of human potential is arrested if historically based entitlements come to dominate our understanding of legitimate succession. In this regard, the economic sphere is no different from the political sphere: Monopolies are dynasties – and both require a counter-inductive remedy, be it antitrust legislation or periodic elections. Such events offer opportunities for people to reconsider lost futures that they might wish to recover. Here academic peer review appears regressive, as it aims to reinforce the trajectories of already established research, even in cases where novelty is admitted. More to the point, it goes against the spirit of the Humboldtian university’s emphasis on bringing frontline research to the classroom, specifically to inform students of how the future need not be like the past – and that one should understand the latest academic thinking even if one does not plan to contribute to it directly. Indeed, the Humboldtian university remains the most reliable site for converting new information into knowledge as a public good. But there remains the question of whether the university will stay true to Humboldt, if or when the state withdraws its support.
5.3 Saint-Simon Deconstructed and the Promise of Proudhon
In the late 1930s, lawyer-economist Ronald Coase, a product of both the London School of Economics and the University of Chicago, reinvented Saint-Simonianism for the emerging neoliberal worldview in the guise of the firm, a superordinate market agent that minimizes the transaction costs between first-order market agents by granting them privileged access to each other as incorporated parts of a single functioning unit (Coase, 1988: Chap. 2; cf. Birch, 2017a: Chap. 6). This arrangement minimizes potential ‘negative externalities’ from what would otherwise be the moment-to-moment misalignments of supply and demand in a collection of independent agents. These costs are ‘internalized’ into what is now a unified system’s normal operation. Thus, the concerns raised by Neurath in the socialist calculation debate about imperfect information flows and miscommunication in markets are met through the backdoor, rendering the firm in effect a protected free market. The European Union was originally conceptualized in that spirit. But equally, modern imperialism – with what Marxists call its ‘global division of labour’ – can be understood as the logic of the firm projected on the world stage. Indeed, this famously led Lenin (1948) to see imperialism as capitalism’s ultimate self-expression. He meant this as an indictment, but Saint-Simon might have taken it as a compliment – especially if he didn’t read the fine print.
However, Lenin’s was not the only understanding of capitalism’s relationship to imperialism in the early twentieth century. The alternative, due to Joseph Schumpeter, gets to the difficulty of capitalism achieving the sort of ‘corporate capitalism’ to which Saint-Simon aspired. For Schumpeter (1955), imperialism is simply the reinvention of feudalism in capitalist garb, as it tends to stabilise a ‘global division of labour’ on land-based terms (‘colonies’), which in turn become sources of rent and – as in historic feudalism – potential targets of military conquest. In the process, Saint-Simon’s much-vaunted evolutionary transition from military to commercial society gets undone. The problem here is not that a global division of labour per se exists but that its imperialist version tends to arrest the economy’s dynamism, creating artificial monopolies, over which wars can then be fought. After all, targets come more easily into view when their boundaries don’t change. In a similar vein, Schumpeter regarded the Marxist fixation on classes as an organizing principle of capitalism as a holdover from feudal estates. Collective mobilization based on class is likely to succeed only if the mode of production is subject to relatively little innovation, resulting in a static labour market. In short, the appeal of class consciousness is inversely related to the prospect of social mobility. This helps to explain the pattern of Marxist revolutions in the twentieth century, starting with Russia in 1917. They typically succeeded in societies that had yet to be fully industrialised – contrary to Marx’s own prediction, of course. The challenge facing latter-day Saint-Simonians is how to avoid the Marxist trap of presuming such a rigid conception of capitalism’s social relations as to ignore the inherent dynamism of capitalism’s mode of production. The general Saint-Simonian response is to conceptualise class in radically functionalist terms. In order to meet demand in response to changing market conditions, firms must be able to regularly alter their internal division of labour: Not only role-players but the roles themselves must change. This means that any grouping of workers is always expedient relative to a job for only as long as it needs to be done: Teams must easily assemble, disassemble and re-assemble. Like Ricardo, Saint-Simon – who popularised the use of ‘industrious’ to refer to a personal quality – understood the increased productivity of labour as being opposed to the cultivation of a line of work for its own sake, as in such residually medieval institutions as trade unions or professions. Indeed, knowledge management is arguably the only ‘profession’ allowed in Saint-Simon’s world, in which universities would effectively become high-toned business schools. Even so, how can a firm retain its integrity if its development is likely to require so much external and internal change? Before the existence of the firm, another version of the universitas faced a comparable problem on an arguably larger stage: the state. Recall that natural law theory, the discursive context in which the universitas was constructed in the Middle Ages, presupposed a cosmos governed as a ‘divine corporation’, which amounted to a theologised understanding of ecology (Schneewind, 1984). On this view, nature is the outworking of divine intelligence in matter, such that life is tantamount to the expression of mind.
(The intuitiveness with which we accept the idea of life as the product of a ‘genetic code’ may be a secular downstream effect of this sensibility.) From that standpoint, Saint-Simon’s innovation amounts to asserting that God has
delegated to humans the right to operate in a similar fashion, but now within – rather than from outside – nature. In this way, the imago dei doctrine of the Bible morphed into political economy’s principal-agent theory. An early adopter of this idea turns out to have been John Stuart Mill, who made a semi-serious case for the existence of a ‘limited liability God’ (Copp, 2011). But before any of that happened, Thomas Aquinas had already departed from Aristotle in envisaging the political order not as a spontaneous feature of human nature (i.e., zoon politikon) but as a second-order extension, what today we might call an ‘extended phenotype’ or ‘superorganism’ (Fuller, 2016b). To be sure, these latter-day conceptions reflect the influence of evolutionary thinking that was unavailable to Aquinas. Indeed, Aquinas thought of the ‘state’ in biblical terms as imposing an artificial constraint on humans who might otherwise regress to their fallen condition. Moreover, his fixation on this superordinate entity as a ‘state’ (status) suggests something capable of maintaining what biologists call ‘homeostasis’, which entails a standard of equilibrium between the organism and the environment, as well as a means to restore equilibrium when necessary (Kantorowicz, 1957: Chaps. 5 and 6). This helps to explain, for example, the right of slaves to revolt against their masters if they are mistreated. Such mistreatment, while not providing grounds for eliminating the master-slave relationship altogether, does presuppose a ‘state’, which combines the sacred and secular features of natural law to redress such injustices, thereby restoring a ‘natural’ order in which masters and slaves deal with each other with the dignity accorded to their respective natures. This vision of the divine corporation held sway well into the eighteenth century, the last great expression of which was probably Carolus Linnaeus’ theory of ‘natural economy’, out of which came the two-name basis for the classification of animals, plants and rocks that remains in modified use today (Koerner, 1999). However, that same century witnessed the rise of an ‘epigenetic’ approach to life that gradually eroded the intuition that had grounded Aquinas’ conception of the state – namely, a static nature that is front-loaded towards the perpetuation of previously expressed traits, aka hereditary entitlements. This shift away from ‘inheritance’ in the broadest sense began in studies of cell development in the embryo, as scientists moved from thinking of the nascent life-form as ‘preformed’ to ‘pluripotential’ – which is to say, capable of various ultimate expressions, depending on the embryo’s developmental context. This provided a metaphorical basis for people to claim that their ‘potential’ was being held back by adverse conditions that impeded their ‘free development’, which soon became a rallying cry for modern revolutionary politics. Two centuries later, Gilles Deleuze and Felix Guattari (1977) would retrace this turn to the original disputes over epigenesis, in which they famously adapted Antonin Artaud’s phrase ‘body without organs’ for the pluripotential embryo – what is nowadays usually called a ‘stem cell’. And roughly halfway between the rise of epigenesis and Deleuze’s ‘bodies without organs’ stands Proudhon, who regarded each individual as an inherently pluripotential entity open to a variety of social arrangements, all of which might be of potential benefit to humanity as a whole.
French sociologist Daniel Colson (2019) takes this position to be the metaphysical foundation of the various movements that
have styled themselves ‘anarchist’ over the past two hundred years. For our purposes, Proudhon’s sense of ‘socialism’ as a kind of spontaneously organized capitalism interestingly contrasts with Saint-Simon’s more hierarchical conception. Proudhon starts by suspecting the very concept of property as both masking the collective nature of any human achievement and stifling the inherent dynamism of the modern economy. Indeed, he regarded private property as a feudal residue in modern capitalism, whose risk-seeking tendencies he admired as the fount of innovation. It is thus easy to see why Proudhon instinctively opposed treating the range of entities normally covered under intellectual property law as ‘property’ in any strict sense that might accrue monopoly benefits to innovators (Guichardaz, 2020). More specifically, he accepted Adam Smith’s premise that people are ‘natural equals’ at least in terms of not being able to do everything for themselves, which in turn encourages a commercial culture based on mutual recognition (McCloskey, 2006). It follows that the designated ‘innovator’ is simply the person who configured many talents distributed across the population and consolidated them in an invention that then employed the efforts of other people to transform society as a whole. The innovator’s ‘originality’ lies simply in having catalysed this chain of configurations, which increasingly extend beyond what the innovator could reasonably anticipate. For Proudhon, two countervailing points follow from the above, which should be read as the broad outlines of a ‘natural history of innovation’. On the one hand, the sheer publicity of the innovation removes any sense of the innovator’s proprietary ownership, since others may end up employing the invention in ways that make it more beneficial, if not more profitable, than what the innovator could have done or even envisaged. We might think of this point as the technological antecedent of the late twentieth-century ‘death of the author’ thesis promulgated by Barthes, Foucault, Derrida and other ‘post-structuralist’ thinkers, who stressed the extent to which the meaning and significance of a text always escapes and exceeds its author’s intentions. On the other hand, it is certainly true that each innovator provides ‘added value’ as the catalyst that consolidates an invention. In that respect, innovators deserve some compensation for their work. Proudhon proposed that the state should provide both authors and inventors a ‘salary’. Proudhon’s proposal was clearer in its conception than in its implementation. Nevertheless, the guiding intuition was that anyone could be an innovator, but the success of an innovation depends on everyone else. An innovation requiring major effort may generate little effect, whereas one involving trivial effort may turn out to be very profitable – in both cases, for reasons that are largely out of the innovator’s control. A salary would thus strike a middle ground between no compensation and a permanent entitlement to the innovation. Here it is worth recalling that the distinction between copyright and patent as forms of intellectual property had yet to be clearly drawn in the early nineteenth century. Indeed, at the 1858 International Congress on Literary and Artistic Property in Brussels, authors and inventors were treated as one regarding the central question of whether their rights should be specifically understood as property rights (Machlup, 1950).
An influential figure in this context – and an influence on Proudhon – was the lawyer Charles Renouard, who recognized that ‘property’ entails much greater legal protection than ‘right’. A right is simply a permission to access, whereas property implies
restriction of access as well. Renouard concluded that authors and inventors require a temporary property right over their innovations not to reward them properly for their ‘genius’, but to ensure that their having been first doesn’t turn into a disadvantage. Precisely because others could have arrived at the same innovation at roughly the same time, they would probably appropriate many if not all of its benefits, if not explicitly prevented, once the innovation was made public, thereby placing an ironic spin on Proudhon’s notorious slogan, ‘Property is theft!’ Thus, for a limited period, any such would-be usurper would require permission from the actual innovator (Haynes, 2010: 87–89). While many legal and philosophical justifications have been given over the years for extending or restricting intellectual property rights, Renouard’s has remained the grounding intuition for assigning them in the first place. It presumes a preference for ‘right’ over ‘property’ as the relevant legal category governing innovation, which is suited to a process that everyone agrees is subject to profound contingency, in terms of both origin and uptake. One might even go a step further and suggest that on this Proudhonian view, ‘property’ is subsumed under ‘right’, insofar as one’s entitlement to ownership is ultimately predicated on one’s ability to make productive use of the object in question, which is to say, it is never a settled matter. In principle, everything is collectively owned, but in practice the ‘true’ owners are the producers, not the inheritors. We shall explore this point in more detail below under the guise of ‘Academic Georgism’.
5.4 Towards a Truly Open Science and the Prospect of Academic Georgism
The ‘open science’ movement began when academics started to complain about the potential lack of access to the journals where they have published or might want to publish. The loudest complaints came from universities that could afford to strike deals with publishers to enable ‘open access’. From a classical Marxist standpoint, the ‘open science’ movement looks very bourgeois, given the ease with which its complaints can be resolved by monetary payment. Moreover, this settlement has been legitimized – and even standardized – by public funders requiring that knowledge published in academic journals be freely available to anyone. At the same time, many researchers are excluded from such arrangements, perhaps due to their universities’ lack of funds or simply to their own lack of a university affiliation. They can’t enter what is de facto a ‘protected market of open science’, and hence can’t turn the knowledge it contains to their own advantage, let alone alter its market dynamics substantially. Thus, ‘open science’ may not be as open as one might wish (Fuller, 2016a: 67–71). Yet, there are also two other senses of ‘openness’ – already exemplified by the internet – that have the potential to reorganize the political economy of open science. They pertain to freer entry and freer use. The internet promotes freer entry by not imposing an initial intellectual or financial toll on the user. Closely related to
that fact is the relatively free rein that users are given in how they operate in the virtual knowledge environment. In contrast, the protected market of open science is primarily aimed at making conventional academic knowledge transactions (i.e., journal communications) as frictionless and their results as transparent as possible. In this respect, the open science movement may be seen as a reinstatement of the Charter of the Royal Society on a digital platform. One consequence of such academic protectionism is that knowledge producers are valuable simply by virtue of being part of a protected market consisting of those universities that subscribe to the journals in which they publish. This basic fact is often obscured by the ideology of ‘peer review’, which legitimizes academic protectionism and creates a halo effect around its public face, ‘expertise’. But as a form of political economy, it amounts to rentiership, the bane of both David Ricardo and Karl Marx. They agreed that value is intrinsic neither to nature nor even to property, which was too often inherited as a ‘second nature’, accruing to its possessors a merit that they do not deserve. For Ricardo and Marx, value must be earned through the application of labour. Of course, whereas capitalists hold that a free market would incentivize property owners to make investments that create opportunities for labour rather than simply collect rents, socialists call for stronger, state-based measures, including taxation and other more proactive forms of wealth redistribution. As we saw earlier in this chapter, the US Progressive Era was distinctive in turning the anti-rentier mentality uniting Ricardo and Marx into the liberal interventionist state, which made it its business to turn capital bottlenecks into free markets through various legal instruments. The talismanic intellectual figurehead of this movement was Henry George, the political economist and author of the worldwide best-seller Progress and Poverty (1879), who argued that the only legitimate tax was on land whose owners generate wealth merely by renting it out to others who then do the real work of ‘improving’ the land by either developing or conserving it. The Financial Times’ chief economics columnist, Martin Wolf (2023), has recently recast George’s thesis for its readership as a moral obligation to tax resources whose supply avoids the price mechanism, which is capitalism’s default dispenser of justice. Moreover, failure to tax such resources means that what remains of the potential tax base for redistribution – earned wealth – results in the rich bribing the poor to make themselves richer without enabling the poor to improve their lot. It’s effectively a policy of permanent handicapping (Samuels, 1992: Chap. 5). In this respect, Georgism simply recovers the sense of ‘natural justice’ that first motivated Adam Smith and the Marquis de Condorcet to propose the modern market-based economy (Rothschild, 2001). And while George followed Ricardo in speaking of land rent as capitalism’s original sin, ‘land’ is a proxy for any unilaterally protected market that impedes capital circulation. Accordingly, ‘Academic Georgism’ would target the various barriers to free entry and free use of academic knowledge. Such a policy would have interesting and potentially radical implications for the assignment of credit in academia.
After all, among the most highly cited people in academic publications are those who are dead or, if not dead, long ago ceased to research in the areas for which they are cited. This helps to explain the highly
stratified nature of academic citation patterns, whereby the citation-rich get richer and the citation-poor get poorer, nowadays as measured by the notorious ‘H-index’ (Fuller, 2018c). Such dead and undead folks are effectively reaping indefinite dividends in areas to which they themselves no longer add value. From a Georgist standpoint, their trademarks should be assigned the status of mere labels – or perhaps, where appropriate, stylistic inflections of public life. It amounts to the most consistent albeit ironic gloss on Proudhon’s slogan, ‘Property is theft!’ After all, a thief usually feels under a great obligation – perhaps more than the person from whom she stole the item – to protect and transform it, so that others recognize it as the thief’s own. The Yale literary critic Harold Bloom (1973) famously wrote about this as the ‘anxiety of influence’ that poets feel towards their predecessors, from whom they creatively plagiarise and pray not to get caught – or at least, if caught, to be seen as having improved on the original (cf. Fuller, 2020a: Chap. 10). A good way to appreciate the ever-present threat of rentiership to academic knowledge production is to recognize that knowledge is not by nature a public good but must be made one (Fuller, 2002). Understood as a product of research, genuinely innovative knowledge – that is, a discovery or an invention – is born elite, in the sense of being ‘ahead of the pack’. It is at first known by few people, who in turn potentially acquire some sort of advantage vis-à-vis fellow researchers, if not society at large. In this respect, all knowledge begins as what economists call a positional good, namely, its value is tied directly to its scarcity (Hirsch, 1976). This initial condition makes knowledge ripe for rent. For knowledge to become a public good, more people need to learn about, use and develop it, without the value of the knowledge diminishing – and ideally, with its value enhanced. Humboldt’s great innovation was to harness cutting-edge research to the reproduction of society’s authority structure, so as to convert the universities from conservers of tradition – that is, guardians of rentiership – to dynamos of social change. Humboldt’s renovated academic was to present the path of inquiry not as largely charted but as fundamentally open, with the expectation that the next generation will not build upon but overturn the achievements of the past. As we have seen, Max Weber’s famous 1918 speech to new graduate students, ‘Science as a Vocation’, provides a deep and eloquent statement of what that expectation entails, both personally and institutionally. I have characterized the conversion of knowledge from positional to public goods in terms of the ‘creative destruction of knowledge as social capital’ (e.g., Fuller, 2002: Chap. 1; Fuller, 2016a: Chap. 1). By that phrase I mean the multiple roles that Humboldt-style teaching plays in undermining the competitive advantage – and hence the potential for rentiership – that is inherent in cutting-edge research. These roles include the presentation of difficult ideas in more ordinary terms, as well as streamlining the learning process so that students do not need to recapitulate the entire history of a field before they feel themselves, or are deemed, capable of contributing to it.
‘Philosophy’ started to name an academic discipline in this context, specifically to level the epistemic playing field by forcing even the most established knowledge claims to be justified from first principles that anyone could understand. ‘Teaching’ in this Humboldtian sense – the transmission of what is already known to those who don’t yet know it – is arguably the most efficient and
reliable path to fostering an innovative spirit in society at large because it demystifies the ‘genius’ aspect of previous innovations. A claim to genius tends to polarize the emotional horizons of audiences: It can either challenge or inhibit any aspiring innovators. The way to understand university ‘teaching’ as producing knowledge as public goods is not as technical training but as anti-technical training. Put provocatively, the goal of teaching is the very opposite of the faithful transmission of knowledge to the student as the teacher learned it. To assume otherwise would be to suggest that restricted access is inherent to the nature of knowledge. Knowledge can be manufactured as a public good only if students are enabled to acquire the ‘same’ knowledge by starting from their own premises. This implies a specific approach to teaching. First, teachers need to have a clear sense of what is and is not essential in the knowledge they wish to convey to students. Teachers should assume that much of how they themselves have come to know something is simply an accident of their biography – that is, the fact that they went to certain schools, read certain books, had certain teachers. (This is what the logical positivists and the Popperians meant by the ‘context of discovery’.) Those aspects of the innovators’ background do not require reproduction. But equally important, teachers need to have a reasonable sense of what students already know, as that will be the basis on which they can understand the relevant sense of novelty that is presented in what they are taught. The result is bound to be a form of knowledge that is sometimes dismissed as ‘popularisation’. Nevertheless, it is the sort of knowledge that can be genuinely counted as a ‘public good’, simply because excessive access costs to the knowledge have been stripped away. However, the contemporary world poses institutional obstacles to turning knowledge into a public good; together these obstacles effectively subordinate teaching to research, instead of treating them as equal and complementary university functions. Two such barriers stand out: academic writing conventions and intellectual property legislation. Each in its own way imposes additional restrictions on access to knowledge beyond the bare fact that new knowledge is at first available only to a few. Such restrictions enable knowledge to become a form of ‘social capital’ that accrues power to those able to pay the entry costs to gain access. This is close to the sense of ‘knowledge as power’ that the Protestant Reformers found corrupting of the Roman Catholic Church. Nevertheless, it is fair to say that Plato and even his latter-day followers such as Auguste Comte would have been pleased, since knowledge-based power is arguably the least violent way to impose social order (Fuller, 2006a: Chap. 4). Of course, I refer here to the technical training required to be deemed an ‘expert’ with ‘credentials’, the ultimate expression of research-based knowledge. This in turn breeds a complementary culture of ‘trust’ and ‘deference’ on the part of those who lack the relevant credentials. The result is a very explicit sense of social hierarchy. Although teaching was hardly mentioned at all in my first book, Social Epistemology (Fuller, 1988), the above discussion of teaching – which has figured increasingly in my work in subsequent years – is inspired by my long-standing interest in translation, the subject of Part Two of that original book. A key distinction
recognized in translation theory, due to the Bible translator Eugene Nida, is between formal equivalence and dynamic equivalence. In Social Epistemology, I observed that it corresponds to the difference between exegesis and hermeneutics of the Bible: The former tends to construct the Bible’s authority by distancing the work from its readers, while the latter constructs Biblical authority based on its continuing relevance to people’s lives. The distinction can be equally cast in terms of education. Teaching in the spirit of a formally equivalent translation aims to get students to see the world as the teacher sees it. This is the essence of technical training, which – as Thomas Kuhn rightly stressed – is the sense in which a scientific paradigm imposes a worldview on practising scientists. Thus, someone who trains to be a physicist implicitly agrees to adopt the cognitive framework of, say, Newton or Einstein, as mediated by the instructor. But this arrangement renders knowledge a positional good, not a public good. Notwithstanding the significance increasingly accorded to the Ph.D. in the modern university, if the university is to remain faithful to the Humboldtian spirit, it should be the natural enemy of this approach. In Thomas Kuhn: A Philosophical History for Our Times, I made this point in terms of Ernst Mach’s vehement opposition to Max Planck’s proto-Kuhnian approach to the teaching of science in the early twentieth century (Fuller, 2000b: Chap. 2). Here I follow Mach in relating education to translation as dynamic equivalence. Thus, the teacher aims to get the students to understand the significance of a piece of knowledge on their own terms, which means that the teacher must recast it in a way that directly addresses the students’ worldview, even if it turns out to challenge, test or critique the students’ beliefs. The measure of successful teaching in this context is that students come to see the knowledge that they acquire as relevant to their normal lives, rather than as a self-contained expertise that is relevant only in certain ‘professional’ settings, in which they exert power over others. Education in the dynamic equivalence mode is clearly difficult for teachers whose identity is tied to the way they learned the content they wish to impart. This burden perhaps weighs more heavily on teachers with degrees from elite institutions, which have left them thinking that their training provides them a unique position from which to pursue knowledge. And this is certainly true, sociologically speaking, given the ease with which they can secure attractive posts, funding and recognition more generally. However, it is much less obviously true in strictly intellectual terms. In any case, a sign of one’s intellectual maturity lies in the ability to release one’s knowledge from the context in which one acquired it. The act of teaching is an opportunity for teachers to reacquaint themselves with what they were taught, if only to remain competitive with the students to whom they have imparted knowledge. All this activity ‘translates’ knowledge in the sense of adding meaning to it through recontextualization. In this respect, grammar – understood as the laws that modern states have drafted to regulate language flow within their borders – enables language to be turned into an object of rentiership, as barriers are placed on what counts as adequate transmission in what would otherwise be the free exchange of thought and expression. It was in that spirit that the famous Italian
saying, ‘Traduttore, traditore’ (‘To translate is to betray’) was coined in the fourteenth century to characterize the French translators of Dante’s Divine Comedy, arguably the first classic work in the newly emerging Italian language. It was as if the French had stolen something belonging to the Italians by rendering it in their own language. However, Chomsky (1971), invoking Humboldt, observed that the very possibility of such grammatical boundaries – albeit artificial – between languages implies the existence of a ‘universal grammar’ innately possessed by all humans, which is the mental equivalent of commonly owned property. From that standpoint, translation is less a transgression of the source language than a reappropriation that adds value to the original text precisely by being rendered in the target language. Indeed, history is full of cases in which a text that was relatively ignored in its original language became a gamechanger for readers in the language into which it was translated – often due to the translator’s spin. More speculatively, the filial relations among languages suggest that ‘multilingualism’ might be better seen as polyvalent monolingualism. This was the spirit in which students of language from the mid-eighteenth to the mid-twentieth centuries, across the humanities and nascent social sciences, took something called ‘mythology’ to have revealed our species-level capacity for the articulated arrangement of words to establish group identity, solidarity and mutual recognition among groups over great expanses of space and time. Among the most notable of these scholars of mythology were Max Müller, James George Frazer and Claude Lévi-Strauss, all of whom believed that the repeated storylines across the world’s mythologies reflected a relatively small number of ultimate source languages (Ursprachen), perhaps even resolvable into one, at which point Chomsky’s universal grammar comes into the frame. This overall view led the great nineteenth-century US Transcendentalist thinker Ralph Waldo Emerson to characterize language as ‘fossil poetry’, whereby ‘poetry’ literally referred to language’s productive (poiesis) capacity to conjure up worlds, à la mythology. Any rebooting of the university’s Humboldtian legacy needs to take this profoundly emancipatory conception of language into account. In conclusion, let me return to my original point that knowledge needs to be manufactured as a public good, and that this can only happen through the ‘creative destruction of social capital’. I have increasingly characterised university tendencies to privilege research over teaching – and hence reduce teaching to technical training – as intellectual ‘rent-seeking’. One of the few things on which capitalist and socialist economists agree is that they abhor rent as a source of value, mainly because it relies on what they regard as an artificial restriction of access to capital. ‘Rentiers’, in Marx’s memorable term, literally try to capitalize on sheer ownership, even though the fact that someone owns something valuable now should not by itself be the source of future value; rather, ownership should be treated simply as the basis on which new value is generated through productive labour. To conclude otherwise is to mystify the original acquisition process by which someone comes to own something. Yet, when scientific paradigms are named after, say, Newton,
Darwin or Einstein, origins are effectively mystified. Newton did not merely arrive at an impressive number of truths about the physical universe. His methods for arriving at those truths became the template for physical inquiry, arguably to this day. This last point recalls what economists call ‘path dependency’ in the history of technology, which means the tendency for one innovation to become so entrenched that all competitors must play by the rules of its game. Thus, when Henry Ford shifted the paradigm of personal transport from the horse to the automobile, future competitors aimed to build a better car, not a better horse. However, as we have seen, the intellectual trajectory that flows from the Protestant Reformation to the scientific method has been rightly hostile to the narrowing of horizons that this tendency involves. Thus, instead of grounding the legitimacy of the Christian message on the dynastic succession of the Popes from St Peter onward, the Protestants favoured the approach of St Paul, who spread Christianity to many audiences in a more direct and even customized manner. But arguably, path dependency is a bigger problem in science today. To be sure, it captures well the emergence and evolution of a Kuhnian scientific paradigm, as each new generation of scientists is compelled to ‘stand on the shoulders of giants’, to recall Newton’s characterisation of his own achievement. Evidence of this may be found in academic citation practice, whereby it is near impossible to establish one’s own claims to knowledge without referencing specific precursors, on whose work one then claims to build. But interestingly, science’s instinctive Protestant tendencies have pushed back against path dependency. The clearest expression of this hostility is the insistence on a sharp distinction between the context of discovery and the context of justification. This distinction, associated in my student years with the logical positivists and the Popperians, has a long prehistory. It is rooted in the intuition that if science aspires to be a ‘universal’ form of knowledge, then its knowledge must be both universally valid and universally available. It follows that the originator of any knowledge claim is no more than a historical accident, from an epistemological standpoint. Popper sometimes even said that the context of discovery was ‘irrational’. What he meant was that we should not fetishize the origins of knowledge claims because anyone else could have come up with them, under the right conditions – assuming that they are indeed valid knowledge claims. This in turn makes the epistemological task inherently social, in a way that is related to the role of teaching in the Humboldtian university, which is ultimately about demystifying the origins of knowledge to enable everyone to use that knowledge to pursue their own future. The question that remains, of course, is whether this ‘empowering’ view of knowledge, which has been central to my version of social epistemology, can be pursued in a way that in the end retains a common understanding of ‘society’ sufficient to sustain knowledge as a public good. I shall pick up this thread again in the next chapter, but I shall now round out the current discussion with a deeper look at the relevance of Popperian philosophy to open science.
5.5 Utopian Knowledge Capitalism as Philosophy of Science: Revisiting the Popperians
Although in many respects Saint-Simon and Proudhon presented opposing – perhaps even diametrically opposed – visions of socialism, nevertheless both wanted to maintain the dynamic character of the capitalist system, which is associated with entrepreneurship’s innovative brand of risk-taking. Here they were joined as one against the Marxists, who tended to regard that signature feature of capitalism as the harbinger of its ultimate self-destruction. The spirit in the philosophy of science that corresponds to this anti-Marxist stance is hostility to ‘induction’ as a form of inference in the broadest sense, whereby the future is presumed to be overdetermined by the past. To be sure, induction is inherent to our default understanding of reality, in the sense that we recognize change over time only relative to invariance. However, as Karl Popper famously insisted, the progressive nature of scientific inquiry depends on our suspending that intuition by systematically disturbing the background condition of invariance that ‘we’ take for granted. And by ‘we’, Popper included the scientific community, especially in its institutional understanding that was made popular in the 1960s by Thomas Kuhn. In this context, the experimental method functions as the critical foil, as epitomized in the ‘null hypothesis’, whereby the default expectation is pitted against a rival newcomer who proposes an alternative future – at least in terms of specific predictions – based on a common past. The source of Popper’s instinctive opposition to inductive reasoning lay in his original training in Gestalt psychology under Karl Bühler, Freud’s great Viennese rival in the 1920s. It played a subtle but significant role that continued to influence the philosophy of science for the rest of the twentieth century. Key to this influence was the famed ‘Gestalt switch’, an experimental effect whereby an ambiguous figure can be interpreted in two radically different ways, depending on how the experimenter contextualises the figure for the subject. What Popper called a ‘crucial experiment’ potentially functioned in this capacity if the newcomer hypothesis proved to be correct, thereby forcing scientists to reconsider whether their default understanding of the phenomenon in question had been flawed all along at some deeper level. To put it in Gestalt switch terms, if what you originally saw as a ‘rabbit’ suddenly appears to be a ‘duck’ because the background conditions of the ambiguous figure have changed, then was the figure a duck all along? For Popper’s great opponent Kuhn, this characterised a paradigm in ‘crisis’ mode, the prelude to a ‘scientific revolution’, a Gestalt switch on the level of religious conversion (Fuller, 2015: Chap. 4). A striking feature of the Gestalt switch, which emboldened Popper but disturbed Kuhn, is the epistemic efficiency with which a radical change in understanding is brought about. After all, a scientific revolution – just like the shift from the ‘rabbit’ to the ‘duck’ interpretation – does not generally involve replacing all the old data with new data. On the contrary, it involves reorganizing the old data in light of some strategically generated new data (e.g., from a crucial experiment) so as to give the
totality of data an entirely new look. And this ‘new look’ may extend retrospectively to reinterpreting what past scientists had ‘really’ been talking about. On this basis, Popper’s most sophisticated follower, Imre Lakatos, saw the very idea that our past knowledge automatically carries over into the future – the presumption of induction – as a failure to explore rival hypotheses. For him, a Kuhnian paradigm, whereby the dominance of one theoretical horizon restricts the range of scientifically viable alternatives, operates more like a closed shop than an open market. In response, Lakatos (1978) advanced a ‘rationally reconstructed’ understanding of the history of science, which explored how science could have developed more efficiently even within its historical constraints. Some Lakatosians went on to promote Bayesian inference, which depicts scientific progress as occurring through successive rounds of competing hypotheses responding to common evidentiary challenges (Howson & Urbach, 1989). The epistemological motivation here is like what animated the Progressive reformers of the early twentieth century discussed earlier. Both take the sheer fact of primacy – that a piece of land is owned by descendants of its first owners, that an invention was patented or a discovery made by a particular individual or, indeed, that a field of science follows a paradigm based on some foundational achievement – to be ‘contingent’ in the strong sense that, under slightly different conditions, the originating party could well have been different and the course of events would have taken a different path, opening up different opportunities and developments. Cognitive psychologists nowadays encapsulate this entire range of phenomena – when it occurs in the mind of a single person – as ‘confirmation bias’ (Fuller, 2015: Chap. 1). It would seem to follow, as a matter of ‘progressive’ science policy, that one should project that sense of contingency into the future, what Popper (1957) himself called ‘reversibility’. His model was regular democratic elections, whereby voters are provided the opportunity to change the party of government regardless of whether things are going well or poorly. This is the exact opposite of the implied policy of his rival Kuhn, who held that the epistemic strength of a science is that the paradigm dominating a field’s research agenda is given an exclusive right to fail on its own terms – that is, it must persistently fail to solve problems it had set for itself, before the sort of fundamental alternative research trajectories that might result in a ‘scientific revolution’ are licensed. As the economists today would say, Kuhnian paradigms enjoy a license to run ‘path dependency’ into the ground. One hundred and fifty years ago, Renouard and Proudhon would have simply accused Kuhn of encouraging the scientific community to treat its domain of inquiry literally as ‘intellectual property’. Underlying the liberal and even proactive attitude to scientists fundamentally changing their minds that one finds in Popper and his students is a belief that our relationship to reality is sufficiently secure that it can tolerate a variety of ways of accessing and utilizing it, including ones that at a given time might be deemed ‘unfair’, ‘unjustified’, ‘immoral’ or even ‘untrue’. Such was the hidden lesson of Galileo’s ‘success’ as told by Popper’s most radical follower, Paul Feyerabend.
According to Feyerabend (1975), Galileo fabricated his most famous experiments and could not explain the optics behind his own makeshift (‘pimped’, in today’s
slang) telescope. In that respect, the Papal Inquisition was right to prosecute Galileo – and nowadays he would be pronounced guilty of ‘research fraud’. Yet, of course, the subsequent history of science proved Galileo to have been largely correct, even though his beliefs were not justified at the time he presented them (Fuller, 2007a: Chap. 5). On this basis, Feyerabend notoriously proposed ‘methodological anarchism’ as a principle of science policy, contrary to the position held by most philosophers of science, including arguably Popper himself. In short, for Feyerabend there is no royal road to the truth. Sometimes a vivid imagination and persuasive rhetoric can do the work of rigorous methodology, especially when – as in Galileo’s case – it inspires others to make up the probative difference. Feyerabend’s position has proved perplexing for philosophers because it involves a head-on collision between the ethics and the epistemology of knowledge production. After all, a cynical lesson that might be taken from Feyerabend’s Galileo tale is that deception, perhaps even self-deception, may be a surer route to the truth than proceeding in a more ‘truthful’ manner. This goes against the general tendency since the end of the Second World War to tie truth-seeking to moral scruples. This turn to ‘research ethics’ began in the wake of the 1946 Nuremberg Trial, when scientists adopted the convention of not citing the results of Nazi research on human subjects, due to the unethical character of its conduct. This mentality was extended over subsequent decades, such that nowadays publication in scientific journals may be prohibited or retracted – and authors may be ostracised from the scientific community – under a variety of morally relevant conditions. These include failure to declare background financial and political interests supporting one’s research, failure to secure proper consent from subjects, failure to acknowledge properly the sources of one’s insights and failure to represent properly the means by which one’s findings were obtained (Schrag, 2010). These potential breaches of research ethics reflect two countervailing features of the contemporary research environment. The first is a preoccupation with setting limits on the making of knowledge claims: Researchers must not overstate their achievements. The need to declare interests and secure consent functions as a ‘handicap’ in a research environment that is presumed to be intensely competitive and where certain players, due to their initial financial and/or political advantage, might otherwise be granted too much authority. Put cynically, the researcher is procedurally inhibited from taking arguably the most direct path to a desired outcome – say, by acquiring all the money and all the power needed to impose it on the research community. The second feature, which cuts against the first, is that the knowledge claims proposed by researchers are ultimately judged in terms of their face validity: Is the experiment or field work described in a way that leads one to believe that it took place, and are the outcomes plausible, both in themselves and in the conclusions drawn from them? Read against the backdrop of the increasingly formulaic presentation of scientific research, such criteria invite both plagiarism and outright fabrication.
The only real question here is whether there really has been an increase in research fraud or simply more of it has been caught because the stakes – often financial – motivate peer reviewers to dig more deeply into the details of the research than they would have in the past.
Given these countervailing features of the contemporary research environment, one might reasonably conclude that the drive to place ‘ethics’ at the heart of research practice is bound to fail. However, I don’t wish to decide the matter here. Instead I would simply remind readers that the philosophy of science has long included a tradition of thought that is studiously indifferent to the truth of fundamental theories and even the truthfulness of the people proposing them: instrumentalism. For the instrumentalist, a theory’s scientific significance is simply the span of control that it allows over a target range of phenomena. The more one controls, the more one knows: knowledge is power. To be sure, the instrumentalist understands ‘knowledge’ in terms of savoir (‘knowing how’) rather than connaissance (‘knowing that’). Nevertheless, for our purposes, what matters is that the scientist need not believe the theories she proposes, and the theories themselves need not be true; indeed, she need not even care. At most, so says the instrumentalist, the scientist needs to act ‘as if’ her theories were true. Perhaps the moral attitude most closely aligned with instrumentalism is hypocrisy.

Not surprisingly perhaps, instrumentalism has always been regarded with some suspicion. Consider the two leading philosopher-scientists promoting instrumentalism in the early twentieth century: Ernst Mach and Pierre Duhem. The former would arguably reduce science to technology, and the latter would subordinate it to theology. In both cases, science would be pursued as a means to some ‘higher’ end, not as an end in itself. Put bluntly, ‘the end justifies the means’, in which the conduct of science itself constitutes the ‘means’. Duhem favoured instrumentalism because it prevented science from overtaking theology in terms of setting humanity’s metaphysical horizon, whereas Mach favoured the same doctrine because it prevented science from ossifying into a secular theology that would then be imposed as society’s metaphysical horizon. (The latter explains Lenin’s demonization of the ‘Machists’, since Lenin wanted to impose Marxism in just such a fashion.) Philosophers of science have tended to ignore these countervailing reasons for embracing instrumentalism because they characteristically fixate on the similarity in content of what Mach and Duhem endorsed at the expense of their radically different political motivations: Mach was a liberal parliamentarian, Duhem a Catholic restorationist.

Interestingly, Popper and his followers – including Lakatos and Feyerabend – have also been accused of scientific instrumentalism, yet they have vehemently denied it – and with considerable justification (Fuller, 2003: Chap. 4). Yet the suspicion lingers. The reason, I suggest, is that the Popperians treat ‘science’ in the same spirit as both Saint-Simon and Proudhon treated ‘socialism’, namely, as the material culmination of humanity’s spiritual journey. The charge of ‘instrumentalism’ functions like the charge of ‘capitalism’ that Marxists made against these so-called ‘utopian socialists’. In both cases, the charge boils down to presuming that the material world is much more fluid, pliable and biddable than it really is. But contra the naysayers, under the right conditions and with minimal effort, one can take or leave fundamental theories or social structures in the name of ‘progress’. Revolutions are ultimately about rearranging the parts of already existing wholes to meet new challenges. They are Gestalt switches.
The idea that the future will significantly repeat the past – the intuitive basis for induction – presumes that memory is
saved rather than lost over time. But this is false to human psychology. In fact, memory needs to be actively maintained so as not to be lost. This explains the significance of both education and propaganda as institutions dedicated to the reinforcement of collective memory, as well as the profound and often subtle role that generational change has played in the reconstitution of collective memory. Once again, the economists got the measure of the situation by reducing the intuitiveness of induction to a ‘sunk cost’ – a ‘that was then, this is now’ attitude – which should not inhibit a change of course in the future (Steele, 1996). In short, there is always everything to play for.

Pace Fredric Jameson and Slavoj Žižek, it is only Marxists who find it easier to imagine the end of the world than the end of capitalism. They can’t get their heads around the true nature of capitalism. For the past two decades, this has been the hidden message of Philip Mirowski’s relentless critique of neoliberalism, which has unwittingly done more to undermine his fellow critics than neoliberalism itself. The increasingly baroque appeals to ontology – nowadays called ‘critical realism’ – that have characterised accounts of capitalism inspired by ‘Western Marxism’ for the past six decades have amounted to a reification of Marxists’ own ignorance of capitalism’s workings, which neoliberalism has meanwhile raised to epistemological self-consciousness. As Ricardo first realized, once human labour is seen as profoundly ‘fungible’ – namely, as something that might be done more efficiently by new technology – attempts to protect jobs and secure wages start to look like rent-seeking. The logical conclusion, which goes to the ‘spirit of capitalism’, is that the ‘human’ is whatever manages to recover and enhance its value by shifting its shape in a dynamic market. In this respect, the human is not opposed to capital; it is capital in its most protean form, a proposition that transhumanists nowadays promote as ‘morphological freedom’ (Fuller, 2019c: Chap. 2). In effect, neoliberalism has managed to perform a Gestalt switch on Marxism.
Chapter 6
Prolegomena to a Political Economy of Knowledge beyond Rentiership
Abstract This chapter begins by situating academic rentiership in the deeper issues surrounding ‘cognitive economy’ that lie at the heart of the Western philosophical tradition. The psychological phenomenon of the ‘Gestalt shift’ exemplifies various ways in which the flow of information can be channelled to enable or disable certain possible ways of regarding the world, or ‘modal power’. Against this background, such familiar concepts from academic practice as ‘quality control’ and ‘plagiarism’ acquire a new look. Quality control sustains rentiership, whereas plagiarism undermines it. Aesthetics – especially the modernist concept of the ‘artworld’ – offers guidance on how to understand this point. The rest of the chapter is largely concerned with the swings back and forth between rentiership and anti-rentiership in the history of knowledge production. Rentiership’s decline is associated with the corruption of its quality control processes, which reveals academia’s reliance on equally corrupt power elites in society. Moreover, the periodic introduction of new communication media over which academia cannot exert proprietary control – from the printing press to the internet – has historically served to redistribute the sphere of plausible knowledge claims, and modal power more generally. The chapter ends with a consideration of the stakes in the reduction of knowledge to information, an explicit aim shared by Silicon Valley enthusiasts and neoliberals alike.
6.1 The Cognitive Economy of Gestalt Shifts: Plato and Aristotle Redux

The ‘normative’ dimension of my version of social epistemology flies under the flag of ‘knowledge policy’, a phrase that has always carried strong economic overtones (e.g., Fuller, 1988: part 4; Fuller, 2002; Fuller, 2015: Chap. 1). In this respect, I have walked in the footsteps of Charles Sanders Peirce, who invented a field called the ‘economics of research’, and of his latter-day follower Nicholas Rescher (1978, 2006). However, it would be a mistake to think that regarding knowledge in economic terms is merely a late nineteenth-century innovation. On the contrary, ‘economy’ in
the sense of efficiency has been endemic to the Western pursuit of knowledge from its inception. Indeed, both Plato and Aristotle were interested in ‘explaining the most by the least’, though they interpreted this principle in rather opposing ways, which have served to determine the subsequent history of philosophy.

Plato interpreted the principle as calling for a unified understanding of reality, very much in the spirit of physicists who continue to seek a ‘Grand Unified Theory of Everything’. This meant that the diversity of phenomena that we normally experience constitutes an inefficient understanding of reality that must be ‘resolved’ in some fashion, say, in terms of the outworking of the laws of nature under specific conditions. Such laws are in character quite unlike the phenomena experienced because they range over not only actual but also possible states of the world. Thus, what we normally see as necessary features of the world – ‘structures of the lifeworld’, as Alfred Schutz might say – turn out to be, under closer and deeper inspection, features that are contingent on certain prior opportunities and decisions – ones that perhaps faced some cosmic intelligence. Plato and his immediate followers sought to grasp this play of hidden forces through the faculty of nous in a manner that is still associated with mathematical intuition and thought experiments. However, starting in the Middle Ages, these mental manipulations were increasingly performed, not in one’s head but on the world itself, in what we now call ‘experiments’ in the proper sense. Accompanying this development was a more specified understanding of the Platonic quest for unity, namely, that the test of our knowledge of the underlying laws is that we can use them to simulate the aspects of reality that we wish to understand – and quite possibly improve upon. This is what historians of science call the ‘mechanical worldview’, and it survives in all fields where model-building is taken seriously as a route to knowledge. For today’s descendants of Plato, the phenomena of the world are the outputs of some executed cosmic computer programme, in terms of which scientific inquiry amounts to reverse engineering or even hacking.

In contrast, Aristotle saw the diversity of reality as more directly indicative of nature’s efficiency. In that case, relatively minimal cognitive adjustment on our own part is required to be optimally attuned to reality. What philosophers nowadays call the ‘correspondence theory of truth’, first expressed by Aristotle’s staunchest medieval champion Thomas Aquinas, derives from this tradition. Indeed, Aquinas’ own formulation – veritas est adaequatio intellectus ad rem (‘truth is the alignment of the mind to the thing’) – suggests that such a state of mind could arise with relatively little deliberate effort. In that case, science is not about the search for hidden laws that are far from our normal way of seeing things. Rather, it is a glorified pattern recognition exercise whereby we come to see the various patterns which together constitute the world. The path from perception to cognition is, on the Aristotelian view, rather shorter than on the Platonic view. In the modern era, the Aristotelian position has often been touted as ‘common sense’ realism, a more sophisticated and contemporary version of which is Nancy Cartwright’s (1999) ‘patchwork’ scientific ontology.
Over the centuries, the sorts of hierarchies that represent ‘natural order’ in biological classification systems have most consistently expressed Aristotle’s sense of cognitive efficiency, insofar as the
relationships between ‘orders of being’ are based on morphological resemblances – that is, how organisms appear to the eye. In this context, today’s controversy in taxonomy over whether morphological resemblance should yield to genetic overlap as the basis for organizing species in the ‘tree of life’ marks a Platonic challenge to a field traditionally dominated by Aristotelian sensibilities, as it would allow two similar-looking species to have been brought about by radically different genetic paths (Wilson, 2004). In effect, the taxonomic judgements of the field biologist (aka Aristotelian) would have to defer to those of the lab biologist (aka Platonist).

The Plato-Aristotle dispute over cognitive economy has been more central to Western intellectual history than is normally acknowledged. Consider the role of substitution in logic and economics, which is normally rendered as ‘functional equivalence’. For the founder of modern symbolic logic (and arguably analytic philosophy), Gottlob Frege, the ‘Morning Star’ and ‘Evening Star’ are functionally equivalent because both refer to the planet Venus, but under different observation conditions. Likewise, two qualitatively different goods are functionally equivalent in the market if a price is agreed for their exchange. In that sense, the two goods are simply alternative ways to spend money; hence Marx’s critique of ‘commodification’, according to which capitalism turns money into metaphysics, as the exchange relation becomes the ultimate ground of being. The supposed difference between the logical and economic senses of ‘substitution’ is that the former involves a presumed identity, whereas the latter requires identity to be constructed. But is it really that neat? Perhaps not, since it took an astronomical inference by Pythagoras to show that ‘Morning Star’ and ‘Evening Star’ both refer to Venus. That inference happened at a particular time and place, the force of which is masked by using ‘discovery’ to describe the event. After all, ‘discovery’ (as opposed to ‘invention’) suggests that Pythagoras’ inference could and would have been made at some point by someone. To be sure, this is the Platonic stance in a nutshell, and Plato’s own concern was over who should take that decision – the one or the many.

In the future, the Turing Test is likely to be the proving ground for this sensibility, as artificial intelligence-based intellectual work raises the question of what the ‘added value’ of being human is (Fuller, 2019a). In effect, if something ‘not-human’ passes as ‘human’, then the Platonist would welcome this as teaching us that what it means to be ‘human’ does not require the substratum we had previously thought it did: we will have found a different (‘machinic’) path to realizing humanity. Indeed, this prospect was adumbrated as ‘functionalism’ at the dawn of the computer age by Hilary Putnam (1960). In contrast to this entire line of thought, an Aristotelian would regard the mistaking of a computer for a human as simply a misjudgement, since the ‘human’ is intimately tied to the sorts of creatures that we normally take to be ‘human’.

Gestalt psychology provides an interesting lens through which to see this difference between Platonic and Aristotelian understandings of cognitive efficiency, one that was first surfaced by the social psychologist Kurt Lewin (1931) and later taken up by Edmund Husserl (1954). (In both cases, the Platonic position is called ‘Galilean’.)
The very idea that we tend to see the world as ‘always already’ patterned
would suggest an Aristotelian sensibility, were it not for the fact that the pattern we see can be so easily manipulated depending on the context of perception, which in turn suggests a Platonic sensibility. Thus, in a ‘Gestalt shift’ experiment, we may start by seeing an ambiguous figure as a duck but end up seeing it as a rabbit, yet at both moments the image appears as an ‘efficient’ representation of reality, both in terms of directness of perception and comprehension of experience (i.e., the phenomena are ‘saved’). Aristotle may explain the experience of the subject, but Plato explains the behaviour of the experimenter. Unsurprisingly perhaps, in the mid-twentieth century, the Gestalt shift was a popular means – used by, among others, Ludwig Wittgenstein, Russell Hanson and Thomas Kuhn – to explain conceptual change, especially in science (Fuller, 2018a: Chap. 6).

The meta-lesson of Gestalt psychology is that your perception of the world is rendered more changeable once you change your understanding of how that perception was brought about. This insight has made Gestalt psychology an endless fount of ideas for propaganda and advertising. It has also been used to explain how the people behind the early modern ‘Scientific Revolution’ came to shift from an Aristotelian to a Copernican (aka Platonic) worldview: that is, from the standpoint of the Earth to the standpoint of the Heavens. It involved thinking in radically different terms about the relationship between what we experience and what we know. In effect, these original scientists understood reality from the standpoint of the Gestalt experimenter rather than the Gestalt subject – where the experimenter was a proxy for a cosmic intelligence, aka ‘The Mind of God’. Optics was perhaps the main site of contestation for trying to explain how our senses filter reality, which the mind then actively reconstructs (Crombie, 1996: Chap. 16). To this day, philosophy frames most ‘problems of knowledge’ in terms of the higher order interpretation of visual data.
6.2 Modal Power and the Defeat of Knowledge as a Public Good

Now what does the above have to do with rentiership as an economic feature of academia? Let us start with an observation drawn from the history of technology but more generally applicable to any strategic decision-making, including knowledge policy. For any path-breaking innovation, such as the automobile, there are usually at the outset several competing prototypes, with various strengths and weaknesses. However, over time one becomes dominant and then sets the pace for the rest. This suggests two complementary concepts: opportunity cost and path dependence. An opportunity cost consists in alternative states of the world that are made less likely if not impossible as a result of a decision taken – such as the market’s decision to back Ford’s way of doing cars in the early twentieth century. Path dependence refers to the efficiency gains that result from any such decision, as infrastructures develop to reinforce its efficacy, removing the original alternatives from serious consideration in the future.
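The self-reinforcing character of path dependence can be rendered as a minimal simulation. The sketch below is my own illustration, in the spirit of Brian Arthur’s ‘increasing returns’ models rather than anything cited in this chapter: each new adopter favours whichever technology already has the larger market share, so an early run of luck for one prototype is amplified until the alternative drops out of serious consideration.

```python
import random

def lock_in(adopters=10_000, seed=7):
    """Toy increasing-returns market with two initially equal technologies.

    Each adopter picks a technology with probability disproportionate to
    its current market share (share squared), so small early fluctuations
    are amplified until one prototype effectively monopolizes the market.
    """
    random.seed(seed)
    share = {"A": 1, "B": 1}  # one pioneer adopter each
    for _ in range(adopters):
        weight_a = share["A"] ** 2  # superlinear feedback: success breeds success
        weight_b = share["B"] ** 2
        pick = "A" if random.random() < weight_a / (weight_a + weight_b) else "B"
        share[pick] += 1
    return share

print(lock_in())  # one technology ends up with nearly all adopters; which one varies by seed
```

Which technology wins is an accident of the early draws, not of any intrinsic superiority – the opportunity cost of the market’s first few choices, frozen into infrastructure.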
In the case of Ford, relatively low prices and simple design trumped concerns about human safety and environmental protection. These anti-Fordist concerns only resurfaced a half-century later, once the Ford-anchored automotive market had stabilized. By that time, they had become ‘negative externalities’ that needed to be ‘managed’, but more in the spirit of an exception to a rule than of a game changer.

At stake in opportunity costs and path dependence is what I call modal power, the manipulation of intuitions about what is possible, impossible, necessary and contingent. I regard modal power as the cornerstone of the ‘post-truth condition’ (Fuller, 2018a: Chap. 2). Opportunity costs look at modal power from Plato’s ‘second order’ standpoint, as the logicians say, while path dependence sees it from Aristotle’s ‘first order’ perspective. Classical political economy’s default ‘free market’ mentality – especially its assumption that more competitive markets are ‘freer’ – may be seen as part of a concerted second-order attack on rentiership as a first-order phenomenon, whereby rentiership is understood to be the power that accrues to sheer possession, by whatever means it was brought about and to whatever ends it might serve. While Marx correctly identified the classical political economists as capitalism’s original house theorists, their support of private property was focussed mainly on the opportunities for investment that it provided. In that respect, for them venture capitalism is ‘capitalism’ in its purest sense. In classical political economy, land was valued not as a power base for its owner, who could then create bottlenecks in the flow of capital, but as a platform for launching any number of projects from which all those involved in the transaction might benefit.

It is worth pausing to consider the peculiar – some might say alchemical – metaphysics that underwrites this alliance of scientific Platonists and free market capitalists whom I’ve portrayed as being so vehemently opposed to the modal power embodied in rentiership. A commonplace of the modern economic worldview is that humans harbour infinite longings but are constrained by limited resources. Such is the fate of spiritual beings trapped in a material world – at least that would have been the gloss given by Joseph Priestley, William Paley and Thomas Malthus, who were among several radical natural theologians who contributed to the foundations of classical political economy in the late eighteenth and early nineteenth centuries. Nevertheless, and this is the main point, even these natural theologians recognized that humanity had already managed to substantially improve its lot over that of other animals. They differed amongst themselves over the terms on which one might speak of ‘limits to growth’, but they agreed that the ‘Industrial Revolution’ dawning in their midst promised much overall growth for the foreseeable future. Was this apparent human capacity to generate ever greater wealth in the face of absolute scarcity an illusion, or was it reflective of some strategy that we had implicitly discovered to overcome our material limitations, if not our species finitude altogether? Adam Smith started the ball rolling by suggesting that the secret lay in the rational organization of labour. A half-century later, Count Saint-Simon repaid the compliment by coining the word ‘socialism’ for the policy of governing all of society on this basis.
The difference between Smith and Saint-Simon was that the former believed that people left to their own devices amongst
themselves – without the imposition of legal restrictions on land transfers and labour mobility – could reorganize peacefully and profitably, whereas Saint-Simon thought that this required a special expertise – the so-called ‘captains of industry’, professional organizers of humanity, whom nowadays we might associate with ‘knowledge managers’ (Fuller, 2002). Notwithstanding their well-rehearsed differences, the founders of capitalism and socialism shared the belief that the way out of human finitude was to disembed people from their traditional social relations and re-embed them in contexts that made the most productive use of their talents, effectively releasing their hidden energies. This way of looking at people amounts to entire societies undergoing a Gestalt shift. In other words, people’s capacities remain constant across the shift, but there is a productivity gain from ‘before’ to ‘after’ the shift as those capacities come to be more fully ‘exploited’ (a term that acquired negative connotations only after Marx). In Gestalt psychology terms, the ‘figure’ remains the same, but the ‘ground’ has changed. Put crudely, when a slovenly serf is shifted from the field to the factory, he or she becomes a productive worker. Agriculture provided the model for this way of thinking: The starting shot for the Industrial Revolution was fired when the first person saw a relatively undisturbed part of nature as ‘raw materials’ for human use.

An implication of speaking about modern societies as ‘dynamic’ is that they try to minimize the opportunity costs of their members’ having been born a certain way – that is, at a particular time and place, to a specific family, with certain capacities, etc. They make people more ‘shiftable’ in the Gestalt sense. That someone was born to a family of serfs doesn’t mean that he or she must remain a serf forever – and hence in an indefinite state of low productivity. One can become more productive than that – and thereby provide greater overall benefit – under the right conditions. To be sure, this leaves open how exactly those conditions are to obtain, a question to which capitalists and socialists provide competing answers. In either case, change is in the cards, with path dependence cast as the ever-present foe, as enslavement to one’s birth morphs into the slavery of routinized labour or Big Brother. The phrases ‘creative destruction’ and ‘permanent revolution’ are vivid expressions of the capitalist and socialist antidotes to path dependence, respectively. This shared mindset perhaps explains why Marx used the word ‘protean’ to characterize capital (Fuller, 2021a).

This general line of thought gets complicated in the second half of the nineteenth century, as thermodynamics adds nuance to the idea of nature’s scarcity. Scarcity is no longer simply about the inherent finitude of material reality, understood as a consequence of the Biblically fallen state of humans – the hidden Calvinist stick used to prod humans into self-improvement. In addition, our very efforts to reorganize matter to increase productivity also raise the level of material scarcity, now understood in terms of reducing the amount of ‘free energy’ available in the universe. This opened up two radically opposed horizons: on the one hand, pessimists who believe that the irreversibility of this loss of free energy – aka entropy – means that all our efforts are wasted in the long term; on the other, optimists who
believe that the secret to beating this tendency – or at least indefinitely delaying the inevitable – is to become more efficient (Georgescu-Roegen, 1971; Rabinbach, 1990). The latter option involves striving to do more with less, which the theologically inclined might associate with our heroic effort to mimic God’s original position of creating everything out of nothing (creatio ex nihilo), the ultimate feat of efficiency, which in secular guise is carried forward as ‘transhumanism’ (Fuller & Lipinska, 2014: Chap. 2).

But it also inspired more downsized expressions in discussions of political economy. The drive to minimize ‘transaction costs’ is one of the more creative responses, especially as an economic argument for ‘institutions’ as agencies whose existence transcends that of the particular agents involved in any set of market exchanges. As originally formulated by Ronald Coase (1937), institutions are designed to anticipate and mitigate costs so that trade can flow without substantial interference – that is, more ‘freely’ than it might otherwise. And so began the ‘law and economics’ movement, which measures human progress in terms of minimizing the costs of harm without necessarily preventing the harm itself. For example, if there is an overall benefit to my building a house even though it could destroy the value of adjacent land, then I should compensate in advance, so as to avoid later complaints that could waste still more time, effort and money of all the parties concerned (Calabresi & Melamed, 1972). Of course, such a scenario supposes that some superordinate party – typically a judge – takes a decision to optimize over all the parties’ interests, which are fungible with respect to the prospect of financial compensation. The bottom line is that everyone has got their price when it comes to neutralizing harms, and the only question is how to find it (Ripstein, 2006: Chap. 8).
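The underlying arithmetic is simple enough to be worth spelling out. The figures below are hypothetical, my own gloss on the Calabresi and Melamed style of reasoning rather than an example drawn from their text:

```python
def liability_rule(builders_benefit, neighbours_harm):
    """Toy version of law-and-economics reasoning about harms.

    The activity proceeds whenever its benefit exceeds the harm it causes,
    with the harmed party compensated at their 'price' rather than granted
    a veto: the harm is costed and managed, not prevented.
    """
    if builders_benefit > neighbours_harm:
        surplus = builders_benefit - neighbours_harm
        return f"Build, compensate {neighbours_harm} in advance; society nets {surplus}"
    return "Do not build: the harm outweighs the benefit"

# Hypothetical figures: the house is worth 100 to its builder and
# costs the neighbouring landowner 30 in lost value.
print(liability_rule(100, 30))  # Build, compensate 30 in advance; society nets 70
```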
While for many this is an intuitively harsh principle of justice, it is nevertheless future-oriented rather than past-oriented. There is no presumption in favour of retaining the status quo if it potentially blocks a better future for all concerned – at least in the aggregate sense of ‘all’. Those who stand in the way of progress should be willing to ‘cash out’ their stakes under the right conditions. And in a world innocent of the long-term effects of environmental degradation, it was easy to see how such a policy could hasten the conversion of farms into factories, resulting in the rentiers yielding to the capitalists as the dominant economic force.

But it would be a mistake to understand this mentality as limited to ‘political economy’ as conventionally understood. It also extends to ‘knowledge production’. In this context, I will unpack the economic definition of a public good to show that it implies a similar hostility to the conservative bias embodied in rentiership. For economists, something counts as a public good if it would cost more to restrict than to permit access to it. The availability of such a good is probably hard to restrict by nature, and increased access to it would probably serve to increase society’s overall wealth and well-being. An implication is that restricting access to the good in question means lower productivity. As we saw in the previous chapter, such considerations have historically persuaded judges to rule in favour of freeing markets, regardless of which party’s interests actually benefit – as in the anti-monopoly rulings of the US Progressive Era (Fried, 1998). This is because the judges have thought of markets as ultimately about information, which by nature
flows and spreads (Kitch, 1980). It is an idea that Stewart Brand has popularised for the Silicon Valley set with the slogan ‘Information wants to be free’, and one which in recent years the economist Philip Mirowski has demonized as the thin edge of the wedge for neoliberalism to colonize the academy (Mirowski & Nik-Khah, 2017). We shall return to this point in the Postscript to the chapter.

For capitalists and socialists alike, notably both David Ricardo and Karl Marx, public goods in the sense of goods presumed to be free are diametrically opposed to rent, which given our earlier analysis can be understood as the economists’ conception of evil, as it replaces efficient use with its two polar opposites – on the one hand, sheer idleness (i.e., non-use); on the other, excessive effort (i.e., high-cost use). Thus, starting with Coase, the law and economics movement has turned the eradication of this evil into a positive principle of justice by disincentivizing the rentier’s tendency to charge high tariffs that restrict the access of others who might use the property more productively. Such universal hostility to rentiership also explains the instant worldwide acclaim for Thomas Piketty’s (2014) Capital in the Twenty-First Century, a book that focuses specifically on the role of inherited and unearned wealth as the most persistent source of inequality in societies across the world. Some of course have gone farther than Ricardo and Piketty – but perhaps not Marx – to declare that the institution of strong private property rights is itself the economic version of Original Sin, as it creates the legal conditions for the restriction of resource use by the bare fact that what’s mine is by definition not yours. Without private property, nothing would be rentable, and hence information flow would not be blocked at all. This anarcho-communist position, which traces its roots back to Rousseau and Proudhon, should be kept in mind when we turn to contemporary academia’s obsession with plagiarism.

Of course, ‘rentiers’ do not present themselves that way at all. They see themselves as protecting an asset whose value might otherwise degrade from unmonitored use (Birch, 2017b, 2020). Thus, the landowners whom Ricardo and Marx held in contempt for impeding human productivity are reasonably seen as proto-environmentalists for the resistance they offered to factory building on their property. This issue of ‘quality control’, which Garrett Hardin (1968) made vivid to economists as the ‘tragedy of the commons’, recurs in academia through the idea of ‘gatekeeping’, which originally referred to the tolls used to channel traffic through privately owned lands. The term was then repurposed by Kurt Lewin in the 1940s for the filtering of messages in a mass media environment. And nowadays ‘gatekeeping’ is routinely used to characterise the ‘peer review’ processes that govern the evaluation and publication of academic research.

It is worth lingering here over the idea of ‘quality control’. It is a fundamentally uneconomic concept by virtue of assuming that the value of the good in question is intrinsic rather than extrinsic. More to the point, ‘intrinsic value’ means continuing to respect already known qualities of the good.
Thus, the proto-environmentalist landowners of the early Industrial Revolution presumed that their property possessed a value – say, associated with its specific physical constitution or cultural context – that could not be exchanged for anything else, which in turn justified the high tariff levied on any prospective user. These landowners didn’t see themselves as
exploiting their relative advantage but as performing a hereditary role as stewards of the Earth. A more democratic version of this line of reasoning is familiar today in terms of the allegedly irreducible benefits of ‘natural’ over ‘artificial’ food ingredients, which informs the largely European resistance to the introduction of ‘genetically modified’ organisms into agriculture. To his credit, Roger Scruton (2012) locates this mentality at the heart of ‘Conservatism’ as a political ideology, notwithstanding the more Left-sounding rhetoric of today’s self-styled ‘eco-warriors’. However, economics is by nature a ‘Liberal’ science, and from that standpoint such ‘Conservative’ arguments for quality control look like attempts to discourage the flow of capital by making it harder for competitors to enter the market. After all, the lifeblood of capital is the prospect that anything currently done can be done more efficiently and to greater effect – and quite possibly by someone other than the current owners of the means of production. The agent of change remains an open question: It may or may not be the current owners. The problem is that the current owners may not be motivated to step up to the plate. In that context, appeals to ‘intrinsic value’ simply grant ideological licence for rents to be added as an artificial layer of scarcity on top of nature’s scarcity, limiting humanity’s ability to rise to the economic challenge.
6.3 Learning from Plagiarism: Knowledge as an Artworld

A sign that academia has become more protective of its rentier tendencies is its increasing obsession with plagiarism (Fuller, 2016a: 44–46). It reflects a rise in intellectual property thinking more generally in academia that is a mark of its desperation in the face of challenges to its authority, which ironically only serves to subvert the idea of free inquiry that the Humboldtian university aims to uphold. After all, plagiarism is ultimately about syntax fetishism, the heart of copyright, which confers intellectual property rights on the first utterance of a particular string of words or symbols, even though it could have been uttered by any other grammatically competent person under the right circumstances. (At least that’s how Chomsky would put the matter.) The mystified legal expression for this privileging of the first utterance is ‘authorship’, which was subject to much criticism, deconstruction and even scorn in the 1960s and ’70s, especially in France (Barthes, Foucault, Derrida). Nevertheless, automated plagiarism detectors such as ‘Turnitin’, through which students nowadays must submit their papers prior to academic evaluation, uphold the syntax fetishism associated with ‘authorship’ in that principled way that only machines, but not humans, can.
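Indeed, the fetishism can be stated in a few lines of code. The following sketch is my own illustration of the generic string-matching that underlies such detectors, not Turnitin’s proprietary algorithm: two texts are compared purely as sequences of tokens, so identical word strings are flagged regardless of whether their reuse is apt, ironic or illuminating.

```python
def shared_ngrams(text_a, text_b, n=5):
    """Report every n-word sequence that two texts share verbatim.

    Syntax fetishism in executable form: the comparison sees only strings
    of tokens, never their meaning or their context of use.
    """
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    return ngrams(text_a) & ngrams(text_b)

essay = "as kant observes the light dove cleaving the air in her free flight"
source = "the light dove cleaving the air in her free flight might imagine"
for match in sorted(shared_ngrams(essay, source)):
    print(" ".join(match))  # flags every shared five-word string, however apt the reuse
```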
Unfortunately, no credible philosophy of language supports the policy of projecting semantic value from syntactic originality. The meaning of any string of words or symbols is ultimately up to the context of use, which inter alia depends on the other strings in which it is embedded – what Derrida would call its ‘intertextuality’. This even applies to students who cut and paste, say, bits of Kant into essays that are presented as ‘original work’. To be sure, there are intellectual grounds on which such efforts should fail. But they have less to do with the failure to acknowledge sources than simply with the failure to meet the demands of the assignment. Indeed, this should be the basis on which the teacher is bothered by ‘plagiarism’ in the first place – namely, the inappropriateness of the cut-and-pasted bits of Kant to the topic that she has asked the student to address. However, a student capable of cutting-and-pasting Kant into their text such that the teacher takes it to be part of a good answer to an exam or essay question – even if the teacher doesn’t realize that Kant himself originally said it – deserves praise, not condemnation.

Praise is due not merely because the student has outfoxed the teacher at her own game. That sort of response remains beholden to the syntax fetishism that fuels the academic taboo on plagiarism. Rather, praise is due because the student has demonstrated good judgement regarding what sort of thing belongs in what sort of place – in this case, the bits of Kant that end up in a highly graded essay. If any ‘originality’ is involved, it consists in the ability to pour old wine into new bottles such that it tastes different, if not better. (Recall the Borges short story, ‘Pierre Menard, Author of the Quixote’.) What we normally call ‘intelligence’ or ‘aptitude’ is primarily about that capacity, which has nothing to do with some ultimate sense of originality. Indeed, this strategic deployment of plagiarism underwrites the justification of multiple-choice tests, where the value of one’s knowledge is presumed to be a function of its display in context, not its origin. In fact, we underestimate the extent to which we ‘always already’ live in the world of the Turing Test, where the cheaters and chancers can’t be distinguished from the geniuses on a principled basis. Drawing such distinctions requires the additional work of embedding test performance in larger narratives about how the test takers came to manifest such knowledge, on the basis of which the value of their performance might then be amplified or discounted, resulting in categories like ‘cheater’, ‘chancer’ and ‘genius’.

This anti-proprietary approach to plagiarism recalls the ‘artworld’ theory of aesthetics developed by Arthur Danto (1964), who stumbled upon it after wondering what made Andy Warhol’s ‘Brillo Box’ a work of art. After all, it would be easy to regard the box on display in an art gallery as Warhol’s cheap attempt to profit in violation of the ‘Brillo’ trademark (‘Brillo’ is a US brand of scouring pad). Of course, most aestheticians would recoil from such a crass judgement, even if they wouldn’t rate Warhol’s effort very highly. Yet that crass judgement does approximate contemporary academic attitudes to plagiarism: The student, like Warhol, would seem to be brazenly appropriating someone else’s intellectual property for their own purposes. Danto’s own approach was to say, first, that art is a matter of seeing something as art and, second, that this perception requires a context in which to recognize the work as art. In short, we need to inhabit an ‘artworld’.

Danto went further with this idea. Nelson Goodman (1968) had famously proposed that art may be divided into those works that can be forged (because they constitute a unique completed object) and those that cannot be forged (because they can be completed in many ways). He had in mind the distinction between a painting or sculpture, on the one hand, and a musical score or dramatic script, on the other.
Against this intuition, Danto (1974) proposed imagining that two artists produce paintings that appear identical to the observer, though one has used Rembrandt’s method
and the other Jackson Pollock’s. Goodman might claim that subtle differences between the two paintings could always be found, on the basis of which one painting might be judged superior and the other even a forgery. One might suppose that Goodman’s judgement would depend on suspecting that the two paintings had been produced at different times and by different means – and that resolving those technicalities would settle the normative issue of authenticity and hence value. For Danto, if you like one, you should like the other. If anything, knowing that they were produced differently should enhance, not detract from, your aesthetic experience. The Pollock might even be valued more, given the prior improbability of its result.

Danto’s point was designed to undermine the very idea of forgery – but it works equally for forgery’s cheap cousin, plagiarism. For him, unlike Goodman, an aesthetic judgement involved treating not only the future but also the past of a candidate work of art ‘performatively’. Just as we potentially learn something new about music or drama with each new performance, the same applies to our unlearning ideas about the ‘unique craftsmanship’ of a painting or sculpture upon realizing that it can be (and could have been) brought about differently. This sense of temporal symmetry dissolves Goodman’s original distinction. Of course, aesthetic judgement then gets placed more squarely on the shoulders of the judge – and in that sense becomes more ‘subjective’. Indeed, Danto’s championing of Warhol’s Brillo Box as art led many critics to claim that he had dissolved the concept of art altogether. And this brings us back to the challenge that academics are nowadays afraid to face: trusting their own judgement.

It is perhaps difficult for epistemologists to comprehend that we might normally inhabit an artworld: life as one big interactive virtual gallery. However, it will become easier in the future. For the past quarter-century, it has been fashionable to speak in terms of our possibly living in a ‘simulation’ (Bostrom, 2003). The uncanniness of this proposal rests on the Cartesian assumption that someone other than ourselves – perhaps an evil demon or the ‘Matrix’ – is running the simulation in which we live. However, if Mark Zuckerberg gets his way, soon we’ll all be dwelling in the ‘Metaverse’, subject to the terms and conditions of his arcane Facebook-like contracts. But even a demon as benign as Zuckerberg is not really needed. Humanity may be routinely running different simulations on itself as history is imperceptibly revised to reconfigure future horizons. (In historiography this is called ‘revisionism’, often in a pejorative sense.) In Gestalt terms, we need to imagine that the same figures (from the past) are reorganized against a new ground (i.e., a renewed horizon for the future) to constitute a new perceptual whole – what Danto himself called the ‘transfiguration of the commonplace’. The previously suppressed comes to be salient – and vice versa. Inspired by Freud, Jacques Derrida (1978) likened the activity involved in this transfiguration to writing on a palimpsest. He wasn’t far off the mark. A similar sensibility is also expressed in Eagleman (2009), a work inspired by recent research in neuroscience. Plato’s anamnesis, the knowledge that comes from recalling what had been forgotten, provides a prototype for this kind of activity.
However, the relevant sense of ‘memory’ is not the strict mental recovery of a determinate past but rather a liberalization of our thought about how the past might determine the future. Once
again, it may simply involve a rearrangement of what is already known to form a new whole. Thus, Warhol did not uncover the hidden truth about a box of Brillo – say, that it was an artwork masquerading as a household product. Rather, he saw the box’s potential to be reconfigured differently and thereby perform a different sort of function – and in so doing, he added value to the Brillo box. In this respect, Plato may be seen as the original champion of ‘intellectual recycling’, in which case anamnesis provides the epistemological basis for plagiarism, especially since Plato regarded memory as collectively owned by whichever beings are capable of hosting it. The obvious case in point is language, especially if one subscribes to a Chomsky-style ‘universal grammar’ approach, which Chomsky (1971) believed Humboldt himself to have held. In that case, the innate creativity of everyone uttering unique sentences is complemented by the capacity that any of us possesses to have uttered those sentences. On this view, the ‘added value’ of the person called the ‘originator’ consists in no more than being at the right place at the right time.

This way of thinking has profound implications for our understanding of historical significance. Consider Kuhn’s (1970) influential conception of ‘scientific revolution’. Even after the great paradigm shift from classical to relativistic physics, the achievements of Copernicus, Galileo, Newton and Maxwell did not disappear, but they did appear in a different light, which altered their significance vis-à-vis the new future horizon of physics. Nelson Goodman (1955) interestingly encapsulated the epistemology of the situation as the ‘new riddle of induction’, namely, how the same past can logically imply radically alternative futures. In Kuhn’s case, the alternatives were that classical physics carries on regardless of its anomalies or that it shifts to a new foundation in relativity theory. After 1905 the physics community began to take the latter course, after having followed the former course for the previous two hundred years.

For his part, Goodman presented a more abstract example. He posited two competing properties – ‘green’ and ‘grue’ – that emeralds might possess: an emerald is ‘grue’ if it is examined before some stipulated time and found green, or is not examined before that time and is blue. Both properties can account for the green colour of all emeralds observed before the stipulated time, but they diverge thereafter, one predicting that the next emerald will appear green, the other that it will appear blue. While it may be obvious which hypothesis will prevail, we really don’t know until the prediction happens. But suppose the emerald does turn out to confirm the ‘grue’ hypothesis: what follows? We might simply say that the test emerald was an outlier, an exception that proves the rule – or we might say that emeralds had been ‘grue’ all along and we had been deceived by their superficially green appearance. The latter judgement would license a rewriting of the past, a rewriting associated with our now having come to see something ‘deeper’ about the emeralds than we previously had.
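Goodman’s riddle can be put in quasi-computational form. The sketch below is my own gloss, with an arbitrary cut-off time standing in for Goodman’s stipulated moment of prediction: the two predicates agree on every observation made so far, so the evidence alone cannot decide between them.

```python
T = 100  # the stipulated time; every emerald examined so far was examined before T

def green(time_examined):
    """The familiar hypothesis: emeralds are green, full stop."""
    return "green"

def grue(time_examined):
    """Goodman's rival: green if examined before T, blue thereafter."""
    return "green" if time_examined < T else "blue"

past_observations = range(100)  # examination times 0..99, all before T
assert all(green(t) == grue(t) for t in past_observations)  # both fit the record perfectly

print(green(100), grue(100))  # 'green blue': the same past implies rival futures
```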
This is precisely how a scientific revolution is portrayed by those who back the winning hypothesis: the ‘crucial experiment’, as Popper would put it, that pits ‘green’ against ‘grue’ triggers the deeper realization that appearances aren’t all that they had seemed. Philosophers of science have yet to answer adequately whether the historical succession of these ‘deeper realizations’, this trail of ‘grue’ events that we call ‘revolutions’, displays an overall sense of purpose that amounts to a definitive growth in knowledge. Is it genuine scientific progress – or is that conclusion just a figment of the narrative imagination, what Kuhn called science’s ‘Orwellian’
historiography, which uses history to justify current policies? Kant famously counselled agnosticism about all such ‘teleological’ judgements, introducing the concept of ‘purposiveness’, the mere appearance of purpose. But that may turn out to be ‘purpose’ enough, at least for the purposes of promoting knowledge production.

The rise of automated plagiarism detectors (e.g., ‘Turnitin’) has nudged academics toward treating knowledge as intellectual property more than they otherwise might or should do. I have gone further to suggest that knowledge would be better served by an artworld than by the rentiership model that underwrites the fixation on plagiarism. However, academic rentiership is not simply a reactionary response to contemporary pressures from an increasingly sceptical public for whom ‘Don’t trust the experts!’ has proved to be an effective political rallying cry. Indeed, if academics were more concerned about spreading ideas than rewarding authors, plagiarism would not be the moral panic that it is today. But of course, contemporary academia is not simply about efficiently producing knowledge as a public good but also about properly crediting the producers. However, these two goals cut against each other, resulting in the rather tortured path-dependent ways in which academics are forced to make their knowledge claims in the professional literature, namely, by citing ‘proper’ precursors – the functional equivalent of paying rent to maintain one’s standing in a domain of inquiry. This ongoing need to publicize other authors ends up making academic knowledge claims more arcane than they might otherwise be, given that in most cases one could reach similar conclusions by citing fewer authors.

The template for intellectual proprietarianism had already been set in the seventeenth century by the Charter of the Royal Society of London, according to which a self-selecting and self-organizing group of members would decide on what counts as valid knowledge without government interference. In effect, the Crown gave the Society exclusive rights over the certification of public knowledge as long as the members kept their own house in order. Accordingly, individuals would submit their knowledge claims to ‘peer review’, in return for which they would be able to claim limited intellectual property rights as a ‘discoverer’. This setup put a premium on being first, and latecomers would be expected either to build upon or work around predecessors, as they all staked out a common ‘domain of inquiry’. This conception of knowledge informs Kuhn’s (1970) account of puzzle solving in ‘normal science’. Thus, open conflict among scientists has increasingly focused on priority disputes, which resemble the sorting out of competing entitlement claims to a piece of land (Collins & Restivo, 1983).

In the long term, this arrangement changed the character of referencing knowledge claims and claimants. What might be called a ‘Westphalian system’ of epistemic authority emerged, with the aim of establishing the jurisdiction of every claim and claimant to knowledge. From this arrangement came the mutual recognition exercise – some might say mutual protection racket – that is associated with ‘expertise’, which is reflected in today’s credit-assigning academic referencing practices, aka ‘citation’. Meanwhile an older referencing convention – the footnote – has lost favour. It had often been used to openly contest the epistemic authority of rivals, which in turn suggested a sphere of inquiry with multiple
overlapping jurisdictions always potentially at odds with each other – in short, a ‘pre-Westphalian’ world of knowledge (Grafton, 1997).
6.4 Academic Rentiership as a Tale of Swings and Roundabouts

One way to tell the history of academia would be in terms of alternating waves of rentiership and anti-rentiership. The onset of rentiership may be generally associated with the rise of ‘schools of thought’ whose authority is maintained by fidelity to a set of ‘original sources’ rather than by innovation away from them. I refer, of course, to the commentary culture surrounding Plato, Aristotle, the Bible and their theological commentators in the Middle Ages. Spaces were introduced in the margins of books as new frontiers for claiming intellectual property rights, which others would then need to incorporate in their own books. This was the original context in which the famous quote normally attributed to Newton – ‘If I have seen as far as I have, it is because I have stood on the shoulders of giants’ – was first uttered, in the twelfth century (Merton, 1965). The balance of power between old and new academic rentiers gradually shifted over the centuries, as marginalia first became footnotes and were then completely internalized as citations embedded in the aspiring rentier’s text. To be sure, less display – and arguably less mastery – of past work is now required to advance one’s own knowledge claims, if one is already in the academic knowledge system. Citations, properly arranged, function as currency that one pays to be granted a lease on a staked-out piece of intellectual property. Admittedly, that constitutes a triumph of ‘efficiency’ in a certain sense – but only for those who have paid the high entry costs (e.g., doctoral training) involved in knowing which works to cite. This in turn makes it harder for those who have not followed that particular path of inquiry to make sense of, let alone evaluate, the knowledge claims in question.

Contrary to its own hype, academic knowledge is less a ‘public good’ than a club good (Fuller, 2016a: Chap. 2). In a club, several forms of rent are paid, starting with the submission of oneself to background checks and sometimes examination, followed by periodically renewed subscriptions, alongside the expectation that, once accepted, one will provide preferential treatment to other club members in the wider society. The acquisition and maintenance of academic expertise follows a very similar pattern. Indeed, the technological infrastructure of academic knowledge production has played a diabolical role in rendering the system more club-like. In the roughly half a millennium that has passed between the introduction of the printing press and the internet, it has become easier for those already in the know to get direct access to the knowledge they need to advance their claims, while at the same time keeping out those who haven’t yet paid the high entry costs. The relative ease with which books and journals have been made available means that each new knowledge
producer needs to reproduce less of the work of their predecessors in their own texts – a citation and a verbal label will often suffice. This allows a quickening of the pace of academic knowledge production, a point that was already in evidence by the time of the Scientific Revolution, less than two centuries after Gutenberg (Eisenstein, 1979).

Until universal literacy became an official aspiration of nation-states in the late nineteenth century, the education of the general public lagged behind the advances of the emerging knowledge elites. Indeed, as we have seen, Saint-Simon’s original ‘socialism’ justified this new form of social distancing in the name of a more rational organization of society. It certainly contributed to the Enlightenment’s self-understanding as a ‘progressive’ and ‘vanguard’ movement, which laid the groundwork for the modern culture of expertise, with Positivism as its house philosophy. Indeed, the philosophes generally supported the establishment of free-standing research institutes that trained future front-line knowledge producers – but remained separate from the ordinary university teaching that prepared the traditional administrative professions: the clergy, law and medicine. Its legacy remains in the academic cultures of such countries as France and Russia, where one normally holds a dual appointment – one for teaching and one for research, each operating quite independently of the other. It is against this historical context that one can begin to appreciate the radically anti-rentier cast of Humboldt’s early nineteenth-century proposal to reconceptualise the academic vocation in terms of the integration of research and teaching in the same person, called an ‘academic’.

But before turning to academia’s historic tendencies against rentiership, note that even today the supposed communicative efficiency of academic referencing erects high intellectual trade barriers against what has become a widely and generally educated public. One needs to know whom academic authors know before evaluating what the authors claim to know. At the same time, most academic publications simply tread water to show that their authors are still in the game. The main beneficiaries are the people the authors cite. All told, this perpetuates an illusion of continuity in fields where people really don’t know what to say. And ultimately the power lies in the hands of the peer reviewers, who these days often end up being selected simply because they’re the only ones willing to spend the time to exploit the uncertainty of the review process. It’s too bad that this point is not made more forcefully in the ongoing debates concerning ‘open access’ academic journals, which proceed from the false premise that the primary obstacles to the public’s utilization of academic knowledge are the prices charged by publishers. High prices merely financially disadvantage those academics wishing to improve their own chances at epistemic rentiership. Removing that barrier does not remove the club-like character of academic knowledge production, which resembles a cargo cult fronted by a Potemkin village.

The first major swing away from rentiership in academia began in the sixteenth-century Renaissance. In effect, the ‘Humanists’ reverse-engineered and thereby undermined the entitlement structure of medieval scholastic authority by showing that the received interpretations of Christendom’s canonical texts were based on false translations.
(It is worth recalling that the scholastics read nearly everything in Latin translation, an epistemic condition comparable to that of Anglophone academia today.)
But two diametrically opposed conclusions were then drawn. Those who remained loyal to Roman Catholicism tended to adopt the relatively relaxed if not cynical attitude of 'Traduttore, traditore' ('To translate is to betray'), while Protestant dissenters adopted the more earnest posture of trying to find the original meaning of these texts. This led to a variety of signature modern epistemic moves, ranging from more first-hand acquaintance with authoritative sources to the search for corroborating testimony – or 'evidence', as we would say today – for the knowledge claims made by those sources. This sparked considerable innovation in the independent testing of knowledge claims, which became the hallmark of the 'scientific method'. Indeed, Francis Bacon made it quite clear that an important – if not the most important – reason to submit knowledge claims to independent evaluation was to short-circuit the trail of commentary, whereby academics simply build or elaborate on misinterpretations of sources that may themselves not be very reliable. Bacon's rather 'post-truth' way of referring to the problem was in terms of the 'Idol of the Theatre', a phrase that evokes the image of academics mindlessly playing the same game simply because everyone else does. In his magisterial survey of global intellectual change, Randall Collins (1998) observed that the periodic drive by intellectuals to level society's epistemic playing field by returning to 'roots', 'sources', 'phenomena', 'foundations' or 'first principles' tends to happen at times when the authority of the academic establishment has been weakened because its normal political and economic protections have also been weakened. This in turn undermines a precondition for the legitimacy of academic rentiership, namely, an acceptance by non-academics of the self-justifying – or 'autonomous', in that specific sense – nature of the entitlement structure of academic knowledge production. The Renaissance exemplifies what the turn against rentiership looks like in action: charges of 'corruption' loom large, most immediately directed at academics violating their fiduciary responsibilities as knowledge custodians by transmuting the public's trust into free licence on their own part. This is the stuff of which 'research fraud' has been made over the past five hundred years – and which of course looms large today across all academic disciplines, now as part of an ongoing crisis in the peer review system (Fuller, 2007a: Chap. 5). But behind this charge lurks a morally deeper one, namely, that academics are not merely protected by larger powers but collude with them so as to constitute that mutual protection racket known in political sociology as 'interlocking elites'. The presence of scholars and clerics as royal courtiers in early modern Europe is the most obvious case in point. In this respect, Galileo remains a fascinating figure because he was always trying to turn that situation – his own – against itself (Biagioli, 1993). The modern-day equivalent is, of course, researchers whose dependence on either state or private funding agencies effectively puts them 'on retainer', yet who are then expected to act as 'honest brokers' in the public interest (Pielke, 2003). Nevertheless, it took the Protestant Reformation to begin to draw a clear line under these excesses of academic rentiership. And another quarter-millennium had to pass before the point was driven home in a philosophically principled fashion.
What Immanuel Kant advanced as a 'critique of pure reason' was in fact a systematic deconstruction of the academic arguments normally used to justify the status quo – that is, the set of interlocking elites in theology, law and medicine who would have the public believe that they live in the best possible world governed by the best possible sacred and secular rulers. At most such beliefs were 'regulative ideals' (aka 'ideologies') that served to buy time for the elites to carry on. The salience of Kant's critique in his own day lay in the increasingly open contestation for epistemic authority in the royal court among the learned professions – the clergy, law and medicine – in a period of uneven secularisation. Kant (1996) himself shone a light on this matter in his polemical 1798 essay, The Conflict of the Faculties, a major inspiration for the Humboldtian university. Notwithstanding its ever-changing fashions and often to the exasperation of fellow academics in other disciplines, philosophy – at least in its pedagogy – has stuck doggedly to the Humboldtian mission. But while Humboldt continues to inspire the idea of an anti-rentier academy, it took little more than a generation for his vision to morph into a 're-professionalisation' of academic life, resulting in the discipline-based structure of the university that persists to this day, albeit under increasing pressure. This return to rentiership is normally attributed to nation-building and a broadly 'Positivist' ideology that regarded academic experts as the secular successors of the clergy in terms of ministering to the needs of modern society. In German academia, it also carried more metaphysical implications – namely, that each discipline is regulated by a distinct 'value' that it tries to pursue, which in practice served as a quality control check on scholarship (Schnädelbach, 1984). This 'Neo-Kantian' turn was associated with an extended training period for all disciplines, culminating in the 'doctorate' as the ultimate mark of accreditation. And again, like past rentiers, these new-style academic professionals became increasingly entangled with the affairs of state and industry, the high-water mark of which was Germany's establishment of the Kaiser Wilhelm Institutes shortly before the First World War. This turned out to be a fatal embrace, as virtually the entire academic establishment was behind the war effort, which resulted in a humiliating defeat. The Weimar Republic that followed has often been characterised as 'anti-intellectual', or more specifically 'reactionary modernist' (Herf, 1984). However, it would be more correct to say that it witnessed an anti-rentier backlash that levelled the epistemic playing field between academic and non-academic forms of knowledge. Thus, folk knowledges, astrology, homoeopathy, parapsychology and psychoanalysis – all of which had been regarded as beyond the pale of academic respectability – acquired legitimacy in the 1920s and '30s through popular acceptance. They were facilitated by the emergence of such mass media technologies as radio, film and tabloid newspapers, which provided alternative channels for the spread of information. At the same time, academia responded by returning to 'fundamentals' in two rather distinct ways, which anchored the rest of twentieth century philosophy and much of cultural life in the West and beyond.
One was existential phenomenology, which like the Renaissance Humanists and the Protestant Reformers focused on the deception and self-deception (aka 'corruption') involved in expert rationalizations of our being in the world.
Here what Heidegger originally called 'deconstruction' played an important role in recovering the original language of being, which in his hands turned out to be the aspects of Greek to which German is especially attuned. However, I will dwell on the second return to fundamentals, which was via 'logical positivism' (in the broad sense, to include Popper and his followers), as it has been the more comprehensively influential. It is worth noting that while existential phenomenology and logical positivism are normally portrayed as mortal enemies, they were agreed in their near contempt for the leading philosophical custodian of academic rentiership in their day, the Neo-Kantian Ernst Cassirer (Gordon, 2012; cf. Fuller, 2016c). The logical positivist strategy was to strip down academic jargon – the calling card of rentiership – to data and reasoning that in principle could be inspected and evaluated by anyone, including those who are not privy to the path that the experts took to reach their agreed knowledge claims. In this context, the prospect that the future might be like the past – 'induction' – was regarded with considerable suspicion: even if it appears that the future will resemble the past, why should it? Our earlier discussion of Nelson Goodman epitomized this attitude. Indeed, Goodman (1955) called sticking with the current hypothesis in the face of an empirically equal alternative 'entrenchment', a not altogether positive term that implies a path-dependent mode of thought. The larger point is that the act of projecting the future is a much more evaluative enterprise than the phrase 'extrapolating from the data' might suggest (Fuller, 2018a: Chap. 7). By the 1960s this concern morphed into a general problem of 'theory choice', whereby at every step along the path of inquiry one needs to ask why one hypothesis should be advanced rather than another if both hypotheses account for the same data equally well. Thus, Bayes' theorem rose to prominence in epistemology as a vehicle to inspire and record experiments that bear differently on otherwise comparable hypotheses. In effect, it kept the opportunity costs of carrying on with past epistemic practices under continual review. In the philosophy of science, it led to a tilt toward theories that suggested more paths for further inquiry and other such horizon-broadening, future-oriented values (e.g., Lakatos, 1978).
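The underlying logic can be stated in the textbook 'odds' form of Bayes' theorem – a standard formulation, supplied here only for illustration – for two rival hypotheses H1 and H2 confronted with evidence E:

\[
\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \times \frac{P(H_1)}{P(H_2)}
\]

When both hypotheses fit the evidence equally well, the likelihood ratio equals 1 and the posterior odds simply reproduce the prior odds – Goodman's 'entrenchment' in probabilistic dress. Only an experiment designed to make the likelihoods diverge can shift the balance between the two hypotheses.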
Much of the long-term cultural influence of logical positivism relates to its extramural interests in overcoming the 'burden of the past' in multiple senses, ranging from differences in natural languages to the tendency of art and architecture to reinforce historical memory. In this respect, the movement was ruthlessly 'modernist' and occupied the Left of the political spectrum, at least during its European phase, when it was seen as part of 'Red Vienna' (Galison, 1990). Moreover, logical positivism's transatlantic passage involved a further degree of 'complexity reduction' that its proponents largely welcomed – namely, the need to reformulate in English everything that was originally expressed in German. This process, common to other intellectual migrants in the interwar period, was somewhat easier than supposed, as the German connotations ('excess baggage') were often gladly lost in English translation. In this respect, logical positivism's much vaunted respect for 'clarity' of expression can be seen as valorizing the very features of translation that Heidegger would regard as 'forgetfulness of Being'. In any case, this was how English quickly became the undisputed language of international exchange (Gordin, 2015: Chaps. 5 and 6). To be sure, against this general trend stood the Frankfurt School genius, Theodor Adorno, who returned to Germany as soon as he could after the Second World War to recover what Judith Butler (1999) has popularised for US academia as the 'difficulty' in writing that is necessary to pursue the deepest issues of concern to humanity. Yet, in many respects, Butler's appeal to Adorno's 'difficulty' is no more than rent nostalgia, by which I mean a longing for a mistaken understanding of a past based on rentiership. Adorno himself was a creature of Weimar culture who easily mixed disciplines and worked in think tanks both in Germany and the US until his postwar German return. Indeed, much of what Adorno published in his lifetime was music criticism, which could be understood by a broad range of readers – and in later life he even wrote a book entitled The Jargon of Authenticity, a takedown of Heidegger's much vaunted 'difficulty' (Adorno, 1973). Moreover, the 'difficult' postmodern French theorists on whom Butler has based most of her own work were not nearly as academically entrenched in their homeland as they came to be in the United States. The likes of Foucault, Deleuze and Derrida cycled in and out as public intellectual figures in France, each trying to grab attention from the others, which helps to explain their flashy style – but not why Americans should base entire journals and schools of thought around them (Cusset, 2008). Notwithstanding her credentials as a 'continental' philosopher, Butler is more parochially American than she and her admirers think.
6.5 The American Hegemony and its Protscience Digital Afterlife

The underlying inconvenient truth is that even though most of the German migrants were willing to abandon the academic rentiership of their homeland, they found themselves confronted with an American academic system that by the end of the nineteenth century had come to emulate the doctorate-driven 'Wilhelmine' model of academic rentiership, again as part of a nation-building exercise. Most of the transatlantic migrants responded as the logical positivists did, starting already with Hugo Münsterberg (1908), whom William James had invited to run his psychology lab at Harvard and who then went on to invent the role of the 'expert witness' in the courtroom. Having been outsiders in the system they left, these Germans became doyens in the system they entered. A key moment was the 1912 presidential election of that Bismarck-admiring founder of the US political science profession, Woodrow Wilson. Once Wilson masterminded America's late but decisive entry into the First World War, the sense that Europe was transferring stewardship of a valuable piece of intellectual real estate called 'Western Civilization' to the United States became palpable.
The main academic indicator was the rise of American undergraduate courses in the 'Great Books', the core of a renovated liberal arts curriculum. By the 1980s it became the intellectual terrain on which the 'canon wars' were fought – perhaps the clearest display of academia's rentiership tendencies at the pedagogical level (Lacy, 2013: Chap. 8). The question explicitly put on the table was: 'Who owns the curriculum?' In this context, 'identity politics' opened the door to retrospective intellectual entitlement claims by 'dispossessed', typically 'non-Western' cultures, the result of which for better or worse has been to breathe new life into academic rentiership. A sign of the times is that 'cultural appropriation' tends to be seen pejoratively as 'plagiarising' rather than as extending the value of the appropriated culture. The idea that the US was the primary – perhaps even the only reliable – standard-bearer for 'Western Civilization' reached its peak in the Cold War. Unsurprisingly, it was also the period when the 'Great Books' approach to liberal arts reached its peak, even though the launching of Sputnik in 1957 diverted much of the pedagogical focus toward the more usual research-based form of academic rentiership, in which students were encouraged to specialize in the physical sciences to help defeat the Soviets (Fuller, 2000b: Chap. 4). However, I don't wish to dwell here on the remarkable faith in the projectability of academic expertise onto geopolitics that this diversion involved – or the role of the US psychologist Jerome Bruner in midwifing this development, for that matter. In any case, it resulted in students flooding into physics and chemistry courses, the market for which then collapsed at the end of the Cold War. Rather, I would turn our gaze to the 'auxiliary' knowledge enterprises that capitalised on the situation by presenting cheaper alternatives to what passed as 'mastery' of the Great Books. They were the harbingers of the knowledge economy we currently inhabit. I refer here to such Cold War creations as Masterplots, Cliff's Notes and Monarch Notes. Although they presented themselves as 'study guides', they were basically in the business of supplying the sort of understanding of a 'Great Book' that was needed to pass an academic exam or sound intelligent in conversation. Implied in this multi-million-dollar business, which took advantage of the cheap paperback market, was that the knowledge required of even the most classic of works (e.g., Shakespeare) is sufficiently patterned that it can be presented more efficiently to today's readers than in the hallowed originals. Moreover, the strategy seems to have worked, because an internet-based version started by Harvard students – SparkNotes – has flourished since 1999, now as part of the New York-based bookselling empire, Barnes & Noble. But more importantly, the internet has permitted a plethora of spontaneously generated summaries and commentaries, which when taken together undermine the finality of academic authority. This explains the ascendancy of Wikipedia as the world's most widely used reference source (Fuller, 2018a: Chap. 5). Let me conclude with a few observations about this development, which even now tends to be regarded as unseemly by professional academics, yet may contain the most robust seeds of anti-rentiership to date.
I began with a reflection on the classical ideal of knowledge as 'explaining the most by the least'. Historically it has involved judgements about what is and is not necessary to include in such an explanation, or 'modal power'. These judgements have been influenced by the communication technology available – and, more importantly, who has control over it. The modern threat to academic rentiership began with the introduction of a technology over which academics did not exert monopoly control, namely, the printing press. To be sure, academics generally welcomed the ensuing legislation – from papal imprimaturs to royal licences – that was designed to contain printing's threat to their rentiership. However, the advancement of Protestantism on the back of mass Bible reading campaigns seriously undercut these relatively ineffectual attempts at censorship. As a result, the great modern political and economic revolutions typically happened first in the wider society already capable of making use of the new technology and then only grudgingly in academia. The internet promises to revisit this situation with a vengeance. A general feature of modern revolutions from Martin Luther onward is that their instigators were trained in the old regime but for whatever reason lacked sufficient personal investment to perpetuate it. We now live in a time of unprecedented levels of formal education yet widespread disaffection with the judgements taken by its academic custodians. Perhaps that is how it should be. The emancipatory potential of education is not fully realized until the educators themselves are superseded. This would certainly explain the robust empirical finding that the more people have studied science, the more they take exception to expert scientific opinion – yet all along without losing faith in science itself. Wikipedia itself is a good case in point. Notwithstanding the anonymously non-expert nature of its contributors, it still requires adequate sourcing for the claims made on its pages, potentially giving readers the opportunity to 'reverse engineer' the content of the entries by returning to the original sources. I have called this phenomenon 'Protscience', a contraction of 'Protestant Science' (Fuller, 2010b: Chap. 4; Fuller, 2018a: Chap. 5). Like the original Protestantism, Protscience requires an alternative medium for expressing what the old regime represses. Step forward the internet, the true successor to the printing press (pace Marshall McLuhan), especially once the World Wide Web equipped personal computer interfaces with hyperlinks and audio/video displays, which have served to fuel the now hyper-educated imagination. This enhanced internet publicises, amplifies and democratises the quiet elite revolution that had begun with what the media theorist David Kirby (2011) has called 'Hollywood Knowledge', whereby academic experts freely migrated to that most non-academic of media – film – in order to explore their own and their audience's capacities to imagine alternative worlds that academia normally discourages as 'improbable' if not 'impossible'. An early case in point is the German-American rocket scientist Wernher von Braun's admission that Fritz Lang's 1929 proto-feminist Weimar film, Frau im Mond, inspired the idea for how to launch a rocket into space (Fuller, 2018a: 38). If Plato were advising today's defenders of academic rentiership, he would tell them that the biggest mistake their political and economic protectors made was to regard emerging alternative communication technologies – starting with the printing press – as possibly being of mutual benefit.
Thus, instead of Plato's preferred route of censorship, modern power elites have tended either to tax new media or to keep them on retainer as propaganda vehicles – and sometimes both, of course. While the power elites have thought of this as a win-win situation, placating their potential foes while presenting an outwardly 'liberal' image, historically it has only sown the seeds of dissent, which eventually led to a turn away from any entrenched form of power, including academic rentiership. Here it is worth recalling that most revolutions have occurred in societies that had already been liberalised to some extent, but not up to the expectations that the liberals had promoted (Wuthnow, 1989). The general lesson here is that tolerance for the dissenting views expressed in alternative media serves to dissolve the ultimate taboo with which Plato was concerned, namely, the boundary between 'the true' and 'the false'. We all end up becoming 'post-truth'. In this context, 'fiction' is ambiguous, in that in the first instance it merely implicates a maker in whatever image of the world is presented. And of course, the mere fact that something is made does not necessarily mean that it is 'false'. Thus, some further judgement is required, which in turn threatens to reveal the exercise of modal power over what people are allowed to think is possible. The modern accommodation has been to regard fiction as 'literature' with its own academic space that can be judged on its own terms, without reference to other forms of knowledge. In this respect, it has simply followed the path of other fields of inquiry that have been domesticated by becoming an academic discipline. But of course, lawyers and scientists who adopt an 'instrumentalist' attitude to their activities routinely regard their 'fictions' as possessing efficacy in the larger world regardless of their ultimate ontological standing. To be sure, Plato found this attitude desirable when restricted to the philosopher-king but socially destabilizing when permitted in society at large, which was his main complaint against the Sophists (Fuller, 2018a: Chap. 2). The internet adds an interesting dimension to a key feature of this story, which requires a backstory. It has often been remarked that many ingredients of Europe's Scientific Revolution were first present in China, yet the comparable Chinese scholars and craftsmen never interacted in the way they started to do in twelfth century European cities such as Oxford and Paris, which over the next five centuries paved the way for modern science and technology. The relatively free exchange between European scholars and craftsmen allowed ideas and formulas to be seen in terms of how they might be realized more concretely, as well as how machines and techniques might be extended and improved (Fuller, 1997: Chap. 5). This allowed for an empowering interpretation of the 'maker's knowledge' principle, famously championed by Francis Bacon, which underlies the centrality of modelling in almost all branches of science today – namely, that one can only understand what one can make. (I say 'empowering' because historically 'maker's knowledge' has been invoked to suggest both strong limits and no limits to human knowledge.) The internet enables this largely self-organized activity to bear fruit much more quickly through explicit networking, especially 'crowdsourcing', whereby invitations are issued for those with the relevant aspirations to hook up with those with the relevant skill sets.
This now commonplace cyber-practice opens up the space of realizability by creating the conditions for the surfacing of 'undiscovered public knowledge', which allows already existing pieces to combine to solve puzzles that were previously regarded as insoluble (Fuller, 2018a: Chap. 4).
In short, what looks like a glorified matchmaking service can render the impossible possible, and thereby diffuse modal power. For a parting shot at academic rentiership, consider the following. The only difference between an establishment scientist like Richard Dawkins, who says that the public opposes evolution because they don't know enough cutting-edge science, and an establishment humanist like Judith Butler, who says that the public opposes postmodernism because they don't know enough cutting-edge humanities, is that Dawkins is more urgent than Butler about the prospective consequences for society if the public doesn't actively profess faith in his brand of Neo-Darwinism. Whereas Butler simply wants to continue reaping the rewards that she has been allowed from a relatively insulated (aka 'autonomous') academic culture, Dawkins insists on nothing less than a conversion of the masses. In a democratic knowledge economy that treats academic rentiership with suspicion, neither should be allowed to stand, though Dawkins may be admired for his forthrightness. But make no mistake: Butler and Dawkins are the two sides of academic rentiership – the parasite (Butler) and the colonist (Dawkins) of knowledge as a public good.
6.6 Postscript: Are Neoliberals Right to Want to Reduce Knowledge to Information?

Stewart Brand – editor of the Whole Earth Catalog, founder of the Long Now Foundation and godfather of the Silicon Valley mindset – famously remarked in 1984 at one of the original hackers' conferences: 'Information wants to be free'. It was apparently meant as part of a formulation of the complex economics of information. Put in its strongest form, the idea is this: as the cost of acquiring information is driven down by efficiency advances in technology (aka 'information wants to be free'), the cost of getting the information you need increases because you are now exposed to so much of it (Clarke, 2000). A compounded version of this formulation describes the quandary facing today's internet users in our 'post-truth condition', since information about the missteps and errors of established authorities is also routinely part of the mix. Moreover, Brand's insight suggests that a market will open for agencies capable of customizing the information overload to user needs. In effect, a supply-driven change has manufactured new demand – a take on 'Say's Law' for our times.
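One minimal way to formalize this point – the notation is offered only as a sketch: let c be the unit cost of accessing an item of information and N(c) the volume of items one must now screen to find what one needs, with N rising as c falls. The total cost of getting the information you need is then

\[
C \;=\; c \cdot N(c),
\]

which can rise even as c approaches zero, provided N(c) grows faster than 1/c. Cheap information thus manufactures its own demand for filtering.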
Ever since the advent of the daily newspaper in early eighteenth-century England, modern mass and social media have been largely dedicated to filling this market. It is worth observing that whatever else this dynamic demonstrates, it is not that information is inherently a public good. This is not to deny Brand's premise: information arguably does want to be free. But information may be set free in all sorts of ways that then require that people find the means to turn the resulting flow to their advantage. Of course, the media – including what is nowadays called 'social media' – have spontaneously risen to the challenge, but the question remains whether the media truly make information available to all and, even if they do, whether they do so to everyone's benefit. Those two conditions need to be met for information to be deemed a public good. In other words, information must constitute genuine knowledge – and this requires a specific kind of sustained effort that goes beyond the removal of barriers to information flow. While the media certainly channel information in specific ways, their modus operandi is different. Trying to increase market share is not the same as aiming for universal coverage. On the contrary, depending on the return on investment, a media outlet usually finds after a certain point that its resources are better spent shoring up its current market share than increasing it still further with greater outreach. This helps to explain why welfare economists have regarded the state as the natural deliverer of 'public goods', however they are defined. This reflects a certain view about the basis of the state's legitimacy: namely, that it is predicated on universal coverage. This means both that everyone within its borders is subject to its rule and that everyone expects the rulers to be held to account by the ruled. The latter condition has become increasingly important with the democratization of politics. Moreover, when welfare economists speak of the state's role in delivering public goods, they often invoke the idea of 'market failure', which suggests an interesting epistemic relationship between the state and ordinary market players such as business firms. The paradigm cases of public goods tend to be about infrastructure, such as roads and utilities, without which business could not be conducted efficiently, if at all. However, no single firm or even group of firms can imagine turning a profit from investing in such matters, due to all the others who would equally benefit in the process. These hidden beneficiaries would be effectively 'free riders' on the investment, who would also be given an incentive to compete against the original investors. However, the state – understood as an epistemically superordinate entity – can see how this kind of investment would in fact benefit everyone in the long term, including even the reluctant investors. Such a judgement presumes that the state has the capacity to see more comprehensively – in both spatial and temporal terms – than any potential business firm under its political remit. We see once again a politically domesticated – that is, non-Marxist – replay of the Socialist Calculation debate of the 1920s, which effectively split the social democratic ranks of Europe into 'liberal' and 'socialist' wings, a division that grew during the Cold War and morphed afterwards into today's 'libertarian' and 'social justice' tendencies in democratic regimes more widely. At stake here is much more than simply whether the state or the market can best provide public goods. It is about the very existence of public goods as anything other than an artifice of the state. The suspicion is that it is only because the state sets an artificial boundary around the sphere of social relevance – namely, the entire human population within its jurisdiction – that the prospect of certain people lacking certain goods becomes a problem. After all, even more basic than public goods are the 'natural goods' originally identified by Adam Smith, such as air and water.
And while Smith was right that in one very clear sense such goods are not scarce at all, they have been of variable quality at the point of access – even in Smith's day: pollution has always been an issue. Indeed, the mortality of humans and other life forms is directly related to this variability. Perhaps not surprisingly, by the end of the nineteenth century, some radical liberal (aka 'laissez faire') economists were proposing that the variability of natural goods contributes to Darwinian natural selection, comparable to a market in which the buyer must always beware because the seller is relatively indifferent to the quality of his goods – not only because he monopolizes the market but also because he expects a high turnover in buyers. So, even if some are poisoned by the air and water, others will soon emerge to replace them. We now call this argument 'Social Darwinist'. Nevertheless, it resurfaced during the recent COVID-19 pandemic among those who regarded the prospect of vaccinations as a 'counter-selectionist' artifice, much as libertarians might oppose government subsidies for suppressing the 'natural' outworking of the market. In that case, infant and geriatric mortality rates should be treated in the same spirit as high failed business start-up rates: both should be tolerated in the name of what epidemiologists today call 'herd immunity', as these 'failures' point to the limits of what constitutes a sustainable population in a given ecology (Fuller, 2020a: Chap. 12). I have raised the nature/artifice dichotomy because it goes to the heart of the normative question that underwrites the idea of the public good in a strong sense that implicates state involvement: is nature the standard or merely the default position? When universitas ('corporation') entered Roman law in the later Middle Ages, the idea was to manufacture an 'artificial person' whose ends transcend the interests of those who constitute it at any given time. Such entities originally included churches, monasteries, universities, city-states – and much later, nation-states and finally business firms. Before that time, personhood was by default based on parental descent, with temporary exceptions made for goal-specific associations (e.g., a military or commercial expedition) that required novel social arrangements for their execution. So, the exact relationship between 'artificial' and 'natural' persons in the new legal environment was unclear, and it resulted in an increasingly fractious European political scene. However, Thomas Hobbes shifted the debate decisively in the mid-seventeenth century by explicitly arguing for the superior power of an artificial person ('Leviathan') over all natural persons. From that point onward, the prospect of the state as a reformer, if not outright enhancer, of humanity's default settings became increasingly presumed in politics, even as theorists debated the merits of various authoritarian and democratic regimes. And 150 years after Hobbes, it also provided legitimacy for Humboldt's reinvention of the university as central to the modern nation-building project. However, as the state recedes as the core artifice for human transformation, will there need to be another machine for which the university can serve as the ghost? And if so, might that 'New Leviathan' be artificial intelligence?
Chapter 7
Appendix: Towards a Theory of Academic Performance
Abstract The Appendix sketches a theory of academic performance for the aspiring Humboldtian. Its fundamental principle is that speaking and writing are orthogonal media of communication. Both need to be cultivated and allowed to influence each other, but without prioritizing one medium over the other. Indeed, speaking and writing should influence each other in the spirit of improvisation, very much as in music, where it is the wellspring of creativity. This point is related to the original humanistic basis of the medieval university and then brought up to date with a proposal for a new set of 'Three Rs' for academic performance based on my own personal experience: Roam, Record and Rehearse. The Appendix ends with my own original academic performance, an address to my university graduating class, which foreshadowed the shape of things to come.
7.1 To Speak or to Write? That Is the Question

Academics spend so much time worrying about how students should present themselves as knowers that we rarely set a good example ourselves. We are more concerned with what happens in the exam room than the classroom. In short, we fail to regard teaching as a performing art, something that is integral to the personal development (Bildung) of not only the student but also the teacher. Moreover, this failure has enabled the administrative reduction of pedagogy to 'content delivery', which in turn has invited a host of anti-academic innovations ranging from 'flipped classrooms' to ChatGPT, the overall effect of which is to diminish the central role that teaching has played in the Humboldtian university. It may even threaten teaching with extinction as part of the academic profession, given the predominance of short-term contract teachers in higher education today and the overwhelming focus on research in postgraduate training. It is against this background that I offer some guides to the private and public conduct of the aspiring Humboldtian. There is a time-honoured tradition – one that goes back to the ancient Greek Sophists – that says that when it comes to important matters, one should always write down one's thoughts before speaking them (Fuller, 2005: Chap. 1).
Many reasons are given for this advice. Writing can nail down specific thoughts that might be difficult to recall in speech. Writing also allows complex thoughts to be laid out in a logical fashion, something that can be hard to do when speaking 'off the cuff'. While this seems good advice, in fourth century BC Athens it encouraged the sort of staged public performances that inspired citizens to engage in increasingly reckless behaviour, resulting in the city-state's downfall, as recounted in Thucydides' Peloponnesian War. It left an indelible mark on the young Plato, who as an adult came to be a principled champion of literary censorship. To be sure, writing before speaking remains the presumed academic modus operandi, but in a way that might meet Plato's approval, since if nothing else, academic writing is 'disciplined'. But this whole way of thinking about the relationship between speaking and writing is misleading. In fact, they are orthogonal communicative media. I learned this lesson in the early 1980s when I was preparing to give my first academic 'talk', which meant reading from a specially written text. I was in my twenties, still working on the Ph.D., never having prepared a forty-minute lecture. The event involved my travelling from Pittsburgh to Denver, about 2000 kilometres. This was just before air overtook train as the cheapest form of transcontinental travel in the United States – and before the portable typewriter was replaced by the personal computer as the cheapest means of producing text. Thus, I spent nearly a day and a half on a train romantically named the 'California Zephyr', banging out my paper on an Olivetti portable typewriter. It was not a pretty sight – or sound – for fellow passengers. On the day of my talk, I read the text much too quickly, probably because I had just written it and so could anticipate each succeeding sentence all too easily. Nevertheless, the event was a reasonable success. The University of Colorado at Boulder, which sponsored the talk, became my first employer a couple of years later. However, it was clear to me from the audience questions that only brief fragments of the talk were understood, though they were sympathetically supplemented with the questioners' own concerns. This allowed me to connect what little they understood of what I said with what really mattered to them. I have come to believe that this is the best outcome of an academic 'talk', when the academic reads a specially prepared text. This is true even for senior academics speaking at more distinguished conferences, with presumably better-informed audiences. Yet most of these interactions between speaker and audience end up being exercises in interpersonal diplomacy. After all, both parties have a vested interest in not appearing foolish, and they realize that if one party looks foolish, the other might appear foolish as well. To avoid the worst outcome of academia's version of the Prisoner's Dilemma, a lot of bullshit is generated during the 'Q&A' after the talk to 'save face'. I believe that this phenomenon has led many speakers – and not only in academia – to regard the Q&A period as a slightly annoying form of public relations, specifically as an opportunity to repeat the original message to members of the audience who may not have quite got it the first time round.
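Schematically, the dilemma can be set out as a standard two-player payoff matrix – the numbers are merely illustrative – in which each party chooses between candour (risking exposure) and saving face:

\[
\begin{array}{l|cc}
 & \text{Questioner candid} & \text{Questioner saves face} \\
\hline
\text{Speaker candid} & (2,2) & (0,3) \\
\text{Speaker saves face} & (3,0) & (1,1)
\end{array}
\]

Face-saving strictly dominates for both parties, so the exchange settles into the mutually mediocre (1,1) outcome – bullshit all round – even though mutual candour at (2,2) would serve both better. That is precisely the structure of a Prisoner's Dilemma.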
Given this undesirable state of affairs, I have concluded that speaking and writing are simply different activities that serve different purposes and thus should be judged differently. Of course, the two activities may productively relate to each other.
The most productive relations occur in the fields of drama and music – and hence Plato's concerns about their deployment. In both cases, writing is a work of composition, and speaking is a work of performance. Each draws on the other, but the art that is created in each case is distinct. Playwrights and actors are equally but differently valued in the theatre, just as composers and performers are equally but differently valued in music. However, I want to make the point more broadly. Writing is a solitary and self-involved process. The audience is located in one's own mind. The result can be quite articulate and sophisticated, but invariably difficult to follow unless the reader can reproduce the author's thought process. In contrast, speaking is a publicly oriented process, one best conducted by looking at the audience, not a text. The speaker holds forth for shorter periods at a time and is compelled to respond sooner after she has concluded her communication. While a great writer need not be a great speaker (or vice versa), academics who know what they're talking about should find the move from writing to speaking relatively easy, because the hard work of writing is inventing a pretext for what one wants to say. If the audience already provides the pretext, then it is much easier to cut to the chase. Nevertheless, academic culture privileges the writer over the speaker. Popular culture is a different matter. We remember the singers and actors rather than those who write their songs or script their lines – though all of them may be amply rewarded in their own way. Interestingly, law and business – despite the significance that they officially attach to the written word – nevertheless accord considerable respect to those masters of the spoken word, courtroom advocates and salespeople. I am glad that my training in writing and speaking was conducted separately. In the case of writing, I learned how to weigh the selection of individual words and to read many words in sequence before passing judgement on whether they were true, false or even sensible. Learning to write as a distinct art gave me a tolerance for ambiguity in expression. Very few things make sense immediately in writing, unless you have previously seen what has been written. To be sure, this often happens, because authors repeat themselves and each other. This is why 'skimming' and 'scanning' a text are often a successful substitute for properly 'reading' it. However, repetition is usually considered a vice – not a virtue – of writing. (It may even count as 'self-plagiarism'.) Writing is supposed to express a unique thought in each sentence, and 'reading' is the art of dealing with a text as if that supposition were true. The virtues of speaking are very different. Repetition is tolerated – and even encouraged – to keep an idea in the forefront of the audience's mind. This point was first made clear to me when I was taught to recite various speeches, poems and songs during my Jesuit schooling. Such works include 'motifs', 'refrains' and 'choruses' precisely because they have been written to be performed in voice. It is against this backdrop that so much of twentieth-century music and poetry – and academic writing, one might add – is said to be 'difficult'. These compositions typically lack the performance-oriented features of more classically composed works.
For example, if the sequence of notes in a piano-based musical score is too complicated, then the pianist cannot simply play it from memory but must perform at a level below her normal technical expertise, as she is forced to read the notes while playing.
Nevertheless, the virtues of speaking come out clearest in relation to writing. When as a student I was required to recite speeches, poems and songs, I had to engage in minor forms of improvisation. I had to decide where to place emphasis on particular words and phrases to get the point across to a particular audience. Sometimes I felt compelled to alter the original composition to improve the flow of my performance. There is an art to this too, since audiences, not least the instructor, may fail to respond well to your improvisation – I remember many frowns and grimaces. Over the years, I have come to believe that improvisation – that risky zone between writing and speaking – is the true crucible of creativity. You need to be able to both write and speak well, but then you must play them against each other. Jazz is the most obvious art form based on improvisation. But I would add teaching as well. Great improvisers go beyond simply making one text come alive. Rather, they can combine multiple texts with surprising results that would not be evident from performing the texts individually. In practice, this means that when I teach a course, I assign an eclectic mix of readings, the mutual relevance of which may not be obvious to the naked eye. However, the point is for the student to attend my lectures to see how I weave them together – and then try to do something similar in their assigned work. If you see writing and speaking as distinct activities, as I do, then your own patterns of written and spoken discourse will change. Writing will become more compressed, very much like a musical score, in which no note is wasted. However, speaking will become more expansive, very much like a raconteur who can spin out a narrative indefinitely. This may create considerable dissonance in how others perceive you. People generally find me a very difficult writer but a very accessible speaker. While the sharpness of this perception surprises me, I accept it – and I believe that others should cultivate it. Because improvisations are by definition tied to specific performances, their sense of 'creativity' may be difficult to pin down beyond the bare fact that a particular audience finds what it hears likeably novel. Moreover, audio and video recording has helped to convert improvisations into texts in their own right, for others to study and improvise upon. This is the standard of performance that you should always aim for – what artists and musicians call 'virtuosity'. Your current performance should never be reducible to a past performance, let alone to a composition on which the performance is nominally based. In other words, what you say should always be worth recording. You should not speak as you write, but your speech should be worth writing about.
7.2 Improvisation as the Wellspring of Human Creativity

The value of improvisation lies in its public demonstration of creativity. It was precisely this aspect of rhetoric that the ancients placed in such high esteem. It manifested the free exercise of reason, the hallmark of the self-realized human – someone worth recognizing as a peer in the polity. The Greeks called improvisation heuresis (the root of 'heuristic') and the Romans inventio (the root of 'invention').
‘heuristic’ and ‘invention’ are nowadays terms mainly associated with the technological, even the makeshift. To understand the temporal continuity from antiquity to modernity, imagine that the ancients saw our individual bodies as unique instruments that the mind plays. In the Middle Ages, this attitude was formalized as a set of disciplines that impart what are nowadays called ‘transferable skills’. They provided the original basis for the liberal arts and was taught in two course packets, named (in Latin) for the number in each packet: Trivium (grammar, logic, rhetoric) and Quadrivium (geometry, arithmetic, astronomy, music). Plato’s fingerprints are all over this curriculum, as can be seen by the foundational role played by memory (anamnesis) in both courses. Whereas the Trivium trained the memory to be delivered initially in speech but increasingly also in writing, the Quadrivium added training to the eyes and ears to mediate that delivery. (On another occasion, I might propose a ‘Quinquivium’ of five qualitative and quantitative research methodologies used across a range of academic disciplines that further mediate and enhance ourability to understand and intervene in the world.) Those who underwent this curriculum were thus able to judge the world for themselves without having to rely on the word of others. They could literally ‘speak for themselves’. This capacity for improvisation would be the mark of their freedom – ‘liberal’ in its etymological sense. In this context, Plato’s view of ‘memory’ is crucial. It is the key to the improvisational character of the liberal arts. Plato believed that memory was ultimately collective, and that an individual memory is a ‘recollection’, which is to say, a recombination of an initial set of pre- existing possibilities in the collective mind. In that sense, each ‘remembering’ is an act of improvisation, whereby the individual improvises their identity by creatively elaborating this shared psychology (Bartlett, 1932). In a more scientistic vein, we might say that Plato thought about memory as a kind of probabilistic grammar, which individuals play out in speech and writing with varying degrees of effectiveness. The point of education in the liberal arts, then, is to increase the effectiveness of these improvisations through the various sorts of mental discipline offered in the curriculum. Thus, the liberal arts increasingly focused on rhetoric, which in the Renaissance was epitomized as the ‘art of memory’, whose most distinguished practitioners included those icons of the Scientific Revolution, Galileo and Francis Bacon (Yates, 1966). But the secret ingredient of this curriculum – about which Plato had very mixed feelings – was literacy. Literacy implies that the presupposed grammar of possibilities can be stated as a code, knowledge of which enables multiple, potentially conflicting, instantiations. It’s hard to improvise if you’re the only source of the code. You must then focus your energies on transmitting the code. Thus, in cultures where literacy is either highly restricted or non-existent, your identity depends on others regarding you as an authoritative instantiation of some part of your culture. This is what is normally meant as ‘reproduction’ by both sociologists and biologists. For this reason, Marshall McLuhan’s mentor, Harold Innis (1951), somewhat counter-intuitively, treated both the stone-based writing of the ancient Near East and the spoken Homeric epics as ‘time-biased media’. 
Innis meant that they principally aim to extend the longevity of the message, one through the durability of the medium itself and the other through a system of intergenerational inheritance, whereby invention is regarded as imperfect reproduction. This is why in non-literate cultures speaking tends to be quite rigid and directive – at least from the standpoint of those coming from literate cultures. Without the external memory storage provided by widespread literacy, the personal exercise of memory is the main vehicle for reproducing what already exists (aka social structure). Private remembering is a form of self-discipline (as in 'rote memory'), and public remembering a relatively scarce and even sacred moment of cultural endowment to the audience, who would have no other way of acquiring the culture. Consider the sense of pastoral security that a literate clergy provided to the illiterate laity well into the modern era. Priests enjoyed a monopoly over the sacred code, in terms of which they could then justify (or not) the 'improvisations' of everyday secular life. It was a world that Plato could have endorsed. However, with the advent of mass literacy in the Reformation, whereby everyone was encouraged to acquire access to 'The Code' (aka the Bible), the human mind was gradually freed from having to reproduce the past. This permitted the showcasing of novelty, which has increasingly been seen as distinctive of humans as a species. Such novelty first flourished in the Protestant splintering of Christianity into denominations, but over the centuries came to be generalized and secularized as market differentiation. Rational choice accounts of religion have been well attuned to this phenomenon, which amounts to a downstream effect of Max Weber's The Protestant Ethic and the Spirit of Capitalism (Stark & Bainbridge, 1987). Improvisation had already become relevant to higher education with the institution of the 'lecture' at medieval universities. On its face, the lecture hall was a factory for mass producing the latest edition of authoritative texts. Specifically, the student audience manufactured texts as they copied lecturers' recitations of their own copies of those texts, supplemented by the lecturers' commentaries. It made the establishment of any copyright-friendly sense of 'authorship' difficult. New comments were often incorporated as part of the commented text. This haphazard textual accretion has arguably made the history of both factual and fictional literature more 'evolutionary' in the Darwinian sense than one might have expected or wanted. (For a vivid Borges-inspired illustration of what this means, whereby Don Quixote 'evolves' into Ulysses over four centuries of successive re- and mis-transcriptions, see Berlinski, 1996: 29.) It also explains the slow pace of intellectual progress until the period we call 'modern', even though most modern ideas had been incubated in medieval universities. But, as Elizabeth Eisenstein (1979) showed, the printing press revolutionized the situation by freeing up the mind's inventive capacity, not merely in speech but now also in writing. One no longer had to provide an embellished repetition of past statements. A simple mention (or 'link', as we now say) would suffice to conjure up the missing background that frames the communication. And while the footnote is normally seen as the start of the modern referencing system, it began as a way to spatially distinguish the multiple arguments that could now be simultaneously pursued on the two-dimensional surface of the written page (Grafton, 1997).
7.2 Improvisation as the Wellspring of Human Creativity
151
The question then became how to spread this 'meta-capacity' across the entire population. This is trickier than it sounds, because 'literacy' is not a univocal skill. For example, literacy as a requirement for citizenship is relatively superficial. It simply means that you read sufficiently well to provide competent written responses to certain exam-like questions – and very often ticking a box will do. A deeper sense of literacy is required for improvisation. It depends on one's having read enough of what others have written to be able to create something that is interestingly new. Thus, the more writing to which you are exposed – whether or not you were the original author – the freer you become, as you have more texts to combine and will have seen them combined in more ways. Indeed, improvisation is most clearly understood as an interpretive performance of something written, be it a score, a poem, a script – or an academic text. In that sense, it is a 'reading' in the broad sense that was popularized by French post-structuralist philosophers in the 1970s – a more metaphysically informed extension of the conventional French academic practice of explication de texte. Jazz is often seen as an inherently improvisational genre because performances tend to be informed by multiple pieces of music that are artfully played off each other to produce a unique result. In this respect, the best teachers are very much like the best jazz artists: they are not simply rehearsing one well-read text but are rather drawing on multiple texts simultaneously, a feature that is only imperfectly represented in an ordinary course syllabus. Students exposed to such a pedagogical improvisation can interpret their own task either as reproducing the performance, rendering it a fixed text (which probably means some stylisation on the part of the student), or as reverse engineering the performance to reveal the interplay of its constitutive elements, ideally to produce a new complex that may also exchange some of the textual elements at play. I prefer the latter, but it also lays itself more open to the charge of 'mere bullshit' (Fuller, 2009: Chap. 4). Improvisation remains one of the few things equally valued by the Humboldtian university and the neo-liberal academy. Indeed, only closet authoritarians fear and loathe improvisation. In effect, the improviser wants the audience to judge him or her sui generis, which is a high-risk strategy, as it simultaneously invites the audience to project their own normative models of what the improviser is trying to do. Some may see mess where others see art – and even if they see art, it may not accord with the improviser's own implied standard. Karl Jaspers went so far as to describe the lecture as 'a high point in one's professional responsibility and achievement, finally to renounce all artificiality' (Jaspers, 1959: 57–8). Unfortunately, academics have become less willing to submit themselves to such trials by improvisation. Instead, they prefer to be judged by scripts written by others – or by themselves in a different state of mind from the one in which they give the presentation. When medieval intellectual life is casually labelled 'Scholastic', this is what is meant. I leave it to historians to determine whether that epithet applies to the Middle Ages itself. Nevertheless, 'Scholasticism' certainly exists as a free-floating signifier that is attachable to a variety of intellectual practices across time and space.
In the case of the High Middle Ages (i.e., starting in thirteenth-century Christendom, largely in response to the Muslim threat to Europe), lecturers legitimized their own words by presenting them as interpretations of, say, Aristotle or the Bible. It was a risk-averse strategy to shore up a sense of politico-religious authority that might otherwise be seen as dwindling. In any case, it is opposed to the innovative spirit of the modern Humboldtian university. This is not to deny the intellectual virtuosity demonstrated under medieval conditions. But fast forward to the present and, given today's questioning of the university's future, we shouldn't be surprised that Scholasticism enjoys a technological facelift in the mindless lip-syncing to Powerpoints that routinely happens in both the lecture halls and the conference centres of academia.
7.3 The Three 'Rs' of Academic Performance: Roam, Record and Rehearse

Several years ago, an enterprising Korean publicist, Douglas Huh, canvassed the study skills of creative people in various fields. I responded that my most successful study skill is one that I picked up very early in life – and that is perhaps difficult to adopt after a certain age. Evidence of its success is that virtually everything I read appears to be hyperlinked to something in my memory. In practice, this means that I can randomly pick up a book and within fifteen minutes say something interesting about it – that is, more than summarize its contents. So, this is not about 'speed reading'. Rather, I make the book 'my own' in the sense of assigning it a place in my cognitive repertoire, to which I can then refer in the future. There are three features to this skill. One is sheer exposure to many books. Another is taking notes on them. A third is integrating the notes into your mode of being, so that they function as a script in search of a performance. If the original 3Rs were 'Reading, (W)riting and (A)rithmetic', I give you the new 3Rs: Roam, Record and Rehearse.

Let's start with Roam. Reading is the most efficient means to manufacture equipment for the conduct of life. It is clearly more efficient than acquiring personal experience. Put better, reading itself is a form of personal experience. In this context, 'Roaming' means taking browsing seriously. By 'browsing' I mean forcing yourself to encounter a broader range of possibilities than you imagined was necessary for your reading purposes. Those under thirty years old may not appreciate that people used to have to occupy a dedicated physical space – somewhere in a bookshop or a library – to engage in 'browsing'. It was an activity that forced you to encounter works both 'relevant' and 'irrelevant' to your interests. Ideally, the experience would challenge the neatness of this distinction, as you came across books that turned out to be more illuminating than expected. To be sure, 'browsing' via computerized search engines still allows for that element of serendipity, as anyone experienced with Google or Amazon will know. Nevertheless, browser designers normally treat such a feature as a bug in the programme to be fixed in its next version, so you end up finding more items like the ones you previously searched for.

As a teenager in New York City in the 1970s I spent my Sunday afternoons browsing through the two biggest used bookshops in Greenwich Village, Strand and
Barnes & Noble. Generally speaking, these bookshops were organized according to broad topics, somewhat like a library. However, certain sections were also organized according to book publishers, which was very illuminating. In this way, I learned, so to speak, 'to judge a book by its cover'. Publishing houses have distinctive marketing styles that attract specific sorts of authors. Thus, I was alerted to differences between 'left' and 'right' in politics, as well as 'high' and 'low' in culture. Taken together, these differences offer dimensions for mapping knowledge in ways that cut across academic disciplinary boundaries. The more general lesson here is that if you spend a lot of time browsing, you tend to distrust the standard ways in which books – or information more generally – are categorized.

Let's move from Roam to Record. Back in New York I would buy five used books at a time and read them immediately, annotating the margins of the pages. However, I soon realized that this was not an effective way to make the books 'my own', in the sense of part of my personal repertoire, let alone constitutive of my unique personhood. So, I shifted to keeping notebooks, in which I quite deliberately filtered what I read into something I found meaningful and to which I could return later. Invariably this practice led me to acquire idiosyncratic memories of whatever I read, since I was effectively rewriting the books for my own purposes. In my university days, under the influence of Harold Bloom (1973), I learned to call what I was doing 'strong reading'.

The practice of recording has carried over into my academic writing. When I make formal reference to other works, I am usually acknowledging an inspiration – not citing an authority – for the claim I happen to be making. My aim is to take personal responsibility for ('own', in today's lingo) what I say. I dislike the tendency of academics to obscure their own voice in a flurry of scholarly references that simply repeat connections that could be made by a standard Google search of the topic under discussion.

This proactive view of notetaking has also influenced my rather negative view of Powerpoint presentations. Powerpoints dumb down both the lecturer and the audience, as both are led to think that they understand more than they probably do. The ultimate casualty is the distinctiveness of the academic encounter, beginning with interchangeable lecturers on short-term contracts and ending with students preferring artificial intelligences to humans as lecturers altogether – assuming they can still tell the difference.

In pre-Powerpoint days, even though the books and articles discussed by lecturers were readily available, the need to take notes made students aware of the difference between speaking and writing as expressive media. Even when lecturers read from 'lecture notes', they typically improvised at various points, and those improvisations often turned out to be the most interesting things they said. And if they simply read verbatim, as if speaking from a medieval podium, the students would notice – and often not favourably. To be sure, the art of capturing those impromptu spoken moments through notetaking is far from straightforward, but there was no consistent alternative, even in the 1970s. That was still the era of clunky tape recorders rather than recording apps on smartphones. Students like me had to rely on our wits to translate meaningful speech into meaningful writing on the spot. This profound skill is lost to students who rely on Powerpoint's false sense of cognitive security.
Finally, let's turn from Record to Rehearse. Rehearsal already begins when you shift from writing marginalia to full-blown notebook entries, insofar as the latter forces you to reinvent what you originally found compelling in the noteworthy text. Admittedly, the cut-and-paste function in today's computerized word-processing programmes can undermine this practice, resulting in 'notes' that look more like marginal comments. However, I rehearse even texts of which I am the original author. You can keep yourself in rehearsal mode by working on several pieces of writing (or creative projects) at once without bringing any of them to completion. In particular, you should stop working just when you are about to reach a climax in your train of thought – an intellectual coitus interruptus, if you will. The next time you resume work you will be forced to recreate the process that led you to that climactic point. Often you will discover that the conclusion toward which you thought you had been heading turns out to have been a mirage. In fact, that so-called 'climax' opens up a new chapter with multiple possibilities ahead.

Some people will recoil from the above, as it seems to imply that no work should ever end – anathema to anyone forced to produce something on schedule to earn a living! Nevertheless, even as the author of twenty-five books, I find that each one ends arbitrarily and even abruptly. (My critics occasionally notice this.) And precisely because I do not see the books as 'finished', they continue to live in my mind as something to which I can always return – and which others are invited to resume and redact. In any case, they remain part of my repertoire, which I periodically rehearse as part of defining who I am to a new audience.

In a sense, this is my solution to the problem of 'alienation' that Karl Marx famously identified. Alienation arises because industrial workers in capitalist regimes have no control over the products of their labour. Once the work is done, it is sold to people with whom the workers have no contact. But of course, alienation extends to intellectual life as well, as both journalists and academics need to write quite specific self-contained pieces targeted at clearly defined audiences with whom they would otherwise have no affinity. Under the circumstances, there is a tendency to write in a way that enables authors to detach themselves from, if not outright forget, what they have written once it is published. Often this tendency is positively spun by saying that a piece of writing makes its point better than its author could ever do in person. And that may well often be true – and damning.

Nevertheless, my own thinking proceeds in quite the opposite direction. You should treat the texts you write more like dramatic scripts or musical scores than completed artworks – that is, 'allographically', in Nelson Goodman's (1968) terms. They should be designed to be performed in many different ways, not least by yourself, which in turn explains my liberal attitude towards plagiarism. Whatever is plagiarised is never put to the same use as the original, not even in a student assessment context. A student who works Kant's prose seamlessly into an assignment on Kant is doing something other than what Kant originally did, because whatever else Kant might have been trying to do in his texts, he wasn't trying to show that he understood himself. That would have been taken for granted by his intended audience. However, the student is obviously in a different position, operating at a distance from the person of Kant and hence trying to show that he understands Kant. Kant's text is incomplete until someone
brings it to life. That person might be Kant himself, or equally it might be a student plagiarist or an advanced ChatGPT program – or some combination! Our intuitions about this matter are confused because we presume knowledge of the identity of the source in each case, which biases our judgement. In this regard, I am recommending that we take a more 'Turing Test' approach to the matter and suspend knowledge of the author. In any case, the text is always in need of execution, which is another way of saying 'rehearsal'. Taken together, Roam, Record and Rehearse constitute a life strategy that has enabled me to integrate a wide range of influences into a dynamic source of inspiration and creativity that I understand to be very much my own.
7.4 Ave Atque Vale! My Own Back to the University's Future

What follows is my Salutatory Address, delivered to the 1979 graduating class of Columbia College, Columbia University, New York. US college graduation ceremonies are traditionally opened by the person who graduated number two in terms of grade point average (all students are ranked together, regardless of subject major). This person is the 'Salutatorian'. The person ranked number one closes the graduation ceremony, and hence is called the 'Valedictorian'.

Much of the animus behind my speech can be traced to my contempt for pre-professional degrees in law and especially medicine. It anticipates the anti-expertise line pursued in these pages. In the case of medicine, the better students routinely achieved artificially inflated grades based on a bell curve distribution of the marks. It meant that a student could do astonishingly well simply by being a couple of standard deviations ahead of their average classmate, without the instructor ever having set a prior standard of adequate performance. (My class valedictorian was a pre-med major.) In contrast, virtually all my coursework was judged by instructors who were laws unto themselves when it came to marking. This meant that, at least in principle, everyone could do very well or very poorly. Ah, the good old Humboldtian days...

The text is reproduced from the original typescript of the speech with no editorial changes. It was delivered two months before the author turned twenty.

***

THE ACADEMY: FROM DIVINITY TO BOVINITY (a fabula rasa)

Steve Fuller, 15 May 1979, Columbia College, NYC

Once upon a time…

…Back in the days when teaching was still a marketable profession, its bargaining power derived from being the sole distributor of a certain indispensable commodity, which we shall call Divinity. Divinity secured this power in several ways, each of which capitalized on the structure of the academic establishment.

First, Divinity was attributed a constant presence in everyday life. This very nicely made up for the potential weakness of education lasting too short a period of time to make any difference in the student's life. However, in order to bolster this divine
ever-presence, a second trait – that of limited access – was necessary. On an obvious level, Divinity appears to be a fabrication since students do not know of its intricacies until they are educated. Yet, if the academicians argue that its presence is a secret one which requires privileged knowledge, then the possibility of fabrication gets turned around to emphasize the fundamental stupidity of the students.

But there is one more element that was needed to seal this power, and that was the explicit superiority of Divinity in relation to other possible realities. Making the unseen esoteric is not nearly as difficult as making it indispensable. A student may be quite willing to accept his ignorance of Divinity in order to capitalize on other virtues, such as his sexuality. But the academicians, clever as usual, equated the unseen with the initial determining force of the divine agency. They then attributed their own esoteric grasp of the matter to the tapping of a natural resource that recreates this initial occurrence of Divinity in every action, namely, the soul. If the student became content with his ignorance, say by luxuriating in the splendor of sexual awareness, his soul was subsequently doomed, which was said to be a rather unpleasant state-of-affairs – so unpleasant that it was unimaginable. Admitting this much ignorance on the part of the academicians was important so as to guarantee a certain verisimilitude in their teachings, which were those of mortals feebly contemplating the truth. In order to be effective, the Divinity had to be separated from the diviners; the product from the producers. Producers come and go, but the product always remains. We have all heard that somewhere before.

For all these slanderous comments about divine academicians, there is one overriding positive note, namely, that Divinity encouraged a consistently critical attitude toward anything that exists. There was talk about the illusion of the senses that might make us think the usual, undivine route was the correct one. In short, the facts never got in the way of the truth. One responded not with examples from real life but with possible situations, and it was the logic of the argument rather than a majority of consenting adults that carried the day. As keepers of the Ivory Tower, the divine academicians found the much-flaunted empirical world to be only one of many. Thus, the authoritarian myth of Divinity became a license for all types of unearthly thinking and pooh-poohing of current affairs. These ideas then were said to have divine powers because they divested one of the baggage that inexorably drags down the person committed to following the course of the oh-so-mundane facts. A revolutionary transcendence could always be discussed in the Ivory Tower, and with the power of Divinity the diviners could actually scare enough people into its practice.

But such affective measures could not be expected to last forever. The bottom of the market eventually fell out of Divinity, and the diviners were brought down to earth – so far down that they were reduced to facing, of all things, the facts. Their subsequent ruminations compensated for the prior disregard of the facts by raising them to the level of sacred cows, which gives the name to this new academic order, Bovinity. Like their mammalian namesakes, the boviners always exhibit a profound look of impotence, which is said to be the result of ponderous deliberation that looks at both sides before crossing the issue.
And crossing the issue is indeed a lot of hard
work for these timid creatures. Consequently, the work-value of ideas became very significant: Do they work? Which can be translated as whether they take into account the 'hard facts'.

Students nowadays consigned to a temporary brush with Bovinity have broken up into two classes: the pre-professional and the pre-nothing. These two types of boviners can be distinguished by their relation to the facts. The pre-professional is very clever because he knows what a fact is. As we have seen, facts are indeed curious little things. 'Fact', as you may know, derives from the past participle of the Latin verb 'to make', a completed state of making – something that has already happened. To say, with the believers in Bovinity, that the facts determine the future is therefore quite an endorsement of the way things have been. Pre-professionals capitalize on this fact by entry into fields that aim at maintaining a social equilibrium or – if I may venture a political word – the status quo. Of course, I am referring to law and medicine. Both of these are founded on the fundamental weakness of the individual, who is always trying to recoup his losses in order to break even. The technical term, I believe, is 'a standard of living'.

The law works its wonders by being a constant reminder that the natural condition of the state is a war of all against all. Nobody minds their own business because they're trying to take over yours. Medicine is even nobler since it prolongs the amount of time you have to break even, which is to say, the amount of time you have to engage in legal services. But naturally you never break even – not even in death – for it is at that point that the friendly giants of professionalism come to blows. Medicine won't let you transcend your factual existence so that the law cannot execute what facts remain. Pre-professionals see the academic establishment quite rightly as the bovine transmission of these facts. And they find such things as a classical education very useful in that direction, for it neatly maps out the royal road from Solon and Hippocrates to Perry Mason and Marcus Welby in such a way as to capture even the imagination of the pre-nothings who employ Bovinity as their own standard-bearer. (And, as we all know, among comrades the pre-professional will quite openly admit the instrumentality of these bovine features in accomplishing the grand mission, which goes under the heading of Bull.)

As for the pre-nothings, such as myself, nothing much can be said. Sometimes I think a pre-nothing doesn't have the attachment to Bovinity that the pre-professional does, but I fear that this is not the case at all. Demonstrations are generally convenient media for making one's existence felt, and if you're pre-nothing the transcendence of nothingness is indeed a pretty tall order. Unfortunately, pre-nothings never demonstrate their existence but only some disturbing fact – such as the impending nuclear holocaust, covert slave trades in forbidden continents, and the like. However, Bovinity as it is can quite readily ruminate a response that restores the balance of facts and returns the pre-nothings to a state of nothingness – until the next disturbing fact comes along. Again, we break even. If we lived in a divine age, the pre-nothing might feel it quite natural to demonstrate about nothing and have the keepers of the Ivory Tower divine some reasons for the discontent. Surely, a survey of all the possible problems in an academic establishment could turn up some rather interesting arguments.
But then again, that might be a little too divine.
References
Ackerman, B. (2019). Revolutionary constitutions and the rule of law. Harvard University Press.
Adorno, T. (1973). The jargon of authenticity (Orig. 1964). Northwestern University Press.
Agamben, G. (2011). The kingdom and the glory: For a genealogy of economy and government (Orig. 2007). Stanford University Press.
Arnauld, A., & Nicole, P. (1996). Logic, or the art of thinking (Orig. 1662). Cambridge University Press.
Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. Free Press.
Backhaus, J. (1993). The university as an economic institution: The political economy of the Althoff system. Journal of Economic Studies, 20(4/5), 8–29.
Baehr, P. (2008). Caesarism, charisma and fate. Routledge.
Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge University Press.
Bassnett, S. (1980). Translation studies. Methuen.
Bateson, G. (1979). Mind and nature: A necessary unity. Bantam Books.
Becker, C. (1932). The heavenly city of the eighteenth-century philosophers. Yale University Press.
Becker, G. (1964). Human capital. University of Chicago Press.
Ben-David, J., & Collins, R. (1966). Social factors in the origins of a new science: The case of psychology. American Sociological Review, 31, 451–465.
Benedetti, J. (1982). Stanislavski: An introduction. Theatre Arts Books.
Berkowitz, R. (2005). The gift of science: Leibniz and the modern legal tradition. Harvard University Press.
Berlin, I. (1958). Two concepts of liberty. Clarendon Press.
Berlinski, D. (1996). The deniable Darwin. Commentary, 101(6), 19–29.
Bertalanffy, L. V. (1950). An outline of general system theory. British Journal for the Philosophy of Science, 1, 134–165.
Bhaskar, M. (2016). Curation: The power of selection in a world of excess. Little, Brown.
Biagioli, M. (1993). Galileo, courtier. University of Chicago Press.
Birch, K. (2017a). A research agenda for neoliberalism. Edward Elgar.
Birch, K. (2017b). Rethinking value in the bio-economy: Finance, assetization and the management of value. Science, Technology & Human Values, 42, 460–490.
Birch, K. (2020). Technoscience rent: Toward a theory of rentiership for technoscientific capitalism. Science, Technology & Human Values, 45, 3–33.
Bloom, H. (1973). The anxiety of influence. Oxford University Press.
Bordogna, F. (2008). William James at the boundaries: Philosophy, science, and the geography of knowledge. University of Chicago Press.
Bostrom, N. (2003). Are you living in a computer simulation? The Philosophical Quarterly, 53, 243–255.
Bourdieu, P. (1988). Homo academicus (Orig. 1984). Polity Press.
Bowler, R. (2005). Sentient nature and human economy. History of the Human Sciences, 19(1), 23–54.
Brandom, R. (1994). Making it explicit. Harvard University Press.
Burkhardt, R. (1970). Lamarck, evolution and the politics of science. Journal of the History of Biology, 3, 275–298.
Butler, J. (1999). A 'bad writer' bites back. The New York Times (20 March).
Cahill, M. (2017). Theorizing subsidiarity: Towards an ontology-sensitive approach. International Journal of Constitutional Law, 15, 201–224.
Calabresi, G., & Melamed, D. (1972). Property rules, liability rules, and inalienability. Harvard Law Review, 85, 1089–1128.
Campbell, D. T. (1988). Methodology and epistemology for social science. University of Chicago Press.
Cartwright, N. (1999). The dappled world: A study of the boundaries of science. Cambridge University Press.
Cassirer, E. (1923). Substance and function (Orig. 1910). Open Court Press.
Cassirer, E. (1944). An essay on man. Yale University Press.
Cassirer, E. (1950). The problem of knowledge: Philosophy, science and history since Hegel. Yale University Press.
Chomsky, N. (1971). Problems of knowledge and freedom: The Russell lectures. Random House.
Christophers, B. (2019). The problem of rent. Critical Historical Studies (Fall), 303–323.
Clark, A. (2003). Natural-born cyborgs. MIT Press.
Clark, W. (2006). Academic charisma and the origin of the research university. University of Chicago Press.
Clarke, R. (2000). Information wants to be free... https://www.rogerclarke.com/II/IWtbF.html
Coase, R. (1937). The nature of the firm. Economica, 4, 386–405.
Coase, R. (1988). The firm, the market and the law. University of Chicago Press.
Cohen, I. B. (1985). Revolution in science. Harvard University Press.
Cohen, I. B. (1995). Science and the founding fathers: Science in the political thought of Thomas Jefferson, Benjamin Franklin, John Adams and James Madison. W.W. Norton.
Collins, H. (1990). Artificial experts: Social knowledge and intelligent machines. MIT Press.
Collins, R. (1998). The sociology of philosophies: A global theory of intellectual change. Harvard University Press.
Collins, R., & Restivo, S. (1983). Robber-barons and politicians in mathematics: A conflict model of science. Canadian Journal of Sociology, 8, 199–227.
Colson, D. (2019). A little philosophical lexicon of anarchism from Proudhon to Deleuze (Orig. 2001). Minor Compositions.
Copp, S. (2011). A theology of incorporation with limited liability. Journal of Markets and Morality, 14, 35–57.
Crease, R., & Mann, C. (1986). The second creation: The makers of the revolution in twentieth-century physics. Macmillan.
Crombie, A. (1994). Styles of scientific thinking in the European tradition: The history of argument and explanation especially in the mathematical and biomedical sciences and arts (3 vols.). Duckworth.
Crombie, A. (1996). Science, art and nature in medieval and modern thought. Cambridge University Press.
Culler, J. (1982). On deconstruction. Cornell University Press.
Cusset, F. (2008). French theory: How Foucault, Derrida, Deleuze, & Co. transformed the intellectual life of the United States. University of Minnesota Press.
Danto, A. (1964). The artworld. Journal of Philosophy, 61, 571–584.
Danto, A. (1974). The transfiguration of the commonplace. The Journal of Aesthetics and Art Criticism, 33(2), 139–148.
Darnton, R. (1984). The great cat massacre and other episodes in French cultural history. Basic Books.
Davies, W. (2014). The limits of neoliberalism: Authority, sovereignty and the logic of competition. Sage.
De Mey, M. (1982). The cognitive paradigm. Kluwer.
Deleuze, G., & Guattari, F. (1977). Anti-Oedipus: Capitalism and schizophrenia (Orig. 1972). Random House.
Derrida, J. (1978). Writing and difference. University of Chicago Press.
Dickens, P. (2000). Social Darwinism: Linking evolutionary thought to social theory. Open University Press.
Drexler, E. (1986). Engines of creation: The coming age of nanotechnology. Doubleday.
Dreyfus, H., & Dreyfus, S. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. Free Press.
Duhem, P. (1991). German science (Orig. 1915). Open Court Press.
Dworkin, R. (1975). Hard cases. Harvard Law Review, 88, 1057–1109.
Dyson, F. (2007). Our biotech future. The New York Review of Books (19 July).
Eagleman, D. (2009). Sum: Forty tales from the afterlives. Canongate.
Eisenstein, E. (1979). The printing press as an agent of change (2 vols.). Cambridge University Press.
Elster, J. (1998). Deliberation and constitution making. In J. Elster (Ed.), Deliberative democracy (pp. 97–122). Cambridge University Press.
Erikson, E. (1968). Identity: Youth and crisis. W.W. Norton.
Faust, D. (1984). The limits of scientific reasoning. University of Minnesota Press.
Ferrier, J. (1854). Institutes of metaphysic. Blackwood.
Feyerabend, P. (1975). Against method. Verso.
Feyerabend, P. (1979). Science in a free society. Verso.
Franklin, J. (2001). The science of conjecture: Evidence and probability before Pascal. Johns Hopkins University Press.
Fricker, M. (2007). Epistemic injustice. Oxford University Press.
Fried, B. (1998). The progressive assault on laissez faire. Harvard University Press.
Fuller, S. (1988). Social epistemology. Indiana University Press.
Fuller, S. (1993). Philosophy of science and its discontents (2nd ed.; Orig. 1989). Guilford Press.
Fuller, S. (1996). Recent work in social epistemology. American Philosophical Quarterly, 33, 149–166.
Fuller, S. (1997). Science: Concepts in the social sciences. Open University Press.
Fuller, S. (2000a). The governance of science. Open University Press.
Fuller, S. (2000b). Thomas Kuhn: A philosophical history for our times. University of Chicago Press.
Fuller, S. (2002). Knowledge management foundations. Butterworth-Heinemann.
Fuller, S. (2003). Kuhn vs Popper: The struggle for the soul of science. Icon.
Fuller, S. (2005). The intellectual. Icon Books.
Fuller, S. (2006a). The philosophy of science and technology studies. Routledge.
Fuller, S. (2006b). The new sociological imagination. Sage.
Fuller, S. (2006c). The market: Source or target of morality? In N. Stehr, C. Henning, & B. Weiler (Eds.), The moralization of the market (pp. 129–153). Transaction Books.
Fuller, S. (2007a). New frontiers in science and technology studies. Polity.
Fuller, S. (2007b). The knowledge book: Key concepts in philosophy, science and culture. Acumen.
Fuller, S. (2007c). Science vs religion? Intelligent design and the problem of evolution. Polity.
Fuller, S. (2008). Dissent over descent: Intelligent design's challenge to Darwinism. Icon Books.
Fuller, S. (2009). The sociology of intellectual life: The career of the mind in and around the academy. Sage.
Fuller, S. (2010a). Capitalism and knowledge: The university between commodification and entrepreneurship. In H. Radder (Ed.), The commodification of academic research: Science and the modern university (pp. 277–306). University of Pittsburgh Press.
Fuller, S. (2010b). Science: The art of living. Acumen.
Fuller, S. (2011). Humanity 2.0: What it means to be human past, present and future. Palgrave Macmillan.
Fuller, S. (2012a). Preparing for life in humanity 2.0. Palgrave Macmillan.
Fuller, S. (2012b). Conatus. In M. Grenfell (Ed.), Pierre Bourdieu: Key concepts (2nd ed.; Orig. 2008; pp. 169–178). Acumen.
Fuller, S. (2013). On commodification and the progress of knowledge: A defence. Spontaneous Generations, 7(1), 12–20.
Fuller, S. (2015). Knowledge: The philosophical quest in history. Routledge.
Fuller, S. (2016a). The academic Caesar: University leadership is hard. Sage.
Fuller, S. (2016b). Organizing the organism: A re-casting of the bio-social interface for our times. Sociological Review Monograph (Special issue on 'Biosocial Matters'), 64(1), 134–150.
Fuller, S. (2016c). Towards a new foundationalist turn in philosophy: Transcending the analytic-continental divide. In S. Rinofner-Kreidl & H. Wiltsche (Eds.), Analytic and continental philosophy: Methods and perspectives. Proceedings of the 37th International Wittgenstein Symposium (pp. 111–128). Walter de Gruyter.
Fuller, S. (2018a). Post-truth: Knowledge as a power game. Anthem.
Fuller, S. (2018b). Transhumanism's Fabian backstory. In J. Castro, B. Fowler, & L. Gomes (Eds.), Time, science and the critique of technological reason: Essays in honour of Hermínio Martins (pp. 191–207). Springer.
Fuller, S. (2018c). Must academic evaluation be so citation data driven? University World News, no. 522 (28 Sep). http://www.universityworldnews.com/article.php?story=20180925094651499
Fuller, S. (2019a). Technological unemployment as a test of the added value of being human. In M. Peters et al. (Eds.), Education and technological unemployment (pp. 115–128). Springer.
Fuller, S. (2019b). The metaphysical standing of the human: A future for the history of the human sciences. History of the Human Sciences, 32, 23–40.
Fuller, S. (2019c). Nietzschean meditations: Untimely thoughts at the dawn of the transhuman era. Schwabe Verlag.
Fuller, S. (2020a). A player's guide to the post-truth condition: The name of the game. Anthem.
Fuller, S. (2020b). If science is a public good, why do scientists own it? Epistemologja & Filosofija Nauki, 57(4), 23–39.
Fuller, S. (2021a). Permanent revolution in science: A quantum epistemology. Philosophy of the Social Sciences, 51, 48–57. [Translated into Russian as the Preface for the Russian edition of Fuller (2003).]
Fuller, S. (2021b). Schrödinger's What is life? as postdigital prophecy. Postdigital Science and Education, 3, 262–267.
Fuller, S. (2021c). The prophetic Bacon. Epistemologja & Filosofija Nauki, 58(3), 78–86.
Fuller, S. (2022). Is the path from aphorism to tweet the royal road to knowledge? Educational Philosophy and Theory. https://doi.org/10.1080/00131857.2022.2109461
Fuller, S., & Collier, J. (2004). Philosophy, rhetoric and the end of knowledge (2nd ed.; Orig. by Fuller, 1993). Lawrence Erlbaum Associates.
Fuller, S., & Lipinska, V. (2014). The proactionary imperative: A foundation for transhumanism. Palgrave.
Gaaze, K. (2019). Max Weber's theory of causality: An examination on the resistance to post-truth. Russian Sociological Review, 18(2), 41–61.
Galison, P. (1990). Aufbau/Bauhaus: Logical positivism and architectural modernism. Critical Inquiry, 16, 709–752.
Georgescu-Roegen, N. (1971). The entropy law and the economic process. Harvard University Press.
Gibson, M. (2022). Paper belt on fire: How renegade investors sparked a revolt against the university. Encounter Books.
Gilbert, W. (1991). Towards a paradigm shift in biology. Nature, 349, 99.
Gillispie, C. (1958). Lamarck and Darwin in the history of science. American Scientist, 46(4), 388–409.
Goldberg, J. (2007). Liberal fascism. Doubleday.
Golf-French, M. (2022). From atheists to empiricists: Reinterpreting the Stoics in the German Enlightenment. Modern Intellectual History.
Goodman, N. (1955). Fact, fiction and forecast. Harvard University Press.
Goodman, N. (1968). Languages of art. Bobbs-Merrill Company.
Gordin, M. (2015). Scientific Babel: The language of science from the fall of Latin to the rise of English. University of Chicago Press.
Gordon, P. (2012). Continental divide. Harvard University Press.
Grafton, A. (1997). The footnote: A curious history. Harvard University Press.
Guichardaz, R. (2020). The controversy over intellectual property in nineteenth-century France. The European Journal of the History of Economic Thought, 27(1), 86–107.
Hammond, K. (Ed.). (1978). Judgement and decision in public policy making. Routledge.
Hankinson, R. J. (2003). Stoic epistemology. In B. Inwood (Ed.), The Cambridge companion to the Stoics (pp. 59–84). Cambridge University Press.
Hanson, N. R. (1958). Patterns of discovery. Cambridge University Press.
Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248.
Harrington, A. (1999). Holism in German culture from Wilhelm II to Hitler. Princeton University Press.
Harrison, P. (2007). The fall of man and the foundations of science. Cambridge University Press.
Hawley, J. (2008). Theodore Roosevelt: Preacher of righteousness. Yale University Press.
Hawley, J. (2021). The tyranny of big tech. Regnery.
Hayek, F. (1952). The counter-revolution of science. University of Chicago Press.
Haynes, C. (2010). Lost illusions: The politics of publishing in nineteenth-century France. Harvard University Press.
Heidelberger, M. (2004). Nature from within: Gustav Fechner and his psychophysical worldview. University of Pittsburgh Press.
Herf, J. (1984). Reactionary modernism. Cambridge University Press.
Hilderbrand, R. (2013). Wilson as chief executive: Relations with Congress, the cabinet and staff. In R. Kennedy (Ed.), A companion to Woodrow Wilson (pp. 91–105). Wiley-Blackwell.
Hill, L., & Nidumolu, P. (2021). The influence of classical Stoicism on Locke's theory of self-ownership. History of the Human Sciences, 34(3–4), 1–24.
Hirsch, F. (1976). Social limits to growth. Routledge.
Hofstadter, R. (1955). The age of reform. Random House.
Holiday, R. (2014). The obstacle is the way: The timeless art of turning trials into triumph. Profile Books.
Holton, G. (1993). From the Vienna Circle to Harvard Square: The Americanization of a European world conception. In F. Stadler (Ed.), Scientific philosophy: Origins and developments (pp. 47–74). Kluwer.
Howson, C., & Urbach, P. (1989). Scientific reasoning: The Bayesian approach. Open Court Press.
Hull, D., Tessner, P., & Diamond, A. (1978). Planck's principle. Science, 202(4369), 717–723.
Humboldt, W. V. (1969). On the limits of state action (J. Burrow, Ed.; Orig. 1792). Cambridge University Press.
Humboldt, W. V. (1988). On language. Cambridge University Press.
Husserl, E. (1954). The crisis of European sciences and transcendental phenomenology (Orig. 1936). Northwestern University Press.
Hutchins, R. M. (1953). The University of Utopia. University of Chicago Press.
Innis, H. (1951). The bias of communication. University of Toronto Press.
Israel, J. (2001). The radical enlightenment. Oxford University Press.
Jackson, B. (2009). At the origins of neo-liberalism: The free economy and the strong state, 1930–47. The Historical Journal, 53, 129–151.
James VI and I. (1995). The trew law of free monarchies (Orig. 1598). In J. Sommerville (Ed.), King James VI and I: Political writings (pp. 62–84). Cambridge University Press.
Jaspers, K. (1959). The idea of the university. Beacon Press.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus & Giroux.
Kant, I. (1996). The conflict of the faculties (Orig. 1798). In A. Wood & G. DiGiovanni (Eds.), Religion and rational theology (pp. 238–322). Cambridge University Press.
Kant, I. (1999). An answer to the question: What is enlightenment? (Orig. 1784). In M. Gregor (Ed.), Practical philosophy (pp. 11–22). Cambridge University Press.
Kantorowicz, E. (1957). The king's two bodies: A study in medieval political theology. Princeton University Press.
Kelly, K. (2010). What technology wants. Random House.
Kirby, D. (2011). Lab coats in Hollywood. MIT Press.
Kitch, E. (1980). The law and economics of rights in valuable information. The Journal of Legal Studies, 9, 683–723.
Kitcher, P. (1993). The advancement of science. Oxford University Press.
Klobuchar, A. (2021). Antitrust: Taking on monopoly power from the Gilded Age to the digital age. Random House.
Koehler, W. (1925). The mentality of apes (Orig. 1917). Kegan Paul.
Koehler, W. (1938). The place of value in a world of facts. Liveright Publishing.
Koerner, L. (1999). Linnaeus: Nature and nation. Harvard University Press.
Kouroutakis, A. (2017). The constitutional value of sunset clauses: An historical and normative analysis. Routledge.
Kriegel, B. (1995). The state and the rule of law. Princeton University Press.
Kroeber, A. (1917). The superorganic. American Anthropologist, 19, 163–213.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.; Orig. 1962). University of Chicago Press.
Kusch, M. (1999). Psychological knowledge: A social history of philosophy. Routledge.
Lacy, T. (2013). The dream of a democratic culture: Mortimer Adler and the great books idea. Palgrave.
Lakatos, I. (1978). The methodology of scientific research programmes. Cambridge University Press.
Lamont, M. (2009). How professors think. Harvard University Press.
Langley, P., Simon, H., Bradshaw, G., & Zytkow, J. (1987). Scientific discovery: Computational explorations of the creative process. MIT Press.
Laudan, L. (1977). Progress and its problems. University of California Press.
Laudan, L. (1981). Science and hypothesis. Reidel.
Lenin, V. I. (1948). Imperialism: The highest stage of capitalism (Orig. 1917). Lawrence & Wishart.
Lewin, K. (1931). The conflict between Aristotelian and Galilean modes of thought in contemporary psychology. Journal of General Psychology, 5, 141–177.
Lukacs, G. (1948). Der junge Hegel. Europa Verlag.
Lynch, W. (2001). Solomon's child: Method in the early Royal Society. Stanford University Press.
Lynch, W. (2021). Minority report: Dissent and diversity in science. Rowman and Littlefield.
Lyotard, J.-F. (1983). The postmodern condition (Orig. 1979). University of Minnesota Press.
Machlup, F. (1950). The patent controversy in the nineteenth century. Journal of Economic History, 10(May), 1–29.
McCloskey, D. (2006). The bourgeois virtues: Ethics for an age of commerce. University of Chicago Press.
McKeon, R. (1957). The development and significance of the concept of responsibility. Revue Internationale de Philosophie, 39(1), 3–32.
Meloni, M. (2016). Political biology. Palgrave Macmillan.
Mendelsohn, E. (1974). Reduction and revolution: The sociology of methodological and philosophical concerns in nineteenth century biology. In Y. Elkana (Ed.), Interaction between science and philosophy (pp. 407–427). Humanities Press.
Menudo, J. M. (2010). 'Perfect' competition in A.-R.-J. Turgot: A contractualist theory of just exchange. Économies et Sociétés, 44(12), 1885–1916.
Merton, R. (1965). On the shoulders of giants. Free Press.
Merz, J. T. (1965). A history of European thought in the nineteenth century (4 vols.; Orig. 1896–1914).
Metzger, W. (1955). Academic freedom in the age of the university. Random House.
Mirowski, P. (2002). Machine dreams: How economics became a cyborg science. Cambridge University Press.
Mirowski, P. (2019). Hell is truth seen too late. boundary 2 (February), 1–53.
Mirowski, P., & Nik-Khah, E. (2017). The knowledge we have lost in information. Oxford University Press.
Mitcham, C. (1994). Thinking through technology: The path between engineering and philosophy. University of Chicago Press.
Mitzman, A. (1969). The iron cage: A historical interpretation of Max Weber. Grosset and Dunlap.
Morange, M. (1998). A history of molecular biology. Harvard University Press.
Morozov, E. (2013). To save everything, click here. Public Affairs.
Münsterberg, H. (1908). On the witness stand: Essays on psychology and crime. Doubleday.
Nickles, T. (Ed.). (1980). Scientific discovery, logic and rationality. D. Reidel.
Noble, D. (2001). Digital diploma mills: The automation of higher education. Monthly Review Press.
Nozick, R. (1974). Anarchy, state and utopia. Basic Books.
Nye, M. J. (2011). Michael Polanyi and his generation: Origins of the social construction of science. University of Chicago Press.
O'Rourke, M., & Crowley, S. (2013). Philosophical intervention and cross-disciplinary science: The story of the Toolbox Project. Synthese, 190, 1937–1954.
O'Toole, B. (1990). T. H. Green and the ethics of British officials in British central government. Public Administration, 68, 337–352.
Packard, A. (1901). Lamarck, the founder of evolution. Longmans.
Passmore, J. (1961). Hägerström's philosophy of law. Philosophy, 36(137), 143–160.
Passmore, J. (1966). A hundred years of philosophy (2nd ed.; Orig. 1957). Penguin.
Pepper, S. (1942). World hypotheses. University of California Press.
Pettit, P. (1997). Republicanism: A theory of freedom and government. Oxford University Press.
Pielke, R. (2003). The honest broker: Making sense of science in policy and politics. Cambridge University Press.
Piketty, T. (2014). Capital in the twenty-first century. Harvard University Press.
Pinker, S. (2002). The blank slate. Random House.
Popper, K. (1957). The poverty of historicism. Routledge.
Popper, K. (1972). Objective knowledge. Oxford University Press.
Price, D. (1963). Little science, big science. Penguin Press.
Proctor, R. (1991). Value-free science? Purity and power in modern knowledge. Harvard University Press.
Putnam, H. (1960). Minds and machines. In S. Hook (Ed.), Dimensions of mind (pp. 138–164). New York University Press.
Putnam, H. (1978). Meaning and the moral sciences. Routledge & Kegan Paul.
Rabinbach, A. (1990). The human motor: Energy, fatigue and the origins of modernity. Basic Books.
Rayward, B. (1999). H. G. Wells's idea of a world brain: A critical reassessment. Journal of the American Society for Information Science, 50(7), 557–573.
Rescher, N. (1978). Peirce's philosophy of science. University of Notre Dame Press.
Rescher, N. (2006). Epistemetrics. Cambridge University Press.
Richards, R. J. (1987). Darwin and the emergence of evolutionary theories of mind and behavior. University of Chicago Press.
Ringer, F. (1969). The decline of the German mandarins. Harvard University Press.
Ripstein, A. (2006). Private wrongs. Harvard University Press.
Ross, S. (1973). The economic theory of agency: The principal's problem. American Economic Review, 63(2), 134–139.
Rothschild, E. (2001). Economic sentiments. Harvard University Press.
Rueschemeyer, D., & Van Rossem, R. (1996). The Verein für Sozialpolitik and the Fabian Society: A study in the sociology of policy-relevant knowledge. In D. Rueschemeyer & T. Skocpol (Eds.), States, social knowledge and the origins of modern social policies (pp. 117–162). Princeton University Press.
Runciman, D. (2008). Political hypocrisy: The mask of power from Hobbes to Orwell and beyond. Princeton University Press.
Ruse, M. (1999). Mystery of mysteries: Is evolution a social construction? Harvard University Press.
Ryle, G. (1949). The concept of mind. Hutchinson.
Said, E. (1978). Orientalism. Random House.
Samuels, W. (1992). Essays on the economic role of government (Vol. 2). Macmillan.
Sapp, J. (2008). The iconoclastic research program of Carl Woese. In O. Harman & M. Dietrich (Eds.), Rebels, mavericks, and heretics in biology. Yale University Press.
Sassower, R. (2000). A sanctuary of their own: Intellectual refugees in the academy. Rowman and Littlefield.
Schmitt, C. (1918). Die Buribunken. SUMMA, 1(4), 89–106.
Schmitt, C. (1988). The crisis of parliamentary democracy (Orig. 1923). MIT Press.
Schmitt, C. (1996). The concept of the political (Orig. 1932). University of Chicago Press.
Schnädelbach, H. (1984). Philosophy in Germany, 1831–1933. Cambridge University Press.
Schneewind, J. (1984). The divine corporation and the history of ethics. In R. Rorty, J. Schneewind, & Q. Skinner (Eds.), Philosophy in history (pp. 173–192). Cambridge University Press.
Schrag, Z. (2010). Ethical imperialism: Institutional review boards and the social sciences. Johns Hopkins University Press.
Schrödinger, E. (1955). What is life? The physical aspect of the living cell (Orig. 1944). Cambridge University Press.
Schroeder-Gudehus, B. (1989). Nationalism and internationalism. In R. Olby et al. (Eds.), Companion to the history of modern science (pp. 909–919). Routledge.
Schumpeter, J. (1950). Capitalism, socialism and democracy (2nd ed.; Orig. 1942). Harper and Row.
Schumpeter, J. (1955). The sociology of imperialisms (Orig. 1919). In Imperialism and social classes (pp. 7–98). Meridian.
Scruton, R. (2012). Green philosophy. Atlantic Books.
Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump. Princeton University Press.
Shils, E. (Ed.). (1974). Max Weber on universities: The power of the state and the dignity of the academic calling in imperial Germany. University of Chicago Press.
Stark, R., & Bainbridge, W. S. (1987). A theory of religion. Peter Lang.
Steele, D. R. (1992). From Marx to Mises: Post-capitalist society and the challenge of economic calculation. Open Court Press.
Steele, D. R. (1996). Nozick on sunk costs. Ethics, 106, 605–620.
Sullivan, K. (2011). The inner lives of the medieval inquisitors. University of Chicago Press.
Taleb, N. N. (2018). Skin in the game: Hidden asymmetries in daily life. Random House.
Tarski, A. (1943). The semantic conception of truth. Philosophy and Phenomenological Research, 4, 341–375.
Teilhard de Chardin, P. (1955). The phenomenon of man. Harper and Row.
Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189–208.
Tuana, N. (2013). Embedding philosophers in the practices of science. Synthese, 190, 1955–1973.
Tullock, G. (1966). The organization of inquiry. Duke University Press.
Turner, S. (2010). Explaining the normative. Polity.
Veit-Brause, I. (2001). Scientists and the cultural politics of academic disciplines in late nineteenth-century Germany: Emil du Bois-Reymond and the controversy over the role of the cultural sciences. History of the Human Sciences, 14(4), 31–56.
Veit-Brause, I. (2002). The making of modern scientific personae: The scientist as a moral person? Emil du Bois-Reymond and his friends. History of the Human Sciences, 15(4), 19–49.
Weber, M. (1958). Science as a vocation. In H. Gerth & C. W. Mills (Eds.), From Max Weber (pp. 129–158; Orig. 1918). Oxford University Press.
Weber, M. (1963). The sociology of religion (Orig. 1923). Beacon Press.
Weber, M. (2004). The vocation lectures (D. Owen & T. Strong, Eds.; Orig. 1917–19). Hackett.
Wells, H. G. (1938). World brain. Methuen.
Wilmut, I., Campbell, K., & Tudge, C. (2000). The second creation: Dolly and the age of biological control. Farrar, Straus and Giroux.
Wilson, E. O. (2004). The meaning of biodiversity and the tree of life. In J. Cracraft & M. Donoghue (Eds.), Assembling the tree of life (pp. 539–542). Oxford University Press.
Wolf, M. (2023). The case for a land value tax is overwhelming. Financial Times (5 February).
Woodmansee, M. (1984). The genius and the copyright. Eighteenth Century Studies, 17, 425–448.
Wright, A. (2014). Cataloguing the world: Paul Otlet and the birth of the information age. Oxford University Press.
Wuthnow, R. (1989). Communities of discourse: Ideology and social structure in the Reformation, Enlightenment and European socialism. Harvard University Press.
Yates, F. (1966). The art of memory. Routledge and Kegan Paul.
Zandonade, T. (2004). Social epistemology from Jesse Shera to Steve Fuller. Library Trends, 52(4), 810–832.
Zimmer, C. (2016). The biologists who want to overhaul evolutionary theory. The Atlantic (November). https://www.theatlantic.com/science/archive/2016/11/the-biologists-who-want-to-overhaul-evolution/508712/
Index
A
Academic freedom, 4, 7, 12, 39–65, 74, 75
Academic Georgism, 106–112
Aquinas, T., 4, 26, 50, 72, 82, 86, 95, 97, 104, 120
Aristotle, 52, 53, 57, 104, 119–123, 132, 151
Artworld, 127–132
Augustine (of Hippo), 21, 72

B
Bacon(ian), F., 21, 27–29, 31, 55, 67–91, 99, 134, 140, 149
Berlin, I., 43, 44, 46
Bible, 70, 72, 75, 81, 84, 87, 88, 104, 109, 110, 132, 139, 150, 151
Bildung (personal development), 4, 12, 54, 55, 145
Bismarck(ian), O. von, 43, 46, 60, 96, 101, 137
Bloom, H., 56, 108, 153

C
Calvin(ism/t), J., 29, 67–71, 75, 89, 124
Cambridge University, 3, 47, 89. See also Oxbridge
Capitalis(m/t), 2, 6, 7, 12, 44, 71, 72, 75, 93–117, 121, 123–126, 150, 154
Catholic(ism), 2, 6, 19, 40, 57, 61, 62, 69, 70, 80, 84, 88, 109, 116, 134
Chomsky, N., 7, 54, 111, 127, 130
Comte, A., 26, 34, 80, 109
Condorcet, M. de, 28, 42, 107
Constitution, US, 50, 86, 88–90

D
Danto, A., 128, 129
Darwin(ian/ism/t), C., 16, 18, 34–36, 77–79, 96, 99, 112, 143, 150
Deist/m, 5, 33, 85, 86
Derrida, J., 105, 127, 129, 137
Descartes, R., 6, 21, 27, 28
Dewey, J., 9, 10, 56, 57
Discovery, context/logic of, 6, 13, 17, 29, 60, 109, 112
Dissent, 42, 61–65, 70, 76, 77, 140

E
Einstein, A., 17, 23, 24, 30, 78, 81, 110, 112
Emerson, R.W., 38, 46, 111
Enlightenment, 1, 3, 4, 6, 7, 11, 12, 27–29, 33, 40–43, 45, 53, 85, 133
Expert(ise), 4, 9, 13, 17–28, 50, 76, 95, 107, 109, 110, 124, 131–133, 135–139, 147

F
Fabian(ism), 46, 96
Feyerabend, P., 73, 77, 115, 116
Fichte, J.G., 4, 5, 42–45, 58, 64
Foucault, M., 95, 105, 127, 137
G
Galileo/an, 6, 29, 72, 114, 115, 130, 134, 149
George, H., 107
German idealist/m, 4, 5, 11, 26, 38, 40, 43, 44, 46, 47, 56, 58, 63. See also Fichte, J.G.; Hegel; Schelling, F.W.J.
Gestalt, 12, 49, 57, 74, 78, 113, 116, 117, 119–122, 124, 129, 138
Goodman, N., 74, 128–130, 136, 154

H
Habermas, J., 2
Hayek, F., 75, 94, 95
Hobbes, T., 21, 33, 43, 82, 88–91, 99, 143
Humboldt(ian), W. von, 1–46, 48–51, 54, 56, 58, 62, 63, 91, 100, 102, 108, 110–112, 127, 130, 133, 135, 143, 145, 151, 155

I
Improvisation, 148–152
Interdisciplinarity, 4, 11, 15–38

J
James I (King of England), 55, 82, 87
James, W., 33, 38, 48, 82, 137
Jaspers, K., 49, 151
Judgement, 2, 11, 12, 14, 20–23, 39–65, 71, 73, 78, 87, 95, 96, 121, 128–131, 138–140, 142, 147, 154
Justification, context/logic of, 13, 17, 29, 60, 112

K
Kant(ian), I., 4, 6, 7, 11–13, 26, 27, 29, 40–45, 48, 49, 51, 54, 55, 58, 62, 71–76, 81, 85, 90, 127, 128, 131, 135, 154
Kuhn, T., 12, 16, 18, 23, 59, 64, 72, 76–80, 82, 110, 113, 114, 122, 130, 131

L
Lakatos, I., 73, 78, 114, 116, 136
Lamarck(ian), J.-B., 18, 33–37, 77, 96
Liberal(ism), 1, 3, 11, 12, 26, 40–45, 47, 50, 60, 85, 94–98, 107, 114, 116, 127, 138, 140, 142, 143, 149, 154
Liberal interventionist/m, 43, 44, 95–97, 107
Locke, J., 25, 41, 43, 50, 54, 88, 95, 98
Logical positivist/m, 6, 26, 27, 30, 49, 55, 57, 58, 63, 80, 81, 94, 109, 112, 136, 137
London School of Economics, 102
Luther(an), M., 61, 67–71, 75, 139
Lyotard, J.-F., 2

M
Mach, E., 10, 33, 81, 110, 116
Marx, K., 5, 40–42, 44, 60, 71, 74, 80, 81, 95, 97–99, 101, 103, 107, 111, 121, 123, 124, 126, 154
Marxis(m/t), 41, 44, 75, 80, 93–98, 102, 103, 106, 113, 116, 117
Merton, R., 8, 82, 132
Mill, J.S., 21, 29, 40, 42, 56, 104
Modal power, 95, 122–127, 139–141

N
Natural law, 2, 50, 57, 61, 80, 86, 103, 104
Neoliberal(ism), 2, 43, 75, 93–98, 101, 102, 117, 126, 141–143
New Deal, 50
Newton(ian), I., 17, 23, 25, 32, 33, 36, 57, 58, 74, 79, 81, 83, 86, 110–112, 130, 132

O
Orthogonal(ity), 23, 37, 58, 68, 146
Oxbridge, 6, 8, 62
Oxford University, 3, 47. See also Oxbridge

P
Path dependenc(e/y), 8, 18, 112, 114, 122–124
Peirce, C.S., 8, 10, 33, 38, 52, 56, 119
Plagiaris(m/t), 13, 14, 115, 121, 123, 126–132, 154
Plato(nism/t), 3, 10, 83, 89, 95, 109, 119–123, 129, 130, 132, 139, 140, 146, 147, 149, 150
Popper(ian/ism), K., 6, 14, 40, 47, 52, 55, 60, 71–76, 78, 81, 88, 109, 112–117, 130, 136
Positivist/m, 3, 21, 26, 34, 35, 51, 59, 63, 73, 80, 133, 135
Post-truth, 8, 9, 16, 37, 53, 76, 123, 134, 140, 141
Progressiv(e)ism (early c20 US political movement), 89, 93–98, 113, 114, 133
Protestant(ism), 21, 29, 40, 41, 61, 62, 64, 68, 70, 84, 87, 88, 112, 134, 139, 150
Protscience, 137–141
Proudhon, P.-J., 97, 102–106, 108, 113, 114, 116, 126
Public goods, 7, 8, 13, 101, 102, 108–112, 122–127, 131, 132, 141–143

R
Rentier(ship), 8, 17, 76, 107, 108, 110, 119–143
Republican(ism), 11, 45, 47, 85
Ricardo, D., 98, 103, 107, 117, 126
Roosevelt, T., 38, 46, 96

S
Saint-Simon, C.H. de, 97–106, 113, 116, 123, 124, 133
Schelling, F.W.J., 4, 32, 40, 45, 58, 62–64
Schumpeter, J., 18, 103
Silicon Valley, 53, 101, 126, 141
Smith, A., 42, 44, 98, 100, 105, 107, 123, 142
Social epistemolog(y/ist), 4, 20, 21, 74, 97–102, 109, 110, 112, 119
Socialis(m/t), 12, 60, 68, 76, 93–97, 100, 102, 105, 107, 111, 113, 116, 123, 124, 126, 133, 142
Socrates, 10, 56, 57
Sophist, 10, 140, 145
Stoic(ism), 39, 43, 52–54, 56, 57

T
Translation, 11, 13, 30, 41, 60, 70, 72, 109–111, 133, 136

U
University of Chicago, 50, 101, 102
Utopian socialis(m/t), 97–102, 116

W
Weber(ian), M., 4, 7, 14, 27, 39, 41, 44, 46, 47, 49, 50, 55, 60, 62, 67–91, 108, 150
Wilson, W., 46, 47, 49, 96, 121, 137
Wittgenstein, L., 5, 21, 31, 122