


Judging the Past
Ethics, History and Memory

Geoffrey Scarre
Department of Philosophy, Durham University, Durham, UK

ISBN 978-3-031-34510-4    ISBN 978-3-031-34511-1 (eBook)
https://doi.org/10.1007/978-3-031-34511-1

© The Editor(s) (if applicable) and The Author(s), under exclusive licence to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover illustration: Classic Image / Alamy Stock Photo

This Palgrave Macmillan imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Paper in this product is recyclable.

Preface and Acknowledgements

This book presents an extended argument for the thesis that people of the present day are not debarred in principle from passing moral judgement on people who lived in former days, notwithstanding the inevitable differences in social and cultural circumstances that separate us. This thesis is controversial, and it may also seem an unfashionable one in an age which eschews dogmatic pronouncements about morals and which favours (or purports to do so) liberal and pluralist attitudes towards human values. Some philosophers reject the claim I defend, holding with Bernard Williams that because we can see things only from our own peculiar historical situation, we lack a sufficiently objective vantage point from which to appraise past people and their acts. If they are correct, then the judgements passed by twenty-first-century people must inevitably be biased and irrelevant, grounded on moral standards that would have seemed alien in that foreign country of the past. This position I wish to challenge, believing that it seriously underestimates our ability to engage imaginatively with people who, however much their lifestyles may have differed from our own, were our fellow human beings, endowed with much the same basic instincts, aversions, desires and aspirations as ourselves. Taking my stand on a naturalistic theory of human beings, which I couple with a Kantian conception of the equal worth of all human members of the Kingdom of Ends, I argue that historical moral judgements can be sensitive to circumstances, fitting and fair, and untainted by anachronism.

Although this study is intended in the first instance as a contribution to moral theory, an important subsidiary theme concerns the role of moral judgement in the writing of history. History was once conceived as an essentially moralistic enterprise, the classical Greek writers, for instance, looking to the past as a rich source of inspiring or cautionary examples of virtue and vice. Modern historical writers are rather less likely to see moralising as their major business; yet many historians do pass moral judgements on individuals, acts and institutions without much apparent embarrassment that in doing so they might be overstepping the limits of their trade. And this, I shall argue, is entirely in order. Indeed, I shall even go further and suggest that history writing that is shorn of moral judgement lacks a dimension of engagement that is a normal feature of our human encounters, and is no less appropriate when the people we are concerned with are dead.

An apology is perhaps due to non-UK readers for the fact that what may appear to be a disproportionately large number of examples of historical writing cited in this book are drawn from British authors. The simple explanation for this is that I am most familiar with British history and its historians, and I certainly have no intention to insinuate that the history of the UK is somehow more interesting or more significant than that of other lands. As the examples adduced are simply there to help make or illustrate theoretical points, this bias towards instances taken from the history of my own country will, I hope, be forgiven me.

Many colleagues and friends have read portions of the book in manuscript or discussed its leading ideas and arguments with me. I am particularly grateful in this regard to my colleagues in the Department of Philosophy at Durham, and I owe a special debt of gratitude (not for the first time) to Professor Anthony Bash of the Department of Theology & Religion, who read drafts of every chapter and provided much incisive comment, forcing me to clarify my argument at many points. My sincere thanks go too to an anonymous reader for the publisher who provided valuable criticisms and suggestions when the manuscript was at a mature stage of development. Needless to say, the faults that remain are all my own.

Finally, I would like to acknowledge with great gratitude the unstinting support and encouragement I have received throughout from my two editors at Palgrave Macmillan, Brendan George and Eliana Rangel.

Durham, UK
March 2023

Geoffrey Scarre

Contents

1 Prelude: The Demo(li)tion of Edward Colston
2 Introduction
  1 The Problems of Access and Relevance
  2 The Past as both Foreign and Familiar
  3 When ‘Jagged Worldviews Collide’
3 The Relativity of Distance
  1 Williams on the Relativity of (Temporal) Distance
  2 Internal and External Reasons
  3 Justice
4 Choosing a Standpoint
  1 ‘Take Nature’s Path, and Mad Opinions Leave’ (Alexander Pope)
  2 Reason and Sentiment in Sociable Living
  3 Sociability in the Kingdom of Ends
5 Agents, Acts and the Relativity of Blame
  1 Witches and Slaves: The Bearing of Ideology on Moral Responsibility
  2 Fricker and the Relativity of Blame
  3 The Scope and Limits of Conscience
  4 Situation-Adjusted Moral Judgements
6 Interlude: A Late-Medieval ‘Hand-List’ of Offences against Sociability: Or, Plus ça change, plus c’est la même chose
7 History: Morally Heavy or Morally Light?
  1 To Judge, or Not to Judge?
  2 Defining the Historian’s Role(s): A Short History of History
  3 History and Human Self-Knowledge
8 The Morality of Memory
  1 The Need to Remember
  2 Warts-and-all History (But Not Forgetting the Beauty-Spots)
9 Historical Biography: Giving the Dead Their Due
  1 Who Should Be Remembered?
  2 Historical Biography and its Pitfalls
  3 Reputation and the Passage of Time
10 Postlude: ‘Consider the Ant’
Bibliography
Name Index
Subject Index

1 Prelude: The Demo(li)tion of Edward Colston

On 7 June 2020, in Bristol, England, a number of demonstrators taking part in a ‘Black Lives Matter’ protest removed from its plinth and dropped into the nearby harbour a life-size statue of a man once considered to be among the city’s most prominent worthies. The statue, erected in the late nineteenth century, commemorated Mr. Edward Colston (1636–1721), merchant, pious Christian, benefactor to the people of Bristol, and (here came the rub) slave-trader. In the course of his long life, Colston was responsible for the forcible transportation of thousands of men, women and children from the West Coast of Africa to work in the tobacco and sugar plantations in the West Indies. Colston became a member of the Royal African Company in 1680 and served as its Deputy Governor in 1689–90; the Company ran a highly lucrative trade in gold, silver and ivory besides slaves, and many of its members made themselves immense fortunes. It is thought that around 84,000 slaves were shipped to the Americas during Colston’s involvement with the Company, of whom 19,000 may have died on the transatlantic passage. If Colston was no friend to blacks, who to him were merely saleable commodities, to the people of Bristol he was the philanthropist par excellence. Builder of a

school, two alms-houses for the poor and generous contributor to other charitable and religious good causes, he was later estimated to have expended more than £70,000 (equivalent to about £8 million in today’s values) on such works. In 1808 Colston was described by a posthumous admirer as ‘the great benefactor of Bristol’. The statue later erected in his honour similarly celebrated his lavish donations to the city, conveniently ignoring the sources from which the funding for them came.1 The toppling of Colston’s statue by the demonstrators on 7 June could hardly have taken many people by surprise. Demands for its removal had been voiced for several years previously, stirring up a heated public debate. Although no one defended Colston’s slave-trading activities, many Bristolians thought it fitting to continue to honour him for his many benefactions to the city, and particularly to its poorer citizens. Many felt, too, that Colston’s story was part of Bristol’s heritage, and that while parts of it were deeply upsetting, it was better confronted head on than simply erased from public memory. Against this it was pointed out that remembering a man and honouring him with a statue were different things, and that the continued presence of the effigy was an affront to black people.2 If the demonstrators on 7 June had not taken matters into their own hands, it is likely that removal of the statue would in the fairly near future have been ordered by the City Council. It is not easy to understand how a man who cared deeply enough about the poor of his native city to lavish much of his great fortune on them could also act so callously towards, and treat as mere expendable commodities, the members of another race. Colston’s benefactions to his home city were truly spectacular in both range and value: a catalogue of his public charities appended to the published version of his funeral sermon delivered at All Saints’ Church in Bristol in 1721 takes up nine closely printed pages (Harcourt, 1721, pp. 39–47). Nor is there anything to suggest that Colston was a hypocrite who hid his nefarious trading activities behind a mask of philanthropy and Christian piety. He seems to have been exactly what the preacher at his funeral service described him: a ‘sincere Lover of God, and Friend of Man’, through whose charities ‘the Ignorance of the Young, the Miseries of the Inform, and the helpless Necessities of the Old [were] remov’d, eas’d and reliev’d’ (Harcourt, 1721, p. 10, 11). To make the clichéd remark that Colston was ‘a man of his

time’ is to explain nothing: the problem is to grasp why so many people of his time (though by no means all) could entertain what appear to us to be such skewed and inconsistent moral notions. It is true that Colston saw at first hand the hardships of the poor of Bristol but not the sufferings of the Africans in the ‘middle passage’ or the sugar plantations of Jamaica; and to a man of limited vision, what is not immediately before the eyes may more readily escape attention. ‘Out of sight, out of mind’ can sometimes explain, even if it rarely excuses, moral negligence. But while Colston may never have viewed the horrors he promoted, it hardly takes a giant leap of imagination to form some idea of how it must have felt to be abducted from one’s home, imprisoned for weeks in the hell-­ hold of a slaving ship, sold like an animal, and finally forced to labour until one died of exhaustion and mistreatment. What stopped Colston and other slave-traders from making that leap and grasping something of the awful reality of a slave’s life? Was the problem simply one of a deficit of imagination? Or did they calamitously (if, for them, conveniently) believe that black people’s feelings were duller than those of white people, with the result that they didn’t suffer much in their life as slaves? Or was it that they knew that a slave’s existence was an awful one but simply didn’t care? If being indifferent to the sufferings of one’s fellow human beings is a serious fault, being willing to inflict injury on them is a still worse one. Visiting Brazil in 1836 during the voyage of the Beagle, Charles Darwin was appalled at what he described as the ‘heart-sickening atrocities’ inflicted on the black slaves and household servants he encountered. One old lady living near Rio de Janeiro ‘kept screws to crush the fingers of her female slaves’ when they displeased her. In one house in which Darwin stayed, ‘a young household mulatto, daily and hourly, was reviled, beaten, and persecuted enough to break the spirit of the lowest animal’ (Darwin, 1906 [1845], p. 480). What most surprised Darwin was that deeds like these were ‘done and palliated by men, who profess to love their neighbours as themselves, who believe in God, and pray that his Will be done on earth!’ (1906 [1845], p. 481). The inconsistency between the Christian principles professed by the slave-owners and their cruel practices was matter for wonder; but Darwin did not allow wonder to deflect moral judgement: what the owners did was wrong and ought to have been

recognised by them as such. Happily, reflected Darwin, the ‘sin’ of slave-­ trading was now extinct in Britain, but ‘It makes one’s blood boil, yet heart tremble’ to look back on the days when Englishmen ‘with their boastful cry of liberty’ were so eager to remove it from others (1906 [1845], p. 481). Darwin’s view appears to be that slave-traders like Colston should have behaved better because they should have known better, guided by Christian principles of brotherly love. So were men like Colston simply hypocrites? Or was it some extraordinary deficiency of moral vision that allowed them to perpetrate such injuries on others just because their skin was of a different colour? It is not clear, however, that, if there is blindness here, it is really so very extraordinary. If study of the past teaches us anything, it is that morally catastrophic discrimination between ‘us’ and ‘them’ is a recurring motif throughout the human story. If such discrimination stems from moral blindness, then probably a sizeable majority of all human beings who have ever lived have been in need of moral spectacles. Of course, self-deception in one’s own interests is another very common phenomenon, and wilful blindness has no doubt often substituted for the genuine article. Considerable ingenuity has been applied to surmounting the conceptual difficulties that arise in explaining why we should have more privileges, or be entitled to better treatment, than they should. One common ploy has been to represent the dis-favoured others as less than fully human, or as human beings of the second rank (or of no rank at all), inferior in all significant respects to members of our own group. Thus when the manacled slave in the famous image circulated by opponents of slavery around the turn of the nineteenth century posed the poignant question, ‘Am I not a man and a brother?’ a committed slave-­ trader would have rejected the notion with a resounding ‘No!’ Descriptive categories and classifications are intimately connected with the moral estimations we make. Colston would probably have agreed with Kant that members of the kingdom of ends should be treated with equal respect; but he would not have extended membership in that kingdom to black people. Did he sincerely believe that blacks were inferior to whites, or did he deceive himself (or allow himself to be deceived), into thinking this because it served his interests? Alternatively, might he have recognised that black lives matter as much as white ones do, but

hypocritically pretended that they don’t? This third possibility is perhaps less likely than the previous two; the Christian benefactor to the poor of Bristol was more likely blind than bad—which is not to deny that such blindness is itself a terrible defect, whether innocently or self-­ interestedly caused. Fortunately, moral imagination does occasionally appear able to overcome existing conceptual boundaries and prompt a reordering of moral values. Often it is a matter of changing one’s angle of vision. So, for example, while Kant excluded animals from the kingdom of ends on account of their inability to reason, Jeremy Bentham more perceptively recognised in their having the capacity to suffer a ground for ascribing them intrinsic value. Judging which qualities are value-adding, and how much value they add, is not always easy, and views can legitimately vary. But some categorisations appear more artificial or contrived than others, and those which are adduced to justify differential treatment of human groups or individuals need to be treated with especial caution (differences in skin colour forming an obvious instance here). If progress in moral thinking is concerned in part, as Nietzsche claimed it was, with the re-­ evaluation of values, it is partly constituted, too, by changes in the pattern of assignment of existing values. Here, demotions are possible, as well as promotions. We (right-thinking individuals!) no longer think that being black, or being of the female gender, or being gay are reasons to suffer negative discrimination. Nor do we believe either that people should be held to be of greater worth, or to merit extra privileges, just because they come from ‘noble’ or ancient families, or possess wealth beyond the average. Yet ideas about who is within and who is outside the charmed circle of valuable humanity can sometimes be strangely resistant towards dislodgement even in the case of people whose powers of imagination are beyond question. One telling, and disturbing, example of the power of fixed ideas to distort moral judgement is provided by the novelist Charles Dickens. Everyone is familiar with Dickens’s concern for the underprivileged classes of Victorian Britain, whose sorrows and (less frequent) joys are depicted with so much empathetic insight in his books. Yet when it came to black Africans or other indigenous peoples in different parts of the world, the champion of the poor struck a very different note. Although

Dickens sincerely abhorred slavery and condemned it for its inhuman cruelty as well as for the degrading effect it had on both its victims and its practitioners, he appears to have had little love for people of ‘colour’. Indeed, his appalling essay ironically entitled ‘The noble savage’, published in 1853, breathes an almost genocidal hatred of Africans and other ‘savage’ races, looking forward to their eventual ‘absence’ from the scene as ‘a blessed relief and an indispensable preparation for the sowing of the very first seeds of any influence that can exalt humanity’ (Dickens, 1853, p. 1). For Dickens, everything about ‘Zulu Kaffirs’ or North American Ojibbeway Indians (two of the examples he cites) is repellent, and to pretend, as some in Europe or America have done, that there is anything ‘noble’ about ‘a howling, whistling, clucking, stamping, jumping, tearing savage’ is sheer humbug. Dickens piles on the negative epithets: ‘savages’ are typically ‘cruel, false, thievish, murderous; addicted more or less to grease, entrails, and beastly customs’ (1853, p. 1). The noble savage mutilates his own features, engages in bloodthirsty rituals, and ‘has no moral feelings of any kind, sort, or description; his “mission” may be summed up as simply diabolical’ (1853, p. 2). Dickens gives no hint that he thinks the ‘savage’ capable of improvement, as a mid-Victorian Englishman might understand improvement; in his view, Zulus and Indians are not merely uncivilised but uncivilisable. While we should not be ‘cruel to the miserable object’ that is the savage (as the slave-dealers and slave-owners are), that is the extent of our duties toward him. And nor need we regret it much when he is cruel to himself: for, according to Dickens, the pleasure that he takes in ‘wars of extermination’ is ‘the best thing about him’, since it helps to advance the happy day ‘when his place knows him no more’ (1853, p. 2).3 To encounter such misanthropic sentiments in a man celebrated for his benevolent attitudes is both shocking and puzzling. What on earth had happened to Dickens’s kindly nature that he could breathe such venom against other human beings not only for the way they behaved but also for the way they looked (even the ‘best shaped’ of the Zulus ‘exhibited’ at the St George’s Gallery in London he found to be ‘extremely ugly’). ‘[I]s it idiosyncratic in me,’ he asked rhetorically, ‘to abhor, detest, abominate, and abjure him?’ (1853, p. 2). It may be some very slight defence of Dickens to remark that encountering Zulus and other ‘native’

people in the context of public exhibitions which amounted to freak shows was never likely to develop respect or understanding for cultures so alien to the viewer’s own. Given this context, Dickens’s reaction was no doubt far from ‘idiosyncratic’. Still, something better might have been hoped for from the foremost imagination of the age. 12 years later, Dickens was to reveal his racist credentials anew when he joined the defence committee set up to counter the attempt, led by John Stuart Mill, Thomas Huxley and other prominent liberals, to prosecute for murder the former Governor of Jamaica, Edward John Eyre, who had ordered the brutal and bloody suppression of a small-scale riot by black plantation workers at Morant Bay in that island.4 In 1865, as in 1853, Dickens failed to be on the side of the angels. Morally speaking, the great man turned out to have feet of clay. If Dickens could get things so wrong, what hope was there that the average nineteenth-century white person would see things much better? Imagination may generate a feeling-insight into other people’s experience (empathy) and a disposition to share in their pleasures and pains (sympathy); but—as Dickens’s case shows—it is a mistake to suppose that imagination always acts as a reliable moral torch, shining a beam into every dark corner. In a well-known essay, Jonathan Bennett praised the youthful Huckleberry Finn who, in Mark Twain’s famous story, follows his feelings and assists his friend, the black slave Jim, to escape from his mistress, although in helping him he believes he is doing wrong. In Bennett’s view, Huck’s case and some others he discusses show that, by and large, moral sentiments (which he appears to understand in a basically Humean manner, as benevolent feelings towards our fellow human-­ beings) are more trustworthy than moral principles, which are frequently mistaken, as they are in Huck’s case (see Bennett, 1974). But while moral principles often are mistaken, the trouble with Bennett’s tempting thesis is that it relies on an unrealistic view of moral sentiments, which in fact are rarely, if ever, ‘pure’ emanations from some sensitive capacity, but are typically framed in large measure by the conceptual structures that we already have in place. So if one starts by believing, as Colston and Dickens did, that blacks are very poor specimens of humanity, then that belief inevitably enters into and conditions the sentiments one forms about them. Maybe if Dickens or Colston had been fortunate enough to have

had a black friend, as the fictional Huckleberry Finn did; or if, like Darwin, they had witnessed at first hand the screams of a slave being whipped half to death on a Brazilian plantation; then they might have come to feel differently about the value of black lives (although in Colston’s case his sense of self-interest may have formed a further impediment). The dependence of moral attitudes and behaviour on a highly complex weave of beliefs, categorial assignments, values, feelings, interests and desires might seem to militate against the attainment of any stable moral outlooks: for change any of the contributory factors and the final product may reasonably be expected to change as well. Yet experience shows that major alterations in moral perspectives are less common than these considerations might suggest they would be. That moral outlooks tend to be notably stable and change-resistant may owe something to people’s natural conservatism but it reflects, too, something important about the nature of moral values and what they mean to us. Many factors may condition the formation of a moral outlook but it would not be recognisable as a moral outlook if it were not seen by its holder as authoritative and demanding loyalty. Values would not be values if we were ready to abandon them at the drop of a hat. Still, there is a balance to be struck here. People who refuse ever to consider that their moral views may be wrong display a foolish self-confidence and risk falling into serious error. Maintaining our moral evaluations of people and practices through thick and thin, resisting à outrance the possibility that we might be mistaken, is as irrational as holding that we know some empirical proposition infallibly. Human beings are very good at doing evil. That fact is depressing but undeniable, as even a meagre acquaintance with the story of the past reveals. ‘History, writes Janna Thompson, ‘is a tale of unrequited injustice’ (Thompson, 2002, p. vii). That statement may be too strong: some injustices do come in time to be recognised for what they are, and sometimes amends are made for them. But harm done is frequently irreversible, even if it is regretted; and dead victims of injustice are beyond the reach of all practical compensation. The myriad forms that man’s inhumanity to man has taken over the centuries have not always been recognised as crimes, and cruelty has often been defended by appeal to

principle. Self-deception may sometimes be involved here, but it would be too quick to accuse all those who defended the transatlantic slavetrade, the medieval Crusades or the persecution of witches or of heretics of hypocrisy. The disconcerting fact is that even men and women whom we might usually think of as good people—of whom Dickens may fairly stand as an instance—can be way off the mark in some of their moral evaluations. It might be objected that the description of Dickens as a ‘good’ person who committed an error in moral judgement begs the question in representing his fault as being one of understanding rather than of character. Is this a completely safe assumption? It would be nice to think that if Dickens had become better acquainted with people of colour, the moral scales would have dropped from his eyes. That might well have been so, considering the abounding human sympathies that Dickens displayed elsewhere, but unless we believe, with Socrates, that all unjust behaviour and sentiment is ultimately based on ignorance, it would be rash to press a similar diagnosis in the case of every white person who ever treated a black one badly. Colston, for all we know, may have had a visceral hatred of black people even if Dickens’s dislike was a product of ignorance. Evil can flow from bad character as well as from bad judgement, but these are not always easy to tell apart, and the work of distinction is particularly hard when we attempt to assess the acts and motives of people of past times, whose cultural milieu was profoundly different from our own. This book is about the making of moral judgements which relate to past people and their acts. We shall meet in its pages many examples of behaviour that were once deemed to be acceptable, even praiseworthy, which we should now look upon as being profoundly wrong. Does consistency require that we should judge them also to have been wrong at the time? Or are our moral standards not applicable to their acts, in view of the differences between their thought worlds and our own? Are moral standards properly—even necessarily—indexed to times and places? Or can some more universal, non-time-bound standards be located which justify judging that some things that were done in the past were wrong, even if some measure of excuse might be found for those who committed them on the grounds of ignorance or error? In other words, might it sometimes be appropriate to condemn the sin but spare the sinner?

We shall see that some writers have argued that moral assessments of past actors and actions are always out of order. Two related reasons are commonly adduced in support of this opinion: the first, that we can never fully comprehend the contexts (allegedly too different from our own) in which past people acted; the second, that we cannot sufficiently distance ourselves from our own cultural biases to be able to act as fair and impartial judges. These are weighty objections to the making of historical moral judgements, not to be lightly dismissed. Yet it is exceedingly difficult to reflect on what happened in the past without employing moral categories—and this observation applies just as much to professional historians as it does to armchair readers of historical romantic fiction. Human speaks to human across the centuries, and while the language may often be hard to follow, species kinship ultimately counts for more than cultural difference. Nihil humanum a me alienum puto: in the last analysis, moral indignation pays little regard to dates and focuses rather on the what than on the when. The suffering of a slave in ancient Rome or on an ante-bellum cotton plantation in the American South appears quite as repellent and atrocious as that of a victim of modern slavery. Such responses may be spontaneous, but that does not, of course, entail they are ‘correct’ in any sense other than that they correspond to contemporary western liberal-democratic values. Edward Gibbon’s great history of the decline and fall of the Roman empire bristles with moral judgements on every page, reflecting the rationalist and humanist values of the Enlightenment. If we can now recognise Gibbon’s values as time-indexed, should we not allow that the same thing can be said (and no doubt will one day be said) of our own? We start from where we are and not another place. David Armitage rightly advises that we should frankly acknowledge ‘the active role the historian’s mind—her mental categories and structures as well as the horizon of possible questions, meaningful encounters, and plausible interpretations—plays in shaping history from the fragmentary evidence of the past’ (Armitage, 2003, p. 7). That historians and all who reflect on the past are men and women of their own day, born and raised within a cultural landscape that fixes the triangulation points of their mental map, is hard to deny but also disturbing. For it suggests that what has come to be known by modern historians by the

pejorative name ‘presentism’—that is to say, the anachronistic imposition of contemporary ideas and ideals on agents and actions of the past—is a scarcely avoidable feature of even our most sensitive attempts to sit in moral judgement on our forebears.5 Arguably, then, the prudent course to take if we want to avoid imposing inapplicable (because time-bound) moral standards on the past is to eschew attempts at normative appraisal altogether and stick to relating ‘the facts’ (on the hopeful assumption that these will not suffer equal distortion when viewed through the lens of our current conceptual categories). However, I believe that something better than this is possible, and that so much prudence is unnecessary. It will be my contention in this book that moral judgements that can be referred for their justification to considerations of the things that naturally matter to beings of our species and that contribute to our flourishing, carry an authority that is overlooked if we insist on seeing them as relying simply on ‘time-indexed values’. So when we criticise the racist views of men like Colston or Dickens, we are not merely expressing moral opinions that happen to differ from theirs, but expressing opinions that are superior because they are more appropriately keyed into crucial (naturalistic) considerations regarding human well-being. Or so I shall argue. On 11 June 2020, 4 days after its ignominious ducking, the statue of Edward Colston was retrieved from Bristol harbour by order of the City Council, and transferred to a museum where—it is promised—it will be displayed in a more appropriately ‘contextualised’ manner. It can be presumed that this ‘re-contextualisation’ will not be value-free but will condemn the evil of the trade by which Colston made his fortune. It is hard not to see this as a step in the right direction, a belated acknowledgement that Colston’s philanthropy came, ironically, at a terrible human price. Whether it will lead the visitor to a better understanding of Colston, the man, and his deeds remains to be seen. The moral outlook of an early-­ modern slave-trader may remain permanently elusive because it is hard for us today to grasp how a person could seemingly act sometimes like an angel and sometimes like a devil. Ascribing to Colston a Jekyll-and-Hyde personality is a tempting but not really very plausible way of making sense of him; more likely, he managed to square his particular moral circle in a way that satisfied himself. But even if we manage to comprehend

his point of view, we are not obliged to accord it a parity with our own simply on account of its belonging to a different era—provided that we have the arguments to show why our anti-racism is morally superior to his racism. And demonstrating that such arguments can be non-question-beggingly supplied will be the major purpose of this book.

Notes

1. There is not at the time of writing any modern biography of Colston although there are many recent internet sources which detail his life and legacy; I have drawn on several of these for the basic facts cited in the text. On the Royal African Company, see Davies (1999), Pettigrew (2014). On the Atlantic slave-trade in general, see Thomas (1997).

2. As Helen Frowe has recently observed, public statues of persons are not erected simply to signify that those persons are of historical importance, but ‘typically express a positive evaluative attitude towards the subject’ (Frowe, 2019, p. 1).

3. For Dickens’s hatred of slavery, see in particular Chapter 17 of his travelogue American Notes and the ‘American’ chapters of his novel Martin Chuzzlewit. In the former work, sympathy for the sad plight of the slaves combined with disgust at what Dickens saw as the hypocrisy of the slave-owners: ‘It is the Inalienable Right of some among [the American people], to take the field after their Happiness, equipped with cat and cartwhip, stocks, and iron collar, and to shout their views halloah! (always in praise of Liberty) to the music of clanking chains and bloody stripes’ (Dickens, 1985 [1842], p. 167). The seemingly inconsistent character of Dickens’s views on race has given rise to a large scholarly literature (see, for instance, Ledger & Furneaux, 2011; Moore, 2004). It would be hard to deny that Dickens found it much easier to empathise with members of the white race than with people of other races. Writing to Angela Burdett-Coutts following the suppression of the 1857–58 Sepoy Rebellion (alias the ‘Indian Mutiny’) against British rule in north-eastern India, he wrote: ‘I wish I were Commander in Chief in India. The first thing I would do to strike that Oriental race with amazement … should be to proclaim to them … that I should do my best to exterminate that Race upon whom the stain of the late cruelties rested … and was now proceeding, with all convenient dispatch and merciful swiftness of execution, to blot it out of mankind and raze it off the face of the Earth’. While justifiably appalled by the treatment of white women and children by some of the Sepoy extremists, Dickens conveniently ignored the atrocious reprisals carried out by British troops on Indians in the wake of the uprising. (I am indebted for the quotation to Ben Wilson (Wilson, 2016, p. 294). The original source is Dickens, 1953, p. 350.)

4. For Mill’s own recollections of the Governor Eyre affair, see his Autobiography (Mill, 1981 [1873], pp. 242–243). The best book-length study remains Semmel (1962).

5. Note that ‘presentism’ in the sense commonly understood by historians and historiographers is not to be confused with the metaphysical thesis known as ‘presentism’ which asserts that only the present is real. In this book, I follow the former usage rather than the latter. For discussion of some of the variant ways in which ‘presentism’ has been understood within the historiographical debate, see Armitage (2003).

2 Introduction

1 The Problems of Access and Relevance

The question is whether we can legitimately make moral judgements about people who lived before our own time, in social and cultural settings which were very different from ours. Is it right to criticise their acts and practices, their values and institutions, in view of the often profound contrasts between their standards of rightful or virtuous conduct and our own? As Bernard Williams reminds us, ‘we see things as we do because of our historical situation’ (Williams, 2005, p. 66), and that historical situation is unique to us. When we contemplate the past, we confront a confusing mix of familiarity and strangeness. Patterns of behaviour that resemble our own exist alongside modes of action and belief that have a distinctly alien feel. To quote again the oft-quoted opening words of L.P. Hartley’s novel The Go-Between, ‘The past is a foreign country; they do things differently there’ (1958, p. 7). To be sure, some periods and places in the past appear more foreign to us than others. Temporal distance and cultural difference both have a part to play here. The worlds of the Aztecs or of the medieval Crusaders seem a lot more remote from us than do, say, those of the late Victorian era or the inter-war years. Yet

even those last-named environments were home to certain ethical conceptions and norms that differed significantly from those prevailing in most liberal democracies today. Many views that passed as standard in the 1930s on such topics as race, social class, imperial responsibilities, the rights of women, gender roles and sexual conduct, now appear not just outdated but repugnant. What ethical stance should we now take towards those views and the people who held them? The scientific study of history reveals much about the reasons which motivated the acts of our ancestors. But it cannot enable us to live in the past, as people of the past. That is not just for the superficial reason that there are no time-machines to transport us back, but because we could not remain the people we are, with our own way of looking at and experiencing the world, while simultaneously imbibing the cultural mores of some previous era. Even the most insightful and empathic of historians are cultural outsiders in relation to the societies they study. This raises the question of whether we can ever know enough about what it felt like to live in some former cultural environment to be entitled to appraise its people and practices. Might any judgements we venture to make be considered null and void on the score of our invincible ignorance? I shall call this the problem of access to the past. It is not, however, the only difficulty which confronts us, as potential moral judges. For even if it is conceded that the past is not entirely beyond our powers of empathetic understanding, there remains a question whether the moral standards we live by today can be pertinently applied to people who lived in cultural milieux which differed substantially from our own. Just as we cannot live and breathe as people of the past, neither could people of the past come to inhabit our world. So is there any point in judging the Aztecs or the Crusaders according to our own norms, when, in Williams’s words, the outlook which sustains those norms was not a ‘real option’ for them, our social setting not being one ‘they could live inside … in their actual historical circumstances’ (Williams, 1985, p.  160). This I shall term the problem of relevance. Since the twin problems of access and of relevance will be with us throughout this book, some further scene-setting will be useful here. In regard, first, to the problem of access, it might be argued that this cannot really be so very worrying, since there are many psychological traits which

are common to all human beings, belonging to them by virtue of their membership of the species homo sapiens sapiens. Men and women may safely be considered to be much the same kind of creatures everywhere and everywhen, even if their drives, emotions, fears and desires manifest themselves in culturally conditioned forms in different locations. So while it might be extremely difficult to imagine what it would be like to be a member of some other species, such as Thomas Nagel’s bat, the subjective life of any human being should never in principle be beyond our grasp (cf. Nagel, 1974). Consequently, it would be epistemically over-­ scrupulous to suppose that the subjective experience of former people is a totally closed book to us. Four comments are in order on this argument. First, it may be allowed to it that the supposition that the ‘feel’ of life to people living in earlier cultures is a completely closed book to us errs by exaggeration. Sidestepping the unilluminating debate about whether there is or not a universal or ‘fixed’ human nature, it may readily be admitted that similarities in action and reaction appear and reappear in widely differing historical environments. However, second, if the book of history is not a completely closed one, many of its chapters are missing or incomplete— the information at our disposal is too fragmentary, selective or partisan to provide the secure basis on which an accurate picture of past lives can be built. And where data is in short supply, interpretation must inevitably be speculative and influenced by the interpreter’s presuppositions and biases. Third, and relatedly, even if culture does not, as some anthropologists have alleged, go ‘all the way down’, it nevertheless goes a very long way, and it is easy, especially where information is meagre, to miss or misunderstand the cultural nuances of behaviour in societies that are not our own. Human beings, as Williams observes, ‘live under culture’ (1995, pp. 79–80), and translating the practices of the members of one culture into terms in which they make sense to members of another can be a challenging project. Fourth and last, even if we were to succeed in arriving at a realistic, truly insightful understanding of what it was like to live inside some past cultural environment, we could never be sure that we had in fact done so. Historiography, which has much to say about methods for establishing historical facts, provides no criterion of correctness for our interpretations of subjective experience. R.G.  Collingwood

claimed that the historian, confronted by a shortfall in the evidence about past experience, ‘must re-enact the past in his own mind’ (Collingwood, 1961 [1946], p. 282). But how can he know that his re-enactments are correct? Even if the historian did succeed in thinking himself into the skin of an Aztec priest, or a marauding Viking warrior, or a Saracen-­ slaying Crusading knight, how could he be sure he had accomplished the feat? These comments are perhaps better taken as caveats, rather than as show-stopping objections to the quest for accurate imaginative engagement with the mental world of our forebears (an engagement which seeks to reveal the values and meanings that mattered to and motivated them). The kind of understanding so attempted (which is often referred to, after Wilhelm Dilthey, by the German word Verstehen (see Dilthey, 1989 [1883])) might be of little interest to historians who are interested only in ‘hard facts’ amenable to statistical analysis. But that is a very narrow sort of history which misses what Collingwood calls ‘the central point of all historical thinking’, which is to recover the character of past experience (1961 [1946], p.  283). Humanity speaks to humanity across the ages, even if the message is liable to distortion in the process. And if some past worlds appear very foreign indeed, the same is certainly not true of all. Where information is more abundant, and particularly where literary sources enrich and enhance the factual record, the imagination may have quite a lot to work on. So while the fourteenth century, for instance, was a very different age from our own, yet the talkative, quarrelsome, fun-­ loving and intermittently pious pilgrims so vividly described by Chaucer in his Canterbury Tales bring their world marvellously alive. It is not so very hard to suppose ourselves riding the road to Canterbury along with the stately knight, the coy prioress and the drunken miller, and we can empathise with, even where we do not share, their concerns. While it would be rash to think that we could ever grasp in its entirety how medieval pilgrims experienced the world, it would be equally rash to regard their world as being utterly beyond our comprehension. So much, for the moment, on the problem of access. The problem of relevance arises because there may seem to be what Williams has described as ‘a radical contingency’ in the ‘ethical conceptions’ of our own or any other society (Williams, 2002, p. 20). It is a truism to say that a society’s

prevailing ethical norms provide the touchstone for moral behaviour for its own members; but is it ever appropriate to apply those norms to judging the members of other societies who follow different norms? Of course, we could just bluntly assert that since our norms are uniquely the true or right ones, they are eo ipso applicable to all other societies and their members, even if blameworthiness for breaching them may sometimes be mitigatable by ignorance. This looks at first sight like a very arrogant thing to say, and a serious case of begging the moral question in our own favour. Yet is it not part of the logic of maintaining a moral position that we treat it as being superior to the alternatives? If we refuse to pass moral judgements on the ground that other people may dissent from them, we commit ourselves to what Williams describes as a crude and confused ‘nonrelativistic morality of universal toleration’ (Williams, 1985, p. 159). Our values, ends and norms are crucial components of not only our social but also of our individual identity, essential to our sense of who we are; they may be, and rightly are, subjected to criticism and revision, but their wholesale abandonment would threaten to alienate us from ourselves. We may certainly assert our own moral views without denying that in other social settings we might have thought differently. Conceding that many circumstances that might have been otherwise contributed to forming our current ethical conceptions is perfectly compatible with asserting the superiority of those conceptions to others—including some that went before them in earlier phases of our own society, such as those which sanctioned rebarbative practices like slave-trading. When we confront our own moral values we may reasonably be tempted to echo the words of Martin Luther: ‘Here I stand; I can no other.’ Nothing else seems a real possibility for us; nothing else looks remotely right. As Tim Heysse remarks, ‘If we do not object to the use of the predicate “true” in ethics, we may say that we are confronted with the (ethical) truth of an outlook’ (Heysse, 2010, p. 225). J. David Velleman makes a similar point when he remarks that moral statements may stem from specific cultural standpoints but are not typically indexed to those viewpoints: thus when the Kikuyu of East Africa assert that female circumcision is ethically unobjectionable, they mean that it is ethically unobjectionable tout court, and not just that it is unobjectionable for the Kikuyu (Velleman, 2015, pp. 75–77). Our own ethical outlook presents

itself to us as the ‘true’ or ‘right’ one, and as such applicable in all circumstances. Admittedly, this does not entail that we are entitled to attempt to impose our outlook on people who do not share it. Whether it is ever right to do so is a political question, and one generally to be answered in the negative—though respectful persuasion may sometimes be justifiable where coercion would not be.1 Our present concern, however, is with the philosophical question of whether there is ever any point in mentally judging people to have been right or wrong when they acted by standards which were different from our own. Obviously, no literal moral conversation is possible with people of the past. Since Edward Colston could not (to cite Williams’s words again) have ‘live[d] inside’ our contemporary ethical world ‘in [his] actual historical circumstances’, it may seem idle—or worse still, unfair—to condemn his slave-trading on the basis of principles he did not hold and which we are unable to advocate to him. For Williams, contemporary standards are irrelevant to judging behaviour in the past because of a difficulty of access conceived not so much, as I have conceived it, in terms of the epistemic difficulties of knowing how former people experienced the world, as in the impossibility of achieving a ‘real’—that is, an actual physical—‘confrontation’ between ourselves and them. Applying our own ethical concepts (e.g. our concepts of virtue and vice) to former societies, he argues, ‘involves taking the people in abstraction from the social practices in which they lived, and so, often, we do not see them realistically’; hence ‘a reflective ethical outlook’ ought to leave room for ‘the relativism of distance’, a perception that our moral assessments are inapplicable because there can be no ‘real confrontation’ between past societies and our own (Williams, 1985, p. 162; his emphasis). We shall see later that Williams does not altogether reject the possibility of moral criticism of former societies, since he believes that there are considerations of justice that may be presumed to be valid universally. But his thesis of the relativism of distance challenges all easy assumptions that moral judgements of the past can be either true or useful. For Williams, it is idle to wax indignant with King Louis XIV of France for his tyrannical and absolutist ways, since we cannot now meet the Sun King, and the world of the ancien régime is as dead as the dodo (Williams, 2005, p. 66).

How far, though, should this line be pressed? Is it similarly idle to feel indignant with Adolf Hitler on the same ground that we cannot meet him, and because Nazism was defeated in 1945? A plausible riposte is that although Hitler himself may be dead, Nazism, fascism and extreme forms of nationalism and racism are prone to resurrection in new guises, so that squaring up morally to the ‘values’ Hitler represented remains a useful weapon in our ethical armoury. But from the same practical point of view, disapproving the ‘values’ of Louis XIV seems far from idle either, given the perennial need to face up to the threats of overweening governments, denials of personal liberty, and racial and religious bigotry. King Louis may have left the stage but tyranny and totalitarian rule are dangers forever waiting in the wings. Budding Louis XIVs abound on the world’s stage. Williams’s thesis of the relativity of distance has been extensively debated by recent writers, and his and their arguments will be discussed more fully in Chap. 2. But meanwhile note that if Williams is correct, then what I have termed the ‘problem of relevance’ turns out to be insoluble. That conclusion, I shall attempt to show, is resistible, because the relativity of distance thesis is false. In the meanwhile, it is worth discussing another reason, not considered by Williams, for holding the moral appraisal of past acts and actors to be a valid and valuable exercise.

2 The Past as both Foreign and Familiar

This reason can be stated quite simply. Understanding and critically evaluating our current ethical conceptions is greatly assisted by situating them in their historical context. Tracking the development of our moral ideas helps to reveal the salient features that have enabled them to triumph over rivals and achieve their present dominance. What were the factors which over two centuries conditioned the change in mainstream European and American opinion from a general acceptance of to a detestation of slavery? Why did the anti-slavery arguments which appear to us both obvious and indisputable apparently elude even thoughtful men and women of the eighteenth century? What led people eventually to see the matter differently? History throws light on the conditions under

which reasons persuade or fail to do so. Studying the processes by which our modern conceptions evolved helps to explain why some moral ideas have come to be preferred to others that were weighed in the balance and found wanting. Causal accounts of origins are, to be sure, not justifications, and current ethical conceptions cannot be validated simply by explaining where they came from. Still, by identifying and evaluating the reasons that persuaded our forebears of the rightness of their moral ideas (or the wrongness of others), we gain an additional point d’appui from which to criticise our current notions. The history of morals, by disclosing the myriad factors that have shaped the development of our present moral conceptions, reinforces Williams’s claim concerning their ‘contingency’; but it serves to reveal, too, that contingency is not to be confused with accident. Williams justifies his use of the adjective ‘radical’ in characterising the kind of contingency he believes ethical conceptions to possess by saying that ‘the changes that brought them about are not obviously related to them in a way that vindicates them against possible rivals’ (2002, p. 20). But this misses the fact that changes in ethical thought are frequently sired by reflection on experience, and are defended by arguments that do vindicate them against possible rivals, including their own predecessors. We believe we can offer better reasons for rejecting slavery than our eighteenth-century forebears could for endorsing it, even if we concede that those forebears might have had difficulty grasping those reasons just as we do, owing to their different cultural circumstances. The historical forces that have led us to where we are and how we think today might, indeed, have been very different; but that aspect of contingency about our present circumstances has no bearing on the cogency of the moral arguments that have justified the rejection of slavery. As historian Robert Winder reminds us in the 2020 edition of the English Heritage Handbook, ‘The past is where we come from, the soil in which we grew. When we visit a historic site we feel something that is both familiar and strange; it is in places like these that our larger, shared ancestry still glimmers’ (Winder, 2020, p. 18). Something similar can be said when we ‘visit’ the moral outlooks of our ancestral pasts, where the same blend of familiarity and strangeness confronts us. Winder’s ‘glimmer’ should not be mistaken for a shining light. It is not open to the historian, as it is to the anthropologist, to visit in the literal sense the
societies she studies. The anthropologist who gets to know an alien culture by residing within it for a lengthy period of time acquires a privileged position as a commentator; nothing similar is possible for the student of past societies. Not that things are always plain sailing even for the anthropologist: the theoretical frameworks and presuppositions that the anthropologist brings to the task of interpretation can distort the finished product. Some of the pioneering giants of modern anthropology, who recognised the value of getting to know a society through personal acquaintance, were notoriously prisoners of their own preconceptions. Their belief that human beings were much the same everywhere and cultural differences relatively superficial could impede more subtle discriminations. When Margaret Mead spent time studying the coming of age in Samoa, in preparation for the famous book of that name, she described her work as an attempt to ‘learn many things about the effect of a civilization upon the individuals within it’ by investigating a ‘simple’ and ‘primitive’ society which differed from modern European or American civilisations not in its fundamentals but chiefly in being less complex (Mead, 1961 [1928], p. 14). Similarly, Ruth Benedict in her Patterns of Culture (1934) envisaged treating exotic-seeming but basically rather simple societies as models for understanding quite general patterns of development of social traditions. More recent writers, such as Clifford Geertz and Michele Moody-Adams, have pricked the pretensions of anthropology in its older self-presentation as an objective science capable of proving universal truths about the human species, preferring to see it as a humanistic discipline providing insightful interpretations of individual cultures in their endless variety (see, e.g., Geertz, 1973; Moody-Adams, 2002). This approach parallels that of historians who in their own field favour the pursuit of Verstehen in preference to any more ‘objective’ search for laws of historical development or cause-effect relationships, which throws but a feeble light on subjective experience. By forcing the data of observation to fit pre-formed theoretical templates, the anthropological investigator risks throwing away the advantage she has over the historian of being able to study her subject at first-hand. Theoretical Procrusteanism is not, though, peculiar to one particular discipline, and historians too need to take care to ensure that the theoretical frameworks they bring to the interpretation of their data
are tested in their turn against those data; the ideal is to arrive at a state of reflective equilibrium in which theories frame facts and facts test theories. However, this piece of sound methodological advice may seem a counsel of perfection in some contexts. For since historians cannot visit the past in person and often have to rely on woefully scant information as a basis for their conclusions, theory (‘This is presumably what must have happened …’) inevitably steps in to fill the gaps. Historians like to remind us that past people were flesh and blood individuals, ‘people like ourselves’. To quote the English Heritage Handbook again: ‘The people who lived were real people, as real as we are today; they lived, loved and lost in a landscape filled with meaning and possibility’ (EH Handbook, 2020, p. 18). G.M. Trevelyan made the same point with his accustomed eloquence in his English Social History:

It is the detailed study of history that makes us feel that the past was as real as the present. … [T]o us as we read, [past people] take form, colour, gesture, passion, thought. It is only by study that we can see our forerunners, remote and recent, in their habits as they lived, intent each on the business of a long-vanished day, riding out to do homage or to poll a vote; to seize a neighbour’s manor-house and carry off his ward, or to leave cards on ladies in crinolines. (Trevelyan, 1964 [1942]: vol.1, p. 13)

Recognising that the people we meet in history were as real as ourselves does not, unfortunately, help us to know that reality with any degree of intimacy—though it may fool us into thinking it does. If arriving at anthropological understanding of contemporary societies that can be visited and inhabited is a demanding enterprise, calling for researchers to achieve a delicate balance between objective detachment from and empathetic engagement in the life of the community they are studying, then how much harder is grasping the inscape of former lives that cannot be encountered at first-hand. Trevelyan remarks that historians have ‘amassed by patient scholarship a great sum of information’, the records, letters and journals they have edited being enough to fill ‘lifetimes of reading’. Yet ‘even this mass of knowledge is small indeed compared to the sum total of social history’; and the historian is forced to generalise from a small number of instances which he hopes will be
typical (Trevelyan, 1964 [1942]: vol.1, p. 12). The blunt fact is that ‘we cannot put ourselves back into the minds of our ancestors’ (1964 [1942]: vol.2, p. 239). A similar note is struck by Lytton Strachey in his Elizabeth and Essex, published in 1928. Strachey warned against the illusion of thinking we know more about the Elizabethan age than we really do, just because we know its ‘outward appearances and the literary expressions of its heart’. How well, he asks, can we really understand people who could, at one moment, listen with rapture to a ‘divine madrigal sung to a lute’, and at the next, enjoy the spectacle of a chained-up bear being baited by dogs?

By what art are we to worm our way into those strange spirits, those even stranger bodies? The more clearly we perceive it, the more remote that singular universe becomes … the creatures in it meet us without intimacy; they are exterior visions, which we know, but do not truly understand. (Strachey, 1928, p. 8)

In creating his colourful narrative of the strange, and strained, relationship between Queen Elizabeth I and her wayward favourite, the Earl of Essex, Strachey allowed himself a degree of imaginative latitude that would be frowned on by the historian who is committed to a stricter adherence to the primary sources. In view of the chronic shortage of evidence referred to by Trevelyan, ought historians to concede that they can never open wide a window on the past but must content themselves with looking through a glass darkly? Is the problem of access too close to being insoluble to allow us to claim any real understanding of past people—still less to presume to pass moral judgement on them? This gloomy outlook is, I think, overly pessimistic. It may be hard for us, in view of the paucity of information and the obvious cultural remoteness, to grasp what it was like to ride with Attila the Hun or participate in a Druidic human sacrifice, but we know very much more about, for example, life in 1660s London (vividly evoked in the journals of Samuel Pepys and John Evelyn), or the work of a domestic servant or factory hand in the mid-Victorian period. As literacy increased, more and more people felt the urge to record their reactions to events both great and small, the result by the Victorian era being a repository of writings which
throw an unprecedentedly bright light on contemporary experience. Alongside the writings of the privileged and learned, there are the journals and letters produced by persons who occupied the humbler positions in society, such as the house-maid Hannah Cullwick (1833–1909) or the curate Francis Kilvert (1840–1879) (see Cullwick, 1964; Kilvert, 1949); these first-hand records of people who were neither rich nor famous supplement the work of more famous chroniclers of the life of the English lower classes, including Charles Dickens, Friedrich Engels, Henry Mayhew and William Booth. Here are the minutiae of everyday lives, intimately described, providing much for our imaginations to work on. Yet caveat lector! None of these individuals thought or felt exactly as twenty-first-century people do: if we could visit their country, we would still find it foreign enough to require a travel guide. Among other things, we would need to allow for the fact that some of the Victorians’ most cherished values were very different from our own. Many of the things then commonly held to be important have shed that status today (e.g. deference to social superiors; acceptance of one’s station in life and its attendant duties; respectability; patriarchy; ‘manliness’ and the stiff upper lip; sobriety; devotion to God, Queen and Country; chastity before marriage). That said, this was also a changing world in which certain of our current pre-eminent values were beginning to gain traction: political and social equality, women’s rights, economic security for all, the end to racial and religious discrimination, protection for children and animals, sexual liberation. In contemplating the Victorians, we feel that mixture of familiarity and strangeness of which Winder speaks. If the former seems to bring them within the scope of our moral judgement, the latter reminds us that such judgement should be exercised with caution.

3 When ‘Jagged Worldviews Collide’

As was noted in the Prelude, it is very hard to look on the acts and experiences of our fellow human beings without assuming a moral attitude towards them, however different from our own their values may seem to be. However, it would be a bad mistake to suppose that whenever we compare our values with those of past people we will discover only
difference. Many things that we think wrong have been judged to be wrong in past times too, although these points of convergence are easily overlooked if we focus only on the more spectacular practices on which we differ (e.g. slavery, human sacrifice, crusading, rigid social-class ranking, exposure of unwanted infants). Courage, for instance, as has often been observed, is a virtue valued in all known societies, while cowardly behaviour has been universally despised. Likewise, anger, revengefulness, disloyalty (to leaders, parents, and friends), intemperance, ingratitude, wastefulness of goods, lying for one’s own selfish purposes, have been generally condemned, albeit with variations in their weightings and rankings as faults. We may seem to be on safer ground when we condemn some past act that was equally condemned or condemnable at the time it was performed than when we criticise a formerly accepted practice such as slavery. The significance of such areas of moral convergence for the legitimation of historical moral judgements will be considered more fully later; but the caveat is worth bearing in mind that agreement in judgements that an act (or act-type) X is right or wrong does not always guarantee that the reasons why X is held to be right or wrong coincide, nor can it be assumed that the motives for doing or refraining from X will typically be the same. For example, medieval folk were more likely than people nowadays to cite God’s commandments or the fear of incurring His wrath as reasons for abstaining from acts that they believed to be wrong. Some values of societies where they ‘do things differently’ can appear so remote from our own that, wearing our philosopher’s or anthropologist’s hat, we may be disposed to describe them as ‘incommensurable’. And where values are ‘incommensurable’, there seems to be, in Williams’s words, ‘no one currency in which each conflict of values can be resolved’ (1981, p. 77). How deep the differences between what appear at first sight to be radically dissimilar values really go is not always immediately plain, however, some differences being rooted less in disparate moral visions than in disagreements in empirical beliefs about the nature of the world or in the exigencies of local circumstances. Yet even where the values of other societies appear at their most puzzling, it is hard not to appraise the practices that they sanction from the perspective of our own values. This may seem question-begging, and it is question-begging if it relies merely on the trumpeting of our own prejudices and the knee-jerk
rejection of other people’s where the two diverge. However, I shall try to show that the criticism of other moral systems from our own point of view can be legitimate where it relies on reasons which the people we criticise could in principle be brought to accept as bearing relevantly on their own human condition, or which at least are relevant to that condition, however hard it might be in practice to get them to see that. But where there is no obvious common currency into which the claims made within disparate value-systems can be converted, it may be difficult to the point of impossibility to bring about a meeting of minds between the proponents of rival systems. Williams mentions but quickly dismisses the suggestion that utilitarianism might provide such a currency, objecting that not only is the sense of ‘utility’ too vague and disputable to do the work required but it is ‘too outside the other values … to count as a way of measuring them’; indeed, even if it were taken to provide guidance as to what ‘it was better, all things considered, to do’, it would not determine anything about the worth of the conflicting values themselves (Williams, 1981, pp. 78–79). Professor Leroy Little Bear has used the evocative phrase ‘jagged worldviews colliding’ in describing the encounter of North American First Nations philosophies and European positivist ‘Scientific’ thought (cited by Henderson, 2009, p. 64). There is no mutually acceptable concept of ‘utility’ to appeal to when, for instance, Native Americans have clashed with academic archaeologists over the ethically correct treatment of ancient human remains. First Nations communities that demand repatriation of ancestral remains for ritual reburial have frequently proved at odds with archaeologists who wish to retain those remains for scientific study.2 Here we have a good example of the kind of practical incommensurability that Williams refers to, where minds and hearts alike fail to meet. The claim that value pluralism is an empirical fact ought not to be confused with the assertion that an unnuanced value pluralism is correct as a meta-ethical position. That two ‘jagged worldviews’ may collide and prove practically irreconcilable does not in itself establish that one might not in principle be superior to the other. There may be many ways in which good lives can be lived but not all possible ways of living are good. Nor are all good modes of living equally good, while even the best may leave room for improvement. Moral realists who assert that values exist
objectively, as part of the furniture of the universe, will say that systems which recognise the values that actually exist are superior to systems which deny those values, or which posit others that don’t exist. Other versions of moral realism have posited a more naturalistic, less metaphysically accented, basis for morality, according to which one value system is superior to another if and only if its prescriptions are better calculated to promote human flourishing, where flourishing is to be understood in terms of the maximal satisfaction of human needs (Reader, 2007), the flowering of excellences of mind and character (Aristotle, 1954; Foot, 2001), or the enhancement of capacities for individual and social development (Sen, 1985, 1993; Nussbaum, 2006, 2011) —to mention just some of the variants on offer. Arguably, realist theories of a more metaphysical stamp also need, in the last analysis, to be supplemented by others of a more naturalistic flavour, since it is hard to see how else than by an appeal to considerations of human flourishing or welfare any claims that such-and-such values are the ‘real’ or the ‘genuine’ values can ultimately be defended. But however that may be, it is evident that the concern of naturalistic theories with quality-of-life considerations must lead them immediately to disapprove of practices such as slavery, aggressive warfare, oppression and exploitation of the weak by the strong, gross inequalities in the distribution of privileges, discrimination against women, minorities and ‘outsiders’, and others which institutionalise the immiseration of whole races or classes of people. I believe, and shall argue in the sequel, that naturalistic moral realism which starts from empirical facts about the basic conditions required for human beings to be able to live flourishing lives together in a social setting (allowing that there will never be just one ideal blueprint for doing this) affords the most robust and defensible foundation for the ethical criticism of agents and actions, both past and present. For now, I return to the fact that recognising at a theoretical level that where ‘jagged worldviews collide’, reconciliation between the conflicting opinions may be difficult or impossible to achieve, does not stop us in practice from viewing morally significant phenomena through the lens of our particular value-system. We spontaneously admire, detest or remain neutral about practices that other people, at other times or places, have felt about quite differently. As it is virtually impossible to ‘turn off’ our moral responses,
there is little point in saying that we ought to cultivate an attitude of studied moral detachment towards practices that have been deemed acceptable in other societies but not in our own. Ought, after all, implies can. But the more substantive issue here is that it would suggest that we did not take our own values seriously if we did not apply them consistently across the board. Excuses might be found for ‘man’s inhumanity to man’ in contexts where it appears that ignorance, false belief or a misguided sense of duty led people to do terrible things to one another—but the things they did were no less terrible because they were done for what were believed to be worthy reasons. Think, for a grim instance, about the industrial-scale killing of prisoners by the Aztecs (more correctly, the ‘Culhua Mexica’) in ritual sacrifices which appalled even the hard-bitten Spanish conquistadores when they arrived in Mexico in the early sixteenth century. Inga Clendinnen’s macabre description is worth quoting at length.

[The high priests and rulers] carried out most of their butchers’ work en plein air, and not only in the main temple precinct, but in the neighbourhood temples and on the streets. The people were implicated in the care and preparation of the victims, their delivery to the place of death, and then in the elaborate processing of the bodies: the dismemberment and distribution of heads and limbs, flesh and blood and flayed skins. On high occasions warriors carrying gourds of human blood or wearing the dripping skins of their captives ran through the streets, to be ceremoniously welcomed into the dwellings; the flesh of their victims seethed in cooking pots; human thighbones, scraped and dried, were set up in the households—and all this among a people notable for a precisely ordered polity, a grave formality of manner, and a developed regard for beauty. (Clendinnen, 1991, p. 2)

How, it is tempting to ask, could people ever have acted like this and thought that they were doing right? Yet human beings have very often behaved in appallingly cruel ways when their consciences have functioned as faulty compasses—or what we would see as such. As we saw earlier, Jonathan Bennett’s claim that it is generally better to act according to one’s moral sentiments than to be guided by conceptions of duty
(because the latter may be mistaken) relies on an over-simplistic normative theory, since sentiments too can be wayward when under the spell of false belief (Bennett, 1974). Who is to say that the Culhua Mexica did not develop moral sentiments that marched in lockstep with their fateful conviction that the gods would bring the world to a catastrophic close if sacrifices of human hearts were discontinued? Nevertheless, there is little evidence to suggest that the Aztecs found the duty they had to perform distasteful, even if ‘to end on the killing stone was represented, in the discourses of the elders, as a most bitter fate’ (Clendinnen, 1991, p. 104). The lack of squeamishness that accompanied their festivals of killing may be the most surprising thing about them. That we feel horrified by the Aztec sacrifices is, I suggest, an attitude as proper as it is natural and immediate for people raised in the values of our contemporary (more or less) liberal cultures. Indeed, if David Hume is right that benevolence and sympathy are universal human traits (if not always predominant in practice—as they seem not to have been in the case of the Aztecs), then the naturalness of our attitudes may have deeper roots than merely local cultural ones (Hume, 1888 [1739]: Bk. II passim). Tim Heysse notes that an interpreter who seeks to understand an unfamiliar moral concept in the outlook of some other culture ‘cannot avoid the question as to whether [it] points to something ethically significant for him, for his own life’ (Heysse, 2010, p. 238). To understand others, it is necessary to understand their value judgements. And understanding their value judgements involves reflecting on what it would be like to take up those values ourselves. ‘Interpreting an outlook (grasping its meaning),’ writes Heysse, ‘and judging it (appraising its ethical meaning) are inextricably intertwined: we discover its significance by defining our position towards it’ (2010, p. 238). The fact that becoming an Aztec and participating in the Aztec way of life is not a real option for us does not mean that relating that way of life to our own is a fruitless exercise: for doing so helps us to understand more not only about the Aztecs but also about ourselves. The idea that the study of history teaches us things serviceable to ourselves is, of course, one with a venerable ancestry. According to Lord Berners, the first translator of Froissart’s Chronicle into English, ‘The
most profitable thing in this world for the institution of human life is history’ (Berners, 1908 [1523], p. xxviii). For the humanist historians of the Renaissance, history was viewed less as a ‘unified and continuing process leading to the present’ than as ‘a hunting ground for examples to teach moral philosophy’ (Boone, 2000, p. 564). Thus for Lord Berners’ contemporary the French historian Claude de Seyssel, history was ‘a source of political knowledge intended to inform policy-making; his interest was not in the past but in the present’ (Boone, 2000, p. 565). This pragmatic concern with history as a repository of moral exemplars and a toolbox of stratagems and devices for securing good government, home security and social order may not be the paramount interest of most academic historians today, but it has its echo in the famous cautionary sentiment of George Santayana that ‘those who cannot remember the past are condemned to repeat it’ (Santayana, 1905, p. 284). Even such a modern academic historian as Keith Thomas is interested in exploring how the men and women of early modern England thought about such ‘perennial questions’ as ‘What is the summum bonum of human life? … What did they seek to make of themselves? What goals did they pursue? What were the objectives which, in their eyes, gave life its meaning?’ (Thomas, 2009, p. 1). It is impossible not to be both fascinated and enlightened by seeing how earlier generations dealt with such questions, which are old and yet forever urgent. Rebutting Williams’s view that it cannot now matter to us what we think of Louis XIV, George Crowder insists that ‘In the case of retrospective judgements of the kind involving Louis, there is also the possible goal of self-understanding’ (Crowder, 2017, p. 125; emphasis in the original). One reason for this is quite easy to see: ‘It is natural, and purposeful, to ask whether we are doing any better than those who went before us’ (2017, p. 125). But clearly, posing that question presupposes the possibility of making comparative ethical judgements about past and present. In The Idea of History, R.G. Collingwood’s reply to the question ‘What is history for?’ is that ‘history is “for” human self-knowledge’ (Collingwood, 1961 [1946], p. 10). Studying the past enables us to learn more about ourselves: the ‘value of history … is that it teaches us what man has done and thus what man is’ (1961 [1946], p. 10). Collingwood’s claim that
history is ‘self-revelatory’ (1961 [1946], p. 18) may be thought to depend on too uncritical an assumption (similar to Mead’s and Benedict’s) that human beings are much the same everywhere, everywhen, and in all circumstances. This assumption can be and has been challenged by social scientists who believe it underestimates the impact of socialisation in making us the kinds of beings that we are, human beings being effectively cultural variations on a biological type. But even if culture should go a great deal of the way down, that would hardly make history the mere antiquarian study of extinguished kinds. For Collingwood’s claim is entirely correct that history enables us to learn about the species we belong to: about the hopes and fears, motives and reasons for action, aspirations and aversions which, though they may vary in content, form and occasion, are the basic stuff of all human lives. Whatever the differences, there are family resemblances enough amongst us to give the historian something to work on even where human actions and reactions seem at their most puzzling; by comparing the unfamiliar with what is familiar in her own experience, the historian who is prepared to exercise empathy and imagination may manage to find points of contact even with a people so initially alien as the Culhua Mexica. Needless to say, empathy and imagination are far from infallible guides to the real experience of past people, and the extent to which we can ‘put ourselves in the place of’ an Aztec priest or victim and know just what they were thinking or feeling is obviously limited; more seriously still, as mentioned previously, even our best-informed guesses are subject to no criterion of correctness. (We may guess that the Aztec victim felt fear and horror at his impending fate—but how did he conceive the rightness of what was being done to him?) Plausible interpretation, not cast-iron demonstration, is the most that history can hope for. It is also easy to be misled by the use in different historical contexts of ‘thick’ ethical concepts which change their meaning over time and cannot safely be assumed to bear exactly the same nuances now as they formerly did. Words such as ‘virtuous’, ‘honest’, ‘courageous’, ‘loyal’, ‘trustworthy’, ‘noble’, ‘just’, ‘unselfish’ and their negative correlates have been in common use for centuries but not always with exactly the same sense and extension.

Despite our possessing a sense of human kinship with people who lived centuries ago and in circumstances very different from our own, can we really be sufficiently sure that we comprehend enough about how they conceived their place in the world to pass moral judgements on them with any confidence? Might this be a case of over-confident fools rushing in where more circumspect angels would fear to tread? Heysse’s claim that through attempting moral evaluations of past agents we learn more about ourselves sounds right, in so far as the exercise forces us to engage imaginatively with situations outside the run of our normal everyday experience; still, it does not establish that those evaluations are justified as moral judgements. Would the truly wise historian refrain from attempting moral appraisal of people who lived in that foreign country of the past? Perhaps, after all, what Williams calls the ‘relativism of distance’ divides us too profoundly from those whose motives, acts and characters we purport to assess for any of our judgements to be morally or epistemically reliable. It may be very difficult to ‘turn off’ our own value-system when we direct our gaze at very different societies, whether of the past or of the present. But maybe that is just what we should try to do, acknowledging, with Williams, that we have no locus standi to act as moral critics of people whose ethical environment is or was alien to our own. If we nevertheless insist on judging the Aztecs or the slave-traders or the assassins of Julius Caesar, our judgements will be baseless and idle, besides expressing an unwarranted moral arrogance. This is a challenge not just to the making of some historical moral judgements but to the legitimacy of all such judgements—with the possible exception, perhaps, of those relating to the quite recent past, where the moral environment had much in common with our own. (But then, how far back does the ‘recent’ past extend? A year? A decade? A century?) Clearly, it would be premature to ask questions about the sort of moral judgements we might make about the past, or about the criteria that should guide us in making them, while this radical sceptical challenge to the very idea of historical moral judgement remains outstanding. Consequently, this radical scepticism, particularly as it has been formulated by Williams via his principle of the ‘relativity of distance’, will be the main topic of the following chapter.

Notes

1. Arguably, colonial authorities were justified in forbidding such oppressive practices as sati in India or head-hunting in New Guinea, though even in these extreme cases persuasion and education could be considered more respectful modes of deterrence than the more forceful methods sometimes adopted. On occasion, the distaste of colonial administrators for indigenous practices led to quite needless and insulting attempts at repression, as in the Canadian Government’s 1885 banning of West-Coast potlatch ceremonies (a ban that was reversed in 1951).

2. One especially notorious controversy concerned the fate of the so-called ‘Kennewick Man’, the 9000-year-old skeleton washed out of the banks of the Columbia River in Washington State in 1996. Although this skeleton became the object of intense interest to researchers investigating the early populations of the Americas, local tribespeople were keen to reinter the remains of the man they described as the Ancient One. Finally, following years of scholarly and legal wrangling, the US House and Senate in 2016 ordered the remains to be transferred to a coalition of Columbia Basin tribes for ceremonial reburial (Los Angeles Times, 2017).

3 The Relativity of Distance

1 Williams on the Relativity of (Temporal) Distance

Bernard Williams’s defence of his thesis of the ‘relativity of distance’, briefly touched on in the previous chapter, has generally been credited as the most intellectually powerful attempt made to date to exhibit the obstacles in the way of applying contemporary moral standards to acts and actors of the past. Yet despite the attention they have commanded, Williams’s various discussions of his thesis have not generally been found ultimately persuasive by his critics. Indeed, it seems that Williams himself was never entirely easy about the relativity thesis, which he tinkered with in a number of writings stretching over three decades, making various changes of nuance and emphasis. The nub of the idea is presented in an early essay from the mid-1970s, where Williams says of ethical conceptions of the past that they are ‘related to our concerns too distantly for our judgements to have any grip on them’, because our own ethical conceptions could never have been a ‘real option’ for people who lived in quite different cultural circumstances (1981, p. 142).1 Williams explained further what he meant by a ‘real option’ in the text of a
posthumously published undated lecture in which he approvingly quoted an (unreferenced) statement of Collingwood’s: ‘We ought not to call [the past] either better than the present or worse; for we are not called upon to choose it or to reject it, to like it or dislike it, to approve it or condemn it, but simply to accept it’ (2005, p. 67). However, immediately after this comes a caveat: Williams qualifies Collingwood’s assertion by saying that ‘I do not agree that there are no judgements that one can make about the past; I am going to claim that there are some that one must be able to make’ (2005, p. 67). Herein lies the rub: the arguments that Williams offers against the making of historical moral judgements would seem on the surface to rule out the making of any and all such judgements, yet that is a length to which he was not always willing to go. Well aware that he risked an accusation of inconsistency, Williams put much effort into explaining why there was one—but only one—subset of historical moral judgements that was legitimate in principle, his final, but controversial, account favouring the idea that there exist certain universal rights to justice. The claim that some historical moral judgements are legitimate while others are not is not in itself implausible, although it is no easy task to define precisely what the conditions of legitimacy are and the criteria that should govern their application. It is a reasonable presumption that cultural distance has some role to play here (we may well be better fitted to pass moral judgements on Adolf Hitler than on Julius Caesar); but the difficulty for Williams is that the Collingwood-inspired considerations he cites against the making of moral judgements about agents who lived in cultural environments where they ‘did things differently’ have an apparent scope and sweep that allows for no exceptions concerning justice or any other particular ethical category. In a recent paper, A. Gaitan and H. Viciana refer to empirical evidence that suggests that some ideas about justice may be innate in human beings, appearing spontaneously, for instance, in young children (Gaitan & Viciana, 2018, 319f.); but, as Williams himself observes, human beings live ‘under culture’, and culture has evidently had much to do with shaping the many and often contradictory conceptions of justice encountered in the historical record (1995, p. 80). ‘We see things as we do because of our historical situation’, as did our ancestors before us (2005, p. 66); Williams stresses that it is just
because there can be no ‘real encounter’ between people in one historical situation and those in another that it is futile to attempt to pass moral judgement on one situation from the perspective of the other. One could, he allows, ‘imagine oneself as Kant at the court of King Arthur’; but if we entertain such fantasies, ‘exactly what grip does this get on one’s ethical or political thought?’ (2005, p. 66). One curious feature of Williams’s discussion, which a number of commentators have seen as unsatisfactory, is his insistence that a relativity principle holds in regard to societies that are separated from one another by time but not in regard to contemporary societies that are separated by space. Williams explains that an alternative ethical outlook is a ‘real option’ for a people if they could ‘go over to it’, that is, ‘if they could live inside it in their actual historical circumstances and retain their hold on reality, not engage in extensive self-deception, and so on’ (1985, pp. 160–61). ‘Today,’ he considers, ‘all confrontations between cultures must be real confrontations’, and relativism ‘over merely spatial distance is of no interest or application in the modern world’ (1985, p. 163). For even people who live within the ‘exotic traditional’ societies that survive today could in principle be converted to a modern point of view and be brought to see the world in a different way (1985, p. 163). This remark, however, misses the main issue, because while it is doubtless true that New Guinea highlanders or remote Amazonian tribes who live a traditional lifestyle could be offered the choice of abandoning that lifestyle in favour of a more modern mode of life, it is also true that their current lifestyles are not a ‘real option’ for us. As Miranda Fricker comments, it is ‘thoroughly unconvincing’ to claim that all confrontations across moral differences in the present must be ‘real’, and not merely ‘notional’ (i.e., of the sort in which we imagine ourselves as Kant at the court of King Arthur) (Fricker, 2010, p. 156). For, she explains: ‘I can think of a number of moral cultures, up and running in the world at this time, where I am pretty certain that a group of people like me could not authentically live them out around here as a moral subculture, because the social and moral-psychological leap from there to here is too great’ (Fricker, 2010, p. 156). What matters most for the possibility of making trans-cultural moral judgements, it would seem, is how widely cultures differ from one another, not whether they are separated by time or space. In the undated
lecture quoted above, Williams proposes a slightly different way of explaining why historical moral judgement is improper: it is because ‘the past is not within our causal reach’ (2005, p. 69). The present, of course, is within our causal reach, but Fricker’s point holds good regardless: the fact that we can make a causal impact on a society like that of the New Guinea highlanders is no evidence at all that we could live their lifestyle authentically. Maybe, then, Williams should have allowed that the relativity of distance operates over space as well as time. This, however, he was unwilling to do, probably because conceding that moral relativity applies over space would be to tack too closely to more ‘vulgar’ forms of relativism which block any trans-cultural moral criticism whatsoever, and which he had always argued were mistaken and even incoherent.2 The doubtfully stable picture that Williams presents, in which relativity is taken to operate diachronically but not synchronically, is open to further objection regarding the claim that there needs to be a ‘real’ and not merely a ‘notional’ confrontation between members of one culture and those of another before members of the former are entitled to pass moral judgements on members of the latter. Williams has drawn much fire from his critics here. Fricker asks bluntly: ‘Why shouldn’t we allow our moral sensibility to range over even the most distant and different of moral cultures?’ (Fricker, 2010, p. 159). George Crowder is similarly sceptical: ‘From the fact that we cannot affect the past, it does not follow that we cannot judge the past’ (Crowder, 2017, p. 123). Indeed, if evaluative judgements were permissible only in regard to situations we could affect, that would debar us from making judgements about many aspects of even the ‘immediate or recent past’ (Crowder, 2017, p. 123)! Tim Heysse thinks that Williams exaggerates the importance of ‘real’ confrontations. In his view, ‘the interpretation of the past is sufficiently analogous to interpreting a real alternative’ to be well within our conceptual and imaginative reach; besides which, [notional] ‘confrontations with the past can arouse strong emotional reactions of condemnation or endorsement, of sympathy and dislike’—showing that the past is not, ethically speaking, ‘indifferent to us’ (Heysse, 2010, p. 238). These criticisms of Williams, I think, move in the right direction but they need to be treated with some caution; as shortly stated here they are,
to use a favourite term of Williams’s own, a little ‘brisk’. It is indisputable that events of the past, including the very distant past, can arouse in us, as Heysse says, powerful emotional responses. Imagination, working on the historical data, yields vivid pictures of the past, its joys and sorrows, triumphs and tragedies—as well as the more humdrum aspects of former lives. Not passing moral judgements on the people and happenings of the past is, in truth, almost impossible, as we noted in the previous chapter; Williams’s bland and disarming term ‘notional’, as an adjective for the kind of confrontation we can have with the past, masks the fact that such ‘non-real’ confrontation can be profound and heartfelt. Yet how confident can we be that our imaginative reconstructions match the reality? Can we be sufficiently sure that we understand the meaning of events to those who experienced them, and whose ethical thought may have operated with very different ‘thick’ concepts from our own, to venture moral judgements other than tentatively? Vividness of conception is no criterion of the correctness of our reconstructions of ‘what really happened’, and gut emotional responses of horror, delight, disgust, approval and the like, both positive and negative, are never self-validating. One aspect of the problem of access, as I noted earlier, is that it is all too easy to think—and to feel—about events in the past in anachronistic ways without realising we are doing so; we imagine that people of the past would have had much the same thoughts and reactions as arise so spontaneously in us. And while we may often be quite right about this, we may sometimes be wrong, without knowing that we are. Fricker rightly acknowledges that, to have any claim to be valid and worth making, moral judgements of past people need to be grounded on a grasp of ‘the significance of their actions in their own time’. Unless we have ‘an informed understanding of the society in question’, we ‘will lack the proper starting conception even of what they did’ (Fricker, 2010, p. 159; her emphasis). This is quite true, but the difficulty is to know when we have arrived at that level of understanding and have not merely begged the crucial interpretative questions. Even the most careful historian may unconsciously ascribe to past people ideas and sentiments that come naturally to herself but would not have come naturally to her subjects. So, for example, we regard it as sound procedure, when pondering some ethically problematic practice, to look at it from the perspectives of
all the persons or parties involved in it, rather than privilege one particular perspective over others. Considering slavery from the slave-owners’ point of view and ignoring that of the slaves, as men like Edward Colston would seem to have done, appears to us to be very poor ethical thinking.3 The ethical necessity of accounting for all relevant stakeholders’ interests appears indisputable to those of us today who believe in the equal worth and entitlement to respect of all human beings—that is to say, their commonality of membership in the Kantian Kingdom of Ends. Yet the widespread acceptance of such egalitarian views is a relatively recent phenomenon,4 and the fact is that many of the ‘underprivileged’ in earlier historical periods were probably much more accepting of their lowly position than we might readily imagine them to have been on the basis of our contemporary expectations. In her essay ‘Trying out one’s new sword’, Mary Midgley alludes to the Japanese Bushido tradition that a Samurai warrior was entitled to test the sharpness of a new sword by using it to slice in half a randomly selected peasant (Midgley, 1983). Critical of writers of a relativist persuasion who would argue that there was no point of view then available from which this appalling practice could have seemed wrong, Midgley reasonably points out that, while this mode of testing his sword might have suited the Samurai well enough, a rather different view of it might have been taken by the hapless peasant awaiting bifurcation. Unfortunately, Midgley conflates two distinct issues here, for while a peasant who found himself selected for the test-stroke can scarcely have felt happy about his looming fate, it cannot be assumed that he would have judged it to be morally wrong (he may have done so, but we cannot be sure about that). For all we know, he may have regarded his fate, although dreadful, as part of the settled order of things mandated by heaven, an ethically legitimate ending for a person like himself on the bottom rung of the social ladder. The problem, then, with essaying moral judgements when our confrontations with the actors of the past are of the ‘notional’ variety is not that such judgements are beyond our conceptual or imaginative resources, but that we cannot be certain whether and when they are grounded on a correct understanding of what Fricker refers to as the ‘significance of [the agents’] actions in their own time’. (Interestingly, this same difficulty also besets those historians who, in order to escape the dreaded fallacy of
‘presentism’, focus on providing what they conceive to be purely ‘factual’ description of how historical agents thought and acted.) If this is at the root of Williams’s worry, it is easy to see why he thinks it idle to attempt moral judgements of historical characters such as Louis XIV. We might ‘get indignant’ with the Sun King, but the ethical thoughts we have about him are not ones that he himself would have recognised as valid (Williams, 2005, p. 66). As a further reason for not getting indignant with Louis, Williams suggests that ‘if we don’t, we may do better in understanding both him and ourselves’ (2005, p. 66). The thought behind this suggestion appears to be that if we suppose that Louis got some things ethically wrong which we in our greater wisdom get right, we tacitly commit to a ‘progressive view of history’ which regards moral thinking as developing in quality and sophistication over time—a falsely optimistic idea, in Williams’s view, since moral thinking can go backwards as well as forwards (witness the rise of Nazism in the twentieth century or of Islamic jihadism in the twenty-first). However, there is no need to believe that moral thinking can only ever progress upwards to the light to recognise that reflection on experience, both our own and that of other people, is a prime, and arguably the only genuine, source of ethical wisdom (if we exclude the alleged faculty of moral intuition which many philosophers, from Shaftesbury through to Moore and beyond, have favoured, but without ever being able to give a satisfactory explanation of its origins or operations (Shaftesbury, 1999 [1711]; Moore, 1903)). In one specific connection, Williams seems perfectly willing to acknowledge this. In Shame and Necessity, he remarks of the ancient Greeks that ‘They are among our cultural ancestors, and our view of them is intimately connected with our view of ourselves.’ Therefore, ‘to learn about the Greeks is … part of [our] self-understanding’ (Williams, 1993a, p. 3). But if this can be said about the Greeks, then why not too about the many other ancestral cultures that have contributed to shaping our view of ourselves? Even many of the less attractive or inspirational ‘worlds’ of the past, such as that of Louis XIV, have been causally significant in making us what we are today. Williams’s singling-out of the ancient Greeks as the only people with whom we can have a notional confrontation that is truly enlightening reflects his own intense admiration for the Greek world but is unreasonably neglectful of other
important cultural influences on the present (most particularly, the manifold Christian cultures of the past). In general, and as Heysse observes, ‘our outlook and our ethical language may grow richer because of confrontations with the past’, since the interpretation and appraisal of past outlooks go hand in hand (and are not separate exercises, as Williams usually portrays them); consequently, by seeking to understand men like Louis, we hone and refine those tools of ethical criticism that serve us in our dealings with our contemporaries (Heysse, 2010, pp. 240–41). Understanding the ‘significance of [the agents’] actions in their own time’ would seem vital if we are to understand their significance for our own. Yet the problem remains that it is hard to be sure that we have interpreted the moral conceptions of past societies correctly. Even where we have identified the reasons that moved them, we may not have understood why those reasons appeared forceful to them, or why they preferred them above other reasons that may seem to us more powerful. Sometimes we may think we have understood our predecessors when we have not. At others, we may frankly be puzzled as to how they could maintain the ethical notions they did. How could a man like Edward Colston behave so well towards the poor of his native Bristol and so badly towards the slaves he traded? Why did a Christian gentleman and philanthropist fail to see the acute distress of the black people he bought and sold, and the insult done to their humanity by their enslavement, as compelling reasons to abandon that terrible business? These reasons appear to have remained ‘external’ to Colston’s mind-set, in one or other of two senses: either he did not notice them at all, or they seemed to him unpersuasive.

2 Internal and External Reasons

On the sort of ‘internalist’ theory of reasons that Williams himself favours, practical reasons are always ‘internal’ in the sense that they start from the subject’s existing motivations, providing a reason to act ‘only if they are related to other [of his] reasons for actions, and generally to … [his] desires, needs, and projects’ (Williams, 1993b, p. xiii). Elaborating on this idea in his essay ‘Internal and external reasons’, Williams explains that, on the internalist account, the claim that A has a reason to Φ ‘implies, very roughly,
that A has some motive which will be served or furthered by his Φ’ing’ (1981, p. 101). It is, of course, an obvious, and trivial, observation that a person will only ever be motivated by the reasons she holds, and never by those that have not occurred to her, or which she has considered and rejected. But a more controversial issue concerns what to say about the status for her of reasons that others—ourselves, say—might consider to be good ones but which lie outside what Williams calls the agent’s own ‘subjective motivational set’ (S). In an essay written in 1989, Williams considers the example of a man who beats his wife and who is unreceptive to all attempts to persuade him that there are good reasons why he should treat her better (1995, p. 39). The fact that she is his wife moves him not at all, and he is indifferent to criticisms that he is ‘ungrateful, inconsiderate, hard, sexist, nasty, selfish, brutal, and many other disadvantageous things’. It appears that there is really ‘nothing in his motivational set that gives him a reason to be nicer to his wife as things are’. An ‘external reasons theorist’, says Williams, is likely to say that the man nevertheless ‘has a reason to be nicer’. But what could make this thought ‘appropriate’ when the wife-beater has already made it clear that nothing that has been said to him connects with anything in his S? Williams thinks that there is nothing more to be said here, and that consequently the claim that the wife-beater has a reason to desist from mistreating his wife makes no sound sense (1995, pp. 39–40). External reasons are, in effect, no reasons for an agent, failing as they do to connect in an appropriate manner with items in his S. Williams’s internalist view of motivating reasons adds further support for his thesis of the relativity of temporal distance (though not, it seems, in his view, any thesis of the relativity of spatial distance). If the reasons that militate against slave-trading according to our present view of things were not within Colston’s subjective motivational set, or closely enough related to it to permit of their ready absorption, then, if Williams is right, there is no convincing sense in which it can be said that they were, even so, reasons for him. But if they were not reasons for him, then condemning him for failing to follow them seems no more justifiable than criticising a man for innocently drinking from a bottle of poison that he believed to contain a harmless liquid; in neither case are the reasons for not acting as he does within the agent’s S.

46 

G. Scarre

Williams’s claim that ‘the sense of a statement of the form “A has reason to Φ” is given by the internalist model’ (1995, p. 40) has some counter-intuitive implications. It sounds odd to say that the man who drank poison from the bottle misleadingly labelled ‘Gin’ had no reason not to drink it: although there was no reason for not drinking that he recognised or (let us suppose) could have recognised in the circumstances, the fact that the liquid was poisonous grounded one very strong argument why he should not drink it. Here, the reason against drinking appears real enough, despite being external to the agent’s S and unregarded by him. To be sure, the unlucky drinker, when he subsequently writhes in agony on the floor, cannot be blamed for getting himself into his present state; but he has assuredly done something that it would have been better not to do, as he himself now sees. Williams, in relation to the wife-beater example, questions the existence of a difference between ‘saying that the agent has a reason to act more considerately’ and saying, ‘for instance, that it would have been better if they acted otherwise’; since the former statement fails to get a grip (on internalist presuppositions), the latter must fail to do so too (1995, p. 40). But we might simply reject this claim: one does not have to be a feminist to protest that there are many very powerful reasons why husbands should not beat their wives, and that there is something amiss with a theory that undermines our ability to say of wife-beaters who are blind to those reasons that ‘it would have been better if they had acted otherwise’. Still, there would be very little point in criticising the wife-beater for failing to give the reasons against mistreating his wife their due weight if it were sheerly impossible for him to grasp those reasons. And that might be the case if his motivational set were a completely closed affair, capable of admitting no new reasons (and perhaps incapable of shedding any of its present contents). Intriguingly, Williams himself rejects as insufficiently ‘liberal’ internalist theories of this restrictive sort. Instead: ‘We should not … think of S as statically given. The processes of deliberation can have all sorts of effect on S, and this is a fact which a theory of internal reasons should be very happy to accommodate’ (1981, p. 105). But the question is now why, if an agent’s S is permeable, in the sense that it may be reached by new reasons that were not previously within it, moral reasons should not be able to do this—so that, in the case of the
wife-beater, reasons against beating his wife might not, through deliberation, gain an entry; and if he refuses to engage in such deliberation, he would be open to censure for that refusal. However, there are considerable limits to the extent to which Williams is willing to liberalise his version of internalism. He envisages a person who has no interest in looking after his own health, and who resolutely rejects all attempts to persuade him that this is something that he has good reason to do. This person may surprise us, and we may find it hard to believe that he really has no interest in preserving his health; but once we accept that he truly is uninterested, then in saying that he nevertheless has a reason to preserve his health ‘we must be speaking in another sense, and this is the external sense’—and that, according to Williams, is a quite mistaken view of reasons (1981, p. 106). To this it might be objected that no rational person would be indifferent to preserving his health; but Williams counters this by endorsing a basically Humean theory of motivation, according to which what is rational for a person to do must be related in some appropriate way to his actual desires. The trouble for the external reasons theorist is that she needs to find some way of explaining what makes the health-neglecting subject’s position irrational for him; but, according to Williams, she has no resources with which to do this. (Note that she cannot say that, as a rational being, he ought to want to do the rational thing: for while he may readily accede to this, for him it is not ‘the rational thing’ to seek to preserve his health, given his lack of desire to do so.)5 There is, claims Williams, ‘an essential indeterminacy in what can be counted a rational deliberative process’, which the external reasons theorist seeks to disguise; and to criticise someone like the health-neglecter by suggesting that he is being irrational, ‘once the basis of an internal reason claim has been laid aside, is bluff’ (1981, p. 110, 111). While Williams accepts, then, that an agent’s S is not a closed system, he is insistent that novel reasons for action will only be admitted when they are related suitably to his existing S and can be readily integrated with its existing contents. Therefore claiming, for instance, that there were ethical reasons militating against slave-trading that Edward Colston should have recognised is (like waxing indignant with Louis XIV) merely ‘idle’ if the reasons stood in no such appropriate relation to Colston’s current S. But is it actually true that the moral reasons against slavery and
slave-trading were beyond Colston's reach? We may not think so, even if we are prepared to concede to Williams his internalist theory of reasons. It is not easy to say what exactly was missing in Colston. Did he fail to recognise blacks as fully human, or see them as a lower grade of human? (But even in this case, why should he have thought that that justified inflicting avoidable suffering on them?) Was he genuinely unaware of the real horrors of the slave-trade, or was he guilty of what Thomas Aquinas called 'affected ignorance', keeping his eyes and his mind averted from those horrors? Did he self-deceptively assure himself that the slaves' conditions were probably not so bad as the opponents of slavery painted them? Or did he think that the efforts made to Christianise the slaves, and thereby afford them the chance of eternal salvation, were an ample compensation for their enslavement? Whatever the truth, it does not seem too far-fetched to suppose that things could have happened to Colston that removed the scales from his eyes, enabling him to see the black person as the anti-slavery movement portrayed him, as 'a man and a brother'. Damascus-road conversion experiences do occasionally happen, and an internalist theory which rules them out a priori seals its own fate. Colston's motivational set already included desires for the well-being of the poor of his own race, and first-hand acquaintance with some poor blacks might have given him the nudge he needed to see that the similarities between them were greater than the differences, and discriminatory treatment irrational. The notion of a reason being 'outside' an agent's S itself requires more clarification than Williams supplies. A reason, R, could be currently not included in a particular agent, A's, S but be one that A is aware of as a potential reason; he may know it as a reason that has gained entry to some other people's subjective motivational sets, though it has not yet gained entry to his own. Or R may be a reason that has never crossed A's mind, whether or not it is ever likely to do so or appear persuasive to him if it does, given his social or individual circumstances. R may even be a reason that has crossed the mind of no one in A's society, being altogether alien to any of the thought patterns to be found there. One may initially wonder how much sense it makes to speak of R being a 'reason' at all for members of a society in which no one recognises it to be one; certainly it can have no impact on people who have never considered it (or never believed it to
have the slightest plausibility). Yet the idea of a reason being real though unrecognised gains cogency where grounds can be given for thinking that acknowledgement of that reason would have been good for the moral health of that society—an admittedly vague formula, but one which would permit us to say, for instance, that recognition of the reasons against wife-beating would have been a good thing in a society in which that practice was rife. Some philosophers favour the idea of what Michele Moody-Adams describes as 'a culturally induced "moral blindness" that renders persons unable to see what is wrong with a particular practice, or to recognize the possibility of alternative social and cultural arrangements' (Moody-Adams, 2002, p. 86). To illustrate this, she cites Michael Slote's contention that ancient Greek slave-owners 'were unable to see what virtue required in regard to slavery' because, while virtue really does make some demands in this area, they were blinded by their belief that slavery was 'natural and inevitable and thus beyond the possibility of radical moral criticism' (Moody-Adams, 2002, pp. 86–87; citing Slote, 1982, p. 102). Valid reasons, in other words, existed for the Greeks to give up their slave-owning practices which they failed to recognise as forceful, even if they cannot be blamed for doing what they could not perceive in their circumstances to be wrong or unvirtuous. Reasons, then, can be 'outside' a subject's motivational set with different degrees of remoteness. Reasons which he does not (yet) accept but which he knows to be accepted by people whose opinion he respects may be very close indeed to his personal S, and little persuasion may be needed for him to take these on board. Other reasons he may know to be accepted by some fellow-members of his society but not by those who 'think like him'; these have a lesser chance of becoming members of his own S, although if he is epistemically responsible, he will not dismiss them without giving them further thought. But reasons which fail to gain any foothold in his society may be too remote for them to be, in Williams's phrase, 'real options' for him. Even so, it would be a mistake to suppose that reasons that no one in a culture accepts are always reasons that no one in that culture could accept. Moody-Adams urges that it is 'one of the most serious flaws of the relativism of distance', as Williams understands it, that 'He assumes that hypotheses about what some agents could not do can be based solely on observations about what they did not do'
(Moody-Adams, 2002, p. 100). Slote's concept of 'moral blindness' she criticises on the same ground: 'Merely by learning a language—in particular, learning to form the negation of any statement—every human being has the capacity to question existing practices, and to imagine that one's social world might be other than it is' (Moody-Adams, 2002, p. 100). None so blind as those that will not see. Our unwillingness to admit that, for instance, the Greeks could have entertained some different thoughts about slavery stems, thinks Moody-Adams, from our own dislike of seeming 'smug or self-righteous' when it comes to judging the past. Here—ironically—'[r]elativism gives way to a decidedly non-relativistic reliance on evaluative notions—such as the moral unacceptability of smugness or self-righteousness—that ought to govern moral reflection on the past' (Moody-Adams, 2002, pp. 100–101). Moody-Adams is right to warn that our current liberal predilection for a more tolerant and pluralistic approach to ethics, along with our desire to avoid repeating the self-validating pretensions to virtue characteristic of some of our forebears, may lead us to be less critical of the faults of past people than we really ought to be. Maybe our expectations of what past people were capable of are set too low and we unintentionally insult them by systematically underestimating their capacities for moral deliberation. (We are, after all, not the only people who can think.) Moody-Adams points to empirical evidence from history, anthropology and sociology which suggests that no societies are without dissenting voices that pose a challenge to prevailing orthodoxies. But if no society is wholly monolithic in respect of its moral thought, then it must be wrong to suggest that there is ever only a single ethical option on offer. Indeed, 'the fact that no culture is an integrated fully individuable whole undermines the very basis on which it makes sense to think of a culture (or "historical circumstances") as a single determinate cause of behavior' (Moody-Adams, 2002, p. 95). Moody-Adams's cautionary note is well struck; yet it may, perhaps, sound a little too loudly. Orthodoxies can be very powerful and dissent a lonely business, requiring a degree of self-confidence that may itself seem like vicious self-assertion to its subject. A slave may detest the way he is treated and hate his master but it does not follow that he will morally condemn the institution of slavery; he may look on it as simply a part of
the normal world order. Miguel de Cervantes Saavedra, the author of Don Quixote, served in his youth as a galley-slave in a Moorish ship, but references to slavery in his later great novel are remarkably lacking in critical tone. Moody-Adams's remark that any speaker of a language who understands negation 'has the capacity to question existing practices' appears overly optimistic. The fact that a person who understands the sentence 'Slavery is an acceptable practice' also understands 'Slavery is not an acceptable practice' is a very slim basis for ascribing to her a capacity to envisage 'that one's social world might be other than it is'. That calls for an effort of imagination which goes well beyond recognising that 'Slavery is not acceptable' is a syntactically proper sentence of English. One can think, in the sense of entertaining in mind, the proposition that slavery is unacceptable without thinking (or even being able to think) that slavery is unacceptable in the assertoric sense of taking this to be the case (or even just possibly the case). Yet if Moody-Adams is somewhat too sanguine in regard to the average socialised individual's ability to 'think outside the box', she is right to question views which err in the opposite direction by maintaining that all thought must be thought inside the box. That is not true either, though it takes more than just an ability to understand negation to cast a sceptical eye on what most others in one's society regard as gospel. To be a dissenter requires not only the intellectual and imaginative abilities to envisage how things could be different, and why they would be better if they were, but also such qualities of character as self-confidence and self-belief, courage (of the moral variety, though sometimes physical courage may be needed as well), steadfastness, and—occasionally—sheer bloody-mindedness. Moreover, as Moody-Adams observes, if the dissenter finds few people to agree with her, or considers the prospects of changing others' views to be poor, then she 'may reasonably wonder about the possible futility of acting alone' (Moody-Adams, 2002, p. 100). For many reasons, a dissenter's lot is commonly not a happy one. Of course, it is easier to be an ethical dissenter in a society that values freedom of thought and expression, or in which plural or conflicting values already have some currency, than in one which prescribes a single mode of ethical thought—especially if it backs this up with penalties for deviation. All social living presumes some agreement in norms amongst
its members. On the very lowest estimate of what these should be, members must commit to refrain from murdering or seriously harming one another and consent to some mutually serviceable principles of exchange of goods and services; but the scope of such norms, and the extent to which they are enforced, varies. Someone who refused to conform even to such basic behavioural standards as these would be not so much a dissenter as a deserter, whose only option is to leave the society of her own free will before she is pushed. It is both a practical and a moral problem how much can reasonably be expected from a reflective person who rejects, on ethical grounds, certain practices of her society, where the active expression of dissent is unlikely to have much effect and/or is likely to be personally costly. History provides many inspiring examples of people who, individually or collectively, have protested against what they see as evil and have paid a very high price for doing so. Some brave souls have been willing to stand up for what they believed to be right even though they perceived that the evils they confronted would most certainly overwhelm them. The student members of the White Rose group who resisted the Hitler regime, printing and distributing anti-Nazi leaflets before, in a spectacular act of defiance, daubing the slogan ‘Down with Hitler!’ on the walls of Munich’s most hallowed Nazi shrine, the Feldherrnhalle, in February 1943, knew they were ensuring their own destruction but believed that the symbolic gesture needed to be made.6 Their heroic action may have gone well beyond the demands of moral duty, but in cases where the personal costs of protest are less high, failing to stand up and be counted may deserve to be considered a form of moral failure.

3 Justice

To people who care about justice, considerations of justice will be a source of internal reasons for action. Sometimes such considerations may come into competition with other, more self-regarding, internal reasons, and then a choice has to be made between them. The members of the White Rose group were not indifferent to their own safety but they chose to put their lives on the line because they cared more about making a stand against Nazi tyranny. However, it is sadly true that it is easiest to feel
passionate about justice where one is oneself on the receiving end of (what one takes to be) unjust treatment; and considerations about justice which concern primarily the interests of people with whom one has little or nothing to do may be less effective at generating internal reasons for action. Nevertheless, the responsible moral agent is not entitled to turn a blind eye to injustice where opportunity offers of opposing it to some effect. Where injustice is being done which she could do something to prevent or alleviate, she recognises that she has a reason to act. It might seem that Williams should object that she only recognises that reason as being within her S provided that she is already committed to 'the morality system'. But for Williams, reasons of justice have a force that is somehow grounded independently of considerations of 'morality'—considerations which he instead links to notions of 'human rights' (see especially 2005, pp. 62–74). When it comes to justice, 'It's nothing to do with me' furnishes, in Williams's eyes, a lame excuse for inaction where one sees wrongs that one could help to right. Williams believes that justice should be done, and human rights respected, by agents who are capable of ethical reflection, thereby implying that considerations of justice are a source of valid, non-ignorable internal reasons (or at any rate, reasons that, whether they are initially present or not, deserve to be brought within an agent's S on the basis of their intrinsic merits) for all such agents. Williams's readiness to make fairly bullish statements about what justice demands has surprised many of his readers, since such statements appear to be in tension with his rejection of moral objectivism and embrace of a pluralistic theory of values. That, after all, appeared to be the lynchpin of the thesis of the relativity of distance, which contests the right to pass historical moral judgements on the basis of our own ethical conceptions where these differ from those of the people we profess to judge. We may pass moral judgements on the practices of alien cultures within our contemporary world, he concedes: but that is only because we can engage in 'real' and not merely 'notional' confrontations with them. We have a chance of persuading wrongdoers of the present day to alter their ways, but we have no means whatever of persuading Louis XIV to treat the Huguenots more fairly. Yet when it comes to justice, as distinct from other ethical categories, Williams embraces a trans-temporal universalism which sits uneasily with the
relativity of distance. Indeed, he is quite explicit about the tension, writing in Ethics and the Limits of Philosophy that 'It may be that considerations of justice are a central feature of ethical thought that transcends the relativism of distance' (1985, p. 166). In the essay 'Human rights and relativism' he is equally forthright: 'there are some [ethical] judgements we can make about the past', and of these he singles out as a particularly significant subset 'those that we make in virtue of what I called the first problem of politics, the question of order' (2005, p. 69). To be sure, he concedes that political order can be achieved in more than one way, and the relativism of distance may still be taken to apply to the extent that past societies devised different ways of producing order that may not always be readily susceptible of moral comparison. ('[O]ur theories should not lead us to treat like manifest crimes every practice that we reject on liberal principles' [2005, p. 72].) Still, in Williams's view, there are some conceptions 'which apply everywhere', such as that 'might is not per se right: the mere power to coerce does not in itself provide a legitimation' (2005, p. 69). One might want Williams to be right about the universal validity of certain basic standards of just conduct, whether for his own reason that these are essential to the achievement of minimal order in any society, or for more 'utilitarian' reasons to do with their propensity to foster flourishing human lives. Still, it is hard not to feel that Williams is attempting something of a high-wire act in seeking to sustain a generally sceptical approach to the notion of universal obligations while insisting that there exist principles of justice that are forceful everywhere and everywhen. In Ethics and the Limits of Philosophy, 'the morality system', with its central concept of obligation, is alleged to be just one possible mode of ethical thought (that is, for Williams, thought about how we should live), yet the one which for various historical reasons has gained such pre-eminence in Western culture that it has come to be regarded as the only right way to think about ethics (1985, pp. 6, 174–96). In a powerful review of Ethics and the Limits of Philosophy, Samuel Scheffler argued that there is an 'instability' in Williams's position, since his repudiation of 'the theoretical structures produced by reflection', in so far as these rely on the 'thick [moral] concepts' which are standard within our own moral thinking but may be alien within other cultural contexts, 'leaves him with
critical resources that are inadequate to the tasks of social criticism he himself wishes to engage in’ (Scheffler, 1987, p. 421). Williams seeks to criticise racial and gender discrimination, for instance, as practices which can be shown to be irrational without needing to invoke concepts of ‘the morality system’, but his explanation that discrimination is ‘wrong because unjust’, says Scheffler, ‘sounds for all the world like a conventional judgement made from within the morality system’ (Scheffler, 1987, p. 425). Indeed, when Williams upholds the idea of ‘basic human rights’ which should not be violated (1985, p. 192), he appears to help himself to a notion of obligation of precisely the kind he objects to when considered as a component of the morality system (Scheffler, 1987, p.  426). Williams is keen to distinguish the obligation to respect basic human rights from what he sees as the bogus obligations of the morality system: in his view, the obligation to respect rights is more soundly based than the alleged obligations of the morality system because it is an obligation of justice. But can he make good on his claim that the demands of justice (or at least the broadest principles of justice) apply across all times and cultures, and explain why they, uniquely, are not subject to the relativity of distance? Ideas about justice are doubtless present within any society, but the content of those ideas varies. Williams’s difficulty is to justify thinking that one particular set of ideas—to wit, the ones which we who live in modern liberal (or quasi-liberal) societies accept and which condemn, for instance, racial and gender discrimination—should be held to be superior to the alternatives. This is a difficult task; and Scheffler for one is sure that the demonstration cannot be made. The claim that we are entitled to appraise as just or unjust ‘societies that are temporally quite remote from our own’, he complains: authorises us to use a certain form of words in talking about temporally distant societies, but that is all it does. What it certainly does not do is to secure the possibility of truth or objectivity for the judgements we make about the justice or injustice of these societies. Nor does it secure for our judgements of justice and injustice any sort of privilege at all as compared with conflicting judgements made from the vantage points of the remote societies themselves. (Scheffler, 1987, p. 428)

This criticism of Williams is hard to rebut, and in the final analysis it should probably be admitted that his attempt to salvage some hard or quasi-objective ethical truths from what he sees as the wreckage of the morality system fails through its own inconsistency. Admittedly, this does not entail that any alternative attempt to argue that some historical conceptions of justice have been more defensible than others must fail too; the special problem for Williams is that his swingeing attack on 'the morality system' cuts the ground away from under his own feet when he attempts to demonstrate why a certain conception of justice and 'basic human rights' can be saved from the general ruin. In fact, as we shall see, he himself gestured towards the possibility of 'giving to ethical life an objective and determinate grounding in considerations of human nature' if it can be shown that the adoption of certain thick ethical concepts rather than others would be more effective for generating 'the best social world for human beings' (1985, p. 153, 155). However, in a downbeat assessment which I shall challenge later, he adjudged this 'optimistic' project to be 'not, in my view, very likely to succeed' (1985, p. 153). One reason for Williams's pessimism here is that the idea of a best form of life for human beings has come to be a very suspect one; in a liberal and pluralistic age, any concept of a 'most ideal' form of life will seem purblind at best, at worst a generator of intolerance or paternalistic interference in other people's lives (the lives of those who are deemed not yet to have seen the light). Besides, as Williams points out, an excellent life does not stand to the beliefs involved in that life simply 'as premise stands to conclusion', since the having of those beliefs is a proper part of what makes that life excellent: hence, there is no simple detachment of the contents of the excellent or flourishing life from the ethical beliefs which inform it (1985, p. 154). Ethical beliefs are valuable not merely for the instrumental role they play in making possible certain forms of social living, but because to hold ethical beliefs is itself an excellent and enriching thing. However, the thick concepts that are involved in such enriching belief are culturally variable, and reflection on the excellence of a life that involves them 'does not itself establish the truth of judgements using those concepts or of the agent's other ethical judgements' (1985, p. 154).

There is no Archimedean point from which different cultures can be compared in respect of the excellence of their excellence-contributing ethical beliefs. Beliefs about justice are one kind of ethical belief which enriches its holder. And the lack of an Archimedean point from which to adjudicate between different conceptions of justice would not matter very much if, as a matter of fact, those different conceptions were really quite similar, people in diverse cultural settings largely agreeing on the fundamental standards of justice. For Williams, of course, uncovering inter-societal agreement on those standards would not demonstrate their objective truth: for the notion of objective truth in ethics is simply another mistake of the 'morality system'. Nevertheless, inter-societal agreement on standards of justice would allow us, he thinks, to ascribe to them, if we so wished, the only kind of 'objectivity' that is possible in the case of ethical, as distinct from factual, judgements; for such widespread acceptance would show that they were 'true … in the oblique sense that they were the beliefs that would help us to find our way around in a social world which … was shown to be the best world for human beings' (1985, p. 155). If, as I suggested in Chapter 2, the most plausible kind of moral realism is the naturalistic kind which locates the truth-makers of ethical judgements not in some realm of metaphysical abstractions but in real-life facts about human beings' needs and interests, then what Williams refers to rather unflatteringly here as truth in 'the oblique sense' should actually be considered the central notion of truth in ethics—or even the only one that makes any real sense. We shall return to the issue of truth in ethics and the virtues of a naturalistic account in Chapter 4, but meanwhile it is worth noting that Williams's own mode of supporting his account of 'oblique' ethical truth by reference to the existence of widespread agreement in judgements of justice rests on shaky ground, for the very good reason that convergence of views on justice has been far from outstandingly conspicuous in the historical record. The fact—which is embarrassing for Williams—is that opinions about justice have historically been many and multiform. Indeed, it could well be argued on the evidence of the historical record that this is the area of ethical thought in which opinions have been most various of all. Hence

Williams makes things particularly difficult for himself when he singles out justice as being the field of ethics in which normative demands are at their most plausibly universal. Scheffler comments that ‘people who have the concept of justice will disagree about its application even in central cases’ (Scheffler, 1987, p. 328). Alasdair MacIntyre reports Karl Marx’s view that appeals to justice made by English trade unionists fighting for recognition in the 1860s ‘were pointless, since there are rival conceptions of justice formed by and informing the life of rival groups’, including those which are divided by social class (and not only those) (MacIntyre, 1984, p. 252). Some proponents of natural rights have managed to combine what may seem to us to be exceedingly odd or inconsistent views about justice. For example, Locke, whose call for toleration of religious dissenters and views on procedural justice with regard to the acquisition of property have been enormously influential over the last 300 years, is thought to have helped to draft the Fundamental Constitutions of Carolina of 1669, which embedded the ownership of slaves (see Armitage, 2004). Aristotle, likewise, had praised justice and the just man while giving his sanction to the practice of slavery—even if Greek slavery was probably on the whole less cruel and dehumanising than was the later plantation slavery of the Americas. Views about justice are influenced by ideologies of politics, race, class, religion, gender and other factors that have varied greatly over the course of history; all too often these have turned on the making of invidious distinctions between ‘them and us’ which have soured relations among people and impeded the development of more liberal conceptions of justice and human rights. While there is no denying the historical evidence for the existence of divers views of justice, closer inspection shows that at least some of the seeming differences amongst conceptions are superficial rather than fundamental, representing culturally varying modes of applying the same basic ideas. This fact may offer a crumb of comfort to Williams. Aristotle early recognised it when he drew a careful distinction between ‘natural’ and ‘legal’ political justice in Book Five of the Nicomachean Ethics. On his account, ‘natural justice’ is ‘that which everywhere has the same force and does not exist by thinking this or that’; ‘legal justice’ is ‘that which is originally indifferent, but when it is laid down is not indifferent, e.g. that a prisoner’s ransom shall be a mina, or that a goat and not two sheep shall

be sacrificed’ (Aristotle, 1954, p. 124 [1134b]). One could imagine the citizens of a Greek city in which the prescribed means of expiating some offence was the sacrifice of a goat, clucking their tongues in disapproval at seeing the inhabitants of another city sacrifice two sheep for a similar offence; but their complaint that justice had not been done would mistake the surface for the substance. Aristotle’s sensible view is that the demands of ‘natural justice’ need to be painted in broad brush-strokes, while ‘legal justice’ supplies the fine detail, setting out the conventions for their expression in different settings. Aristotle’s conception of natural justice is a strikingly capacious one; at 1129b, he even goes so far as to suggest that justice is ‘complete virtue in its fullest sense’, the just man being essentially the man who displays all the other virtues in regard to his own affairs and in his dealings with his neighbours (Aristotle, 1954, p.  108 [1129b]). Justice conceived in Aristotle’s way as a virtue is concerned with the choice of actions. On the broadest formulation: ‘if a man harms another by choice, he acts unjustly; and these are the acts which imply that the doer is an unjust man, provided that the act violates proportion or equality’ (that is to say, fails to give others their due, according to Aristotle’s somewhat complicated system of calculating what people in different social positions are entitled to) (Aristotle, 1954, pp. 127–28 [1136a]). The unjust man is ‘grasping’ (Aristotle, 1954, p. 107 [1129b]), and his grasping nature leads him into all sorts of bad behaviour, of which Aristotle gives a long list of examples: ‘theft, adultery, poisoning, procuring, enticement of slaves, assassination, false witness, assault, imprisonment, murder, robbery with violence, mutilation, abuse, insult’ (Aristotle, 1954, pp. 111–12 [1131a]). This list of species of injustice includes what Williams would wish to consider as included in the concept and then some more. Indeed, there is not much belonging to ‘the morality system’ that is not encompassed by Aristotle’s conception of justice. And since Aristotle considers, with much plausibility, that the offences he lists have a common root in a natural disposition to favour one’s own interests above those of others (something that we all ought to learn to restrain in ourselves), he poses a tacit challenge to Williams to explain why justice as he conceives it should be singled out from the rest of ‘the morality system’ and alone considered to be of universal application and to transcend the relativism of distance. In

fact, Williams never makes it very clear exactly how he means to draw the dividing line between justice and the objectionable 'morality system'; and while he sometimes qualifies the noun 'justice' by the adjective 'political', his habit of linking talk about justice with talk about human rights (another vaguely delimited but evidently extensive category) might suggest that, like Aristotle, he understands justice in a markedly broad way. And if that is so, then his opposition to the morality system becomes still harder to sustain. Williams would certainly regard as justice rights such rights as those not to have one's freedom taken away, or one's goods stolen, or one's political privileges denied or removed on racial or religious grounds. These must therefore be rights of the kind he views as not being subject to the relativity of distance. But would he also regard rights not to be insulted, ridiculed or lied to, or not to have the promises made to one broken, as rights of justice? If he considers these as remaining outside that charmed category, because they are rights of some lesser kind or not genuine rights at all, then he ought to dismiss their pretensions to hold good always and everywhere. Yet it is hard to see why a right, say, not to be gratuitously insulted should be seen, just because it is not classifiable (on Williams's conception) as a justice right, as having a poorer claim to universal validity than a right not to have one's property stolen. If, instead, Williams were to concede that the right not to be insulted was a justice right, then he would seem to be employing the idea of a justice right in a sense wide enough to admit a great many more rights which, though not pertaining to 'political justice' narrowly construed, could reasonably be considered to find their grounding in the 'morality system'. The dilemma for Williams, then, is that he must either stipulate an implausibly restrictive list of those human rights which are valid always and everywhere, or, by allowing that the list of such rights contains a lot more rights than simply those pertaining to 'political justice', effectively abandon his outright rejection of the 'morality system' and allow that it, or at any rate substantial aspects of it, are universally valid too. It is difficult to see how Williams could escape this dilemma, and the conclusion of this discussion must be that his attempt to draw a distinction between the ethical principles (those relating to justice) which escape

the relativity of distance and those that do not (those belonging to the ‘morality system’) is a failure. Nevertheless, Williams’s desire to establish that there is some basis for making at least some historical moral judgements, even if the objectivity of morals that metaphysical meta-ethicists believe in is a will o’ the wisp, deserves respect. It is a rebarbative thought that we are so culturally divided from the past that we can assume no moral common ground with our forebears, rendering any attempt on our part to pass historical moral judgements an unjustified pretension. As we have seen, Williams himself refers, albeit offhandedly, to the notion that certain norms and standards for doing things may be more effective than others at assisting us to ‘find our way’ within a ‘social world’ that is the ‘best world’ for human beings (1985, p. 155). In the next chapter, I shall argue that Williams’s apparent reluctance to concede that practical considerations of this kind supply a basis for the ethical appraisal of norms, acts and practices not only of the present day but of the past too is unwarranted—always provided that we can overcome satisfactorily the obstacles posed by the previously-mentioned problems of access and of relevance.

Notes

1. All date-only references in the present chapter refer to works of Bernard Williams.
2. The fundamental problem with 'vulgar relativism' is that it inconsistently maintains that since 'right' always means 'right for society X', it is wrong for members of another society, such as Z, to criticise X's value judgements—the inconsistency being that the word 'wrong' is being used here to make an objective judgement (see Williams, 1972, pp. 34–35, 1981, pp. 142–43).
3. The ethic of the slave-traders and slave-owners appears to us at fault in the further respect that it supposes that there can be property rights in human beings.
4. Even in the twenty-first century, the equal membership of all human beings in the kingdom of ends is evidently far from being accepted by all: racism, sexism and religious bigotry are alive and well in many places and continue to spew their familiar poisons. Yet there are some hopeful signs:

bigotry and discrimination are less taken for granted, more forced onto the defensive, than they ever were before and notions of human rights continue to garner international acceptance.
5. Gilbert Harman, defending a strong form of meta-ethical relativism, once wrote that if Hitler 'was willing to exterminate a whole people, there was no reason for him not to do so' (Harman, 1977, pp. 108–09). Williams's remarks on the man who is careless about his health suggest that his Hume-influenced version of reasons internalism should allow the same disturbing conclusion about Hitler, although doubtless he would insist that the ban on genocide comes within the ambit of justice.
6. For the story of the White Rose group, see Dumbach and Newborn, 2006. Many members of the group were executed by the guillotine, others being killed by firing-squads.

4 Choosing a Standpoint

1 'Take Nature's Path, and Mad Opinions Leave' (Alexander Pope)

In seeking to reconcile his thesis of the relativity of distance with the contention that there are certain basic standards of justice that ought to be observed in any human social situation, Bernard Williams attempts, as we have seen, a delicate balancing act which is, on the kindest estimate, only doubtfully successful. On a less generous appraisal, Williams might be said to be engaged in an ultimately hopeless endeavour to reconcile the irreconcilable. Yet it should not be forgotten that the motives moving Williams are highly respectable ones and by no means peculiar to himself. Other philosophers too have asked the question of how we can be liberal, tolerant and open-minded in the face of values and value-systems that differ from our own, while rejecting an unrestrictedly permissive view that rules no 'morality' out of order, however rebarbative the practices it sanctions. Centrifugal and centripetal forces pull here in contrary directions, the pluralist impulse to escape from the strait-jacketing view that there can only be one set of right principles for living an ethical human

life being restrained by the desire to deny an ‘anything-goes’ approach to moralities which concedes validity to the ethical standards of Nazis or slave-owners or Aztecs. To bring these competing forces into equilibrium requires determining the proper limits of liberalism by selecting a standpoint of appraisal which is applicable to particular moral systems both past and present. It might be objected that ‘determining the limits of liberalism’ is itself hardly a liberal project. Yet as Tim Mulgan has shrewdly observed, in the real world often ‘liberals must take sides, and found their theory on practices and beliefs which are congenial to only some religious or cultural groups’ (Mulgan, 1999, p. 69). And indeed, there is nothing to be said for a liberalism that finds nothing to choose between the ethical views of Adolf Hitler and those of Mahatma Gandhi. A different sort of objection is that by favouring one set of moral views over another we implicitly commit ourselves to a universalist view of ethics according to which acts and practices are in some sense ‘objectively’ right or wrong, whatever people of a particular culture may believe about them. As we saw in Chapter 2, there is no need to suppose that the claim that some ethical conceptions are superior to others must involve some metaphysically tendentious postulation of moral facts or values built into the very fabric of the universe. Rather than maintain that our ethical beliefs are correct when they correspond to such abstract and mysterious metaphysical structures, it is far more intelligible to look for a grounding for ethics in natural facts about human beings and how they can flourish.1 Ethics, after all, could be of very little practical interest to us if it were not immediately and fundamentally concerned with our own well-being; and it is impossible to make sense of the idea of good reasons in ethics unless those reasons are keyed to considerations about how lives go well or badly. Ultimately the reason why we should not be cruel is not that cruelty is ‘wrong’, in some sense of that word to be explained in terms of the ‘moral structure’ of the universe, but because cruelty hurts and belittles people, even, at its worst, making them wish that they were dead. It is for that and no other reason that we call it ‘wrong’. It is worth noting, too, that a naturalistic approach to ethics has the further advantage that it is highly compatible with a liberal and pluralistic outlook, since it allows that there can be many different ways of living a flourishing human life which

respect basic human needs and interests (on which more anon). Therefore, in seeking to identify a standpoint from which trans-cultural and transtemporal moral judgements can be made, it is to naturalistic theories I shall look, since approaches of this kind offer much the best hope of supplying a point d’appui on which moral judgements can be based which avoid being either anachronistic or question-begging. One writer who is keenly opposed to the notion of objective moral values and to the view that there is one single ‘best’ way of living is J. David Velleman. Although Velleman is a self-professed moral relativist who denies ‘that there are universal moral norms of any kind, and that there are necessarily ubiquitous norms of morality’, he is keen to point out that there are nevertheless significant ‘eligible points’ at which the norms of different societies are liable to converge, on account of certain constraints that are imposed by human nature (Velleman, 2015, p. 94). Observing that people have lived according to many different moral norms, he insists that ‘no one has ever succeeded in showing any one set of norms to be universally valid’ (2015, p. 75). Even so, what Velleman terms ‘the drive towards sociability’, which he takes to be deeply rooted in human beings, creates a need for norms which can form a common frame of reference for individuals’ social existence and make them ‘mutually interpretable’ to one another (2015, pp. 83–84). Norms appropriate for fostering sociability will express pro-social rather than anti-social attitudes and actions, ‘in the sense that they will favor mutual benefit over mutual harm’ (2015, p. 95). And because what constitutes benefit and harm is in considerable part determined by our human nature, there are ‘certain attitudes on which we humans cannot help but converge’: e.g. we are all averse to ‘pain, separation, and frustration’ and inclined towards ‘pleasure, connection, and the fluid exercise of skill’; we also have ‘an array of physiological appetites in common’ (2015, p. 94). Velleman is adamant that there is no universal morality that provides a mode of comparing the sets of norms that different societies elect to live by, but he allows that some ways of life are better than others at producing a balance of mutual benefit over mutual harm. There is some tension here between Velleman’s relativistic refusal to admit that there are any ‘necessarily ubiquitous norms of morality’ and his admission that some ways of doing things are better than others. For

his contention that while an East African Kikuyu mother has a good reason to circumcise her daughter 'within the Kikuyu way of life', our own way of life which outlaws female circumcision may be 'more efficient' at fulfilling the 'requirements of sociability', comes suspiciously close to acknowledging a standard of judgement (to wit, efficiency at promoting a balance of mutual benefit over mutual harm) that enables different norms to be compared (Velleman, 2015, 75f.). This seems not so far away from Williams's concession that ethical judgements that express 'beliefs that would help us to find our way around the social world which … was shown to be best for human beings' are true in an 'oblique' sense (Williams, 1985, p. 155). Admittedly, Velleman's use of the adverb 'necessarily' may be intended to signify no more than that since human nature might have been substantially different than it is, some of the norms that have widespread appropriateness given the way we are might not have done so had we been markedly other than we are. But that, while true, seems a fact of very little interest. Given the way that human beings actually are, it is less than clear that certain norms (such as a rule against wanton or random killing of other people) are likely ever to be dispensable. But even if Velleman were right and there were no 'ubiquitous norms of morality' that any society must accept if it is to achieve a minimally acceptable level of sociability, there plainly are facts about human nature that speak in favour of the adoption of certain norms in preference to others. Velleman is not alone among philosophers who defend moral relativism in allowing that some norms and standards of behaviour are more apt than others to promote human flourishing. Another writer who does so is David Wong, who advocates a position he calls 'pluralistic relativism'. Wong finds no difficulty as a relativist in allowing that an ethical system that is not well geared to the cultivation of human well-being is defective. While the good life can be lived according to many different recipes, he is careful to emphasise that there are some weighty constraints on what counts as an acceptable 'conception of flourishing': Viable conceptions of flourishing center on interests that are deeply rooted in human nature such that they can be identified as what human beings really want, that have powerful motivational force in overriding other

interests, and whose satisfaction or frustration widely ramifies throughout a person’s character and life. These might be called needs, and ‘true’ ones at that. (Wong, 2006, p. 221)

Wong explicitly describes his account as ‘naturalistic’, and he argues that it ‘generates significant constraints on what could count as an adequate morality, given its functions and given human nature’ (2006, p. 44). Like Velleman, he lists some of the things that human beings wish for, ‘given their nature and potentialities’: these include ‘the satisfaction of their physical needs, the goods of intimacy, sociability, and social status’, plus the opportunities to discharge their energies, pursue variety in their activities, and to acquire knowledge. Our ‘deep human propensities’, thinks Wong, do not determine morality, but they do place limits on it (2006, p. 44). Also like Velleman, he is chary of postulating norms that must be included in any acceptable moral system (compare Velleman’s rejection of ‘necessarily ubiquitous norms’) but he is clear that certain forms of behaviour that prevent or impede the promotion of flourishing lives deserve everywhere to be ruled out. Wong also remarks the important point that ‘some forms of flourishing are partly constituted by promoting the flourishing of others’ (2006, p. 221). It is indeed hard to see how a system of purely selfish or self-seeking norms could avoid defeating its own ends, in view of the deep human need for emotional connection with others. A naturalistic theory of morality need not be tied to a (or what is claimed to be a) relativistic meta-ethic. Robert Audi claims to be a pluralist rather than a relativist, but he concurs with Velleman and Wong in holding that ‘certain basic elements must figure in any good life—or at least any life that is both morally permissible and social rather than solitary’. Thus ‘certain moral standards must be respected, particularly minimal standards of justice, freedom, and happiness’; also to be recognised are ‘obligations of fidelity and veracity, beneficence and non-injury, gratitude and self-improvement, and reparation and justice’ (Audi, 2007, p. 278). Another naturalistic theory which is explicitly non-relativistic is that of Philippa Foot, which she classifies as a form of naturalistic realism. Foot’s account is explicitly indebted to Aristotle’s thesis that each species has its own distinctive good, although she does not follow Aristotle in

taking this to imply that there is a single best form of life for human beings if they are to be eudaimon (happy or flourishing). Foot concurs with the view of Velleman and Wong that some prospective norms are more apt than others to foster sociability and social living, granted some basic truths about human beings. She argues that ‘the grounding of a moral argument is ultimately in facts about human life’, and she takes serious issue with those writers who hold it to be ‘monstrous’ to suppose that ‘the evaluation of the human will should be determined by facts about the nature of human beings and the life of our own species’ (Foot, 2001, p. 24). Just as there are conditions that constitute the flourishing of other species of plants and animals, so too there are ‘patterns of natural normativity’ in the case of human beings, ‘properties and operations’ which make for their flourishing given the kind of beings they are— albeit ‘there will be a great increase in the number of respects in which evaluation is possible, if only because human lives contains so many and such diverse activities: because human beings do so many different kinds of things’ (2001, pp. 38–39). Foot’s ‘patterns of normativity’ thus find an objective footing in relevant facts about the conditions for human flourishing. Similarly to Hume before her, Foot believes that there is a human nature that remains the same behind all the local variations in ways of living, and that values and practices are always in principle open to evaluation by reference to facts about that nature (cf. Hume 1963 [1751], pp. 83–84).2 It does not quite follow from the fact, assuming it to be one, that there are needs and interests common to all members of the human species that people will always in principle be mutually intelligible to one another, provided only that they make sufficient effort to discover common ground. Hence when Foot remarks that facts about human nature provide a foundation for judgements evaluative of values and practices, this should not be taken to imply that an understanding of why people act or believe as they do will be the invariable result of patient enquiry; occasionally, when we look at ‘alien’ cultures of the past or present, we might just have to confess ourselves baffled as to why members of those cultures thought, acted or evaluated in the ways they did; it is simply too difficult to get inside their heads. However, even when understanding eludes us, we may still be justified in judging that certain acts, practices, modes of

social organisation, religious commitments, etc. are ill-designed to foster the flourishing of beings with characteristics of the human type. As we shall see more fully in the next chapter, criticising a people’s practices can be detached from criticising the people themselves: as the scriptural injunction to hate the sin and not the sinner reminds us, the moral quality of agents cannot be unproblematically inferred from the moral quality of their deeds. Terrible things have often been done from the best of motives, or what have seemed to be so to the doers. But that they are terrible, and that it would have been much better if they had not been done, may be apparent from a standpoint which is sufficiently external to the context of action to be impartial yet which, by referring to objective facts about natural human needs, has no pretensions to transcend the human. Some modes of living are more efficient than others at promoting mutual good, and some lay themselves open to criticism for fostering restricted or discriminatory forms of sociability, but most philosophers today have wisely given up the search for the one form of life which is better than all others. Basic human needs can be satisfied in culturally variable ways, and certainly there are many forms of life that accomplish more than simply this bare minimum. In the words of Isaiah Berlin: There are many kinds of happiness (or beauty or goodness or visions of life) and they are, at times, incommensurable: but all respond to the real needs and aspirations of normal human beings; each fits its circumstances, its country, its people; the relation of fitting is the same in all these cases; and members of one culture can understand and enter the minds of, and sympathise with, those of another. (Berlin, 1991, p. 84)

It must be emphasised that accepting the pluralist’s claim that there can be many different modes of living a satisfying human life by no means commits us to asserting that there is no properly neutral standpoint from which any particular mode can be criticised. Nature itself is a source of quality controls, and a form of life that fails to satisfy such basic human needs as those for food, shelter, security from violence, love and companionship, self-respect, freedom to develop skills, rest and leisure, is defective at the deepest level. Likewise inadequate is a social order which satisfies these needs for certain people only and not for all; such an order

70 

G. Scarre

fails to promote that sociability which Velleman views as being both a general and an individual good.3 As Audi reminds us, ‘For many of our best experiences are social experiences or, if not strictly social, then socially grounded, as where we think of our audience as we write, or of the game we are to play as we practice alone’ (Audi, 2007, p. 62).

2 Reason and Sentiment in Sociable Living It would be an impoverished view of sociability which saw it merely as a means for the (more) efficient satisfaction of essentially self-regarding interests, though it does have that function amongst others. In explaining his concept of sociability, Velleman alludes to Hume’s image of the need of two men who are rowing a boat to coordinate their actions with the oars so that the boat will go in the desired direction rather than revolve in aimless circles. Unlike where two persons are riding on a tandem bike, Velleman points out, neither oarsman can afford to slack off and let the other do the work; in this instance, ‘[t]he need to coordinate … produces mutually beneficial joint effort’ (Velleman, 2015, p. 95; cf. Hume, 1888 [1739], p. 490). This observation, allied to his subsequent remark that ‘[d]ifferent communities, already made alike by human nature, will also be shaped alike by the need for coordination, which favors their pro-­ social over their anti-social tendencies’ (2015, p. 96), suggests that Velleman sees sociability chiefly as a set of coordinating strategies which enable individual members of society to realise benefits which they would not be able to realise by their own unaided efforts or realise so effectively if (like the lazy tandem rider) they relied on others to do the heavy lifting. As such, this is a notably instrumental account of what sociability is and why it matters, suggestive of a contractualist view of social mores which finds their justification in their ability to benefit the individuals who consent to abide by them. One may concede, however, the usefulness to individuals of living by certain generally agreed principles without thinking that this need be all there is to sociability; to revert to Hume’s analogy, while a purely selfish man might agree to coordinate his oarsmanship with that of another purely because he wants to get to the other side of the river, a less selfish person might discover a further motive for taking

4  Choosing a Standpoint 

71

up the oars in the shape of his rowing partner’s similar wish to arrive there. The latter oarsman may be said to be more truly sociable, or be sociable in a deeper sort, than the former. Were human beings altogether various in their basic requirements and desires, arriving at a mutually-acceptable set of arrangements by which they could live in community together would be well-nigh impossible. Contractualist theories of justice and political obligation, such as the famous version presented in John Rawls’s A Theory of Justice, rely on the assumption that there is sufficient similarity in our fundamental needs to make it feasible in principle to devise a system of social arrangements which everyone could agree to as treating them fairly as individuals (see Rawls, 1971). While actual societies, past and present, have mostly failed—some more conspicuously than others—to meet the standards set by the Rawlsian model of ‘justice as fairness’, the model nevertheless offers a valuable template for thinking about social justice in a manner informed not by airy ideals but by empirical facts about normal human interests.4 Like Velleman and other writers, Rawls proposes that there are certain goods that every person must have in some degree, and therefore that every rational person will wish to have, if he or she is to lead a life of positive value. ‘Primary goods’ he characterises as those goods of which we each need at least a modicum for our life to be even minimally satisfying. The category of primary goods breaks down into three subsets. The ‘social’ primary goods include ‘rights and liberties, powers and opportunities, [sufficient] income and wealth’; ‘natural’ primary goods are ‘health and vigor, intelligence, and imagination’. Then there is the good of self-­ respect, which Rawls suggests is perhaps the most important primary good of all, since a life from which it is absent would be a subjectively intolerable life (Rawls, 1971, p. 62, 440): a person who could not value herself—maybe because she failed to receive any respect from those around her—would be utterly wretched and feel that her life was not worth living. Besides the primary goods, there are other goods which may be of great importance to some individuals without being so to all since they are heavily dependent on personal tastes or inclinations: such ‘secondary goods’ could include music-making for Mary and playing baseball for John. Although secondary goods are goods-according-to-taste and in that sense non-essential, the freedom and opportunity to enjoy the

secondary goods of one's choice may be considered to be itself a significant primary good. While secondary goods have individual rather than universal appeal, it would be a remarkably dull life in which no secondary goods at all were valued or sought for. On a contractualist model such as Rawls's, contractors are motivated to sign up to the contract because they perceive that their doing so, provided that everyone else does so too, is to their personal advantage. This is, as Rawls concedes, a selfish motive for agreeing to be bound by rules and norms, although he readily acknowledges that real people are not often purely selfish but usually entertain some more other-regarding feelings of sympathy, benevolence, love and affection (1971, pp. 13, 129). But because such feelings are variable in strength and for the most part quite partial (e.g. we usually care more about the welfare of our friends than we do about that of distant strangers), Rawls thinks them incapable of providing a stable basis for a social contract and prefers the claims of self-interest instead. Yet it is not entirely comfortable to think that the rational grip of social norms on individuals boils down in the end merely to considerations of self-interest. This makes morality look much too similar to a commercial contract in which each of the signatories is concerned with the other's welfare primarily in order that they should be capable of delivering the goods or services promised under the terms agreed. It would be pleasant to be able to believe that the rational force of the norms that govern one's moral relationships with others stemmed from something less narrowly self-centred than merely a prudential concern for one's own welfare. To be sure, adherence to common norms often does generate a richer kind of sociability than is involved in a merely 'business' arrangement: thus Hume's two men who row the boat together, initially each for his own advantage, may enjoy their companionship and finally become firm friends, each delighting to serve the other's interests when opportunity offers. However, for the contractualist such friendly or fraternal feelings can be really only the icing on the cake, never an essential ingredient. Someone who followed the rules of the contract scrupulously but otherwise cared nothing about the well-being of her neighbours would still count as a thoroughly moral person: on the contractualist perspective, she does all that is, or reasonably can be, demanded from her.
It appears, then, that a theory of morality that does not accord a central role to a non-self-interested concern for others is at worst mistaken, at best incomplete.5 But it is by no means immediately obvious exactly what form the required concern for others ought to take. People who are kind, sympathetic, loving and affectionate are more attractive than those who are selfish, cold and heartless; yet in noting that feelings and sentiments are too variable from person to person, often too partial in their targets and insufficiently constant to anchor a workable system of norms, Rawls’s view may remind us of Kant’s observations that ‘love, as an affection, cannot be commanded’, and that while people who ‘find a pleasure in spreading joy around them’ are undoubtedly very ‘amiable’, there is a distinction between acting from any such ‘inclination’, however appealing, and acting from a sense of duty (Kant, 1909, pp. 14–15). Arguably, Kant here makes the ‘inclination’ to do good to others sound a more selfish stimulant to action than it really is, since unless a person does good to others for the distinctly unamiable purpose of looking good or seeking for some quid pro quo, she takes pleasure in what she does precisely because she genuinely cares about the happiness of others. Still, the contention that ‘love, as an affection’ (or what Kant calls, rather dourly, ‘pathological love’ [pathologische Liebe]) is not something that can be commanded is hard to dispute. Warm or affectionate feelings are not under our voluntary control, even if it is sometimes possible to make the effort to discover features in a person that will make him more loveable in our eyes. This explains why Kant will have nothing to do with Hume’s view that ‘Moral good and evil are certainly distinguish’d by our sentiments, not by reason’ (Hume, 1888 [1739], p. 589). According to Kant, the only kind of love that can be commanded is ‘practical love’ [practische Liebe]—which he explains as doing our moral duty by our neighbour—because that alone is ‘seated in the will, and not in the propensions of sense—in principles of action and not of tender sympathy’ (Kant, 1909, pp. 15–16). Kant explains the duty to love in the practical mode by reference to his theory of the Categorical Imperative. This advances from a demonstration of the rational necessity of the Moral Law to the conclusion that those beings which are capable of grasping and of following the Law are worthy of the highest respect for their possession of ‘rational nature’.6 Human beings may not be the only such beings—God and the angels, if
they exist, are others—and owners of rational nature are to be conceived as forming a Kingdom of Ends whose members are rationally committed to abide by the Moral Law (whose precepts are to be determined by the test of the Categorical Imperative) and who uniquely possess intrinsic worth. Although we need not pursue more closely here the stages of Kant's subtle and intricate argument, its upshot should be noted: human beings, as rational beings and consequently ends-in-themselves, ought never to be treated merely as means to others' advantage. Particularly obnoxious, as failures to treat people as ends-in-themselves, are practices of exploitation which deny them the freedom, as rational agents, to make their own action-choices, subject to the categorical imperatives of the Moral Law (1909, p. 49). Such practices are unacceptable because they are disrespectful to beings who possess a power of rational willing that raises them above the lower animals and whose dignity demands that they must never be treated as mere means or commodities. A man, says Kant, 'is not a thing, that is to say, something which can be used merely as means, but must in all his actions be always considered as an end in himself' (1909, p. 47). This principle does not prevent people from entering freely into social compacts in which they agree to exchange goods or services as free and equal partners, but it does prohibit institutions which value people solely or primarily for their usefulness to others. On Kantian principles, slavery and other exploitative employment practices are necessarily interdicted, as is also the treatment of women as chattels in patriarchal societies. The notion of respect for persons as ends-in-themselves which occupies so central a place in Kant's ethics has particular resonance in our own day, when the language of respect has come to play a very prominent part in social and political discourse. Whenever we talk of respect for persons, or more obliquely speak of respect for human rights, we walk in the tracks of Immanuel Kant. Stephen Darwall has drawn a further useful distinction between two kinds or species of respect, which he labels 'recognition respect' and 'appraisal respect' (Darwall, 1977). Some people live lives that are more meritorious than those of others; their achievements and successes are more substantial and significant, or they display more moral excellence in their character and actions. Mary's piano-playing deserves greater 'appraisal respect' than Peter's does, because Mary's playing is
superior to Peter’s.7 But neither Mary nor Peter merits more ‘recognition respect’ than the other, because this is the more fundamental variety of respect that is due to them on account of their status as equal members of the Kingdom of Ends. It is ‘recognition respect’ to which Kant refers when he declares that no human being should be regarded as a ‘thing’. And because membership of the Kingdom of Ends is a status that can never be forfeited or taken away, recognition respect is owed to all human beings irrespective of how well or badly they live their lives.8 The respect that Kant considers due to all ends-in-themselves obtains expression in the ‘practical loving’ that consists in treating other people according to the precepts of the Moral Law. Warm and benevolent feelings for others are, in Kant’s eyes, as we have seen, dispensable. Unsurprisingly, many of his readers have found this to be a doctrine lacking in heart. Kant is often thought to do himself no favours when he writes that a man whom ‘nature had not specially framed … for a philanthropist’, and who possessed ‘a temperament cold and indifferent to the sufferings of others’ would, in relieving others’ distress out of a sense of duty, display ‘a far higher [moral] worth’ than one who acted at the prompting of a ‘good-natured temperament’ (1909, pp. 14–15). It is not, indeed, that Kant disapproves of acting out of goodness of heart; his claim is rather that the moral value of character is ‘highest of all’ when an agent ‘is beneficent, not from inclination, but from duty’ (1909, p. 15). The trouble with goodness of heart or ‘pathological love’ is that it is capable of leading us astray from the moral path, as, for instance, when one helps a loved one to evade justice or to obtain goods or privileges to which she has no right. Kant’s conviction that feelings are unreliable guides to moral conduct persuades him to expel them entirely from the process of moral deliberation. Hence his view is the polar opposite of Bennett’s ‘Humean’ account according to which feelings are generally a more reliable guide to moral action than conceptions of duty, the latter rather than the former being the more liable to lead an agent into error (Bennett, 1974). The obvious Kantian response to this claim is that moral misjudgements (such as those which Bennett lays at the door of Heinrich Himmler, who dutifully followed the genocidal commands of his Führer) should not occur, provided that the commands of duty are properly determined by the method of the Categorical Imperative. And Kant would
certainly conclude that we had not determined our duty correctly if we should ever find ourselves, like Himmler, treating other persons as ‘things’ rather than as ends in themselves. For Kant, then, morality and moral conduct are a matter of the head, not the heart. But if we find it difficult to share Kant’s degree of admiration for the man whose temperament is ‘cold and indifferent to the sufferings of others’ yet who grits his teeth and does his duty by them, we might seek ways to mix the Prussian blue of this stern account with some softer shades. One rather obvious suggestion is that, even if we conceded to Kant that the hard-hearted man of duty displays ‘a far higher worth’ than the good-hearted agent who acts well from ‘philanthropy’ alone, we could nevertheless insist that the agent who shows the highest worth of all is the one who is both dutiful and good-hearted. In any case, experience strongly suggests that in the majority of moral agents, duty and inclination operate in a more integrated and organic manner than Kant seems to envisage. In portraying moral reason as being concerned exclusively with the appraisal of maxims by the criterion of the Categorical Imperative, Kant divorces it from the sentiment he calls ‘pathological love’. Yet when he speaks of the respect that is due to human beings as possessors of rational nature, he may draw more closely than he realises to allowing a role for feeling. For respect does not appear to be a matter of the intellect alone, but rather to mingle thought and sentiment. The Concise Oxford Dictionary defines ‘respect’ as ‘deferential esteem felt or shown towards a person or quality’ (my emphasis). One could not entertain genuine respect for someone or something unless one believed her or it to possess value or worth, but to respect someone or something is not simply to think highly of them or of it, but to feel a sense of awe, wonder, admiration or other similar emotions that encourage one to act well towards the person or object in question. In reality, the recognition that human beings are deserving of respect is far more closely entwined with ‘pathological love’ or benevolent feeling than Kant acknowledges. The sympathetic pain we feel at watching television reports of children starving and homeless in a war-zone is inextricably combined with regret for the loss of human value that such a tragedy entails. And the repugnance and disgust we feel at witnessing this waste of human life is testament to our Kantian conviction that human beings deserve to be treated,
always and everywhere, as ends in themselves. Our sorrow for the suffering child is sorrow at the utterly inappropriate suffering of a being possessing intrinsic worth, a member of the Kingdom of Ends. How, then, should we conceive the interplay of reason and sentiment in moral reasoning? Ideally, I suggest, they should be brought into a state of reflective equilibrium in which each serves to check or moderate the other, and moral judgement is the outcome of a partnership between them.9 Among the more specific roles that have been assigned to reason is that of supplying general rules of conduct—whether these are taken to be absolute rules, as Kant claims, or as prima-facie principles that normally hold good but may occasionally be waived where the cost of following them is unusually high (e.g. one may be permitted to break the usual prohibition against lying if doing so is necessary to save an innocent life) (cf. Ross, 1930), or simply as rules-of-thumb which substitute in our busy lives for the individual utility calculations which a perfect and more leisured intelligence would employ in deciding how to act (Hare, 1981). Reason can also temper, moderate and correct our spontaneous moral impulses, helping to ensure, for instance, that when we are inclined to act benevolently, we do so wisely and justly, taking into account all pertinent interests and relevant circumstances.10 But if reason is taken to replace sentiment altogether, as it is by Kant, then the account of the moral life is left essentially incomplete. One does not have to go all the way with Hume’s claim that ‘Morality … is more properly felt than judg’d of ’ (Hume, 1888 [1739], p. 470) to conclude that a theory of ethics which leaves no room for sentiment or feeling has omitted something. To be fair to him, when Hume declared that ‘Moral good and evil are certainly distinguish’d by our sentiments, not by reason’ (Hume, 1888 [1739], p. 589), he did not mean to deny reason any role at all in moral judgement. Although only sentiment could determine what is morally right or wrong, sentiments are not always mere gut responses since many have a cognitive content which results from rational reflection. That is to say, ‘sentiments may arise either from the mere species or appearance of characters and passions, or from reflexions on their tendency to the happiness of mankind, or of particular persons’ (1888 [1739], p. 589; my emphasis). The sight of a starving child arouses in us, through the mechanism of sympathy, an immediate sentiment of moral revulsion; but the reflection
that a certain act, such as a deed of charity, has 'a tendency to the happiness of mankind' can be similarly effective at inspiring in us a moral sentiment—in this case, one of approbation. Hume's account accords a central place to the psychological mechanism of sympathy in the generation of moral sentiments, whether they arise immediately or via a process of rational reflection. The disposition to experience sympathetic feelings of pleasure or pity when we witness or learn of other people's joys or sorrows is, according to Hume, a human universal, and he characterises it as a natural passion which arises in us by virtue of the fact that 'All human creatures are related to us by resemblance' (1888 [1739], p. 369). The heavy reliance which Hume places on our capacity for sympathetic feelings in his explanation of the generation of moral sentiments faces, however, at least two difficulties. The first is that, even if sympathy is found sufficiently commonly in human beings to be reasonably accounted a species characteristic, the degree of sympathy which different people feel, and the range of the objects towards which they feel it, vary hugely. That the sympathies of some individuals are severely limited, being at best highly partial and at worst subdued to the point of nullity as a result of bad genes or bad upbringing, is a fact not adequately acknowledged by Hume. The second difficulty is that he never really explains just how a psychological feeling such as pity translates into, or gives rise to, a moral sentiment of approbation or disapprobation; that there is a category shift involved here seems to pass unnoticed. Both of these problems, in fact, can be referred to the same deficiency, the inadequate attention Hume pays to the notion of value. Even if, contrary to fact, all human beings were alike in their tendency to feel sympathy for others' suffering and delight in others' joy, it is not clear that such feelings would qualify as fully moral feelings if they were unaccompanied by any conceptions of the value of the beings who were suffering or joyful. Some non-human species of animals (e.g. dogs, elephants, dolphins, and more certainly bonobo chimpanzees) appear able to feel something like sympathy or sorrow on witnessing the death or suffering of others of their kind, but whether or not this can be represented as a form of rudimentary moral sentiment is uncertain, the major reason it falls well short of human morality being that it is unarticulated by any thought of the negative value of what is being witnessed—the pity of the loss of life or the
badness of the suffering. What needs to be added to the Humean picture is some Kantian colouring to bring out the intrinsic worth of persons, their status as ends in themselves. Indeed, a person whose sympathies are naturally dull or restricted in their scope is neither incapable of moral action nor excused from the responsibility to exercise that capacity: for as a being capable of rational reflection he can be expected to recognise that human beings possess an intrinsic worth that provides a reason to treat them well, even if the weakness of his natural sympathies is a defect in moral character. One can also imagine a person of the opposite character, whose sympathies for others’ sorrow or suffering are abounding. This person takes pleasure in others’ happiness and sorrow in their distress, but these sentiments need to be accompanied by a sense of the positive and negative value of happiness and suffering respectively if they are to qualify as moral responses rather than remain as mere ‘inclinations’ in Kant’s sense. Unless and until one grasps that people and their lives matter, one will be morally purblind, no matter how sympathetically one feels towards them.

3 Sociability in the Kingdom of Ends

Our quest in this chapter is for a satisfactory standpoint from which defensible historical moral judgements can be made. What considerations might count in the selection of principles, standards or criteria that can be appropriately invoked in the making of moral judgements of broad applicability to people living at times and in conditions very different from our own? There could be no answer to this question if human beings were infinitely various in their needs, interests, physical properties and psychological dispositions. Fortunately they are not; and what they have in common as members of the same species is ultimately far more important in determining the requisites for their flourishing than all the cultural differences in the world. Writers as divergent in other respects as Hume, Rawls, Williams, Foot, Wong, Velleman and Audi have concurred in the view that natural facts about human beings provide the crucial pointers to the best ways of living a human life—even if Williams and Berlin with their pluralist outlooks, and Velleman and Wong as moral
relativists, have been reluctant to use the language of a 'universal moral system'. (Audi is less reticent, being prepared to allow that 'There are universal values, but they are realizable in a multitude of ways' (Audi, 2007, p. 80).) Williams's claim that the demands of justice were valid everywhere and everywhen was premised on the entirely reasonable idea that there are certain basic needs (for food, security, shelter, respect, etc.) that are shared by everyone. (The main problem with his view, it will be recalled, concerned the under-explained exclusion of the principles of justice from the relativism of distance he defended in regard to other kinds of ethical principle.) It is therefore carrying caution too far to suppose that we are always at too great a cultural and epistemic remove from the people of the past to be competent to pass moral judgement on them. Undoubtedly there can be more than one way of living a flourishing human life in society, and our moral judgements must be flexible enough to allow for this. Yet when all is said and done, the basic requirements for human flourishing are not so very variable, and any system of ethics that fails adequately to provide for them is to that extent defective. Cruelty, theft, rapine, violence, lying, breaking of promises and contracts, exploitation of some for the benefit of others, and failure to protect the weak and vulnerable, are antithetical to sociability and as such deserve condemnation. Societies that countenance these things are defectively sociable. When such evils occur, they inhibit the flourishing of members of a species possessed of our distinctive natural characteristics. A caveat is needed here. If no individual is an island, neither is any society; for societies have their external relations with other societies as well as their internal relations amongst their own members. How societies treat other societies is as important an ethical question as that of how they treat their own members. However, the historical record makes all too clear that societies frequently pursue their own flourishing at the expense of others', commonly employing war or hostility against neighbours as a means of reinforcing bonds of loyalty and obedience within their own membership. Fear of the Other is a perennial source of human anxiety and is readily transmuted into hatred; it is ever ripe for use by political leaders desirous of consolidating their command of the ship of state. It is a truism that there is nothing like a war to bring people together, and a nation which has no enemies may discover a need to invent some.
Complex inter-societal relationships are cynically reduced to binary divisions: those who are not with us can only be against us. The sociability which thrives on dividing Us from Them looks likely, in the broader perspective, to do more harm than good—or at the least, a great deal of harm alongside any good. This narrow kind of sociability, in which one feels at one and in sympathy only, or primarily, with the members of one's own society (also narrowly defined) is, fortunately, not the only kind, and it is most certainly not the best kind. And it is a very dangerous form of sociability when it assumes the visage of 'ourselves against the world'. Some years after publishing A Theory of Justice (1971), John Rawls acknowledged that the version of social contract theory there defended, by focussing overmuch on the contractual basis of just relations among the members of a given society of liberal inclination, cast little light on how that society might interact justly with similar societies elsewhere. The solution he proposed in his 1999 book The Law of Peoples was to extend the notion of a social contract on lines adumbrated by Kant in his Perpetual Peace of 1795 and specifically Kant's idea of a foedus pacificum [a peaceful contract or treaty]. In Rawls's words: 'I interpret this idea to mean that we are to begin with the social contract idea of the liberal political conception of a constitutionally democratic regime and then extend it by introducing a second original position at the second level, so to speak, in which the representatives of liberal peoples make an agreement with other liberal peoples' (Rawls, 1999, p. 10). Rawls's concern here is with the need to establish an international polity which embraces a 'Law of Peoples' suitable for realising fully 'the freedom of citizens' (1999, p. 10). This does not exhaust what ethics is about, nor does Rawls pretend that it does, and for reasons pointed out earlier in this chapter it would be undesirable and unrealistic to represent the grounds of ethical action as reducing entirely to considerations of (even enlightened) self-interest. Sociability, sympathy, affection, kindness, selfless concern for other people's welfare, and—most crucially—the attribution of intrinsic value to others and their lives, are at the heart and centre of ethical existence. Analogously to Rawls's extension of the idea of the social contract from the national to the international setting, I propose that we should extend the idea of a merely local sociability to that of a sense of universal
brother- and sisterhood, where we regard our kinship in the human race as transcending in importance our membership of a state, race, tribe, religion or other more 'local' affiliation. Just as Rawls drew on Kant's idea of a foedus pacificum as the basis for extending the notion of the liberal social contract to the international sphere, I propose to justify the broadening of the scope of the idea of sociability by reference to the Kantian conception of the common membership of all human beings in the Kingdom of Ends. This permits the claim that societies, whether past or present, which permit or promote offences against sociability in its extended sense can be weighed in the balance and found wanting. A society which encourages the sociability of its own members but which endeavours to further their advantage at the expense of the members of other societies is ethically defective because it flouts the requirements of sociability in this widest sense. Even societies that discriminate between human beings on national, racial, gender, religious, class or other grounds have generally conceded that human beings (even the least valued of them) belong to a different, and higher, value category than other animals. The pity of it is that, having realised this much, many societies have got stuck at this point, being unable or unwilling to make the transition from this partial recognition of human value to something more capacious. I suggest that it is neither radical nor revolutionary, question-begging nor anachronistic, to place the idea that human beings have intrinsic value and deserve to be shown the recognition respect appropriate to members of the Kingdom of Ends as the foundational element in the standpoint from which historical moral judgements can be made. This means, among other things, that historians who, fearing to succumb to the 'presentist' danger of imposing their own contemporary moral assumptions on past people, refrain from attempting any moral appraisals of the past at all, may overcome this self-denying ordinance by having recourse to standards of judgement that appeal to more universal truths about the value of human beings and the conditions of their flourishing. Even where ignorance or cultural indoctrination may be allowed to mitigate the blame that she attaches to individuals, the historian who is keen to avoid anachronistically importing her present-day values into judging an institution like the African slave-trade need not hold back from roundly condemning it as evil. To assert
that the only viable alternative to presentism is to eschew all passing of historical moral judgements whatsoever is to propound a false dichotomy. There are more universal standards to appeal to than this bleak historiographical outlook allows. If the idea of the intrinsic worth of human beings has not always obtained the recognition due to it, the same remains sadly true in our own day: but we rightly do not regard that as a valid reason for withholding our criticism of contemporaries who treat others disrespectfully by ignoring their human rights. That liberal and Kantian document, the UN's Universal Declaration of Human Rights, mandates respect for the equal rights of every human being to enjoy the conditions most fundamental to the achievement of a life that is at least minimally satisfying. And because past human lives were just as intrinsically important as present human lives are, the fact that our predecessors did not always recognise one another's proper worth does nothing to invalidate our charging them with moral error. As we shall see more fully in the next chapter, it is important to distinguish between criticising practices, acts and policies of the past and criticising the people whose practices, acts and policies they were. Adverse cultural circumstances may excuse or lessen the guilt of agents who do dreadful things and it is especially irresponsible to make snap moral judgements of past people when, as frequently happens, their precise motives and intentions remain opaque. Honesty compels the admission that often we know too little about the context of action or the reasons that moved the actors to judge them other than provisionally and tentatively—if we venture to judge them at all. It is also necessary to do the best we can to identify and allow for those cultural biases of our own which are liable to influence our ethical assessments. Recent study of the nature and prevalence of biases, and especially of the phenomenon of unconscious bias, may prompt the depressing thought that we can never be sure that we have located, still less duly discounted for, all our own biases.11 Yet to take this thought to its logical conclusion is to bring into doubt our capacity to pass any valid moral judgements at all, even of our own contemporaries, being daunted by the consciousness of our impotence to escape from the clutches of our prejudices. Such moral Pyrrhonism is neither practical nor reasonable, since sociability would be
unattainable if no one applied any moral standards to his neighbour, for fear they might be biased.12 And neither should the potential of unconscious bias to affect our judgement be treated as a reason to refrain from all criticism of past people. In any case, there could be no better helpmate in the task of identifying our own biases than the study of history itself, with the spotlights it throws on other minds and alternative modes of thought. By bringing both the similarities and the contrasts into sharp relief, history helps us to pinpoint our time-bound assumptions, prejudices and fixed ideas, while also revealing what is constant and perennial.

Notes

1. For a classic critique of metaphysical moral realism, see Mackie 1977, especially chapter 1. Although many subsequent attempts have been made to defend non-naturalistic forms of moral realism, with much ingenuity being applied to the task, it has proved hard to show that we could have good reasons for engaging in ethical behaviour unless those reasons were ultimately grounded in facts about human needs and interests. For a robust recent attempt to defend a version of metaphysical moral realism, see Enoch 2011.
2. However, Foot rejects as unrealistic Hume's theory of desire that denies the existence of limits on what a person may rationally desire (Foot, 2001, pp. 21–23).
3. The idea that nature can provide a yardstick for evaluating views on how to live predates such modern writers as Velleman and Foot; it is already conspicuous in the Nicomachean Ethics, in which Aristotle moulds his account of the happy or flourishing (eudaimon) life in close accordance with his empirical theory of man's physical and psychological nature. A not-dissimilar appeal to the natural facts in discerning how happiness may be attained is made in some lines from Alexander Pope's An Essay on Man from which I have drawn the title to the present section of this chapter:

Take Nature's path, and mad opinions leave;
All states can reach it, and all heads conceive;
Obvious her goods, in no extreme they dwell;
There needs but thinking right, and meaning well;
And, mourn our various portions as we please,
Equal is common sense and common ease.
(An Essay on Man, Epistle IV, lines 29–34) (Pope, 1956 [1733], p. 206)

4. Rawls's theoretical model of a just state is not intended to provide a template against which the justice of any society can be measured, but is meant to apply primarily to those modern states which possess scientific and technological expertise and have relatively developed economies. However, aspects of Rawls's model can be applied, mutatis mutandis, in appraising the fairness of older societies, in so far as their concern is with the satisfaction of unchanging human needs.
5. Note that a satisfactory theory need not at the same time exclude a concern for one's own interests. Jesus himself enjoined us to 'Love your neighbour as yourself' (Luke 10:27).
6. The Categorical Imperative in its initial formulation runs: 'Act only on that maxim [a maxim being a 'subjective principle of volition'] whereby thou canst at the same time will that it should become a universal law' (Kant, 1909, p. 38). This constitutes a test or criterion by which the individual agent can determine whether some particular maxim she may consider adopting (e.g. 'I shall never tell lies', or 'I shall tell lies whenever it serves my advantage to do so') could reasonably function as a universal principle of conduct without either undermining itself (which is termed by Kant a 'contradiction in reason') or being impossible to be willed rationally (a 'contradiction in the will').
7. I ignore here some complicating issues concerning the influence of luck and circumstance which would need to be considered in a fuller discussion of appraisal respect. We respect Mary as a piano-player more highly than Peter because she plays the instrument better than he does; but if Mary's superiority depends in part on the fact that she was able to take lessons from a first-rate teacher while Peter was not, this may lead us to nuance our judgement somewhat. Mary may be the better player, but Peter may deserve greater appraisal respect for what he has managed to achieve in less favourable circumstances.
8. Kant held that the basis of human worth was the possession of 'rational nature'. To many critics, this has seemed too narrow a way of valuing human beings, since it ignores our capacities for rich emotional experience, love, creativity, imagination, etc. However, it might be retorted on Kant's behalf that none of these capacities could develop very far without
the application or admixture of conceptual thought. So there remains a case for considering rationality to be the ultimate ground of the value of human beings and their lives, even if a purely rational human being without affective attitudes would be a monstrosity.
9. For the idea of reflective equilibrium, see Rawls, 1971, pp. 20f, 48–51. Rawls is specifically concerned with reflective equilibrium between reasoned moral judgements and moral 'intuitions'—roughly, our spontaneous or prima-facie moral impressions.
10. It is not only in the western philosophical tradition that the need for benevolence to be guided by reason has been emphasised. In fifth-century BC China, Confucius advised his disciples that 'To love benevolence without loving learning is liable to lead to foolishness' ([Analects XVII:8] Confucius 1979, p. 144). Confucius accorded the highest praise to the quality of benevolence or 'human-heartedness' (ren in Chinese), a form of moral sentiment which he identified with loving one's fellow men ([Analects XII:22] 1979, p. 116). Like Kant, however, Confucius believed that love in its undisciplined state could lead men astray; therefore it was essential that when one loved, one did so wisely. Wise loving depended on knowledge of two sorts: knowledge of one's fellow men and knowledge of 'the rites'—roughly, the traditions of conduct which centred on respect for the family and one's ancestors, and which formed the recognised template for 'correct' living in ancient China. If the reference to rites is somewhat alien to modern Western ears, it is easy to agree with Confucius that wayward or undisciplined benevolence can readily lead to injustice, envy, loss of friends, and other forms of social disruption.
11. For some of the philosophical implications of the research into implicit and unconscious bias, see Brownstein and Saul (2016).
12. This point resembles, though it is not quite the same as, that made by Peter Strawson in 'Freedom and resentment' that, as participants in the social world, we naturally and unavoidably entertain reactive attitudes to our fellow human beings (anger, resentment, gratitude, approval, and so on) even though we may consider them, from a more 'objective' perspective, to be items in the natural world and as such subject to deterministic forces (Strawson, 1962).

5 Agents, Acts and the Relativity of Blame

They [witches] ought to be put to death according to the Law of God, the civill and imperiall Law, and municipall Law of all Christian nations.
King James I & VI (1616, p. 134)

There are no pages of human history more filled with horror than those which record the witch-madness of three centuries, from the fifteenth to the eighteenth.
H.C. Lea (1939, p. xxx)

1 Witches and Slaves: The Bearing of Ideology on Moral Responsibility

No precise estimate is possible of the number of people, most of them women, who were executed in Europe for the crime of witchcraft between the late medieval period and the eighteenth century, but extrapolations from known statistics suggest a figure approaching 40,000—not counting an unknowable number of unofficial lynchings of suspected witches within local communities. In continental Europe convicted witches were mostly burned at the stake; in England and North America hanging was
the usual penalty. Witchcraft, in the Christian conception of it, involved making a compact with Satan, who lent his aid to do evil magic (maleficium); as a rejection of the Christian way of salvation it was normally classed as a form of heresy. Suspected witches were routinely tortured to make them produce the names of confederates, with the inevitable result that witch-hunts had a tendency to spread until large numbers of people were caught up in the witch-hunters' nets. In the worst 'witch-panics' in Germany and France, dozens or hundreds of women and men died. Doubtless some of the accused were guilty at least of the intention to do harm by means of evil magic; to some socially powerless and downtrodden individuals, witchcraft must have appeared a tempting way of improving their social standing, gaining riches, or getting their own back on their more prosperous or successful neighbours. But it is certain that many people who had no such intention were also condemned for the crime, owing to faulty or inadequate judicial process.1 To the Huguenot scholar Meric Casaubon, writing in 1668, the 'proofs' elicited by the investigators and prosecutors of witchcraft were so complete and conclusive as to leave no room for scepticism about the reality of the crime: 'What hath [God] left to us, that we can call truth,' he asked rhetorically, 'if this be but phancy [fancy]?' (Casaubon, 1668, p. 44). Yet not everyone was so convinced, even in those pre-Enlightenment times. Casaubon's bête noire was the sceptical German physician Johann Weyer, who had argued that old women who believed themselves to be witches endowed with special powers by the Devil were suffering from over-active imaginations. Another doubter was the essayist Michel de Montaigne, who considered that 'it is to put a very high value on one's surmises to roast a man alive for them' (Montaigne, 1993 [1588], p. 1169). However, the prevalent orthodoxy amongst the intelligentsia of early-modern Europe was that witchcraft was a real, substantial and growing threat to the Christian common weal, and one that had to be dealt with urgently and severely. Some modern commentators on the European witch-hunt have been unsparing in their moral condemnation of the witch-hunters. According to Rossell Hope Robbins, writing in the 1960s, the European witch-hunt was a 'shocking nightmare, the foulest crime and deepest shame of modern civilization, the black-out of everything that … reasoning man has
ever upheld' (Robbins, 1963, p. 3). Hugh Trevor-Roper entitled his influential study The European Witch-Craze and portrayed the witch-hunt as a weapon of confessional conflict between Catholics and Protestants, for the most part viciously pursued (Trevor-Roper, 1969). Some feminist historians have argued that the witch-hunt was a product of male patriarchy and an outstandingly nasty means for keeping women in a subordinate position to men (Ehrenreich & English, 1976; Barstow, 1994). None of these interpretations has worn particularly well (it is worth remembering that men as well as women were prosecuted for witchcraft, albeit as an overall average women outnumbered men in the proportion of roughly three to one), and recent research has tended to confirm the impression that in the generality of cases the witch-prosecutors acted from 'pure' motives, genuinely believing in the justice of what they did. Writing a century before Robbins and Trevor-Roper, William Lecky, the great Victorian historian of rationalism, shrewdly commented that: 'Men were so firmly convinced of the truth of the doctrines they were taught, that those doctrines became to them the measure of probability, and no event that seemed to harmonise with them presented the slightest difficulty to the mind' (Lecky, 1882 [1865], Vol. I, p. 67). For Lecky, those who prosecuted witches were tragically mistaken rather than morally guilty—we might say today that they were the subjects of bad moral luck. Lecky's recognition that world-views consist in webs of mutually-supporting beliefs that face the bar of experience collectively rather than individually anticipated a position taken by some twentieth-century philosophers (e.g. Quine, 1960), and he recognised more clearly than many other writers have done how hard it can be to see the error of one's ways when one is in the intellectual grip of a paradigm. When that paradigm has a moral dimension to it, its grip can be especially tight. Furthermore, to repeat words I wrote many years ago on this subject, 'the outstanding savagery with which [witch-prosecution] was so often conducted is, by a bitter irony, a testimony to the purity of intention of the judges and other officials: for people are least attentive to any restraining voice of conscience when they feel compelled to be ferocious by their principles' (Scarre, 1987, p. 63; Scarre & Callow, 2001, p. 73).2 The era of witch-hunting is not yet wholly over. In some parts of the world there remain communities that believe in the existence of
maleficent witchcraft and inflict severe punishments on individuals deemed to be guilty of the crime (see, e.g. Evans-Pritchard, 1937; Newall, 1973; Marwick, 1982). But in the 'enlightened' Western world, witchcraft has been relegated to the realm of children's tales and Halloween parties. The witch prosecution of the early-modern period therefore provides an excellent case-study to help us to think about the sort of moral judgements we should make when confronted by a practice of the past which had appalling consequences at the time but in which we have entirely lost faith. It hardly needs to be said that burning women or men at the stake for an impossible crime is a very terrible thing to do, and even in its day it was regarded as a dreadful penalty to inflict, albeit one held to be amply justified both by the wickedness of the crime and the need to deter others from offending. The prosecution of witches was clearly wrong in the sense of being conceptually mistaken, based as it was on a wholly false theory. But in what sense, if any, was it also morally wrong? Certainly it would have been better had witchcraft prosecution never taken place: it produced much suffering for no socially useful result. Sociability must have been in short supply in communities that were riven with anxiety at the thought that secret enemies were operating in their midst. The word 'wrong' in the moral sense is normally used to signify that something has been done which should not have been done, and for which blame is appropriate. But is that the right way to characterise the witch-hunters who acted in the sincere belief that what they were doing was commanded by the laws of God and man? Judges who condemned those found guilty of witchcraft to death were fond of citing the scriptural injunction: 'Thou shalt not suffer a witch to live' (Exodus 22:18). Could there ever be a better proof that witchcraft really existed than that the Bible said that it did?3 Doubtless some prosecutors of witches did act from such bad motives as malice, envy, revengefulness or a sadistic delight in cruelty, and their offences were heinous. But the historical record makes clear that this was not the case with all, or even most, of them. Hilary Putnam would have no qualms about condemning what the witch prosecutors did as morally wrong. He focuses on the example of the Aztec practice of human sacrifice to appease the gods, but what he has to say about this can be applied equally well, or badly, to the prosecution
of witches. Putnam construes Williams's thesis of the relativity of distance to imply that we should desist from passing any moral judgement whatsoever on a practice that is as culturally remote from us as is that of Aztec human sacrifice—or early-modern witch-hunting. His objection to the relativity thesis is delivered in few words. Williams, he claims, is seriously at fault in supposing that, while the Aztecs' (or the witch-hunters') beliefs were wrong, their sacrifices (or the executions of witches) were not wrong, given those beliefs (Putnam, 1992, p. 106). On Putnam's dissenting view, because the Aztec practice of sacrifice (the prosecution of witches) and the beliefs about the world which supported it 'were interdependent', to say that one of these was wrong but not the other is to say something inconsistent (1992, p. 106). He allows that the Aztec practice may have been 'understandable given the false factual belief', but he appears unwilling to concede that it is excusable on that account. (Or if he does think it is, he does not say so.) He shows even less inclination to suppose that performing human sacrifice (or executing a witch) could ever have merited any praise as a deed done at the behest of conscience. Putnam alleges that it is inconsistent to say that the Aztec beliefs about the gods were wrong while refraining from saying that their practice of human sacrifice was wrong, but he scarcely explains why. The moral wrongness of the sacrifices does not logically follow from the factual incorrectness of the beliefs, even though the beliefs and the practice were, as Putnam remarks, 'interdependent' in so much as the former supplied the theoretical justification for the latter. On Putnam's way of thinking, it seems that we would have to say that a trial judge who on the basis of convincing but actually false evidence sentenced a man to prison for a crime he did not commit erred morally. Putnam might object that the judge in such a case at least acts on the basis of the right kind of evidence, even though it misleads him on this occasion, whereas the Aztecs' theology lacked all connection with reality. But as both the judge and the Aztecs act on the ground of what they believe to be the truth, it is not clear how the more radical departure from truth of the Aztecs' views should increase their culpability. Putnam's objection to the relativism of distance is that accepting it commits us to taking—inappropriately, in his view—a morally neutral stance towards awful acts that were committed by people in the past, just because
their cultural circumstances allowed them to know no better. This is mistaken, he thinks, because the awful things done by the Aztecs or the witch-hunters merit outright moral condemnation. To advance the discussion beyond this point, it is important to distinguish more sharply than Putnam does between the doer and the deed. It is perfectly possible to 'hate the sin yet love the sinner'; and in less exotic cases than Aztec human sacrifice or the execution of witches, we quite often find ourselves deploring an act while acknowledging some excuse or mitigating circumstance that reduces the actor's blameworthiness. Putnam may be gesturing distantly towards this distinction when he concedes that the Aztecs' behaviour may be 'understandable' given their cultural circumstances, although he shows no sign that he takes this to let them even slightly off the moral hook. What might it mean to say that human sacrifice or witch prosecution was morally wrong if we wish to leave open the issue of the praise- or blame-worthiness of the responsible agents? As a first stab, it might be suggested that an act is wrong if and only if it is one that no morally well-informed, or no morally competent, agent would commit, or approve of others committing.4 But what are the criteria by which to determine whether an agent fits this description? Clearly it will not do to say that the morally well-informed or the morally competent agent is one who would not commit, or approve of others committing, wrong acts, for then our account of what constitutes a wrong act is merely circular. Nor is it satisfactory to propose that a morally well-informed or competent person is a person who can lay claim to long and extensive moral experience, and to have reflected on that experience. For we may presume that many of the Aztecs and the witch-prosecutors satisfied that criterion, and it would be begging the question against them to suppose otherwise just because the moral beliefs they arrived at are unacceptable to us. The Aztecs believed that if the gods did not receive regular offerings of human blood, the result would be the end of life on earth; the witch-prosecutors were convinced that they were engaged in a desperate struggle to defend the Christian community from the unremitting malice of Satan. No conscientious Aztec or witch-prosecutor could justify any backsliding in the performance of his duties prompted by misguided feelings of pity. Failing to act in the face of what one takes to be a grave existential threat can never be moral or wise but always culpably irresponsible.

There is a further reason why attempting to define a wrong act as one that a morally well-informed or morally competent agent would disapprove of must be inadequate. For we would still need to know why (i.e. for what sort of reasons) well-informed and/or competent agents would reject some act as wrong. No act, after all, is wrong just because it is, or would be, disapproved of by an informed or a competent moral agent, and there can be no sufficient account of what makes an act wrong that is silent about the kind of features that make it wrong, thereby justifying an adverse judgement. For a more promising way of explaining what it means for an act or practice to be wrong while leaving open the question of the guilt or innocence of its actor(s), I propose that we look to a naturalistic account. To put this at its simplest, wrong acts or practices can now be characterised as acts or practices that cause harm to human beings (or other sentient creatures). Harms may be thought of in the broad sense as setbacks to (genuine) interests—impediments to our flourishing in ways appropriate to our nature as sociable individuals.5 We have already noted some plausible accounts of basic features that human lives require to go well for creatures like ourselves, and it would be fortunate if we could now use such accounts to give a satisfactory explanation of what it means to say that an act or practice is wrong. However, to say that witch prosecution, Aztec human sacrifice or the transatlantic slave trade were harmful to the people who suffered under them, although true so far as it goes, seems not to go quite far enough to capture what makes these practices morally wrong. ‘Harmful’ is still too much of a descriptive term, lacking in moral punchiness; a word like ‘cruel’ seems better able to convey the needed moral force. Of course, not all harmful acts are cruel acts: e.g. a surgeon who carries out a dangerous operation to save a life but actually makes things worse does something harmful but not cruel. The difference between the Aztec priest or witch-prosecutor on the one hand and the unsuccessful surgeon on the other is that while the latter has no intention of doing harm to the subject, the former caused it knowingly and willingly. This seems enough to qualify their acts as ‘cruel’. Even so, it would be going too fast to assert that a cruel act must always be a wrong act. The Aztec priests and witch-prosecutors who acted from a sense of duty might well have admitted that their work was cruel to those who suffered at their hands, yet insisted that it was work that had to be done;
they might, like Oliver Cromwell following the execution of King Charles I, have described what they did as a 'cruel necessity'.6 But their believing it to be necessary does not make it so: the justifications they claimed for their cruel practices were entirely erroneous. And it is this fact that affords the opening to characterise those practices as wrong; they were not only cruel, but there was nothing that truly justified that cruelty—although that was unapparent to their practitioners. Hence the response I earlier imagined that Putnam might make to the challenge to justify his moral criticism of the Aztecs works better at showing that the Aztec practices were wrong than at sustaining the charge that the Aztecs themselves were blameworthy for engaging in them: the falsity of the beliefs leaves the cruel practices with no moral leg to stand on but it does not justify moral condemnation of the practitioners.7 But—it might be objected—are not cruel deeds normally done by cruel people, and can anyone truly be morally blameless (or excusable) if she is prepared to act cruelly towards others? To the second question, the blunt answer is, I think, Yes; for while sentencing a witch to death or a prisoner to be sacrificed may call for a certain inflexibility of purpose not so likely to be found in a person of more ordinary human sympathies, someone who performs these acts from a stern, unwavering sense of duty may still be open to excuse. Nor, in answer to the first question, is it clear that this person must be described as 'cruel', although we may certainly see him as stern and unbending, in the style epitomised by such old Roman patriots as Lucius Junius Brutus or Marcus Porcius Cato. The willingness to act harshly by our fellow human beings is not an endearing trait of character, yet it would be hard to deny that exigencies sometimes arise in the course of human affairs that require resolute and uncompromising responses that are costly in terms of human suffering. Distinguishing the moral status of acts from that of the agents who perform them, and allowing that morally bad things may sometimes be done by people who do them with upright intentions, seems correct in principle, but such distinctions are not always easy to make in practice. There are, to be sure, some relatively clear-cut cases. Consider the following pair of responses that might be offered by agents who are charged with having done something cruel:
1. It is cruel, but unfortunately it’s necessary too, and I’m duty-bound to do it.
2. It is cruel and immoral, but I don’t care, since it serves my own ends.

One could imagine the first of these being spoken by the witch-prosecutor or the Aztec priest who sincerely believes that he must do what he does, and that not to do it would be a culpable dereliction of duty. Here, if anywhere, we may describe the act(s) he performs as morally bad acts while allowing the actor to be morally excusable, if sadly misguided. Statement 2), by contrast, which one might imagine being spoken by a slave-trader in a rare self-revealing moment, displays the agent in a much poorer light: here we patently have both a bad act and a bad agent. But consider now some further possible responses by the slave-trader or witch-prosecutor:

3. It is cruel, but the persons affected have forfeited their right [or: have a lesser right than others] to better treatment.
4. It may seem cruel, but actually we’re doing these persons [witches, black slaves] a favour by enabling them to obtain Christian salvation which would otherwise elude them.
5. It’s not really cruel, though it may appear so, because these people [witches, black slaves] don’t have the same sensibilities as you or I have and can’t suffer in the same way, or to the same degree.

Examples of statements on the lines of 3) through 5) are not hard to find within the historical record, having frequently been offered to justify or to excuse harmful practices. (To be precise, 4) is a justification and 5) is an excuse, while 3) could be either, depending on context.)8 Each statement, repellent though it is, might conceivably be offered with total sincerity by someone who genuinely accepts the ideological commitments that underpin it. Where that is the case, it may be reasonable to judge the (misguided) doer less harshly than the thing that is done.9 But sincerity of belief is not always sufficient to ward off moral blame: it spectacularly fails to do so where the belief that supports the conduct has been negligently or impulsively taken up without due scrutiny. One plausibly universal rule of moral conduct is that agents should always check relevant
facts carefully and examine the soundness of their value beliefs before engaging in any action that is liable to harm other human beings. Where good or evil is concerned, agents are always obliged to look before they leap.

Another kind of scenario in which sincerity is not much of a defence for believing wrong to be right is where it is the poisoned fruit of self-deception. Self-deceivers are for the most part crafty rather than careless believers; they are selective in the evidence they consider and dishonest in the weightings they attach to it; they are also highly adept at turning a blind eye to inconvenient facts. Self-deception characteristically conceals its own false tracks: typically, self-deceivers tell themselves that they have reached their conclusions by a straight and honest route. Self-deception is particularly hard to escape when it assumes a collective form, where members of a group reinforce one another’s beliefs by constantly reaffirming the same self-serving views. Slave-traders who mixed socially with other slave-traders created ideological feedback loops which excluded dissenting views and stiffened the backbone of any traders possessed of a more fragile conscience. It is not difficult to imagine how a typical conversation between slave-traders on an early-eighteenth-century Liverpool or Bristol dockside might have run:

Dubious newcomer to the trade: Maybe these Africans really are people like ourselves, underneath the skin.
Veteran trader, who has been shipping slaves for many years: That’s nonsense, you know; of course these savages are not like us. You only have to look at their colour to see that! Besides, slavery can’t be wrong, because it’s sanctioned in the Bible.
Newcomer (seeking to reassure himself): Yes, I suppose you’re right. And anyway, we all have to make a living and support our families.

People who are outside such not-so-charmed circles, and particularly those who find themselves at the cutting end of the slave-trader’s whip, may well beg to differ. Slaves and victims of witchcraft prosecution are less likely to regard any of the propositions 3) to 5) as supplying acceptable excuses or justifications for their own bad treatment—for either the doers or their deeds.10

2 Fricker and the Relativity of Blame

Although self-deception may have played a role in suppressing inconvenient doubts that might have arisen in the minds of some slave-traders, witch-prosecutors, Aztec priests, makers of empire, Nazis, or other agents whose activities were harmful to others, it can rarely, if ever, have been the whole story. And here a distinction is worth making between slave-traders and witch-prosecutors. While the majority of witch-prosecutors may be presumed to have acted from a sense of duty, no one can reasonably have believed he had a duty to trade in slaves. To be sure, some traders believed they were duty-bound to Christianise the people they bought and sold, but there were patently other and gentler ways than enslavement of bringing Christian enlightenment to those they saw as ‘the poor, benighted heathen’. Most likely, then, self-deception and the turning of a blind eye to the human costs played a rather larger role within the slave-trading community than it did amongst the persecutors of witches, most of whom perceived themselves to be Christian warriors against Satan, even if some were actuated by less respectable motives such as sadism, revengefulness or envy.

It is with the ‘honest’ doers of cruel or oppressive acts that Miranda Fricker is chiefly concerned in her paper, previously alluded to, in which she proposes to substitute for Williams’s notion of the relativity of distance an alternative conception which she designates ‘the relativity of blame’ (Fricker, 2010). Fricker’s first formulation of this variant form of relativity runs as follows: ‘Very roughly, if someone for historical reasons was not in a position to grasp the moral status or significance of X, then they cannot be blamed for the relevant action or omission’ (2010, p. 152). Fricker does not go into detail as to just how ‘the moral status or significance of X’ is to be determined but her discussion would seem to presuppose a naturalistic theory of the human good of the kind described in the previous chapter. By locating in the conditions under which human lives normally go well or badly a trans-temporal basis for moral discrimination, it will be recalled that such theories have the advantage of being able to explain, without any objectionable begging of questions, why cruel practices like slave-trading, witch prosecution and human sacrifice are
not only bad and wrong for us today but always were so, whatever former people may have believed about them. Fricker explicitly avoids taking a position on more general forms of moral relativism; the thesis of the relativity of blame expresses what she calls a ‘localized relativity’ to the effect that judgements of blame ‘are properly construed as relative to the moral thinking of the time’ (2010, p. 152). If the Aztecs believed that ripping out the hearts of living men was the right thing to do in order to keep the gods happy, then they do not merit condemnation for what they did, even though this was really very bad indeed. Fricker does not go so far as to propose that the fact that they were acting in good faith provides a reason for praising them (even if a defeasible reason). Her thought is rather that ignorance, misconception and false belief can excuse a multitude of sins, not that they render them in some manner or degree commendable. The proposition that ‘What the Aztecs did was right by the standards of their time’ is elliptical for something like ‘What the Aztecs did was thought by them to be right by the standards of their time’; as such it represents rather an anthropological observation than a meta-ethical judgement.

Fricker observes that the relativity of blame applies not just to the historical but to ‘other structural forms of moral-epistemic incapacity’ too, including in particular the ‘cultural’; thus we should be similarly restrained in our criticism of contemporary people who engage in morally bad practices, or what we consider to be such (female genital mutilation being one plausible example), provided they are sanctioned by the local value system. Witch-prosecutors, slave-traders, and Aztec priests who entertained some tragically mistaken factual and ethical beliefs can be ascribed a ‘moral-epistemic incapacity’ which generated ‘epistemically non-culpable moral ignorance’ (2010, p. 152; Fricker’s italics) sufficient to exonerate them from blame. For: ‘Blame is inappropriate if the relevant action or omission is owing to a structurally caused inability to form the requisite moral thought’ (2010, p. 167).

Recent moral philosophy has devoted considerable attention to analysing the notions of moral praise and blame, and writers have not all agreed on just what we are doing when we praise or blame an agent or action, or on the precise function of praise and blame within the moral life. This is not a debate which needs to be followed in detail at present, but some general observations are in order. To begin with, note that it is primarily
moral praise and blame that we are concerned with, and not the employment of these concepts in non-moral contexts. When we blame the wind for blowing down the apple tree or the cat for tearing the curtains, it is mere causal, and not moral, responsibility that we mean to ascribe to the wind or to Felix (we may call the latter ‘naughty’, but we are unlikely to believe him seriously to be a moral agent). Commonly, too, we praise or blame people for their achievements or failures in fields such as sport, the arts, musical performance or the stage, where our appraisals are based on what may broadly be called ‘aesthetic’ standards of excellence rather than on moral ones (albeit these may be present as well, as when we praise the ball-player for his courage on the field or blame the pianist’s poor performance on her previous laziness in practising her pieces).

Turning now specifically to moral praise and blame, one question that is sometimes asked is whether these are simple correlates of each other, in the sense of forming what Nomy Arpaly and Timothy Schroeder have called ‘a natural pair’. Arpaly and Schroeder are disposed to dispute the symmetry of the two notions, arguing that while ‘Praising is an action … blaming is rather an attitude one takes, an attitude on the basis of which one can then act’ (Arpaly & Schroeder, 2013, p. 159). However, this distinction seems somewhat forced, since both praising and blaming may either remain at the level of internal attitudes or find outward expression in verbal or other forms. Another objection to the symmetry claim sometimes encountered is that blame is due when a normative expectation has been inexcusably violated, whereas praise requires that a normative expectation has been not merely met but exceeded. In reply, it may readily be admitted that a person who goes beyond the call of duty and performs a supererogatory action may be worthy of especially high praise, yet it is ungenerous to hold that those who ‘merely’ do their duty (which often, as we all know, can be hard enough) fail to reach the minimum threshold at which praise becomes due. At any rate, in what follows I shall treat moral praise and blame as approximately symmetrical notions, construing the former as extending moral approval to a person or action and the latter as extending its opposite (the word ‘extending’ being understood broadly enough to cover both the entertaining of internal sentiments of approval or disapproval
and the outward expression or demonstration of those appraisals, whether to the objects of those appraisals or to others).11 Fortunately, their claim of asymmetry, or lack of natural pairing, between praise and blame appears not much to affect Arpaly and Schroeder’s intuitively appealing definitions of praiseworthiness and blameworthiness:

To be praiseworthy for a right action is to act out of good will (an intrinsic desire for the right or good), or out of indifference to the lure of wrong or bad; to be blameworthy for a wrong action is to act out of ill will (an intrinsic desire for the wrong or bad), or out of indifference to the lure of the right or good. (Arpaly & Schroeder, 2013, p. 158)

Arpaly and Schroeder note that acting with ‘good will’ has been construed in different ways by different philosophers—as maximising happiness or welfare, or as abiding by the moral law, or as treating other people with the respect due to their humanity. To act with ill will may then be identified with minimising happiness or welfare, or with wilfully breaking the moral law, or with demonstrating disrespect for persons (2013, 164f.). In spite of their differences, all of these theories relate good and ill will ultimately to human well-being and the conditions which promote or subvert it, among which Arpaly and Schroeder consider that being treated with respect for one’s personhood is of the first importance. In the last analysis it is human nature, its strengths and vulnerabilities, needs and interests, desires, fears and affections which afford the quality controls on our action and attitudes. It follows from Arpaly and Schroeder’s premises that acting with good will cannot be identified simply with doing what one believes to be right. This is because what an agent (e.g. an Aztec priest or an early-modern European witch-prosecutor) believes to be right may really be profoundly wrong, being seriously detrimental to human well-being. In order to merit praise for his actions, the agent has to be motivated by good will, even if, like Huckleberry Finn, it is his moral intuitions, or gut feelings, rather than the ideas of right and wrong he has imbibed from his culture that steer him along the morally proper path. Indeed, ‘agents who think of what they are doing under the concept immoral [as Huck did of his
efforts to help the slave Jim to escape his mistress] nonetheless are morally praiseworthy in acting because they are motivated to act by intrinsic desires for the good correctly conceived’ (Arpaly & Schroeder, 2013, p. 177). Conversely, the agent who does something awful, believing it to be right, deserves no praise, for the reason that his motivations are not directed appropriately on the good, in spite of his beliefs about the case. Should he, moreover, take sadistic pleasure in the pain he causes, he is independently blameworthy for his egregiously bad emotions; however, where his actions are dictated purely by an erroneous sense of duty, Arpaly and Schroeder concede that he may be wholly or partly forgivable for what he does—albeit this is not because he believes that he is acting rightly but because, given his cultural background, ‘he is ignorant of the really salient reasons for not acting like this’ (Arpaly & Schroeder, 2013, pp. 186–87).

At this juncture their view appears to diverge slightly from Fricker’s, in so far as Fricker is more prepared to allow that false moral conceptions of duty and the right can be potentially extenuating factors; but they concur on the more essential point that the underlying roots of wrong ideas and the harms that issue from them are to be traced in the social and cultural milieux which the wrong-doing agents inhabit. Where Fricker prefers to speak of suspending moral judgement in the case of bad actions performed in conditions of culturally-induced moral ignorance, Arpaly and Schroeder refer instead to the forgivability of actions in such circumstances (which presumes the legitimacy of moral judgement, given that forgiveness must logically wait on blame). More important than this difference, however, is their agreement on the more fundamental proposition that the moral ignorance of agents in culturally-unfavourable circumstances of the type envisaged may play a significant extenuating role.

Fricker rightly emphasises that not just any moral ignorance can be considered as excusatory. There is a difference between ‘structural’ moral ignorance, which even the most conscientious and careful moral agent might be unable to escape, given her cultural situation, and the moral ignorance of the individual who has simply not thought enough about some morally significant matter, or who has self-deceitfully turned a blind eye to uncomfortable or inconvenient moral considerations. But Fricker may be on shakier ground when she denies that ‘conscientious
moral stupidity’ is able to ‘qualify an agent as non-blameworthy in her subsequent conduct’ (2010, p. 168); for this judgement seems hard on people who are not very good at moral, or maybe any kind of, reasoning, and whose best efforts to determine and subsequently to do the right thing are frequently failures. Theirs may not be structural moral or epistemic ignorance yet it is ungracious not to grant it some exculpatory potential. Fricker is sharply critical of Gideon Rosen for defending a theory of moral responsibility which she thinks too generous in allowing forms of non-structurally caused moral ignorance to cancel or lessen blameworthiness; yet there is plausibility to Rosen’s claim that ‘The agent is responsible for the ignorant act only if he is responsible for the ignorance from which he acts; and he is responsible for the ignorance only if he is responsible for some prior failure to discharge one of his procedural epistemic obligations’ (Rosen, 2004, p. 303; cf. Rosen, 2003).12 To which it may be added that while some people may be capable of thinking through their moral commitments more thoroughly than they actually do, it needs first to dawn on them that those commitments might in principle be open to doubt. And this insight is likely to be uncommon in cultural contexts in which the questioning of orthodox assumptions is denigrated as foolish, heretical, disloyal, or as all three together. Fricker does not think that where ‘structural moral ignorance’ relieves people of blame for the bad things they do, there is never anything of a critical tenor left to be said about their agency. In fact, ‘There is a form of critical moral judgement we may adopt towards distant others who are slow to pick up on … dawning moral-epistemic innovation: [namely] moral-epistemic disappointment’ (2010, p. 173). This she describes as an attitude that becomes appropriate in the rather special circumstances in which, although most agents are still acting in accordance with moral thoughts that are ‘routine in the collective moral repertoire’, there are signs that a new, superior routine of moral thinking is on the cusp of development; moral-epistemic disappointment thus latches on to the fact that the agents at issue have not yet made the transition to that better way of thinking. In illustration, Fricker asks us to imagine a schoolmaster of a few decades ago who corporally punished his pupils in a manner that was at the time generally judged to be acceptable (2010, pp. 173–75). The fact that new ideas more critical of corporal punishment were starting to
be aired around that time, and were therefore in principle ‘available’ to the schoolmaster, justifies, she claims, our feeling morally disappointed in him, even if it would be unfair to hold him blameworthy for doing only what was then conventional (2010, p. 175). Moral-epistemic disappointment as Fricker understands it is thus a critical moral attitude, albeit one more muted in tone than blame; agents who attract it are perceived as morally lacking in some respect, even though the lack is not regarded as their own fault. (Moral-epistemic disappointment with agents should also be distinguished from the patient-centred feeling of sadness that we feel when we reflect on wrongs people have suffered in the past; while this may be heartfelt, it is not intrinsically a moral-critical attitude but rather a product of our natural capacity for sympathy).13 Michael Brady has objected that Fricker is understanding disappointment in a rather odd way, disconnecting it from the usual notion of the non-fulfilment of reasonable expectations. Because, he says, it is unreasonable to expect that agents should be ‘moral pioneers or … engage in exceptional moral imagination when it comes to judgement and behaviour’, disappointment would seem to be out of place when nothing better could have been expected from them (Brady, 2010, p. 184). Our expectations of how people will think and act must be conditioned by the routine, not the exceptional. Brady suggests that Fricker would do better to link moral-epistemic disappointment to the non-fulfilment of hopes rather than of expectations. People generally cannot be expected to abandon routine modes of moral thought and embrace more enlightened alternatives, but where circumstances are not wholly hostile to the advent of new ideas there is at least a chance that some exceptional individuals will alight on them. If hope, in contrast to expectation, is consistent with a relatively low estimate of the probability of a particular event occurring, then linking disappointment with the non-fulfilment of hopes rather than of expectations would still leave room for moral-epistemic disappointment as an appropriate response to unenlightened routine behaviour, so long as there were a sliver of hope that a more enlightened alternative was a real one. But Brady is sceptical whether even this switch from expectation to hope in the characterisation of disappointment will ultimately help Fricker very much. His main worry is that by making the notion of moral-epistemic disappointment relative to hope, we implicitly
make it ‘relative to our possession of a particular prospective emotion’, and thus rely on the questionable assumption that our present standards are the proper standards on which to base our moral judgements of the past (Brady, 2010, p. 187). I suggest that this worry is not in fact very significant if we can show, as I have argued above to be possible, that we are not merely begging the question in favour of our own contemporary moral standards but can produce independently cogent reasons for believing them to be superior to certain of the standards they have superseded. But another objection, which Brady also concedes has some force, may be harder to remove: where the probability is very low that any but the most exceptional thinkers—and perhaps not even they—will manage to escape the grip of received ideas, then ‘resigned acceptance of this state of affairs’ may well be a more rational attitude than hope (189). For hope against hope is really indistinguishable from hopelessness. And even if something better may realistically be hoped for in certain contexts from some exceptionally advanced or independent thinkers, that can scarcely justify assuming an attitude of moral-epistemic disappointment towards the great majority of people who think ‘routinely’.

Where exceptional thinkers arise, of course, something stronger than moral-epistemic satisfaction may be appropriate regarding them. Consider a dissident Aztec priest (whether any such ever existed I do not profess to say) who after long consideration comes to condemn the practice of human sacrifice on what he sees, correctly, to be sound ethical grounds and not, say, simply because he is squeamish or dislikes the sight of blood. To say only that we feel a moral-epistemic satisfaction with this man would be inadequate—a case of damning with faint praise. Nor does it seem likely that Fricker or Brady, given their enthusiasm for independent moral thinking, would wish to stint their praise for the unwilling priest, despite the fact that he was acting so contrarily to his national ethic. Arpaly and Schroeder would certainly praise him for his moral insight and independence of mind. Independently-minded individuals who escape their cultural shackles and penetrate to new and superior moral insights may be reckoned among the major benefactors of humankind. And those very special individuals who not only question in their own breasts the morality of a practice like human sacrifice or witch-hunting but publicly condemn it as evil deserve particular
commendation. There is much intuitive force in the idea that a witch-hunter or an Aztec priest who did terrible things to other human beings could draw excuse for what he did from the cultural-ethical context in which he acted. That is the truth contained in the doctrine of the relativity of blame. But that relativity is not the whole moral story. Fricker and Brady, Arpaly and Schroeder, seem implicitly as much committed as we have seen such other writers as Williams, Foot, Velleman and Wong to be to the idea that some conditions of life are more conducive to human flourishing than others, and that moral insight consists in, or at least crucially involves, recognising with some accuracy what these are. Just as some people possess superior aesthetic sensibilities and see beauties in nature or in art that many others miss, so there are individuals whose moral insights exceed the reach of most other people’s. Having a deeper sense of what makes human beings and their lives valuable, they succeed in transcending the historical-cultural limitations of their society’s perspective.14 Therefore, while a significant role must be allowed for relativity in the making of historical moral judgements, not every judgement we make needs to be, in Fricker’s phrase, ‘relative to the moral thinking of the time’. When we praise the rebellious Aztec priest who has seen the light, we step outside a relativised framework to do so and appeal to something more universal. This individual, we believe and make bold to assert, has grasped something about the conditions for human flourishing that has eluded others with whom he shares a cultural background. His form of sociability is something broader and better than one that unblushingly promotes the flourishing (as he sees it) of some subsection of human beings at the expense of that of others.15

3 The Scope and Limits of Conscience

There is an important difference between doing something wrong where one mistakenly believes it to be the right thing to do, and acting badly where one has simply failed to think about the moral status of one’s action. Moral carelessness of the latter kind is normally culpable, though it may occasionally be excusable: where, for instance, the agent’s mind is in a state of upset or distraction, relevant moral considerations may be
temporarily swamped by other concerns, with mischievous consequences but moderated blameworthiness. Roughly speaking, the more good or evil that depends on one’s prospective actions, the greater the obligation to think carefully before one acts and the weightier one’s guilt if one fails to do so and harm ensues. As Locke remarked in the Essay Concerning Human Understanding, whenever our assent or non-assent to a proposition ‘is thought to draw consequences of moment after it, and good or evil to depend on choosing or refusing the right side’, the mind should ‘seriously … inquire and examine the probability’ (Locke, 1961 [1690], vol.2, p. 307).

Agents who do something wrong under the belief that what they do is permissible, even obligatory, clearly call for a different style of moral assessment. Acting according to conscience has generally been regarded as a laudable way of behaving. But problems arise if conscience sometimes provides us with the wrong answers. The idea that conscience may be fallible or faulty has been disallowed by writers who see it as a God-given faculty for telling right from wrong. According to Joseph Butler, ‘Conscience does not only offer itself to shew us the way we should walk in, but it likewise carries its own authority with it, that it is our natural guide; the guide assigned to us by the author of our nature’ (Butler, 1970 [1727], p. 37). For Bishop Butler, conscience, as ‘the rule of right’ that we all have within ourselves, can be relied on not to lead us astray, provided that we attend to it carefully and guard against being misled either by ‘superstition’ or by ‘partiality to ourselves’—these being the two most significant distorters of conscience’s messages (1970 [1727], p. 36). Butler’s bullish confidence that conscience, as a divinely-implanted oracle for telling right from wrong, is trustworthy provided we attend to it carefully persuades him that its dictates should be attributed the force of law: ‘Your obligation to obey this law, is its being the law of your nature’ (1970 [1727], p. 37).16

But what if conscience, rather than being a God-given moral faculty, is not a faculty at all but a messy congeries of natural and learned attitudes, ideas absorbed from the local culture, gut feelings (more politely, ‘intuitions’), and ratiocinations possessed of varying degrees of cogency? If that is what conscience is more nearly like, then it is not surprising that it functions at times as a faulty compass or as a ‘broken thermometer’
(Arneson, 1999, p. 120). Especially when one considers it one’s duty to act in a way that is prejudicial to the interests of others, one needs to ensure that one stands on very firm ground in regard to both one’s facts and one’s values. (That closer scrutiny of one’s basis for action may, of course, only strengthen one’s conviction that one is doing the right thing even where one is not). Unfortunately, an agent’s internal-reason set may contain nothing that would help her to focus the appropriate critical spotlight on her factual or moral errors. What she assumes to be, in Williams’s phrase, a ‘sound deliberative route’ from reasons to actions may be nothing of the sort, either because her premises are wrong or because they fail to support the practical conclusions she draws from them (cf. Fricker, 2010, p. 161). Good reasons for thinking other than she does may escape her altogether or appear too fantastical to be admitted to her motivational set. Alternatively, the practical moral bearing of genuine facts may be misconstrued, as when the possession of black skin is treated as a justifier of practices of enslavement and exploitation. But we should not be too quick to assume that people’s internal-reason sets are surrounded, as it were, by walls which block the entry of all intruders. Fresh experiences and encounters with new people provide fruitful opportunities in most lives for expanding our horizons, extending our sympathies and re-evaluating our beliefs.

Michael J. Zimmerman out-Rosens Rosen when he begins an article with the stark statement: ‘When Auschwitz camp commandant Rudolf Höss had over two million people put to death, he was not to blame.’ This, he allows, may be a bitter truth, but he argues in its defence that ignorance of wrongdoing excuses the wrongdoing ‘when it is constituted by the failure to believe that one is doing wrong’ (Zimmerman, 2002, p. 483). Even if it is conceded that ‘moral-epistemic ignorance’ (Fricker) or ‘normative ignorance’ (Brady) can have exculpatory force, ignorance is not, pace Zimmerman, the same as a ‘failure to believe’, and if Höss failed to believe that his calamitous actions were wrong because he did not sufficiently consider the moral arguments against them, then his ignorance, being avoidable, was blameably irresponsible. Normative ignorance that is based on simple failure to think about the relevant moral issues does not excuse the performance of bad actions; these, on the kindest assessment, must count as negligent or reckless. Similarly incapable of providing excuses for bad actions is the
kind of ignorance which Moody-Adams refers to as ‘affected’: this involves ‘choosing not to know what one can and should know’ about pertinent facts or values (Moody-Adams, 1994, p. 296). Turning a blind eye to, or otherwise downplaying the significance of, uncomfortable moral considerations, especially where paying proper attention to them would expose faults in one’s own conduct, can be a tempting option for agents who are keen to maintain a clean conscience and are prepared to do so even at the price of self-deception. The slave-owner or Nazi concentration-camp commandant who succeeds in keeping his mind from dwelling on the humanity of the people he abuses is not truly ignorant of the fact he affects to disregard; at some deep level he must be aware of it, though he prevents it from reaching the forefront of his consciousness. ‘Affected ignorance’ of this variety is more accurately classified as wilful heedlessness. Genuine (blameworthy) affected ignorance results from deliberately failing to acquire knowledge of facts that, if known, might put an inconvenient moral damper on one’s self-serving activity. W.K.  Clifford famously imagined a reckless ship-owner who hires out an ancient vessel to a party of emigrants without first checking on its seaworthiness; after the ship subsequently sinks with the loss of everybody on board, the owner’s ignorance of its faults, being avoidable, affords no reduction whatsoever of his moral responsibility; he is ‘verily guilty of the deaths of those men’ (Clifford, 1999 [1877], p. 70). It may seem hard in retrospect to understand how someone living in the twentieth century could have been quite so ignorant about—or neglectful of the force of—the relevant moral considerations as Zimmerman supposes Höss to have been. Germany at the advent of the Hitler regime was one of the most highly educated and culturally sophisticated countries in the world. Many Germans still chose to read Goethe, Kant, Schiller or Thomas Mann in preference to Mein Kampf. Yet it cannot be assumed that the Nazis rejected or were uninterested in moral considerations; to speak of ‘Nazi ethics’ is not a contradiction in terms, no matter how repellent the Nazis’ ethical ideas may now appear to be. Rudolf Höss may have believed, sincerely and deeply, that the German people had enemies everywhere, and that the prime moral duty of every good German was to support the Führer in his defence of the Fatherland against its foes, including, above all, the crafty and insidious Jews (see
Höss, 1959).17 ‘Obedience to authority’, Gitta Sereny has argued, was in any case ‘intrinsic to the German character before and during Hitler’s time … and for many years afterwards’ (Sereny, 2000, p. 346). Hannah Arendt in her celebrated study of the post-war trial of the Nazi Adolf Eichmann in Jerusalem reports how Eichmann professed to have read Kant in his youth but to have re-fashioned the Categorical Imperative as enjoining obedience to the commands of the Führer, which represented for him the ultimate source of law (Arendt, 1968, pp. 135–37). Eichmann’s perversion of the Categorical Imperative from its original meaning to one that delivered practical consequences of an entirely opposite nature to any that Kant could have approved can stand as emblematic of the Nazi ethical misadventure: this was less a rejection of one system of ethics in favour of another than the transmutation of an older ethic into a far darker version of itself, in which such recognised values as obedience to law, loyalty to leaders and to the community, subordination of one’s individual interests for the sake of the greater good, courage, perseverance in the face of obstacles, resoluteness and tenacity of purpose, were radically re-orientated in ambit and application.

The fact that not all Germans in Hitler’s Reich were persuaded by the Nazi refashioning of traditional ethics shows that those ideas were not inescapable. Still, this does not mean that they were always easy to escape from for individuals of a certain psychology or social background. Steven Lukes has proposed that ‘[r]easons are available to all who can reason and cannot be completely internal to a particular way of life or culture’ (Lukes, 2003, p. 7). But this view looks too sanguine: how easy was it really for young Germans during the 1920s and 1930s to escape the lure of Hitler, in the aftermath of a war in which a defeated and humiliated country appeared otherwise doomed to a limitless future of economic and social blight? To many citizens of the Weimar Republic, the Nazi programme, though in some respects distasteful, appeared to hold out Germany’s best hope of regaining its rightful place among the community of nations. Lukes’s claim seems even less persuasive when considered in relation to the Aztecs or the witch-prosecutors. For an Aztec, the powerful myths that sustained the practice of human sacrifice were the only ones in town, and any passing doubts about the ethics of sacrifice would have seemed foolish and fantastical. And although Montaigne may have entertained
doubts about the reality of demonic witchcraft, he was an exceptionally thoughtful and intelligent man; consequently it would be unfair to infer from the fact that Montaigne was capable of questioning received ideas that the average person of the time was capable of a similar intellectual performance. For the majority of ordinary thinkers, the reasons that were ‘internal to [their] particular life or culture’ were far more likely to appear compelling than the fine-spun doubts of sceptics such as Montaigne or Weyer.

Moody-Adams, like Lukes, rejects the idea that living within a specific milieu fatally impairs the faculty of critical judgement of its mores; in her view, cultures are created and transmitted by people, but not in the way that one transmits a virus: ‘having a culture simply is not a physical reality like having a disease’ (Moody-Adams, 1994, pp. 303–04; cf. Moody-Adams, 2002, p. 82). Although she concedes that a ‘serious effort’ is required to question prevailing cultural assumptions, she believes that, where that effort is made, thinkers of even quite average capacity can transcend their starting-points sufficiently to cast a critical eye over their basic conceptions. Even if this claim were true—and neither the record of history nor common experience affords it strong support—people would still need to see a reason to question their basic conceptions before being disposed to do so, and this they are less likely to do in illiberal cultural environments. The notion of questioning the basic beliefs and values of one’s environment will have scant appeal to people whose cultures disvalue scepticism and dismiss all challenges to orthodoxy as wicked or heretical (although this is not to say that such questioning would necessarily be seen as ‘mad’, as Moody-Adams maintains that defenders of the idea of hermetically-sealed cultures would wish to claim (2002, p. 82)). From the standpoint of cultures that laud the virtue of unquestioning acceptance of authority, it is the liberal perspective which appears to be morally skewed. Open-mindedness may seem an obvious virtue to those of us who live in modern liberal societies but that appearance is itself a cultural artefact. James Montmarquet has remarked that we would not be impressed by someone who sincerely claimed to be unable to see the point in ‘openness’ to truth and truth-related considerations (Montmarquet, 1999, p. 845). A person who purported to be unamenable to persuasion by cogent evidence or sound logic would certainly be
puzzling; in effect, she would be admitting that she was indifferent to truth. Yet someone might sincerely declare that she was wholly open to being persuaded by cogent evidence concerning some matter of debate while setting limits, which a more liberally-minded person would not accept, to the scope of what she considered to count as cogent evidence. Open-mindedness of this variety is not to be confused with broad-mindedness.

Many contemporary writers have taken issue with J.G. Herder’s once-popular concept of a culture (Kultur) as a complex of beliefs, values and aspirations, intellectual and religious tendencies—a cultura animi—that binds a community together in a unity of thought and feeling and is mostly closed to external influences (see Kuper, 1999; Benhabib, 2002). Cultures are increasingly seen as typically open and flexible structures, rather than as monolithic and rigid ones. According to Lukes, ‘cultures are never coherent, never closed to the outside, never merely local, and never uncontested from within and without—though, of course, the degree to which these things are true will vary from case to case’ (Lukes, 2003, p. 34). However, this line of thought, while usefully correcting older views of cultures that over-estimated their separateness and distinctiveness and underestimated the amount of interaction there has always been between them, is itself misleading if pushed too far. Cultures may not be conceptual prisons but they are conceptual homes, the places we begin from and which set the bearings for any journeys we venture to make beyond our familiar environs. An Aztec who met a Spanish conquistador for the first time discovered that there were some very different ways of thinking about the world from those to which he had been accustomed, but only when the Bible was accompanied by the point of the sword was he strongly persuaded to take them seriously.

When writers like Lukes and Moody-Adams speak about the ‘availability’ of conceptions that are alternative to our own, they need to recognise that not every conception that is ‘available’ to an audience will necessarily appear to it as a realistic option. We certainly do not consider the Aztec cosmology to be a genuine candidate for our own credence just because its details have been made ‘available’ to us by the work of scholars. Moody-Adams suggests that to look on the people who live by a different culture as ‘fundamentally “other”’ is ‘ultimately to view them as less
than fully human’ (1994, pp. 308–09). This looks like rhetorical overstatement: one may find another person very hard to understand without necessarily doubting his humanity. Whether we ever view members of other cultures as ‘fundamentally “other”’ in the way Moody-Adams intends is in any case open to doubt (absent a committed exercise in self-deception); even an Aztec will display plenty of familiar human features alongside his otherwise strange tastes in religious ritual. We certainly have no right to consider ourselves as paradigmatic representatives of our species—nor have the members of another culture the right to question our own humanity. But the fact remains that where ‘jagged worldviews collide’, grasping how those human beings who inhabit a different cultural landscape see the world is never an easy task. To be sure, strongly contrasting cultures can always engage in dialogue of a kind (Aztecs may talk to Christians) but the conversation may be halting, difficult, and ultimately inconclusive.

Moody-Adams asserts that ‘blaming cultures’ for the moral mistakes of individuals ‘ignores the ways in which cultural conventions are modified, reshaped, and sometimes radically revised in individual action’ (Moody-Adams, 1994, p. 306). And so they are; but by and large change is slow and incremental rather than radical and swift, and the ‘boundaries’ that are pushed at generally lie at no very great distance from the place we are now. Moreover, to repeat a highly salient point, the fact that some exceptional individuals may be capable of thinking right outside the box is no basis for supposing that the majority can do so, or that cultural background does not go a very long way towards explaining, and in some cases excusing, the moral errors that people happen to make. To err is human, but errors by individuals commonly have cultural foundations.

John Benson has remarked that we rightly take much on trust because we have no realistic alternative: we lack the time, knowledge, means and opportunity to probe all our ideas and opinions for their truth or falsity. While one can be too dependent on others’ ideas, it would be foolish to rely on oneself when it would be more sensible to accept the counsel or the testimony of others: ‘Indeed to accept the counsel of someone more experienced, more imaginative or better at drawing together the implications of a complex array of considerations may mean that one is more likely to have the right opinion’ (Benson, 1983, p. 14).18 Many people
understandably feel a need for guidance from authority both in determining their moral principles and in applying them in a complex, confusing world. To rely on ourselves alone to perform these tasks is implicitly to ascribe to ourselves powers that may be much too feeble to do the job unaided. A person who, recognising the fallibility of her own moral judgement, defers to established opinion should not be typecast as a moral and intellectual coward. Bravado in moral thinking no more deserves the praise due to courage than does bravado in practical action. One needs to see good reasons for asking questions and not merely ask them in order to earn a reputation as an ‘independent thinker’. It is true that a culture in which no one ever questioned the basic assumptions would remain entrenched in its mistakes and incapable of any ethical evolution, and there are plentiful examples in history of outstanding men and women who have incurred ridicule, dislike or worse for daring to doubt what others fervently believe. Being prepared to risk persecution or even death for one’s ethical or religious convictions displays courage of a high order but it is not always associated with the possession of moral truth; some people have given their lives for some very bad moral causes (and they continue to do so, as the example of Islamic jihadist suicide-bombers starkly shows). Moral error can occur whether one follows the crowd or prefers to ‘think for oneself’. There is no royal road to ‘correct’ moral belief.

The question is how far wrongful action can be excused when it stems from sincere adherence to a false ideology or other culturally-embedded ideas. Can one be a murderer in good faith? Answers to this range from the view of Zimmerman that conscientious acceptance of a bad ideology such as Nazism can totally exonerate an agent, to that of Moody-Adams that it provides little or no excuse (since everyone can think for herself, and is not tied by inescapable cultural shackles to a particular way of thought). I suggest that the truth lies somewhere in the middle, although this somewhat bland-sounding answer requires nuancing. For if we ask where precisely between these extremes the truth lies, the reply is that there are as many answers as there are individual predicaments. An agent’s degree of moral responsibility is a function of a set of personal and situational factors, only some of which are within her control. To put matters very bluntly, less can fairly be demanded from the stupid, the
unimaginative or the naturally unsympathetic than from those who are more lavishly endowed with qualities of mind and heart. When Zimmerman lets the Commandant of Auschwitz completely off the moral hook, he neglects to consider whether Rudolf Höss could have put up a better mental performance than he did, or whether he failed to ask the questions that he should—and could—have asked, thereby being chargeable with wilful heedlessness. Some of the agents of Hitler’s will were patently better equipped or more favourably placed than others to subject the claims of the regime to critical scrutiny. No doubt some Germans who were capable of taking a critical stance failed to do so because they scented opportunities for personal advancement so long as they obeyed the party line; for them, it is hard to find any excuse. Others may have failed to exercise properly their power of taking care from sheer laziness or cowardice or a liking for the quiet life; some, more respectably, may have been convinced that people in authority must be wiser than they were. The crucial meta-ethical principle here is that more is morally demanded from those who are capable of more.

4 Situation-Adjusted Moral Judgements

Moody-Adams’s strictures against those writers who declare, often without much evidence or argument, that such-and-such conceptions could get no grip or would even be rejected as mad within a particular culture are not without their force. It is not a logical consequence of the fact that people did not think thoughts of a particular kind that they could not have had those thoughts; in some cultures, some people may have entertained dissident ideas but found them unconvincing. Moody-Adams takes Williams and Slote to task for supposing that incapacity rather than negligence explains the failure of the ancient Greeks to reflect on the ethics of slavery more adequately than they did (Moody-Adams, 2002, p. 88). None so blind as those who will not see. Aristotle, whose own view was that ‘some men are by nature free, and others slaves, and that for these latter slavery is both expedient and right’, admitted that ‘[e]ven among philosophers there is a difference of opinion’ about slavery, and
that ‘many jurists … detest the notion that, because one man has the power of doing violence and is superior in brute strength, another shall be his slave and subject’ (Aristotle, 1959, p. 35; 1255a). Whether the Greeks in their own cultural situation could ever have been brought to estimate the moral arguments against slavery to be as cogent as we do today must remain a moot point. However, it would be unsafe to conclude that wherever we see people of the past engaging in practices that we judge to be morally questionable, or worse, it must always have been either incapacity or negligence that prevented them from seeing the error of their ways. For there is a third possibility that needs to be considered, namely that in certain historical circumstances practices which are morally less than ideal might have been justifiable, or in some degree excusable, given the conditions of the time.

For an illustrative example, consider the practice of imposing very harsh punishments on convicted criminals in societies which lacked effective policing or other crime prevention methods. The ‘end of punishment’, wrote the English jurist Sir William Blackstone in the 1760s, ‘is to deter men from offending’, adding that ‘it is but reasonable that among crimes of different natures those should be most severely punished, which are the most destructive of the public safety and happiness’ (Blackstone, 1787 [1769], Vol.4, pp. 10, 16). Accordingly, punishments in Britain and in most other European countries until well into the nineteenth century were often very severe indeed. In Britain in 1815 there were still no fewer than 220 offences on the statute book for which death was the prescribed penalty; these included the theft of goods worth five shillings from a shop, house-breaking, sheep-stealing, and—more bizarrely—causing damage to London’s Westminster Bridge (Thomson, 1950, p. 17). Blackstone, although famously twitted by Jeremy Bentham for his legal conservatism (see Bentham, 1988 [1776]), advised that punishments should not be disproportionately harsh in relation to the offences to which they were affixed, and that legislators ought to be ‘extremely cautious of establishing laws that inflict the penalty of death, especially for slight offences’; this was not only inhumane but it did not always have the deterrent effect expected of it, since (and here he cites Montesquieu) ‘crimes are more certainly prevented by the certainty, than
by the severity of punishment’ (Blackstone, 1787 [1769], Vol.4, pp. 10, 17). But the big problem was that in the absence of any adequate police force to detect crime and to enforce the law, there was very little certainty that crimes would be punished. Therefore deterring potential criminals by wielding the threat of swingeing penalties on those offenders who were detected and convicted appeared to be the sole means available of preventing social anarchy.

Just how bad things can become when law-enforcement is haphazard or non-existent is shown by statistics for the English city of Lincoln during the high medieval period. The King’s justices who visited this city of around 6000 inhabitants in 1202 ‘had to deal with some 114 cases of homicide, 89 of robbery (generally with violence), 65 of wounding, 49 of rape, besides a number of less serious crimes’ (Lane-Poole, 1955, p. 392). Crimes such as these, and committed on this scale, plainly called for vigorous measures of judicial retaliation. There is no reason to suppose that Lincoln was in any way unusual in respect of its crime statistics, and whether or not the heavy punishments inflicted by the justices made Lincolnites think twice about offending on such an industrial scale in after years, there was no place for judicial tenderness when it came to dealing with such a turbulent populace. Justice had to be done with a firm, unflinching hand where exemplary punishment offered the only practicable substitute for effective law-enforcement (Lane-Poole, 1955, p. 392).

I suggest that it is not unreasonable to consider that punishments which would rightly be regarded as excessive or unacceptably cruel if inflicted today in Europe or America were morally defensible in periods when better mechanisms for maintaining social order were lacking. Blackstone’s principle that judicial punishments should not be harsher than they needed to be in order to provide an acceptable amount of public protection may justify different levels and types of punishment at different times and places, according as other effective crime-prevention methods are available. If there is a relativity involved here, it is important to see that it is not Williams’s relativity of distance but rather a relativity to circumstances which is neutral with regard to moral principles. To be sure, it was bad moral luck that a judge in a thirteenth- or eighteenth-century court of law probably had somewhat less opportunity than his modern counterpart to exercise the virtues of mercy, tolerance and
sympathetic understanding. But judges of every period might reasonably be expected to agree that—to parody W.S. Gilbert’s Mikado—the punishment must fit the time.19 The example of crime and punishment demonstrates that we need to exercise sensitivity to the circumstances of particular societies, their specific exigencies, capacities and resources, if we are to avoid appraising them inappositely and unfairly. What I shall call ‘situation-adjusted moral judgements’ should not be thought of as ‘soft’ judgements expressive of moral condescension on our part (‘The poor benighted creatures knew no better’), but as judgements that respect the agency of former actors while acknowledging the practical constraints under which they were compelled to operate. High ideals may need to be put on the back-burner where their inflexible pursuit would risk causing overall more harm than good. Agents in past societies who have grasped this truth and acted in accordance with it do not deserve to be condemned by us just because contemporary people, in more favourable circumstances, are not constrained to make the moral compromises they made.

It is not always easy to decide just how much weight should be given to the factor of adverse circumstances when venturing moral judgements of people’s sub-ideal actions. The judge who, in a poorly policed state, inflicts a penalty on a criminal that is harsher than would be defensible if inflicted on a similar offender in a polity possessing better crime-prevention methods may be justified in what she does. But what about the situation of the would-be virtuous agent who faces the prospect of acting in an environment in which ethical standards are generally rather low? ‘By modern standards,’ writes J. R. Lander, ‘societies all over Europe were turbulent, feckless, inefficient’ in the fifteenth century, and ‘standards of conduct which decent men today consider normal … until as late as the seventeenth century were considerably above the customary practice of even the upper classes’ (Lander, 1969, pp. 164–65). At the end of the Middle Ages, ‘financial dealings were accompanied wherever plausible by unblushing perjury’ (1969, p. 40). If we assume, for purposes of argument, the accuracy of Lander’s unflattering assessment of the ethical condition of late-medieval society, we might try to imagine the situation of a man (or more rarely it might be a woman) who plans to engage in commercial enterprise at that period. If this budding merchant
is to have any prospect of success, he or she knows that it is a recipe for failure to be too morally 'nice' in the conduct of business; and while this fact may be regretted, it has to be accepted if anything is to get done. The merchant who was unwilling ever to engage in bribery, perjury, chicanery, false accounting or other forms of double-dealing would be unlikely to survive long in the corrupt world of fifteenth-century commerce. The question is to what extent the recognition that 'The world's like that' provides moral leeway for the merchant who feels that the choice lies between conforming with the way that things are conventionally managed and desisting from doing business altogether. Admittedly, a person of even moderate virtue would eschew gross offences of the order of murdering a commercial rival, piracy on the high seas or robbery with violence; but what should he think about such lesser immoral practices as bribing corrupt officials, exaggerating the quality of his product, squeezing his workforce to deliver maximum output for minimum pay, taking a cavalier attitude towards weights and measures, or being sluggish in the settlement of accounts? It is easy to say that a really virtuous person would have no truck with any such dishonest practices; but if people of modest virtue are never willing to get their hands dirty by acting in the world, then they abandon the field to those whose moral scruples are slight or non-existent. It is a moral mistake to allow the best to become an enemy of the good, or even—occasionally—of the mediocre. Our prospective late-medieval merchant might well consider that the (legitimate) private and public interests that would be served by his or her doing business are sufficiently important to justify some occasional dirtying of hands. That, I suggest, is a judgement we might endorse, even if we would be less forgiving of a merchant who behaved similarly in an environment in which higher standards of commercial conduct were the norm.

It may be plausible, then, to make some allowances when we judge historical actors who operated in environments in which high standards of conduct were, for one reason or another, difficult to maintain. It might with some justice be said that all moral judgements of agents ought similarly to be situation-adjusted, in the sense that they should take into account such factors as the agent's knowledge, capacities, values, religious convictions, cultural background, and so on. But here I am especially concerned to notice something of a more down-to-earth sort, which is
easily—and frequently—overlooked in philosophical discussions of the ethical bearing of culture on agency. It is not only cultural conditioning which has a bearing on how people act and how they think they ought to act; sometimes the simple brute facts on the ground impose pressures on agents to act in ways that they themselves may recognise as sub-ideal but which they see as unavoidable if they are to act at all. Where this is the case, as in the example of the fifteenth-century merchant, we ought in fairness to moderate the severity of our moral judgements, acknowledging that the agent may be less a blameworthy doer of wrong than a victim of a species of bad moral luck.

Notes

1. In modern common law, criminal intentions are not normally subject to penalty unless there is also some act that is more than a preparatory step taken towards fulfilling them. Application of this principle to those who attempted maleficium would imply that they could lawfully be found guilty of an attempted crime, if not of a completed offence. However, the complication is that once the reality of witchcraft is rejected, the offence comes within the legally and philosophically problematic category of an 'impossible crime', and jurists dispute whether such 'crimes' should be subject to legal sanctions. A stock, if fictional, example commonly cited in discussions of impossible crimes is the attempt, in Arthur Conan Doyle's story 'The Empty House', by Colonel Sebastian Moran to murder Sherlock Holmes by shooting at a wax dummy which he mistakenly believes to be the real Holmes. Since one cannot kill a person by destroying an effigy of him, the question is whether the Colonel can legally be charged with attempted murder. For more on the topic of impossible crimes see, e.g., Colvin (1991); Duff (1993).
2. It has been argued that Europe in the twelfth and thirteenth centuries became more disposed to practise persecution of witches, heretics, Jews, lepers and other minority groups at least in part because it suited the purposes of 'a new class of functionaries—clerics and courtiers—for whom persecution might serve the twin ends of providing the means to extend the power and advance the interests of their masters, while consolidating their own position and undermining potential rivals' (Moore,
2007, p. 144). No doubt some witch-hunters and judges were moved by such motives, but there is no good reason to suppose that all, or even most, of them were (whatever may have been true of the 'functionaries' who were responsible for the persecution of other minority or unpopular groups). The prospect of demonically-assisted witchcraft terrified literate and non-literate people alike; and except where there is clear evidence to the contrary, the default assumption should always be that witch prosecution was done in good faith.
3. The Biblical endorsement of the reality of witchcraft was cited as proof of the theoretical possibility of 'commerce with evil spirits' as late as the 1760s by Sir William Blackstone in the final Book of his influential Commentaries on the Laws of England, although Blackstone approved the discontinuation of prosecutions for anything more than the minor offence of 'pretending to use witchcraft' in view of the difficulty of giving credit to allegations of 'any particular modern instance of it' (Blackstone, 1787 [1769], Vol.4, pp. 60–62).
4. One might wish to add the qualifier 'in normal circumstances' to allow for exceptional cases in which an act that would usually be wrong may be permissible (pace Kant) if it is the only, or the best available, means to prevent some evil.
5. The idea of a harm as a setback to an interest is associated especially with Joel Feinberg (see Feinberg, 1984, passim).
6. A muffled midnight visitor, alleged to be Cromwell, is said to have made this remark while standing over the coffined figure of the executed king as it lay in St James's Palace awaiting burial. See Wedgwood (1983, pp. 201–02).
7. Might the act of the misguided judge who sentences an innocent man to prison, referred to above, likewise be classed as a wrong act, being both harmful in its consequences and based on factual error? I find my own intuitions ambivalent here. Were the judge to sentence the accused to a cruel penalty, the case would be more clear-cut; however, if the element of cruelty is lacking, the sentence is still inappropriate to the case and causes the victim inconvenience and distress. Hence it may not be stretching words too far to call the sentence morally wrong, while holding the judge to be innocent of any bad intentions. But the point need not be insisted upon.
8. The difference between a justification and an excuse was crisply set out by J.L. Austin: 'In the one defence, briefly, we accept responsibility but
deny that it was bad [justification]; in the other, we admit that it was bad but don't accept full, or even any, responsibility [excuse]' (Austin, 1970, p. 176).
9. Another possibility is identified by J.H. Parry: 'A slave, even though legally a chattel, does not necessarily lack personality in the eyes of his owners; but in the Americas, the wholesale character of plantation slavery and of the trade in slaves robbed most Europeans of any sense of the humanity of the Africans whom they purchased and employed' (Parry, 1964, p. 282). Just as it can be hard to spot faces in a crowd, it can be hard to identify individuals as individuals when they are traded and employed en masse.
10. But, to repeat a point made earlier, it is possible that some, or even many, of the slaves themselves accepted that slavery was part of the normal order of things and thus not in any meaningful sense wrong, even while they deplored their own sad fate. And we have it on the testimony of Primo Levi and other victims of Nazi atrocities that inhuman treatment can sap the self-respect of its victims, bringing them to believe their oppressors' claims of their lack of value (Levi, 1987, p. 96). Reducing one's victims to the point of believing in their own worthlessness is perhaps the very worst thing one can do to human beings.
11. Derk Pereboom has argued for a more restrictive conception of the function of moral blame which emphasises its role in correcting and rebuilding relationships that have been damaged by wrongdoing: 'Blame can be conceived as taking a non-retributive stance of moral protest, whose function is to secure forward-looking goals such as moral reform and reconciliation' (Pereboom, 2021: Abstract). Pereboom's determination to construe blame in a manner which stresses the constructive and healing roles it can perform, rather than the more backward-looking, 'retributive' character which has more frequently been ascribed to it, is welcome and refreshing. However, while he is undoubtedly right to draw attention to some of the previously neglected functions which blame can serve, his account may perhaps be better seen as complementing rather than replacing more 'backward-looking' accounts; for blame, and likewise praise, are plausibly regarded as having a Janus-like character, looking both backward and forward in time. Furthermore, in the particular case of blaming or praising past people or their actions, this forward-looking function must be restricted to their effects on present or future people, since they are unable to play any healing or constructive role in
respect of the agents whose actions are their target. It is too late now to reform the Aztec priests or witch-hunters of former days and persuade them to see the error of their ways.
12. Rosen thinks that a person may be excused for his normative ignorance even in the more extreme case where 'the ignorance in question is what might be called bare ignorance of the rational force of moral considerations' (Rosen, 2004, p. 305; italics in the original). When this agent acts in some self-serving way, not recognising the superior (or maybe any) force in countervailing moral considerations, his act is 'a perfect manifestation of practical rationality', hence 'he is not properly culpable for his bad action' (2004, p. 305). I am inclined to agree that an agent of this description might escape culpability for his wrong acts, but the fact that Rosen is prepared to ascribe normative ignorance to his condition somewhat undermines the claim that he exhibits some high level of practical rationality; it would appear, rather, that the agent in question has failed to recognise the rational character of moral demands, and is to that extent irrational. (Compare: a person who is confident she understands the rules of the differential calculus may apply them correctly according to her lights; but if she actually applies them wrongly, then she displays defective rationality as a mathematician).
13. Jeffrie Murphy has suggested that when something has been done to us that we believe to be morally wrong, and also that 'because of certain factors about the agent (e.g. insanity) it would be unfair to hold the wrongdoer responsible or blame him for the wrong action', then sadness rather than resentment is the most appropriate response (Murphy, 1982, p. 506). Murphy's notion of sadness, being agent-directed, may be closer to Fricker's idea of moral-epistemic disappointment than it is to the more purely psychological and non-critical feeling I have designated 'sadness'. However that may be, it is plausible, as Murphy maintains, that resentment is not an appropriate attitude to feel when one has been injured by a person who is not responsible for his or her actions (nor, it may be added, is 'sympathetic' resentment an appropriate response when the targets of non-blameworthy wrong acts are people other than oneself).
14. Agreement on basic principles of equality, liberty and the intrinsic value of human persons and their lives still leaves plentiful room for disagreement over just how these principles should be spelled out in practice and/or weighed against one another. One reason for the intractability of contemporary debates on such topics as abortion, euthanasia or the limits
of free speech is that reference to agreed basic principles provides no plain and univocal steer on these issues.
15. Note that I do not mean to imply allegiance to any notion of natural law by these remarks. Traditional conceptions of natural law represent nature as a source of rules of conduct, whereas my appeal is to the natural features of human beings that are the crucial determinants of the forms of life and action that suit them well or badly. Nature is not normative in the sense in which a law-code is normative; it gives us no commandments. It is a source, so to speak, not of rules but of 'appropriatenesses'. Nature's 'path', to repeat Pope's word cited in the previous chapter, is not a royal road laid out before us but one we discover for ourselves through self-examination.
16. Butler's position on conscience is less nuanced than that taken centuries earlier by Thomas Aquinas. In the latter's view, conscience was an infallible guide to the 'first' or universal principles of natural law but it could err when it came to the application of those principles to specific situations. Aquinas added that acting from false conscience (conscientia erronea) does not release from sin when it impels agents to act in ways that contravene divine law. But nor does a person who believes falsely that something right is wrong (e.g. drinking wine or eating flesh meat) sin if he acts against his conscience: for there can be no sin except where the agent acts contrary to the will of God (Quodlibeta, Bk.3, Q.26, 27). Aquinas's theory of conscience can be challenged on several counts (how, for instance, can he be so confident that conscience infallibly informs us about moral first principles?); but in acknowledging the possibility of false conscience he shows greater awareness than some later writers have done of the pitfalls of looking to conscience as a never-erring moral guide.
17. Höss remains a puzzling figure. It is also possible—and it is very probable in the case of some of Höss's fellow-Nazis—that sadistic pleasure in wielding power over helpless people was a significant motivator. Höss and his colleagues may have salved, or tried to salve, their consciences by telling themselves that the victims they mistreated merited nothing better. In their book on the Holocaust, Bloxham and Kushner draw the disturbing conclusion from the evidence of twentieth-century genocide that 'irrespective of education, socialization, and prior ties with the victim community, people will kill under particular circumstances' (Bloxham & Kushner, 2005, p. 159).
18. Benson is more inclined to think we can rely on our own judgement when it comes to moral, as distinct from factual, matters. Siding with Hume's view that 'morality is more properly felt than judged of', he argues that because our feelings are capable of putting us on the right track, we can do without others' moral guidance or instruction (Benson, 1983, p. 14). I have already argued (in Chaps. 2 and 4) that feelings are not the infallible moral guide that Benson supposes them to be, and I shall not rehearse those arguments again. It is also worth noting that it can be hard to sustain sympathetic feelings towards members of generally feared or disliked groups (e.g. suspected witches, or Jews in Nazi Germany) where people live in what Kathie Jennie has called 'an obfuscating social climate' in which spontaneous moral feelings are officially disvalued, or diverted away from certain targets (Jennie, 2003, p. 287). Feelings are just as capable of being distorted or perverted as thoughts are.
19. I assume here that deterring crime is at least one of the functions of judicial punishment, whether or not it is the only one. Retributivists hold that the prime—some would say the sole—purpose of judicial punishment is to ensure that the offender suffers his or her just deserts. Whether the deterrence justification of punishment, which derives from a consequentialist outlook, can coherently be hybridised with a retributivist theory—which is generally viewed as being deontological in character—is a complex and difficult question; I say more about it in my 2004. Note that the claim that what constitutes the 'right' or a 'suitable' punishment for a given crime requires reference to the social context in which it is committed poses a potential problem for the retributivist: for, given this relativity to context, how should what constitutes the 'just deserts' for that crime be determined? Would it be consistent with retributivist principles to punish similar acts of theft more severely in an ill-policed state than in a well-policed state? Or would such differentiation affront the principle that the punishment must always fit the crime?

6 Interlude: A Late-Medieval ‘Hand-List’ of Offences against Sociability: Or, Plus ça change, plus c’est la même chose

In presenting his thesis of the relativity of distance, Bernard Williams contends that the ethical conceptions of the past are ‘related to our concerns too distantly for our judgements to have any grip on them’; conversely, the concerns of past people are too different from our own for our ethical conceptions ever to have been options for them (Williams, 1981, p. 142). We saw in Chap. 2 that there are several serious objections, on both philosophical and empirical grounds, to the relativity of distance thesis, and that Williams is generally too ready to exaggerate the cultural, psychological, social, as well as the moral, differences that distinguish human societies past and present. Once those differences are reduced to their proper proportions and more attention is paid to the striking similarities that exist between human beings in all eras, the idea gains plausibility that certain ethical standards serve their needs and interests better than others. But not only this: many of the most serviceable principles (for instance, those banning murder, cruelty, theft, lying and deception, promise-breaking, cheating, disloyalty, neglect of parents or children, treachery to leaders, betrayal of friends, etc.) have figured in the moral codes of many, if not of most, societies. The existence of substantial
convergence in the moral beliefs and practices of different peoples is easily overlooked by philosophers and anthropologists who focus on what divides rather than on what unites; such outré and horrific practices as Aztec human sacrifice or Crow scalp-hunting tend to command an attention not aroused by the mundane rules and conventions which most people mainly rely on in order to get along with one another. The lure of the exotic needs to be resisted if we are to gain a more balanced view of human moral life. With this in mind, Chap. 3 explored the prospects of locating a standpoint for moral judgement of past cultures that centres on the constant features of human nature: the physical and psychological dispositions, needs and interests that people of all eras have in common.

It is not very surprising that when we compare the past with the present it is the differences that more immediately strike us; the similarities, in the way of all familiar things, slip easily below the threshold of conscious vision. This blindness to the ordinary readily breeds the illusion that the past is more alien, exotic or incomprehensible than it really is. Hartley's apophthegm that the past is a far country where 'they do things differently' is more catchy than true; people were born, grew up, married, made love or war, hated, competed, cooperated, hoped, desired, feared, pursued ambitions, quarrelled, forgave, believed in leaders, religions and ideas, fell ill and recovered again, and died hard or easy deaths, much the same in the past as they still do today. Accidentals should not be taken for essentials, nor surface differences for cultural gulfs. The fundamental factors that make for or against sociability do not alter greatly over time though the forms they take vary with circumstances. Caring for the sick and vulnerable promotes human well-being and togetherness in any period—whether it be prosperous Dives giving beggar Lazarus a loaf of bread, or a contemporary organisation like Oxfam or UNICEF providing emergency relief to a famine-stricken region. Likewise, theft is theft, and socially obnoxious, whether the thief is a horseman in a tricorn hat who waylays travellers at the crossroads or a computer hacker who covertly removes funds from his victims' online bank accounts.

It is evident that societies can be organised in diverse ways, according to a wide variety of models that are more or less effective at promoting social stability. But social stability should not be confused with sociability in the sense in which (following Velleman) I wish to represent it as a
good. Some of the most stable societies have been repressive tyrannies or dictatorships, where a miserable population is kept in check by the threat of heavy penalties for stepping out of line. Such societies are not ethical societies (though they may contain many ethical individuals) and the restraints they place on human relationships ensure they are woefully deficient in sociability. There are other societies that have pursued a conception of the good but one that was perverse and misguided: examples would be recent Daesh (ISIS) in Iraq and Syria, and Spain in the sixteenth century, where deviations from strict Roman Catholic orthodoxy were punished with uncompromising harshness by the agents of the Holy Office. In such unsociable, or imperfectly sociable, societies, self-interest or ideology are distorting glasses which present at best a caricatured image of the human good. Where such environments prevail, what Abraham Lincoln called the ‘better angels of our nature’ can be very hard to hear. It is a leading thesis of this book that the basic conditions that make societies sociable are more closely constrained by facts about human needs and interests than authors of a relativist or pluralist inclination have generally allowed. Genuinely sociable societies are more alike in the deep structure of their governing rules and norms than the differences in their surface structures might initially suggest. Of course, that does not mean that people will always act well; rules are often honoured more in the breach than in the observance. But people may subscribe to the same general principles of behaviour even if they are not always very good at following them. Because people must adapt themselves to the situations in which they find themselves, the ways in which principles are applied are necessarily flexible. But the relativist’s contention that different ages mean fundamentally different ethics is not strongly supported by the historical record. Admittedly, some past societies have developed their moral thinking to a more sophisticated level than others: there may well have been more ethical discernment to be found amongst the citizens of Aristotle’s Athens than amongst the followers of Attila the Hun—though we should be cautious about presuming ethical simplicity where we are relatively ignorant about a society, as we are about the Huns. But even in what have been looked on (or more often looked down upon) as ‘primitive’ societies, qualities such as loyalty, courage, honesty, veracity, cooperativeness, justice, kindness, self-sacrifice and many others have commonly been no less
highly valued than they are in the advanced modern world. When the shipwrecked Spanish Conquistador Cabeza de Vaca, along with a handful of companions, made his celebrated trek from Florida to California in the years from 1528 to 1536, it was the similarities among people everywhere rather than the contrasts between cultures that impressed him, many of the indigenous peoples he encountered treating him with a kindness that totally belied their reputation as 'savages' (Cabeza de Vaca, 2003). Convinced that, when one digs down, people are much the same everywhere, regardless of their cultural peculiarities, Cabeza de Vaca was subsequently to protest against Spanish mistreatment of the native Indians, insisting that they deserved to be shown the respect that is due to every human being, irrespective of race, creed or culture.1

The historical inaccuracy of 'Different ages, different ethics' may be best shown by examination of actual examples of ethical systems from the past. One such example, taken from fifteenth-century Italy, I consider in detail in the remainder of this Interlude. It goes without saying that no generalisation can be proved by a single instance. But the purpose here is not to prove a theory but to help to release the grip of the idea that the concerns of past people were always, as Williams puts it, 'too distant from our own' for their ethical ideas and our own to march in tandem. While what follows is not integral to the developing argument of this book and may be skipped if the reader so wishes, the illustration gives weight to the idea that the perceived requirements for sociable living are more constant through the ages than relativists are prepared to admit. It should especially be noted how similar many principles of this scheme are to principles that are still widely accepted today, notwithstanding many inevitable differences in application or emphasis.

To begin with, some background. Antonio Pierozzi, better known as St Antoninus of Florence (1389–1459), was a Dominican friar and author who, after governing in succession several of the convents of his order, was promoted to the key position of Archbishop of Florence in 1446. An advisor to popes and statesmen, he was an influential figure in both secular and ecclesiastical politics as well as a highly respected author of works on canon law and church discipline, practical guidance for pastors and world history. As Archbishop of Florence, Antoninus earned the love and esteem of the people of the city by his humble style of life and his care for
the poor in times of plague and famine. He was canonised by Pope Adrian VI in 1523.

The specific work of St Antoninus to be cited here is entitled De Restitutionibus (Concerning Restitutions). This is not a theological treatise or a theoretical treatise on ethics but a highly practical catalogue of social offences, together with recommendations on the 'restitutions' that ought to be made following their commission. Although Antoninus writes from the perspective of an orthodox Christian prelate, most of the forms of wrongdoing he discusses in De Restitutionibus could be described as offences against sociability, his sternest reprimands being directed against acts or practices which disrupt community life by causing harm to individuals or subverting social relationships. (Note that he is not concerned in the present work with personal offences such as sexual misdemeanours, which he discusses in other works). Not all of the offences that Antoninus censures would be likely to figure in a modern list, but this has more to do with changes in the social and cultural environment than with changes in values: so, for example, offences to do with ecclesiastical matters, such as the simoniacal purchase of spiritual offices or the misuse of church funds, were of much more importance when the Church was a significant power in the land than they are today, when religion has lost its major role in the governance of Western society. Antoninus's somewhat puritanical distaste for dancing, sports and female adornment may, it is true, appear overblown by contemporary standards, but it doubtless reflected, besides the natural austerity of the friar, the indignation he felt at witnessing the Florentine rich make merry while their less-privileged compatriots laboured and went hungry.

One thing that strikes us immediately on encountering Antoninus's catalogue of social offences is its large size and scope. The Archbishop lists no fewer than one hundred offences, divided into twenty sets of five. Another thing that straightaway catches the eye is the strangeness (to us) of the mode of organisation of the list as a series of 'hands' and 'fingers', where every 'hand' stands for a category of offending and every 'finger' for some subordinate offence. No one compiling a similar list today would organise things this way, but a contrast in cultural idiom is not a difference of substance and Antoninus's divisions and sub-divisions of offences are plausible and philosophically interesting, despite the artificiality of their twenty-by-five arrangement.2 The allegorising of offences as
hands and fingers was probably selected by Antoninus in part for mnemonic purposes but it also displays that medieval penchant for florid metaphorical and allegorical thought which has largely disappeared from our modern disenchanted world.3

The 'hand-list' of offences, as I summarise it below, comes from Chap. 1 of De Restitutionibus; a second chapter (not summarised here) addresses the question: 'what, how much, to whom, and in what order restitutions should be made'. Antoninus's descriptions of each 'finger' are crisp and brief, and in a few cases their succinctness creates ambiguities which I have endeavoured to resolve in my translations from the Latin text. To keep the presentation within reasonable bounds, I have also omitted some of the sub-types and instances of offences mentioned by the author.4

THE USURIOUS HAND (manus feneratoria). Its fingers or modes:
1. Using goods pledged or deposited by another as working capital for increasing one's own profits.
2. Using goods or valuables deposited with one for the depositor's advantage for one's own advantage instead.
3. Selling goods at too high a price, or paying too little for goods one acquires.
4. Pretending to be using one's own capital in commercial transactions when the money is really borrowed.
5. Requiring a profitable return on the loans one makes to another.5

THE GRASPING HAND (manus raptoria). Its fingers or modes:
1. Usurping lordships, castles or cities for the sake of the emoluments forthcoming from them.
2. Imposing illegal dues and taxes.
3. Imposing new dues and taxes without the consent of the prince or legitimate authority.
4. Robbery by land or sea (i.e. piracy).
5. Taking Christians captive for the purposes of ransom or sale as slaves. (N.B. the same prohibition does not extend to non-Christian 'Saracens' ['Sarraceni']).

THE THIEVING HAND (manus furatoria). [N.B. Antoninus has particularly in mind here thefts carried out covertly or by stealth.] Its fingers or modes:
1. Treating as derelicts and detaining for one's own use found objects to which one has no title.
2. Illicitly retaining things that one knows to belong to some other person.
3. Making use of something that has been left with one as a deposit or pledge, without the knowledge or consent of the owner.
4. Covertly appropriating for one's own use things that belong to someone else to whom one owes special duties (e.g. a parent, husband, lord or academic master).
5. Silent appropriation by a person in religious orders of something for which a superior's permission is required but is unlikely to be granted.

THE WARLIKE HAND (manus bellatoria). Its fingers or modes:
1. Waging an unjust war.
2. Raising funds to fight a war which goes beyond the [legitimate] purposes of defence, and is aimed instead at winning booty from the enemy.
3. Inequitably distributing among those who have taken part in fighting a just war the goods [legitimately] taken in that war.
4. Seizing in the course of a just war the goods of the Church or of Christian pilgrims.
5. Prosecuting a just war in a tardy or negligent manner.

THE CORRUPTING HAND (manus damnicausatoria). Its fingers or modes:
1. Ordering others to commit wrongful acts or training 'pupils' to commit them.
2. Persuading or cajoling others to act wrongly when they would not otherwise be inclined to do so.
3. Enlisting partners or helpers when one is unable to commit a contemplated crime unaided.
4. Praising or speaking well of crime in order that others may be induced to commit it.
5. Receiving stolen goods or (in the case of an official) concealing them or turning a blind eye to the theft.

THE PARTICIPATING HAND (manus participatoria). Its fingers or modes:
1. Keeping for oneself gifts or offerings which ought to go to one's lord.
2. Enjoying the fruits of theft or usury (as, e.g., the family of a thief or usurer might do).
3. Providing a dowry for a bride out of ill-gotten wealth.
4. Purchasing goods that one knows to have been stolen or otherwise unjustly obtained.
5. Knowingly accepting as a bequest or legacy goods that one knows to have been unjustly acquired.

THE SACRILEGIOUS HAND (manus sacrilega). Its fingers or modes:
1. Stealing sacred church goods (e.g. chalices and sacred vessels, relics, crosses).
2. Stealing goods which, although not sacred in themselves, are the property or in the keeping of a church or consecrated place.
3. Demanding [regular] dues or taxes from a church that has been damaged by fire or some other calamity; more generally, demanding unreasonably high taxes from sacred places or clerics.
4. The diversion by clerics of church income to fund their own pleasures and amusements, or to give away to their families.
5. Taking church dues when one is not entitled to them or where one has acquired one's ecclesiastical position by simony.

THE UNJUST HAND (manus iniuste). (N.B. Antoninus is here concerned specifically with the acts of judges in criminal or civil cases). Its fingers or modes:
1. Issuing an unjust judgement, motivated by love or hate [i.e. without due impartiality].
2. Reaching a judgement without having sufficient knowledge of the facts of the case.
3. Reaching a judgement without having consulted the appropriate authorities and books.
4. Accepting a bribe for judging one way rather than another.
5. Issuing a judgement that is skewed by one's desire not to displease, or one's wish to favour, a party in a case.

THE INEQUITABLE HAND (manus iniudicio). Its fingers or modes:
1. Undertaking to defend a person one knows to be in the wrong, and hoping to make some profit out of the injured party.
2. Knowingly giving bad or fraudulent counsel so that a person with justice on his side will lose his case.
3. Bringing dishonest charges against a person in order to benefit from any fines or expenses that may be forthcoming from the case.
4. Denying a charge that has been justly brought against oneself, with the consequence that the accuser is forced to pay a penalty.
5. Bearing false witness, or accepting payment for giving one's testimony in a legal case.

THE FRAUDULENT HAND (manus fraudatoria). Its fingers or modes:
1. Selling one substance as another (e.g. low-quality wine as if it were high-quality).
2. Giving short measure to customers.
3. Selling defective goods as perfect.
4. Making unreasonably large profits on the sale of goods (particularly when the buyer does not know how little you paid for them in the first place, or when he can be forced to pay an unfairly high price by his urgent need of them).
5. Evading legitimate taxes and customs duties.

THE FALSIFICATORY HAND (manus falsificatoria). Its fingers or modes:
1. Using false weights and measures.
2. Coining false money.
3. Falsifying apostolic Bulls in order to obtain goods or personal advancement.
4. Falsifying wills and testaments.
5. Pretending that illegitimate children born through [a woman's] adultery are the husband's own legitimate offspring, particularly where this leads to the legitimate heir being deprived of his birthright.

THE TREACHEROUS HAND (manus proditoria). Its fingers or modes:
1. Depriving the legitimate lord of his castles, people or lands by treacherous action.
2. Betraying one's lord or one's friend into the hands of an enemy, in order that he may kill him or hold him for ransom.
3. Giving away to an enemy money or other goods belonging to one's lord or one's friend.
4. Aiding an enemy by betraying one's lord's secrets.
5. Covertly breaking a pact or agreement, while pretending to keep faith with it.

THE SIMONIACAL HAND (manus symoniaca). Its fingers or modes:
1. Trading benefices or other church offices for money or goods.
2. Selling religious items such as consecrated altars, baptismal water or sacramental oil.
3. Requiring payment for the performance of sacred rites and services.
4. Selling to another a religious office that one occupies.
5. Acting as a middleman in the buying and selling of benefices and religious offices.

THE IMPEDING HAND (manus impeditoria). Its fingers or modes:
1. Preventing someone from carrying out the duties of his legitimate office.
2. Interfering with the tilling of fields, or the reaping of crops or vineyards, belonging to another person.
3. Making laws or rules which have the effect of protecting debtors from being obliged to pay their debts to lenders.6
4. Issuing letters to secure debtors against the just demands of their creditors.
5. Impeding with unjust intent legitimate commercial transactions by concealing relevant documents or writings; alternatively, blocking access to ecclesiastical tribunals in which rights in disputed cases can be determined.

THE GAMBLING HAND (manus lusoria). Its fingers or modes:
1. Cheating by the use of false dice or cards.
2. Engaging in games of chance with a person's servant, wife or other family member in order to get one's hands on his goods.
3. Using forceful persuasion to induce an unwilling person to gamble, or to keep him at play when he wants to stop.
4. Gaming in places where it is forbidden to do so (e.g. in churches and sacred places).
5. Gaming in places in which gaming is legally permissible but in contravention of local custom.

THE HAND THAT MISUSES LOANS AND LEASES (manus locationis).7 Its fingers or modes:
1. Deceitfully loaning goods which are in some way defective, such as a lame horse.
2. Demanding more in rent for a house, field or other property than it is really worth.
3. Taking on the lease of a property at an unreasonably low rent, or being dilatory in paying what is due in money or goods to the owner.
4. Being sluggish or negligent in farming land that one has rented.
5. Misusing property that one has leased, allowing it to deteriorate, or wearing out farm animals by overwork.

THE DISHONOURABLE MONEY-MAKING HAND (manus turpiter lucratoria). Its fingers or modes:
1. Making money by prostitution or procuration.
2. Earning money by prohibited acts, e.g. manufacturing playing cards and dice, producing cosmetics, divination and fortune-telling, incantations.
3. Gaming at forbidden times (e.g. on religious feast days), or working to make money at those times.
4. Engaging in commercial activity in prohibited places such as churches; also, doing business with Saracens without a papal licence.
5. Devoting oneself to business with the sole aim of making money, rather than for the sake of satisfying people's needs.

THE SLANDEROUS HAND (manus detractoria). Its fingers or modes:
1. Accusing someone falsely of some crime or offence.
2. Passing judgement of guilt on a person whom one knows to be innocent, or of whose guilt one is uncertain.
3. Exaggerating a (correct) charge of wrongdoing made against some person, or failing to protest against an exaggerated accusation.
4. Revealing to others an accusation of wrongdoing, concealing the fact that one has received it in confidence.
5. Reporting a person's private or hidden faults.8

THE BODY-HARMING HAND (manus persone lesoria). Its fingers or modes:
1. Murder.
2. Mutilation.
3. Wounding.
4. Unjust imprisonment.
5. Unjustly exiling a person from his native place.

THE SOUL-HARMING HAND (manus animarum predatoria). Its fingers or modes:
1. Dissuading someone from entering the religious life [i.e. joining a religious order], or persuading a person who has already entered it to withdraw.
2. Cajoling a person to commit some offence by means of blandishments or deceptions.
3. Leading someone into an error of faith through one's conversation or preaching [sermo].
4. Allowing one's children (or one's flock, in the case of a religious pastor) to fall into bad ways, without attempting to check or discipline them.
5. Corrupting others by the example of one's own bad habits, e.g. through gaming, dancing, blaspheming, or by the use of female adornments.

Notes

1. Cabeza de Vaca's respect for our shared humanity was anticipated by the thirteenth-century Persian poet Saadi Shirazi, whose words are inscribed on a carpet presented by the people of Iran and displayed in the United Nations building in New York. Its English-language version runs:

Human beings are members of a whole,
In creation of one essence and soul.
If one member is afflicted with pain,
Other members uneasy will remain.
If you have no sympathy for human pain,
The name of human you cannot retain.
(Translation by M. Aryanpoor)

2. It is pleasant to reflect that a similar charge of artificiality can be levelled at a much later exercise in analytical categorisation, namely Kant's four-by-three division of his categories of the understanding in the Critique of Pure Reason.
3. The most exotic example of such 'allegorical classification' that I know of is to be found in Alain of Lille's work De Sex Aliis Cherubin [Of the Six Wings of the Cherubin] (circa 1200), in which the six wings stand for, respectively, confession, satisfaction, purity of body, purity of mind, love of one's neighbour, and love of God; each of these wings then has five 'feathers' to represent its more detailed aspects.
4. I have used for my translations the edition printed as the second half of the volume Defecerunt Scrutantes Scrutinio—Titulus De Restitutionibus, printed by Bartholomaeus Cremonensis in Venice in 1473. (This edition lacks foliation, pagination and signatures).
5. Antoninus's position on usury has sometimes been claimed to be more 'modern' than that of most other medieval writers on the subject, but, as Raymond de Roover has pointed out, his views are not always clear and consistent (Roover, 1967). As a resident of the great banking city of Florence, Antoninus saw how the practice of lending money for capital investment was transforming commercial activity, with potential benefits for all; money employed in this way was patently no mere barren means of exchange (as Church critics of usury generally portrayed it) but fruitful in generating new wealth. Nevertheless, the moral problem with lending money for the sake of profiting from the interest charged was that its usual motives were avarice or self-interest, rather than a charitable wish to serve another. Antoninus conceded that banks or individuals that were compelled to lend money or goods to the state were entitled to be paid some interest in return, and he also thought it fitting that those who borrowed money should show their appreciation to the lender by repaying it with a small addition as a free and unconstrained gift (Roover, 1967, pp. 31, 39). But in the last analysis, the Christian injunction to lay up one's treasure in
heaven rather than on earth was incompatible with any whole-hearted endorsement of the morality of lending money at interest.
6. Antoninus here seems to be concerned that laws promulgated with the laudable purpose of limiting usury need to be framed or enforced in such a way that they do not allow debtors to get away with the non-payment of debts to legitimate creditors.
7. Antoninus himself fails to find a simple adjective for this hand: the Latin is 'MANUS LOCATIONIS et conductionis vitiatoria'.
8. Here Antoninus presumably has in mind faults whose revelation serves no public interest and which may therefore charitably be allowed to remain private.

7 History: Morally Heavy or Morally Light?

Writers of histories shew, open, manifest and declare to the reader by example of old antiquity, what we should enquire, desire and follow, and also what we should eschew, avoid and utterly fly. (Lord Berners, 1908 [1523], p. xxviii)

What history is, what it is about, how it proceeds, and what it is for, are questions which to some extent different people would answer in different ways. (Collingwood, 1961 [1946], p. 7)

1 To Judge, or Not to Judge?

Consider the statement: 'The Thirty Years' War brought death and destruction to millions of people in Europe.' That is not a moral statement but a descriptive one, even if it refers to a matter of significant moral concern, the human loss and suffering caused by war. But the writer who follows it with 'This catastrophe was caused by the greed, insatiable ambition and religious intolerance of powerful European rulers' combines factual description with an unmistakable note of moral censure: it was the vicious psychological propensities of the rulers in question that led them
to cause the harm they did. I have argued so far in this book that the making of historical moral judgements is not ruled out in principle by the difficulties posed by the problems of access and relevance, and that we often (though by no means always) understand enough about the motivations of past agents and the cultural conditions in which they acted to venture moral judgements with some degree of confidence. This is possible because there are relevant standards we can appeal to—standards grounded on the nature of human beings and the conditions of their flourishing—which transcend the limitations implied by the ill-conceived 'relativity of distance' thesis, and which can be applied even in cultural contexts differing greatly from our own.

In the present chapter I want to move on to the further question of whether, granting that historical moral judgements are perfectly legitimate in principle, it comes within the proper province of historians to make them. History as it is nowadays conceived by most professional historians is not a moralising enterprise but an evidence-based social science. Contemporary historians working in university departments of history or specialised research institutes are unlikely to regard themselves, as many of their predecessors did, in the role of moral guides, teachers of virtue, or panegyrists of 'great' men and women of the past. Their business is to uncover, describe and explain what happened in the past objectively and truthfully, rigorously excluding any personal biases or predilections that could distort their interpretation of the facts.1 The typical academic historian of the twenty-first century sees herself as an analyst rather than as a judge; her concern is with causal, not moral, responsibility, and if ever she feels tempted to venture moral judgements, she may consider she ought to resist the inclination as 'subjective' and 'unscientific'.

Much the same professional modes of practice have also filtered down to the large number of amateur historians who engage in research in their spare time. Often amateur historians elect to investigate topics of specialised or local interest that may escape the attention of their professional counterparts. The micro-historical studies of local people, neighbourhoods or industries undertaken by individual enthusiasts and local historical societies are an invaluable complement to the macro-historical researches more typically undertaken by institution-based professionals
and add much to the stock of historical knowledge. Amateur historians mostly lack formal training in the discipline but many local historians and societies maintain close relationships with neighbouring universities and colleges, which allows amateurs and professionals the chance to exchange information and ideas to their mutual benefit. This fruitful partnership is only possible because professionals and amateurs subscribe to the same broad methodological ideals and principles governing their craft, including the paramount importance of primary sources, the accurate determination and checking of data, and the impartial reading and presentation of the evidence.

The conception of the model historian as a metaphorically white-coated technician who painstakingly analyses her test-tubes of data according to established models of research before publishing her findings in refereed academic journals has a lot to be said for it. Slapdash, unsystematic or selective 'history' produces a false or distorted view of the past which peddles fiction as fact; at best this misleads, at worst it becomes a source of 'alternative facts' open to exploitation by unscrupulous politicians and ideologues desirous of advancing their own pernicious agenda. History has unquestionably gained much by joining the ranks of the social sciences and adopting disciplinary methods that leave no place for groundless conjecture, facile assumption or wishful thinking. But the downside is that, in developing into a fully-fledged social science, history has lost something of its traditional humanistic status as a source of existential and practical wisdom: that is to say, its potential to teach us something about how to live by setting before our eyes moral exempla and cautionary tales. This is the conception of history familiar to the early Tudor writer Lord Berners, quoted at the head of this chapter, and it remained alive and vibrant into the age of Edward Gibbon. Indeed, the idea that history can teach us, as Berners put it, 'what we should enquire, desire and follow' or alternatively 'eschew, avoid and utterly fly', is echoed as late as 1946 in R.G. Collingwood's well-known dictum that 'history is "for" human self-knowledge' (1961 [1946], p. 7).

The crucial fact is that there is no sound reason to believe that history and the other social sciences inevitably imperil their scientific credentials if they 'trespass' into the territory of moral judgement. History can be at once a pukka social science and a humanistic discipline which advances
human self-knowledge by examining the fitness of the norms that people have selected to govern their affairs, and their success in living up to them. The self-denying ordinance that prohibits historians, as responsible social scientists, from venturing on any moral commentary is ill-conceived and unnecessary. To enter moral territory is not to trespass, but to deepen our engagement with people of the past, whom we only take seriously once we regard them as individual possessors of will who can be held morally responsible for their actions (rather than, say, as mere social units or items in statistical tables). Philosophers of a pluralist persuasion assert that there can be many ways of achieving a fulfilling human life: the historian reveals what some of those ways might be. History also shows us what forms of living are best avoided, as being generally detrimental to human happiness.

It is not always clear in what measure an historian intends to sound a moralistic note. This is because, when 'thick' terms are used, it can be difficult to be sure exactly what balance of descriptive and evaluative meaning is intended by the writer. 'Thick' terms, it may be recalled, are those nouns, adjectives, verbs or adverbs which carry a moral charge along with their descriptive meaning. To say that Jones is a coward, or that his behaviour is cowardly, is to identify Jones as someone who runs away from danger and is morally reprehensible for doing so. This example presents a clear-cut case, since one cannot coherently call a person a coward or cowardly while disclaiming any intention to censure him. Other terms which are unmistakably commendatory or the opposite are 'just', 'unjust', 'gracious', 'cruel', 'jealous', 'kindly', 'malicious', 'loyal', 'treacherous'. But some thick terms are more morally ambiguous than this: instances are 'ambitious', 'hard' (as in 'King Edward I of England was a hard ruler'), 'proud', 'patriotic', 'ascetic', 'fastidious', 'humble', 'modest' or 'chaste' (this last a favourite example of Williams). The moral charge conveyed by these and many other terms varies over time and from person to person, and determining exactly what a given speaker or writer intends by using them requires paying careful attention to the context of occurrence. An historian who observes that 'Louis XIV was a highly ambitious monarch' may intend a piece of purely psychological description, devoid of all moral overtones. But the sentence takes on a distinctively moral edge if
the writer has already made clear how fiercely she abhors the 'vaulting ambition' of kings.

In many cases it is not very difficult to determine whether an historian is expressing a moral attitude or not and, if so, what, and how strongly intended, that attitude is. There is an obvious difference between a writer who reports the bare fact that Marcus Junius Brutus was one of Julius Caesar's assassins and another who asserts that Brutus treacherously and impiously murdered his former benefactor. Some historical writings are heavy with expressions of moral sentiment; others sound a moral note so slight as to be barely audible. Others again—and not just those of recent authors imbued with ideals of social-scientific academic purity—eschew moral comment altogether. For an example of such 'morally light' history, consider the following extract from an early version of the Anglo-Saxon Chronicle, which is here recording some early incursions into Britain by Saxon raiders from the Continent (the numbers refer to dates):

495 Two chiefs, Cerdic and his son Cynric, came to Britain with five ships in the place called Cerdices ora, and fought with the Britons the same day.
501 Port and his two sons Bieda and Maegla came to Britain with two ships in the place called Portes mutha, and killed a young British man, a very noble man.
508 Cerdic and Cynric killed a British king called Natanleod, and five thousand men with him. That land was afterwards called Natan leaga as far as Cerdices ford.
514 The West Saxons, Stuf and Wihtgar, came to Britain with three ships in the place called Cerdices ora, and fought with the Britons and drove them into flight. (Quoted in Stenton, 1971, pp. 20–21)

This passage describes some excessively grisly events but voices neither triumph, indignation, disgust nor any other moral attitude; it presents the bare facts in a manner so dry it might almost be called deadpan. (Note that in describing the young man who is killed as ‘noble’, the writer is referring to his social standing rather than his moral character). The moral colourlessness of the extract makes clear that the author’s (or compiler’s) purpose was simply to record events to preserve them in memory, not to adjudicate on their rightness or wrongness. This is history writing
at its sparest, unadorned with descriptive detail, personal opinion or any other indication of moral attitude. Contrast such annalistic writing with the almost contemporary work of the Byzantine historian Procopius, who pulls no punches in describing the character of the Emperor Justinian (reigned 527–65):

Well, then, this emperor was dissembling, crafty, hypocritical, secretive by temperament, two-faced; a clever fellow with a marvellous ability to conceal his real opinion, and able to shed tears, not from any joy or sorrow, but employing them artfully when required in accordance with the immediate need, lying all the time; … A treacherous friend and an inexorable enemy, he was passionately devoted to murder and plunder; quarrelsome and subversive in the extreme; easily led into evil ways but refusing every suggestion that he should follow the right path; quick to devise vile schemes and to carry them out; and with an instinctive aversion to the mere mention of good (Procopius, 1981, p. 80).

More in the same vein follows before Procopius moves on to provide an even more devastating pen-portrait of Justinian’s wife Theodora, a former actress and courtesan who by devious and indecent methods (or so Procopius claims) had wormed her way into the imperial favour. Aware that his character assassination requires some justification in case it should be assigned to malice, Procopius primly suggests at the start of his Anecdota (better known to English readers as The Secret History) that his purpose in writing is to warn future monarchs that if they imitate ‘the deeds of blackest dye’ committed by Justinian and Theodora, ‘the penalty of their misdeeds is almost certain to overtake them’. Future rulers should take note of the political and social consequences that flowed from their bad government: if they do, then they will be themselves ‘less ready to transgress’ (1981, p. 38). Commentators have disagreed over whether Procopius was moved by genuine moral ardour or whether, for reasons not now known, he had a personal axe to grind against the emperor and empress. John Julius Norwich dismisses him as ‘a sanctimonious old hypocrite’ and suggests
that many of his charges should be taken with a pinch of salt (Norwich, 1988, p. 192). G.A. Williamson, more kindly, in his introduction to the Penguin edition of The Secret History, remarks that ‘Procopius was unquestionably on the side of right, and the things which are disgusting to us were equally disgusting to him’ (Procopius, 1981, p. 31). Gibbon was ready to give general credence to Procopius’s account of persons and events, while questioning some of his more extreme or under-evidenced charges. Whatever we think of it, The Secret History is a prime example of an historical work which is, or at least purports to be, moralistic. The quoted extracts from the Anglo-Saxon Chronicle and The Secret History represent opposite extremes of morally-light and morally-heavy historical writing. But there are other methods of putting across a moral point of view besides that of explicit commentary. An historian may choose to present her data selectively, drawing attention to certain praiseworthy or blameworthy persons or acts while passing over others in silence. The problem with such selective history is that, even where it tells no actual lies, it can be misleadingly economical with the truth. Even the passage quoted from the Anglo-Saxon Chronicle may inadvertently present a slanted view of early Saxon England by its focus on strictly military events. One classic instance of selective reporting is John Foxe’s Actes and Monuments, more familiarly known as the Book of Martyrs (first edition, 1563), which relates harrowing tales of the sufferings of English Protestants during the brutal reign of the Catholic Tudor Queen Mary I (1553–58). Foxe brings out vividly the viciousness of the Catholic reaction, but there is no condemnation in his 1000-plus pages of similar atrocities inflicted on Catholics under Protestant regimes in England and elsewhere. Then a few centuries later, imperialist historians praised the merits of British empire-builders and the benefits of ‘civilisation’ that soldiers and merchants, missionaries and engineers, policemen and bureaucrats brought to the allegedly grateful ‘natives’ of many countries, while remaining discreetly silent about the profit-taking and exploitation, the curtailments of liberty, the racial discrimination and the contempt for local cultures that were the inevitable concomitants of empire.2

Where the readership is likely to share the moral opinions of the author, there may seem at first to be no great need for the historian to labour some moral point that she wishes to put across. Why waste words when writer and reader are already agreed on what they should think? But the evident danger now is that both take too much for granted, cosily begging the crucial ethical questions and neglecting alternative points of view. For Winston Churchill, for instance, there could be no serious doubt that the British Empire, notwithstanding its occasional blunders, conferred inestimable blessings on the world, and most early British readers of his voluminous historical writings were more than ready to agree with him. Hence Churchill’s works, from The River War (1899) to the History of the English-Speaking Peoples (1956–58), while still readable for their inimitable prose style and narrative verve, exude a complacency, even smugness, about the benefits of British imperial rule that now looks badly outdated. It is arguable that the qualities which made Churchill a great political leader prevented him from being a great historian; the immovable opinions he held on the major political and ethical questions of the day facilitated his dominance as a politician and war leader but left him little disposed to question received ideas about government and empire and their underpinning values. Of course, Churchill was in no way exceptional amongst historical writers for taking his moral values for granted: this is an occupational hazard for all historians, since all are products of their culture, tied by visible and invisible threads to a specific system of ideas, beliefs, values, hopes and aspirations. Yet historians, with their professional awareness of the changeableness of things, are better placed than most people to avoid mistaking the contingent and time-bound aspects of their own culture for invariable norms. And given that they have, or should have, a greater capacity to get things right, they are also more blameworthy when they fail.3 Although historians who allow their moral assumptions to pass below their own radar run a risk of making historical moral judgements that are poorly considered or tainted by passing cultural fads and fashions, not every moral opinion that an historian might hold needs to be justified by
explicit supportive argument. An historian writing about the Nazi Holocaust or the Turkish massacre of ethnic Armenians in 1915 has no need to justify his belief in the wrongness of genocide as a prelude to condemning those atrocities. If the writer does not know that and why massacring Jews or Armenians is wrong, then his moral perspective on the world is, at the very best, alarmingly incomplete. But since the same goes also for the reader, the argument can and should be taken as read. Some moral propositions can be taken for granted not because they are unamenable to defence, but because that defence is already, or ought to be, a common property.4 Occasionally, moreover, the most effective means of conveying a moral position is by understatement. In the 850 pages of The Holocaust: The Jewish Tragedy, Martin Gilbert is very sparing of explicit moral comment; this he doubtless considers quite superfluous when the terrible facts speak so eloquently for themselves (Gilbert, 1987). Any reader possessed of an ounce of human sympathy must be appalled by the relentless piling-up of the stark details of death, cruelty and ceaseless persecution. As Piers Brendon remarks in a blurb written for the paperback edition, ‘No cloacal imagery, no savage indignation, and no elaborate interpretation will ever capture the nauseating realities of the Holocaust more exactly than this masterpiece of the chronicler’s craft.’ Brendon’s use of the noun ‘chronicler’ here is well-taken: Gilbert’s book, though far more lavish in its detail, is stylistically more akin to the Anglo-Saxon Chronicle than to the Secret History. Sometimes less is more. Moral commentary need not be multiplied beyond necessity.

2 Defining the Historian’s Role(s): A Short History of History

Time was when virtually all historians were moralists. To the classical historians of Greece and Rome, historical narrative shorn of explicit or implicit moral inflection would have seemed (if the anachronism be allowed) like Hamlet without the prince. The Greeks looked to history to
supply examples of virtue and vice that would inspire or warn, and celebrating great men and their deeds, particularly if they were Greeks, was considered both a pious and a practical act which honoured the dead at the same time as it offered models to the living. Herodotus wrote his Histories in order that ‘the actions of men may not be effaced by time, nor the great and wondrous deeds displayed both by Greeks and barbarians deprived of renown’ (1992, p. 3). According to Oswyn Murray, Thucydides intended his history of the Peloponnesian War to illuminate the real-world consequences of Athenian imperialist theory (which was essentially the view that ‘might is right’) and ‘especially the resulting problems of morality’ (Murray, 1988, p. 190). The Romans likewise valued history for its concrete representations of virtue and vice, which they saw as a complement to the more abstract ethical reflections of philosophers. Tacitus’s Histories and Annals traced the causes of Rome’s social and political problems at the start of the second century to the decline of republican virtue under the first emperors. The same author’s Germania was an historico/ethnographical study designed to exhibit the ‘moral contrasts … between the decadence of Rome and the crude vigour of the teeming, and potentially threatening, peoples beyond the Rhine’ (Grant, 1974, p. 8). Tacitus believed that a Rome that ignored the warnings from history was sleepwalking towards eventual disaster. (In this he proved quite right). In Michael Grant’s assessment, ‘Moral purpose is never absent from Tacitus’ mind. The sequence of events on which he chose to focus provoked the sternest moral reflections’ (Grant, 1974, p. 16). The idea that history should be a useful source of instruction maintained its hold in the Renaissance—unsurprisingly, given that the age drew so much of its cultural inspiration from classical antiquity as well as from the biblical tradition of historical writing. A little earlier, in the mid-fourteenth century, Boccaccio’s colourful thumb-nail sketches of the catastrophes occurring to famous people of the past (De casibus virorum illustrium) were meant to point the message that pride comes before a fall. Two centuries later, Machiavelli wrote his Florentine History with the clearly-avowed aim of showing rulers how to govern more effectively—a purpose which, if not moral in the strictest sense, was at any rate a highly practical one. In Machiavelli’s view, ‘if any reading be profitable for men that governe in Common-weales, it is that which sheweth the occasions
of hate and faction: to the end that being warned by harme of others, they may become wise, and continue themselves united’ (Machiavelli, 1595: sig. A3r). Yet history, thought Machiavelli, should delight as well as instruct, and this it did best when it provided abundance of descriptive detail to keep the reader’s interest alive. Later, David Hume, in his essay ‘Of the study of history’, likewise considered that history ought to be amusing as well as instructive. For Hume, the reading of history was essential for every intelligent person because no other subject so effectively expanded our understanding of humanity.5 ‘A man acquainted with history,’ he pithily observed, ‘may, in some respects, be said to have lived from the beginning of the world’ (Hume, 1903 [1742], p. 561). Like many previous authors, Hume was keen to stress the value of the pragmatic and moral lessons that could be drawn from study of the past. There is an unmistakable echo of Tacitus in his remark that history helps us to recognise the virtues which contributed to the greatness of former empires and the vices which brought about their downfall (1903 [1742], p. 560).6 Something like Hume’s conception of history (which informed his own best-selling History of Great Britain) is exemplified in Edward Gibbon’s Decline and Fall of the Roman Empire (published 1776–88). Gibbon read voluminously in all the printed sources then available and made every effort to get his facts right. But Decline and Fall was also a thoroughly moralistic work, conveying the author’s moral evaluations—which were most often disapproving ones—on virtually every page. No academic historian writing about ancient Rome today is likely to be as morally censorious as Gibbon, even if she privately shares his moral distaste for many of the less than savoury characters that appear in his story. The chief reason for this is the evolution over the last decades of the eighteenth century and the first decades of the nineteenth of the modern idea of history as an objective social science which is committed to eschewing anything that hints at authorial subjectivity—including the expression of moral opinion. Originating in the German universities in the 1780s, the new conception of history (Geschichtswissenschaft) spread throughout most of Europe within a few decades and still remains, with some modifications, the dominant model of how things should be done.

Exactly what sort of a social science history is, or should be, has continued to be debated up to the present day, although it has been generally accepted, following the work of Leopold Von Ranke (1795–1886), that the location and analysis of primary sources is fundamental to the historian’s business. A significant focus of controversy has been the extent to which history can be considered to be a purely empirical study, in which the facts may be allowed to speak for themselves, as Von Ranke for one believed they could. For some later historiographers, such as E.H. Carr, this view smacks of the naïve, since it underestimates the degree to which ‘facts’ are identified as significant only via the theoretical frameworks which historians bring to their interpretation (Carr, 1986 [1961]). While we need not pursue this particular debate here, it may be noted that the gap between a social science like history and the natural sciences may be narrower than some historiographers have imagined it to be, given that the natural sciences are scarcely less dependent than the social on the application of theoretical paradigms to the selection and analysis of their raw data (see Kuhn, 1962). But the point of most present relevance is that history’s location among the social sciences has quite needlessly encouraged many historians to depart from their predecessors’ tendency to ‘moralise’ about their subject-matter, in the belief that this is incompatible with scientific standards of objectivity, impartiality and ethical neutrality. This self-denying ordinance—to repeat—has really no solid ground to stand on. For the historian can be fully as objective, impartial, judicious and evidence-guided in her moral judgements as she is in her judgements of fact. She can praise or condemn acts that were done in the past by reference to natural facts about human beings and the conditions under which they flourish or fail to do so. She can painstakingly uncover and describe objectively the facts about the transatlantic slave-trade or the Holocaust and subject those facts to moral scrutiny. The two tasks are not mutually exclusive and may, indeed, be considered to be complementary exercises. Moral-lightness is not an obligation of the responsible historian. The absence of any principled reason why historians should not pass moral judgements does not, however, mean that they should rush to judgement at the drop of a hat. Un- or ill-considered moral appraisals are unfair to the subjects of judgement and of no service to the reader.
Historians who target their work at a popular rather than a specialist readership may occasionally feel tempted to spice up their narratives with expressions of moral admiration or disgust: titillating details of murder, mayhem and sexual misconduct, packaged with suitable expressions of moral horror and outrage, sell books. Their more conscientious ‘academic’ confrères may not unreasonably turn up their noses at such efforts. But while simulated or exaggerated moral indignation is unworthy of the historian’s craft, honest moral appraisal is not, where the charges can be adequately justified by the facts. Problematic for a different reason are historical writings in which the author has, or may be suspected of having, a moral or other axe to grind; in these cases, the objectivity and the justice of the treatment may come into question even if the sincerity of the writer does not. Daniel Jonah Goldhagen’s controversial book Hitler’s Willing Executioners: Ordinary Germans and the Holocaust is a polemical J’accuse aimed at the mass of ‘ordinary’ German citizens of the Nazi Reich who, in the author’s view, were criminally responsible for bringing Hitler and his associates to power and subsequently keeping them there (Goldhagen, 1996). Goldhagen argues that the German population was far more complicit in the atrocities carried out by the regime than had previously been supposed; on his revisionary view, huge numbers of ‘ordinary’ Germans were aware of the genocidal campaign against the Jews and either approved of it or were happy to look the other way. Goldhagen’s book might be read simply as an attempt to correct an historical misunderstanding about the extent of the average German’s support for the Holocaust; but it is plainly a great deal more than this. Every page breathes anger against the German people who are characterised en masse as ‘Hitler’s willing executioners’. Goldhagen’s disturbing charges are supported by meticulous investigation of an impressive range of published and unpublished documentary sources, as befits any serious piece of historical research. Yet the level of polemic goes well beyond the occasional expression of moral opinion encountered in most contemporary historical writings. If the arguments I have given so far are correct, there need be nothing in principle wrong with this. Goldhagen joins a long line of historians who are justly critical of the Nazis, the difference in his case having to do not so much with the scale as with the scope of his condemnations. If there is a problem about
Ordinary Germans, it is not that the book is written with moral passion but that the author’s fervour to see justice done may have caused him (as some critics argue it has) to overstate his case for the average German’s complicity in Nazi crimes.7 Whatever the truth about this, Ordinary Germans runs a risk common to any historical text that takes a decided moral stance: the risk, namely, that the author views the facts that support his position as possessing a greater salience than those that do not. Even the most conscientious writer is liable to draw skewed conclusions from skewed premises. The existence of this occupational risk should not discourage historians from voicing firm moral opinions where they believe these are justified by the evidence. But they need to set out the reasons for their judgements with clarity and candour, so that others may check the accuracy of their facts and the fairness of their strictures.

3 History and Human Self-Knowledge

Whether we prefer our history to be ‘morally light’ or ‘morally heavy’ will depend on what we conceive the purpose of historical writing to be, and on this there can be more than one opinion. As Collingwood remarks in the epigraph to this chapter, what history is and how it should be pursued are questions to which different people will give different answers. While some of those answers may be superior to others (history should certainly not be conflated with propagandising or with myth-making), there may be more than one ‘right’ answer. Collingwood, as we have seen, favoured a strikingly broad view of history’s role when he declared that ‘history’ ‘is “for” human self-knowledge’ (1961 [1946], p. 10). On this conception, history ‘teaches us what man has done and thus what man is’; historians record and interpret res gestae [the things that are done] in order to throw light on human mind and character; through history we learn more about ourselves both as individual actors and as members of the species (1961 [1946], pp. 7–9). Collingwood concedes that this is a notably philosophical view of history and that not all historians may be used to thinking about their discipline in quite so high-flown terms. But he thinks there would be little disagreement that history as practised today has four leading characteristics: (a) it is ‘scientific, or begins by asking questions’; (b)
it is humanistic, being concerned with the actions of men (and not, say, gods or spirits); (c) it is rational, basing its answers on evidence; (d) it is self-revelatory, throwing light on aspects of human nature (1961 [1946], p. 18). For Collingwood, the task of historians is to enhance human self-knowledge by showing how people have behaved in the past; and because the historian is obviously ‘not an eye-witness of the facts he desires to know’, he must instead ‘re-enact the past in his own mind’ by applying all the powers of his imagination to the evidence he has at his disposal (1961 [1946], p. 282). Given the importance of moral action and decision in every human life, Collingwood must accept that the work of re-enaction involves reconstructing the moral experience of people who lived in past times. Admittedly, it does not follow from this that the historian should engage in the moral appraisal of past behaviour, nor does Collingwood say that he ought. But neither does he say that he ought not. Collingwood’s silence on the question of whether moral judgement is (ever) the business of historians is maybe surprising in a writer who combined the academic professions of historian and philosopher. Yet it is possible to discover some warrant for the view that historians may be moral judges within Collingwood’s spacious conception of history. When historians engage in the imaginative reconstruction of the past in order to discover more about what human beings are like, they need to envisage themselves in real confrontation with people of the past: and real confrontations with people are inevitably normatively-charged confrontations. One cannot meet another human being without being forced to ask oneself the question: How should I behave towards this person? Even the unknown stranger one passes in the street enters a momentary moral relationship with oneself; one recognises (or ought to) that he is not an object to be elbowed roughly aside or carelessly walked over; and one expects to be treated with a reciprocal respect. And while one obviously cannot encounter people who died centuries ago in this same literal manner, imaginative ‘historical’ encounters resemble ‘real’ encounters in so far as one recognises Brutus or Louis XIV or Hitler as human beings towards whom certain attitudes would have been appropriate and from whom certain attitudes could have been expected (‘expected’ in the normative sense, that is, if not in the descriptive). Great Caesar, as Hamlet
remarks, may now be stopping a hole to keep a draught away, but when we employ our historical imagination, it is the living man we think about, and as a living man he invites our normative engagement. Needless to say, not all history writing raises, or needs to raise, ethical issues. An academic article on wool production figures in late-medieval East Anglia or changing population patterns in eighteenth-century Saxony may lack moral comment and be none the worse for that if it is intended purely as a study of economic trends (though it might leave something to be said about the exploitation of the peasantry). The mistake would be to require that all respectable writing about the past should be similarly moral-light. Because history means different things to different people, it would be foolish to try to impose a one-size-fits-all model of how history should be ‘done’. That would not only constrict historical vision but waste the talents of any would-be historian who refused to be constrained within that interpretive straitjacket. A more generous understanding of the discipline finds room for a variety of approaches—for the micro-analytical study, laden with facts, figures and statistical tables, as well as for emotionally-engaged, morally heavy texts such as Goldhagen’s— without insisting that one of these is superior to the others. I have argued that historians should feel entitled to pass historical moral judgements without worrying that in doing so they are overstepping their professional boundaries. An important further question is whether historians not only may but, on some occasions or in certain contexts, ought to voice moral judgements on past agents or events. This suggestion may make some historians uneasy. They may complain that the methodological norms governing their craft are suited to the establishment and analysis of facts, not to the production of moral judgements, and that they have no special expertise as historians to make such judgements. To this it may be replied that, although it is true that the methods of historical research are not modes of moral analysis, the understanding of the past which their use provides places the historian in a privileged position to issue moral judgements which are well-evidenced and objective, and which make due allowance for cultural circumstances. Historians might have no special ethical expertise qua historians, but they are experts in the subject-matter which calls for moral judgement. Furthermore, if Collingwood is right that history is a distinctive source of
insights into ‘what human beings are like’, then the historian may notice morally-relevant features of agents and their acts that others are more likely to miss. It may still be objected that, whatever advantages the historian’s special knowledge and training may afford her in regard to the making of sound moral judgements, it remains to be established that ethical evaluation is the proper business of historians. There may be no pressing practical need for an historian to voice an explicit moral position on long-past events which have little or no present resonance—though there is also no good reason why she should not pass moral judgement on them if she wishes. The writer who could relate the cruelties inflicted by the distant Assyrian Empire or the ravages of Tamerlane in a wholly dispassionate manner might appear to be a little deficient in feeling. But for an historian to treat the Holocaust, or the eighteenth-century slave-trade, or the nineteenth-century ‘scramble for Africa’ (or even some rather more distant events which still raise hackles, such as the Crusades) in a morally light manner, as purely a matter for ‘research’, would be rather more shocking. These events, which continue to reverberate in the contemporary world, demand a response that combines the ethical and the factual. To estimate the number of victims murdered at Auschwitz with the same detachment as one might calculate the number of sheep fleeces exported from East Anglia to Flanders in the fifteenth century would be to fall short, I suggest, as an historian. To say this is not to insist that the scrupulous historian must always spell out her moral views explicitly; sometimes she may leave them to be read between the lines or she may invite her readers to frame their own responses. There are besides, as we have seen, instances in which moral commentary is superfluous, since the awful facts speak loudly enough to be heard by all but the morally tone-deaf. To intrude a moral judgement can be a species of indecency where what is stated should be evident. Yet such is the complexity of human affairs that ethical judgements are not always easy and straightforward. How, in the final analysis, should we sum up a complex character like Edward Colston, slave-trader and Christian philanthropist, whom we met at the beginning of this book? What did Colston think he was up to when he used his profits from the slave-trade to provide charitable benefits for the labouring poor of Bristol?
An historian who rejects such a question as too hard to answer forfeits the opportunity to engage with vital issues about human character and action which are of interest well beyond the confines of her own discipline. But she also falls short as an historian if, as Collingwood asserts, a major objective of the study of history is to show what human beings are like by revealing what human beings have done. For what, in fact, did Colston do, where this question is understood as demanding, beyond a specification of the ‘raw’ acts, some clarification of his thoughts and intentions? Acts obtain their meaning from the context of their occurrence, including the normative conventions current when they were performed, and they cannot be understood except by reference to that background. The historian needs to make moral sense of Colston if she is to make any sense of him at all. If she accepts Williams’s thesis of the relativity of distance, she effectively throws in the towel, confessing that the ethical world that Colston inhabited is too remote from our own for our moral conceptions to allow us to understand him. That thesis, I have argued, should be rejected on the basis of a naturalistic theory of the conditions required for human flourishing, many of which should have been just as plain to Colston and his contemporaries as they are to us in our own time. To say that Colston’s moral world is not a closed book to the present-day historian is, admittedly, not quite the same as to say that the historian is entitled to judge Colston morally, and the historian may still hesitate to pass judgement on the basis that aspects of Colston’s world-view were sufficiently unlike our own to provide him with some moral wriggle-room. But such caution can be overdone. There is quite enough in common between the early-eighteenth-century moral outlook and our present ethical conceptions to legitimise retrospective criticism of what was done or thought then, so long as due attention is paid to any extant factors that might have mitigated blame (e.g. the false, but seemingly sincere, belief of many white people that black people lacked sensitivities of the same acuity as whites possessed). That said, the salience of such mitigating factors should not be exaggerated, much moral error and blindness being due, as we saw previously, to culpable self-deception or affected ignorance.8 The historian who is intimately acquainted with her period is uniquely well placed to venture moral appraisals which are founded on the facts, knowledgeable about prevailing cultural conditions and free from
anachronism (it would be very foolish to condemn Attila the Hun for failing to recognise the existence of human rights). Heavy-handed or dogmatic moralising should naturally always be avoided, by historians as by anyone else. And the historian should be ready to defend her moral opinions whenever she is challenged. Yet this does not mean that she should be sparing or mealy-mouthed in her moral judgements; it is perfectly possible to be robust without being dogmatic. Forceful moral comment may be called for especially where it is desirable to correct previous misrepresentations—and instances where such revisionary treatment is needed abound. As an example, many late nineteenth- or early-twentieth century accounts of the glories of the British and French Empires, the ‘civilising’ efforts of generations of European colonists in the ‘savage’ parts of the world, and the superiority of the achievements of the white race, reflected ways of thinking which, although outmoded and objectionable, have still not entirely lost their grip. Victorian and Edwardian historians writing about the Indian Sepoy Rebellion (aka the ‘Indian Mutiny’) of 1857–58 were usually far readier to record the atrocities—which were undoubtedly numerous and terrible—committed by the rebels than they were the fearsome reprisals carried out by the British authorities in the aftermath of the rising. Looking back to the Rebellion from the end of the Victorian era, the author J.W. Sherer was typical of many in lavishing praise on the gallant British soldiers who had fought against tremendous odds and condemning the Indian rebels as treacherous, unreasonable and barbaric. While Sherer was happy to praise the loyal Indian Princes who refused to join the mutiny, he considered its happening at all wholly unsurprising in view of the fact that ‘India is very strong in a peculiarly dangerous and abominable class of scoundrel—bravo, gambler, black-mailer, and thief ’ (Sherer ed. n.d., p. 338). He conceded that although there may have been some mistakes committed on the British side that helped to precipitate the mutiny, much the greatest share of the blame for what followed was ascribable to the unreasonableness of the Indians. Such accounts as Sherer’s are now generally regarded as presenting an unbalanced and biased view of the events of 1857–58, and historians, including British authors, who have written since the end of the British Raj in 1947 have been far less selective in the facts they record and considerably more
scrupulous in their moral judgements. Thus the distinguished military historian John Harris, in an admirably balanced account of the rebellion, after detailing the horrors inflicted by certain of the mutinous sepoys on British women and children in the city of Cawnpore, pays equal attention to the disproportionate and cruel punishments afterwards visited on the surviving sepoys by order of the vindictive Brigadier-General James George Neill (Harris [1973] 2001, p. 97). In similar corrective mode, Lawrence James, in his authoritative history of the British Raj, records the brutal suppression of the rising and the almost genocidal hatred of Indians that was generated back in Britain by the reports (often exaggerated) of the crimes of the sepoys (James, 1997).9 Truth, it is often said, will out in the end. I suspect that very few historians would subscribe to this very doubtful bit of wisdom. If truth is to out at all, it usually requires a lot of help. This is as much the case with moral truth as it is with its factual counterpart. Whether we consider the determination of the truth, of either kind, as a peculiar professional responsibility of historians or one that devolves on them from a more universal duty to tell the truth, what matters is that the knowledge of what men are which Collingwood claims to flow from knowing what men have done will necessarily be incomplete unless we also ask whether men have acted well or badly.

Notes

1. Excluding personal bias can be hard to achieve, and historians are only human. Further, the conscientious historian may believe that she has identified and discounted for her prejudices even when she has not. Peer review may sometimes reveal distortions which were invisible to the writer, but this is rather less likely where reviewer and reviewed are both prisoners of the same illusions.
2. It is also possible to damn with faint praise, as when a writer of imperialist history concedes that aspects of indigenous cultures may have had their merits, but none that could compare with the enlightened ‘British values’ which displaced them.
3. In the more than half century since Churchill’s death in 1965, attitudes towards the British Empire among historians have passed from praise to disapproval and most recently to a kind of bemused wonder that such a ramshackle and internally conflicted institution survived for as long as it did. Jeremy Paxman sums up the history of the Empire in a few pithy sentences: ‘The British Empire had begun with a series of pounces. Then it marched. Next it swaggered. Finally, after wandering aimlessly for a while, it slunk away’ (Paxman, 2012, p. 285).
4. If someone is unconvinced that genocide is always wrong, or wrong in some specific case, and demands an argument before he will admit to this, then he barely seems to share the same moral universe of discourse as ourselves. But if a person were to raise the question, then we might respond in the language that Kant employs in the Groundwork of the Metaphysic of Morals, and seek to convince him that all human beings, as possessors of rational understanding and will, deserve to be treated as ends in themselves (that is, as bearers of intrinsic worth). Yet even this is not guaranteed to persuade him if it is precisely the universalism of that species of argument which evokes his scepticism in the first place—as may well have been the case with some of the original perpetrators of genocide. What, then, might be left to do? Perhaps the most promising strategy would be to bring him up close and personal with members of the race or group whose value he questions, so that he might come to appreciate how similar in fundamental respects their life is to his own. Acquaintance sometimes proves itself a more effective educator than argument. But even this is not quite certain to succeed. Many witnesses of the conflicts that accompanied the break-up of former Yugoslavia in the 1990s were struck by how quickly members of different ethnic or religious groups who had lived side by side for centuries came to look upon one another as strangers and enemies. Sad to say, identity politics in its more strident manifestations appears to be more than a match for the human sympathies generated by neighbourliness and familiarity.
5. If Hume had any thoughts of excepting philosophy here, he does not mention them.
6. In a characteristically sanguine comment, Hume added that historians have been ‘almost without exception, the true friends of virtue’ (Hume, 1903 [1742], p. 562). It would be nice to be able to believe this but it is probably a generalisation too far. There has been far too much slanted or propagandist historical writing to make it easy to accept that all writing of history is done from morally pure motives. Yet it could with some justice be said that an ‘historian’ who is seriously indifferent to the truth is not really a historian at all but rather a purveyor of fictions which masquerade as facts. So Hume may be right in the sense that the genuine historian manifests friendship towards virtue, in so far as she cares about the truth.
7. I shall not here venture to judge whether Goldhagen’s moral evaluation of the ‘ordinary Germans’ is fair or not. (He has even been accused of holding racist views of the German people by some of his critics). A rather more qualified evaluation of ‘ordinary’ Germans’ attitudes to Nazi crimes has been defended by Christopher Browning (1998). For a discussion of the Goldhagen-Browning debate and some further references, see Zangwill (2003).
8. The historian who studies peoples and cultures more distant from our own, such as the Aztecs or the Huns, may be more cautious about passing moral judgements on individuals on the ground that ‘they ought to have known better’. But she may still judge the destructive activities these peoples practised as ethically bad, given their harmfulness to other human beings.
9. In Charles Dickens’s case, it was entirely genocidal. See above, Prelude, fn.2.

8 The Morality of Memory

1 The Need to Remember

In The Ethics of Memory, Avishai Margalit raises the questions: ‘Are there episodes that we ought to remember? Are there episodes that we ought to forget?’ (Margalit, 2002, p. 48). Margalit explains that by ‘we’ in this context he means ‘the collective or communal we’, the members of some particular society (‘the society at hand’, as he elastically but handily defines it). ‘Natural communities of memory,’ he proposes, ‘are families, clans, tribes, religious communities, and nations’ (2002, p. 67)—though they can also be as broad as humanity in general (2002, pp. 74–83). To be a participant in what Margalit refers to as a ‘shared memory’, one does not need to have experienced some specific episode personally, or even have been alive when it happened; it suffices that one is a member of the community to which the episode is significant. On this basis, certain ‘moral nightmares’, such as the Holocaust, should be amongst the shared memories of every human being, given their massive pertinence to our species as a whole. If the events of the Holocaust would be extremely difficult to forget, Margalit notes that not all memories are like this; many
events of a less overwhelming or a more localised character may need to be deliberately recalled and recorded if they are not to pass into oblivion (2002, pp. 55–58). But events need not always be tragic or dramatic to be worthy of remembrance: the quotidian experience of any society is an experiment in living together which affords its own useful lessons for posterity. ‘Shared memory’, according to Margalit, is ‘the cement of community’ (2002, p. 67). As such, one might suppose that it ought at least to be accurate, comprehensive and candid. Surprisingly, Margalit is not fully committed to this proposition, allowing that there is one distinctive category of ‘shared memory’ which may not need to be strictly veridical in content. This is the category of what he calls ‘traditional shared memory’, of which he gives the example of the Jewish people’s ‘shared memory’ of the Exodus from Egypt, an event which may or may not actually have happened (2002, p. 60). But it may reasonably be objected that including events that may not have occurred, or even those which may have occurred in some form or other but whose details are now elusive, within the category of ‘memories’ involves stretching the notion of memory rather far. Pace Margalit, however valuable one considers such traditional stories for their aesthetic or their moral qualities, their capacity to inspire, warn or reassure, it is misleading to accord them the status of memories so long as their veracity remains unsettled. By standard convention, ‘I remember that p’ implies that p is true. (What are sometimes referred to as ‘false memories’ are not really memories at all, any more than a fake Rembrandt is really a Rembrandt). To press this point is not to deny that foundation and other myths and traditions can be very important components in the construction of a community’s identity; but memories they are not. Jeffrey Blustein observes that ‘collective memory is answerable to history in this sense: good remembering has truth as one of its values, and truth is, or ought to be, the concern of history’ (Blustein, 2008, p.  179; my emphasis).1 Claims to remember may, of course, be honestly mistaken, but a ‘memory’ that turns out to be such, no matter how firmly or sincerely it was previously held, must be discarded from the list of memories. Someone who continues to claim that he ‘remembers’ something that he now acknowledges to be erroneous exhibits either confusion or deceitfulness; at best he may say that he thought  he remembered it.

In the sense in which Margalit speaks of memory, a society’s ‘memories’ originate in the experiences of its members although there may be no members now alive who were around when the remembered events occurred. (Nor, needless to say, must every member of the society remember a particular event personally; to insist on that impossible condition would empty the category of social memory). Many of a society’s most significant memories concern events that are well beyond the scope of living memory. The battle of Waterloo (1815) holds an eminent place among the collective memories of modern Europeans, having influenced the course of events in that continent for decades afterwards—indeed, in many respects right up to the present day—but everyone who was living at the time has long since passed away. No one alive today has any personal recollection of Waterloo; yet to deny on that basis that it can be an object of genuine memory is to construe the notion of memory too narrowly. What happened at Waterloo can be recovered from the sources which still survive: written eye-witness testimonies, contemporary records and diaries, archaeological evidence from the battlefield, and such like. In general, vehicles of memories form a broad and miscellaneous set: beyond those just mentioned are, for instance, traditions and handed-down tales, poems and pictures, artefacts and ‘memorabilia’, landscapes, buildings and monuments.2 Such items provide the raw material from which minds of the present construct the story of the past, bringing into focus the reality of what happened. Naturally, where the vehicles fail, memory must fail too; all too often, surviving evidence is insufficient to permit more than the haziest of glimpses of some past era or episode. Fortunately, the mists of time often can be blown away, or at least partially dispersed, by the application of professional methods of research and analysis which uncover new sources of evidence or elicit fresh information from that which was available before. Margalit does not discuss the role of historians or of the academic historical profession beyond remarking that ‘history’ is more focused on securing objective truth than is ‘traditional shared memory’ (2002, p. 60). Here he is in agreement with Blustein. But he rightly takes issue with critics who suppose that ‘history … [is] cold, even lifeless, whereas memory can be vital, vivid, and alive’ (2002, p. 67). For if tradition and myth are important ways of bringing the past to life (or, to be more precise,
certain visions of the past), history is another—and historians have the right tools to correct unhelpful distortions of memory stemming from sentimentality or nostalgia (2002, pp. 61–63). Memories of any kind rarely wear their meanings on their sleeves and require situating within broader contexts of interpretation for their sense and import to emerge in full view. The writer who provides a dramatic narrative of the course of the Battle of Waterloo satisfies a certain need but does not thereby explain why the conflict mattered so much to succeeding generations of European people. Performing the latter task calls on the application of specialist analytical and interpretative skills which may be considered to be especially, though perhaps not exclusively, the province of the historian. Moreover, the study of history enables those who are not members of a specific community of memory, and who have not inherited its myths and traditions, to engage with features of its past that are of more than localised interest.3 History expands the range of people for whom certain experiences can become shared memories, worthy of their attention and reflection. Historians may be regarded as the professional memorialists who preserve and help to disseminate the memories which matter to people: memories of the actors and events that have gone to make communities what they are, shaping their distinctive beliefs, values and aspirations. A shorthand way of putting this is to say that historians are stewards of memory. In order to fulfil this role responsibly, they must at all times be honest, objective and un-biased. They may also be called on to show the moral—and sometimes even the physical—courage to stand by the facts when the story they have to tell is not that which their audience wishes to hear. Historians are called upon to evaluate the truthfulness of collective memories, and their input is vital in part because, as Blustein notes, communities have frequently ‘fabricated stories about the past that legitimize their possession of or claim to power and dominance’, asserting these to have the authority of memory—often with dire consequences for other communities (Blustein, 2008, p. 208). Moreover, communities are often readier to hear about the wrongs that they have suffered at the hands of other communities than to recall those which they themselves have visited on others. People do not as a rule enjoy being reminded that they, or their ancestors, had blood on their hands.4 If historians fall short in their
commitment to truth, or if they truckle to popular demands to tell a pleasing story which bolsters a community’s amour propre, their fault is not merely epistemic but moral. Failing to meet academic standards of rigour and exactitude, they are also unworthy stewards who cheat the communities they aim to serve by subverting their formation of an honest self-image. Some historians may regard the label of ‘stewards of memories’ as too spacious a description of their role, complaining that it imposes too much responsibility on them to be gatekeepers of the temporal record. The historian who is content to work on her own small plot of ground in the garden of social-science research may have no wish to be an arbiter of what needs to be remembered and of what may safely be forgotten. Yet if every historian were to choose exclusively ‘safe’ topics to research and to teach, in the hope of avoiding social controversy or causing upset, then moral eyebrows might justifiably be raised. The historical profession could be regarded as guilty of collective failure if none of its members were willing to tackle such harrowing topics as the Holocaust or the Atlantic slave-trade. Fortunately that is not the case, and there is certainly room for members of the profession to work on non-contentious or non-emotive subjects so long as at least some of their compeers address the more distressing passages in the human story. Horrors such as the slave-trade need to figure amongst our collective memories not only for the obvious reason that we need to be on our guard against the danger of similar things ever recurring, but also in order that we may know ourselves better, our human strengths, weaknesses and vulnerabilities. Neglecting to record and reflect on the inhumanity that man has practised against man would mark a sad dereliction of moral duty. Plausibly, it is a duty for at least three reasons: (a) because we owe it to the dead to recall their sufferings, as a way of showing them the respect that is due to them as human beings, and that was insufficiently shown to them during their lifetime; (b) because it is right to acknowledge and regret our own sins and the sins of our forebears with whom we are linked by close cultural and social ties; (c) because acknowledging and trying to understand the wrongs committed in the past should help present generations to take the appropriate steps to guard against similar things happening in the future. I shall return to some of these themes later, but first the question must be asked: If
remembering the wrongs committed by or endured by our own ancestors is a moral duty, then on whose shoulders, precisely, does it bear? Historians are evidently the people best equipped by their training and experience to elicit, validate, systematise and record what ought to be remembered. But that is only to say that historians are peculiarly well-fitted to discharge the duty of stewards of collective memories, not that they bear the ultimate responsibility for doing so. The most persuasive answer is that the obligation to remember the past is a broader social one, devolving on whatever society is ‘at hand’, in Margalit’s plastic phrase. Furthermore, it is not only the negative features of its past that a society does well to remember, even if these demand a particular effort of attention. For any society that cares about where it is going needs to know where it has come from and only a society that had no interest in its own future would see no reason to preserve its memories. Because societies ought to care about and to protect the welfare of their members, including those of members not yet born, the imperative to remember carries its own moral weight. James Armitage has remarked that ‘human flourishing—the individual’s maximization of her human capabilities, and our collective endeavor to realize the best for humanity as a whole’ is simultaneously ‘present-centered, future-oriented, and past dependent’. This is because in planning in the present for the future good of the species, we need to consult ‘the collective record of the past’ that ‘only history’ can supply, in order that we may choose wisely among alternatives, maximise possibilities, and avoid previous mistakes (Armitage, 2003, p. 20). A community that desires to preserve its shared memories thus requires memorialists for more than a single purpose. Nowadays historians are the specialists most commonly trusted by society to carry out this role in a competent and professional manner. (In the past, less reliable ‘memorialists’ have included such figures as epic poets, sages, story-tellers, priests and prophets). To be sure, historians commonly pursue their trade because they find the past fascinating in its own right as well as on account of its capacity to teach moral lessons or buttress a community’s sense of identity. To many of its practitioners, history is an academic activity which provides immense intellectual and emotional satisfaction, even where it reveals the more upsetting aspects of the human past. Though the pleasure that the study of history brings can often be bittersweet,
recovering what happened in the past and recreating the inscape of earlier lives makes a profound appeal to many imaginations. Historians are stewards of collective memory which ‘binds the members of a community to one another, inspires collective action, and encodes the values that give meaning to its collective pursuits’ (Blustein, 2008, p. 181). But to the individual historian, the recovery of memory may also offer rewards of a more personal kind, deepening her understanding of human beings and enhancing her own self-knowledge. Margalit explains his term the ‘society at hand’ as referring, depending on context, to any collective body of humans from the species as a whole, down through nations, tribes, racial groups, social classes, countries, political parties, to such small associations as town councils, neighbourhood watch groups, or sports clubs. The specific responsibilities to remember that a society incurs depend on its particular purposes and modes of activity. Larger-scale societies will typically have wider-ranging, though not necessarily more serious, evils than such micro-societies as football clubs or church congregations to recall (as cruelty can be practised on a small scale as well as a great). A ‘memory’ that is inaccurate, partisan or incomplete is worse than none at all where it entrenches privilege or offers false excuses for past crimes. Occasionally memories may also be used to ‘justify’ present crimes: Tzvetan Todorov cites the case of the Serbs during the 1990s who sought to defend their aggression against the Bosnian Muslims by recalling how, centuries ago, they had fought against other Muslims, the Turks, in defence of their land. As Todorov remarks, ‘Discovering and telling the truth about the past is perfectly legitimate, but it does not justify wars of aggression’ (Todorov, 1999, pp. 257–58). Shared memories are an important part of the glue that binds a society together but they are open to abuse by the unscrupulous. While the moral responsibility to remember the past is best conceived as a social responsibility, the optimal way for a society to discharge its duty to remember is to entrust the preservation of the critical memories to those who are skilled in historical techniques. The original duty to remember does not belong to historians qua historians (although it may sometimes apply to individual historians qua members of some society that bears that responsibility collectively). Historians’ responsibility to remember is one that is devolved to them by the society ‘in question’.5
Just as civic society relies on members of the medical profession to preserve the physical health of its members, it relies on the efforts of historians to preserve its significant shared memories. To discharge the responsibilities of their stewardship role effectively, historians must conduct their work without fear or favour, according to the highest scholarly standards. Yet the characterisation of the historian as a steward of memories is potentially in one respect misleading. In common parlance, a ‘steward’ is someone who looks after the goods of some other person or institution on behalf of and for the benefit of that person or institution. Hence it would be a bad steward—though it might sometimes be an honest individual—who was prepared to sacrifice the owner’s interests for the sake of those of some rival interest-holder. This might suggest that a historian, in order to be a good steward of the record of some given society, is one who would be willing to tell a partisan story where she judged this best served its special interests. Since this is clearly unacceptable, the idea of the historian as a steward of collective memory needs to be construed in a way that excludes this kind of partisanship. The historian has an overriding responsibility to exercise the duties of stewardship on behalf of humanity as a whole, rather than in the peculiar interests of some given community where these are in conflict. Historians are obliged always to be honest brokers with the facts, non-biased reporters who ‘tell it as it is’ and are ready to reveal the more disreputable aspects of a society’s record as well as the facts that flatter it. In the last analysis, the historian is a steward of shared memory for that largest society of all, the community of mankind. Todorov’s Serbian historian who exploited his country’s record to justify oppressive treatment of Bosnian Muslims was therefore a bad steward of memory, regardless of how well his narrative may have pleased his Serbian audience. Yet the proposition that he was acting as a good steward even on behalf of his own community is itself open to challenge. How well were the Serbs really served by being given a slanted and partisan version of their own history? A society that is unwilling to face up to the truth about its own past inhabits a fool’s paradise; its self-image is founded on illusion. In the long run, this is likely to do it far more harm than good, the damage being moral—a degrading loss of ethical compass and subservience to unworthy ideals—as well as practical or political. It is not
just the interests of humanity as a whole that are poorly served by deliberate distortions of the historical record: those of particular communities are likewise jeopardised when tricks are played with the truth, whatever first appearances may suggest to the contrary. The historian who aims to be a conscientious steward on behalf of a society will render a candid and unvarnished account, listing both the credits and the debits as accurately as she can. Admittedly, while all stewards of memory should discharge their role conscientiously, failure or shortfall would seem to be unequally serious in different cases. Learning what happened in the Nazi period is more important for twenty-first-century people than being well informed about the depredations of Attila the Hun. The battle against Nazism was the most significant existential struggle of the twentieth century in the West and its moral and practical echoes continue to resonate. Therefore an historian of the Nazi era who wilfully or carelessly misstates the facts potentially causes more harm than a writer who purveys lies or half-truths about Attila.6 That said, even the careless biographer of Attila errs morally in falsifying that human past on which our understanding of human motives and actions is grounded. And where sound stewardship is absent, memories are left to the mercy of people with a vested interest in their suppression or perversion. (Recall how dubious myths about the ancient origins of the Germanic Volk played a sinister role in the construction of the Nazi ideology.) The landscape of memory is a delicate environment, perennially threatened with ecological disaster and requiring expert attention to protect it. To present one’s work as ‘history’ is implicitly to claim that one is relating facts, not fantasies. Where a shortfall of evidence makes interpretation of events doubtful, speculation and ‘imaginative reconstructions’ must be clearly identified as such. The obligation to tell the truth, and to tell it shorn of bias or distorting selectivity, applies to all historians, whether they be professional historians working in academies, or amateur or ‘weekend’ historians interested in discovering the past of their local neighbourhoods. And it applies to professional historians whether they intend their work for a predominantly specialist readership of colleagues or mean it to reach a broader, more ‘popular’ audience. (Despite some publishers’ scepticism about the survival of the ‘general reader’, history
remains one of the mainstays of book sales in many U.K. high-street bookshops). Indeed, so-called ‘popular’ historians, in common with their academic counterparts when they write to be read beyond the academy, bear an especially weighty responsibility to be conscientious stewards of memory, for the twin reasons that their work is likely to reach a larger number of readers, and that many of those readers will lack the professional historian’s expertise in discriminating historical truth from falsity.7 There is a crucial difference between the historian, whether she describes herself as ‘academic’ or ‘popular’, ‘professional’ or ‘amateur’, and the historical novelist who spins a tale out of historical materials without purporting to relate the facts and nothing but the facts. Historical novelists may invent characters and events ad libitum but this is never permitted to historians. Academic historians have sometimes taken a rather snobbish or condescending attitude towards historical writing of the ‘popular’ genre. This is unjust and out of place. Works of popular history, when they witness to the truth, offer invaluable windows on the past and convey a genuine and often far from superficial understanding of the amazing variety of ways in which human lives have been lived. Popular historical writers serve the Collingwoodian project of revealing what human beings are by showing what human beings have done. Their works are crucially important in broadcasting significant shared memories beyond the hallowed cloisters of the academy. Popular history promotes the desire to know more about that past of which, as Mary Webb wrote eloquently a century ago, ‘because it is invisible and mute, its memoried glances and its murmurs are infinitely precious’ (Webb, 1978 [1924], p. 6).

2 Warts-and-all History (But Not Forgetting the Beauty-Spots)

In speaking of the importance to societies of acquiring an unblinkered view of their pasts, it should not be forgotten that highly general ascriptions of moral blame for former failings may seriously oversimplify the moral landscape, tarring all inhabitants of a particular social setting with
the same brush. Nevertheless, a great deal of human behaviour evidently is rooted in structural conditions that are society-wide, rather than in purely autonomous individual acts of will.8 The challenge for historians is that such structural conditions are frequently multi-stranded and complex, making discriminating and unravelling them difficult, especially where the evidence is sparse. Historical research is not a simple process of uncovering ‘the facts’: for ‘facts’ only emerge out of the background when viewed within a certain context of interpretation. Hence the choice of an appropriate interpretative frame is never determined simply by the ‘facts’: it depends on the interests we bring to the interrogation of the past, the reasons why we wish to remember. And those interests are liable to change with time. As an example, Mary Beard has pointed out that during the heyday of the British Empire, British interest in the history of Rome chiefly centred on the factors which caused the empire of Rome to rise and subsequently to fall; nowadays, on the other hand, ‘we come to Roman history with different priorities—from gender identity to food supply—that make the ancient past speak to us in a new idiom’ (Beard, 2015, p. 16). Sufficient to the day be the interests thereof. But maybe there are certain happenings, or categories of happenings, that ought never to be forgotten or neglected. In speaking about what ought to be remembered, Margalit claims that there is a more urgent need for ‘humanity’ to ‘remember moral nightmares … than moments of human triumph’ (that is, ‘moments in which human beings behaved nobly’) (2002, p. 82). This claim, I think, should not be allowed to pass unchallenged. Margalit argues in its justification that: ‘There is asymmetry between protecting morality and promoting it. Promoting is highly desirable. Protecting is a must’ (2002, pp. 82–83). The basis of the obligation to protect morality, he suggests, comes from ‘the effort of radical evil forces to undermine morality itself by, among other means, rewriting the past and controlling collective memory’ (2002, p. 83). However, the reality of the quasi-demonic forces working to undermine the good which this Manichean conception implies must be considered very doubtful, to say the least. Historical experience reveals the manifold forms that evil can take but it scarcely warrants their personification as active malevolent agencies. Aside from this confusing reification of a metaphor, Margalit’s distinction between ‘promoting’ morality and ‘protecting’ it is also less
secure than he supposes. In aiming to persuade us that getting people to desist from doing bad things is more important than convincing them to do good or meritorious ones, he implies a somewhat dismal view of life that systematically subordinates the achievement of the good to the avoidance of the bad. Yet very often the most effective way to stop people from doing evil is to convince them that there are alternative courses of action that are more worthy of their choice. Properly considered, then, ‘protecting’ and ‘promoting’ morality are really two sides of the same coin. Margalit probably takes the view he does because he is mindful of the woeful record of evils of the last 100  years or so—the calamitous wars, genocides, ethnic cleansings, state despotisms, and other gross instances of human inhumanity. Seeking to reduce as far as possible (however far that may be) the chances of such things occurring again is plainly of the first importance; but no morality was ever inculcated successfully by means of negatives alone. People desire ideals to live by, and ideals are the heart-blood of any ethical system. This is a truth well recognised by the great religious teachers, such as Jesus, the Buddha and Confucius. The best defence against evil is recognition of what Iris Murdoch notably termed ‘the sovereignty of Good’ (in Murdoch, 1970).9 I have said more than once that an historian who never issues a false statement may still be guilty of distortion by presenting the facts too selectively or economically. This could involve suppressing parts of the evidence that fail to support the favoured interpretation, or laying an undeserved stress on those details which do. Selectivity of this deliberately deceptive kind might even be reckoned a greater obstacle to accurate understanding than downright falsehood, because while a falsehood can be defeated by contrary evidence, a selective interpretation may contain no defeasible untruths. But there is another kind of selectivity which is more innocent in intention, albeit not wholly without its dangers. All writing of history is selective in the sense that the questions that historians ask and the kinds of information they pursue are inevitably influenced by current cultural concerns. To cite a further example from Beard, it is unlikely to have occurred to any historian working a century ago that the extent of environmental pollution caused by ancient Roman industry might represent a worthwhile subject for research. Nowadays, however, there could be few more vital enquiries to pursue, when the
multi-­disciplinary tracing of the course and impact of anthropogenic climate change has come to be recognised as being of literally life-and-death importance.10 That the subjects chosen for investigation by historians are reflective of contemporary trends and interests is neither surprising nor reprehensible. Nevertheless, intellectual fads and fashions can have a less healthy influence when they cause particular research band-wagons to roll (often greasing their wheels with generous research funding), while giving few or no incentives to others. Intellectual fashion is not always a bad thing, and it can be a very good one when it concentrates attention on topics which were formerly undeservedly overlooked. The currently fashionable areas of women’s history and environmental history were for much too long neglected when historians were fixated on the deeds of kings, generals and other high-born men. Even so, there are downsides to the influence of fashions in history. The historian who aspires to academic distinction and honours has a lessened chance of success if the field she works in is presently out of favour. The gravitational attraction of fashionable topics may cause others which are less à la mode to be unfairly looked down on and historians who study them to be regarded as belonging to the second rank. Beard’s remark that historians today are more likely to ask questions about food supply or gender identity in ancient Rome than to enquire, as their predecessors did, into the causes of Rome’s rise and fall should give us pause: might we be in some danger of substituting one sort of tunnel vision for another? However we choose to answer that question—and I shall not pursue it further here—the most problematic kind of selectivity in history from an ethical point of view is indubitably that involved in deliberately seeking to make persons, events, institutions or processes appear to be better or worse than they really were (or at any rate, than the author believes them to have been). A sycophantic biography of Stalin which praises to the skies his modernisation of the Soviet Union or his leadership during the Great Patriotic War but which ignores or makes light of the famines, purges and mass-murders for which he was equally responsible, would be objectionably selective in this manner. Similarly so would be a history commissioned by an oil company to spotlight the technological advances and economic benefits it had created while wrapping in silence the
damage its oil-extraction activities had done to fragile ecosystems. Praise may be given where praise is due but encomium-writing is never the historian’s job. As a steward of shared memories, the historian needs to be on guard not only against other people’s biases and prejudices, but also against her own. The writer of an historical biography who ‘has it in’ for the person she writes about is on a wrong footing from the start; so too is the biographer whose admiration for her subject borders on adulation. The historian must never forget that the responsibility to be accurate and balanced is not a matter solely of professional propriety but a moral responsibility that she owes to her fellow human beings—society in the largest sense. Admittedly, opinions can legitimately vary on what counts as an ‘accurate and balanced’ treatment of some historical topic. That an author believes she is being scrupulously fair in her writing is no guarantee that she is being so. But occasionally a lack of balance, either in the presentation of facts or in the significance the author assigns to them, appears too evident to miss. Any choice of examples will be invidious, and readers’ interpretations of what authors write should never be looked on as infallible, but one text which might reasonably be thought to lack balance is the fifth chapter of Hugh Brogan’s The Penguin History of the United States of America, which deals with the fate of the indigenous population of the country between 1492 and 1920 (Brogan, 1990, pp. 51–70).11 The fact that only twenty pages in a book of around 700 pages are devoted to the fate of the native people of America is itself revealing; but most striking is the author’s insouciant attitude to the sufferings of the Indians, whose gradual displacement from their native lands by more technologically advanced white settlers he seems to consider to have been too inevitable a process to merit much regret or sympathy. It is unsettling to be told that: ‘A history of the United States must be a history of the victors; the defeated are relevant chiefly for what they tell us of their conquerors’ (1990, p. 55). Recording the early pioneer John Winthrop’s justification of the Christian take-over of land that, since it had previously been ‘common’ land and ‘proper to none’, was now available for appropriation, Brogan remarks that ‘Perhaps his style betrays a slightly uneasy conscience; but even if it does not, he should not be blamed overmuch.’ This is for the surprising reason that: ‘The migration of forty million Europeans
between 1607 and 1914 is too great a matter to be dealt with by elementary moral texts, such as the Eighth Commandment [‘Thou shalt not steal’]. Migration, we have seen, is natural to man’ (1990, p. 60). On this thinking, it would seem that, provided only that crimes are practised on a sufficiently large scale (or against people of lesser status?), they need not trouble the conscience overmuch. Brogan argues in defence of settler expansion that there was really plenty of land to satisfy the needs of both settlers and Indians, but this doubtful fact (doubtful, because of the extensive territorial range that is required to sustain a hunter-gatherer way of life of the kind practised by many Indian tribes) has nugatory potential to excuse given that in the end the white settlers controlled the whole of it (1990, p. 60). To be fair to Brogan, he does in passing criticise the cruelties and oppressive treatments meted out by non-native people to native people but these appear to cause him limited discomfort in view of the inevitability of the march of white colonisation. The Indians, featuring among history’s victims, can be relegated to a moral footnote. In a further excusatory gesture, Brogan suggests that, when compared with other atrocities committed by Europeans elsewhere (e.g. the French Revolutionary Reign of Terror, Stalin’s purges or the Nazi genocide against the Jews), the treatment of the American Indian ‘was not exceptional, it was characteristically European, if not human, and gentler than many comparable manifestations’ (1990, p. 68; my emphasis). On this version of moral accounting, not only the Eighth but the Sixth Commandment also (‘Thou shalt not kill’) falls by the wayside when history’s inexorable processes are at work.12 That the writing of history should be balanced, impartial and non-partisan, comprehensive in neither suppressing nor over-emphasising evidence in accordance with the author’s personal predilections (or a funding body’s preferences), equitable in its moral judgements and willing to accord praise and blame where these are due, may appear to be uncontentious ideals. But the historian who takes seriously her duty to be a steward of memory on behalf of some society ‘at hand’ (which may, of course, in some contexts, mean the whole of the human race) will aspire to something more than merely the expression of mild moral sentiments. Blustein has written that society’s obligation to bear witness about past suffering is an obligation not just to remember but also to talk about the bad things
that have happened to people (Blustein, 2008, p. 304). And no one is better qualified than the historian, the steward of society’s memories, to discharge the role of bearing witness in both of these aspects. Blustein is emphatic that by remembering now-dead victims we show them a respect that was wrongly withheld from them during life. Remembering victims of war, persecution, racial and religious discrimination, slavery, ethnic cleansing and other kinds of atrocity can neither restore them to life nor provide them with any compensation of which they can ever be aware. Memorialisation is nonetheless important because it is ‘restorative in a moral sense, insofar as it symbolically rehumanizes victims, acknowledges that they were wrongly treated, and—belatedly—gives them standing in the political community from which they were wrongfully excluded’ (Blustein, 2014, p. 188). Memorialisation is a public moral duty, not one that is borne exclusively by historians. But because ‘[w]illful forgetting of the victims of wrongdoing is itself a denial of the moral significance of their suffering’, it is evident that historians who ignore or downplay the suffering of victims are culpable for their failure to bear witness (Blustein, 2014, p.  188). This is not how stewards of collective memory should conduct themselves. Sometimes historians who were eye-witnesses—or even themselves victims—of events will have their own personal testimony to deliver; more commonly, historians’ mode of bearing witness is by sifting, appraising and passing on the testimonies of others. But is it always appropriate to tell the truth, the whole truth, and nothing but the truth? Recall Margalit’s question, ‘Are there episodes that we ought to forget?’ (Margalit, 2002, p.  48). There are memories which, were they to be brought under the historian’s gaze, would be likely to pain or embarrass certain groups or individuals, risk reopening old wounds or even reignite old conflicts. Might it sometimes be wisest, and kindest, to leave such sleeping dogs to lie? To this question probably no universal answer can be given; each case needs to be considered on its merits. There would be little warrant for suppressing some newly-discovered information about, for instance, the mutual jealousies and infighting among the members of some previous government administration simply in order to save the blushes of still-serving ministers or officials. But if the publication of that evidence were likely to endanger current national security or hamper certain vital negotiations with a foreign power, there would be a
stronger case for keeping it under wraps, at least for the present time.13 Another arguably respectable ground for reticence is charitable concern for the feelings of living people who would be saddened or shamed were historians to reveal details derogatory to the reputation of their dead relatives or friends: although this justification for silence needs to be weighed against the importance of not setting on a moral pedestal people who have no right to be there. Blustein, who is in general no friend to suppressing the truth, admits that ‘remembrance can be excessive, and sometimes needs to be tempered by forgetting’ (Blustein, 2014, p. 178). Attempting to expunge past wrongdoing and injustice from the historical record would mark a signal failure to bear witness and uphold the good. Yet a community that forever dwells in the past, endlessly rehearsing the wrongs it has suffered, prolongs a sense of victimhood which hampers forgiveness and renders reconciliation an elusive goal. Memorialisation is most problematic when it sustains hard feelings in victims or their descendants which would otherwise have faded away in time (cf. Blustein, 2003, p. 221; Blustein, 2014, p. 178). There is a big difference between making light of old wrongs and refusing to allow them to stand in the way of achieving a better future, in which friendship succeeds to enmity and cooperation replaces strife. A community which is reluctant to abandon its status of victimhood and will not forget, or permit others to forget, its old wrongs, is trapped by its history. So long as it remains there, its ability to forge constructive relations with other communities—and particularly those which it blames for its previous injustices—is constrained by the dead weight of memory. Remembering the subjects of injustice is a symbolic form of compensation that extends to them the respect for their personhood that they were previously denied. Nevertheless, given the risks that memorialisation carries of souring relationships which it means to heal, it may sometimes be preferable, in the evangelist’s words, to ‘let the dead bury their dead’ (Matt. 8:22) and avoid jeopardising the formation of future good relationships by insisting overmuch on the recognition of past wrongs and the acknowledgement of guilt. Clearly, there is a balance to be struck between too much remembrance—or rather, remembrance of the kind which keeps ancestral hostilities alive—and too little. Where the effects of old injustices still linger, complicated questions of responsibility to
right the wrongs arise for governments, ethnic groups, clans, churches, companies or other collective bodies or individuals who had an ancestral hand in creating them. In these cases, simple remembrance is not enough but must (a moral ‘must’) be followed by practical action once lines of responsibility have been established. But the value of remembrance goes beyond the righting of past or persisting wrongs and the paying of respect to the dead. Public remembrance honours individuals who have made an outstanding contribution to the life of society through their achievements or discoveries, heroism or generosity. It recalls the salient events in a nation or community’s history which have made a notable impact on its development or otherwise affected it for good or ill. More broadly, public remembrance, especially when it assumes more ritualised forms, reminds living communities of who they are, where they have come from and what they stand for. Memorialisation reinforces social identity and makes the values that sustain it more vivid to the mind. Ideally it should itself be memorable in order to maximise its effect. Memorable remembrance can take many forms, some being performative in character while others are physical and material. Examples of the former are sacred and secular memorial ceremonies, concerts, exhibitions, prizes, games and competitions held ‘in memory of’ some person or persons, and the celebration of anniversaries. Instances of physical remembrance include monuments, statues, grave-markers, war memorials and other architectural features; also the naming of buildings and locations after prominent people and the mounting of commemorative plaques and tablets.14 More will be said about the historian’s treatment of individual reputations and the ethical questions that this raises in the following chapter. Meanwhile, the point deserves repeating that it is morally and practically healthier for a society to face up to the facts of its past, uncomfortable though some of these may be, than to indulge in unmerited self-flattery. Indeed, a society which conceals from itself, or which is ‘protected’ from knowing, the less creditable aspects of its history, really does not know its history at all. To tell only part of a story while ignoring or passing lightly over other parts inevitably conveys a misleading impression even of the portion that is told. A society which fails to face up to its past, taking account of the good and the bad together, has a flawed self-image which distorts its past and hobbles its future relationships. A former colonial
power which glories in the ‘civilising’ impact it once had on some ‘savage’ part of the world while ignoring the exploitation and humiliation that were inseparable from its rule, ought not to be surprised if its present benevolent overtures to the people of that region are not always greeted with gratitude. However, a society does itself an injustice if it focuses only on the bad or shameful episodes in its past and neglects to recall its genuine achievements and the more admirable men and women who have graced the pages of its story. The past is not a scene of unmitigated woe and the good has not lost all its battles with the bad. A society which fixes its gaze on the negative features of its history and ignores the positive fails to arrive at a balanced view of its past and saps public confidence in its ability to face up to the challenges of the future. For if its past is so uniformly bad, what real hope can there be that it will do any better in the future? (Some people think that a depressed and depressing attitude of this sort has become all too prevalent in post-Brexit Britain.) History is not just a catalogue of injustice, oppression and suffering but also a repository of inspiring examples of conduct and accomplishment. To regard the one but not the other is to be a poor student of the past. Every society has things in its story to condemn and things to applaud: including among the latter the everyday acts of goodness of ordinary men and women who have been moved by benevolent feeling or a sense of the right. Even though their names are long forgotten, the fact that they once existed is a matter to celebrate. Both oppressors and the victims of oppression (and their descendants) should strive to reach an unedited and unadulterated view of their history to ensure the validity of their respective self-images. Victimhood is victimhood however it is remembered, but adulterated memory misrepresents former and undermines current social relationships. Victims, too, must occasionally face uncomfortable truths when new facts surface or previously known ones are reappraised in fresh frameworks which complicate the moral accounting. One instance which is sensitive right now in the wake of the Black Lives Matter movement concerns the emerging historical evidence of the large extent of indigenous African complicity in the transatlantic slave trade between the fifteenth and the nineteenth centuries. The European traders who bought slaves for transhipment to the
Caribbean islands, North America and Brazil rarely ventured very far from the security of the forts they constructed along the West African coast. Rather than take slaves themselves, they preferred to depend for their supplies on the local rulers and tribespeople, who sold to them the men, women and children they captured in wars or in raids into the interior. As J.H.  Parry notes, ‘Slavery had long existed throughout [West Africa]. Prisoners taken in inter-tribal war were commonly enslaved, and wars were undertaken deliberately for the purpose. Slaves might also be recruited by kidnapping or by purchase’ (Parry, 1964, p. 272). For some African kings and war-lords, slave-raiding followed by slave-trading was very big business. Hugh Thomas records of one African ruler, King Tegbesu of Dahomey, that he is reputed to have sold ‘over 9,000 slaves a year, chiefly to the French and Portuguese’; as a result of this lucrative trade his annual income in 1750 is reckoned to have been ‘about £250,000—a figure which far exceeded that of the richest merchant of Liverpool or Nantes’ (Thomas, 1997, p. 352). Any attempt to mitigate the guilt of the white slave-merchants by pointing to the collaboration of African rulers and traders in the appalling business of slave-trading would be grotesquely out of place. An agent does not become less answerable for the crimes he commits because he has aiders and abettors. And the slave-trade would never have risen to the enormous height it did had it not been for the insatiable demands of European and American merchants for ever-greater numbers of slaves. But neither does it do to excuse the black Africans from their part in the traffic by asserting that they only acted on behalf of the merchants. Not only was slaving a way of life in Africa that went back centuries before the transatlantic trade began, but the Africans who supplied the slaves were themselves ruthless and pitiless enough. It might be suggested that the fact that slaving was an established tradition among certain West African peoples provides some mitigation of the guilt of the rulers and their agents who captured and sold the slaves. And at one level it may do: to grasp the evil of taking slaves would have called for some radical thinking ‘outside the box’, something which is rare enough in any culture. It was bad moral luck to have been born in an environment that accepted slaving as a perfectly normal mode of life. Nevertheless, slaving corrupted the Africans who carried it out, just as it did the white merchants to whom
they sold the slaves—and who, arguably, had far less excuse for their activities, given the incompatibility of slavery with the Christian principles they professed to uphold. Conceivably, some of the African rulers who captured slaves did from time to time wonder whether what they were doing might be wrong, but either suppressed the thought as being detrimental to their self-interest or cynically considered it inapplicable in this dog-eat-dog world in which only the ruthless prosper. West African people, like people anywhere, dreaded the prospect of becoming slaves, knowing full well that slavery meant removal from their homes and loved ones, oppression, misery, hard labour, cruelty and an early death. One might therefore think that the evils of slavery ought to have been quite sufficiently evident to invite moral censure within West African culture itself. The idea that the black slavers were morally more naïve than their white counterparts is as insulting as it is implausible. If credit should be given where credit is due, then so too should discredit, where that is what is due.15 Attempting to sweep uncomfortable or upsetting facts about the past under the carpet, in order to spare the sensitivities of living people, may be well-intentioned but it is rarely truly wise. The Socratic injunction to ‘Know yourself’ applies to societies as much as to individual persons. In neither case is true self-knowledge arrived at by faking or funking the background history.

Notes

1. In what follows I shall use the terms ‘shared memory’ (Margalit’s) and ‘collective memory’ (Blustein’s) interchangeably.
2. A brief word should be said here about the contribution that archaeology makes to the construction and preservation of shared memory. Archaeologists play a crucial and distinctive role in the making of memory, and are uniquely well qualified to do so where the written records that are relied on by historians are inadequate or non-existent. Archaeology commonly provides the only route to information about pre-literate societies, but sub-branches of the discipline such as bioarchaeology have also facilitated the recovery of memory of aspects of life
in literate societies which are unobtainable from the written record. To cite just one example, much light has been thrown on the day-to-day existence of the working-class population of the district of Spitalfields in London between 1700 and 1850 by the analysis of skeletal remains removed from the former Spitalfields cemetery. Without this analysis, far less would now be known about the diet, diseases, work, lifespan and environmental conditions of London’s poor during this significant period of the city’s growth. This can with perfect appropriateness be described as a recovery of memory. See Cox (1996) on the Spitalfields excavation; for more general works on bioarchaeology, see Buikstra and Beck (2006); Roberts (2009).
3. It is interesting to note that the multiple-choice test on life in Britain which immigrants applying for UK citizenship are required to take contains several questions on British history: examples included in the sample test are ‘What was the last battle fought between Great Britain and France?’ and ‘What was the cause of the destruction of parts of London during Charles II’s reign?’. Whether most native-born Britons confronted with the test could produce a good score has been questioned by some critics.
4. This is not always the case, however, as the present soul-searching in many western nations regarding their former participation in the evils of the slave-trade and colonial exploitation plainly shows. National consciences can sometimes be pricked. Likewise, many Germans after 1945 came to ask themselves how they could ever have succumbed to the lure of Hitler and the Nazi world view.
5. It is hardly to be supposed that small-scale societies such as golf clubs or philatelic societies will need or wish to commission professional historians to be custodians of their records. But occasionally there may be morally murky aspects of their past that ought not to be swept under the carpet. A golf club that once refused to admit women, the working-class, Jews or ‘people of colour’ as members should now be prepared to acknowledge and repent of its discriminatory behaviour.
6. Surprisingly, even Attila the Hun has been held to be of some contemporary relevance. Wess Roberts’ 1990 book Leadership Secrets of Attila the Hun is described on the Amazon website as ‘a bestselling classic on leadership’(!). Should we feel alarmed?
7. Many historians have held prestigious academic posts while writing extensively for non-specialist audiences. Examples well known to British audiences include G.M. Trevelyan, A.J.P. Taylor, Hugh Trevor-Roper,
Helen Castor, Jenny Uglow and Mary Beard. The distinction between ‘academic’ and ‘popular’ is, indeed, more suitably applied to works than to their authors. So-called ‘academic’ writing is based on meticulous research into primary sources, typically engages in debate with other historians who are working in the same area, is thoroughly evidenced and referenced, and is intended to make an original contribution to knowledge; peer-review processes ensure that it measures up to these standards. Narrative is commonly, though not invariably, subservient to analysis and research methods are explicitly set out and defended. The technical apparatus of academic history may nowadays include the widespread use of statistical and other varieties of quantitative analysis, while historians are increasingly accustomed to join forces with specialists in other disciplines to produce collaborative research on such topical cross-disciplinary subjects as the effects of diet on health, the aetiology of diseases and pandemics, or the extent of environmental pollution caused by past industry. By contrast, ‘popular’ history writing aims to tell an exciting story or present a vivid depiction of some past scene of life in order to whet the appetite of an intelligent but not necessarily well-informed readership. Whether based on the author’s original research or on the work of other writers, it will typically be sparing in its references to other authors, although it ought always to acknowledge its intellectual debts. However, not all historical writing is neatly classifiable as either ‘academic’ or ‘popular’. Works that straddle the divide provide original interpretations or ideas to engage the attention of specialists while stimulating the interest of a non-professional audience by their lively evocations of past people and events. Mary Beard’s SPQR: A History of Ancient Rome (Beard, 2015) and Diane Purkiss’s The English Civil War (Purkiss, 2006) are just two of many recent excellent examples of substantial works that bestride this particular line.
8. Catherine Lu has recently warned against the oversimplified and over-generalising assignments of blame which have become common in academic writing as well as in popular discourse. She proposes moving beyond ‘the statist interactional approach that dominates international political discourse’, which prefers to single out nation states as the bearers of ultimate moral responsibility, to ‘a more expansive view of morally responsible agents’ and the often very complex social structures that underpin their choices and actions (Lu, 2017, p. 138). Iris Marion Young has emphasised the dependence of structural social injustices on the too-ready acceptance by individuals and institutions of socially
accepted norms and practices; she advocates a ‘social connection model of responsibility’ which assigns to societies and their members the responsibility to tackle structural injustices by questioning the assumptions that underlie them (Young, 2011).
9. In recalling the horrors of the twentieth century, it is all too easy to overlook the many admirable achievements which the century can also boast. Developments in science and technology have been in some respects mixed blessings, raising new problems at the same time as they have solved some old ones; yet the period has by and large been one of social advancement, with many countries—though sadly not all—witnessing dramatic improvements in health and educational provision, income and standards of living, housing, welfare, political rights and personal liberty. Not all authors have given the credit due to these more positive aspects of the last century. One such writer is Jonathan Glover, whose 1999 book Humanity: A Moral History of the Twentieth Century struck a particularly gloomy note, in its forty-three chapters rarely referring to any besides calamitous events (Glover, 1999). Such imbalance gives a distorted picture of the twentieth century, which possessed its triumphs as well as its tragedies.
10. Beard reports that archaeological scientists have been discovering traces of contamination from Roman industrial sources deep in ice-cores drilled in Greenland (Beard, 2015, p. 16). Et in Arcadia ego.
11. The book originally appeared in the USA in 1985 under the title Longman History of the United States of America.
12. It would be depressing indeed if Brogan’s treatment of the history of the settlement of North America represented the dominant approach. Fortunately it does not, and there are other studies of the devastating effects of white settlement on the native peoples of the continent that sound a far more sympathetic, and justifiably angry, note. Jonathan Lear’s Radical Hope: Ethics in the Face of Cultural Devastation movingly describes the experience of people whose social world has come to an end and who have no future to look forward to (Lear, 2006). A now classic work is Dee Brown’s lament for a lost way of life: Bury My Heart at Wounded Knee: An Indian History of the American West (Brown, 1970). Important recent publications which tell the story that historians like Brogan neglect are the essay collection Why You Can’t Teach United States History Without American Indians (Sleeper et al., 2015), and Jeffrey Ostler’s Surviving Genocide: Native Nations and the United States from the American Revolution to Bleeding Kansas (Ostler, 2020).
13. In the UK, government and other official documents of a sensitive nature can legally be withheld from public release for a period of 30 years from their date of issue. This seems reasonable where urgent reasons of national security provide the rationale for concealment, but in some cases where documents have been made accessible under the 30-year rule it is hard to see what vital national interest would have been endangered by their earlier release.
14. Worthy of special mention here are the so-called Stolpersteine (literally, ‘stumbling stones’) which commemorate the name and date of arrest of individual Jews and other victims of the Nazis at the place where they were arrested before being deported to extermination or punishment camps. Mostly set in pavements or in walls, the first Stolpersteine appeared in Germany in 1992 under the inspiration of Gunther Demnig and by 2019 were estimated to number 75,000, spread across several countries of Western Europe. These poignant reminders in their everyday locations are perhaps more effective than some larger-scale memorials in bringing home the fact that behind the mind-numbing totals of Nazi victims, each death was an individual tragedy occurring to a person like ourselves.
15. Candid acknowledgement of the part played by black Africans in the slave trade must never become a blame-shifting exercise. That, by and large, it has not is apparent in the large number of internet sources that can be accessed by conducting an internet search under the heading ‘Slave trade black involvement’.

9 Historical Biography: Giving the Dead Their Due

1 Who Should Be Remembered?

The previous chapter began by citing two questions raised by Avishai Margalit: ‘Are there episodes that we ought to remember?’ and ‘Are there episodes that we ought to forget?’ (Margalit, 2002, p. 48). It is evident that similar questions can also be posed about people: Are there individuals whom we ought to remember? and: Are there individuals whom we ought to forget? Here, again, it is important to be clear about who is the referent of ‘we’. Most deceased people have no lasting dwelling-place in the House of Fame, though their memory may continue to be treasured by those who knew and loved them. Besides the obvious emotional satisfactions it affords, remembering the dead rewards the living by helping to preserve their sense of family or of social identity and by reminding them of the values, interests and purposes they inherit from their forbears. Some individuals from the past inspire, others appal; some people deserve to be remembered after death for their good deeds or achievements; others, not so estimable, need to be kept in mind as warnings to posterity—even if posterity might prefer to forget them.

Whether or not fame is, as poets tell us, a fleeting thing, it is undeniably a relative one. One society’s heroes may be another’s villains or be totally unknown to them. Individuals whose fame transcends the bounds of their own communities may be remembered very differently in other places. Memories may be ‘shared’ in the minimal sense that they refer to the same personages or events while they vary markedly in content, emphasis or evaluation. So Winston Churchill is remembered as a valiant war-leader by many British and American people, but many citizens of India recall him primarily as the arch-enemy of their national independence and as the British prime minister who neglected to send relief to the starving people of Bengal during the catastrophic famine of 1943. Often the truth about an individual cannot be captured from a single viewpoint but needs to be assembled from several. To say this is not to endorse a post-modernist thesis to the effect that truth is not an objective matter but always relative to some specific point of view. There may be plural viewpoints on Churchill without the truth about him being plural, as the post-modernist would have it. To obtain a truly accurate picture of a complex individual like Churchill involves combining multiple angles of vision, rather in the manner of a Cubist portrait. Dissonant memories of the same person, in spite of their different tone and focus, may in fact complement rather than contradict one another.1 To speak of a ‘duty to remember’ our departed loved ones can sound odd; this seems a cold way to think about our relations to at least those of the dead whose memory should be pleasant to us. Still, many people have the sense that there would be something not quite right about forgetting ‘their’ dead, or allowing those memories to grow dim—almost as if this condemned them to a more profound sort of death. In part this may be because ceasing to think about our own deceased family members or other people who have been personally influential in our lives can look tantamount to a form of ingratitude. The conventional wording on gravestones, ‘Sacred to the memory of …,’ sacralises the remembering which it both signalises and invites. And where memory is lost there is a personal price to be paid as well, since knowing who our ancestors were and, where available, some facts about their lives, is a brick in the wall of our self-identity.

In time, of course, everyone passes beyond living memory. But when personal or first-hand memory of a person is extinct, documentary memory of one form or another may survive. It is in this documentary or ‘historical’ sense of memory that Napoleon Bonaparte or Elizabeth I are still remembered, though no one alive today has any recollections of the French emperor or the English queen. But while most people who have died, once they have passed from the memory of the living, have left behind little documentary record of themselves (though this pattern may now be changing, given the potential of digital technology to store effectively limitless quantities of data), that does not mean that their lives have caused no enduring effects. This is clearly not so where they have left descendants to carry on the family line. More broadly and subtly, as George Eliot reminds us on the final page of Middlemarch, even ‘we insignificant people’ (like the novel’s heroine Dorothea) prepare the lives of those who come after us with ‘our daily acts and words’, which can have an ‘incalculably diffusive’ impact on other people, both present and to come (Eliot, 1965 [1871–72], p. 896). The contributions we make to the lives of those who come after us may not be attributable to us by name but they are nonetheless real and they may also be important. The reality and significance of the actions of the hosts of the now anonymous dead in shaping the life of their times is readily acknowledged by social and economic historians. An historian researching, say, population fluctuations in East Anglia in the aftermath of the Black Death of 1348–50 may appear to be interested more in humanity in the lump than in the deeds of kings, aristocrats or other notable individuals. Even where names of specific individuals are recoverable from archived documents such as leases or manorial court rolls, there is not usually much further information supplied. A typical fourteenth-century manorial court-roll entry that records that John Gooding was fined three pence for allowing his sheep to stray into Thomas Wray’s field tells us nothing about John Gooding or Thomas Wray beyond the facts that the former was a somewhat careless sheep-owner and the latter owned or (more likely) leased a field. Neither Gooding nor Wray would make eligible subjects for the writers of a historical biography. Nevertheless, when they were alive, these men were just as much Kantian ends-in-themselves,
possessors of intrinsic worth and subjects of valuable lives, as were their more illustrious contemporaries such as King Edward III or Geoffrey Chaucer. Given the equal status of every human being in the Kingdom of Ends, it may seem hard lines that the vast majority of fourteenth-century people are now entirely forgotten, not even their names surviving in some faded court roll or charter. It might be said that no member of the Kingdom of Ends deserves to be forgotten. Yet it would be idle to suggest that all those forgotten people have been done a wrong. For ‘ought’ implies ‘can’, and the dead are far too numerous for all to be remembered. It might also be asked what, after all, is so terrible about oblivion? It would be hard to argue that the dead are currently being harmed by the non-survival of any documentary or other evidence to testify that they once existed. It is true (a point I shall return to shortly) that some people are ambitious to be posthumously remembered, even posthumously famous. Oblivion, in their case, means that they are entertaining during their life an ambition that is not in fact going to be realised, although they cannot know that at the time; and this could be argued to constitute a form of lifetime (as distinct from posthumous) harm to them. Some philosophers, of whom George Pitcher and Joel Feinberg are perhaps the best-known in this connection, have argued that the harms of death can only affect the living person—since after death there no longer exists a person to be either harmed or benefited—and that an ante-mortem subject is harmed when some significant project she is pursuing is in fact going to be truncated, frustrated or negated in some way or other by her dying, making it the case that she is engaged in a project that will not succeed (although this does not mean that it is doomed to failure, as a fatalist might wish to claim) (see Pitcher, 1984; Feinberg, 1984). The Pitcher/Feinberg theory that death is bad (when it is) for the living via the negative retrospective significance it imparts to the antemortem subject’s projects has been considered counter-intuitive by some writers (e.g. Glannon, 2001; Belshaw, 2009) but it may be the most promising attempt made to date to explain how death can do us any harm when it removes us from the realm of subjects.2 Even if we possessed a list of the names of every single human being who has ever lived, it is hard to see what genuine usefulness that would
be either to the living or to the recorded dead. Blustein suggests that remembering is ‘a chief means by which we overcome the finality of death’ and that it expresses respect for the deceased ‘insofar as it is a response to “the dignity of one’s having existed”’ (Blustein, 2008, pp. 269, 272). He adds that because we have a wish to be remembered after our own death, we ought to remember the people (or at least, those linked to us by close familial or social ties) who have gone before us (Blustein, 2008, 276f.). It is certainly unpleasant to imagine ourselves being forgotten even by our loved ones after our death, but the reason for that does not appear to have much to do with thoughts of dignity; rather, it turns on the fact that ‘Love is not love/ Which alters when it alteration finds’—including the alteration that consists in moving from life to death. Love that evaporates shortly after its object has disappeared from the scene looks as though it can never have possessed much depth. Indeed, the ‘dignity of having once existed’ is more plausibly seen as grounded in the status of deceased human beings as formerly living Kantian persons, ends in themselves whose lives were of intrinsic and not merely instrumental value, than in their retaining a place in memory. Remembering the dead may be good for other reasons but the ‘dignity of having once existed’ should be fully and unconditionally acknowledged as belonging to all once-living human beings, whether they are individually remembered or have joined the ranks of the nameless dead. Pace Blustein, most people, though they may dislike the ideas of being quickly forgotten by the people they cared about in life or of becoming the posthumous subjects of ill fame (a matter we shall return to below), are probably not much concerned by the thought that they and their deeds will eventually pass into oblivion—they have far too many things to worry about in life to fret much about the prospect of finally joining the ranks of the nameless dead. However, there are some exceptions. A well-known example is the Roman poet Horace (Quintus Horatius Flaccus), who composed his verse meaning it to be a ‘monument more durable than bronze’ (monumentum aere perennius) that should keep his memory fresh for centuries (Carmina Bk.3, Poem 30). Rather more unusual was the eccentric arsonist Herostratos of Ephesus, of whom Valerius Maximus relates the following curious tale:

There was a certain man who determined to burn down the temple of Diana at Ephesus, in order that the destruction of so magnificent a work should spread his name throughout the world. He confessed this intention later when on the rack. The Ephesians wisely decided to abolish by decree the memory of such a man; but the eloquent Theopompus has named him in his histories.3

Herostratos was executed for his pains, as he fully expected to be, and so paid an extraordinarily high price for his posthumous fame; but, given his strange psychology and intentions, he presumably judged his life to have been a success. Authors of historical biographies do not as a rule write in order to satisfy whatever ambitions their subjects may have had to be posthumously remembered but because they regard their subjects’ lives as having been notably interesting or impactful on others, in good ways or bad. (Biographers of ‘celebrities’ may, indeed, also write in order to make money.) The fact that biographers naturally focus on ‘exceptional’ people may create moral unease if it appears to imply that some people are of more intrinsic worth than others are. Queen Elizabeth I was no more important a member of the Kingdom of Ends than was the lowliest peasant in her dominions. But the aim of historical biography is not to assert or imply that some people ‘matter’ more than others, in the sense of meriting more ‘recognition respect’ than others do, but to describe and analyse the actions of those individuals who, by virtue of their circumstances, status or abilities, have made an unusually large impact on the lives of their fellows.4 The blunt fact is that a person born in a prince’s palace usually has far more opportunities to affect others’ lives for good or ill than another who is born in a hovel.5 Biographers are not always responding to fame in selecting their subjects: they may also be creating it. Sometimes this involves bringing before the public eye previously little-known or underrated individuals whose deeds deserve to be recognised as much as those of others who have hitherto attracted the lion’s share of attention. Every British schoolchild is familiar with the story of the charismatic Florence Nightingale, whose pioneering nursing efforts during the Crimean War of 1854–56
saved many British soldiers from dying of wounds or sickness; but only recently has the equally impressive work of the British-Jamaican nurse and businesswoman Mary Seacole become widely known (see, e.g., Robinson, 2004; Ramdin, 2005). The recognition of the bravery and humanity of this black woman who, without any help from the authorities, established the ‘British Hotel’ behind the battle-lines at Balaclava to supply home comforts and motherly love to the suffering troops, is significant both as an act of restorative justice and as a corrective to an earlier neglect of black history which was as unscholarly as it was shameful. Historical biographies provide us with inspiring examples such as Mary Seacole and Martin Luther King, and cautionary examples such as Adolf Hitler or Josef Stalin. Whether the authors of such works voice approval or disapproval of their subjects, or prefer to keep their moral feelings to themselves, they ought always to be objective, honest, and prepared to tell the whole truth and not some redacted or bowdlerised version of it. Historical biographers should be neither panegyrists nor character-assassins but tellers of the plain, unvarnished truth. Their subjects should emerge from the pages as they actually were, and not as the authors might have wished them to have been, where that is something different. Admittedly, biographies that are scrupulously neutral or dispassionate in tone can sometimes make dull reading; they may seem to lack that ‘human touch’ that is part and parcel of normal human confrontations, whether in the flesh or in print. Pace Williams, there are no good reasons why the biographer of Louis XIV should not express moral indignation with his subject if he thinks he merits it. But if he is indignant, he should take care to justify his negative opinion of le Roi Soleil on the basis of the evidence.

2 Historical Biography and its Pitfalls6

Who steals my purse steals trash …
But he that filches from me my good name
Robs me of that which does not enrich him,
And makes me poor indeed.
(Othello, Act 3, Sc.3, ll.161–165)


Few people would disagree with Iago that a good name is a valuable possession, and that it behoves us to be careful when speaking or writing about the reputations of others.7 It is true that some people, of whom Herostratos is a prime instance, may care more about being remembered by posterity than about how posterity chooses to remember them. But no one likes to be made the subject of unflattering judgement, even where the practical consequences are expected to be minimal. Although we will not then be around to be pained by anything that is said about us, most of us dislike the thought that we might be subjects of bad-mouthing after we are dead. A partial explanation of this may be that, as social beings living in close contact with others, we care what others think about us; not to do so would be a recipe for alienation. As Alasdair MacIntyre remarks, ‘I am part of their story, as they are part of mine. The narrative of any one life is part of an interlocking set of narratives’ (MacIntyre, 1984, p. 218). This echoes the well-known words of John Donne: ‘No man is an island; every man is a piece of the continent, a part of the main.’ Individual death is not the same thing as social death, so long as our life and personality retain a presence in the shared memories of the social groups to which we have belonged. But even if caring about our posthumous reputation should be, in view of our permanent unconsciousness beyond the grave, in some degree an irrational attitude, it is near to impossible to divest ourselves of such concern. It is just too deeply engrained in us for that. It would be strangely inhuman not to be pained by the thought, for example, that some malicious enemy will spread slanderous tales about us after we are dead, causing even our loved ones to revise their previous good opinions of us. Truth-telling is normally considered a duty for everyone and not just for historians, to be breached, if ever, only in circumstances of serious necessity.8 Hence the historical biographer is morally bound to tell the truth, and nothing but the truth, about a deceased individual she writes about. But more than this, the biographer who lies or conveys a slanted picture in order to satisfy some personal grudge or promote some political programme does not deserve the honourable title of ‘historian’ at all. ‘History’ and ‘fiction’ denote entirely distinct genres of writing. Anyone who writes about a person now deceased, whether death occurred recently or long ago, bears the normal responsibilities of the moral agent to be
accurate with the facts. But this responsibility rests with especial weight on the shoulders of the historical biographer, because historical biography is a particularly sensitive area of enquiry from a moral point of view. This is because the reputations of the dead are unusually vulnerable to abuse, for three significant reasons. First, a person who is accused of a crime or offence after her death is not able to speak or offer evidence in her own defence, while her inability to defend herself leaves her more exposed than a living person would be to false or exaggerated accusations. Next, a dead person is unable to offer any explanation for her acts or to draw attention to any circumstances that might mitigate her blameworthiness. Nor can she detail any steps she may have taken to avoid or to conquer her evil temptations. Finally, a deceased person cannot now apologise, make reparation to, or ask forgiveness for the harm she has caused to other people. She lacks the living offender’s opportunity to make her peace with her victims and society, undergo appropriate punishment or turn over a new leaf. Her reputation will forever be that of a guilty person, never a repentant one. These considerations may incline conscientious historians to forego writing about individuals altogether and turn their attention to morally ‘safer’ subjects such as social trends, economic conditions or the history of ideas. Alternatively, in order to reduce the chance of traducing a dead person  unwittingly, they may adopt the working principle of treating unflattering accusations against their subjects with a pinch of salt unless they can be proved by irrefutable evidence; they may also systematically give subjects the benefit of any doubt that may arise. Such evasive approaches, however, are not really satisfactory. While they may be intended kindly, they run the risk of distorting the truth, representing villains as good men and women and downplaying the harm they did to the innocent people who suffered at their hands. In an intriguing passage in The Metaphysics of Morals, Kant recognises an obligation to avoid traducing the reputations of the dead but rejects the traditional precept that we should say only good things about the dead, suppressing any facts which would be harmful to their reputation (‘de mortuis nihil nisi bene’) (Kant, 1991, p. 112). In Kant’s opinion, ‘a well-founded accusation against [a dead person] is still in order’, and an individual who has led a bad life has no right to be spared just censure.


People who have lived morally deserve to be given credit for the fact, and they are wronged, Kant holds, if calumniators subsequently seek to undermine their honour. (Interestingly, Kant does not say simply that those who calumniate the dead act wrongly, but more specifically that they wrong the dead). Even though posthumously suppressing the truth about a deceased subject's faults may have no negative practical effects on him, Kant's view is that it would be bad for our moral accounting; it would also be wasting an opportunity to convey some salutary warnings and useful lessons to the living (Kant, 1991, p. 112).9 On the Kantian theory, a person is not treated disrespectfully by having a 'true bill' testifying to his offences laid against him after death, provided this is not done at the prompting of malice; on the contrary, to make a 'well-grounded accusation' against a dead subject is to show respect for him as an autonomous moral agent who can be held responsible for his actions. Kant does not specifically say that it would also be wrong to praise a person beyond his deserts after his death, but he would surely not have countenanced any representation of a deceased person that amounted to telling lies about him. Even giving a dead person the benefit of the doubt where his credit is in question would presumably have fallen under this ban if it required the suppression of unfavourable evidence. The common moral intuition that the dead should not be slandered reflects a broader moral intuition that the dead ought to be respected by the living. It may not immediately be obvious that this broader intuition outlaws in equal degree the according of unwarranted praise to a deceased subject. Yet lauding a dead person where no praise is due, or praising him beyond his merits, is an instance of misplaced generosity, since it fails to treat him seriously as a fully responsible moral agent who deserves to be assessed strictly according to his record. Historical biographers, then, should be scrupulous in relating past lives as accurately as the evidence allows, and, where that evidence falls short, in clearly signposting as authorial speculation any attempt to fill in the blanks. But it is not only writers who lay claim to the title of historians who should not play fast and loose with the reputations of the dead who are no longer able to speak up for themselves. A writer of historical fiction which touches on the lives of real people may be allowed a certain latitude in regard to the
truth, so long as it is made clear where the story departs from the historical facts or relies on imagination rather than hard evidence. Even here, however, there are limits to what is morally acceptable. One well-known example of a work which may fairly be held to have transgressed those limits is Peter Shaffer’s 1979 play (and later a film) Amadeus, which left in the audience’s mind a very doubtfully true image of the composer Antonio Salieri as the jealous poisoner of his musical rival Mozart. Shaffer based his ‘reconstruction’ of the events surrounding Mozart’s death in 1791 on the testimony of certain mischievous rumours which circulated at the time to explain the composer’s unexpected death at the early age of 36. Yet Shaffer must, or at least ought to, have known that modern medical authorities consider the symptoms of Mozart’s final illness to have been much more consistent with death from some type of organic disease than with death from poison. The unfortunate consequence of Shaffer’s play is that the reputation of the almost certainly blameless Salieri has been almost irretrievably damaged. Since Salieri plainly did not deserve this fate, he is wronged by this misrepresentation of his character and actions.10 If the ungrounded, or under-evidenced, denigration of a deceased person is morally wrong, then redeeming the reputation of a dead subject who has been unfairly traduced or erroneously under-valued would seem to be, by contrast, a morally laudable exercise. And so it is, provided that the defence is conducted with a view to doing justice and not from a mere spirit of contrariness, as an exercise in romanticism, or as an intellectual jeu d’esprit. The responsible historical biographer should resemble more the judge in a criminal trial than the counsel for either the prosecution or defence. Where a dead person’s reputation has been unfairly injured, whether on account of malice, prejudice or simple mistake, it is a basic act of justice towards that individual, as well as a needed correction of the historical record, to try to put the matter straight. Yet in attempting to repair a damaged reputation, it is important not to err in the opposite direction and display the subject in glowing terms that are equally unmerited. This appears to have happened in the case of King Richard III (reigned 1483–85), who was long considered to have been one of England’s most evil kings but has in more recent times been argued by some to have been more sinned against than sinning. There is even a society dedicated to defending King Richard against centuries of
denigration that began—or so it is alleged—with his defeat in 1485 by the founder of the Tudor dynasty (subsequently King Henry VII) at the Battle of Bosworth. Since Tudor times, the standard image of Richard in most people's minds has been that of the maleficent, envious hunchback immortalised by Shakespeare in the most melodramatic of his historical dramas. The Richard III Society was created in 1924 to overturn this conventional image and presently claims an international membership of over 4000. The Duke of Gloucester, its current President, writes on its website that 'even after all these centuries the truth is important', insisting that 'It is proof of our sense of civilised values that something as esoteric and fragile as a reputation is worth campaigning for' (http://www.richardiii.net/). These are admirable sentiments, but it is less obvious that they are appropriately applied in this case. This is not the place to engage in the controversy about King Richard III, although it may be noted that a majority of experts on the period view with considerable scepticism the attempts of 'Ricardians' to exonerate the King from some of the most serious accusations traditionally levelled against him, including that of ordering the murder of his two young nephews in the Tower of London (the famous 'Princes in the Tower'). Even if Shakespeare's Grand Guignol character is as much a figure of fiction as Shaffer's Salieri, it takes a lot of special pleading and selective reading of the evidence to defend the real Richard against the accusations of usurpation, tyranny and judicial murder that in two short years made his reign unbearable to many and caused a majority of the nobility to defect to the pretender Henry Tudor (whose own claim to the throne was of the weakest). Given the weight of the adverse evidence against him, the suspicion must be that some at least of Richard's defenders are engaged more in an entertaining flight of historical whimsy than in a genuine campaign to redress an injustice. Where re-working a long-dead subject's reputation relies on make-believe, its ethical bona fides is questionable. Reputations, especially of the dead, may be, as the Duke of Gloucester reminds us, fragile things (although it is less clear in what respect they are 'esoteric'), but that should prompt historians to try their hardest to get them right, not to defend them à outrance, regardless of the truth.


3 Reputation and the Passage of Time

Assuming that the dead retain a claim to moral consideration, it is a further question whether the force of that claim weakens over time, eventually fading out to nothing. Would it be worse to tell a scandalous tale about the late Diana, Princess of Wales, who died in 1997, than one about Cleopatra, Queen of Egypt in the first century BCE? One relevant factor to be considered is the effect that spreading such a slander would have on the feelings of surviving relatives, friends or admirers of the dead women, and there is an evident difference here: many living people still hold Diana in affectionate memory, whereas no one now alive has any personal memories of Cleopatra. But this line of thought should not be over-stressed because, as we have seen in the case of Richard III, it is possible for people to care deeply for individuals who died long before their own time. A slander spread about the Egyptian Queen would probably cause great offence to many present-day Egyptians who are properly proud of their ancient heritage.11 Leaving aside all considerations of the impact on living people of what is said about the dead, the philosophically more interesting question is: does Time, the great destroyer, root up moral status too? Abstracting from any impact it may have on others, is a slanderous story told about Cleopatra less morally significant than one told about Diana, simply because Cleopatra has been dead for so much longer than Diana? To this question, the answer, I believe, is that the passage of time makes no moral difference at all. Time weakens memory, but there is no good argument for thinking that it likewise diminishes moral status. As Kant himself implicitly acknowledged in the passage from The Metaphysics of Morals cited earlier, the status of the dead in the Kingdom of Ends is not easy to characterise, though he proposes that while a dead person no longer exists as homo phaenomenon, an object in the world of sense, he might still be considered to exist as homo noumenon, retaining some kind of reality in the realm of Things-in-Themselves that are outside time and space (which are merely 'forms of intuition', the sensory modalities through which we face experience) (Kant, 1991, pp. 111–12). Whether or not we favour (or think we understand) the Kantian notion of a
noumenal realm, it is in any case unclear why whatever entitles a recently-deceased person to the respect of the living should be affected either positively or negatively by the mere passage of the years. People may be forgotten over time and the physical traces of their existence may disappear, but the right, if they are remembered, not to be made targets of slander or other forms of disrespectful treatment (including desecration of their tombs or cavalier treatment of their physical remains) does not appear to be the kind of thing that is subject to temporal decay.12 It is true that rights can be lost or lessened in certain circumstances, but then something has to happen that cancels or suspends a right: e.g. a man may temporarily lose his right to social liberty by committing a serious crime for which he is imprisoned for a term of years. Death, indeed, necessarily cancels some of the moral and legal rights that a person enjoys while alive, such as the right not to be enslaved, or the right to marry or raise a family. But the rights to respectful treatment that pertain to a person on account of her human characteristics, or her membership of the human species, or her membership of the Kingdom of Ends, or her timeless existence in the noumenal realm, do not appear to be vulnerable to the mere passage of time. Although the right not to be slandered or exposed to various other kinds of disrespectful treatment after death appears not to be affected by simple lapse of time, it may seem that it might be affected according to the subject's lifetime wishes concerning the posthumous future. Imagine that someone were to say: 'I care very much what people will think and say about me in the next generation or so following my death, but beyond that I'm not really much interested. That's because I don't feel sufficiently connected with the people who will be alive in, say, 200 or 300 years from now to be worried about what they think of me, or whether they think about me at all.' Where retaining a good reputation for a long time after death forms no part of a person's wishes concerning her posthumous future, then it may seem that the moral brakes can come off, or at least slacken, once that period of indifference has been reached. (Is this what Peter Shaffer assumed about Salieri when he made the long-dead composer the villain of Amadeus?) Yet if Kant is correct that a member of the Kingdom of Ends remains worthy, whether she realises it or not, of being treated with the respect due to her status, and therefore ought to be so
treated, then any disrespectful treatment will be morally offensive whenever it occurs. On this picture, the person who professes not to care what people will say or think about her several centuries after her death is in effect disrespecting herself, under-valuing her own status as an end-in-herself. Thus someone who slanders Cleopatra today not only sins by telling a lie but also wrongs the dead Cleopatra. Kant would want to add that the slanderer further offends against humanity as a whole, by implying that it is acceptable to treat ends-in-themselves in this disrespectful manner. To repeat a question posed at the start of this chapter, although posthumous slander and other forms of disrespect are morally out of order, might it be the case that some people deserve to be forgotten on account of their own misdeeds or for other reasons having to do with the public good? If Kant is right that taking people seriously as responsible moral agents is incompatible with posthumously suppressing the details of their good and their bad acts, then it looks as though deliberately condemning them to oblivion will normally be out of order. Yet it might seem that an exception could be made where a particular individual's life would provide a potentially dangerous example to certain present-day individuals or groups, perhaps by inspiring them to perform acts of a kind detrimental to social peace and harmony. Or one might sympathise with the councillors of Ephesus who sought, albeit unsuccessfully, to suppress the name of Herostratos in order to thwart his goal of achieving eternal fame by an act of sacrilegious arson. To the historian, however, such deliberate obfuscation of the facts can never be a favoured option. Revelation, not suppression, is the historian's business, and while an historian ought never knowingly to purvey scandalous misrepresentations of individuals, she would be a traitor to her trade if she concealed details of past crimes which might inspire copy-cat acts by contemporary actors. At this juncture it may look as though the historian's role as steward of shared memories is potentially in conflict with the duty of every citizen to protect the public welfare. But this theoretical possibility is not really very alarming. Knowing the biographies of bad people is important to knowing how best to respond to such people, should their like arise again. Moreover, any risk created by the revelation of past crimes would seem vastly outweighed by the value of the insights into human behaviour that emerge
from an honest, unblinking confrontation with the facts. Collingwood's dictum that we come to know what men are like by learning what they have done forbids any varnishing or censorship of the record. Historians should tell things as they believe them to be without fear or favour, presenting a rounded picture of human beings that is true to our shared memories.

Notes

1. The fact that shared memories may exhibit dissonance is one reason why teaching history to students from different ethnic or national backgrounds can be a very challenging task. Where students recognise different histories as 'theirs', or attribute different significances to historical events or personages, to assume any particular viewpoint may appear to be question-begging. But the worst thing is to try to fudge a compromise perspective so anodyne that it risks upsetting no one. Conflicts of vision need to be confronted and worked through, not suppressed. A school or college history curriculum should not attempt to mask or eliminate the problems created by dissonant memories but meet them head on, in the hope that some level of mutual understanding will be reached.
2. I discuss at greater length the rationality of caring about one's posthumous reputation in Scarre (2001). See also Scarre (2013a) on the broader issue of the vulnerability of the dead.
3. Valerius Maximus, Factorum et Dictorum Memorabilium Libri Novem, Bk.VIII.14.15. (Author's translation).
4. Here I refer once more to Stephen Darwall's distinction between the 'recognition respect' which is due in equal measure to every human being as a member of the kingdom of ends, and the 'appraisal respect' which is due to people in differential degrees according to the quality of their acts and achievements (Darwall, 1977).
5. This is not to say that the lives of 'ordinary' people are not interesting—and sometimes more interesting—to the modern reader than are the lives of more conventionally 'notable' persons. Unfortunately there is usually insufficient information about past people who lived out of the public spotlight to make the writing of their biographies possible. But there are exceptions, where surviving records make the reconstruction of
'ordinary' lives possible in some degree of detail. One much-hailed example is Iris Origo's The Merchant of Prato, which tells the story of a fourteenth-century Italian merchant, drawing on the unusually rich surviving archives of his business house (Origo, 1957). Origo's merchant was a man of what today we would call the middle class, and a member of an increasingly literate society. Understandably, evidence bearing on the everyday life of individuals occupying the lowest rungs of the social ladder, and who were rarely literate, is much harder to find. But occasionally it does exist. A pioneering study which makes use of such material is Emmanuel Le Roy Ladurie's Montaillou: Cathars and Catholics in a French Village, 1294–1324. Basing his work on the extensive records of Dominican inquisitors investigating heresy in the French Pyrenees at the turn of the fourteenth century, Le Roy Ladurie provides an intimate portrayal of the lives of several of the individual villagers (Le Roy Ladurie, 1980 [1975]).
6. For previous discussion of some of the issues in Sections 2 and 3, see Scarre (2012) and Scarre (2013c).
7. Iago's words deserve more credit than the man: the quoted words are spoken with a view to currying favour with his master Othello, whom he plans to deceive.
8. Plausibly one may, for instance, permissibly tell a lie to a would-be murderer who asks us to tell him where the innocent victim he is pursuing is hiding. Kant notoriously denied that lying was legitimate even in this dire situation, but even many Kantians have found this a very hard line to swallow (see Kant, 1909b; Korsgaard, 1986).
9. Kant concedes that the idea of a person retaining a good or bad reputation after death ('when he no longer exists as homo phaenomenon') is metaphysically problematic and hard to explain. I say more about this in the text below. However, at a slightly less abstract level, Kant insists that anyone is entitled, via 'the right of humanity as such', to take on the role of apologist for the dead (Kant, 1991, p. 112), this role not being restricted to those relatives or friends of the deceased who may be pained or disadvantaged by any falsification of his character. It is worth adding to this the psychological observation that caring about how the dead are remembered is a basic and inescapable feature of our nature as social beings whose personal narratives interlock with others' narratives, including those of our social predecessors and successors.


10. The 2018 film 'The Favourite' is a more recent example of a dramatic representation of an historical relationship—in this case that between Queen Anne (reigned 1702–1714) and her friend and 'favourite' Sarah Churchill—which wittingly falsifies the facts. That Sarah Churchill possessed a domineering temperament and sought to establish an ascendancy over the gentler-minded queen is a matter of record, but the portrayal of Anne as a blubbering, weak-willed near-simpleton who sought comfort in the company of her pet rabbits is a cruel invention.
11. The often-violent reactions that have been experienced in recent years to criticism of the founders of certain religions offer even more dramatic illustration of the power of the long-dead to arouse strong emotions in the living.
12. I have discussed the ethical questions surrounding archaeological investigation of human remains in a number of previous writings, including Scarre (2006) and Scarre (2013b). See too Sayer (2010).

10 Postlude: ‘Consider the Ant’

At the close of the final episode of his much-acclaimed 1969 television series Civilisation, Lord Kenneth Clark, in musing mood, observed that 'Men haven't changed much in the last 2000 years. We ourselves are history.' Twenty-first century critics may wish to quarrel with Clark for choosing to focus exclusively on Western civilisation in his survey of the past two millennia, and also for saying relatively little about the contributions made by women to Europe's cultural evolution; nevertheless, the liberal and humanistic spirit that infuses the 13 hour-long programmes remains inspiring and attractive. Clark's aperçu that we ourselves are history is a reminder that there is no end to the human story nor are there any ahistorical viewpoints; we are all parts of a continuously evolving saga, links in a chain connecting past to future. People today may know a lot more about the nature of the world than their forebears mostly did, yet the last twenty centuries have witnessed little conspicuous change in patterns of human behaviour, and scant growth in practical wisdom. It is usually more plausible to refer what changes have taken place to variations in the contexts of action than to alterations in human physical and psychological dispositions. As Herbert Simon explained half a century
ago in The Sciences of the Artificial, the behaviour manifested by any system, whether natural or artificial, is a function not only of the system's operating principles but also of the complexities of the environment in which it operates (Simon, 1969, pp. 24–25). An ant that is programmed to return to its nest by the most direct route follows a more elaborate path when its way lies over stony ground than it does when its way lies over smooth, but it would be wrong to suppose that in these two instances the insect performed on different principles. And much the same thing goes for human beings.1 A Stone-Age warrior went into conflict with a stone axe or a flint dagger, the must-have weapons of the day; a modern military commander directs a drone attack on an enemy hundreds of miles away from the keyboard of his computer. The difference in technique should not be allowed to disguise the fundamental similarity of the martial behaviours involved; in both instances, the foe is attacked with lethal intent using the most effective armaments available. In asserting that human beings have not changed much in 2000 years, Lord Clark took a diachronic perspective on our species. The parallel synchronic perspective holds that human beings who are alive concurrently are also fundamentally alike both physically and psychologically, notwithstanding differences in their life-styles stemming from their disparate cultural, technological or topographical environments. That human beings are essentially similar in their physical and mental attributes, and for that reason should be considered to be subjects of equal moral worth, did not await discovery by modern human-rights campaigners, nor was it even the product of eighteenth-century Enlightenment thinkers. In the West it can be traced to the earliest Christian Fathers who proclaimed that every human being has an immortal soul and is beloved by God. Yet neither the descriptive claim that people of different times and places are basically alike in their natural characteristics nor the axiological claim that human beings possess equal moral worth has been allowed to pass without challenge. Both have proved distasteful to the self-seeking and unscrupulous who look to differences, whether real or purported, for pretexts for furthering their own good at the expense of other people's. And since one cannot consistently accord to others whom one concedes to be essentially like oneself a lower degree of recognition respect than one demands for oneself, a perennially
popular strategy with would-be abusers has been first to dehumanise the people whom they desire to exploit. The strategy of dehumanisation was brought by the Nazis to a fine art, but they were by no means the first to practise it. During the early Spanish incursions into Latin America and the Caribbean region, many of the conquistadors, as also the Iberian settlers and slave-traders who followed in their wake, denied that the 'Indians' who inhabited those lands were fully human. For non- or sub-humans could be treated as animals and had no claim to the moral or legal rights possessed by Christian Spaniards. Fortunately, this inhuman treatment of human beings before very long aroused protest back in Spain, where fierce opposition to the colonists' cruelties was led, to their eternal credit, by certain members of the Dominican order of friars. In a blistering sermon delivered in the island of Hispaniola at Christmas 1511, the Dominican preacher Antonio Montesinos uncompromisingly denounced the Spanish invaders and settlers who had killed, tortured and enslaved the native population of the newly-discovered lands:

With what right and with what justice do you keep these poor Indians in such cruel and horrible servitude? By what authority have you made such detestable wars against these people who lived peacefully and gently on their own lands? Are these not men? Do they not have rational souls? Are you not obliged to love them as yourselves? (Las Casas, 2004, p. xxi)

Montesinos’ condemnation of the excesses of the conquistadors was subsequently reiterated to even greater effect by his fellow-Dominicans Francisco de Vitoria and Bartolomé de Las Casas. Eventually, the restless advocacy of these churchmen led to the passing of laws that accorded equal legal protection to all subjects of the Spanish crown, irrespective of race—though, sadly, this legislation was never easy to enforce given the remoteness from the Americas of the central government in Madrid (Parry, 1964, pp.  303–19; Las Casas, 2004). What is chiefly to be remarked on here is the uncompromising declaration by Montesinos and the friars who followed him that there exists a universal human moral community of which both Spaniards and Indians are equal members. This idea is strikingly reminiscent of the Kantian conception of the
Kingdom of Ends, though its roots trace back to the Gospel injunction to love our neighbour as ourself. For these Christians, there were no second-rate human beings, no Untermenschen fit only to be treated as means and never as ends. And to fail to grasp this fact was to fail to comprehend one's own humanity, because one failed to recognise in others the features that formed the ground of one's own value and one's preciousness in the eyes of God. No wonder that Las Casas could accuse the abusers of the Indians of 'infringing natural and divine law and thereby conniving at the gravest of mortal sins' (Las Casas, 2004, p. 7). Because Spaniards, who had had the benefits of a Christian upbringing, could reasonably be expected to know what divine and natural law commanded, no room was left for ignorance as a viable excuse for their wrongdoing; for Las Casas, there were none so blind as those who wilfully would not see.2 Las Casas's refusal to allow that the Christian abusers of the Indians had a valid excuse for failing to acknowledge the humanity of their victims and the moral side-constraints that such acknowledgement imposes, is in startling contrast to the reluctance of some modern relativist philosophers to pass judgement on past people who inhabited cultural and ideological environments which differed in greater or lesser respects from our own. As we have seen, Bernard Williams maintained that the impossibility of having a 'real encounter' with people of the past means that none of our moral judgements of them (with the arguable exception of some judgements of justice) really carries any weight; since all human beings live 'under culture' and past cultures are both temporally and conceptually remote from our own, we should observe the 'relativity of distance' and refrain from waxing indignant about Louis XIV or other seeming villains of former times. Yet it is perfectly possible to accept Williams's claim that 'We see things as we do because of our historical situation' (Williams, 2005, p. 66) without supposing that in granting this we commit ourselves to the idea that there must be a radical moral discontinuity and lack of ethical connection with people who lived centuries before us. It has been a running theme of this book that the human moral community is much less temporally sectionalised than proponents of the relativity of distance assert, and that the legitimate 'sweep' of moral judgements is therefore much greater than Williams and his followers have
imagined it to be. To be sure, due allowance must always be made for the cultural, social, religious and political factors which form the mise en scène in which agents deliberate and act, and there is much to be said for Fricker's view that 'Blame is inappropriate if the relevant action or omission is owing to a structurally caused inability to frame the required moral thought' (Fricker, 2010, p. 167). But the disabling power of such inhibiting structures can easily be exaggerated, as Fricker herself is quick to concede. It may be too sanguine to maintain, as Lukes does, that 'Reasons are available to all who can reason and cannot be completely internal to a particular way of life of culture' (Lukes, 2003, p. 7); for reasons that are in principle 'available' to a reasoner may be hard to spot or to find convincing where they conflict with prevalent conceptual structures. Nevertheless, as Las Casas observed about the Conquistadors (and many commentators have likewise emphasised in regard to the Nazis), all too often moral blindness and ignorance are the evil fruits of a lazy or selfish neglect of moral ideas which are sufficiently plain, if only agents are prepared to look. Conquistadors, Nazis, slave-traders and the many others who have ill-used their fellow-men and -women should not be considered to be outside or beyond the scope of our moral appraisal on the basis of the principles of the relativity of distance or of blame. But the question might still be asked whether we can ever attain an adequately intimate grasp of what it was like to be alive at some former time to be entitled to venture historical moral judgements with any degree of confidence. This is what I earlier termed the 'problem of access', and it has been another constant theme of this book that while the problem does indeed need to be taken seriously, it would be a bad mistake to believe it to be insoluble. In fact, if the past were as inaccessible as the sceptic supposes, then the whole enterprise of history-writing would be impossible and the profession of historian vain. Historians work on the reasonable assumption that people of the past were sufficiently like ourselves for their motives and actions to be interpretable on much the same psychological principles as we apply to understanding ourselves and our contemporaries. In making this assumption, they implicitly subscribe to Simon's thesis that behavioural variations are not always referrable to the operating principles of an active system (in this case human beings), but are frequently explicable by
the differential demands which disparate environments impose on that system. Our lives in today’s fast-moving, high-tech world may be a lot more complex than those of the inhabitants of a Neolithic farming settlement but this is no evidence that we are physically or psychologically different from those earlier people in any significant respects.3 Lord Clark’s observation that people haven’t changed very much in the last two millennia is in fact an understatement; human beings have probably been very much what they are now for the last 40,000 years or more. Although there is no good reason to suppose that human evolution has yet come to a stop, alterations in the human genome occur and spread so slowly that they have failed so far to leave any detectable mark in the historical record.4 If human beings have not changed very much in their natural characteristics over the last many thousands of years, then neither, it may reasonably be supposed, have the basic conditions of their flourishing. I have sought to defend in this work a naturalistic theory of the human good which locates in the factors which cause human lives normally to go well or badly a basis for moral discrimination which transcends the limits of space, time and cultural difference. The work of such writers as Foot, Rawls, Wong and Velleman shows how by paying attention to the things that people naturally need, enjoy and aspire to it is possible to ground a theory of the right in a theory of the human good which is applicable to more cultural and historical situations than just our own. Before we can safely accord blame or praise to individual agents of our own or of any other age, we need, of course, to be confident that we understand both what they were doing and what they took themselves to be doing. To condemn the Aztecs for their human sacrifices, as Putnam for instance does, without carefully scrutinising the reasons by which the Aztecs justified this practice, is to utter moral judgements in the dark. Human sacrifice is assuredly an ethically terrible practice; but the religious and cosmological beliefs which lay behind it make moral criticism of the Aztec priests who performed the grisly ceremonies a ticklish business. Here Fricker’s principle of the relativity of blame can be adduced to some exculpatory effect. Yet while the Aztec priest who cut the heart out of a living prisoner may have had some excuse for his action in his tragically mistaken view of the world, it would be hard to believe that the life he
lived was a life of the most flourishing kind. In treating his victims as means rather than as ends in the most brutal manner imaginable, he sidelined the kinder human feelings (Lincoln's 'better angels of our nature') and undercut the basis of his own self-respect. For if human beings were fit to be abused in such a barbarous manner, then what could there be about his own human nature that remained for him a subject of pride? That all human beings possess a value that is not merely instrumental was Las Casas's theme as much as it was Kant's. Because all individuals are equally important in God's sight, and ought to be similarly so in human eyes, sociability should have no limits: all men and women are brothers and sisters, and if they cannot always be brought to love one another they should at least be prepared to treat one another in a respectful manner. Las Casas's refusal to let any of the Conquistadors off the moral hook by pleading that they were only acting as others acted, or that such treatment of the Indians had become the institutionalised norm, or that the Indians were only questionably human anyway, should provide encouragement to historians of the present day who are doubtful about their own right to pass moral judgement on past actors, arguing that the moral reasons that strike us as powerful today may not have been available, or appeared so weighty, in earlier times. The fact that men like Montesinos and Las Casas accorded quite as much force to the moral arguments against mistreatment of the weak and vulnerable as we would do today, and—more notably still—that they regarded those arguments as being readily accessible to those among their contemporaries who chose to regard their ethical and religious duties, should buttress the confidence of historians to pass their own moral judgements on past agents. It needs to be emphasised that the moral appraisals that we make of our own contemporaries and the reasons that we give for them exhibit considerably more similarity to those made by morally reflective people of the past than many sceptical or overly-cautious philosophers and historians appear to imagine. (This is something we saw to be so when examining St Antoninus's catalogue of offences against sociability). If this fact about convergence-in-judgements is often overlooked, a major reason is that it is easy to be struck more forcibly by the points of contrast than by the features of resemblance between past and present practices.


Probably the Aztecs had a great many moral ideas that were not very different from our own, but it is the exotic phenomenon of human sacrifice that catches our attention and inveigles us into thinking that the Aztecs were ‘not at all like us’. Below the more dramatic or eye-catching features of difference there are generally to be found a range of less conspicuous commonalities. This fact should not really be surprising. Recall Velleman’s observation that norms suitable for social living will express pro-social rather than anti-social attitudes, ‘in the sense that they will favor material benefit over material harm’ (Velleman, 2015, p. 95). The fact that human beings have similar needs and interests wherever and whenever they live means that some norms will tend to be selected repeatedly on account of their capacity to promote social living: a plausible list of these might include, among others, norms that require that promises should be kept, that lies not be told, that money borrowed be repaid, that persons’ homes and property be respected, and that random violence be discountenanced. Although the comparison may sound unflattering, moral ideas may be said to be in some respects like germs. A moral idea may be narrowly distributed or it may be widespread, even reaching pandemic proportions. It may vanish for a time and spring up again in unexpected places. More or less successful attempts may be made to eliminate or confine it; it may be robustly resistant to suppressive measures or weakly vulnerable to them. It may retain its identity relatively unchanged for a long period or quickly evolve into new forms or spin off successors. Less like a germ, perhaps, a moral idea may arise in different places and at different times that have no direct communication with one another. This is not very wonderful given the common nature of the ‘host’ that entertains it: man, the social animal, who, being an individual of limited capacities, needs to figure out ways to live alongside others in a give-and-take relationship that brings net benefit to all. Because there is not one form of life that is best for all human beings, there is scope for variety in moral frameworks which serve the same ultimate purpose of promoting the well-being of creatures with a nature of our kind. Any moral system that is inefficient in promoting this end is open to criticism; and the criticism may come quite as appropriately from outside the system as from within it.


The fundamental moral principle which, I suggest, is the touchstone by which moral systems and regimes of any time or place can be appraised is the equality of all human beings within the Kingdom of Ends. Societies that have existed back to, and even before, the dawn of history that have failed to acknowledge and live up to this basic principle may be judged to be morally defective in proportion to the degree in which they fall short of it. Moreover, such judgements of deficiency not only may but ought to be issued by those historians and anthropologists who study them, the scrutiny of past societies being a form of human encounter which, like other varieties of human encounter, calls for a response that is more than purely cognitive and analytical. As Tim Heysse remarks, the past should not be, ethically speaking, 'indifferent to us' (Heysse, 2010, p. 238), and if we cannot wax indignant with wrongdoers of former times, the fact reflects rather poorly on our moral sensibility. This book began with the account of the removal of a statue; it can appropriately end by describing the removal of another. On 8 September 2021, the 60-foot-high equestrian statue of Confederate General Robert E. Lee which had dominated Monument Avenue in Richmond, Virginia, for the past 130 years was finally taken away, to the delight of many and the regret of a few. Lee's statue was the last and largest of fifteen statues of prominent Confederate figures to be removed from the Avenue in little over a year in the wake of the Black Lives Matter protests. Erected in 1890 as a bold statement to the world that, although the Confederacy had been defeated militarily, its spirit lived on, the effigy had long been a subject of controversy. Time magazine reported Richmond Mayor Levar Stoney's comment that 'Even though Robert E. Lee was an inanimate object … of stone and granite, stone and bronze, it was a weight taken off the city's shoulders—particularly the shoulders of Black and brown people, who for over 100 years have had to toil under the shadow of one of the largest, tallest Confederate statues in the country' (https://time.com/6096224/richmond-robert-e-lee-statue/). In passing judgement on the statue, the people of Richmond are looking forward to the future, firmly declaring what must never be permitted to occur again. But the displacement of Lee's statue represents something in addition: not just a repudiation of the past but also a condemnation of it, a statement that
the cause for which Lee fought was bad and blameworthy at the time he fought for it. Moral judgement has been delivered, and the past has been found wanting. To be sure, many of the ordinary soldiers who fought on the Confederate side did so from motives of patriotism, loyalty and a confused sense that they were participating in a good and Godly struggle; but if their guilt is mitigable to some extent, the same cannot be said for those who led them on, inspired by their vicious desire to cling on to a privileged lifestyle sustained by cruelty and injustice. In the final analysis, memory without moral judgement is empty; moral judgement without memory to inform it is blind. As Santayana almost said, those who fail to recognise the moral errors of the past are in danger of committing them anew. To be indignant with Louis XIV or Robert E. Lee shows that we are ready to stand up to be counted, whatever history next throws at us.

Notes 1. ‘Go to the ant, thou sluggard; consider her ways, and be wise’ (Proverbs 6: 6). But exactly how wise is the ant? According to Herbert Simon: ‘An ant, considered as a behaving system, is quite simple. The apparent complexity of its behavior over time is largely a reflection of the complexity of the environment in which it finds itself ’ (Simon, 1969, p. 24). 2. The view that all men and women were equal in the sight of God was rarely taken, however, before quite recent times to imply that they should also enjoy equal political rights or social status. Inequality in these respects was regarded (or alleged) to be necessary for the maintenance of a stable and functioning polity that could defend itself against outsiders. ‘Take but degree away, untune that string, And hark what discord follows!’ opined Shakespeare’s Ulysses: for men to forget their station in life would be as disastrous for society as if the planets were to wander out of their accustomed orbits and bring chaos to the heavens (Troilus and Cressida, Act 1, Sc.3, ll. 83ff.). Thomas Hobbes, less poetically but with better arguments, maintained that for human beings to flourish, people had to be ready to transfer their individual natural right of governing themselves to some ‘Man, or Assembly of Men’, who would govern ‘to the end [that they

10  Postlude: ‘Consider the Ant’ 

217

would] live peaceably amongst themselves, and be protected against other men' (Hobbes, 1991 [1651], p. 121). The trouble is, as history has shown repeatedly, that where social or economic divisions are stark, it can be hard to preserve an adequate consciousness of the equality of human beings either in God's Kingdom or in the Kingdom of Ends.
3. Note, however, that unlike in the ants' case, many of the complexities of our human environment are self-created.
4. For a brief summary of recent research into whether human beings are still evolving, see New Scientist (2018, pp. 210–13).

Bibliography

Antoninus (Pierozzi), St. (1473). Defecerunt Scrutantes Scrutinio – Titulus de Restitutionibus. Barthomaeus Cremonensis.
Arendt, H. (1968). Eichmann in Jerusalem. A Report on the Banality of Evil. Penguin.
Aristotle. (1954). The Nicomachean Ethics of Aristotle (S. D. Ross, Trans.). Oxford University Press.
Aristotle. (1959). Aristotle's Politics (B. Jowett, Trans.). Clarendon Press.
Armitage, D. (2003). In Defense of Presentism. https://scholar.harvard.edu/files/armitage/files/in_defense_of_presentism.pdf. (Originally published in D. H. McMahon (Ed.), History and Human Flourishing (pp. 59–84). Oxford University Press, 2003.)
Armitage, D. (2004). John Locke, Carolina and the Two Treatises of Government. Political Theory, 21(5), 602–627.
Arneson, R. (1999). What, If Anything, Renders all Humans Morally Equal? In D. Jamieson (Ed.), Singer and His Critics (pp. 103–128). Blackwell.
Arpaly, N., & Schroeder, T. (2013). In Praise of Desire. Oxford University Press.
Audi, R. (2007). Moral Value and Human Diversity. Oxford University Press.
Austin, J. L. (1970). A Plea for Excuses. In Philosophical Papers (2nd ed., pp. 175–204). Oxford University Press.
Barstow, A. L. (1994). Witchcraze. A New History of the European Witch Hunts. Our Legacy of Violence Against Women. HarperOne.


Beard, M. (2015). S.P.Q.R. A History of Ancient Rome. Profile Books. Belshaw, C. (2009). Annihilation: The Sense and Significance of Death. Acumen. Benedict, R. (1934). Patterns of Culture. Routledge and Kegan Paul. Benhabib, S. (2002). The Claims of Culture. Harvard University Press. Bennett, J. (1974). The Conscience of Huckleberry Finn. Philosophy, 49(2), 123–134. Benson, J. (1983). Who Is the Autonomous Man? Philosophy, 58(1), 5–17. Bentham, J. (1988 [1776]). A Fragment on Government (R. Harrison, Ed.). Cambridge University Press. Berlin, S. I. (1991). In H. Hardy (Ed.), The Crooked Timber of Humanity: Chapters in the History of Ideas. Fontana. Berners, J. B. Lord. (1908 [1523]). The Preface of John Bourchier, Knight, Lord Berners. The Chronicles of Froissart, ed. and reduced into one vol. by G. C. Macaulay. Macmillan and Company. Blackstone, W. (1787 [1769]). Commentaries on the Laws of England (10th ed., 4 vols.). Strahan, Cadell and Prince. Bloxham, D., & Kushner, T. (2005). The Holocaust: Critical Historical Approaches. Manchester University Press. Blustein, J. (2008). The Moral Demands of Memory. Cambridge University Press. Blustein, J. M. (2014). Forgiveness and Remembrance: Remembering Wrongdoing in Personal and Public Life. Oxford University Press. Boone, R. (2000). Claude de Seyssel’s Translations of Ancient Historians. Journal of the History of Ideas, 61(4), 561–575. Brady, M. (2010). Disappointment. Proceedings of the Aristotelian Society Supplementary Volume, LXXXIV, 179–198. Brogan, H. (1990). The Penguin History of the United States. Penguin. Brown, D. (1970). Bury My Heart at Wounded Knee: An Indian History of the American West. Holt, Rinehart, and Winston. Browning, C. (1998). Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland (2nd ed.). HarperCollins. Brownstein, M. B., & Saul, J. (2016). Implicit Bias and Philosophy, Volume 1: Metaphysics and Epistemology. Oxford University Press. Buikstra, J. E., & Beck, L. A. (Eds.). (2006). Bioarchaeology: The Contextual Analysis of Human Remains. Elsevier. Butler, J. (1970 [1727]). Butler’s Fifteen Sermons Preached at the Rolls Chapel (T. A. Roberts, Ed.). SPCK. Cabeza de Vaca, A. N. (2003). The Journey and Ordeal of Cabeza de Vaca (C. Covey, Trans. and Ed.). Dover Publications Ltd.


Carr, E. H. (1986 [1961]). What Is History? Penguin. Las Casas, Bartolome de. A Short Account of the Destruction of the Indies. Translated with an Introduction by Anthony Pagden. Penguin Books. 2004. Casaubon, M. (1668). Of Credulity and Incredulity in Things Natural, Civil, and Divine. T. Garthwait. Clendinnen, I. (1991). Aztecs: An Interpretation. Cambridge University Press. Clifford, W. K. (1999 [1877]). The Ethics of Belief. In T. Madigan (ed.), The Ethics of Belief and Other Essays (pp. 70–96). Prometheus. Collingwood, R. G. (1961 [1946]). The Idea of History. Oxford University Press. Colvin, E. (1991). Principles of Criminal Law (2nd ed.). Carswell. Confucius. (1979). The Analects (D. C. Lau, Trans.). Penguin. Cox, M. (1996). Life and Death at Spitalfields 1700–1850. Council for British Archaeology. Crowder, G. (2017). Value Pluralism vs. Relativism in Bernard Williams’s “relativism of distance”. The Personalist, 12(2), 114–138. Cullwick, H. (1964). Diaries of Hannah Cullwick, Victorian Maidservant. Virago Press. Darwall, S. (1977). Two Kinds of Respect. Ethics, 88(1), 36–49. Darwin, C. (1906 [1845]). The Voyage of the Beagle. J.M. Dent (Everyman’s Library). Davies, K. G. (1999). The Royal African Company. Thoemmes Press. Montaigne, Michel de. (1993 [1588]). The Complete Essays (M. A. Screech, Ed.). Penguin. de Roover, R. (1967). San Bernardino of Siena and Sant’ Antonino of Florence: The Two Great Economic Thinkers of the Middle Ages. Harvard Graduate School of Business. Dickens, C. (1853). The Noble Savage. Retrieved May 25, 2021, from https:// w3.ric.edu/faculty/rpotter/temp/noblesav/html/ Dickens, C. (1985 [1842]. American Notes For General Circulation (J. S. Whitley & A. Goodman, Eds.). Penguin. Dickens, C. (1953). In E. Johnson (Ed.), Letters from Charles Dickens to Angela Burdett-Coutts. Jonathan Cape. Dilthey, W. (1989 [1883]). Introduction to the Human Sciences (R. A. Makkreel & F. Rodi, Trans.). Princeton University Press. Duff, R. A. (1993). Choice, Character, and Criminal Liability. Law and Philosophy, 12(4), 345–383. Dumbach, A., & Newborn, J. (2006). Sophie Scholl and the White Rose. Oneworld. Ehrenreich, B., & English, D. (1976). Witches, Midwives and Nurses. Writers and Readers Publishing Corporation.

Eliot, G. (1965 [1871–72]). Middlemarch. Penguin Books.
Enoch, D. (2011). Taking Morality Seriously: A Defense of Robust Realism. Oxford University Press.
Evans-Pritchard, E. E. (1937). Witchcraft, Oracles and Magic Among the Azande. Clarendon Press.
Feinberg, J. (1984). The Moral Limits of the Criminal Law. Volume One: Harm to Others. Oxford University Press.
Foot, P. (2001). Natural Goodness. Clarendon Press.
Fricker, M. (2010). The Relativism of Blame and Williams's Relativism of Distance. Proceedings of the Aristotelian Society Supplementary Volume, LXXXIV, 151–177.
Froissart, J. (1908 [1523]). The Chronicles of Froissart. Translated by John Bourchier, Lord Berners; edited and reduced into one volume by G. C. Macaulay. Macmillan and Company.
Frowe, H. (2019). The Duty to Remove Statues of Wrongdoers. Journal of Practical Ethics, 7(3), 1–31.
Gaitán, A., & Viciana, H. (2018). Relativism of Distance – A Step in the Naturalization of Meta-ethics. Ethical Theory and Moral Practice, 21(2), 311–327.
Geertz, C. (1973). The Interpretation of Cultures. Basic Books.
Gibbon, E. (1903–06). The Decline and Fall of the Roman Empire (7 vols.). Henry Frowde and Oxford University Press.
Gilbert, M. (1987). The Holocaust: The Jewish Tragedy. Fontana Press.
Glannon, W. (2001). Persons, Lives, and Posthumous Harms. Journal of Social Philosophy, 32(2), 127–142.
Glover, J. (1999). Humanity: A Moral History of the Twentieth Century. Jonathan Cape.
Goldhagen, D. J. (1996). Hitler's Willing Executioners: Ordinary Germans and the Holocaust. Little Brown and Company.
Grant, M. (1974). Translator's Introduction. In Tacitus: The Annals of Ancient Rome. Penguin.
Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, Culture, and Morality, Or Is It Wrong to Eat Your Dog? Journal of Personality and Social Psychology, 65(4), 613–628.
Harcourt, J. (1721). A Sermon Preach'd in the Church of All-Saints in Bristol, October 29, 1721, Upon the Death of Edward Colston. R. Standfast.
Hare, R. M. (1981). Moral Thinking: Its Levels, Method and Point. Clarendon Press.
Harman, G. (1977). The Nature of Morality. Oxford University Press.

Harris, J. (2001). The Indian Mutiny. Wordsworth Editions Ltd.
Hartley, L. P. (1958). The Go-Between. Penguin.
Henderson, J. [Sa'ke'j]. (2009). The Appropriation of Human Remains: A First Nations Legal and Ethical Perspective. In J. O. Young & C. Brunk (Eds.), The Ethics of Cultural Appropriation (pp. 55–71). Wiley-Blackwell.
Herodotus. (1992). The Histories (H. Cary, Trans. & C. Scarre, Ed.). The Folio Society.
Heysse, T. (2010). Bernard Williams on the History of Ethical Views and Practices. Philosophy, 85(4), 225–243.
Hobbes, T. (1991 [1651]). Leviathan (R. Tuck, Ed.). Cambridge University Press.
Höss, R. (1959). Commandant of Auschwitz: The Autobiography of Rudolf Höss (C. FitzGibbon, Trans.). Weidenfeld and Nicolson.
Hume, D. (1888 [1739]). A Treatise of Human Nature (L. A. Selby-Bigge, Ed.). Clarendon Press.
Hume, D. (1903 [1742]). On the Study of History. In Essays Moral, Political and Literary (pp. 558–562). Grant Richards.
Hume, D. (1963 [1751]). Enquiries Concerning the Human Understanding and Concerning the Principles of Morals (L. A. Selby-Bigge, Ed.). Clarendon Press.
James I and VI, King. (1616). The Workes of the Most High and Mighty Prince, James, by Grace of God King of Great Brittaine, France and Ireland. Robert Barker and John Bill.
James, L. (1997). Raj: The Making and Unmaking of British India. Little, Brown & Co.
Jennie, K. (2003). Vices of Inattention. Journal of Applied Ethics, 20(3), 279–295.
Kant, I. (1909a). Fundamental Principles of the Metaphysics of Morals. In Kant's Critique of Practical Reason and Other Works in the Theory of Ethics (T. K. Abbott, Trans.). Longmans.
Kant, I. (1909b). On a Supposed Right to Tell Lies from Benevolent Motives. In Kant's Critique of Practical Reason and Other Works in the Theory of Ethics (T. K. Abbott, Trans.). Longmans.
Kant, I. (1991). The Metaphysics of Morals (M. Gregor, Trans.). Cambridge University Press.
Kilvert, F. (1949). Kilvert's Diary, 1870–1879. Reader's Union/Jonathan Cape.
Korsgaard, C. (1986). The Right to Lie: Kant on Dealing with Evil. Philosophy and Public Affairs, 15(4), 325–349.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
Kuper, A. (1999). Culture: The Anthropologists' Account. Princeton University Press.

Ladurie, E. L. R. (1980 [1975]). Montaillou: Cathars and Catholics in a French Village, 1294–1324 (B. Bray, Trans.). Penguin.
Lander, J. R. (1969). Conflict and Stability in Fifteenth-Century England. Hutchinson University Library.
Las Casas, Bartolomé de. (2004). A Short Account of the Destruction of the Indies. Translated with an Introduction by Anthony Pagden. Penguin Books.
Lea, H. C. (1939). Materials Towards a History of Witchcraft. University of Pennsylvania Press.
Lear, J. (2006). Radical Hope: Ethics in the Face of Cultural Devastation. Harvard University Press.
Lecky, W. H. (1882). History of the Rise and Influence of the Spirit of Rationalism in Europe. Longmans, Green.
Ledger, S., & Furneaux, H. (Eds.). (2011). Dickens in Context. Cambridge University Press.
Levi, P. (1987). If This Is a Man. In If This Is a Man and The Truce (S. Woolf, Trans.). Abacus.
Locke, J. (1961 [1690]). An Essay Concerning Human Understanding (2 vols.). Dent.
Los Angeles Times. (2017). Retrieved June 2, 2021, from https://www.latimes.com/nation/la-na-kennewick-man-reburial-20170220-story.html
Lu, C. (2017). Justice and Reconciliation in World Politics. Cambridge University Press.
Lukes, S. (2003). Liberals and Cannibals: The Limits of Diversity. Verso.
Machiavelli, N. (1595). The Florentine Historie. Written in the Italian Tongue, by Nicholo Macchiavelli, Citizen and Secretary of Florence ('T.B.', Trans.). Thomas Creede for William Ponsonby.
MacIntyre, A. (1984). After Virtue: A Study in Moral Theory (2nd ed.). University of Notre Dame Press.
Margalit, A. (2002). The Ethics of Memory. Harvard University Press.
Marwick, M. (1982). Witchcraft and Sorcery: Selected Readings. Penguin.
McDowell, J. (1986). Critical Notice of Bernard Williams: Ethics and the Limits of Philosophy. Mind, 95(379), 377–386.
Mead, M. (1961 [1928]). Coming of Age in Samoa. Penguin Books.
Midgley, M. (1983). On Trying out One's New Sword. In Heart and Mind: The Varieties of Moral Experience (pp. 69–75). Methuen & Co. Ltd.
Mill, J. S. (1981). Autobiography. In J. M. Robson & J. Stillinger (Eds.), Autobiography and Literary Essays (Collected Works, Vol. 1) (pp. 1–290). University of Toronto Press.

Montmarquet, J. A. (1999). Zimmerman on Culpable Ignorance. Ethics, 109(4), 842–845.
Moody-Adams, M. M. (1994). Culture, Responsibility, and Affected Ignorance. Ethics, 104(2), 291–309.
Moody-Adams, M. M. (2002). Fieldwork in Familiar Places: Morality, Culture, and Philosophy. Harvard University Press.
Moore, G. E. (1903). Principia Ethica. Cambridge University Press.
Moore, G. (2004). Dickens and Empire: Discourses of Class, Race and Colonization in the Works of Charles Dickens. Ashgate.
Moore, R. I. (2007). The Formation of a Persecuting Society (2nd ed.). Blackwell.
Mulgan, T. (1999). The Place of the Dead in Liberal Political Philosophy. Journal of Political Philosophy, 7(1), 52–70.
Murdoch, I. (1970). The Sovereignty of Good. Routledge and Kegan Paul.
Murphy, J. (1982). Forgiveness and Resentment. Midwest Studies in Philosophy, 7, 503–516.
Murray, O. (1988). The Greek Historians. In J. Boardman, J. Griffin, & O. Murray (Eds.), Greece and the Hellenistic World (pp. 180–197). Oxford University Press.
Nagel, T. (1974). What Is It Like to Be a Bat? Philosophical Review, 83(4), 435–450.
New Scientist. (2018). Human Origins: 7 Million Years and Counting. John Murray Learning.
Newall, V. (Ed.). (1973). The Witch Figure. Routledge and Kegan Paul.
Norwich, J. J. (1988). Byzantium: The Early Centuries. Viking.
Nussbaum, M. C. (2006). Frontiers of Justice: Disability, Nationality, Species Membership. Harvard University Press.
Nussbaum, M. C. (2011). Creating Capabilities. Harvard University Press.
Origo, I. (1957). The Merchant of Prato. Jonathan Cape.
Ostler, J. (2020). Surviving Genocide: Native Indians and the United States from the American Revolution to Bleeding Kansas. Yale University Press.
Pagden, A. (2004). 'Introduction' to Bartolomé de Las Casas: A Short Account of the Destruction of the Indies (pp. xiii–xli). Penguin.
Parry, J. H. (1964). The Age of Reconnaissance. Readers Union and Weidenfeld & Nicolson.
Paxman, J. (2012). Empire. Penguin.
Pereboom, D. (2021). Wrongdoing and the Moral Emotions. Oxford University Press.
Pettigrew, W. A. (2014). The Royal African Company and the Politics of the Atlantic Slave Trade, 1672–1752. University of North Carolina Press.
Pitcher, G. (1984). The Misfortunes of the Dead. American Philosophical Quarterly, 21(2), 183–188.

Poole, A. L. (1955). Domesday Book to Magna Carta. Clarendon Press.
Pope, A. (1956 [1733]). An Essay on Man. In Collected Poems (B. Dobrée, Ed.). J. M. Dent & Sons.
Procopius. (1981). The Secret History (G. A. Williamson, Trans. and with an Introduction). Penguin.
Purkiss, D. (2006). The English Civil War: A People's History. HarperCollins.
Putnam, H. (1992). Renewing Philosophy. Harvard University Press.
Quine, W. V. O. (1960). Word and Object. M.I.T. Press.
Ramdin, R. (2005). Mary Seacole. Haus Publishing.
Rawls, J. (1971). A Theory of Justice. Oxford University Press.
Rawls, J. (1999). The Law of Peoples. Harvard University Press.
Reader, S. (2007). Needs and Moral Necessity. Routledge.
Robbins, R. H. (1963). The Encyclopaedia of Witchcraft and Demonology. Paul Hamlyn.
Roberts, C. A. (2009). Human Remains in Archaeology: A Handbook. Council for British Archaeology.
Roberts, W. (1990). Leadership Secrets of Attila the Hun. Grand Central Publishing.
Robinson, J. (2004). Mary Seacole: The Charismatic Black Nurse Who Became a Heroine of the Crimea. Robinson.
Rosen, G. (2003). Culpability and Ignorance. Proceedings of the Aristotelian Society, New Series, 103, 61–84.
Rosen, G. (2004). Skepticism About Moral Responsibility. Philosophical Perspectives, 18, 295–313.
Ross, W. D. (1930). The Right and the Good. Clarendon Press.
Santayana, G. (1905). The Life of Reason: Reason in Common Sense. Scribner's.
Sayer, D. (2010). Ethics and Burial Archaeology. Duckworth.
Scarre, G. (1987). Witchcraft and Magic in Sixteenth- and Seventeenth-Century Europe. Macmillan.
Scarre, G. (2004). After Evil: Responding to Wrongdoing. Ashgate.
Scarre, G. (2006). Can Archaeology Harm the Dead? In C. Scarre & G. Scarre (Eds.), The Ethics of Archaeology: Philosophical Perspectives on Archaeological Practice (pp. 181–198). Cambridge University Press.
Scarre, G. (2012). Speaking of the Dead. Mortality, 17(1), 36–50.
Scarre, G. (2013a). The Vulnerability of the Dead. In J. S. Taylor (Ed.), The Metaphysics and Ethics of Death: New Essays (pp. 171–187). Oxford University Press.
Scarre, G. (2013b). "Sapient trouble-tombs"? Archaeologists' Moral Obligations to the Dead. In S. Tarlow & N. Stutz (Eds.), The Oxford Handbook of Death and Burial (pp. 665–676). Oxford University Press.

Scarre, G. (2013c). Speaking of the Dead: A Postscript. Mortality, 18(3), 313–318.
Scarre, G., & Callow, J. (2001). Witchcraft and Magic in Sixteenth- and Seventeenth-Century Europe (2nd ed.). Palgrave.
Scheffler, S. (1987). Morality Through Thick and Thin: A Critical Notice of Ethics and the Limits of Philosophy. The Philosophical Review, 96(3), 411–434.
Semmel, B. (1962). The Governor Eyre Controversy. MacGibbon & Kee.
Sen, A. (1985). Commodities and Capabilities. North Holland.
Sen, A. (1993). Capabilities and Well-Being. In M. C. Nussbaum & A. Sen (Eds.), The Quality of Life (pp. 30–53). Clarendon Press.
Sereny, G. (2000). The German Trauma: Experiences and Reflections, 1938–2000. Allen Lane/The Penguin Press.
Shaftesbury, Anthony Ashley Cooper, Third Earl of. (1999 [1711]). An Inquiry Concerning Virtue or Merit. In L. E. Klein (Ed.), Characteristics of Men, Manners, Opinions, Times (pp. 163–230). Cambridge University Press.
Sherer, J. W. (n.d. [circa 1910]). Havelock's March on Cawnpore. Thomas Nelson and Sons.
Simon, H. A. (1969). The Sciences of the Artificial. MIT Press.
Sleeper-Smith, S., Barr, J., O'Brien, J. M., et al. (Eds.). (2015). Why You Can't Teach United States History Without American Indians. University of North Carolina Press.
Slote, M. (1982). Is Virtue Possible? In R. Kruschwitz & R. Roberts (Eds.), The Virtues. Wadsworth Press.
Stenton, F. M. (1971). Anglo-Saxon England (3rd ed.). Clarendon Press.
Strachey, L. (1928). Elizabeth and Essex: A Tragic History. Chatto and Windus.
Strawson, P. F. (1962). Freedom and Resentment. Proceedings of the British Academy, 48, 1–25.
Thomas, H. (1997). The Slave Trade: The History of the Atlantic Slave Trade, 1440–1870. Picador.
Thomas, K. (2009). The Ends of Life: Roads to Fulfilment in Early Modern England. Oxford University Press.
Thompson, J. (2002). Taking Responsibility for the Past: Reparation and Historical Injustice. Polity Press.
Thomson, D. (1950). England in the Nineteenth Century. Penguin.
Todorov, T. (1999). Facing the Extreme: Moral Life in the Concentration Camps (A. Denner & A. Pollack, Trans.). Weidenfeld and Nicolson.
Trevelyan, G. M. (1964 [1942]). Illustrated English Social History (4 vols.). Penguin Books.
Trevor-Roper, H. (1969). The European Witch-Craze. Penguin.
Velleman, J. D. (2015). Foundations for Moral Relativism (2nd exp. ed.). Open Book Publishers.

Webb, M. (1978 [1924]). Precious Bane. Virago.
Wedgwood, C. V. (1983). The Trial of Charles I. Penguin.
Williams, B. (1972). Morality: An Introduction to Ethics. Cambridge University Press.
Williams, B. (1981). Moral Luck: Philosophical Papers 1973–1980. Cambridge University Press.
Williams, B. (1985). Ethics and the Limits of Philosophy. Fontana.
Williams, B. (1993a). Shame and Necessity. University of California Press.
Williams, B. (1993b). Morality (Canto ed.). Cambridge University Press.
Williams, B. (1995). Making Sense of Humanity. In B. Williams, Making Sense of Humanity and Other Philosophical Papers 1982–1993 (pp. 79–89). Cambridge University Press.
Williams, B. (2002). Truth and Truthfulness: An Essay in Genealogy. Princeton University Press.
Williams, B. (2005). Human Rights and Relativism. In G. Hawthorn (Ed.), In the Beginning Was the Deed: Realism and Moralism in Political Argument (pp. 62–74). Princeton University Press.
Williamson, G. A. (1981). Introduction. In Procopius, The Secret History (pp. 7–35). Penguin.
Wilson, B. (2016). Heyday: The 1850s and the Dawn of the Global Age. Weidenfeld & Nicolson.
Winder, R. (2020). English Heritage Handbook 2020/21 (p. 18). English Heritage.
Wong, D. B. (2006). Natural Moralities: A Defense of Pluralistic Relativism. Oxford University Press.
Young, I. M. (2011). Responsibility for Justice. Oxford University Press.
Zangwill, N. (2003). Perpetrator Motivation: Some Reflections on the Browning/Goldhagen Debate. In E. Garrard & G. Scarre (Eds.), Moral Philosophy and the Holocaust (pp. 89–102). Ashgate.
Zimmerman, M. J. (2002). Controlling Ignorance: A Bitter Truth. Journal of Social Philosophy, 33(3), 483–490.

Name Index

Note: Page numbers followed by 'n' refer to notes.

A
Africa, 1, 157, 182
Alain of Lille, 138n3
America, United States of, 1, 35n2, 58, 116, 121n9, 176, 186n11, 209
Anglo-Saxon Chronicle, 145, 147, 149
Anne, Queen, 206n10
Antoninus, St (Antonio Pierozzi), 128–132, 138n5, 139n6, 139n7, 139n8, 213
Aquinas, Thomas, 48, 123n16
Arendt, Hannah, 109
Aristotle, 29, 58–60, 67, 84n3, 114, 115, 127
Armenian massacres, 149
Armitage, David, 10, 58, 168
Arpaly, Nomy, 99–101, 104, 105
Attila the Hun, 25, 127, 159, 171, 184n6
Audi, Robert, 67, 70, 79, 80
Auschwitz, 107, 114, 157
Austin, J.L., 120n8, 121n8
Aztecs, 15, 16, 18, 30, 31, 33, 34, 64, 90–95, 97, 98, 100, 104, 105, 109, 111, 112, 122n11, 126, 162n8, 212, 214

B
Beard, Mary, 173–175, 185n7, 186n10
Benedict, Ruth, 23, 33
Bengal famine (1943), 190
Bennett, Jonathan, 7, 30, 31, 75
Benson, John, 112, 124n18
Bentham, Jeremy, 5, 115
Berlin, Isaiah, 69, 79
Berners, John Bourchier, Lord, 31, 32, 143
Bible, the, 90, 96, 111
Blackstone, Sir William, 115, 116, 120n3
Blustein, Jeffrey M., 164–166, 169, 177–179, 183n1, 193
Boccaccio, Giovanni, 150
Bosworth, Battle of (1485), 200
Brady, Michael, 103–105, 107
Brazil, 3, 182
Bristol, 1–3, 5, 11, 44, 96, 157
British Empire, 148, 161n3, 173
Brogan, Hugh, 176, 177, 186n12
Brown, Dee, 186n12
Browning, Christopher, 162n7
Brutus, Lucius Junius, 94
Brutus, Marcus Junius, 145, 155
Burdett-Coutts, Angela, 12n3
Butler, Joseph, 106, 123n16

C
Cabeza de Vaca, Alvar Núñez, 128, 137n1
Caesar, Julius, 34, 38, 145, 155
Caribbean, the, 181–182, 209
Carr, E.H., 152
Casaubon, Meric, 88
Cato, Marcus Porcius, 94
Cervantes Saavedra, Miguel de, 51
Chaucer, Geoffrey, 18, 192
Christian, Christianity, 1–5, 44, 87, 88, 92, 95, 97, 112, 129–131, 138n5, 157, 176, 183, 210
Churchill, Sarah, 206n10
Churchill, Winston S., 148, 161n3, 190
Clark, Kenneth, Lord, 207, 208, 212
Clendinnen, Inga, 30, 31
Cleopatra, Queen of Egypt, 201, 203
Clifford, W.K., 108
Collingwood, R.G., 17, 18, 32, 33, 38, 141, 143, 154–156, 158, 160
Colston, Edward, 1–12, 20, 42, 44, 45, 47, 48, 157, 158
Confederate States of America, 215, 216
Confucius, 86n10, 174
Crimean War (1854–56), 194
Cromwell, Oliver, 94, 120n6
Crow, the, 126
Crowder, George, 32, 40
Crusades, Crusaders, 9, 15, 16, 157
Culhua Mexica, see Aztecs
Cullwick, Hannah, 26

D
Daesh (ISIS), 127
Darwall, Stephen, 74, 204n4
Darwin, Charles, 3, 4, 8
Diana, Princess of Wales, 201
Dickens, Charles, 5–7, 9, 11, 12–13n3, 26, 162n9
Dilthey, Wilhelm, 18
Dominican (order of friars), 128, 209
Donne, John, 196
Doyle, Sir Arthur Conan, 119n1

E
Eichmann, Adolf, 109
Eliot, George (Mary Ann Evans), 191
Elizabeth I, Queen of England, 25, 191, 194
Engels, Friedrich, 26
Enlightenment, the, 10, 208
Evelyn, John, 25
Eyre, Edward John, 7, 13n4

F
Feinberg, Joel, 120n5, 192
Florence, 138n5
Foot, Philippa, 29, 67, 68, 79, 84n2, 84n3, 105, 212
Foxe, John, 147
Fricker, Miranda, 39–42, 97–105, 107, 122n13, 211, 212
Froissart, Jean, 31
Frowe, Helen, 12n2

G
Geertz, Clifford, 23
Gibbon, Edward, 10, 143, 147, 151
Glover, Jonathan, 186n9
Goldhagen, Daniel Jonah, 153, 156, 162n7
Grant, Michael, 150
Greeks, the, vi, 43, 49, 50, 58, 59, 114, 115, 149, 150
Greenland, 186n10

H
Hamlet (Shakespeare), 149
Harcourt, James, 2
Harman, Gilbert, 62n5
Harris, J.G.N. (Brigadier), 160
Hartley, L.P., 15, 126
Herder, J.G., 111
Herodotus, 150
Herostratos of Ephesus, 193, 194, 196, 203
Heysse, Tim, 19, 31, 34, 40, 41, 44, 215
Himmler, Heinrich, 75, 76
Hispaniola, 209
Hitler, Adolf, 21, 38, 52, 62n5, 64, 108, 109, 114, 153, 155, 184n4, 195
Hobbes, Thomas, 216n2, 217n2
Holocaust, the, 123n17, 149, 152, 153, 157, 163, 167
Horace (Quintus Horatius Flaccus), 193
Höss, Rudolf, 107, 108, 114, 123n17
Huckleberry Finn, 7, 8, 100
Hume, David, 31, 68, 70, 72, 73, 77–79, 84n2, 124n18, 151, 161n5, 161–162n6
Huns, the, 127, 162n8
Huxley, Thomas, 7

I
India, 12n3, 35n1, 159, 190
Indian Mutiny (aka Sepoy Rebellion), 12n3, 159
Italy, 128

J
James I and VI, King of England and Scotland, 87
James, Lawrence, 160
Jennie, Kathie, 124n18
Jesus Christ, 85n5, 174
Jews, the, 108, 119n2, 124n18, 149, 153, 177, 184n5, 187n14
Justinian (Byzantine emperor), 146

K
Kant, Immanuel, 4, 5, 39, 73–77, 79, 81, 82, 85n6, 85n8, 86n10, 108, 109, 120n4, 138n2, 161n4, 197, 198, 201–203, 205n8, 205n9, 213
Kennewick Man, 35n2
Kikuyu, The, 19, 66
Kilvert, Francis, 26
King, Martin Luther, 19, 195

L
Ladurie, Emmanuel Le Roy, 205n5
Lander, J.R., 117
Las Casas, Bartolomé de, 209
Latin America, 209
Lea, H.C., 87
Lear, Jonathan, 186n12
Lecky, William, 89
Lee, Robert E., General, 215, 216
Leroy Little Bear, 28
Levi, Primo, 121n10
Lincoln, Abraham, 116, 127, 213
Locke, John, 58, 106
London, 6, 25, 115, 184n2, 184n3
Louis XIV, King of France, 20, 21, 32, 43, 47, 53, 144, 155, 195, 210, 216
Lu, Catherine, 185n8
Lukes, Steven, 85n5, 109–111, 211

M
Machiavelli, Niccoló, 150, 151
MacIntyre, Alasdair, 58, 196
Margalit, Avishai, 163–165, 168, 169, 173, 174, 178, 183n1, 189
Marx, Karl, 58
Mary I, Queen of England, 147
Maximus, Valerius, 193
Mead, Margaret, 23
Middle Ages, The, 117
Midgley, Mary, 42
Mill, John Stuart, 7, 13n4
Montaigne, Michel de, 88, 109, 110
Montesinos, Antonio, 209, 213
Montmarquet, James, 110
Moody-Adams, Michelle, 23, 49–51, 108, 110–114
Moore, G.E., 12n3, 43, 119n2
Morant Bay (Jamaica), 7
Mozart, Wolfgang Amadeus, 199
Mulgan, Tim, 64
Murdoch, Iris, 174
Murphy, Jeffrie, 122n13
Murray, Oswyn, 150

N
Nagel, Thomas, 17
New Guinea, 35n1, 39, 40
Nietzsche, Friedrich, 5
Nightingale, Florence, 194
Norwich, John Julius, 146, 147

O
Ojibbeway Indians, 6
Origo, Iris, 205n5
Othello (Shakespeare), 205n7

P
Parry, J.H., 121n9, 182, 209
Paxman, Jeremy, 161n3
Pepys, Samuel, 25
Pereboom, Derk, 121n11
Pitcher, George, 192
Pope, Alexander, 63–70, 84n3, 123n15
Procopius, 146, 147
Purkiss, Diane, 185n7
Putnam, Hilary, 90–92, 94, 212

R
Ranke, Leopold von, 152
Rawls, John, 71–73, 79, 81, 82, 85n4, 86n9, 212
Renaissance, The, 32, 150
Richard III, King of England, 199–201
Richard III (Shakespeare), 200
Richard III Society, The, 200
Richmond (Virginia), 215
Rio de Janeiro, 3
Robbins, Rossell Hope, 88, 89
Rome, 10, 149–151, 173, 175
Roover, Raymond de, 138n5
Rosen, Gideon, 102, 107, 122n12
Royal African Company, 1, 12n1

S
Salieri, Antonio, 199, 200, 202
Samoa, 23
Santayana, George, 32
Satan, 88, 92, 97
Scheffler, Samuel, 54, 55, 58
Schroeder, Timothy, 99–101, 104, 105
Seacole, Mary, 195
Sereny, Gitta, 109
Seyssel, Claude de, 32
Shaffer, Peter, 199, 200, 202
Shaftesbury, Anthony Ashley Cooper, 3rd Earl of, 43
Shakespeare, William, 200, 216n2
Sherer, J.W., 159
Shirazi, Saadi, 137n1
Simon, Herbert, 207, 211, 216n1
Slote, Michael, 49, 50, 114
Socrates, 9
Spain, 127, 209
Spitalfields (London), 184n2
Stalin, Josef, 175, 177, 195
Stoney, Levar, 215
Strachey, Lytton, 25
Strawson, Peter, 86n12

T
Tacitus, Publius Cornelius, 150, 151
Tegbesu, King of Dahomey, 182
Theodora (Byzantine empress), 146
Thomas, Hugh, 12n1, 182
Thomas, Keith, 32
Thompson, Janna, 8
Todorov, Tzvetan, 169, 170
Trevelyan, G.M., 24, 25, 184n7
Trevor-Roper, Hugh, 89, 184n7
Troilus and Cressida (Shakespeare), 216n2
Twain, Mark, 7

U
United Nations, 137n1

V
Velleman, J. David, 19, 65–68, 70, 71, 79, 84n3, 105, 126, 212, 214
Vitoria, Francisco de, 209

W
Waterloo, Battle of (1815), 165, 166
Webb, Mary, 172
West Africa, 182
Weyer, Johann, 88, 110
White Rose group, 52
Williams, Bernard, v, 15–22, 27, 28, 32, 34, 37–49, 53–61, 61n1, 61n2, 62n5, 63, 66, 79, 80, 91, 97, 105, 107, 114, 116, 125, 128, 144, 158, 195, 210
Williams, G.A., 147
Winder, Robert, 22, 26
Wong, David, 66–68, 79, 105, 212

Y
Young, Iris Marion, 2, 185–186n8
Yugoslavia, 161n4

Z
Zimmerman, Michael J., 107, 108, 113, 114
Zulus, 6

Subject Index1

A

B

Affected ignorance, 48, 108, 158 Agreement in norms, 51 Alien cultures, 23, 53, 68 Alternative facts, 143 Animals, non-human, 3, 5, 26, 68, 74, 78, 82, 136, 209, 214 Ant, the, 207–216 Anthropology, 23, 50 anthropologists, 17, 22, 23, 27, 126, 215 Appraisal respect, 74, 85n7, 204n4 Archaeology, 183n2 archaeologists, 28, 183n2 Archimedean point, 57 Artificial, 5, 208

Biography, 12n1, 175, 176, 189–204 biographers, 171, 176, 194–199 Black Lives Matter movement, 4, 181, 215 C

Carolina, Constitution of, 58 Categorical Imperative, 73–76, 85n6, 109 Communities of memory, 163, 166 Conquistadors, 30, 111, 209, 211, 213

 Note: Page numbers followed by ‘n’ refer to notes.

1

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 G. Scarre, Judging the Past, https://doi.org/10.1007/978-3-031-34511-1

235

236 

Subject Index

Conscience, 30, 89, 91, 96, 105–114, 123n16, 123n17, 176, 177, 184n4 Contractualism, 70, 72, 81 contractualist theories of justice, 71 Courage, 27, 51, 99, 109, 113, 127, 166 Cruelty, 6, 8, 12n3, 64, 80, 90, 94, 120n7, 125, 149, 157, 169, 177, 183, 209, 216 Culture, cultures cultural bias, 10, 83 cultural distance, 38

ethical values, 19, 21, 28, 31, 51, 63–64, 81, 109, 148, 150 Exploitation, 29, 74, 80, 107, 143, 147, 156, 181, 184n4 F

Fascism, 21 Female circumcision, 19, 66 First Nations, 28 Foedus pacificum (Kant), 81, 82 Freedom, 51, 60, 67, 69, 71, 74, 81, 86n12 G

D

Death, 8, 30, 78, 87, 90, 94, 107, 108, 113, 115, 126, 141, 149, 161n3, 183, 187n14, 189, 190, 192, 193, 196–199, 202, 203, 205n9 the dead, vi, 8, 20, 21, 64, 150, 167, 179, 180, 189–204 E

Elizabethan age, 25 Empathy, 7, 33 Equality, 26, 59, 122n14, 215, 217n2 Ethics, ethical ethical beliefs, 56, 57, 64, 98 ethical conceptions, 16, 18, 19, 21, 22, 37, 53, 64, 125, 158 ethical norms, 19 ethical standards, 64, 117, 125 ethical truth, 19, 56, 57

Gaming, 135–137 Gender discrimination, 55 Genocide, 62n5, 123n17, 149, 161n4, 174, 177 Geshichtswissenschaft, 151 H

Happiness, 12n3, 67, 69, 73, 77–79, 84n3, 100, 115, 144 Heretics, 9, 119n2 History historians, vi, 10, 13n5, 16, 18, 22–25, 32–34, 41, 42, 82, 89, 142–160, 160n1, 161n3, 162n6, 162n8, 165–180, 183n2, 184n5, 184–185n7, 186n12, 191, 196–198, 200, 203, 204, 211, 213, 215 historical fiction, 198 historical moral judgements, vi, 10, 27, 34, 38, 40, 53, 61, 79, 82, 83, 105, 142, 148, 156, 211

  Subject Index 

Human beings, v, 3, 4, 6–8, 17, 26, 29, 30, 33, 38, 42, 50, 56, 57, 61, 61n3, 61n4, 64–69, 71, 73–76, 78, 79, 82, 83, 85–86n8, 86n12, 93, 94, 96, 105, 112, 121n10, 123n15, 125, 128, 142, 152, 155, 157, 158, 161n4, 163, 167, 169, 172, 173, 176, 192, 193, 204, 204n4, 208–215, 216–217n2, 217n4 Humanity, 5–7, 18, 44, 100, 108, 112, 121n9, 137n1, 151, 163, 168, 170, 171, 173, 191, 195, 203, 205n9, 210 Human rights, 53–56, 58, 60, 62n4, 74, 83, 159, 208 Human sacrifice, 25, 27, 90–93, 97, 104, 109, 126, 212, 214 Human species, 23, 68, 202

J

I

L

Identity individual identity, 19 social identity, 180, 189 Ideology, 58, 87–96, 113, 127, 171 ideological, 95, 96, 210 Imagination, 3, 5, 7, 18, 26, 33, 41, 51, 71, 85n8, 88, 103, 155, 156, 169, 199 Incommensurable values, 27 Injustice, 8, 53, 55, 59, 86n10, 179, 181, 185–186n8, 200, 216 Internalism, 47, 62n5 internal reasons, 46, 47, 52, 53, 107 Intolerance, 56, 141

M

237

Jagged worldviews, 26–34, 112 Judges, 9, 10, 16, 40, 53, 83, 89–91, 95, 115–118, 120n2, 120n7, 132, 141–149, 155, 158, 162n7, 162n8, 199 judicial punishment, 116, 124n19 Justice, 20, 38, 52–61, 62n5, 63, 67, 71, 75, 80, 85n4, 89, 116, 118, 127, 133, 153, 154, 162n6, 195, 199, 209, 210 justice rights, 60 Justice as fairness (Rawls), 71 K

Kingdom of Ends, v, 4, 5, 61n4, 74, 75, 77, 79–84, 192, 194, 201, 202, 204n4, 210, 215, 217n2

Law of Peoples (Rawls), 81 Liberal, liberals, v, 7, 16, 31, 46, 50, 54–56, 58, 63, 64, 81–83, 110, 207 Liberalism, 64

Memory, 2, 145, 163–183, 189–191, 193, 194, 201, 204n1, 216 shared memory, memories, 163–166, 168–170, 172, 176, 183n1, 183n2, 196, 203, 204, 204n1

238 

Subject Index

Moral, morals, morality moral attitudes, 8, 26, 103, 145, 146 moral blame, 95, 121n11, 172 moral blindness, 4, 49, 50, 211 moral duty, 52, 73, 108, 167, 168, 178 moral environment, 34 moral-epistemic disappointment, 102, 103 moral-epistemic incapacity, 98 moral ignorance, 101, 102 moral imagination, 5, 103 morality system (Williams), 53–57, 59–61 moral judgement, v–vi, 3, 5, 9–11, 15, 19, 20, 25–27, 34, 38–43, 53, 61, 65, 77, 79, 80, 82, 83, 86n9, 90, 91, 101, 102, 104, 105, 113–119, 126, 142, 143, 148, 152, 155–157, 159, 160, 162n8, 177, 210–213, 216 Moral Law, The (Kant), 73–75, 100 moral negligence, 3 moral outlook, 8, 11, 22, 158 moral praise, 98, 99 moral realism, 29, 57, 84n1 moral relativism, 66, 98 moral relativity, 40 moral responsibility, 99, 102, 108, 113, 142, 169, 176, 185n8 moral standards, v, 9, 11, 16, 37, 67, 84 moral values, 5, 8, 19, 65, 75, 148 Morally heavy history, 141–160 Morally light history, 145

Motivational set, 45, 46, 48, 49, 107 Myth, 109, 164–166, 171 N

Nationalism, 21 Nature natural law, 123n15, 123n16, 210 Nazis, Nazism, 21, 43, 64, 97, 108, 113, 153, 171, 187n14, 209, 211 Notional confrontations, 40, 43, 53 Noumenal realm, 202 P

Pathological love (Kant), 73, 75, 76 Philanthropy, 2, 11, 76 philanthropists, 1, 44, 75, 157 Pluralism, 28 Posthumous reputation, 196, 204n2 Potlatch ceremonies, 35n1 Practical love (Kant), 73 Presentism, 11, 13n5, 43, 83 Primary goods, 71, 72 Problem of access, 16, 18, 25, 41, 211 Problem of relevance, 16, 18, 21 Progressive view of history, 43 Punishment, 90, 102, 115–117, 124n19, 160, 187n14, 197 Pyrrhonism, 83 R

Racism racial discrimination, 147 racist, 7, 11, 162n7

  Subject Index 

239

Rationality rational person, 47, 71 rational thinking, 47 Real confrontations, 20, 39–41, 155 Reason, 5, 10, 16, 21, 22, 27, 28, 30, 32, 33, 43–54, 56, 57, 62n5, 64, 66, 70–79, 81, 83, 84, 84n1, 85n6, 86n10, 93, 97, 98, 101, 104, 107, 109, 110, 113, 116, 118, 120n2, 122n14, 143, 146, 151–154, 157, 167, 168, 172, 173, 176, 187n13, 193, 195, 197, 203, 204n1, 208, 211–213 Reburial, 28, 35n2 Recognition respect, 74, 75, 82, 194, 204n4 Reflective equilibrium, 24, 77, 86n9 Relativism of distance, 20, 34, 49, 54, 59, 80, 91 Relativity of blame, 87–119, 212 Relativity of praise, 105 Reputation, 113, 128, 179, 180, 196–204 Respect, 4, 7, 42, 49, 50, 55, 57, 61, 61n3, 65, 68, 71, 73–76, 79, 80, 83, 85n7, 86n10, 100, 103, 109, 116, 117, 122n11, 128, 137n1, 151, 155, 161n4, 165, 167, 170, 178–180, 186n9, 193, 198, 200, 202, 204n4, 208, 210, 212, 214, 216n2

Self-deception, 4, 9, 96, 97, 108, 112, 158 Sentiment, 6, 7, 9, 30–32, 41, 70–79, 86n10, 99, 145, 177, 200 Simony, 132 Sincerity, 95, 96, 153 Situation-adjusted moral judgements, 117 Slander, 201–203 Slaves, slavery slave trade, 12n1, 48, 82, 93, 152, 157, 167, 181, 182, 184n4, 187n15 slave traders, 3, 4, 11, 34, 61n3, 95–98, 211 slave trading, 2, 4, 19, 20, 45, 47, 48, 97, 182 Sociability, 65–70, 72, 79–84, 90, 105, 125–137, 213 sociable living, 70–79, 128 Social contract, 72, 81, 82 Social science, sciences, 23, 142, 143, 151, 152, 167, 186n9 Sociology, 50 Sovereignty of Good (Murdoch), 174 Soviet Union, 175 Statue, statues, 1, 2, 11, 12n2, 180, 215 Sympathy, 7, 9, 12n3, 31, 40, 69, 72, 73, 77–79, 81, 94, 103, 107, 149, 161n4, 176, 203

S

T

Sacrilege, 132 Samurai warriors, 42 Secondary goods, 71, 72

Theft, 59, 80, 115, 124n19, 125, 126, 131, 132 Thick moral concepts, 54

240 

Subject Index

U

Unconscious bias, 83, 84, 86n11 UN Universal Charter of Human Rights, 83 Usury, 132, 138n5, 139n6 Utility, 28, 77 utilitarianism, 28 V

Value pluralism, 28 Verstehen, 18, 23 Victorians Victorian age, 15, 25, 159 Victorian period, 25 Virtue, vi, 17, 20, 27, 49, 50, 54, 57, 59, 78, 110, 116, 118, 142, 150, 151, 161–162n6, 194

virtuous, 15, 33, 117, 118 Vulgar relativism, 61n2 W

War, 6, 80, 109, 131, 141, 148, 169, 174, 178, 180, 182, 209 Wife-beater, the, 45–47 Witches, witchcraft, 9, 87–97, 110, 119n1, 119–120n2, 120n3, 124n18 witch prosecution, 90, 92, 93, 97, 120n2 Women, 1, 9, 10, 13n3, 16, 17, 21, 26, 29, 32, 74, 87–90, 113, 142, 160, 181, 182, 184n5, 197, 201, 207, 213, 216n2 history, 175 Wronging the dead, 198