Uncertainty and Economics: A Paradigmatic Perspective 9780367076030, 9780429021534




Uncertainty and Economics

“Essential reading for everyone who is willing to take ‘uncertainty’ in economics seriously!” Prof. Dr. Joachim Güntzel, Baden-Württemberg Cooperative State University, Ravensburg, Germany

This book is set against the assumption that humans’ unique feature is their infinite creativity, their ability to reflect on their deeds and to control their actions. These skills give rise to genuine uncertainty in society and hence in the economy. Here, the author sets out that uncertainty must take centre stage in all analyses of human decision making and therefore in economics.

Uncertainty and Economics carefully defines a taxonomy of uncertainty and argues that it is only uncertainty in its most radical form which matters to economics. It shows that uncertainty is a powerful concept that not only helps to resolve long-standing economic puzzles but also unveils serious contradictions within current, popular economic approaches. It argues that neoclassical, real business cycle, or new-Keynesian economics must be understood as only one way to circumvent the analytical challenges posed by uncertainty. Instead, embracing uncertainty offers a new analytical paradigm which, in this book, is applied to standard economic topics such as institutions, money, the Lucas critique, fiscal policy and asset pricing. Through applying a concise uncertainty paradigm, the book sheds new light on human decision making at large. Offering policy conclusions and recommendations for further theoretical and applied research, it will be of great interest to postgraduate students, academics and policy makers.

Christian Müller-Kademann is Privatdozent of Economics at Jacobs University in Bremen, Germany. He received his first degree from Hull University, UK before graduating from Humboldt University, Berlin with a diploma in economics, after which he began his PhD in economics and econometrics.

Routledge Frontiers of Political Economy

244. Supranational Political Economy: The Globalisation of the State–Market Relationship. Guido Montani
245. Free Cash, Capital Accumulation and Inequality. Craig Allan Medlen
246. The Continuing Imperialism of Free Trade: Developments, Trends and the Role of Supranational Agents. Edited by Jo Grady and Chris Grocott
247. The Problem of Political Trust: A Conceptual Reformulation. Grant Duncan
248. Ethics and Economic Theory. Khalid Mir
249. Economics for an Information Age: Money-Bargaining, Support-Bargaining and the Information Interface. Patrick Spread
250. The Pedagogy of Economic, Political and Social Crises: Dynamics, Construals and Lessons. Edited by Bob Jessop and Karim Knio
251. Commodity: The Global Commodity System in the 21st Century. Photis Lysandrou
252. Uncertainty and Economics: A Paradigmatic Perspective. Christian Müller-Kademann

For more information about this series, please visit: www.routledge.com/books/series/SE0345

Uncertainty and Economics A Paradigmatic Perspective

Christian Müller-Kademann

First published 2019 by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN and by Routledge, 52 Vanderbilt Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2019 Christian Müller-Kademann

The right of Christian Müller-Kademann to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.
Library of Congress Cataloguing in Publication Data: A catalog record has been requested for this book.

ISBN: 978-0-367-07603-0 (hbk)
ISBN: 978-0-429-02153-4 (ebk)

Typeset in Times New Roman by Out of House Publishing

To Suse

Contents

Preface ix
Acknowledgements xi

Introduction 1

1 What's uncertainty, after all? 3
  1.1 Institutions, events and actions 3
  1.2 Determinism, risk, ambiguity and uncertainty 5
    1.2.1 Determinism 5
    1.2.2 Chance, risk and ambiguity 6
    1.2.3 Uncertainty 9

2 Uncertainty in economics 15
  2.1 The origins of uncertainty 15
    2.1.1 The main assumption 15
    2.1.2 Uncertainty: a human trait 16
  2.2 Rationality, uncertainty and decision making 19
    2.2.1 Subjectivity, objectivity and rationality 19
    2.2.2 Decision making under uncertainty 23
  2.3 The epistemology of economics 29
    2.3.1 A brief look in the rear mirror 29
    2.3.2 Positivist-falsificationist epistemology 32
  2.4 Paradigmatic uncertainty 42
    2.4.1 Puzzles 43
    2.4.2 Rational expectations and uncertainty 46
    2.4.3 Rationality and uncertainty "measurement" 50
    2.4.4 Elements of an uncertainty paradigm 53
  2.5 Economic epistemology and uncertainty: an application to the Lucas critique 56

3 Uncertainty in the economy 63
  3.1 Uncertainty and institutions 63
  3.2 Money 66
  3.3 Fiscal policy and Keynes's battles: lost and won 75

4 The empirics of uncertainty 80
  4.1 A model for asset pricing under uncertainty 81
    4.1.1 The investor's objective function 81
    4.1.2 The investment rule 82
    4.1.3 The median investor 82
    4.1.4 Properties of the subjective median model 84
  4.2 Testing for uncertainty 85
    4.2.1 Conventional asset pricing model and uncertainty 85
    4.2.2 The empirical approach 86
    4.2.3 The empirical evidence 90

5 Conclusion 106

Bibliography 111
Index 119

Preface

As an undergraduate student in the 1990s I attended a course in macroeconomics that offered the exciting prospect of teaching the determination of foreign exchange rates. Foreign exchange rates, I knew, were pinned down by a cobweb of myriads of trades and traders spanning the whole world, 24/7. Extracting the laws according to which these hundreds, maybe thousands, of people set foreign exchange prices seemed to me to be beyond human comprehension. And yet, economics had an answer, the macroeconomic syllabus promised. When the magic lecture finally commenced, the professor – after some elaboration on international bond arbitrage – came up with an equation like this: s_{t+1} = i_t − i_t*. There is very little that could describe the scale of my surprise, if not outright disappointment. Should it be possible to squeeze all those individuals into one simple function like that? No, it was not, we were comforted, since a no-less-magical ε_t would take care of all “random” individual mishaps by simply adding it to the otherwise perfect answer: s_{t+1} = i_t − i_t* + ε_t. Still gasping for air, I found the lecture proceeding too fast and in too authoritarian a manner for any critical reflection to take root. After all, scribbling down at high speed and swallowing what was served proved to be the most efficient way of preparing for the fast-approaching exams. Nevertheless, the initial shock never fully waned.

So, fifteen or twenty years later, a long night came when I wrote down the equation once again and started to think about ε_t. What if we do not squeeze all the traders into it, but free the traders from their distributional chains? The answer was both a revelation and a curse. In fact, never had I come closer to the feeling of having to choose between Morpheus’ red pill and blue pill. The shining blue ε_t with all its perfectly shaped probability distribution functions had all the advantages of a reputable, admired default choice. Conferences are organised in its honour and prestigious keynote lectures by the industry’s most renowned scholars address its beauty.


And yet, once having started to think about it more thoroughly, the reflection began to create this uneasy feeling of the default being, in the words of Morpheus, a “prison for your mind”. Eventually, I went for the red pill. This choice was the right choice. Immediately, everything fell into place, puzzles evaporated and tricky econometric problems turned into necessities instead of nasty, unruly bastards. However, it was – and still is – a curse because it became immediately obvious that taking the red pill was enough to secure a place on the outskirts of, or rather in a parallel universe to, the economic mainstream. Nevertheless, asking this naive (or silly) question about ε_t has since triggered a long journey into the history of economic thought, and this book is about the answers that I have found, be they my own or those that others have come up with before me.

The answer that surprised me most was the insight that swallowing the red pill does not and cannot lead to a war of good against evil. Though taking the side of uncertainty affords a deeper understanding of the economy and of what economists do, it does not give rise to the view that mainstream economics is inferior or obsolete as such. Instead, Uncertainty and Economics shows where the current mainstream approach is located within the universe of understanding economic decision making.

All along the journey, I met mountains of other works that have addressed the issues of uncertainty and several others that I am discussing. An inevitable insight of looking into these earlier contributions was that there is much, much more material than could possibly be covered. I am aware of some of the missing references and desirable in-depth discussions, but surely not of all of them. I therefore feel responsible for all missing due credits – known or unknown to me – and am grateful to everyone who has supported me in this endeavour.

Acknowledgements

I am indebted to Ulrich Busch, Michael Graff and David Hendry for giving me many opportunities over the course of several years to share and develop my ideas that finally made it into this book. Several anonymous referees provided invaluable inputs. Of course, all errors are mine. The editor and his team did an incredible job helping me to improve earlier drafts of the manuscript. My thanks also go to my beloved wife for her patience and encouragement over the course of the genesis of this book. I apologise to my two clever daughters for not writing a collection of fairy tales, despite their repeated demands.

Introduction

Economic cycles and economic crises belong to the defining moments in economic history because they affect our sense of economic security and level of welfare at large. They also serve, at least implicitly, as tests of our understanding of the economy as well as our ability to draw the right policy conclusions from our economic theories. No wonder, therefore, that economists have long sought to understand severe economic fluctuations with the ultimate goal of steering the economy clear of their troubled waters. To that aim, economists, not least due to John Stuart Mill’s ingenious work, have long ago boarded a particular ontological train with the ambitious goal of keeping up with the natural scientists’ positive-deductive race to uncovering the truth about the world around us. If we only had sufficient knowledge of the machinery we would know how to stop crises from recurring, or so the logic goes. Alas, the financial crisis that started to unfold in 2007 once again reminded us that economists are still a long way from safeguarding the economy from severe difficulties. As one of many consequences of the crisis, the long-abandoned idea of “uncertainty” once again gained currency among economists. Of course, die-hard (Post-)Keynesians and other heterodox economists had never quite given up the concept of uncertainty, but this time around it became quite fashionable even among mainstream economists. Leading journals such as the American Economic Review now seem to take pride in featuring articles with “uncertainty” or “ambiguity” in the title, abstract or list of keywords. However, unless uncertainty is allowed to operate not merely as an epistemological fig leaf but as an ontological guide for economic analysis, no fundamentally new insights can be gained. This observation is the first major topic of the book. Since uncertainty is now used in economics in such a great many different ways, it is crucial to first clarify its actual meaning and the meaning of rival concepts. Maybe surprisingly, formal definitions using the language of statistics are rarely available, if at all. Therefore, these definitions are provided in a rather formal manner using standard terminology. In this first part, further basic concepts such as institutions, ambiguity, risk and events as well as their mutual relationships are defined. Although starting with a list of definitions naturally runs
the risk of “putting off” some readers, there is a lot to gain from this rather dry section as these definitions really deepen our understanding of the key issues.

Next, we make a truly innocent-looking assumption: humans cannot be emulated by humans or by non-humans. In other words, it is impossible to build a machine or software code or anything else that would replicate human behaviour in its entirety. It is therefore up to humans and humans alone to generate completely unpredictable states of the environment, an ability which leads to what we will call uncertainty. This assumption will provide the main thrust for all future arguments. Under this single assumption, it turns out, uncertainty must be regarded as the key driver of economic progress and economic crises alike. Although uncertainty poses an as-yet-unresolved analytical challenge, the evident presence of chance and uncertainty in life implies that there is a lot to learn from actual human behaviour because humans must have developed various strategies for coping with uncertainty. A typical economic approach to uncertainty uses the rational expectations and efficient market hypotheses to conceptualise these strategies. Therefore, these hypotheses will be revisited. In a backward loop, uncertainty will be looked at as a main reason for the creation of institutions.

Adopting uncertainty in economics inevitably leads to serious challenges for the dominant ontology and epistemology of economics. Digging into these grounds reveals that a silent paradigmatic change is already under way. We illustrate this shift by looking at landmark economic theories and their econometric implementation, the “standard” DSGE approach. The new paradigm gives up the positivist idea of a universal truth and of objective economic laws, thus giving way to a constructivist perspective of economic rules that emerge from individual, subjective interactions. This perspective allows, among other things, the application of the famous Lucas critique to itself. We conclude that economists stand a chance of taking full advantage of the new paradigm if they consciously embrace uncertainty and its implications for economic analysis and methodology.

The second part will then discuss various ways of using the concept of uncertainty for economic policy analysis. In particular, the economics of institutions, money and fiscal policy are addressed because they are – after all – still the main policy instruments for weathering the business cycle.

Finally, we develop a test for the presence of uncertainty in financial markets in the third part. This test is based on the definition of uncertainty and makes use of the taxonomy of uncertainty also suggested in the first part. The test approach makes use of the fact that under the null of “no uncertainty”, more or less standard econometric procedures are applicable, while under the alternative of a higher-order uncertainty, they are not.

1 What's uncertainty, after all?

Although the term uncertainty is used very frequently, and with seemingly complete confidence in its common understanding, surprisingly little effort is usually spent on defining it properly. This is in stark contrast to concepts like randomness or determinism, for which an undisputed common ground exists. This section, therefore, reviews key terminology. In particular, we will consider definitions for institutions, events, risk, ambiguity and uncertainty. Careful scrutiny of these terms will simplify the later analysis.

1.1 Institutions, events and actions

We start with a few definitions other than uncertainty in order to operationalise the definition of uncertainty that we will later employ. The first definition is about institutions. Makeshift social institutions organise the lives of humans. These institutions vary in their importance, prevalence, flexibility, range and the degree to which they affect everyday life.1 We will use the following more specific definition of institutions due to Hodgson (2006, p. 2).

DEFINITION 1. Institution. An institution is a set of established and prevalent social rules that structure social interactions.

Since institutions have to be established, they are amendable. This distinguishes them from the laws of physics or purely biological or genetic rules which humans also obey. Non-adjustable rules are by definition beyond the reach of humans. Institutions are commonly labelled according to the aspects of life they organise. Institutions are thus, for example, the nation state, the family, markets or common laws, and may again be thought of as belonging to meta-institutions like politics or the economy.

Institutions organise social interactions. These interactions can be triggered by decisions or actions initiated within or from outside institutions. In general, interactions change the state of the environment, where the state of the environment is considered as the sum of its features at a given point in time. Anything that changes the state of the environment will be called an event.

In order to distinguish between events triggered by humans and all other events, we will use the term action.

DEFINITION 2. Event. An event changes the state of the environment.

We hold that properties of a state of the environment also include events such as strikes or voting, which help in describing the properties of an environment in a meaningful way. If, for example, an ongoing strike is a feature of a particular state of the economy, then this strike is at the same time a property of this state and an event that changes this state. In other situations, we may define a static state such as a resting die, and then consider the effect that casting a die has on the state of the environment. Including events in the list of properties of the state of the environment also accounts for the fact that events trigger other events and that an exact, timeless snapshot of a society always has to account for the momentum of its subjects and objects. If humans are involved in an event we call this event an action.

DEFINITION 3. Action. An action is an event triggered by a human or a group of humans.

With these definitions we can now see that institutions, while being subject to events, initiate and shape actions. These actions are subject to human considerations, judgements and preferences. For example, an event could be a drought which might lead to raising government expenditures in order to support farmers, which we call an action. In that particular case the event might be considered to originate from outside institutions. Alternatively, one might argue that this event is a consequence of man-made climate change, in which case it is itself caused by institutions and is also an action. Therefore, the most important feature of actions will be the involvement of humans.

In what follows, we will make the following abstractions. We assume the existence of an initial state of the environment. This initial state refers to the part of the environment that is of interest to us. So, for example, we may focus on a game of roulette and disregard the place where this game takes place and who plays. The initial state may further be narrowed down to the moment at which the ball hits the moving wheel. At this moment, the initial situation is also characterised by the speed of the wheel and so on. The aim of our analysis now is twofold. First, we want to know the properties of the final state. In this simple example, the final state may be characterised by the number on the wheel the ball ends up with. The second goal would be to take a decision based on our inference. We will look at decision making in section 2.2.2.

It should be pointed out that the properties of the initial state very often also include knowledge about some ongoing event such as the spinning of the wheel. In fact, any event or action taken must be considered part of the initial state from which we deduce the properties of the final state. It follows that the final state can also be characterised by events and actions. Furthermore, inference about the final state also depends on the scope of the states, with the almost trivial observation that narrower scopes usually simplify the deduction.

1.2 Determinism, risk, ambiguity and uncertainty

As we will see later, "uncertainty" holds the key to understanding many economic phenomena. The actual meaning of uncertainty, therefore, requires a careful definition. Let us start with uncertainty's etymology. Uncertainty derives from the Latin certus, meaning "fixed", "sure", "settled". The negating "un" turns these meanings into their opposites. It follows that "uncertainty" simply refers to something that is not sure, fixed or settled. Uncertainty, it turns out, describes a state or a process ex negativum. We merely know what it is not. This observation prompts the question of which states or processes uncertainty does not describe, because all other states and processes can logically be referred to as uncertain or as uncertainty. We will categorise these sure or fixed states and processes as deterministic, stochastic and ambiguous. The overarching objective of these distinctions is to obtain a nuanced view of the conditions under which people make decisions.

1.2.1 Determinism

In the most general sense, determinism describes a process by which the properties of a state can be perfectly deduced from knowing the properties of another state.2 Determinism can thus also be understood as a process that deduces the properties of an unknown state from the properties of known states. Physicists, for example, aim at describing the whole world as a sequence of states in which each state is the result of all past events going all the way back to the beginning of the universe. As a consequence, in a perfectly deterministic world the infinite future can be predicted by knowing the initial state. We will use determinism in a narrower sense, however. Determinism in the narrow sense describes a process by which the properties of a particular state can be used to deduce the properties of another state which is unique. This process does not need to be invertible.

DEFINITION 4. Determinism. Determinism is a process that deduces the properties of a final state from the properties of an initial state. The final state is unique.

For example, spilling water that wets the floor is a deterministic process. We can deduce from a force that is exercised on a bucket filled with water (initial state) that the floor gets wet (final state). Uniqueness here means that the final state is known for sure; no other outcome is possible in its stead. The impossible outcome in this example would be a dry floor. Invertibility would imply that the wet floor permits the conclusion that a force has been applied to a bucket with water. However, other deterministic processes such as rain might also have caused the wet floor.
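
To make Definition 4 concrete, the following minimal sketch (my own illustration, not taken from the book) encodes the wet-floor example as a deterministic yet non-invertible mapping from initial to final states; the state labels are hypothetical.

```python
# Determinism in the sense of Definition 4: every fully described initial
# state maps to exactly one final state. The mapping need not be invertible.
# Illustrative sketch only; the states follow the wet-floor example above.

FINAL_STATE = {
    "force applied to a full bucket": "floor is wet",
    "rain falls onto the floor": "floor is wet",
    "bucket left untouched, no rain": "floor is dry",
}

def deduce(initial_state: str) -> str:
    """Deduce the unique final state from the initial state."""
    return FINAL_STATE[initial_state]

# Uniqueness: each initial state yields exactly one final state.
assert deduce("force applied to a full bucket") == "floor is wet"

# Non-invertibility: observing "floor is wet" does not identify its cause,
# because two distinct initial states lead to the same final state.
wet_causes = [s for s, f in FINAL_STATE.items() if f == "floor is wet"]
assert len(wet_causes) == 2
```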

A rather special deterministic process is also worth mentioning: complex systems. Though completely deterministic, complex systems exhibit many features that make them look very much like stochastic or even uncertain processes. The similarity arises from the fact that even very minor changes to the initial conditions may trigger a response of the system that pushes the outcome far away from its initial state. Moreover, complex systems very often do not even permit any reliable statements about what direction of shift is induced by the change in the initial conditions, due to the intractability of the system. Economists were early to notice the relevance of complex systems for economic analysis (Alchian, 1950; Sonnenschein, 1973; Mantel, 1974; Debreu, 1974; Ormerod, 1999, without any claim to completeness). Complex systems arise, for example, by "simply" aggregating individual demand curves (Sonnenschein, 1973). Although it may seem tempting to liken complexity to risk or even uncertainty, it has to be stressed that complex systems remain purely deterministic and that all their seemingly non-deterministic properties are merely due to the fact that, so far, we lack the means (information) of uncovering their exact mechanisms. To be very precise on the last point, complex systems in economic models may give rise to multiple solutions, or multiple equilibria in the language of economists (Cass and Shell, 1983). This multiplicity seems not to fit the definition of determinism. However, a close inspection of complex systems shows that the particular, unique solution can indeed be determined if the initial state of the system is exactly known. Handling complex systems therefore hinges on knowing the initial state and all dependencies within the system.

The major restriction that narrows down the general case to the definition we are using for capturing determinism is thus the uniqueness of the final state. If, by contrast, the final state is not uniquely defined we deal with chance, ambiguity or uncertainty.

1.2.2 Chance, risk and ambiguity

Risk and chance

Assume now that rain drops from the sky. Many things could happen which would hamper the water reaching the floor. A sudden breeze could carry the rain away or it could evaporate on its way down and so on. In consideration of these possibilities the final state would not necessarily be fixed, because the floor could be wet or not depending on the intervening factors. Deducing the properties of the final from the initial state now includes several, i.e., non-unique, final states. The degree of non-uniqueness is fixed. Colloquially speaking, the outcome is random.

A closer look at the example calls for a different explanation, however. It could be argued that with a sufficiently complete description of the initial states and knowledge about the physical and chemical laws one could actually accurately determine whether or not a given patch of floor will get wet. Therefore, this kind of randomness is just randomness out of convenience, with convenience referring to the fact that studying the complexity of all relevant initial conditions and natural laws of a deterministic process may be too costly. Nevertheless, for many practical purposes, randomness is a satisfactory concept for describing the outcome of events.

Beyond this simple example, genuine randomness also exists. This genuine randomness occurs at the micro scale, for example, when matter seemingly assumes two states at the same time. According to the Heisenberg uncertainty principle, matter's actual state can only be pinned down by actual observation, with the striking implication that mere observation affects the state of the matter. What might seem very odd and irrelevant for daily life is actually relevant when it comes to secure data transmission or to calculating the costs of nuclear waste disposal, because random decay determines the time the waste has to be stored in safe (and hence expensive) conditions.

Random or stochastic processes may thus yield multiple outcomes. A statement about the relation between initial and final states can nicely be comprehended by so-called frequentist statements. Continuing the above example, such a statement could read that, on average, one out of ten raindrops will reach the floor. Stochastic calculus can then be used to give the odds that the floor will be wet after rain. Two features of randomness stand out. First, determinism is a special case of randomness because deterministic processes generate particular states with probability one. Second, randomness thus defined yields quantifiable odds. Whenever it is possible to actually calculate the probability of a certain state or states, economists usually refer to it as risk or chance.

DEFINITION 5. Risk. Risk is a process that deduces the properties of a final state from the properties of an initial state. The final state is not unique. Each possible final state has a known (simple risk) or knowable (common risk) probability of occurrence.

In the domain of economics the concept of risk is used to determine the probability of a certain stock price or a foreign exchange rate, for example. Business cycle analysts may also try to gauge the rate of expansion of an economy. To this aim, they often use statistical methods for learning about the factors that drive the economy. In all these instances these methods are essentially about calculating probabilities for the states of interest.

Ambiguity

It may happen that the probabilities of states cannot be determined, however. In his treatment of entrepreneurial decision making, Knight (1921) draws the following famous distinction based on the exceptionalism of events.

The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique.
Knight (1921, p. 233)

It is worthwhile noticing that Knight does not rule out the possibility that a probability distribution for a certain outcome exists. By stressing that it is the uniqueness of an event – yet not the absence of a probability distribution – that prevents us from deducing probabilities, most economists have concluded that the states under consideration do indeed follow a probability distribution. This assumption is an important restriction because it restrains the universe of potential states. In the following we will refer to so-called Knightian uncertainty as ambiguity.

DEFINITION 6. Ambiguity. Ambiguity is a process that deduces the properties of a final state from the properties of an initial state. The final state is not unique. Each final state has a probability of occurrence. At least one final state has an unknown probability of occurrence.

Ambiguity, therefore, requires first that the set of final states is closed, i.e., each state that may logically be a consequence of the initial situation is defined and known. Second, each of those final states follows a probability distribution. The properties of some moments of the probability distribution are unknown and not knowable, however.

Ellsberg (1961) offered a superb example of ambiguity. He ran an experiment which required participants to draw from one of two urns which each contained balls of two different colours. For one of the two urns the ratio of the two colours was given and known to all participants, while for the other, from which a ball was to be drawn, the ratio of the two colours was not known. Participants were told, however, that the second urn contained balls of the same two colours as the first urn in a fixed proportion. Apparently, by drawing only once there was no chance to ever learn the odds of the colours in the second urn, while at the same time it was known that a probability distribution function for picking a certain colour existed. Ellsberg (1961) used his example to demonstrate that actual decision making under ambiguity differs from rational choice under risk. Savage (1951) had described utility-maximising choice allowing for subjective probabilities or beliefs held by individuals. Following that approach, he could devise a strategy for eliciting the individual probabilities by observing actual choice. The Ellsberg paradox shows, however, that Savage's (1951) axiom "P2", which is needed for distilling the subjective probabilities, is systematically violated. This axiom essentially says that if two choices are equal with respect to an outcome, then it should not matter how (i.e., in which colour) they are equal on that outcome.

Schmeidler (1989) and Gilboa and Schmeidler (1989) suggested two axiomatic theories, the Choquet Expected Utility (CEU) and the MaxMin Expected Utility (MMEU) approaches respectively, for modelling decisions under ambiguity. Both theories also rest on the Bayesian method of decision making that starts with maintaining a prior (or multiple priors in the case of MMEU) probability distribution function for the outcome of one's choice. This prior belief is updated when more information becomes available. The great advantage of CEU and MMEU over conventional Bayesian decision modelling is their allowance for non-additive probability functions. With non-additivity the Ellsberg paradox can be reconciled. Bayesian decision making theories are now standard in game theory and microeconomics at large. At the same time, however, it is only due to the financial crisis of 2007/2008 that interest in ambiguity has resurfaced among macroeconomists.

Unfortunately, many economists now tend to mistake ambiguity for genuine uncertainty. In a working paper by two European Central Bank economists that surveys the recent literature on uncertainty in economic research, this (mis-)labelling becomes evident from the following quote: "uncertainty could be either the range of possible outcomes of future economic developments (type I), and/or the lack of knowledge of the probability distribution from which future economic developments are drawn (type II)" (Nowzohour and Stracca, 2017, sec. 2.2). In this view, the fact that the assumption that certain events follow a probability distribution represents a considerable restriction on the possible outcomes – or, to put it the other way round, a considerable information advantage over a situation in which a distribution function does not exist – goes largely unnoticed and undisputed. But it is precisely this implicit informational lead that makes ambiguity so much less interesting than genuine uncertainty from a research point of view. The final step, therefore, remains to drop the assumption of a probability distribution function.

1.2.3 Uncertainty

So far, we have dealt with processes that allow us to infer the properties of a final state from an initial state. Moving from determinism to ambiguity, the precision with which we can determine these properties decreases. In the case of ambiguity we merely know that the final states follow a probability distribution function. However, we are not able to make statements about the odds of occurrence or any higher-order moment of these probabilities.

What seems to imply total ignorance about the final states is, in fact, a serious and useful restriction on the potential outcomes. For instance, if we know that the final state is a real-valued number between zero and ten, albeit with probabilities that are not knowable, we not only know that the final state cannot be eleven but also that it cannot be a duck. For real-life situations such knowledge can be invaluable. Consider a potential investor in Nokia shares in the late 1970s. She would have been well-advised to predict the rise of mobile telecommunication. However, it is highly unlikely that mobile telecommunication had been part of the states over which she did form a probability distribution function. The reason – in this particular example – simply is the incapability to forecast actions such as the emergence of basic innovations.

In defence of his famous 1936 book "The General Theory of Employment, Interest, and Money", Keynes (1937) provided the following well-known illustrations of situations in which a probability distribution function does not exist.

By "uncertain" knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.
Keynes (1937, pp. 213f)

The definition of uncertainty thus stresses the fact that some final states are themselves unknown, to the extent that it is also impossible to define a set of residual states for which probability distribution functions could exist. This observation leads to the definition of uncertainty.

DEFINITION 7. Uncertainty. Uncertainty is a process that deduces the properties of a set of final states from the properties of an initial state. The final state is not unique. One or more final states are not known altogether.

This definition of uncertainty implies that the deduction process recognises that the properties of the final state(s) remain partially unknown. It does not imply, however, that uncertainty cannot be applied to imaginable – and hence knowable – events such as "a revolution in China or the detonation of a nuclear bomb in mid-town Manhattan", which we may add to Keynes's examples (Derman, 2011, p. 154). It rather means that those imaginable events cannot be associated with a probability distribution function because their relevant alternatives are not knowable. In order to form a probability distribution function about a revolution in China, the set of all possible states other than a revolution that may derive from the current state would have to be known. Only if that is the case is it theoretically possible to calculate the odds of a particular, imaginable event such as a revolution. Therefore, the key conclusion of deduction is to recognise the limits of one's knowledge. Uncertainty, therefore, is the most general form of deduction, one that comprises all of the more specific determinism, risk and ambiguity.
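
The nesting of the four types of deduction can be summarised in a small schematic classifier. This is my own sketch based on the criteria of Definitions 4 to 7, not a device proposed in the book.

```python
# Classify a deduction problem by what is known about its final states,
# following the criteria of Definitions 4-7. Illustrative sketch only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Deduction:
    final_states_knowable: bool       # is the set of final states closed and known?
    n_final_states: Optional[int]     # None if the set cannot be enumerated
    all_probabilities_knowable: bool  # does every final state have a knowable probability?

def classify(d: Deduction) -> str:
    if not d.final_states_knowable:
        return "uncertainty"   # some final states are not knowable at all
    if d.n_final_states == 1:
        return "determinism"   # unique final state
    if d.all_probabilities_knowable:
        return "risk"          # multiple states with known or knowable probabilities
    return "ambiguity"         # closed set of states, some probabilities not knowable

assert classify(Deduction(True, 1, True)) == "determinism"     # spilled bucket
assert classify(Deduction(True, 6, True)) == "risk"            # cast of a die
assert classify(Deduction(True, 2, False)) == "ambiguity"      # Ellsberg's second urn
assert classify(Deduction(False, None, False)) == "uncertainty"
```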

What’s uncertainty, after all? 11 The three main types of deductions are very often associated with properties of events or states. For example, Keynes has said that “the prospect of a European war is uncertain”, or it could be said that the cast of a die is “random” (i.e., stochastic). Statements like these actually describe an initial state that requires a deduction according to, respectively, uncertainty or risk. The “uncertain event” or “stochastic event” must be regarded a property of at least one known element of the set of final states. Other common statements combine the type of deduction with “shock” such as “uncertainty shock”. An uncertainty shock should be regarded as an event that generates uncertainty. This event thus triggers deductions of the uncertainty kind. In fact, an uncertainty shock does not mean that the trigger event necessarily occurred unexpectedly because that condition would not make sense under uncertainty. It does mean instead that the initial state is changed such that risk processes turn into uncertainty processes and uncertainty processes are revised or added to the list of deductions to be made. As an example, one may consider the financial crisis of 2007/2008. In its wake a sovereign debt crisis emerged which was pretty well predictable once the money markets had dried up. Therefore, this sovereign debt crisis cannot be considered a shock in the stochastic sense, in which a shock simply is defined as a surprise or the deviation between expected value and realisation. It nevertheless was an uncertainty shock because people now had to deduce the future of the European economies knowing that the sovereign debt crisis had altered the current economic environment in a way that some earlier deterministic relations had to be revised, such as the link from the former situation which implied the exclusion of bond purchases by the ECB or the stability of the European monetary union. Before that crisis most people would have held that over the time horizon of a year or so, the existence of the Euro currency was out of the question. With the crisis in full swing people started instead to think about what potential other formerly unknown unknowns would soon become known unknowns and what new unknown unknowns would affect the European economies. It is in this sense that we are going to refer to (uncertainty) events and (uncertainty) shocks. Figure 1.1 describes the implied taxonomy of uncertainty as interlaced boxes with the most specific kind of deduction cornered in the lower left. The boundaries of the boxes represent the limits of knowledge that can be gained with the respective approach. Thus, the most general process, uncertainty, is free of any boundaries because it is impossible to box the not-knowable. It is instructive to relate the various boxes to science. Physicists, for example, predominantly operate under the assumption that all states of the universe derive from deterministic and partly from stochastic processes. Probably the most fascinating feature of this assumption certainly is that it permits to test its own validity up to a point. Experiments and statistical tests can actually prove hypotheses in the former and reject hypotheses in the latter case. If a theory is rejected, physicists will eventually abandon that theory and shift to another. In some cases, the whole paradigm of analysis must be re-drawn, as

12

What’s uncertainty, after all?

Figure 1.1 Taxonomy of uncertainty

the example of Einstein v. quantum theory has vividly shown. These shifts notwithstanding, physicists comfortably operate within the two inner boxes because the basic principle, that identifiable cause-effect relationships permit the ex-ante explanation of all phenomena once the initial states are known, can be maintained. To illustrate the physicists’ approach, consider weather forecasting. Weather forecasts generally rely on purely deterministic models which account for chemical and physical laws that do not allow for randomness. Whenever there is talk of “risk” of rain, or sunshine for that matter, these risk assessments are based on variations in the assumptions that are necessary because of insufficient knowledge of the initial state. This insufficiency may result from patchy measurement networks or from as-yet-undiscovered cause-effect mechanisms. Cosmologists apply basically the same methodology in order to explain the evolution of the universe from its very beginning to its predicted end. In other words, physicists’ deduction is well represented by the second-largest box, which is common risk.3 Perhaps surprisingly, many economists pride themselves in applying the same methodological concepts. The reason for this focus is the prevailing desire for an internally consistent empirical modelling framework as advocated by Lucas (1976), for example. Because only mathematical rigour can ensure that economic models are coherent and internally valid, the physicists’ method of model building also appeals to economists.4

What’s uncertainty, after all? 13 For example, Robert Lucas, Laureate of the Nobel Memorial Prize of the Swedish National Bank, who is probably best-known for the so-called Lucas critique, has been quoted as saying that economics too deals with probabilistic events at best (Mason, 2016, p. 5). In the wake of Lucas’ critique the real business cycle revolution created a strand of literature which views human decision making as a system of equations that completely describes the major pattern of economic dynamics quite as cosmologists describe the history, presence and future of the universe. In contrast to physics, however, rejecting hypotheses turns out tremendously difficult in economics because it is always possible to “excuse” a rejection with data problems, limits to human comprehension, econometric issues and many other factors. In fact, economic hypotheses can be very “sticky” in the sense that no matter how often researchers have found evidence against them, economists nevertheless maintain them due to their overwhelming theoretical appeal. Probably the best-known example of such sticky hypotheses certainly is the efficient market hypothesis. This hypothesis is methodologically so convincing that irrespective of the actual empirical evidence the consensus among economists remains that somehow it must be true to some extent. The alternative to hypothesis testing – experimentation – is also not helpful in economics, at least not for macroeconomic problems. Not being able to straightforwardly reject certain hypotheses in the light of counterweighting observations is at odds with Friedman’s (1953b) call for a positivist methodology in economics and as such proof that the emulation of physics in economics fails. Fortunately, however, in many instances, as we will see in the next chapter, uncertainty can provide the missing link that is able to reconcile sticky hypotheses and the empirical evidence. It has to be stressed that choosing the positivist approach in economics despite all odds comes at a high price. The price is the exclusion of uncertainty from economic models and the analysis of human decision making at large.5 The reason is easy to see and has, for example, been pointed out by Crotty (1994): In face of the not-knowable it is not even theoretically possible to calculate expected utility, the foundation of modern economic decision theory. Human choices could not be described as actions which result in the maximum attainable utility. Therefore, any analysis, including so-called micro-founded macroeconomics, that preys on this assumption ultimately collapses. Of course, if uncertainty played only a negligible role in real life, then this exclusion is a minor and negligible issue itself. The next chapter, therefore, explores the actual significance of uncertainty for humans. Before turning to the significance of uncertainty for economics we will first look at structural mechanisms that generate uncertainty.

Notes

1 To some extent institutions are shaped by purely biological factors. For the sake of simplicity these factors will be ignored.
2 Keynes (1921, p. 8) refers to determinism as knowledge: "The highest degree of rational belief, which is termed certain rational belief, corresponds to knowledge. We may be said to know a thing when we have a certain rational belief in it, and vice versa." (Emphasis as in the original.)
3 It is beyond my expertise to decide if common or simple risk better describes the world of physics. But this distinction does not matter for the discussion below.
4 Although it might seem obvious that mathematical rigour should be a "conditio sine qua non" for any scientific approach to economics, one should also note that other social sciences and the humanities, such as sociology or psychology but also medicine, do not predominantly operate with mathematical models. Despite the use of other methodologies it would be hard to argue that all these fields lack rigour, or are unscientific. Therefore, "rigour" can come in many disguises, and going beyond pure mathematical model building will not spell the end of a scientific approach to economics.
5 Wilkinson and Klaes (2012) consider probability an integral part of "standard economic models" and behavioural economics models alike.

2 Uncertainty in economics

This chapter deals with the origins of uncertainty. We argue that uncertainty is an indispensable part of daily life and hence of economic scrutiny. In addition to the so-far-encountered examples of uncertainty being the “not-knowable”, this chapter clarifies the deeper reasons for these examples to arise in the first place. The most important driver of uncertainty, it is argued, is given in the main assumption, while institutions serve as uncertainty management devices. Having formally defined uncertainty, this chapter demonstrates that uncertainty fits very well into major economic concepts and does, in fact, have the potential to close serious gaps in economic theories. In particular, uncertainty can be considered the missing link between the theoretical concept of rationality and empirical “irrationality”. It also offers clues to how decision making theories should be devised.

2.1 The origins of uncertainty

2.1.1 The main assumption

At the heart of uncertainty lies an agnostic demon. "We just don't know", postulated Keynes. Science, however, is about learning and understanding, which is pretty much the opposite of not knowing. No wonder, therefore, that Keynes' dictum is prone to be ignored or circumvented in more or less elegant ways. Not accounting for uncertainty may, however, result in severe confusion about what we do indeed understand about the economy. In the financial crisis of 2007/2008 the demon lashed out at this ignorance and challenged the credibility of the whole economic community by laying bare economists' incapability to prevent the crisis.

This book's main assumption is that humans cannot be emulated by other humans, not even with the most sophisticated machines. This assumption is based on the observation and experience of limitless human creativity, as witnessed not only by collections of art and libraries full of genuine contributions to science, poetry, music and literature but also by everyday conversations between humans, in which literally no single exchange is a copy of the past. We will use these observations to induce a principle of genuine individualism in all human thinking and decision making. It is, of course, no coincidence that this principle also pertains to the methodological individualism that every first-year student of economics is made familiar with. It essentially maintains that all individuals have their own preferences, judgements and desires. Likewise, the humanist view that no human is worth more or less than any other human, but is equipped with the same rights to live and prosper, may also be cited in support of the main assumption. For reasons that will become apparent later, we nevertheless emphasise the fact that our main assumption is an induced working hypothesis derived from experience.

A first and very important implication thus immediately follows: no matter how hard one tries, it is impossible to "form groups of instances" of humans that would completely describe all of these humans' properties, whether in deterministic or in stochastic terms. This is not to say that some properties such as body height or purchasing propensity could not be inferred under very specific circumstances. But it is to say that humans always possess the ability to change themselves – their wants, needs, preferences, their fate.

Another striking implication derives by contradiction. Under our main assumption the so-called singularity (Good, 1966; Vinge, 1993) in artificial intelligence (AI) does not exist. This "singularity" is the moment at which machines become conscious of themselves and thereby turn into what some fear as being some kind of super-humans. Now, let us consider two possibilities. For one, let us assume that this singularity might occur at some time in the future. If we make this assumption, there would be no way of ruling out that this point had already occurred in the past and that we, the human race, are the product of some machine that gained consciousness at some moment long past. Only by assuming that this AI-singularity does not exist can we maintain our main assumption that it is not possible to emulate humans by other humans, not even with the most sophisticated machines. For the time being, we will work with this hypothesis until proven wrong.1 At the same time, this is not to say that machines may not be superior in accomplishing certain tasks, such as playing chess or Go, for example. The crucial difference the main assumption implies is that humans are capable of idiosyncratic creativity.

2.1.2 Uncertainty: a human trait

Human creativity does not only enable humans to recognise their environment and deduce its properties; above all, it equips them to actually shape and make their world. There are at least two important processes by which humans create the not-knowable as opposed to stochastic events.

The first process may be called reflexivity (Soros, 2013). Reflexivity relates to the fact that humans form views and models about facts and events that "can influence the situation to which they relate through the actions of the participants" (Soros, 2013, p. 310).2

Though it may seem as if this ability is exactly what made Robert Lucas come forward with his critique of the macroeconomic modelling of his time, the consequences reach far beyond what is covered by model-consistent expectation formation. The Lucas critique demands that economic modellers account for the implied outcome of agents' actions (policy conduct) at the very outset. True reflexivity expands Lucas' argument beyond the boundaries of any model. The reason is again twofold.

First, unless literally every single human uses exactly the same model up to some stochastic variation, the individual responses will differ in an unpredictable manner. The main assumption rules out purely random variations of the model across all individuals. However, ruling out random variations of the model also rules out random model-consistent responses. Therefore, the outcome would be not-knowable. To turn the argument around, only if it were possible to somehow force all agents to subscribe to a certain model, or a certain type of model, could model-consistent behaviour of all agents (up to some stochastic variation) be expected. As we will see below, this subscription seems very unlikely when considering that economic interaction is not equivalent to planets and atoms flying through time and space.

Second, reflexivity also implies that humans can question the model itself. Being jailed behind the bars of a particular model that rules essentially all options is not the way to operate a profitable business in the long run.3 In contrast to an object falling from some height, forced by the inevitable laws of gravity, agents will relentlessly challenge the very rules of the game, especially if the current rules lead to a crash. Therefore, while a falling object cannot but "wait" until the moment of impact, humans would try to change the law of gravity. They are able to do so because the rules of business and economics at large are a far cry from natural laws; they are institutions that are meant to be amended if need be. Again, the direction of amendment depends on the properties of the humans involved, and since they are genuinely individualistic by assumption, this direction is, in principle, not predictable.

The second process is transformativity. Transformativity means that economists and economic agents at large constantly change the object of economic analysis. Transformativity can therefore be regarded as the other side of the positivist coin. It is very close to pure irony that the influential advocate of positivism in economics, Milton Friedman, was also a major transformist. There are at least two remarkable achievements of his that illustrate this point. The first one is the general move from fixed to flexible exchange rates. Friedman's article "The Case for Flexible Exchange Rates" exercised an enormous influence not only on economists but also on policy makers, as it delivered the scientific pretext for abandoning the Bretton Woods system. Tellingly, this article appeared as one of his "Essays in Positive Economics", but the analysis quickly assumed a de-facto normative character and eventually helped inspire the rise of the era of flexible exchange rates. In other words, economics itself can, at least partly, be considered "an engine, not a camera", as MacKenzie (2006) argues.


MacKenzie (2006, chap. 6) further cites yet another well-known example of economic transformativity. According to him, Friedman delivered a commissioned scientific analysis that argued for the need to establish a market for foreign exchange futures which later became the International Monetary Market of the Chicago Mercantile Exchange.4 Examples of the transformative nature of economics abound: the publication of the Black-Scholes formulae pushed wildly fluctuating option prices into a comparatively narrow range around their theoretically “true” values (MacKenzie, 2006, p. 165). Money, too, is constituted only by the actual use of coins, notes, bits and bytes as money. There can be no abstract object that serves as money without people turning it into money by using it. Most significantly, transformativity is, of course, also evident in the focal object of economic analysis: markets. Markets are nothing but the result of human ingenuity. Therefore, market prices, for example, are first of all the result of human imagination and can hardly be assumed to obey natural laws. Consequently, economics itself cannot be regarded as a purely analytical science. It has the amazing and exciting property of shaping the object of its own analysis. This feature clearly distinguishes it from physics, chemistry, archaeology and many other sciences. While biologists, chemists, engineers, physicists and many more are very well able to transform whole societies by their discoveries and inventions – like penicillin or the internet – the laws of nature they study remain unaffected by these inventions.5 In economics, this constancy of the object under study just does not exist.

Transformativity, however, is not restricted to economics as a science. Any agent in the economy, and as such the object of study of economists, has the power to transform the economy; some to a greater, some to a lesser extent. As an example, consider that a key ingredient of the financial crisis was the invention of the so-called securitisation of assets, their slicing and repackaging. This invention was the brainchild of some creative agent who thus expanded the rules of the mortgage and other credit markets. The world-wide spread of that instrument and similar ones eventually led to the enormous, largely undetected correlation of risk exposures that nearly crashed the world financial market. For economists especially it is straightforward to see that there were strong incentives for coming up with an invention of this profit-expanding kind. Yet despite understanding this pattern, most economists were ill-equipped to precisely predict the innovation or its later impact on the economy.

In a nutshell, human creativity systematically and unpredictably creates and changes the very scientific object economists are set to analyse: the economy and its laws of motion. By the main assumption that humans cannot be emulated it becomes clear that the very nature of humans generates the not-knowable and hence uncertainty as a distinct and undeniable feature of society and, therefore, of the economy. Any neglect of uncertainty in economic modelling and economic analysis at large thus bears the potential of grave analytical mistakes if not scientific oblivion.


2.2 Rationality, uncertainty and decision making

2.2.1 Subjectivity, objectivity and rationality

Rationality is a common and seemingly natural property of human decision making. In fact, it would be very odd to assume that humans behave irrationally as it would imply that they systematically make mistakes or work against their own interests. However, rationality is by definition also a relative concept. It requires knowledge of the optimal outcome of a decision for judging whether or not a certain behaviour is rational. This can easily be seen by invoking a common definition of rationality that is used in the world’s best-selling introductory economics textbook:6

Rational people systematically and purposefully do the best they can to achieve their objectives, given the available opportunities.
Mankiw (2011b)

The relativity of the concept enters via the terms “best” and “available opportunities”. In order for people to do the best, they have to know what the best is among the available opportunities. In other words, all opportunities must be known and individuals must be able to assign them an order. Both conditions are likely to be violated when uncertainty is considered. First, even if there are only limited opportunities to act, say to do something or not to do it, the eventual effect of the action, or non-action, may remain obscure. If this is the case, then, second, ordering the opportunities according to the degree to which they possibly help us achieve our objectives is also impossible. Given reflexivity and transformativity it seems very likely that the final outcome of one’s action always remains unknown to some degree. Therefore, rationality as a scientific behavioural concept demands careful treatment.

Rationality is a very important part of modelling choices (see 2.2.2 below) in economics. Without rationality, solutions to many decision problems remain unidentified, and the course of events hence unknown. Therefore, economists often make rationality the object of scrutiny in its own right. Researchers ask, for example, whether people behave rationally. In order to judge behaviour, researchers must, however, know at least as well as, if not better than, the test person what the options to choose from are and, equally important, they must know the ordering of these choices according to the individual’s preferences. Both requirements are very hard to meet.

There are largely two types of “rationality test”. The first measures an individual’s decision by this individual’s own standards. As an example, consider the Ellsberg paradox again (Ellsberg, 1961). Ellsberg showed that individuals systematically make contradictory decisions depending on the experimental setting, but not on the actual matter to be decided. Similar phenomena can be found when presenting decisions in varying frameworks where these frameworks have nothing to do with either the outcome or the available options.


A nice example of the latter kind is the evaluation of risk depending on how the risky choice is presented. Tversky and Kahneman (1981, p. 453) devised an experiment (the Asian disease) with the following scenario: “Imagine that the US is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed.” They then present two sets of alternative choices to two distinct groups of their subjects. According to the scenario, these choices were based on “exact scientific estimates of the consequences of the programs”. The first group of 152 participants had to choose between programs A and B:

A: “If Program A is adopted, 200 people will be saved.”
B: “If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.”

while the other group had to opt for either program C or D:

C: “If Program C is adopted, 400 people will die.”
D: “If Program D is adopted, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.”
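As the next paragraph notes, the two framings describe identical prospects. A quick expected-value check, using only the numbers given in the scenario, makes the equivalence explicit:

$$E[\text{lives saved} \mid A] = 200, \qquad E[\text{lives saved} \mid B] = \tfrac{1}{3}\cdot 600 + \tfrac{2}{3}\cdot 0 = 200,$$
$$E[\text{deaths} \mid C] = 400, \qquad E[\text{deaths} \mid D] = \tfrac{1}{3}\cdot 0 + \tfrac{2}{3}\cdot 600 = 400.$$

With 600 people at risk, saving 200 for certain is the same event as 400 dying for certain, so A and C coincide; B and D likewise share not only the same expectation but the same outcome distribution.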

Despite the different framing of the choices (A v B is framed “positively”, C v D “negatively”) both decision problems are identical (A=C and B=D). This identity notwithstanding, 72% of all respondents preferred A over B but only 22% preferred C over D, which is clearly at odds with rationality in this experimental setting. If observed, inconsistency of decisions may be counted as evidence for lack of rational behaviour, which very often earns the label “puzzle”, thus demanding a “solution”.

The second type of “rationality test” compares actual human behaviour to what is postulated as “rational” from the researcher’s point of view. For example, forward currency rates are modelled by means of interest rate differentials and the spot rate. If the modelled rate deviates considerably from the actual forward rate, economists consider this situation a result of irrational choices. These empirical deviations from theoretically sound models are usually referred to as puzzles. In fact, modern macroeconomics features quite a few puzzles; a standard count puts the number of major puzzles at six (Obstfeld and Rogoff, 2000). The trouble with the latter kind of “irrationality”, but also to a lesser extent with the former, lies with the superior knowledge about the best choice that the researcher must be assumed to possess in order to judge actual behaviour.

Consider the uncovered interest parity problem that is related to the forward premium puzzle already mentioned. The uncovered interest parity (henceforth UIP) condition can be cast as follows:

$$i^{*}_t - i_t = s_t - s_{t+1}$$

where $i_t$ signifies the interest rate on one-period bonds and $s_t$ the log of the price of the foreign currency in period $t$ (a month, for example) expressed in home currency units. An asterisk indicates foreign prices. The above relationship states that the difference in interest that could be earned by either investing in foreign or domestic bonds must match the change in the log-exchange rate in order to exclude arbitrage opportunities. Re-arranging the terms shows that concurrent values of the interest rate and the spot exchange rate identify the future spot exchange rate (see also 4.11 in section 4 on page 95):

$$s_{t+1} = i_t - i^{*}_t + s_t.$$

Unfortunately, despite enormous resources having been spent on investigating this relationship, very little evidence has been produced that would show this relationship to be of any reliable, empirical value. Sometimes, researchers find this relationship to hold, and sometimes not, depending on the particular sample chosen, on the country pairs, the margin of error tolerated and the frequency of sampling.7 This striking gap between sound theory and empirical evidence encapsulates in a nutshell the problems that arise whenever uncertainty is not accounted for.

To see this, one should observe that by the principle of rationality anyone who entertains the idea that UIP holds cannot deviate from it to any significant extent. This observation was first made explicit by Muth (1961), thus getting rid of all the complications implied by Savage’s (1951) subjective probability approach. In fact, while Savage allowed for individual views on the extent to which the above equation holds, Muth could show that all these individual opinions did not matter whenever one assumes that a certain (underlying) law such as UIP is generally valid. In this latter case the only rational choice is to accept the law and agents’ decisions will at least on average replicate the law. In other words, if UIP was true, then the only rational choice was to behave as if it was indeed true. Consequently, with all agents converging to the truth, the foreign exchange rate and interest rate data generated by those very agents should reflect the UIP law up to a negligible or stochastic margin. Muth’s approach of averaging individual guesses in order to approximate the underlying, true value is sometimes anecdotally justified by Galton’s (1907) study of a weight-judging competition in Plymouth. Galton noticed that the median of all weight estimates for a “fat ox” resembled the actual value up to a margin of just 0.8% (Galton, 1907, p. 450).8 It is important, however, to realise that this weight-judging is about an objectively defined issue and therefore not at all suitable for justifying Muth’s method as a general tool in economic analysis. In Galton’s competition, there is no way to influence the actual weight by guessing it, nor is there any other source of uncertainty, which is in striking contrast to many economic phenomena such as pricing financial assets. The most remarkable methodological aspect of much macroeconomic research can thus be seen in turning genuinely individual, subjective decisions into objective laws.
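To make the rearranged relationship concrete, consider a purely hypothetical numerical reading (the figures are invented for illustration): with a domestic one-period rate of $i_t = 0.03$, a foreign rate of $i^{*}_t = 0.01$ and a current log-exchange rate of $s_t = 0.50$, the condition asserts

$$s_{t+1} = 0.03 - 0.01 + 0.50 = 0.52,$$

that is, the foreign currency is predicted to appreciate by two log-percentage points: next period’s negotiated market price is supposedly pinned down by today’s observable, “objective” terms alone.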


In the above exchange rate example, this transformation can be seen when reading the rearranged UIP equation from left to right. On the left-hand side, the price of foreign exchange is the result of negotiations between at least two parties at time $t+1$. The outcome of these negotiations is subject to reflexivity and transformativity and, therefore, uncertain by nature. The outcome is transformative because it defines the observed foreign exchange price and will, moreover, potentially affect the course of the economy at large. At the same time, the terms on the right-hand side show the economically sound, objective factors that determine the price and which are eventually known in $t+1$. This right-hand side is, therefore, free of uncertainty, which implies that the equation sign tries to equate fundamentally unequal terms, an item of uncertainty with a deterministic term. In practice, the expression to the right is very often augmented by a stochastic component that is meant to capture unsystematic yet stochastic deviations of the negotiations’ outcomes from the economic theory. However, by averaging over many time periods and over many individuals (foreign exchange contracts) those deviations converge to zero or some other deterministic constant, quite like in the case of Galton’s ox. Thus, the main issue of matching an uncertain term to a deterministic/stochastic one remains. In the literature, this matching is eventually facilitated by giving up on uncertainty and by working with the deterministic/stochastic concept embodied in the right-hand side term. The eventual outcome is a stochastic, objective benchmark process individuals are – in principle – able to discover (Nerlove, 1983; Pesaran, 1987, p. 11).

Applying the concept of rationality in this particular way adopts the physicists’ idea of a truth or a reality beyond human intervention to economics. Whoever knows this truth commands some superior knowledge everyone has to submit to. This superiority is needed in order to make the principle of rationality operational because, as we have seen above, rationality requires agents to “do the best they can to achieve their objectives”. Without knowing what the best is, researchers would be at a loss when trying to establish economic laws and to assess the degree of rationality of agents.

In general, economic model building and also econometric analysis largely rely on the concept of rationality as a means for separating the individual, subjective motives from the objects of analysis, such as the price of foreign exchange. However, this separation may be at odds with actual agents’ behaviour because, in contrast to objective facts and laws of motion which rule nature, the most interesting aspects of economic life such as the prices of assets or investments only exist because of human decisions and actions. More explicitly, human action is the conditio sine qua non of many economic phenomena; without humans they would simply not be. Therefore, any analysis of these phenomena should begin with the investigation of how they result from human interaction and hence the nature of humans. This observation refers us back to the key assumption underlying this book. Due to the fact that humans cannot be emulated, and due to the fact that many economic phenomena are inextricably linked to human action, the matching of subjective and objective determinants should never result in giving up on the

subjective part. In sum, economic analysis must rest on uncertainty as its principal foundation. Decision making under uncertainty is a good starting point in that respect.

2.2.2 Decision making under uncertainty

The concepts of uncertainty and rationality add up to a decision making framework. Rationality maintains the classical assumption about the most plausible behaviour of humans. In neoclassical economics, this assumption translates into the model of homo oeconomicus. However, in connection with uncertainty the usual chain of reasoning for decision making lacks an important link. In general, we would use the following algorithm for evaluating the available choices (options):

option(i) → outcome(i) → utility(outcome(i)).

This evaluation process results in a set of projected utilities and the option associated with the maximum utility will then be favoured due to the rationality principle. Uncertainty destroys this chain because the mapping from options to outcomes is fundamentally impaired. This is due to the fact that uncertainty involves not-knowable outcomes. With at least one outcome remaining unknown, the ordering of the implied utilities becomes unidentified. It is an empirical fact, however, that individuals make decisions despite the lack of identification. The challenge, therefore, is to understand how people nevertheless arrive at choices at all.

In order to establish the actual decision making process under uncertainty, two important observations have to be made. First, rational humans do account for the not-knowable when making decisions. Second, and probably more strikingly, only uncertainty turns the choice between options into an actual decision problem. The first observation follows directly from the definition of rationality. It states that people choose “systematically and purposefully”, which excludes the possibility that people ignore uncertainty. The second observation follows from the fact that some aspects of individual choice remain obscure because of uncertainty. Owing to this obscurity, an objective solution for finding “the best they can to achieve their objectives” would entail perfectly copying humans, a possibility that has been excluded at the outset. If, however, the decision making problem could be reduced to a tractable optimisation problem, humans would not actually be required to make decisions, or, more plainly, humans would not be needed at all. In that sense, without uncertainty, humans would be the slaves of the optimisation algorithm but not actually have a choice (see Ebeling’s (1983, p. 7) interview with George Shackle, for example). Suppose now that we entertained an economic model that ignored uncertainty. What would be the result?


An iconic economic model is the random walk model for asset prices in complete and efficient markets. It basically says that rational individuals who process all available information cannot predict the change in the price of a liquid asset. Therefore, the best guess of a share price tomorrow is its value today. In the absence of uncertainty, we would therefore obtain a time series of asset price changes that looks like a series of independent data points drawn from a zero-mean probability distribution. Such a series provides no information whatsoever about future prices.9

When allowing for uncertainty, note first that asset prices too are the sole result of human negotiation. If it were true that the random walk perfectly described human behaviour, then human intervention would be unnecessary and we would run into a contradiction to the assumption that human behaviour cannot be exactly emulated. As a consequence, the empirical properties of asset prices will systematically deviate from the random walk properties. Time and again patterns will emerge and – by the principles of reflexivity and transformativity – disappear, change and re-emerge. However, the direction of change of existing patterns and the features of the emerging patterns are all unknown in advance. Therefore, the key insight of the efficient market hypothesis also prevails under uncertainty: asset prices are not systematically predictable. Retrospectively, however, observed time series of asset price changes may very well exhibit autocorrelation and all sorts of dependency between these increments.

Uncertainty also explains why, despite this systematic unpredictability, countless numbers of the brightest minds restlessly engage in forecasting asset prices. All these efforts are directed at discovering the emerging pattern in order to devise trading strategies based on these patterns. Trading on these patterns eventually changes them, giving rise to new features and so on. The research into and the exploitation of asset prices’ properties, therefore, is nothing but applying the principles of reflexivity and transformativity in practice.

Two more remarks are in order. First, an interesting further implication of uncertainty is that an observer who does not act on the observation may well be able to discover a persistent pattern that looks like a profitable arbitrage opportunity. In such a situation, transformativity does not kick in, however, and as long as the observer stays inactive, the pattern may well survive, giving rise to a “puzzling” deviation from the efficient market model. It is thus understandable that several hotly debated stock market anomalies such as the weekend effect, the January effect or the momentum strategy have eventually disappeared or decayed (Malkiel, 2003; Schwert, 2003; McLean and Pontiff, 2016).10 Once these anomalies were brought to light, people started to act upon them and thus triggered the transformation of the market. In the future, it is plausible to expect fewer such anomalies to be published because covertly acting upon them is certainly more profitable than publishing them. Covert action will delay the spread of the discovery and thus the copying behaviour which eventually leads to the transformation of the market process nonetheless.11
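A minimal simulation sketch of this benchmark, assuming a Gaussian random walk with invented parameters, illustrates what asset price changes would look like in the absence of uncertainty: increments are unpredictable by construction and their sample autocorrelation is close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: 1,000 periods of i.i.d. Gaussian log-price increments.
sigma = 0.01
n = 1000
increments = rng.normal(0.0, sigma, size=n)
log_price = 4.6 + np.cumsum(increments)   # arbitrary starting log price

# Under the random walk, the best forecast of tomorrow's price is today's price:
# past changes carry no information about future changes.
changes = np.diff(log_price)
lag1_autocorr = np.corrcoef(changes[:-1], changes[1:])[0, 1]

print(f"mean change:           {changes.mean():+.5f}")   # close to zero
print(f"lag-1 autocorrelation: {lag1_autocorr:+.3f}")    # close to zero
```

Under uncertainty, by contrast, the argument of this section is that no such fixed data-generating process can be taken as given: the very patterns traders search for appear and disappear as they are acted upon.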

As a second example, consider the risk assessment problem discussed earlier. Experiments have often shown that individuals are on average not able to provide a proper evaluation of the risk presented to them. In the best of cases, subjects are able to learn how to evaluate and consequently improve their performance (see Hommes, 2013, for example). Uncertainty helps to reconcile these results with rationality. The argument goes as follows. Suppose that an individual is faced with uncertainty. Suppose further that, as a rule, it is impossible to quantify the various utilities of the outcomes of the individual’s choice. This impossibility will usually arise because one or more final states of the options are not knowable. Add to this the empirical fact that individuals nevertheless do make decisions and we can conclude that somehow humans are able to choose even in the absence of quantifiable utilities.

Let us denote the decision making process under uncertainty $d_u$ and the actual elementary choice, $c$, among the set of options, $O$, with $c \in O$ such that $c = d_u(O)$. The key difference between decision making under uncertainty and decision making in the actual experiment is now handily characterised by introducing an alternative decision making process that covers the domain of risk, $d_r$. This decision making process applies to the experiment that is set in a probabilistic framework. A possible interpretation of the “irrational” behaviour observed in the Asian disease experiment (see p. 20) then simply is that participants apply rule $d_u$ to a risk problem for which $d_r$ would have been more appropriate, since $c = d_u(O) \neq c' = d_r(O)$. Hence, the finding that people decide irrationally to a large part follows from the experimenters’ observation of $c \neq c'$, with $c'$ representing the objectively rational solution. The whole experiment, therefore, must be based on knowing the correct outcome beforehand. But in the presence of uncertainty such superior knowledge is unavailable, which restricts any such analysis to the domain of risk. With this restriction in place, experimental rationality tests essentially only challenge the individuals’ ability to recognise that a certain decision problem is a problem of risk rather than one of uncertainty. If we assume, however, that humans are most of the time faced with uncertainty rather than with risk, it is logical to expect respondents to “automatically” apply the decision making process under uncertainty even though the easier risk approach would be more appropriate.12

This having been said, under rationality, respondents in experiments of the Asian disease kind must be expected to reflect on the problem and by the principle of transformativity be able to adapt to the actual circumstances in the experiment. And indeed, according to List (2004), Thomas and Millar (2012) or Hommes (2013), learning in repeated experiments takes place with respondents showing a tendency to provide the objectively correct answers. The same learning process can also be observed in real-life situations. Thaler (2016) offers telling anecdotal evidence. He observes that investors continuously converge to the rational price of a derivative (which is the price of its underlying assets) once they are made aware of the technically determined relationship between the


derivative and the asset. His specific example draws on a fund called “Cuba” that tracks the evolution of some listed companies. For some reason, the price of the fund stayed below the underlying assets’ values. In the wake of the Obama administration’s lifting of sanctions against Cuba (the country), the fund’s price first strongly overshot before converging to the value of the underlying stocks.

The question thus remains what can be learned from experiments set in the context of risk while in reality many decisions involve uncertainty.13 Contrary to what some experimenters claim, obviously not much can be learned about the degree to which people act rationally beyond, of course, respondents’ ability to switch from $d_u$ to $d_r$. Nonetheless, those experiments are invaluable in determining the tools humans use for decision making when facing uncertainty. Remember that this task is extremely difficult if not impossible to shoulder with mathematical precision because it involves not-knowables. On the upside, however, we must conjecture that all those factors that determine the difference between $c$ and $c'$ are also linked to the differences between $d_r$ and $d_u$. Since economists and others have already studied and understood $d_r$ (decision making under risk) very well, a careful analysis of the factors linked to the differences between $d_r$ and $d_u$ will eventually reveal the mechanisms of decision making under uncertainty.

Thankfully, psychologists and economists have already identified a great number of factors that are related to $c - c'$. Assuming, for example, that humans have been faced with uncertainty ever since they have populated the earth, they have probably developed efficient strategies to cope with it. It is therefore a matter of taste to refer to these empirical identifications as an evolutionary approach. Researchers, in turn, uncover those strategies and, potentially, improve upon them. Without claiming completeness, the following list provides some of them. For reasons that will be explained below, we will call this list “decision enabling factors”:

• emotions (Damasio, 1995; 2012)
• anchor values (Kahneman, Schkade and Sunstein, 1998)
• endowment (Tversky and Griffin, 1991)
• institutions
• belief
• credible information (Druckman, 2001)
• status quo (Samuelson and Zeckhauser, 1988)
• heuristics (Goldstein and Gigerenzer, 2002)
• uncertainty aversion (Ellsberg, 1961)
• inattention (Bacchetta and van Wincoop, 2005)
• deliberate ignorance
• science
• whim (Keynes, 1936, pp. 162–163)
• sentiment (Keynes, 1936, pp. 162–163)
• chance (Keynes, 1936, pp. 162–163)
• prejudice

In the presence of uncertainty these factors are not merely “explainawaytions” (Thaler, 2016, p. 1582) or stains on an otherwise perfect homo oeconomicus. Rather, they are indispensable tools for making decisions in most circumstances because they help to bridge the gap between the not-knowable outcome of a certain choice and its utility for the individual.

A striking example that underlines the importance of emotions, for instance, is owed to Damasio (1995). Damasio tells the story of a patient called Elliot who had lost parts of his brain due to surgery that removed an aggressive brain tumour. Crucially, however, with the tumour some frontal lobe tissue of the brain had also to be removed. This region of the brain is known to be the region where emotions are controlled. The operation was a “success in every respect”, Damasio (1995, p. 36) reports and adds, “To be sure, Elliot’s smarts and his ability to move about and use language were unscathed”. It later turned out that he also did well in standard tests of cognitive skills and analytical problem solving. And yet, something peculiar had changed:

He needed prompting to get started in the morning and prepare to go to work. Once at work he was unable to manage his time properly; he could not be trusted with a schedule. [. . . ] One might say that the particular step of the task at which Elliot balked was actually being carried out too well, and at the expense of the overall purpose.
Damasio (1995, p. 36), emphasis as in the original

Elliot, moreover, showed “superior scoring on conventional tests of memory and intellect” which apparently “contrasted sharply with the defective decision-making he exhibited in real life” (Damasio, 1995, p. 49). Even more strikingly, Elliot was very well aware of himself and his excellence at testing as well as of his inability to cope with real life. Damasio (1995, p. 49) gives the patient’s own account of his situation as follows.

At the end of one session, after he had produced an abundant quantity of options for action, all of which were valid and implementable, Elliot smiled [. . . ] but added: “And after all this, I still wouldn’t know what to do!”
Damasio (1995, p. 49)

It might be noteworthy that Elliot seemed to command all the required skills, knowledge and wits of a rational man, quite like economists imagine the perfect decision maker. But still, he was not up to the challenge of being a functioning member of society. This dysfunctionality shows as an inability to choose, as Damasio (1995, p. 50) observes: “The defect appeared to set in at the late stages of reasoning, close to or at the point at which choice making or response selection must occur. [. . . ] Elliot was unable to choose effectively, or he might not choose at all, or choose badly.”


What is probably most remarkable of all is the underlying reason Damasio discovers for Elliot’s behaviour. Damasio (1995, p. 51): “I was certain that in Elliot the defect was accompanied by a reduction in emotional reactivity and feeling. [. . . ] I began to think that the cold-bloodedness of Elliot’s reasoning prevented him from assigning different values to different options, and made his decision-making landscape hopelessly flat.” After having studied twelve patients with similar damage to frontal lobe tissue and similar changes in personality that turned individuals from affectionate, emotional beings into rather “cold-blooded” rational men, Damasio (1995, p. 53) summarizes that in none of the twelve cases “have we failed to encounter a combination of decision-making defect and flat emotion and feeling. The powers of reason and the experience of emotion decline together.”

There still remains one riddle to be resolved. How is it possible that a smart, knowledgeable man does perfectly well in the laboratory but still cannot prevail in life? Damasio’s (1995) simple answer is: uncertainty.

Even if we had used tests that required Elliot to make a choice on every item, the conditions still would have differed from real-life circumstances; he would have been dealing only with the original set of constraints, and not with new constraints resulting from an initial response. [. . . ] In other words, the ongoing, open-ended, uncertain evolution of real-life situations was missing from the laboratory tasks.
Damasio (1995, p. 49f), emphasis added

Thus, once we shift the focus of the analysis of decision making away from laboratories (and mathematical models) to real-life problems where decisions have to be made under uncertainty, emotions apparently do not impair but enable decisions. In the words of Damasio (1995, p. 49f): “Reduction in emotion may constitute an [. . . ] important source of irrational behavior” (bold face and italics in the original). Using the above notation, it seems that Elliot had lost his ability to make decisions using $d_u$ but retained $d_r$. When going through standard testing procedures, $d_r$ was fully sufficient for gaining high scores, but these scores were meaningless when $d_u$ was needed in real life and hence under the conditions of uncertainty. It follows that emotions are an essential, distinctive element of $d_u$ that is not part of $d_r$. To sum up, emotions enable decision making; they are not merely confounding factors or causes of “irrational” behaviour. Quite to the contrary, emotions are a pre-condition for rational choice under the conditions of uncertainty.

We may also consider deliberate ignorance. In the presence of uncertainty, the common expected utility approach apparently does not work. A workaround in this situation is to simply ignore this fact. For example, central banks including the European Central Bank and the Federal Reserve nowadays require the commercial banks to calculate their individual value at risk. The value at risk is then compared to the banks’ equity in order to assess the banks’ resilience to adverse shocks. Obviously, due to the many subjectively determined asset

prices on the banks’ balance sheets and the fundamentally unknown future, these value-at-risk calculations can only be stochastic approximations of uncertain, i.e., not-knowable outcomes. Nevertheless, by deliberate ignorance, a judgement of the banks’ resilience can be made, irrespective of how “wrong” these judgements may turn out in the future.

Finally, let us consider science and belief. Science helps mankind to draw a distinction between uncertainty and risk. Earlier, people would consider thunder as the result of their own misconduct. This reasoning was perhaps enshrined in beliefs or religion14 and offered a way of dealing with the (unknown) cause-effect relationship. Scientists, however, later proved that it is the result of exogenous, natural forces instead. Therefore, human decision making with respect to the threat of thunder, or lightning for that matter, has fundamentally changed. It is now referred to the domain $d_r$ while it used to be part of the $d_u$ rules. On a deep, very fundamental level, it is as yet an open question whether all decision making problems can – in violation of the key assumption – eventually be referred to the domain of risk instead of uncertainty.

Models of decision making that aspire to also account for uncertainty, therefore, must include the actual determinants of decision making in a coherent framework. Furthermore, interpreting the experimental or empirical evidence requires an inversion of Friedman’s (1953a) notion of “as if”-behaviour. In experiments that are restricted to the stochastic or risk domain, one must assume that participants usually behave “as if” they make their decision under uncertainty, yet not “as if” the economic theory or their assumptions were true. Their decisions must be considered to reflect rationality, and potential deviations between the experimenter’s rational solution and actual decisions must first of all be attributed to the difference between optimisation under risk versus uncertainty.
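Returning to the value-at-risk example above, a minimal sketch of a historical-simulation calculation illustrates the point; the return series, sample length and portfolio size are invented, and actual supervisory calculations are far more elaborate. What matters here is that the resulting number treats the observed return history as a complete stochastic description of the future, which is precisely the deliberate ignorance of uncertainty just described.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily portfolio returns; in practice they would be derived from
# the positions on the bank's balance sheet.
returns = rng.normal(0.0002, 0.012, size=1500)
portfolio_value = 1_000_000_000   # hypothetical exposure

# Historical-simulation VaR at the 99% level: the loss exceeded on only 1% of
# days in the sample, read as if the sample covered all possible futures.
var_99 = -np.percentile(returns, 1) * portfolio_value

print(f"1-day 99% value at risk: {var_99:,.0f}")
```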

2.3 The epistemology of economics

Owing to human creativity, any economic model, law or constant must be considered historical in nature, or at best of transitory validity. It is impossible to discover fundamental or eternal laws because humans constantly create and change the society they are living in. The direction of change is genuinely unpredictable due to the impossibility of simulating the human brain, which eventually gives rise to omnipresent uncertainty. This omnipresence of uncertainty, however, does not simply deal a blow to some economic methodologies; it also opens up scope for new research approaches. These new opportunities will be discussed in the following subsections. We start with a brief review of the dominant, current epistemology of economics.

2.3.1 A brief look in the rear mirror

As every science aims at uncovering the truth, each science has to answer the two basic questions as to what constitutes truth and how to find truth.


Following Kant’s “Kritiken” in 1781/1783 (2004) philosophers have offered different answers to these two questions. The following account unavoidably remains incomplete and a very rough sketch of the actual evolution of the philosophy of science. It aims at catching the main developments which are relevant for the subsequent analysis without claiming exhaustiveness. It is now common to associate the very early Greek scholars, such as Thales and Aristotle, and medieval European (e.g., Roger Bacon) and Arabic (e.g., Ibn al-Haytham) scholars with the origin of science. In the pre-modern era the truth was usually implicitly assumed to be identical with the world as it is. Therefore, empirical investigations dominated in sciences.15 Emanuel Kant identified some limits of this empiricist approach by acknowledging the fact that there are problems that cannot be resolved through the analysis of empirical facts. Following this basic distinction Kant ventured into the possibilities and limits of knowledge. Kants’ reflections henceforth laid the grounds for scientific methods to do science. The first truly scientific approach according to modern concepts was henceforth introduced by the positivist school, owing mainly to Comte (1865). It is regarded scientific because it did not only offer an explicit view of what truth is, but it also defined the methods according to which truth can be found (Russell, 1918). The latter thus constituted the main innovation as compared to the pure empiricism. The key term of positivist science is the proof (verifiability), which separates belief from truth. Theories have to be verified by experiment or abstract logical proof to qualify as scientific. Positivism and pre-modern sciences share the conviction of the ontological existence of an objective truth. Owed to the combination of assuming an objective truth and the insistence on proof as the main road for knowledge, positivism to this day is the domain of natural sciences, including physics, astronomy and chemistry. John Stuart Mill is commonly accredited with adopting the positivist school into economics (Fullbrook, 2009). As Lawson (1997, p. 238) points out, this adoption comprises the characterisation of the economic agents as an isolated, atomistic actor void of any social context. The behaviour of the agent is determined by mechanical rules with the convenient implication that the state of the economy can be deduced from the behaviour of this agent by adding up several such agents in a lego-like fashion. Leading neo-classical economists such as Menger, Walras and Jevons sucessfully applied this concept for deriving basic properties of market solutions, for example. Positivism was subsequently challenged on grounds that the identification of truth might not be objective but necessarily depends on subjective judgement (Polanyi, 1962). Mainly in response to the two revolutions in physics, Popper (1959) argued that knowledge in general is only transitory until a superior theory takes over. Insight can thus only be gained by rejecting existing convictions and beliefs. Those theories which defy rejection may then be regarded as the truth until they will eventually be rejected, giving rise to a new truth. Popper (1959) postulated that falsifiability of a theory is the true test of science. That is, a theory is scientific only if it can be rejected.

The new school that emerged out of these considerations is called falsificationist. Kuhn (1970) further contributed by introducing the concept of the paradigm shift, implying that while new knowledge is generated incrementally, progress in a science as a whole comes about by revolutionary disruption of the nature of scientific inquiry in a certain field. More precisely, a paradigm shift comes in three stages. The first is the emergence of the paradigm, the second its expansion during which “puzzles” are solved and, finally, its collapse. This collapse is triggered by mounting contradictory evidence which does not fit into the paradigm and thus needs to be talked away by admitting mistakes, or allowing for anomalies without challenging the paradigm as such. Eventually, the anomalies and mistakes lead to a situation in which the inquiry fails to pass Popper’s (1959) falsifiability test. The working paradigm must then be replaced for the science to stay scientific. The falsificationist epistemology in general suits not only the natural sciences but also empirical sciences such as psychology, economics, sociology and other social sciences. Though allowing for subjectivity in the knowledge generation process, the falsificationists by and large uphold the ideal of an objective truth as it is envisaged by the positivist view.

In opposition to this perception, constructivists maintain that the world itself is not “as it is” but (fundamentally) mentally constructed. Piaget (1969, 1971) draws a distinction between the truth, which might be independent of humans, and the human ability to learn this truth. Knowledge, it is asserted, depends entirely on the scientist’s means and tools of inquiry and therefore merely is a construct of reality and never a true picture of reality itself. Constructivists thus acknowledge that an objective reality does not exist in the sense that it is possible to generalise about it. Rather, reality can only be captured by the accounts of individuals reflecting upon it. Constructivists hence try to understand how humans account for certain issues and topics without attempting to discover a universal truth. In contrast to the positivist epistemology and the falsificationist epistemology, constructivism can rarely be found in the natural sciences but it is the domain of sociology, psychology, management and other social sciences.

Building on Bhaskar’s (1978) concept of “critical realism”, Lawson (1997) suggests an ontology of “transcendental realism” that stands somewhere in between positivist-falsificationism and constructivism. Lawson identifies a number of distinguishing features of social processes in support of his proposal. According to Lawson (1997), social processes are produced in open systems, they possess emergent powers or properties, they are structured, they are internally related and they are processual (Fullbrook, 2009, p. 4). As a consequence of the inherent dynamics and openness, social reality is transcendental with distinct implications for the conduct of scientific analysis. He maintains that instead of investigating fixed causal laws as in the positivist tradition, transcendental realism construes science as a fallible social process which is primarily concerned to identify and understand structures, powers, mechanisms and their tendencies


that have produced, or contributed in a significant way to the production of, some identified phenomenon of interest – mechanisms, etc., which, if triggered, are operative in open and closed systems alike.
Lawson (1997, p. 33)

Lawson thus upholds the idea that there exist structures within societies that are at least partly intransitive and relatively enduring. These structures, in Lawson’s view, are the ultimate object of economic analysis (Lawson, 1997, chap. 12). As regards the desirable methodology for uncovering truth in “transcendental realism”, Lawson suggests “judgemental rationality” comprising the comparison of rival theories or explanations based on empirical evidence (Lawson, 1997, chap. 17).

So far, neither constructivism nor Lawson’s (1997) transcendental realism has found wide acceptance, or, rather, as I will argue below, recognition among most economists. Implicitly or occasionally explicitly, publications in the leading economic journals point out their use of a scientific methodology while paying little, if any, attention to the underlying ontology.

2.3.2 Positivist-falsificationist epistemology

When looking at economics, the plethora of economic research makes it impossible to satisfactorily summarize the actual economic epistemology. A feasible way to nevertheless discuss economic epistemology is to focus on Milton Friedman, who received the Nobel Memorial Prize in economics and who has been explicit about economic methodology. We thus regard the positivist epistemology as the predominant approach to economics (Friedman, 1953b).

For positivists, the search for truth is akin to a discovery process with hypotheses serving as intermediate steps on the way to identifying the ultimate goal. As has been argued before, this approach implicitly mirrors the ontological foundation of physics and other natural sciences. In physics, one might for example hypothesise that the speed with which raindrops fall depends on the size and the weight of the drop. Experiments then show that this hypothesis is wrong, which in turn leads to a refined hypothesis that may also be rejected and so on, until a “final” hypothesis emerges that defies rejection and is hence regarded as the “truth”. In science, the advancement of knowledge thus proceeds from total ignorance towards a complete understanding of the world. According to the taxonomy of uncertainty, one could depict this process as a continuing outward expansion of the frontiers of knowledge that departs from the origin. It is not clear where this journey ends (du Sautoy, 2016) but it is apparent that ever since the age of enlightenment, the frontier has moved considerably, thus covering an ever larger space.

It is instructive to formalise the knowledge generation process through the positivist-falsificationist epistemology in economics. Albert (1965) offers a systematic representation of statements as the main output of scientific


Figure 2.1 Taxonomy of uncertainty and the frontier of knowledge

reasoning.16 Statements are based on theorems which are deduced from a set of hypotheses. These statements have three distinct and inter-related features, which are relatedness to reality, informational content and truth. In physics, a set of hypotheses would, for example, maintain that two objects with non-zero masses attract each other and that the attracting forces are proportionate to their masses but inversely related to the squared distance between them. These hypotheses may lead to the statement that two objects of identical mass that are dropped from an identical height will fall with identical speed. We may summarise these assumptions in Newton’s law of gravity and offer an equation for the time that elapses before an object hits the surface. In this case, the solution to this equation constitutes the truth. We may next apply the law in an inductive manner in order to predict the behaviour of falling objects. If it were possible to apply the law to all thinkable situations, the informational content of the law would be infinite. This inductive step is thus key to any theory or science that aspires to be of any practical relevance. As a general rule, informational content and truth are rivals. A statement that is always and unconditionally true tends to be a tautology without any relevant informational content. Finally, relatedness to reality implies that the statement must be consistent with observable data. In principle, informational content and relatedness to reality will complement each other, although there is no guarantee that a statement always permits an empirical test. With respect to Newton’s law of gravity we may run a simple experiment that shows that a napkin needs more time to hit the floor than predicted by the theory.
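For concreteness, the equation alluded to above can be written down explicitly. For an object dropped from rest at height $h$ in a vacuum, with gravitational acceleration $g$, the time to impact is

$$t = \sqrt{\frac{2h}{g}},$$

so that, for example, $h = 1.25\,\mathrm{m}$ and $g \approx 9.81\,\mathrm{m/s^2}$ give $t \approx 0.5$ seconds, regardless of the object’s mass or shape. The napkin experiment just mentioned contradicts precisely this prediction for objects falling through air.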


In other words, the statement does poorly in terms of relatedness to reality and by the method of falsification we reject the hypotheses on which Newton’s law rests. There are now at least two ways to generate more insights for expanding our knowledge. We may either add hypotheses that help explain the empirical failure (amendment 2 below), or add an auxiliary hypothesis (amendment 1 below) that limits the informational content of the truth in an appropriate way. In the latter case, one could put forth the restriction that the law applies only to objects in a vacuum. With this restriction the truth is bounded. The upshot of this auxiliary hypothesis hence is that the truth is maintained. But (re-)establishing the truth comes at the expense of restricting the informational content; it is not possible to calculate the time it takes for a falling object in the presence of the atmosphere. Alternatively, additional explanatory hypotheses may account for the impact of the atmosphere on the falling object(s). These hypotheses may concern the shape of the object, air pressure and many more variables and their respective relations. The below table sketches the three situations as combinations of explanatory hypotheses ($H_i$), auxiliary hypotheses ($AH_j$) and the main statement, the theorem.

naive law:     $H_1 \wedge H_2 \wedge \ldots \wedge H_i \rightarrow$ Theorem
amendment 1:   $H_1 \wedge H_2 \wedge \ldots \wedge H_i \wedge AH_1 \wedge \ldots \wedge AH_j \rightarrow$ Theorem
amendment 2:   $H_1 \wedge H_2 \wedge \ldots \wedge H_i \wedge \ldots \wedge H_{i+m} \rightarrow$ Theorem

By the positivist epistemology the cycle of hypothesising and empirical tests ends whenever no more contradictions between statements and empirical evidence (the relatedness to reality) can be found. Knowledge generation in economics follows the same method at large, the biggest difference being that controlled experiments are not feasible at the macroeconomic scale. Therefore, econometric analysis in a post-experimental setting must serve as a substitute. For the remainder of the discussion we pretend that the econometric exercise is meaningfully feasible.

The epistemology for macroeconomics

As has been mentioned before, the diversity of economic research limits our ability to make generalising statements about the actual economic epistemology. It is possible, however, to do so for mainstream macroeconomics, which is the focus of the ongoing debate about the usefulness of modern macroeconomics for policy analysis in the aftermath of the financial crisis of 2007/2008. For illustrative purposes we refer to Smets and Wouters (2007, 2003) as benchmark mainstream macroeconomic models.17 For the sake of brevity let us focus on their dynamic stochastic general equilibrium model proposed in Smets and Wouters (2003). The following quotation is part of the collection of hypotheses that eventually lead to the main statement(s) of the authors.18

There is a continuum of households indicated by index τ. Households differ in that they supply a differentiated type of labor. So, each household has a

monopoly power over the supply of its labor. Each household τ maximizes an inter-temporal utility function given by:

$$E_0 \sum_{t=0}^{\infty} \beta^t U_t^{\tau}$$

where β is the discount factor and the instantaneous utility function is separable in consumption and labor (leisure):

$$U_t^{\tau} = \varepsilon_t^{b} \left[ \frac{1}{1-\sigma_c} \left( C_t^{\tau} - H_t \right)^{1-\sigma_c} - \frac{\varepsilon_t^{L}}{1+\sigma_l} \left( \ell_t^{\tau} \right)^{1+\sigma_l} \right]$$

Utility depends positively on the consumption of goods, $C_t^{\tau}$, relative to an external habit variable, $H_t$, and negatively on labor supply $\ell_t^{\tau}$. $\sigma_c$ is the coefficient of relative risk aversion of households or the inverse of the inter-temporal elasticity of substitution; $\sigma_l$ represents the inverse of the elasticity of work effort with respect to the real wage.
Smets and Wouters (2003, p. 1127)

With the aid of some mathematics the authors derive statements about the observable variables (given as deviations from their steady-state values, indicated by a hat above the variable), such as:

The equalization of marginal cost implies that, for a given installed capital stock, labor demand depends negatively on the real wage (with a unit elasticity) and positively on the rental rate of capital:

$$\hat{L}_t = -\hat{w}_t + (1+\psi)\hat{r}_t^{k} + \hat{K}_{t-1} \qquad (34)$$

where $\psi = \psi'(1)/\psi''(1)$ is the inverse of the elasticity of the capital utilization cost function
Smets and Wouters (2003, p. 1136)

and, based on Bayesian learning from data, the authors also offer parameter values for $\psi$ and $k$ and many more which are too numerous to list here. All estimated parameters and the collection of all non-reducible (solved) equations constitute the empirical model which represents the truth in the terminology of Albert (1965). We will refer to it interchangeably as the model, the empirical model or the truth.

So far, two observations are noteworthy. First, the modelling strategy is another example of the objective approach discussed in section 2.2.1, despite the fact that it refers to household decisions which could be considered genuinely subjective. Second, the set of assumptions does not draw a distinction between explanatory and auxiliary hypotheses. In other words, there is no way to learn what the domain of the informational content is supposed to be. Studying the model thus admits two basic interpretations. Either all hypotheses are auxiliary hypotheses and the truth is thus restricted by the space these hypotheses cover, or all hypotheses are explanatory, in which case the model


should be universally true, that is, the informational content is unbounded. Of course, anything in between is also possible. Obviously, if all hypotheses were auxiliary, the informational content would be zero as there would be virtually no household that forms expectations about an infinite horizon, for example. With zero informational content the question of relatedness to reality immediately also becomes obsolete. Therefore, the overwhelming majority of underlying hypotheses must be considered explanatory. And, in fact, chapter 3.3 in Smets and Wouters (2003) assesses the model performance in terms of forecasting capability in comparison to alternative econometric models.19 These comparisons are not restricted to a certain subset of available data, which implies that the truth is considered unbounded.

These comparisons also offer an assessment of the third aspect of the authors’ statement: the relatedness to reality. It turns out that in the case of Smets and Wouters (2003, p. 1151), judged by their respective prediction errors, the empirical DSGE model is in fact inferior to competing, so-called agnostic (Bayesian) vector autoregressive (VAR) models, while Smets and Wouters (2007, p. 596) find that their DSGE model outperforms two competing VAR models in terms of out-of-sample forecasting. The latter finding has subsequently been challenged by other researchers (Edge and Gurkaynak, 2010, for example) on grounds of genuine out-of-sample forecast comparisons.20

Following the positivist-falsificationist epistemology, the knowledge generation step should finally follow. To that aim, a testable statement about reality should be formulated. This new statement would usually be a generalisation of the deduced model by means of induction. For example, we might hypothesise that $\hat{L}_t$ is a function not only of $\hat{r}_t^{k}$ and $\hat{K}_{t-1}$ but also of $\hat{r}_{t-1}^{k}$. Smets and Wouters’s (2003) equation 36 (see above) would thus change to

$$\hat{L}_t = -\hat{w}_t + (1+\psi)\hat{r}_t^{k} + \hat{K}_{t-1} + a\,\hat{r}_{t-1}^{k}$$

with $H_0^a: a = 0$ against $H_1^a: a \neq 0$ as a feasible, econometrically testable hypothesis. Under the null hypothesis $H_0^a$, the truth derived by Smets and Wouters (2003) is maintained, while under the alternative the model must be rejected. Upon rejection, the set of assumptions should be amended, quite like in the case of Newton’s law of gravity. Interestingly, no such test, nor any (Bayesian) equivalent, is provided in either Smets and Wouters (2003) or Smets and Wouters (2007).21 Therefore, the decisive step of positivist-falsification for augmenting our knowledge is simply left out.

More interestingly, this omission is not an accident but part of the mainstream epistemology altogether. Blanchard (2016), in his defence of the DSGE modelling approach, explains that for DSGE models, “fitting the data closely is less important than clarity of structure” (Blanchard, 2016, p. 3). Giving priority to “clarity of structure” at the expense of fitting the data appears quite remarkable because it implies that empirical evidence will hardly ever contribute to expanding our knowledge. In conclusion, the dominating macroeconomic epistemology does not permit empirical rejection of the hypotheses it deduces its statements from.
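The test alluded to above could, in principle, be run along the following lines. The sketch below uses simulated stand-in data, since the point is the mechanics of the falsification step rather than any actual estimate; variable names and parameter values are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 200

# Simulated stand-ins for the model's variables (illustrative only).
w_hat = rng.normal(size=T)           # real wage, deviation from steady state
r_k = rng.normal(size=T)             # rental rate of capital
k_lag = rng.normal(size=T)           # lagged capital stock
psi = 0.3
l_hat = -w_hat + (1 + psi) * r_k + k_lag + rng.normal(scale=0.1, size=T)

# Augment the labour demand equation with the lagged rental rate and test a = 0.
X = sm.add_constant(np.column_stack([w_hat[1:], r_k[1:], k_lag[1:], r_k[:-1]]))
res = sm.OLS(l_hat[1:], X).fit()
print(res.t_test("x4 = 0"))          # x4 is the coefficient a on the lagged rate
```

Rejection of $a = 0$ would, under the positivist-falsificationist epistemology, send the modeller back to the set of hypotheses; the text’s point is that this step is not taken in the cited studies.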

In view of Blanchard's (2016) refusal of falsification as a method of knowledge generation, the question arises as to what alternative strategies mainstream economics pursues. Blanchard (2016) offers the following: more appealing assumptions (especially with respect to consumer and firm behaviour), improved estimation techniques, more convincing normative implications, "building on a well understood, agreed upon body of science and exploring modifications and extensions" (Blanchard, 2016, p. 3). Again, there is no mention of rejecting a wrong hypothesis based on factual evidence. Rather, knowledge is generated by adding "a particular distortion to an existing core", by playing around with more or less plausible hypotheses or through (partly) disliking the policy implications. It is important to realise the significant gap between the natural sciences' approach illustrated with Newton's law and mainstream macroeconomics illustrated with Smets and Wouters (2003, 2007).22 Science usually starts with observing regularities that form the foundations of hypotheses. Systematic analysis and experimenting will then eventually lead to the discovery of a natural law that is inductively generalised to all like cases. Macroeconomics, as in Smets and Wouters's case, pretty much walks the opposite way. It usually starts with generalising statements about behaviour and attitudes (see the quotation on page 34 above) and then deduces the properties of the specific decisions of humans and the evolution of the economy from the model (see, e.g., Syll, 2016, p. 3). Rational expectations and perfect foresight are among the prime examples of those generalising statements economic theory very often starts with. The validity of the final deduction does entirely depend on the validity of the initial statements, of course. Friedman (1953c) has famously offered a salvation for this methodology by postulating that "conformity of [...] 'assumptions' to 'reality'" is not required for obtaining a valid theory or hypothesis. All that is needed is evidence that the derived theory or hypothesis is "important", meaning that it explains "much by little" (Friedman, 1953c, p. 14). Yet even if one accepts this justification, the difference from the natural scientist's approach must be highlighted.23 To understand this, consider Einstein's theory of relativity that replaced Newton's mechanics on the cosmic scale. Einstein also started with the generalising assumption that there must be an upper limit to speed. Consequently, he deduced from his model that time and distance had to be two sides of the same coin. In contrast to macroeconomics, with its spectacular failures to understand the economy, the deduced specific properties of the universe have all been shown to be consistent with Einstein's basic assumption. In macroeconomics as in cosmology, experiments are obviously inappropriate tools of investigation. However, this inappropriateness cannot serve as an excuse for not developing or for not using customised methods of falsification or verification, quite like in the case of Einstein's claims. Therefore, Friedman (1953c) has also demanded that a theory or hypothesis has to be "rejected if its predictions are contradicted".24 Following Blanchard (2016) it seems that so


far macroeconomists have neither conducted, nor do they intend to conduct, the ultimate test of knowledge by comparing their theory's predictions to experience. Therefore, while science aims at robustifying statements by means of systematic empirical testing of its implications, the dominant macroeconomic epistemology aims instead at adding to a core with neither this core nor the additions being subjected to rigorous scrutiny. Instead, both are controlled by appeal and persuasion. Therefore, the main contribution to what constitutes truth in this strand of literature is the researchers' or the research communities' beliefs. Hence, truth in mainstream macroeconomics is nothing but the statements which are logically deduced from a set of assumptions economists believe in. To sum up, the dominant macroeconomic epistemology constructs its truth based on a set of agreed-upon hypotheses, or assumptions. It does not use empirical scrutiny as a principal guide to advance empirically more suitable hypotheses but resorts to updating its beliefs or convictions about what the most appropriate assumptions are in comparison to peer beliefs and evaluations of the deductive, normative statements. Based on these updates, a new truth (model) is deduced and the underlying beliefs as well as the new normative statements are analysed, and so on. Truth in mainstream macroeconomic models is hence a purely theoretical, self-contained construct. An epistemology that is based on truth as a construct of an individual or a collective is called constructivism. Therefore, the actual dominant macroeconomic epistemology is best described as constructivist, yet not positivist.

A paradigmatic interpretation and uncertainty

Kuhn (1970) introduced the idea of knowledge being generated through shifts in the paradigm of a science. A paradigm might be defined as follows:

[...] a paradigm is a world view about how theoretical work should be done in a particular subject area which is shared by those who actually do theoretical work in that subject area. It includes agreements about: assumptions about the nature of the subject areas or phenomenon about which theory is being built; variables which are most important for study to understand the phenomenon about which theory is being built; and acceptable methods for supporting assertions about the phenomenon about which theory is being built.
Rossiter (1977, p. 70)

A paradigm is thus a collection of beliefs about how scientific analysis has to be conducted. In the process of applying the paradigm, Kuhn (1970) notes that, on the one hand, many questions are answered and a deeper understanding gained, but, on the other hand, some new "puzzles" will emerge that cannot be resolved within the prevailing paradigm. As more and more puzzles accumulate, further insights can only be gained by applying a new paradigm. This new paradigm will

maintain all the (empirical) evidence of the old paradigm but this evidence will be looked at from a different angle such that a completely new picture becomes visible. Our understanding of gravity provides a vivid example of a paradigm shift. While it was possible to use Newton's mechanics to describe and apply the law of gravity on a global scale, it failed to predict the corresponding phenomena in space. Objects at a far distance did not appear where they ought to be according to Newton's laws. It was left to Albert Einstein to come up with the idea of space-time to explain this puzzle, while also offering an explanation for the validity of Newton's mechanics within earthly dimensions. The idea of space-time can be interpreted as a paradigmatic shift because a central assumption about nature was changed; Einstein postulated an upper limit to speed which merged space and time into a single entity. Using the framework of Albert (1965) we might picture the idea of paradigm and paradigmatic shift by re-ordering the hypotheses from which theorems are deduced in an appropriate way. Let us group first those h hypotheses that define the paradigm and the remaining (i − h) explanatory hypotheses last:

$$\frac{H_1 \wedge H_2 \wedge \ldots \wedge H_h \wedge H_{h+1} \wedge \ldots \wedge H_i}{\text{Theorem}}$$

We may now ponder what hypotheses in macroeconomics belong to the first h and therefore should be considered part of the macroeconomic paradigm. A widely shared view of what hypotheses are indispensable in mainstream economics comprises the following:

• subjective rationality,
• subjective utility maximisation,
• risk/ambiguity,
• equilibrium/optimality/ergodicity.

Of these four, the first three describe individual human behaviour, while the fourth refers to the result of individual interaction. We capture the three concepts equilibrium, optimality and ergodicity in one line because they all have the same effect on economic model building. This effect is the tendency of the variables of interest such as income, consumption or interest rates to approach or to represent their respective steady-state levels.25 This tendency in turn rests on further hypotheses which are not explicitly mentioned in the above list. Among those hypotheses are the assumption of full information, frictionless market operations, stable preferences and price flexibility, for example. Since these more-specific assumptions can also be relaxed or made less stringent, we had better assign them to the set $H_{h+1} \wedge \ldots \wedge H_i$, that is, the non-paradigmatic explanatory hypotheses. The main role equilibrium/optimality/ergodicity play in macroeconomic theory remains unaffected by the various versions of those non-paradigmatic explanatory hypotheses. One might object to listing rationality and utility maximisation as two separate hypotheses because rationality could also be regarded as a systematic, purposeful pursuit of one's individual welfare, which would be equivalent to


utility maximisation. However, rational behaviour could alternatively mean the systematic, purposeful pursuit of whatever objective one might have in mind; therefore, we maintain the separation of these two. Sometimes, "rational expectations" are quoted as a central assumption in modern economics. However, looking at it from close range it becomes apparent that rational expectations are not an assumption in their own right but rather a hypothesis that is deduced from the more fundamental assumptions of rationality and risk. For a related reason we add risk to the list of hypotheses as it describes a fundamental assumption about how humans deduce the properties of the outcome of actions. It is therefore not only a behavioural hypothesis but also a non-reducible, paradigmatic assumption. There is another noteworthy interplay between these paradigmatic hypotheses, as has already been argued in section 2.2.1. Combining rationality, risk and equilibrium implies objectivity of human decisions. The reason is easy to see: the ergodicity assumption leads to objective outcomes since nothing but the steady-state, or equilibrium, values of the concerned variables will result, at least in the long run.26 Therefore, the only "rational" expectation is to predict the long-run outcomes of the model, thereby simultaneously reducing the subjects' role in the theory to nil (Frydman and Goldberg, 2011). Next to the paradigmatic hypotheses, the basic methods of knowledge generation also define the paradigm. As regards those "acceptable methods for supporting assertions" in contemporary macroeconomics, we may point out the following:

• positivist-falsificationist epistemology,
• reductionism,
• deduction,
• mathematics,
• microfoundations.

The first item on the list has been discussed in the previous section. According to this discussion, the positivist-falsificationist methodology has already given way to constructivism. Positivism tops the list nevertheless, as it remains the nominal reference point of contemporary macroeconomics. The reductionist methodology has two aspects. It describes, first, the modelling approach, in which the major features of reality are captured in a simplified representation of reality, which is called the model, and, second, the assumption that the relevant relations of reality can be boiled down to rather stable laws. The deductive method is embedded in the idea of deriving theorems from general hypotheses, as described in Albert (1965) and referred to above. In general, these theorems are cast in mathematical terms, which simplifies scrutiny of their internal consistency. Finally, the idea of microfoundations has it that aggregate, macroeconomic phenomena should be explainable by the behaviour of individuals. It is important to note the relations between the paradigmatic methodology and the paradigmatic hypotheses. The microfoundations in particular experience a

considerable twist when combined with the paradigmatic hypotheses equilibrium, risk and rationality (rational expectations). To see why, we might for illustrative purposes consider the Smets and Wouters (2003) DSGE example again, which introduces the microfoundations with the statement of the households' behaviour (see the quotation on p. 34). The microfoundations in combination with rational expectations immediately turn the individual, subjective contribution of the household into an objective, pre-determined result because what looks like a bottom-up, individual decision making approach is, in fact, the economic equivalent of a physicist aiming at modelling the state of the floor as the aggregate of individual raindrops pouring on a surface. We simply need to replace household by raindrop; consumption, labour and the shocks by winds and sunshine or initial height; expectations by entropy; and so on. In the end, we obtain a functional that completely describes the raindrops', aka households', behaviour and that does not, in fact, leave any leeway for subjectivity because everything is objectively described. In that sense, it requires a very generous interpretation of microfoundations to claim that macroeconomic models such as Smets and Wouters' (2003, 2007) are truly microfounded. Moreover, the result, which is a certain amount of income, consumption and labour, can be likened to the properties of the wetted floor. Since the final result is consistent with the initial assumptions about "individuals", no other result can be considered by an individual without this individual being found guilty of "irrationality". This subordination of the subject to some objective truth is exactly what Muth (1961) observed when he noted that the rational, subjective expectation about the final outcome is the objective truth. It was left to Lucas (1976) to generalise this insight to the case where the objective truth is nothing but the modelling output, thereby postulating what is now recognised as model-consistent rational expectations. Of course, Smets and Wouters' (2007) model is no exception to this approach. There is also some tension among the paradigmatic methods. For example, the reductionist methodology which posits stable laws implies that the microfoundations are also stable in the process of aggregation. In other words, the household behaviour does, in principle, remain constant independent of what level of aggregation we look at. One might doubt that this is true when considering peer effects, for example, as well as other channels of influence on individuals' preferences and evaluations. In any case, the concept of advancement of knowledge by means of paradigmatic change emphasises the accumulation of puzzles that remain unexplained within the ruling paradigm. Beyond a certain threshold, the old paradigm will have to be abandoned and a new paradigm will emerge, just like the Copernican heliocentric paradigm removed the earth from the centre of the solar system and Einstein's principle of relativity replaced Newton's mechanics. Taking stock of the puzzles that remain unanswered in modern macroeconomics, we may again refer to Obstfeld and Rogoff (2000), who count no fewer than six major puzzles in international macroeconomics alone. Moreover, Obstfeld and Rogoff (2000) maintain that although some of the puzzles can be fixed to


some degree, several others defy satisfactory solutions, and even those fixes tend to stir continued discussion. On an even more fundamental level, the incapability of modern macroeconomic models in the vein of Smets and Wouters (2003, 2007) to explain the financial crisis, let alone forecast it, also casts considerable doubt on the ability of the current paradigm to account for various puzzles of its own. The questions, therefore, arise as to whether macroeconomics is ripe for paradigmatic change and, if so, what direction the shift should take. Though we cannot answer the first question, we will make an attempt to offer an answer to the second. Meanwhile, it should be pointed out that, to some extent, the shift is already taking place. The ongoing change is noticeable in the silent de facto substitution of positivist with constructivist methodology and in the abandoning of subjective rationality in favour of objective decision rules. The next section argues that instead of these veiled, tacit paradigmatic shifts, openly embracing uncertainty offers a more attractive way forward.

2.4 Paradigmatic uncertainty

Justified, for example, by Arrow and Debreu's (1954) general equilibrium concept, economists hunt the mechanisms that steer the wheels of the economy from top to bottom. The idea of a general equilibrium, in this context, plays the role of the objective, natural phenomena that natural scientists investigate. If we are confident that a general equilibrium exists, we can then scrutinise its properties quite like natural scientists by in turn formulating, rejecting and maintaining hypotheses. In contemporary macroeconomics this replacement has important consequences, however. For example, economists may claim to model market dynamics or, equivalently, investigate the evolution of market prices and quantities. The concept of (general) market equilibria does, however, pre-determine the outcome of this investigation because individual decisions that in fact establish actual market prices and quantities must be assumed to adapt to the equilibrium the investigator posits. Ironically, general equilibrium thus drives a wedge between the actual decision making of individuals and the outcome of those individuals' decisions. When considering market dynamics as being driven by some exogenous forces that push markets to equilibrium, Muth (1961) has proven that rational individuals cannot but adjust to these forces when making decisions. The most remarkable implication of the positivist approach in macroeconomics is thus the inversion of the flow of energy that spins the economic wheel. While undoubtedly individuals establish markets with their interactions resulting in prices and quantities, modern macroeconomics models markets as if individuals optimise their decisions with respect to predictable, existing market outcomes, i.e., general equilibrium. In this section we will test the idea that uncertainty may form the foundations of a new paradigm in economics. To that aim we will demonstrate how economic phenomena do indeed shine in a new light when looked at from the angle of

uncertainty. Uncertainty also helps to resolve a number of puzzles and implies the use of new tools of analysis and new interpretations of already available instruments. Testing the idea of uncertainty as the new paradigm starts with a discussion of a key ingredient of successful top journal submissions: puzzles.

2.4.1 Puzzles

Kuhn (1970) notes that the accumulation of unresolvable puzzles pre-dates the shift to a new paradigm. Economics, it seems, is an exception to this rule because the discovery of a new puzzle is often considered evidence not against, but in favour of, the validity of the prevailing paradigm. This view is fuelled by the prospect of high-ranking journal publications when "discovering" (and nurturing) puzzles. Attending to puzzles, one might even think, is more useful for economists than their eventual resolution because only unresolved puzzles afford continued intellectual discussions and career prospects. It is therefore enlightening to understand the nature of puzzles in economics. Looking at canonical economic puzzles (Meese and Rogoff, 1983; Obstfeld and Rogoff, 1995; Engle, 1996; Taylor, 1995; Obstfeld and Rogoff, 2000) it is straightforward to see that puzzles in economics share a common key feature. This common feature is their underlying structure, which is characterised by two mutually contradicting stances. The first of them rests on the paradigmatic pillars of contemporary economics comprising general equilibrium, subjective rationality, individual utility maximisation and risk. We have already seen that rational expectations is short-hand for subjective probability and individual utility maximisation under general equilibrium and risk. In what follows, we therefore do not refer to these deeper concepts individually but to rational expectations as their aggregate for the sake of simplicity. Rational expectations in conjunction with the economic hypothesis of interest eventually lead to conclusions about the theoretical features of economic phenomena such as prices and quantities. The second, contradicting, stance is derived from empirical observations. Empirical research consistently shows that the theoretical features of economic phenomena derived from rational expectations are not in line with actual data. This is true especially for those markets where stakes are the highest and where rationality should yield the largest pay-offs. For example, foreign exchange rate models are famous for their various failures, usually dubbed puzzles, such as the difficulties in predicting spot rates by forward rates (Wang and Jones, 2003; Salvatore, 2005) and the hassles in beating the naive random walk hypothesis in forecasting spot rates (Taylor, 1995; Obstfeld and Rogoff, 2000; Cheung, Chinn and Pascual, 2005). Likewise, certain "volatility" puzzles relate to the inexplicable behaviour of the second moments of the exchange rate. Very similar problems with the economic models arise when stock prices are under consideration (Shleifer and Summers, 1990). Keywords


such as irrational exuberance, irrational bubbles, noise trading and so forth all describe but one thing: the impossibility of matching theoretical models with the data. So far, the bulk of criticism of the rational expectations paradigm and hence the suggestions for overcoming its empirical problems has addressed the assumption of rational behaviour of individuals. In his survey of the literature on bounded rationality, Conlisk (1996) already lists four major reasons as to why individuals can hardly be expected to shoulder the task of fully exploiting all available information. He quotes not only the many economic papers that demonstrate the failures of the rationality hypothesis but also discusses some contributions to the psychology literature. Similarly, Tirole (2002) puts forth four reasons as to why we may observe deviations from rational behaviour. Other contributions extend this list towards learning and sentiments (see, e.g., Grauwe and Kaltwasser, 2007; Bacchetta and van Wincoop, 2005; Sims, 2005). Another common strategy to bridge evidence and theory is to amend empirical methods, data frequency, geographical data origin and sample periods. An editor of a renowned journal has neatly summarised these amendments with reference to the uncovered interest parity literature as follows (emphasis added):27

[...] Lothian [...] recently wrote a paper that tried to find the best evidence for UIP and using 100 to 200 years of bond data for about 20 rich countries, he found good but still noisy evidence that UIP held. [...] Lothian notes that 2/3 of the beta coefficients (which should equal exactly 1 in theory) have values of between 0.75 and 1.25. But since we can still "fail to reject" the hypothesis that beta equals 1, Lothian treats this as pro-UIP long run evidence. He still notes however, that things may be more complicated in the short run [...] I think Lothian's approach is the right one, and [...] most readers would come to Lothian's conclusion: That it's nice that "in the extremely long run" UIP holds on average, but it's still a puzzle why it often fails to hold in the short run.

The epistemological stance of this kind of reasoning is not only truly remarkable but also rather commonplace in contemporary economics. The key support for UIP is derived from sufficiently generous confidence intervals ("2/3 of the beta coefficients [...] have values of between 0.75 and 1.25") that do not permit rejection of the central hypothesis and, in case the central hypothesis is rejected, this rejection is explained away as a phenomenon owed to sample length, for example. These rejections are not even questioned as being erroneous; quite to the contrary, they live alongside the central hypothesis and their presence plays the role of an inexhaustible source of future publications that are meant to "reconcile" the contradictory findings. It should be recalled that the positivist epistemology would have it that a single (undisputed) rejection of the central hypothesis would suffice to reject the theory. This is exactly why Newton's mechanics eventually replaced Descartes's theory28 of "corpuscles" and Newton's mechanics had to give way (on the cosmic scale)

to Einstein's relativity principle. The same will eventually happen to the current macroeconomic paradigm, though there is still a long way to go. In order to understand why it is so difficult to let go, two further, structural issues should certainly be considered. The first issue rests with the very construction of macroeconomic puzzles.29 Remember that these puzzles arise because empirical data contradicts the theory based on rational expectations. But rational expectations is a composite concept that holds together the "corpuscles" of the paradigm, which means that rejecting the central hypothesis does not provide any hint as to what "corpuscle" has been rejected. Is it individual fallibility (Soros, 2013), utility maximisation, equilibrium, risk or the theory? Each of these elements can be speculated to lead to a rejection without ever being able to identify the true culprit. This indeterminacy of the reasons for rejection constitutes the source of infinite publications that was mentioned before. Its main epistemological relevance is to immunise the theory against definite falsification. Hence, this immunisation is the deeper reason why empirical rejection does not result in the accumulation of puzzles which would otherwise force a shift in the paradigm. Quite to the contrary, instead of triggering a paradigmatic change, the indeterminacy provides the justification for the theory to coexist with its own refutation. The second issue relates to the alternative to the current paradigm. In other words, even if one were ready to abandon the current paradigm, what concept should be used instead?30 One possible candidate for a substitute of the current paradigm could be the introduction of uncertainty. Allowing for uncertainty would have several benefits. First, uncertainty at once resolves all major economic puzzles, and second, major approaches such as rational expectations could be maintained, though at the price of their assuming a new meaning. The first benefit obtains because uncertainty sheds a new light on the notion of equilibrium. With uncertainty, equilibrium situations cannot be posited as exogenous, objective relationships like, for example, in 4.11. Instead, they emerge from the intentional interactions between individuals and – due to the main assumption – cannot be described by a probability distribution function (see 2.1, 2.2.1). Therefore, equilibria are more like consensual agreements, rather than equations in the mathematical sense. Consequently, since economic phenomena are intrinsically linked to human interaction, this human interaction cannot be juxtaposed to an objective, stochastic process representing some superior knowledge about the normative outcome of the interaction. The whole notion of a puzzle that builds on the gap between actual economic phenomena and theoretically correct observation thus simply vanishes. Furthermore, while without an objective benchmark individual behaviour cannot be labelled rational or irrational, rationality – as a concept – still prevails. But rationality now means to recognise that individuals generate economic phenomena themselves. Individuals form expectations about the outcome(s) of their actions building on this insight, which constitutes rational expectations under


uncertainty. The key element of rationality under uncertainty, however, is that any particular predicted outcome is not assumed to be deterministic or stochastic but compares to a range of not-knowable alternatives. Exactly how individuals form expectations under uncertainty must remain unknown for the time being. Equipped with the uncertainty paradigm, however, researching expectation formation strategies will be greatly simplified. The simplification arises because empirical data will inform about how these expectations are formed.

2.4.2 Rational expectations and uncertainty

Problems of risk and ambiguity are prone to lead to "correct" answers to questions that are in fact genuinely open issues in individual decision making. This superficial "knowledge" has led to the observation that behaviour that is labelled "irrational" may not be irrational at all because seemingly irrational behaviour may simply reflect optimal decision making under uncertainty while the posited problem is an issue of risk or ambiguity. Under uncertainty as well as under risk or ambiguity an individual will ponder their options in view of their expected outcomes before taking decisions. Taking expectations in a probabilistic setting is pretty well understood – the question thus remains as to how individuals form rational expectations under uncertainty, and what that implies for economic phenomena as well as for economic research. Quite obviously, the main problem for expectation formation is that under uncertainty at least one possible outcome of one's action cannot be known beforehand. Shackle (1972) proposed a "residual category" for all those events that are not-knowable and showed that because of that residual, the sum of probabilities of the known and assessable outcomes cannot add up to one (Crotty, 1994, p. 8). Therefore, risk and ambiguity or determinism alone cannot satisfactorily solve the evaluation problem. A possible way forward has been suggested by Keynes (1936). Keynes approached the problem from a different direction. Instead of searching for unavailable statements about not-knowables, Keynes says individuals do act on their subjective "state of confidence" (Keynes, 1936, p. 148) with which they weigh their feasible predictions.31 These feasible predictions, or expectations, may each very well be derived from stochastic calculus. The "state of confidence" then determines the extent to which agents think that the calculated probabilities are actually meaningful with respect to the true future sequence of events (Crotty, 1994, p. 9). In other words, the confidence in one's own predictions plays the key role, while there is no way of directly addressing the unknown events themselves. In this perspective, expectation formation becomes a two-step procedure. In a first step, individuals formulate expectations conditional on the non-existence of the residual category containing the not-knowable events. Then, individuals define the extent to which they think their expectation generating mechanism coincides with the actual environment. Finally, they take a decision.
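The two-step logic can be summarised in a deliberately stylised sketch. All numbers, the decision rule and the hurdle below are hypothetical assumptions; the only point is to keep step one (a model-based expectation that ignores the residual category) separate from step two (the confidence placed in the expectation generating mechanism).

```python
# A stylised sketch of the two-step expectation formation described above.
# The forecast, the confidence weight and the hurdle are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class InvestmentChoice:
    expected_return: float   # step 1: model-based expectation, residual category ignored
    confidence: float        # step 2: weight of belief in the model (0 = none, 1 = full)
    hurdle: float = 0.02     # minimum confidence-weighted return required to act

    def invest(self) -> bool:
        # Act only if the confidence-weighted expectation clears the hurdle.
        return self.confidence * self.expected_return > self.hurdle

# An optimistic forecast held with little confidence does not trigger an investment,
# while a modest forecast held with high confidence does.
print(InvestmentChoice(expected_return=0.10, confidence=0.1).invest())  # False
print(InvestmentChoice(expected_return=0.04, confidence=0.9).invest())  # True
```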

An advantage of this conceptually two-step procedure rests with the fact that the two considerations, conditional expectation formation and confidence weighting, can be dealt with separately. At the same time, care must be taken not to mix up those two layers. It does not make sense, for example, to declare the residual category to be emptied by one's modelling of all possible events. If this is nevertheless done, the second step seems to be expendable, but under uncertainty the corresponding expectations can only be pretence of knowledge, never actual knowledge. Economists have so far focused mainly on the first step of expectation formation by predominantly ignoring the residual category or, rather, assuming it to be empty. This focus has led to an enormous amount of literature on modelling the economy, forecasting and making probability statements about (future) events. For example, Smets and Wouters (2003, 2007) can be read as an elaborate first step of expectation formation with a probably low level of confidence attached as regards the relevance for the economy. Having in mind that many more resources have been spent on this first step, we do not need to expand on these efforts here. Instead, we will discuss some aspects of the conceptual two-step procedure. A first question that one might ask is whether this two-step procedure has any bearing on actual behaviour. The answer is most probably "yes", even though many economists might not even be aware of it. A clear indication of economists' applying the two-step approach is their emphasising that any statement about their forecasts of gross domestic income, consumption, prices etc. builds on the assumptions made in their respective models. Implicitly, they thus condition these statements on the non-existence of the residual category. In parallel, detailed forecasts are very often accompanied by separate assessments of possible "risks", such as political developments, unexpected or unattended economic developments and so on, which can easily be read as a way of weighting the belief in these forecasts. It is thus very well possible to combine an optimistic forecast – attaching a high probability to strong GDP growth, for example – with a low vote of confidence.32 Interestingly, quite like in the case of communicating forecasts jointly with a list of caveats, acknowledgement of the two-step, essentially Keynesian, expectation formation process can be traced to the European Central Bank (ECB), for example. The ECB guide to its staff's macroeconomic projections (European Central Bank, 2016) notes that these projections are "conditioned on a set of assumptions, combine the use of models and other tools with the knowledge and judgement of economic experts" (European Central Bank, 2016, p. 2). These other tools are necessary because "the state of the world is subject to unforeseeable events" (European Central Bank, 2016, p. 27), which matches pretty clearly the definition of uncertainty. The final outcome of the ECB's forecasts, therefore, is a mixture of stochastic and uncertainty analyses. Unfortunately, in defiance of its own recognition of "unforeseeable events" the ECB nevertheless does eventually also quantify probabilities for "unforeseeable events" in a process it calls quantitative risk assessment (European Central Bank, 2016, p. 28). This quantification is based on a classification


of “key risk events” and the ECB and Eurosystem staff is surveyed on “the probability of materialisation of the risk event” as well as about “all the remaining risks”(European Central Bank, 2016, p. 28). Quite obviously, although recognising the existence of unforeseeable events would demand a rather cautious approach with respect to quantifying probabilities, the ECB falls victim to the pretence-of-knowledge trap. The second question that arises when considering the two-step expectation formation addresses what determines the weights of belief. The issue at stake is about the importance of the residual category or, equivalently, the congruence between one’s model of reality and reality itself. For a start, let us assume that an individual does not even have a model that would support an expectation. If nevertheless forced to voice an expectation the individual might simply grab whatever suitable straw is near. Suitability in this respect might simply mean being a member of the same category of information, such as a numerical value. Psychologists describe this behaviour as anchoring. It means that people sometimes refer to totally arbitrary “information” when making predictions. A rather popular example thereof is telephone numbers and prices. Letting subjects recall their telephone number can be shown to be informative about the same subject’s guess of the price of a commodity. For instance, regressing the price guesses for a bottle of wine on the last two digits of the recalled phone number often shows a statistically significant relationship. Given the arbitrariness of the connection between a phone number and the price of wine, some researchers could be tempted to label those expectations irrational. What they should do, however, is also enquire about the state of confidence their subjects attach to their expectations. One can hazard a guess that this weight would be very low. Therefore, even if such an expectation was delivered with zero variance, the low level of confidence in the expectation generating mechanism would suggest that an individual would hardly act upon this estimation, especially not if acting could have a far-reaching impact that really matters.33 This distinction is especially relevant for investment decisions. When formulating an expectation about the effect of an investment the investor may employ a model that produces a rather optimistic outlook. The investment decision, however, will not be based on the optimistic mood alone but also on the confidence the investor puts in the (external) validity of the model. Therefore, the problem of what factors determine the confidence in one’s world view demands due attention. Next to more or less arbitrary anchoring, it may be worthwhile considering experience, peer effects and institutions among those factors that shape the level of confidence. For an investor, a decision that turns out to be profitable will certainly raise the level of confidence in the model used in the first step. This rise in confidence may even occur when the underlying model has no bearing on reality because, for example, each investment outcome is independent of past outcomes. A successful investor may thus be tempted to continue investing on grounds of being convinced to posses some superior strategy or insight. In

an extreme case, a few profitable decisions may trigger a self-enforcing and self-confirming behaviour leading to its own eventual catastrophic end. Taleb (2005) offers an illustrative example of how to exploit this erroneous circle of self-confirmation based on irrelevant experience, in which a seemingly successful financial market investment is used to instil a level of confidence in one's investment model that eventually leads to a bankrupting mistake. However, experience may as well not be just spurious but indeed yield information about the external validity of the expectation generating model. Knowledge about this information may be disseminated through business schools, trade chambers and other formal and informal channels. One such informal way of knowledge sharing is peer advice. Again, the advice may not even be based on actual experience or insights, but it may suffice to make the predictor confident in his expectation. A telling example of peer effects in confidence building is money. Economic textbooks very often refer to money as the means of exchange, store of value and unit of measurement. When it comes to explaining why some items are more suitable than others to serve as money, the underlying question of why money "works" at all is sometimes overlooked. Indeed, why should anybody accept a metal plate in exchange for hard-laboured produce? Accepting a metal plate involves a prediction about the worth of this plate in future exchanges. The model one uses for forming those predictions may or may not be confirmed by peers. The best answer to the question of why money is accepted thus simply is: because everybody else accepts it. The underlying reason for money to be operational, therefore, is an implicit contract about the value of money and mutual confirmation of the usefulness of money which enforces the contract through voluntarily obeying its rules. Of course, this peer mechanism can also operate in reverse gear. Observing peers' preference for other forms of money or other currencies altogether will make agents shift to those alternatives or ask for more of the same as a compensation for the loss of confidence evidenced by the peer behaviour. The result of those tides in peer confidence will be inflation and revaluation of currencies. Beyond individual factors, institutions play a key role. Institutions can affect the confidence level in two ways. They either increase the congruence between model and reality by enforcing the model mechanism or they manipulate the residual set of unknown events. When considering an investment in a certain location, the investor needs to form an expectation about the availability and the quality of basic services. A government may provide the infrastructure that ensures the delivery of those services such as sewage treatment or electricity. It may alternatively grant monopoly licenses to private companies in return for, among other things, a guarantee of service provision. Moreover, those institutions need not necessarily be governmental at all. If such institutions exist, the investor will place much more confidence in his bet because he trusts more in the reliable operation of his business.


Cultural and social norms and customs very often also tend to ensure the congruence between model and actual outcome. For example, a society that commands a high level of trust among its members will see a tendency to do better economically because trust simplifies exchanges of various kinds. Without trust the costs of enforcing contracts rise and the readiness to invest will be comparatively low. The pitiful economic state of the late Soviet Union has, for example, been associated with weak social norms and lack of trust between individuals and between individuals and the state (Govorkhin, Ganina, Lavrov and Dudinstev, 1989; Gudkov, 2010; Levda, 2001). Similar arguments have also been used to explain regional economic differences within East Germany (Lichter, Löffler and Siegloch, 2016). Institutions may also target the residual set of unpredictable events. When choosing a profession, an investment or a partner, the effects of this choice remain obscure to a considerable extent. A profession may become obsolete, thereby limiting or even destroying the income basis. At the time of making the decision it will more often than not be impossible to calculate the odds of that happening. However, institutions can assist in raising the confidence in one's expectations about the job or investment prospects by limiting the scope of adverse outcomes. Potential perverse side effects notwithstanding (Müller and Busch, 2005; Snower and Merkl, 2006), social security systems like unemployment insurance and cash transfer programs make sure that the results of a failed investment or occupation choice do not turn out extremely negative. Obviously, although there is no direct link between the expectations about a certain event and the residual category, the confidence in one's expectation may nevertheless be raised because some of the non-imaginable events within the residual category lose some of their awe: the institutional setting would prevent harm even in case the not-predictable happens. Probably the most relevant implication of institutions' influence on the confidence about one's expectations is economic policy. The role of institutions in an uncertain environment and economic policy under uncertainty are explored in sections 3.1 and 3.3, respectively.
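As a side note to the anchoring example discussed earlier in this section, the regression of wine-price guesses on the last two digits of a recalled phone number can be mimicked with simulated data. Everything below is invented for illustration; it merely shows how such a "significant" anchoring coefficient can arise and why, without an accompanying confidence measure, it says little about rationality.

```python
# A hypothetical illustration of the anchoring regression mentioned above:
# simulated subjects partly anchor their wine-price guess on the last two
# digits of their phone number. All numbers are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
digits = rng.integers(0, 100, size=n).astype(float)        # last two phone digits
guess = 15 + 0.08 * digits + rng.normal(scale=4, size=n)   # anchored price guess

fit = sm.OLS(guess, sm.add_constant(digits)).fit()
print(fit.params, fit.pvalues)  # the slope on the digits comes out "significant"
# The text's point: without eliciting the subjects' confidence in their own guess,
# such a regression says little about the rationality of their expectations.
```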

2.4.3 Rationality and uncertainty "measurement"

From a research perspective, rational expectations under uncertainty raise a number of exciting and as yet largely unanswered questions. An immediate problem that comes to mind is the measurement of the degree of uncertainty. By the definition of uncertainty, uncertainty itself remains impossible to measure. Any attempts to do so necessarily result in the pretence of knowledge but never in knowledge. Therefore, instead of measuring uncertainty directly, indirect approaches may be more promising. Given the two stages of rational expectations under uncertainty, the focus of measurement could be the level of confidence placed on the expectations formed in the first step. This level of confidence could, in principle, be projected

on a zero-to-one scale with zero implying no confidence or complete uncertainty and one full confidence or no uncertainty. However, neither limit realistically seems ever to occur and it would probably overstretch an agent's capability to apply a given projection mechanism consistently. Therefore, an incremental, ordinal scale for measuring the (change in the) level of confidence should be used instead. For example, a business tendency survey could include a question that enquires about the "confidence the expert places in its sales forecasts" and offers the answering options "less than", "the same as" or "more than last period". When trying to include such measures in quantitative forecasting exercises, the empirical forecast model could condition on the three categories, for example. Given such a two-layer enquiry, the co-movement between (optimistic) expectations and (strong) confidence would be an interesting object of study in its own right. For instance, if there was a very pronounced correlation, that would indicate that an independent approximation of the two steps, expectation formation and confidence placement, was questionable. Instead of addressing the level of confidence directly, implicit measures could also be used. For example, a high degree of uncertainty implies that expectation formation itself becomes very challenging. If survey respondents are overwhelmed by this challenge they may react by refusing to answer a question or the whole questionnaire. This refusal would drive down participation rates in times of elevated uncertainty. Alternatively, participants may cling to status-quo expectations and show a tendency to check "no change" boxes. So, for example, when asking about future sales prospects and providing the options "higher sales", "about the same sales", "fewer sales", the share of "about the same" answers might be higher in times of uncertainty.34 Further implicit indications might employ the mechanism of self-confirmation. As has been argued before, an expectation that turns out "correct" will have a tendency to raise confidence in one's expectation formation model. Conversely, negative surprises will adversely affect confidence and therefore indicate increased levels of uncertainty. Indeed, when looking at the informational content of negative surprises, Müller-Kademann and Köberl (2015) have shown that they significantly improve forecasts of GDP. In that case, negative surprises were defined as decreases of capacity utilisation when an increase or no change was expected in the previous period. Alternative popular "measures of uncertainty" relate sentiments in newspapers (Baker, Bloom and Davis, 2015), dispersion of opinions and disagreements (Boero, Smith and Wallis, 2008; Bomberger, 1996; Bianco, Bontempi, Golinelli and Parigi, 2013; Driver, Imai, Temple and Urga, 2004) or price volatilities to uncertainty (Bekaert, Hoerova and Lo Duca, 2013). However, in all of these cases the fact that the item to be measured is by definition not-measurable is ignored. Moreover, an explicit distinction between the actual expectation and the weight subjects associate with the expectation is only occasionally drawn. In order to give these alternatives some meaning we may consider their inverses as indicators of confidence (sentiment measures) or reflections of


the impact of uncertainty (dispersion of opinions, volatilities of prices), which preserves some of the flavour of actual uncertainty. Analytically, however, mixing up uncertainty with its effect, as in the case of dispersion or volatility indicators, clearly limits the extent to which the effect of uncertainty can be investigated independently of the properties of the expectation generating model. A very similar issue pertains to the possibility of deriving individual probability distribution functions from actual choices as devised, for example, by Savage (1951). Under uncertainty, inference of that kind is impossible. This impossibility derives from the fact that definite choices are the joint result of probabilistic assessments and the weighting of beliefs. Reverse-engineering probabilities from actual choices thus mixes up the two distinct underlying steps of analysis and has no relation to the expectation formation of individuals. Unfortunately, it will usually be impossible to straightforwardly recognise the mistake because in ex-post situations the set of possible outcomes seems to be closed and no residual category needs to be accounted for. This is the same mechanism by which empirical fitting of probability distributions for asset prices produces "fat tails"; ex post, any realisation can be catered for by an arbitrarily large yet finite variance of a distribution function. Third, expectation under uncertainty poses the challenge of being able to distinguish between confirmation due to chance and genuine rises in the confidence level. Maybe paradoxically, the bigger concern is about confirmation by chance. In this context, the latter refers to objective processes which allow quantification and thus a probabilistic statement about the congruence between model and reality. For example, rolling dice or playing a lottery allows the mapping of events to a zero-to-one scale of probabilities, while subjective processes which are exposed to reflexive-transformative mechanisms usually do not. This observation yields an interesting implication. A policy that aims at raising the confidence in one's investment model may succeed even though it operates with anchoring or peer effects – two rather arbitrary confirmation mechanisms – if the aggregate of the triggered investments lifts an economy out of an underemployment equilibrium. This uplift can occur because elevated confidence in a positive but small (hence pessimistic) return on investment can overcome the reluctance to invest that is implied by the pessimistic expectation itself. Objective processes such as lotteries lack this reflexive-transformative aspect and must therefore not be targeted by economic policies. Consequently, research should advise on what kind of problem causes an economic slump: does it result from a lack of confidence or from bad luck? In conclusion, the two-stage concept of decision making under uncertainty offers a range of empirical tests that may shed light on actual decision making. A reasonable set of research issues would address the validity of Keynes's (1936) two-step view, the factors affecting the level of confidence, means for separating risk from uncertainty and general measures of confidence.

The biggest advantage of the new perspective, therefore, is the opportunity to investigate human behaviour by asking open questions, that is, without having to know the "correct" answers in advance.
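To make the survey-based route discussed above concrete, the sketch below computes two of the implicit indicators it suggests – the share of "no change" answers and the share of negative surprises in capacity utilisation – from a small, entirely hypothetical survey panel; the column names and data are invented.

```python
# Two implicit uncertainty indicators discussed above, computed from a hypothetical
# business-tendency survey. Column names and data are invented for illustration.
import pandas as pd

survey = pd.DataFrame({
    "period":            ["2024Q1"] * 3 + ["2024Q2"] * 3,
    "sales_expectation": ["up", "same", "same", "same", "same", "down"],
    "cap_util_expected": ["up", "up", "same", "up", "same", "same"],
    "cap_util_realised": ["up", "down", "same", "down", "down", "same"],
})

# Indicator 1: share of "no change" answers per period.
no_change_share = (survey["sales_expectation"] == "same").groupby(survey["period"]).mean()

# Indicator 2: share of negative surprises, i.e. realised capacity utilisation falls
# although an increase or no change was expected (cf. Müller-Kademann and Köberl, 2015).
negative_surprise_share = (
    (survey["cap_util_realised"] == "down")
    & survey["cap_util_expected"].isin(["up", "same"])
).groupby(survey["period"]).mean()

print(no_change_share)
print(negative_surprise_share)
```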

2.4.4 Elements of an uncertainty paradigm

If one were to compile a list of paradigmatic hypotheses, the items subjective rationality, subjective utility maximisation, risk (ambiguity) and equilibrium (optimality, ergodicity) would certainly be considered indispensable (see 2.3.2). Since uncertainty dominates reality, replacing risk with uncertainty is imperative for economics to be, remain or become a relevant social science, depending on the degree of one's pessimism about economics' achievements at large. Substituting risk with uncertainty will affect the paradigms of economics in more than one way. First of all, with uncertainty, economic analysis will inevitably become more empirical as it will be forced to move away from the simplistic perspective of objectively knowable event spaces implied by probability distribution functions and theory-driven equilibria. Instead of objectively defined equilibria, equilibrium must be shown to emerge from individual behaviour. Therefore, in an initial step the individual behaviour has to be described using first-hand empirical evidence. Equipped with the empirical knowledge of behavioural rules, mutual exchange may then be modelled or its results be approximated. Apparently, subjective utility maximisation will hence remain part of the economic paradigm, but its meaning must shift from being an objective measure that is largely homogeneous across individuals to a genuinely subjective matter that might not even be measurable but only, say, be qualitative in nature. In order to illustrate the necessary empirical task ahead, one might consider the question of how to measure confidence or how far people do in fact look into the future instead of assuming optimisation over an infinite horizon, as in Smets and Wouters (2003, p. 1127), for example. When modelling the outcome of individual exchange, equilibria may or may not emerge. The case of multiple or unidentified equilibria is not unknown to economics, not even to neoclassical economics. Sonnenschein, Mantel and Debreu (SMD) have as early as 1973 and 1974 shown that aggregating individual demand does not lead to aggregate functions that share the same properties as the individual functions. Aggregate demand functions may thus turn out non-monotonic and increasing in quantity, for example. As a result, "general" equilibrium will in general be neither unique nor stable and hence not general either. Though uncertainty makes empirical analysis imperative, it does also pose a considerable challenge to econometric analysis. Most economic data have a distinct time dimension which usually induces correlation between consecutive observations and therefore questions ergodicity of mean and variance. But even non-stationary data can be dealt with by either taking first differences or


applying cointegration analysis (Engle and Granger, 1987; Johansen, 1988). With uncertainty, a qualitatively new layer of non-stationarity is added, however. This new layer is obtained due to the fact that any new individual who is added to an equation may shake up the whole aggregate. For example, if agents engage in an exchange out of which a share price emerges, the principle of uncertainty implies that, upon entering the negotiations, a new agent may lead to an unpredictable price change (see 4.1). A sequence of unpredictable changes, however, is exactly what defines a martingale process, the model case for non-stationary data. However, under uncertainty this non-stationarity is not the "result" of the passing of time but of the genuine individualism referred to in the main assumption. Instead of non-stationarity with respect to time we thus obtain a cross-sectional non-stationarity in disguise for which it has yet to be seen whether the cointegration framework is suitable. Due to the reflexive-transformative nature of human decision making, the tools with which people cope with uncertainty can also be expected to change constantly. Tracking these tools, therefore, constitutes a permanent research objective and suggests once again that deductive instead of inductive approaches must gain ground. When putting emphasis on the inductive method, it does not make much sense any more to stick to the idea of a universal, objective truth as postulated by positivism. The incompatibility of the inductive method and positivism arises because – as we have argued before – uncertainty will always destroy any existing link and create new links between aggregate (market) results and the individuals. This infinite cycle of destruction-creation is inevitable because otherwise a general law could be established that would allow us to infer individual behaviour to such a degree that it would be at odds with the basic assumption. Economic phenomena are hence the result of human creativity and pure constructs of reality rather than objective realities. Because an economic epistemology must reflect the properties of its study object, constructivism is the appropriate epistemology which economists should openly embrace. This said, constructivism in economics must not be mistaken for an anything-goes approach in which all theories and all models deserve equal valuation. Knowledge can only be gained by discriminating between competing theories and models.35 Modern econometrics already knows several ways of picking losers and winners in this competition. It has yet to be seen, however, if the available approaches are also suitable for a situation in which the existence of an objective truth cannot be assumed. In order to appreciate this problem more clearly, recall that under positivism a general law exists. This general law can be assumed to have generated the data. Under a suitable null hypothesis, a particular stochastic data generating mechanism (such as equation 4.1) can be assumed to coincide with the underlying true process and all the statistical test batteries can be applied. Under constructivism the null of having found the true data generating mechanism cannot be maintained. Consequently, a t-test for the significance of

Under constructivism, the null of having found the true data generating mechanism cannot be maintained. Consequently, the standard interpretation of, say, a t-test for the significance of a certain parameter breaks down: the test presupposes that, under the null, the model with this parameter set to zero coincides with the true data generating mechanism, a presupposition that constructivism denies. Although an exhaustive answer to this challenge cannot be provided here, a potential way of justifying existing methods also under constructivism can be envisaged. Note that any realisation of observable data is no longer subject to the reflexive-transformative processes. It could therefore, in principle, be possible to uncover the specific realised actions and causalities out of which these observations emerged. In an ex-post analysis, the assumption of a true data generating mechanism may thus be recovered.

In general, competing theories and models will be, and actually already are, the norm. Discriminating between them heeds the call by Friedman (1953c, p. 9) for attaching more confidence to those that have seen fewer contradictions, as much as Keynes's (1936) concept of weighting the beliefs in various expectations. The truth in this setting is obviously an exact mirror of reality: a construct.

The remaining items on the current list (falsification, reductionism, mathematics, microfoundations) appear less contentious. Falsification can be used as a means of discrimination, with "false" now meaning "inferior". Deduction also implies a stronger role for empiricism in order to know, first, which decision making rules individuals actually follow and, second, what kind of individualistic behaviour to aggregate up in a truly microfounded modelling approach.

Comparing the existing and the envisaged lists of paradigmatic hypotheses and methodologies, the differences look less pronounced at first glance than the similarities. And in fact several branches of the profession, such as behavioural economics or (applied) econometrics, already embrace the new paradigm by focusing on empirical analyses of actual individual or societal behaviour. Referring once again to the SMD result, it becomes clear that neoclassical economics has also produced a result that makes empirical investigations of aggregate demand imperative. In fact, one way to interpret the SMD theorem is to accept that a truly bottom-up aggregation of individual preferences and utilities is not only infeasible in practice but also unnecessary. The latter arises from the observation that aggregation of demand functions (without special knowledge about all individual endowments) generates aggregate behavioural functions that lead to many equivalent equilibria. Consequently, instead of analysing a microfounded general equilibrium that can neither be identified nor be stable, one might work with an empirically estimated aggregate demand function. Of course, identification issues are not solved by simply adopting this goal.

Finally, macroeconomics has already assumed a constructivist stance, as has been argued in section 2.3.2, though without the decisive discrimination step. Adding the seemingly minor paradigmatic shifts may nevertheless induce a major move towards hypotheses and theories potentially being rejected and abandoned, which would raise hopes that new knowledge may indeed be generated. These new insights may even help us understand the mechanisms of economic crises as well as develop the tools for efficiently handling them.
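One routine device for discriminating between competing models, also hinted at in note 42 below, is a forecast comparison. The following sketch contrasts a random-walk forecast with an AR(1) forecast on simulated data using a simple hold-out sample; it is an illustrative pseudo out-of-sample exercise with hypothetical data, not the rigorous ex-ante design the note calls for.

```python
# A minimal sketch (simulated data, hypothetical candidate models): picking
# "losers and winners" by comparing hold-out root mean squared forecast errors.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=300))      # simulated series, here a random walk

train, test = y[:250], y[250:]
rw_forecast = np.full(len(test), train[-1])          # random walk: carry last value forward
ar_forecast = AutoReg(train, lags=1).fit().predict(
    start=len(train), end=len(y) - 1)                # AR(1) multi-step forecast

rmse = lambda f: np.sqrt(np.mean((test - f) ** 2))
print(f"RMSE random walk: {rmse(rw_forecast):.3f}")
print(f"RMSE AR(1):       {rmse(ar_forecast):.3f}")  # the smaller RMSE "wins"
```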


2.5 Economic epistemology and uncertainty: an application to the Lucas critique

When invoking uncertainty instead of risk, deduction instead of induction and constructivism instead of positivism, new insights can potentially be gained. In the following, we apply the new paradigm to the famous Lucas critique (Lucas, 1976) and show that it fits neatly into the Keynesian decision making framework without losing any of its original appeal.

The Lucas critique set loose a fully fledged revolution in economic theory and model building. Ever since, modellers have focused on the consistency of their models' implications and the decisions (representative) agents are supposed to take. When uncertainty enters the picture, nothing changes with respect to this modelling principle. However, uncertainty implies that the Lucas critique must be applied more rigorously than is currently practised. The current practice is dominated by the so-called "real business cycle", "new classical" and "new Keynesian" models, which align rational decision making on the micro level with equilibrium on the macro level.36 Currently, the DSGE variant of each of them, no doubt, represents the most advanced approach. If we apply Lucas' critique to the choice of these models, we can show that contemporary macroeconomic modelling strategies are in obvious contradiction to rational choice and, therefore, rather useless for actual policy analysis. If, however, put in the context of decision making under uncertainty in the Keynesian framework (see 2.4.2), concurrent macroeconomic modelling strategies may nevertheless deliver a productive basis for economic policy discussion.

We begin the argument with a brief review of the Lucas critique. Lucas (1976) famously took issue with the dominant econometric approach of the 1960s and 1970s of using one and the same model for describing the working of the economy as well as for analysing the outcomes of alternative economic policies, without paying due attention to how people would react to those outcomes. Ignoring these reactions is a mistake, Lucas claimed, because they would, in fact, impact the model on which they are based.37 In Lucas' (1976) own words:

[. . . ] given that the structure of an econometric model consists of optimal decision rules of economic agents, and that optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models.
Lucas (1976, p. 41)

What followed was a wave of models that achieve consistency between model-based expectations and the models themselves. This "Lucas-proof" modelling approach turned out to be so successful (Smets and Wouters, 2003; Smets and Wouters, 2007, for example) that in 2003, Lucas declared that macroeconomics

[. . . ] has succeeded: Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades to come.
Lucas (2003, p. 1)

Thus, from a 2003 perspective a long and challenging mission had been accomplished. Back in 1976, Lucas had observed that the models of his day were doing a good job in short-term forecasting but "are meaningless" (Lucas, 1976, p. 24) when it comes to long-term forecasting. In the wake of the 2007/2008 financial crisis, Lucas' (1976, p. 42) claim that this newer class of models would systematically outperform traditional models that are not based on microfounded rational expectations came under attack (Edge and Gurkaynak, 2010) and lost some of its appeal at large.38

The question thus emerges of what makes the "Lucas-proof" model class vulnerable to challenges posed by the developments in the real economy and by competing models that lack model-consistent expectations. Alternative suggestions notwithstanding, one answer is Lucas' account of the benefits of introducing rational expectations. As has been shown before, Lucas rightfully emphasises that rational individuals will realise the consequences a policy choice has according to a given model and hence this model must consistently account for the predictable effects of such policy moves. However, this notion leaves unexplained the actual choice of the model. In other words, before formulating model-consistent expectations, an individual must choose a model to build expectations on in the first place. It is only reasonable to assume that rational individuals also choose their model rationally. Therefore, the question arises: what would be a rational model choice?

Again, Lucas (1976) provides a useful hint. Referring to the standard econometric practice of his day, he notes:

No one, surely, expected the initial parametrizations of [the traditional] models to stand forever.
Lucas (1976, p. 24)

By simple analogy, a rational individual would not have expected in 1976, or any time before or after, that the most popular (benchmark) microfounded, "Lucas-proof" model at any given point in time would "stand forever".39 The consequences are, again, far reaching. A rational individual would not only make up his mind about the consequences of a certain policy measure within a given model; he would also take into account the fact that this very model would sooner or later be replaced by a better one that might yield different implications of the same policy. Owing to the transitory nature of any economic model, an individual would therefore rationally choose not to place too much weight on the implications of any particular model.40 Exactly how much weight he would place remains prima facie unknown.


What can be safely said, nonetheless, is that it is pretty unlikely that an individual would expect the implied consequences of a particular given model to fully surface. By backward induction, the rationally optimal solution of this particular model will hence not be considered to offer a sound, unique basis for decision making.

To illustrate the last point, one might consider a positive fiscal policy shock. Most rational expectations models, especially of the real business cycle type, have it that the long-run effect of this shock will be no change in output but a rise in the price level. By virtue of these models, the rationally optimal individual policy response is to raise prices immediately, thus driving the output effect to zero even in the short run. In the light of the Lucas critique, raising prices may not at all be the truly rational, optimal response, however. This is simply because a rational agent would expect that new and better models of the economy will sooner or later emerge that might have it, for instance, that the best response could be not to raise prices immediately or not to raise prices at all.

Consequently, there is only one situation in which relying on any given model truly is the rational choice. This situation arises when the ultimate, unanimously "true" model is at hand. Such a situation is, however, for pretty obvious reasons neither plausible nor desirable;41 if the "true" model were known, researchers like Smets, Wouters, Blanchard, Lucas and all future generations of economists would stop amending existing models or coming up with new ones to account, for example, for new developments such as the financial crisis. The alternative to this rather unrealistic prospect would be to once and for all restrict the universe of potential models to a set of models with agreed-upon properties. This alternative would, however, mean putting research in a straitjacket, which would be the end of economics as a science. Therefore, the ultimate model is not available, neither now nor in the future.

Rational expectations, therefore, require that models not only feature model-consistent expectations but also take into account the transitory nature of the model itself. Rationality, therefore, extends inevitably also to the model choice. A model thus has to cover rational expectations within the model as well as rational expectations with respect to model selection.

Decision making under uncertainty as devised by Keynes offers a neat way of handling the model selection problem. To see how, simply consider that any given model represents one prediction about the true economic mechanisms. Within the very model these predictions can even be evaluated in a strictly probabilistic fashion by making appropriate assumptions about the involved disturbances and according econometric inference. Going beyond a particular model, Bayesian model averaging may also be used for making probabilistic statements about model results across several models. However, by the principle of uncertainty, the set of all possible and relevant outcomes, or rather models, is not closed. Therefore, adding up individual model "probabilities" can never yield one, implying that unanimous conclusions about cause-effect relations are not available.
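The point about model "probabilities" can be stated compactly by contrasting conventional Bayesian model averaging over a closed model set with the open set implied by uncertainty; the notation below is an illustrative sketch and is not taken from the original text:

\[
\text{closed set: } \sum_{k=1}^{K} P(M_k \mid \text{data}) = 1,
\qquad
\text{open set under uncertainty: } \sum_{k=1}^{K} w_k < 1,
\]

where \(M_1,\dots,M_K\) denote the currently known models and \(w_k\) the weight, or strength of belief, an individual attaches to model \(M_k\). The remainder \(1-\sum_k w_k\) is reserved for models not yet conceived, so no exhaustive probabilistic statement across models is available.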

Faced with the prospect of competing theories or models or both, any rational individual will thus have to weigh the belief in the validity of each of these models. The strength of belief will usually depend on various factors such as peer effects, prejudice and experience. The latter aspect is highlighted, for example, by Friedman (1953c, p. 9), who suggests putting more trust in theories that have "survived many opportunities for contradiction",42 while Hendry and Richard (1982) and Hendry and Mizon (1998) develop the concepts of exogeneity and super-exogeneity of policy variables of interest within models. The latter approach offers a way to empirically determine variables that are not affected by policy shifts and hence are immune to the Lucas critique.

As a result, each individual will allocate different weights to different theories and models. Therefore, different beliefs about the true functioning of the economy will, rightfully, co-exist and so will different strategies for optimal responses to policy action. It is eventually up to the policy maker to base the policy decision on any of these models, keeping an eye on potential outcomes that other models or theories may also suggest. Quintessentially, however, whatever happens in the due course of policy conduct will hardly coincide exactly with any of the models under consideration, though the new experience will contribute to shaping the weights to be attached to the available models. It is thus understandable that seemingly old-fashioned Keynesian interventionist models that devise government spending as a means of fighting recession are still honoured by many, despite being ridiculed by the so-called rational expectations revolution; rational people just do not put all their confidence in the (real-business-cycle) models that assert the ineffectiveness of government stimulus. Owing to these "rational expectations" models' incapability to address the issues of the 2007/2008 financial crisis, there is little prospect that these real-business-cycle models will gain more weight any time soon.

The Lucas critique has pointed out the necessity of incorporating optimal individual decision making in economic model building. The same necessity also implies that only rational expectations that take into account the transitory nature of all human knowledge, and hence of economic models, can truly inoculate against Lucas' (1976) criticism. Within the new uncertainty paradigm of economics it seems apparent that no single current model nor any future model is able to capture all the relevant policy aspects of the economy. Therefore, policy evaluation by means of economic models requires forming beliefs about the strength of validity of the model under consideration before policy decisions are made. Eventually, experience will provide some hints as to which models are more useful than others for economic policy conduct.
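As a purely illustrative sketch of such a weighting, the following snippet combines the predicted output effect of a fiscal stimulus across competing models. All model names, predicted effects and belief weights are hypothetical and are not taken from the text; the only substantive feature is that the weights deliberately sum to less than one.

```python
# A minimal sketch (hypothetical models, effects and weights): a policy maker
# weighs competing model predictions, leaving part of the weight for models
# not yet conceived.
predicted_effect = {          # predicted output effect of the stimulus, in per cent
    "real business cycle": 0.0,
    "new Keynesian DSGE":  0.4,
    "old Keynesian":       1.2,
}
belief_weight = {             # subjective weights; they deliberately sum to < 1
    "real business cycle": 0.2,
    "new Keynesian DSGE":  0.3,
    "old Keynesian":       0.3,
}

weighted_view = sum(predicted_effect[m] * belief_weight[m] for m in predicted_effect)
unassigned = 1.0 - sum(belief_weight.values())

print(f"Belief-weighted effect of known models:   {weighted_view:.2f} per cent")
print(f"Weight reserved for not-yet-known models: {unassigned:.2f}")
```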

Notes
1 The "Human Brain Project" (www.humanbrainproject.eu/) aims at delivering exactly that proof.
2 Soros goes on to illustrate: "For example, if investors believe that markets are efficient then that belief will change the way they invest, which in turn will change the nature of the markets in which they are participating (though not necessarily making them more efficient)."
3 Schumpeter convincingly argued that point as early as 1912.
4 See also the remarkable account given by Leo Melamed, the former chairman of the Chicago Mercantile Exchange, at www.cmegroup.com/education/interactive/fxproductguide/birthoffutures.pdf, 2 March 2018.
5 Cross, Hutchinson, Lamba and Strachan (2013) claim that biologists, by discovering ways to manipulate DNA, and climate researchers, by affecting CO2 emissions, are also transformative. However, in natural sciences transformativity only affects the object of study but never the laws of nature.
6 See also Wilkinson and Klaes (2012, ch. 9) for an elaboration on various definitions and their implications for economics.
7 For example, Engel (2016) finds evidence for four out of six countries at a favourable 10 per cent level of significance, but one country pair is "lost" when varying the sample (Müller-Kademann, 2016), re-establishing the randomness of the conclusion.
8 The advantage of using the median has also been confirmed by Müller-Kademann (2009) and Müller (2010).
9 If we are ready to make assumptions about the properties of the distribution function, we are able to provide statements about the probabilities by which a given price will be realised.
10 Malkiel (2003, p. 80) quotes the presence of irrational traders as the reason for any anomalies to occur: "As long as stock markets exist, the collective judgement of investors will sometimes make mistakes. Undoubtedly, some market participants are demonstrably less than rational. As a result, pricing irregularities and even predictable patterns in stock returns can appear over time and even persist for short periods."
11 By covertly investing we understand the investment of small sums at different platforms and similar tactics that help disguise the investor and the applied investment rule.
12 Tirole (2002) remarks on how easily he steps into the "traps" laid out by psychologists. He too is apparently not aware that he might simply employ du when dr was more appropriate.
13 In 1949, Mises already argued for "rationality" to be defined by actual individual action, though for different reasons (see the re-print: Mises, 1998 [1949]).
14 "Thursday" is a weekly reminder of this logic as it derives from the Nordic god of thunder, Thor.
15 For centuries, the empirical approach was not very popular with the Catholic Church, however. Therefore, empirical investigations were not popular either.
16 The discussion of Albert's (1965) approach largely draws on Kapeller (2013); an English version is available from Albert, Arnold and Maier-Rigaud (2012).
17 Smets and Wouters (2007) were quoted no less than 700 times during the first nine years after publication.
18 In economics, hypotheses are often called assumptions, while the term "hypothesis" is commonly reserved for a subset of all assumptions that is put to the test after deriving the statement of truth.
19 In Smets and Wouters (2007) the empirical analysis is presented in chapter 3.
20 Smets and Wouters (2007) derive their model based on data up to 2004 before conducting simple pseudo out-of-sample forecasting by re-estimating their derived model with samples ending before 2004. It seems that their DSGE model is relatively ill-suited for capturing the financial crisis when compared to "non-theoretical" alternatives.
21 Smets and Wouters (2003) conduct an informal test which builds on simulating observations with the deduced model and comparing these artificial data's properties to actual data. Next, artificial and actual data are each filtered with a vector autoregressive process of order three and the auto-cross-covariances are compared. However, no test level and no test hypotheses are offered.
22 Smets and Wouters (2003, 2007) apply Bayesian methods which obviously defy frequentist means of significance tests. Nevertheless, the Bayesian framework cannot be used as an excuse for severing the ties between model and reality.
23 The dominant approach in finance is no different in that respect; see 2.3.
24 Ours is not the first time that this gap has been "discovered". Lawson (1997), Frydman and Goldberg (2011) and Syll (2016), among others, offer detailed and insightful discussions as well as many examples.
25 Frydman and Goldberg (2011, p. 24) even challenge the view that Friedman really was unconcerned about assumptions.
26 Of course, existence of and movement towards equilibrium are two distinct problems. This distinction does not matter, however, for our discussion.
27 In Smets and Wouters (2007), the long-run is approached after approximately five years.
28 Malkiel (2003, p. 80) comes to a similar conclusion with respect to stock markets: "Periods such as 1999 where 'bubbles' seem to have existed, at least in certain sectors of the market, are fortunately the exception rather than the rule. Moreover, whatever patterns or irrationalities in the pricing of individual stocks that have been discovered in a search of historical experience are unlikely to persist and will not provide investors with a method to obtain extraordinary returns."
29 See Farmer (2013) for an elaboration of this example.
30 Romer (2016), for example, also quotes group thinking and defending one's (own and one's buddy's) research legacy.
31 Romer (2016) also acknowledges that the lack of alternatives serves as an argument for sticking to the established custom, but he also points out that the lack of such an alternative surely is insufficient for maintaining the wrong approach.
32 One might note that Keynes here goes beyond his 1921 "treatise on probability". In the treatise, uncertainty in our sense was not a category: "Of probability we can say no more than that it is a lower degree of rational belief than certainty; and we may say, if we like, that it deals with degrees of certainty" Keynes (1921, p. 15). Consequently, Keynes (1921, chap. VI) speaks of "weights of argument" with argument being defined as follows: "[...] whenever we pass to knowledge about one proposition by the contemplation of it in relation to another proposition of which we have knowledge – even when the process is unanalysable – I call it an argument" Keynes (1921, p. 13). The "weighting of argument" must therefore be regarded an early version of Bayesian averaging, while "state of confidence" cannot because it also refers to uncertainty. Maybe ironically, Friedman (1953c, p. 9) also speaks of confidence one attaches to a theory as long as contradicting evidence remains absent.
33 Or, to put it more cautiously, individuals with a low confidence in their arbitrary estimate will act less often upon it than subjects with a higher level of confidence.
34 The KOF Swiss Economic Institute spring 2015 investment survey prompted: "We consider the realisation of our investment plans for 2015 as ..." with the answering options "very certain", "fairly certain", "fairly uncertain" and "very uncertain", which can be read as an inquiry into the confidence in the respondent's expectation generating model.
35 Lawson (1997, chap. 17) makes a similar methodological proposal based on a different ontology yet rejects the use of econometrics.
36 Arguably, the causation most of the time runs from macro-equilibrium to "individual" decisions; see 2.2.1.
37 Lucas (1976) credited Tinbergen and Marschak with an early anticipation of his criticism (Lucas, 1976, p. 20, footnote 3).


38 Smets and Wouters (2003, p. 1151) argue that their model's performance is "very close to that of the best VAR models. This implies that the DSGE model does at least as good a job as the VAR models in predicting [. . . ] over the period 1980:2 to 1999:4." Smets and Wouters (2007, p. 596) claim superior (essentially, in-sample) forecast properties of their DSGE model, a finding that Edge and Gurkaynak (2010) fail to confirm in a truly ex-ante setting covering the financial crisis.
39 Of course, the fact that DSGE models operate with "representative agents" in order to escape the SMD curse is not helpful in that respect either.
40 Alternatively, one might say that the Lucas critique requires making the model choice part of the model of a rational agent.
41 Among the less-significant implications we would conclude that many economic journals would instantly be shut down because everything that could be said about macromodels would have already been said.
42 One way to afford models the opportunity to prove themselves wrong would be to subject them to rigorous ex-ante forecasting exercises (Müller and Köberl, 2012) as opposed to the more common pseudo ex-ante forecast comparisons.

3 Uncertainty in the economy

This chapter collects some examples of how uncertainty may re-shape thinking about relevant economic issues. It starts with analysing the roles of institutions. Institutions will be interpreted as means of coping with uncertainty. After that, the analysis will be applied to money as an institution and to fiscal policy.

3.1 Uncertainty and institutions

In a speech to the Association of Corporate Treasurers in Leeds on 14 September 2009, Andrew Haldane, Executive Director at the Bank of England, made the following point:

So how is trust, and thereby credit, to be restored? To date, the answer has been to rely on the one sector whose credit has not been seriously questioned – governments and central banks.

Governments and central banks are institutions, or collections of rules as we have defined earlier. It is instructive to consider the relationship between institutions such as government and uncertainty.

One way to look at institutions is to recognise their ability to bridge the gap in the decision making chain of reasoning. Recalling that this chain is broken due to the not-knowable item in the middle (option(i) → outcome(i) → utility(outcome(i))), it can be argued that the decision making problem can be mitigated by fixing either end of the chain. There are thus two ways in which institutions may facilitate decision making under uncertainty. They can either restrict (pre-determine) the actual choices of individuals, or they can determine the outcome of choices.1 In both cases, however, institutions' capability to resolve the problems of decision making under uncertainty is eventually limited due to the very origins of uncertainty.

Looking at institutions as means of restricting or pre-determining choices, one might consider legislation that requires employees and employers to contribute to government pension schemes. These forced contributions severely limit the choices of individuals to cater for old-age incomes. Sometimes, these laws even exclude the opportunity to opt out. One reason for implementing these restrictions is to help people to buy into pension schemes at all. The rational choice of an


appropriate solution is obviously fraught with a huge degree of uncertainty as it requires making predictions about future values and returns on assets, one's own income and much more. Restricting the available options therefore circumvents the problem of not being able to calculate various outcomes and their utilities.

Legislation can also induce norms for choices that might otherwise be subject to an infinite regress of expectations or not be soluble due to fundamental contradictions. As an example, consider the normative effect of minimum wages. In the absence of minimum wage regulations, companies with a large share of unskilled labour can find themselves engaged in a race to the bottom over wages. The optimal wage rate depends, among other things, on the wages of competitors, the desired service level and also on objectives such as corporate social responsibility. Thus, decision making may involve correctly guessing the competitors' and the unions' (if any) moves, which may end in the mentioned infinite-regress-of-expectations problem (see Fama, 1970, for an early mention), rendering optimal choice impossible. By imposing a minimum wage, the decision making problem is resolved. Of course, game theory could, in principle, also be used to offer a solution, but that would require all participants of the game to accept rather restrictive rules of the game, such as a finite and given set of strategies. Human creativity is at odds with limiting the set of strategies, however. Therefore, these restrictions essentially exclude uncertainty, which participants may not approve of. Game theory, therefore, does nothing but reduce the decision making problem from an issue of uncertainty to one of risk (or determinism), with deliberate ignorance of the potential differences in the results.

There are other institutions which serve the same purpose of restricting or pre-determining choices. Next to formal laws, bureaucracy and cultural habits can have the same effect.2 For example, cultural norms of dating and mating not only restrict the choices of matching across classes and races, they have moreover, traditionally at least, also served as tools for controlling population growth (Hajnal, 1965). Those norms to this day significantly determine the options of young adults. Another historical example is the medieval guild, which dominated the economies of central European cities for several centuries, especially in Germany, Switzerland and Austria. Guilds applied strict rules on the quality of output such as carpentry, sewing, rope making and the like. They also regulated access to these trades, effectively limiting supply and hence keeping up prices. Guilds thus limited uncertainty by ensuring a certain standard of the products and also by guaranteeing their members a reliable income.

Strictly speaking, the effective taming of uncertainty by pre-determining choice hinges on an important condition, however. It can only work if the outcome of the restricted decision is known with sufficient certainty. In other words, pre-determining choices is equivalent to excluding several alternative, unknown cause-effect relations by enforcing a known mechanism. Minimum wages are a case in point.

By imposing a minimum wage, the decision problem of employers can be resolved by excluding other potential choices. In that sense, the uncertainty surrounding the choice of the wage rate is indeed eliminated. At the same time, however, the eventual effect of a minimum wage policy on the economy at large remains unknown, at least to some extent. Only if we know the total and long-run effect of minimum wages can their imposition be regarded as a complete answer to uncertainty. In reality, however, there always remains a non-reducible residual uncertainty. This residual arises from the, in principle, unlimited creativity of humans in responding to the restriction. As a result, even the most sophisticated restriction on choices may in turn sow the seeds of adverse effects which may lead to the lifting of the restriction.

These two sides of institutional management of uncertainty also apply to the second principal method by which institutions cope with uncertainty, which is determining outcomes rather than choices. Although choosing may offer pleasure (or pain) and hence utility (or dis-utility), the ultimate focus of decision making rests with the ultimate outcomes and their importance to the individual. Uncertainty systematically undermines the ability to predict outcomes and hence to evaluate them. Again, institutions sometimes offer a way out by determining or creating the outcome of choices.

A prime example of an institution that determines outcomes is the welfare state. Its basic promise is to provide humane living conditions more or less independently of individual choice, or, more precisely, for a huge variety of choices. Individuals are thus spared the burden of having to ponder the implications of all actions within this range for their welfare. Under uncertainty, individuals are not even able to accurately establish the link between their choices and their eventual effect on later well-being. Therefore, rational people may very well decide to establish a welfare state that secures individuals' welfare under most circumstances.

The history of economics also knows a very important institution that was once tasked with providing a distinct outcome: Bretton Woods. Up until its break-up in the early 1970s, the Bretton Woods accord determined the rates at which participating currencies were exchanged. Ever since, the major currencies have fluctuated, and a huge literature has emerged that tries to pin down the determinants of these fluctuations as well as the "correct" exchange rates. It has been argued before that foreign exchange rates are subject to uncertainty. Therefore, Bretton Woods must be understood as an institution that served the purpose of facilitating choices of exchange rates by eliminating uncertainty from the decision making problem.

The Bretton Woods example also points to an important third property of institutions. Institutions do not only solve the problems of uncertainty by determining choices and outcomes, they also tend to breed uncertainty themselves.3 This tendency arises from the fact that taming uncertainty in one dimension may very well create or elevate uncertainty in other dimensions. Friedman (1953a) has shown, for example, that fixing the nominal exchange rates may create tensions in the international goods and services markets if changes in relative prices cannot be accommodated by adjustments of the exchange rate. Therefore, producers and consumers who benefited from fixed rates on the one


hand may have suffered its effects on the other. The larger these tensions become, the weaker the supporting institutions. The breaking-up or the amendment of the institutions eventually turns into a source of uncertainty, as the act of change and/or its outcome is not-knowable at large. Again, notwithstanding some predicted advantages of exchange rate flexibility, the general inconclusiveness about the true determinants of foreign exchange lends evidence to the notion that the deterministic rates have been replaced by not-knowable, or fundamentally uncertain, flexible rates.

The Bretton Woods experience also holds lessons for the euro currency. It is now well understood that the benefits of the common currency are not evenly shared among euro member states. While Germany, the Netherlands, Belgium, Luxembourg and to some extent also France have enjoyed rather healthy export growth due to the devaluation of their currency against the US dollar, for example, southern European countries such as Spain, Portugal, Italy and Greece face difficulties keeping up with the competition. As a result, the southern countries tend to experience trade deficits and surging public deficits as well as problems with unwinding hefty bad loans on the balance sheets of their banks. Taken together, these issues may already hatch the next financial crisis, which may eventually even lead to the destruction of the euro. Therefore, while the common currency on the one hand reduces the uncertainty involved in international exchange among its member states that would otherwise arise from flexible exchange rates, it also creates new uncertainties that threaten its very existence.

On a general level, institutions such as banking regulation may not only reduce or eradicate uncertainty, but they may also trigger unwanted or unintended adjustments by individuals. The outcomes of these reactions to institutional design may in turn be not-knowable in advance. Moreover, these consequences may become apparent with considerable delay or may be caused by changes in the environment elsewhere.4

Institutions are thus important enablers of decision making. They work by either restricting choices or outcomes, or a combination of both. While facilitating choice in the presence of uncertainty, institutions may also be sources of uncertainty. Therefore, the design of institutions requires considerable sophistication in order not to create something worse on the back of curing the bad. Governments are a case in point. Well-working governments afford dissenting opinions on the means of pursuing their goals without creating unrest and chaos. Absent such mechanisms, the confidence in the opportunity to achieve one's objectives will run very low, to the effect that no action may even be taken in the first place, while mounting dissatisfaction may lead to revolt and even more uncertainty in the longer run.

3.2 Money

in principle every unit can "create" money – the only problem for the creator being to get it "accepted"
Minsky (2008 [1986], p. 78f)

The textbook functions of money are means of exchange, unit of measurement and store of value. For most practical purposes this definition probably suffices. Ontologically, however, such a definition does not make much sense as the what and the how of a thing have to be kept separate. Therefore, "Money is what serves money functions" merely is an answer to how but not to what money actually is. Fortunately, uncertainty offers a convenient handle to also answer the second question.

In order to understand what money is, one should first notice that money is an institution. This is because it only works as a set of rules followed by humans.5 Without these rules about accepting and valuing money, there is no money. Second, money always solves the well-known problem of double coincidences in exchange. This problem arises because in exchange the offer of one party has to be met by a matching demand of another party at the same time. Put simply, if a producer of cotton wants to exchange cotton against milk, the cotton producer has, in principle, to find a farmer who has a surplus of milk and concurrently is in need of cotton. Money greatly simplifies this exchange by allowing the cotton producer to sell his product for money and find someone who accepts the money in exchange for milk without time pressure.6

The most interesting part of this exchange story rests with the fact that the cotton seller has enough confidence in the worth of the money that he accepts it as compensation for his hard-laboured product. Therefore, the key question is: why is he so confident? The answer to this question can be given by interpreting money in institutional terms. The institution that matters most for money is the one that justifies the confidence in accepting money as an intermediate means of exchange.

Historically, money has assumed many guises such as gold and precious metals in general, pearls, stones, cigarettes, coins and notes and so on. It is tempting to label money according to these phenomenological attributes as metallic money, paper money or commodity money. These labels do, however, veil the fact that the crucial difference between any types of money lies with the institutions that justify the confidence in using money in the first place.

Let us consider gold, for example. Gold seems to be a "natural" choice for money because it serves most money functions very well. However, underlying the "value" that gold is associated with is a cultural institution that assigns this value. Cultures across the globe hold gold in high regard and this respect, which is based on a culturally determined set of rules, is what qualifies gold as money. But what is this "quality"? The essential quality is that any owner of gold can trust in anyone else to also share in the high regard for gold. This property even goes as far as being able to rely on the "value" of gold even if one does not individually hold it in any regard. In fact, it is sufficient to believe, or better trust, in anybody else holding gold in high regard to also accept and use gold as money.

A situation in which there is trust in money must hence be distinguished from a situation in which there is none. For simplicity, let us assume that in the latter case, there is no money at all. The situation without trust in money is


obviously associated with considerable uncertainty with respect to the outcome of the production and exchange process. This uncertainty arises because it is very difficult to gauge the eventual proceeds (if any) from producing and selling one's goods due to the necessity of the double coincidence. In contrast, money dissolves the need for the double coincidence, which simplifies and eventually facilitates the exchange. However, when accepting money with the intent to satisfy their own needs, the money holder depends on the potential seller's confidence that the money will serve them too, or in other words, that those third parties also trust in money. We therefore finally return to the question of what makes individuals have this confidence, and repeat the answer: institutions.

The institutions that cater for the trust have to manage two competing tendencies. The first tendency is the spread of the use of money. Due to network externalities, the larger the number of users, the larger will be the benefit to each user. In that sense, money has a strong emergent feature because an individual benefits from using money due to these network externalities. Therefore, money has a tendency to expand in terms of users. At the same time, however, a huge network generates multiple possibilities for seignorage. Seignorage potentially arises whenever money is created because the creator of money does not necessarily have to offer a service or product equivalent to the nominal value of the money created (Schumpeter, 1912, p. 152). In contemporary monetary systems, commercial banks raise seignorage by granting credit at rates of interest in excess of their refinancing costs and risk considerations. Seignorage thus undermines the belief that accepted money will eventually be met with the expected satisfaction of needs. Excessive seignorage cannot but erode the trust in money. Therefore, the specific institutions that support a certain money must keep seignorage in check, which will be more difficult the larger the network.
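To make the bank-credit channel of seignorage tangible, the following back-of-the-envelope sketch nets refinancing costs and a provision for credit risk against the lending rate. All figures are purely hypothetical and are not taken from the text.

```python
# A minimal sketch with hypothetical numbers: seignorage from bank-created
# credit money, understood here as the lending margin net of refinancing costs
# and of a provision for credit risk.
credit_volume = 100_000_000   # newly granted credit, in currency units
lending_rate = 0.05           # interest charged to borrowers
refinancing_rate = 0.01       # cost of central bank or interbank funding
expected_loss_rate = 0.02     # provision for defaults (risk considerations)

seignorage = credit_volume * (lending_rate - refinancing_rate - expected_loss_rate)
print(f"Approximate annual seignorage: {seignorage:,.0f} currency units")
```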

Historically, institutions have managed the inherent detrimental incentives by, for example, excluding seignorage (e.g., "commodity money", Bitcoin) or by creating very sophisticated procedures that regulate and control every step of money creation, as we see it in today's monetary systems. With an increase in network size, however, it is not only the opportunities for seignorage that increase. Shocks to the economy will more likely affect a larger network, as will any rogue behaviour such as monopolistic practices, cheating or counterfeiting. These shocks impair the likelihood or capability to deliver goods and services in exchange for money and thus potentially undermine the trust in money. The institutions of the particular monetary system, therefore, have to address these challenges. In fact, any kind of money can be, and maybe should be, characterised by the institution that puts a brake on seignorage and generates confidence or trust in money at large.

It is probably fair to say that over the years and centuries there has been a tendency of these institutions to develop from rather archaic forms to more sophisticated ones. For example, commodity money such as salt or tea must be regarded as more archaic because the institutions on which it rests usually are basic human needs that are knowingly shared among its users. Minted coins or metal money, by contrast, derive their worth not only from the underlying cultural agreement about the value of gold but also from the meaning of the respective imprinting, often a ruler's head or coat of arms. No wonder, therefore, that after this ruler's death the precious metal coins were often hastily melted and newly minted with the successor's signet. The logic behind this rather cumbersome process was to re-assure people that the institution that creates and "guarantees" the worth of the money is functioning. If the value of coins had depended on the "intrinsic" value of the metal alone, the imprinting could not have affected it.7

In general, the move from archaic to more sophisticated institutions can be associated with a move from more primitive, or, rather, less convenient, forms of money to very handy versions. Probably the biggest leap yet has been the introduction of "paper" money, which made it possible to swap large values without the hassle of also having to swap large physical items such as loads of gold. The true challenge in introducing paper money was not, however, to replace gold with paper but to replace the confidence generating institution. The earliest attempts by the Bank of Sweden to introduce printed money in Europe in 1666 duly failed. This failure can be attributed to the flawed institutional set-up, which favoured reckless money printing. Without according restrictions it was impossible to instil trust in the worth of the new money. It took more than just one try and several amendments to the operating procedures of the Riksbank before it eventually succeeded. A similar development can currently be observed. Today, several so-called crypto currencies compete for acceptance as money. The Riksbank lesson has it that only those new moneys that manage to command a sufficient level of trust amongst all potential users have a chance of succeeding.

Returning to contemporary money, we usually refer to national currencies when discussing "money". In order to understand today's monetary systems we therefore have to understand what institutions generate trust in the money that is used. Generally speaking, the trust generating institution is an amalgam of all national institutions that make people trust in each other's acceptance of money. Obviously, there is a plethora of such institutions. Economists would usually first name the central banks that are tasked with managing the monetary system. But since central banks cannot possibly serve as the satisfier of last resort, that is, they cannot accept money in return for actual needs satisfaction, the degree to which people trust in their money also depends on the production potential of the country. This production potential is, in turn, determined by the business sector, the trade unions, the government, parliament and so on.

The evolution of money from archaic to ever more sophisticated institutions was not continuous, however. Whenever catastrophe such as war or government collapse struck, societies fell back to less-sophisticated forms. Cigarette money, for example, emerged in the WWII prison camps of the Western Allied armies


and regional surrogate money such as the famous "Wörgl" community currency was introduced as a measure to cope with the depression of the 1930s when government "paper money" became unavailable (Schwarz, 2008). These examples demonstrate an interesting relationship between the level of sophistication of money and the universality with which it can be used. In principle, the more archaic the institution that supports money, the more universal its use. The archaic, culturally determined convention that gold is valuable gives gold its universal appeal. Likewise, commodity money such as salt and cigarettes also tends to command universal recognition because of its immediate usefulness, which generates the trust that is required for these commodities to serve as money. Therefore, there is a tendency for sophistication and universality of money to be inversely related, as is shown in Figure 3.1. If acceptance is the main issue, as in the case of cigarette money, ease of use and other sophisticated properties move into the background. More universal acceptance is achieved by more archaic or basic rules (institutions) that prevail among the members of society. The same mechanism leads to the rise in gold prices whenever world-wide uncertainty is on the rise. The regard for gold seems to be so hard-wired across time, space and cultures that this regard serves as the trust generating mechanism that makes it universally acceptable.

Figure 3.1 The money trade-off (axes: universality versus sophistication)

Once it is recognised that money depends on its underlying confidence generating institution(s), a number of relevant conclusions can be drawn. First, we can now provide a one-sentence answer to the question of what money is: money is trust. As has been seen before, money mitigates the uncertainty which arises from the double-coincidence restriction. It therefore seems most appropriate to characterise money by the semantic opposite of uncertainty, the quality of human relationships that mutual exchange requires: trust. For all practical matters that let people expect money to be "something", money may hence alternatively and more simply be defined as anything that is trusted to be accepted by third parties in voluntary exchange. When using this more pragmatic definition instead of its fundamental version, one should not forget, however, that the matter with which money is associated, be it physical, electronic or otherwise, can at best veil the underlying abstract substance of money, which is trust.

Second, inflation can now be understood as a phenomenon that has its roots in the deterioration of trust. Trust can be diminished by pretty much everything that shakes the institutions which generate confidence in money. Therefore, government crises as well as strikes or the abuse of monopoly power can result in inflation. Printing too much money also has the potential to destroy confidence in it. In contrast to the monetarist view, however, money "supply" is just one factor out of many that affects the trust one has in the relevant institutions. Since money is created as an answer to the double coincidence requirement, it is endogenous in principle. If outside forces aim at restricting money creation, one can be quite confident that economic agents will find their way around any obstacles.

Consequently, the often-quoted "proofs" of the historical linkage between money growth and inflation demand a closer inspection. Gregory Mankiw's textbook, for example, authoritatively tells its undergraduate students that in "almost all cases of high or persistent inflation, the culprit turns out to be the same – growth in the quantity of money" (Mankiw, 2014, p. 11). He even promotes this link to the ninth out of ten principles of economics. Although this accusation of money growth as the main source of inflation is probably very widely accepted, it is nevertheless insightful to study it in more detail.

To begin, let us take stock of how Mankiw supports this ninth principle. The first three versions of the textbook, namely the first edition, the special edition (which deals with the financial crisis) and the second edition, substantiate the principle by the hyperinflation examples of Germany, Hungary, Poland and Austria in the 1920s. Referring to an article by Sargent (1982), he "proves" the principle by means of illustration (Mankiw, 2011a, p. 649f). A closer inspection of Sargent's (1982) paper shows, however, that Sargent stresses that all four countries "ran enormous budget deficits" (Sargent, 1982, p. 43). Their currencies were not "backed" by the gold standard but "by the commitment of the government to levy taxes in sufficient amounts, given its expenditures, to make good on its debt" (Sargent, 1982, p. 45). Importantly, all four countries found themselves on the losers' side of World War I, which severely impaired their opportunities "to make good" on their debts.8 Germany, moreover, experienced a revolution after the war and its anti-revolutionary social democratic government "reached accommodations with centers of military and industrial power of the pre-war regime. These accommodations in effect undermined the


willingness and capability of the government to meet its admittedly staggering revenue needs through explicit taxation", Sargent (1982, p. 73) reports. The final blow to the German government's illusion of fiscal sobriety was dealt by France when it occupied the Ruhr in January 1923. In response, the German government tried to support passive resistance by "making direct payments to striking workers which were financed by discounting treasury bills with the Reichsbank" (Sargent, 1982, p. 73). At that time, the German Reichsbank was not independent and the government resorted to its central bank for financing its debt by issuing more money. However, due to the apparent impossibility "to make good on its debt", people naturally lost confidence in the money. With newly printed money being ever less trustworthy, the government had to compensate by issuing even more of it. In addition, and in striking contrast to Mankiw's ninth principle, Sargent (1982, p. 73) asserts that after "World War I, Germany owed staggering reparations to the Allied countries. This fact dominated Germany's public finance from 1919 until 1923 and was a most important force for hyperinflation" (emphasis added).

In other words, Sargent (1982) does not consider money growth the main culprit behind the German hyperinflation that Mankiw quotes in support of the general principle that money growth causes inflation. Rather, money growth must be viewed as a result of inflation: the loss of trust in government finances triggered the rise in inflation, which in turn caused the money stock to increase. The government's demand for money rocketed as inflation took off. This causality is also reflected in Sargent's (1982) statistics. While the price level started to double every month as early as July 1922, notes in circulation and treasury bills grew by only 12 per cent at that time and reached 76 per cent (treasury bills) in December 1922. Throughout the hyperinflation period, the rate of inflation not only accelerated way before the rate of money expansion, it also always exceeded money growth by a factor of roughly five (Sargent, 1982, p. 82). Thus, the causal order of how hyperinflation emerged is also borne out by the empirical facts.

This observation is perfectly understandable when considering the earlier statement that it is those who accept money who decide about its worth. Sargent's examples show that without the prospect of the money ("unbacked or backed only by treasury bills", Sargent, 1982, p. 89f) being honoured by the government through taxation, there was no reason to believe that prospective business partners would accept it as a means of payment. Therefore, and in contrast to Sargent's claims, one has to concede that it was neither "the growth of fiat currency" nor the "increasing quantity of central bank notes" (alone) which caused inflation. Rather, the inflation was caused by the fast deterioration of the trust in money due to the apparent inability of the government to match its promises, manifested as newly issued money, with future revenues. It was this bleak prospect about the usefulness of the money which eventually created inflation.
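The gap between the two growth rates quoted from Sargent is easy to quantify. The following back-of-the-envelope sketch simply compounds the monthly factors mentioned above over twelve months; it is an illustrative calculation, not one taken from the original text.

```python
# A back-of-the-envelope sketch using the monthly figures quoted above from
# Sargent (1982): prices doubling each month versus money growing by 12 per cent
# a month, compounded over twelve months.
price_factor_per_month = 2.00    # price level doubles every month
money_factor_per_month = 1.12    # notes and treasury bills grow by 12 per cent

price_factor_per_year = price_factor_per_month ** 12   # about 4096-fold
money_factor_per_year = money_factor_per_month ** 12   # about 3.9-fold

print(f"Prices over a year: x{price_factor_per_year:,.0f}")
print(f"Money over a year:  x{money_factor_per_year:.1f}")
```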

Eventually, Mankiw changed the textbook exposition of the money-inflation relation. Starting with the third edition of his textbook, he dropped the examples of the central European countries in favour of mentioning the case of Zimbabwe in passing and without elaboration (Mankiw, 2014, p. 11). Based on a now purely theoretical analysis, he does nevertheless maintain the fundamental statement that money growth causes inflation (Mankiw, 2014, pp. 11, 587). By contrast, according to the lessons from Sargent's (1982) examples as well as from considerations of principle, money as trust implies that the initial trigger for inflation will always be some impairment of the belief in the respective money's ability to satisfy needs. And in fact, wars, corruption and grave economic mismanagement usually cripple inflation-ridden economies first. The growth in the money stock hence is a response to these ills and must be regarded as an attempt to create more money as compensation for the loss of trust in the existing stock. Printing more money, however, is a sure means of destroying trust even further. Therefore, the excessive creation of money reinforces but does not cause an ongoing inflation.

This insight has several additional practical implications. In practice, adverse effects on the trust in money can sometimes be neutralised. This is why unsound fiscal policy that undermines trust may be offset by monetary tightening and vice versa. For the same reason, however, the expansionary monetary policies by the central banks of Japan, the euro area and the Federal Reserve have as yet (2018) not been able to create considerable inflation. The excess supply of money has not yet been able to destroy the trust in the institutions that generate the confidence in the respective currencies. Putting internal inflation aside, the relative values of these currencies have changed considerably, however. The noticeable devaluation of the euro currency (until mid-2017) thus simply reflects the lesser degree to which people trust in the European institutions as compared to their US (or Swiss) counterparts.

Looking at the other end of the inflation spectrum, the cases of Venezuela, Egypt or Zimbabwe may be quoted. In these instances, the national institutions are held in such low regard that they have lost their ability to generate confidence in their respective countries' currencies. Consequently, inflation has accelerated. In these cases the triggers for the loss of confidence are easy to recognise. All three countries are suffering from autocratic regimes with rampant corruption and ailing economies. It may be interesting to note that the Central Bank of Egypt, in particular, did not do a bad job according to monetarist definitions; money supply growth was kept in check and interest rates were hiked in response to rising inflation. However, as has been argued before, instilling trust in money is not entirely in the hands of a single institution.

Third, owing to the fact that confidence has so many determinants which are difficult to recognise, while the effects of the loss in confidence can so easily be observed, it follows that a central bank with the task of keeping inflation in check must also keep an eye on the performance of all the other institutions that hold sway over this confidence. For example, central banks cannot restrict themselves to setting interest rates or targeting inflation or the money base. They must also try to influence commercial banks' stability and the stability of the financial sector at large. They also have


to restrict reckless lending and care for a decent performance of the economy as a whole. This is obviously a very delicate exercise with many trade-offs and pitfalls. For example, tightening the money supply by raising interest rates may indeed also raise the level of confidence in the central bank as the guardian of the currency. But it may at the same time also restrict the creation of money by curbing credit growth. With less credit issued there will be fewer bad loans, and with fewer bad loans there will be fewer write-offs; therefore, money holders will have more confidence in seeing their needs eventually satisfied. The potential downside of interest rate hikes is also easy to spot. If central banks tighten policy too strongly, the economy may even shrink to an extent that puts its potential to deliver goods and services for needs satisfaction in doubt.

However far-fetched this example may be, another implication certainly appeals to intuition. Suppose that the confidence in the currency is shaken by monopoly or oligopolistic price setting and assume that the central bank has no way of addressing this problem. Inflation which arises from these firms’ behaviour should be fought by increased competition, but a central bank will usually not be able to alter the degree of competition. Therefore, it will only be able to fight inflation in this situation if it can enhance the confidence in any of the relevant institutions. If, for example, agents believe that the increase in inflation rates is due to a weakness of the central bank itself, showing strength by tightening monetary policy may help. In fact, any means of confidence enhancement, including pure communication (without acting), may be useful. On the other hand, if the scope for further raising confidence is small, central bank policy will be ineffective with respect to inflation control.

Applying this insight to actual central bank policy, it becomes important to realistically assess the true capabilities and the true limits of central banks. Central banks cannot and should not be expected to manage inflation alone. If they do a good job by supporting the trust in the institutions, including themselves, that generate confidence in the currency, then that is as far as they can go. It cannot be expected that they also fix all the problems arising elsewhere and, therefore, although they should be outspoken on the sources of inflation irrespective of what they are, they must also clearly communicate their own limits. If they fail to do so, central banks’ incapability to control all inflation triggers will inevitably damage their reputation and become a source of harm.

To conclude, money is much more than the sum of its functions. Money is, first of all, trust, where “trust” is shorthand for the confidence agents have in the institutions that support the use of money. The value of money, therefore, varies with this confidence. It follows that inflation, in general, is a sign of diminished confidence and that there cannot be a mechanical inflation trigger such as printing money. Inflation is, instead, caused by institutional flaws, and central banks must see to these flaws if they want to maintain the value of money. The conduct of monetary policy with uncertainty in mind thus becomes a much more challenging task than “just” following rules, as advocated by the

monetarist school, for example. Following rules may help, but as a recipe it understates the complexity of inflation dynamics to a degree matched only by thinking of money as the sum of its functions.

3.3 Fiscal policy and Keynes’s battles: lost and won

Reflecting on the International Monetary Fund’s role in the efforts to pull Greece away from the brink of bankruptcy, Charles Wyplosz mentioned in a CEPR note three battles fought and lost by Keynes (Wyplosz, 2017). The first two were the battles against German reparations following World War I and the issue of symmetry in the Bretton Woods system. The third, posthumous, loss would be Greece, Wyplosz writes. In light of the real business cycle revolution many economists would be inclined to add yet another defeat: fiscal stimulus as a policy option for fighting recessions. Maybe surprisingly, uncertainty does not alter this assessment. Under uncertainty, however, the defeat comes about for completely different reasons.

In Keynes’s view the most important source of recessions is a slump in investment. The mechanism is easy to understand by the example of the – now also empirically established – liquidity trap. Absent uncertainty, interest rates determine the marginal rate of profit necessary for stimulating investment. Therefore, by controlling the interest rates, governments (central banks) can steer the business cycle. Keynes suggested, however, that an economy can reach a point where lowering interest rates would fail to trigger investment decisions. This point is reached when economic agents do not believe that they can turn a profit on the money spent. In other words, at this point investors lack confidence in the future of the economy. If this lack of confidence is a reflection of uncertainty about the future path of business at large, then suitable institutions may step in and take over the investment decisions. A government might decide to increase spending, for example. Government intervention may succeed in curbing uncertainty even if the spending is on digging holes in the ground and filling them in again, Keynes argued.⁹

In the wake of Keynes’s analysis, governments all over the world enthusiastically embraced measures to fine-tune the business cycle. Germany’s Stability and Growth Act of 1967¹⁰ probably was amongst the most elaborate and detailed attempts to translate Keynes’s fiscal stimulus concept into actual economic policy conduct.

Unfortunately, Keynes’s recipe is ultimately self-defeating. The key issue is that fiscal stimulus can only work if it helps to curb uncertainty. Consequently, if the policy is successful then uncertainty is removed from the economy altogether. Therefore, in Figure 2.1, decision making in the economy would move from the outskirts towards the origin. Upon arriving in the space of risk and ambiguity, decisions become accessible to standard real-business-cycle analysis and the Lucas critique catches up with full force; human decisions become predictable and rational expectations models provide correct forecasts.


The model implications are well known. Taxpayers expect to have to pay for the stimulus some time in the future, and producers as well as wage earners reckon that the increase in demand is just temporary. Depending on how the short-run dynamics are modelled, the ultimate effect of the fiscal stimulus will be either no increase or only a transitory increase in output and in the price level.

Of course, eradicating uncertainty in its entirety must be regarded as an extreme case. Fiscal stimulus will, therefore, always retain appeal and effectiveness to some residual degree. Nevertheless, caution must be applied. First, according to Keynes’s original argument fiscal stimulus must only be used in response to genuine uncertainty. If the economy suffers instead from market frictions, monopolistic practices or other sorts of knowable imperfections, these problems cannot be fixed by fighting uncertainty. Therefore, a timely identification of the reasons for a downturn is imperative for choosing the appropriate remedy. Second, any fully automatic stimulus that eliminates uncertainty is bound to miss its target because it gives rise to opportunistic behaviour.¹¹ A government, or rather, society at large should instead focus on building and maintaining strong, or rather “good”, institutions that help reduce uncertainty to the warranted minimum. This minimum is given by the appeal of (disruptive) innovations and the desire for a continuous improvement of our living conditions – in other words, by the major source of uncertainty: human creativity.

Uncertainty thus gives rise to a peculiar Keynesian-neoclassical symbiosis. Its neoclassical part is what is commonly called supply-side policies, to the extent that supply-side policies are concerned with the design of institutions that manage the knowable knowns and unknowns by enhancing factor mobility, fostering innovations, curbing market power and other tools which grease the wheels of markets. These policies must be complemented by institutions that manage the not-knowable unknowns. These institutions include mechanisms for identifying and responding to uncertainty shocks. As has been stressed before, these responses must target uncertainty in the economy. Fiscal stimulus may be one answer; bold promises such as German chancellor Angela Merkel’s guarantee for sight deposits at commercial banks, or Mario Draghi’s “whatever it takes” (Draghi, 2012), and, ultimately, quantitative easing have been others in the recent past. Of course, actions of this kind can be credibly taken only by strong actors. Rational people may, therefore, prefer not to rely only on free markets but task governments and their agencies with taming uncertainty. This said, the distinction between managing the not-knowable and the knowable will continue to shift over time, just as mankind now understands the laws of thunder. Economists will thus find huge uncharted territory to direct their research efforts to.

Likewise, managing the knowable knowns and unknowns will every so often create tensions with the institutions that take on uncertainty. For example, when in the 1980s and 1990s markets started to get “liberalised” in many Western economies, this move opened the space for game-changing innovations. One such innovation certainly was mobile telecommunications, while another was financial leveraging. While in the former case human ingenuity

eventually led to the very much welcomed, productivity-enhancing smartphone craze, in the latter case it created the market for unconventional financial instruments such as re-packaged sub-prime loans. Both instances clearly are consequences of the mechanisms that generate uncertainty, but only one of them obviously led to a severe economic crisis. The financial crisis of 2008 eventually ensued not because financial instruments spun out of control but because the abuse of these instruments triggered a loss of confidence in key economic institutions such as commercial banks and the financial sector as a whole. Once uncertainty started to spread like wildfire, it took a well-suited institution to contain the danger.

The interplay of escalating uncertainty on the one hand and trust re-building on the other is nicely described by the following summary of the events following the collapse of Lehman Brothers in September 2008 delivered by Paul Kanjorski (Democrat congressman for Pennsylvania, US), who repeats an account given to him by US Treasury Secretary Henry Paulson and Federal Reserve Chairman Ben Bernanke:

We were having an electronic run on the banks. [...] They [the Federal Reserve system and the US Treasury] decided to [...] close down the money accounts and announce a guarantee of $250,000 per account so there wouldn’t be further panic. If they had not done that; [...this] would have collapsed the entire economy of the US, and within 24 hours the world economy would have collapsed. It would have been the end of our economic system and our political system as we know it.
Skidelsky (2009, pp. 9–10)

One way to interpret the financial crisis from the perspective of a Keynesian-neoclassical symbiosis is this: from a supply-side perspective, it must be conceded that the underlying market design was apparently flawed because it did not ensure the self-correction of excesses. However, the engine that forced the process was a very welcome part of the design. This engine is human creativity and hence uncertainty. From a Keynesian perspective, the outburst of the undesirable crisis must be considered an unavoidable consequence of desirable uncertainty. Therefore, in order to fight the crisis, its underlying cause, uncertainty, had to be fought. According to Kanjorski’s account, the US Treasury and the US Federal Reserve managed to do so successfully.

The symbiosis offers three relevant policy conclusions and a conjecture. First, crises are unavoidable as long as human creativity is part of the economic process. Second, a careful market design (supply-side policies) must make sure that crises may result only as surprises; i.e., there must not be a known mechanism that generates crises endogenously. Third, there must be strong institutions that are suited for containing the spread of uncertainty once a crisis erupts.

All three conclusions are largely straightforward and also applicable to institutional design in the economy at large. The first follows from the main assumption of this book. If it was possible to reduce uncertainty to risk or


ambiguity, the whole future would become predictable and insurable, and crises would occur with knowable probabilities against which societies could insure. Crisis management would turn into a discussion about insurance premiums. With unknowable states, as under uncertainty, this is not possible.

The second and third conclusions are related. Assume, first, that a suitable institution like a government or state has been established for fighting a crisis, should it erupt. Assume next, however, that the market design systematically, that is, in a knowable way, generates crises. In such a situation, crises would neither come as a surprise nor would they be based on genuine uncertainty. Quite to the contrary, systematic crises would give rise to opportunistic behaviour because the crisis itself and its character would be known in advance, and so would be the expected institutionalised response. Opportunistic behaviour implies that a crisis could be triggered in such a way that the crisis response unduly benefits those who pull the trigger. Crises may thus become endogenous to the design of markets, whereas Keynes assumed that crises are the result of endemic uncertainty which is exogenous to institutional design. In the case of the last crisis, opportunistic behaviour could be observed in the sense that the balance sheet risks of commercial banks were offloaded to the governments and central banks and hence to society at large, while the dividends were pocketed by the shareholders. Opportunistic behaviour also implies that the institutions tasked with cushioning genuine uncertainty shocks will be systematically undermined because in the run-up to the predictable crisis the scale of the implied predictable response will be inflated due to moral hazard.

Supply-side policies must hence ensure that opportunistic behaviour is reduced to a minimum. If opportunism can be successfully contained, supply-side policies will become structurally independent from their uncertainty-fighting counterpart institutions. If they were not independent, opportunistic behaviour could arise because a link between market design and policy response to uncertainty shocks would exist. From this conjecture it follows, for example, that agencies that aim at bringing unemployed labour back to the market should not be the same as those that offer support to people who become unemployed due to uncertainty shocks. Uncertainty shocks are unpredictable and non-stochastic events. Therefore, fighting the underlying uncertainty cannot be part of the policy that manages uncertainty’s driving forces because – as we have argued before – it would eradicate uncertainty altogether, or rather suppress it until it erupts in a potentially devastating manner.¹² In the labour market example, the implication simply is that employment offices should not at the same time be responsible for (re-)training the unemployed and for distributing unemployment benefits because, if the training (a supply-side policy) is successful, no payments are necessary. If, however, uncertainty shocks occur, training cannot be a remedy. In other words, both policies are structurally independent of each other and any artificial linkage will give rise to opportunistic behaviour such as trading in benefits for training without solving any problem.

The toolkit for successful economic policy under uncertainty can therefore best be described as a two-edged sword. One side concerns the management of the economy’s driving force, human creativity, as well as the unavoidable uncertainty it carries. To that aim, mechanisms must be designed that reap human ingenuity’s fruits while keeping its excesses in check. By definition, crises will nevertheless occur time and again because of the desired but occasionally destructive uncertainty. If uncertainty strikes, the economy must provide institutions that are suited to taming it. This other side of the blade cannot but be independent of the first while being at the same time its indispensable complement.

Keynes is thus spared a fourth defeat. Keynesians, however, must bid farewell to any kind of automatic fiscal stimulus. The Keynesian-neoclassical symbiosis further demands that supply-side policies must be implemented if the blessings of human creativity are to be enjoyed. Finally, fiscal stimulus or any other policy that takes aim at uncertainty in times of crisis must be kept independent of the complementary supply-side policies.

Notes

1 Minsky (2008 [1986], chap. 13) offers an example of how governments and central banks can make the economy more robust by a combination of crisis prevention and crisis resolution, which can be read as an application of this general principle.
2 With reference to Max Weber and Werner Sombart, Böhle and Busch (2012, p. 20) point out the role of bureaucratic organisations as institutions that create predictability.
3 See also Dequech (2011, p. 634) for this double-edged property of capitalist institutions.
4 Section 3.3 below discusses an example of a self-defeating institutional design.
5 In the words of Aristoteles: “To others, in turn, money is a nonsense and a pure legal fiction, in no way given by nature” (Aristoteles, 1971, p. 80) and further: “So it is due to an agreement that money has become a substitute of the need” (Aristoteles, 1951, p. 164) (author’s translations from German to English).
6 Aristoteles (1951, p. 164) observes: “For a future exchange [...], money is, in a manner of speaking, a bailer [...]” (author’s translation from German to English).
7 People even tend to accept a certain degree of seignorage in exchange for the guarantees of the institution for the trust in the money, i.e., the guarantee that third parties will accept the money in exchange for actual needs satisfaction.
8 Strictly speaking, Poland did not lose the war as it did not exist as a nation state during the war. Polish soldiers’ lives and material resources were spent by the losers: Germany, Russia and Austria.
9 Minsky (2008 [1986], p. 315) puts forth a similar argument, though his emphasis is on confirming an investor’s bet on future income streams by government spending.
10 Gesetz zur Förderung der Stabilität und des Wachstums der Wirtschaft, Bundesgesetzblatt Nr. 32, S. 582–9, 8.6.1967, Jahrgang 1967, Teil 1.
11 Minsky (1982, p. 153) was also cautious with respect to too much intervention: “If capitalism reacts to past success by trying to explode, it may be that the only effective way to stabilize the system, short of direct investment controls, is to allow minor financial crises to occur from time to time.”
12 One may take note that the course of affairs in dictatorships is largely predictable up until a revolution or coup d’état sweeps through.

4 The empirics of uncertainty

. . . a scientist must also be absolutely like a child. If he sees a thing, he must say he sees it, whether it was what he thought he was going to see or not. See first, think later, then test. But always see first. Otherwise you will only see what you were expecting. Wonko the Sane

We have so far argued that uncertainty prevails in the economy and hence has to play a main tune in economics. In this chapter we develop a statistical test procedure that allows discrimination between decision making under risk or ambiguity and under uncertainty. We focus our attention on financial markets because financial markets are supposed to be close to the ideal market with very small frictions, minimum transaction costs and a high degree of liquidity. Furthermore, the basic theories about rational behaviour and efficient markets have also been put forward with financial markets in mind (Fama, 1970).

In order to statistically test for uncertainty we have to get a handle on the fact that uncertainty cannot be “measured” with the usual statistical toolbox (see section 2.4.3). More precisely, under uncertainty, the statistical moments of a probability distribution cannot be obtained because uncertainty implies that a probability distribution does not exist in the first place. The key test idea hence rests on the fact that while under uncertainty we cannot “measure” uncertainty, under risk, ambiguity and determinism we can. At the same time, uncertainty is the complement of risk and ambiguity when separating the whole space of methods of deduction according to the taxonomy of uncertainty (see Figure 1.1 on p. 12). We can thus divide this space into a null hypothesis that maintains risk and ambiguity and an alternative hypothesis under which uncertainty prevails. It is, therefore, possible to meaningfully estimate statistical moments of our variables of interest under the null hypothesis and check whether or not we can confirm the properties implied by the null. If we can show that those properties do not hold, we have to reject the null and accept the alternative, which is uncertainty.

The testing strategy is implemented by the following study design. First, we develop a model of decision making in financial markets under uncertainty. This model allows us to characterise the second moment of an empirical distribution function. Next we derive the characteristics of empirical distribution functions under the assumption of risk and ambiguity. It will then be demonstrated

that the latter is a restricted version of the former. We can thus derive key properties of the second empirical moment under the assumption that a statistical distribution function exists by assuming that financial markets operate under risk and ambiguity. Under the alternative, no such distribution function exists and hence the empirical properties must mirror the properties derived with the model of decision making in financial markets under uncertainty. If that is the case, we reject the null and accept the alternative: uncertainty.

4.1 A model for asset pricing under uncertainty

4.1.1 The investor’s objective function

Let us assume an asset market that is characterised by infinite liquidity from an individual’s point of view. Infinite liquidity might be justified by noting that a single investor is always small compared to the total, or by supposing that credit markets work perfectly in the sense that a convincing investment idea will always meet sufficient means of financing. Second, let us further assume a very large asset market such that there are always enough assets available for selling or buying. For example, foreign exchange and stocks of large multinational companies would fall into this category. As we are only considering professional trading, each investor may switch her role between buyer and seller of the asset at any point in time.

The investor at the asset market is assumed to act as an inter-temporal arbitrageuse. She buys (sells) if she thinks the asset price in the future will be higher (lower) than today’s: $p_t < p_{t+1}$ ($p_t > p_{t+1}$), and the sum invested, $x_t$, is either positive or negative. Thus, the investor’s sole objective is to make profit, which probably characterises today’s financial markets pretty well. Putting this approach in perspective, one might remember the seven reasons for trading foreign exchange given by Friedman (1953a). Only one of them (the seventh) was speculation. All other motives Friedman considered are related to some “real” economic activity such as raising the means for cross-border goods trade. Friedman goes on to explain the price mechanisms for all reasons except the last one. Let us therefore look at number seven.

Unfortunately, at time t, only $p_t$ is known while $p_{t+1}$ is not. The investor can, however, consider a certain range of values for $p_{t+1}$, and base her decision on the perceived properties, denoted $M_t(p_{t+1} \mid I_t)$, of this range, which is determined by experience, preferences and many other factors she thinks worthwhile. These factors are summarised by $I_t$. In a standard setting the investor would, for example, attach to each possible future price a probability, giving rise to a probability distribution function $M_t(p_{t+1} \mid I_t)$ with $I_t$ representing all the information available at t. Alternatively, $M_t(p_{t+1} \mid I_t)$ may be looked at as an individual’s favourite pricing model. As has been stressed above, the evaluation function $M_t(p_{t+1} \mid I_t)$ need not be a probability distribution function or a pricing model, however. It simply reflects any form of evaluation of potential future prices given the available information.


The investment decision is made about the amount of $x_t$ to invest. Obviously, there are three options for the investor. She can either buy, sell or do nothing. In order to make progress the last option is not considered; it could, however, analytically be included by noting that inaction may incur opportunity costs.¹ There are thus two possibilities left. If she sells the asset and tomorrow’s price is lower than today’s then she will have made a profit; otherwise she loses money, and vice versa.

4.1.2 The investment rule

The individual rule is to invest if there is a difference between a subjectively predetermined value, $m_{I_t}(p_{t+1})$, and the spot price. We will call this value the individual reservation price. For example, if the investor formulates a probability distribution over $p$ then this value might correspond to a certain quantile of $M_t(p_{t+1} \mid I_t)$.² Thus, the general investment rule can be given as follows (see also the sketch after the list):

1. Asset supply: $x_t^+ = x_t$ if $p_t > m_{I_t}(p_{t+1})$.
2. Asset demand: $x_t^- = -x_t$ if $p_t \le m_{I_t}(p_{t+1})$.
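In code the rule is little more than a sign switch. The following minimal sketch assumes that the order size is given exogenously; the function and variable names are illustrative and not taken from the text.

```python
def order(spot_price: float, reservation_price: float, x: float) -> float:
    """Signed position of one investor under the general investment rule.

    A positive return value is asset supply (a sale), a negative one is
    asset demand (a purchase).
    """
    if spot_price > reservation_price:   # p_t > m_It(p_{t+1}): supply
        return +x
    return -x                            # p_t <= m_It(p_{t+1}): demand
```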

In general, investing $x_t$ will change $I_t$ and, therefore, also $M_t(p_{t+1} \mid I_t)$ as well as $m_{I_t}(p_{t+1})$. To keep matters simple, we will abstract from these complications and refer the interested reader to Müller-Kademann (2008), who offers a formal treatment of the feedback mechanism as well as arguments for determining the individual investments.

4.1.3 The median investor

Let us now assume a finite number of distinct investors, $j = 1, 2, \ldots, J$, who individually form beliefs about the future asset price. They offer and demand the asset according to the aforementioned rule. Demand and supply coincide under the following condition.

DEFINITION 8 (Market clearing and equilibrium price). The market clears if
$$\sum_{i=1}^{J} x_{t,i} = 0.$$
The market is in equilibrium if
$$p_t \le p_t^+ = \min\{m_{I_t}^{(1)}(p_{t+1}), m_{I_t}^{(2)}(p_{t+1}), \ldots, m_{I_t}^{(i)}(p_{t+1}), \ldots\} \quad \forall i = 1, 2, \ldots, J^+$$
and
$$p_t \ge p_t^- = \max\{m_{I_t}^{(1)}(p_{t+1}), m_{I_t}^{(2)}(p_{t+1}), \ldots, m_{I_t}^{(j)}(p_{t+1}), \ldots\} \quad \forall j = 1, 2, \ldots, J^-,$$
where $J^+$ and $J^-$ count the suppliers and the buyers of the asset respectively. Any price $p_t$, $p_t^- \le p_t \le p_t^+$, is an equilibrium price.

The market clearing condition simply says that the supply of the asset has to equal its demand. The price will be in equilibrium if all those who see a profit opportunity can realise it such that no profit opportunities are left. If both conditions are simultaneously satisfied we are able to pin down the equilibrium price as follows: we order all sets $\{m_{I_t}^{(i)}(p_{t+1}), x_{t,i}\}$, $i = 1, 2, 3, \ldots, J$, from smallest to largest $m_{I_t}^{(i)}(p_{t+1})$. We then obtain the market price, $p_{t|J}$, as $p_{t|J} = m_{I_t}^{(*)}(p_{t+1} \mid J)$, where $m_{I_t}^{(*)}(p_{t+1} \mid J)$ corresponds to the median $x_{t,*}$ of the ordered sequence. Choosing the median of all quantities supplied and demanded ensures that the market clears. The reservation price of this “median investor” establishes the equilibrium price. In order to signal that the market clearing price has been defined for a given number of $J$ distinct investors we add “$\mid J$” to the market price notation. Due to the fact that the median investor determines the equilibrium price, we interchangeably refer to the model as the subjective asset pricing model, or the “median model”.

A few remarks about this market solution are in order. First, the equilibrium thus described essentially clears the market. In this simple market game we do not strive for stability of the market solution but simply seek a way of describing a market clearing mechanism which is based on individual profit objectives. Once the market clears, the game is over and the next period starts with a new cohort of investors, and so on. Below, we will analyse the market solution’s response to a change in the number of investors in order to test the objectivity of the market price.

Second, if we imagine that yet another distinct investor arrives at the market at t, this investor will tip the balance between buyers and sellers and hence a new market price will be obtained, because the median investor with $J$ investors cannot be the same as with $J+1$ investors. If we continue to add more investors it may well be that buyers and sellers arrive with equal probabilities, similar investment sums and reservation prices. Under these conditions we could expect the price to converge as more investors are active. But these investors would not be distinct investors but investors whose behaviour is predictable and could hence be emulated by machines. The possibility of emulating human behaviour would be in contradiction to the main assumption, however. Therefore, convergence cannot occur under uncertainty. The empirical investigation below will thus test convergence.

Third, if we let the median $x_{t,*}$ become very large we can picture a situation in which a (trusted) central bank targets a lower value of its exchange rate. In order to devalue its currency this central bank would simply increase its bet until it becomes the median of the ordered offers. The huge growth of the balance sheets of the Swiss and Czech central banks after the financial crisis can be interpreted as the result of their respective efforts to keep the values of the Swiss franc and the Czech koruna in check.

Furthermore, an interesting case for the market solution is $J = 1$. It follows that $x_{t,1} = 0$ and hence no transaction takes place. Nevertheless, the spot price is defined. However, as no transaction takes place, the ‘true’ spot price cannot be observed; instead the last period’s will feature in the statistics.
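The clearing mechanism can be illustrated with a short sketch. It implements the ordering-and-median step described above under the assumption that the median is taken with respect to the cumulated order sizes; names and tie-handling are illustrative, not the author’s.

```python
def market_price(reservation_prices, order_sizes):
    """Return the reservation price of the 'median investor'.

    The (reservation price, order size) pairs are ordered by reservation
    price; the median investor is the one at whom the cumulated (absolute)
    order size reaches half of the total, which clears the market.
    """
    pairs = sorted(zip(reservation_prices, order_sizes), key=lambda p: p[0])
    total = sum(abs(x) for _, x in pairs)
    cumulated = 0.0
    for m, x in pairs:
        cumulated += abs(x)
        if cumulated >= total / 2:
            return m
    return pairs[-1][0]  # only reached for degenerate inputs

# Five investors with unit order sizes: the third-smallest reservation
# price, 100.2, becomes the market price p_{t|J}.
print(market_price([101.0, 99.5, 100.2, 100.9, 99.8], [1, 1, 1, 1, 1]))
```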


This situation, in which a price is defined without any transaction taking place, would, for example, arise if all agents expect the same future spot price, have the same evaluation function in mind, including identical attitudes toward risk, and accordingly all want to go either short or long. The latter makes sure that the asset is neither supplied nor demanded. As an example consider international investment banks which trade collateralised debt obligations. Since these papers are in general not traded at exchanges, quantitative methods are employed to price them. The more investors rely on similar or even identical pricing models, the lower $J$ will be. Therefore, the spread of (similar) quantitative pricing models may have contributed to the 2007/2008 financial crisis, when the securities market effectively collapsed.³

4.1.4 Properties of the subjective median model

It remains to be shown that the prices generated by the subjective asset pricing model do indeed share main features with the actual data. Given the model, all information sets, all individual pricing functions and the market solution, $p_{t|J} = m_{I_t}^{(*)}(p_{t+1} \mid J)$ can be calculated and its properties analysed. To that aim we consider the average of market prices over $J$ investors:
$$\mu^{(J)} := \frac{1}{J}\sum_{i=1}^{J} m_{I_t}^{(*)}(p_{t+1} \mid i).$$

We are interested in the question of whether or not asset prices are objective. Being the $J$th investor, the pre-condition for objectivity would be
$$\mu^{(J)} = \mu^{(J-1)} \qquad (4.1)$$

because the average of all reservation prices should be independent of whether or not new investors enter or active investors leave the market. In other words, Equation (4.1) implies that if the pool of investors was growing, the observed quantiles of the empirical distribution of the reservation prices of all investors would converge to a fixed number. If, like in the standard, risk-based approach, investors are drawn from a given probability distribution, or from a finite number of different probability distributions, convergence of the $\mu^{(J)}$ would necessarily follow. This convergence would ensure that the median investor’s reservation price and hence the market price also converge to a fixed number as $J$ gets larger.⁴

The answer to the question whether there is a $J$ for which condition (4.1) holds is, however, no. Notice first that the median market solution implies that $p_{t|J} - p_{t|J-1} = \gamma_J \neq 0$.⁵ Replacing $m_{I_t}^{(*)}(p_{t+1} \mid i)$ by $p_{t|i}$, we then write
$$\mu^{(J)} = \frac{1}{J-1}\sum_{i=1}^{J-1} p_{t|i} - \frac{1}{J(J-1)}\sum_{i=1}^{J-1} p_{t|i} + \frac{1}{J}\,p_{t|J-1} + \frac{1}{J}\,\gamma_J \qquad (4.2)$$
$$\;\;\; = \mu^{(J-1)} + \frac{1}{J}\,p_{t|J-1} - \frac{1}{J}\,\mu^{(J-1)} + \frac{1}{J}\,\gamma_J \qquad (4.3)$$

to see that only the two middle terms in (4.3) disappear for large $J$, whereas the last remains no matter how large $J$ gets, unless it has special properties such as being drawn from a mean-zero probability distribution. If, however, we exclude the latter restriction on grounds of the main assumption, this last term is not going to vanish. Therefore, for $\gamma_J$ under uncertainty a limiting value for $\mu^{(J)}$, $J \to \infty$, does not exist. Hence, $p_{t|i}$ is nonstationary in $J$ and an objective probability distribution cannot exist. Instead, the distribution always depends on $J$, implying that it is inherently subjective. Nonstationarity of the first moment with respect to time follows directly, and hence also nonstationarity of all higher moments. This conclusion offers a handy empirical hypothesis: under uncertainty the variance of the prices should not depend on time but on the frequency of trades and, hence, by analogy, on the number of traders.
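Equations (4.2) and (4.3) are algebraic identities and can be verified numerically for any sequence of market prices. The sketch below does so for an arbitrary, made-up sequence; the numbers carry no economic content.

```python
import numpy as np

p = np.array([100.4, 99.9, 101.2, 100.1, 102.3, 101.7])  # p_{t|1}, ..., p_{t|J}
J = len(p)

mu_J = p.mean()                  # mu^(J)
mu_Jm1 = p[:J - 1].mean()        # mu^(J-1)
gamma_J = p[J - 1] - p[J - 2]    # gamma_J = p_{t|J} - p_{t|J-1}

# right-hand side of (4.3)
rhs = mu_Jm1 + p[J - 2] / J - mu_Jm1 / J + gamma_J / J
print(np.isclose(mu_J, rhs))     # True: the decomposition holds exactly
```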

4.2 Testing for uncertainty

If uncertainty is truly relevant, it should be possible to produce corresponding empirical evidence. For testing, we first highlight the differences between asset pricing under risk and under uncertainty. To this aim we point out the inevitable departure from objective price processes when considering uncertainty. Under uncertainty, pricing turns into a genuinely subjective matter, which is in line with the basic assumption. Second, the implications of the subjective approach are discussed and an empirical test is proposed. Finally, the test is conducted. The results of the approach clearly point to the prevalence of uncertainty in a wide range of stock market and foreign exchange data across time, space and econometric approaches.

4.2.1 Conventional asset pricing model and uncertainty

Suppose that asset prices, denoted $p^*$, follow a process like
$$dp^*(t) = \sigma(t)\,dw(t) \qquad (4.4)$$

with $w(t)$ being a standard Brownian motion, $\sigma$ a random function that is independent of $w$, and $\sigma^2(t)$ Lipschitz almost surely (Hansen and Lunde, 2006, p. 4). To obtain an idea of the prices implied by (4.4) we could switch on the computer, generate values for $p^*$ and plot the result. Prices are thus available without actual trades of the asset in question. Let us now make the following thought experiment. We fix a point in time, say $t$, and ask what price will result at $t$ conditional on the past and all other information available to us. Depending on the decision-theoretical framework, we might employ the von Neumann–Morgenstern calculus of expected values (von Neumann and Morgenstern, 1947), or the minimax or maximin principles (Savage, 1951; Milnor, 1954), Knightian risk (Bewley, 1988, 2002; Gilboa and


Schmeidler, 1989) or something else. Putting aside the actual choice, we may stick to economists’ most popular custom and let a representative agent decide.⁶ The outcome of this exercise will be denoted $p_t$. The price $p_t$ is the “true”, objective price a social planner, an analyst or anyone else thinks to be the right price after considering all individual and all public information. In purely theoretical exercises this is as far as one has to go. However, as soon as we want to link our theoretical considerations to reality we have to go one step further. We now have to acknowledge that there exists a whole population of agents who all want to price the asset. We may thus write down the individual pricing problem as
$$p_i(t) = p(t) + \varepsilon_i(t), \qquad \varepsilon_i(t) \sim (0, \zeta(t)^2), \quad \zeta(t)^2 < \infty, \qquad (4.5)$$
with $i = 1, \ldots, N(t)$ denoting the agents in the population and the individual price given as an individual-specific deviation, $\varepsilon_i(t)$, from the objective price. Obviously, once we average over the agents and let $N$ be large we obtain the backbone of basically all empirical approaches. On average, all agents agree on the objective price, which is $p(t)$. It might be worth mentioning that the same mechanism needs to be assumed if we base our estimation on time series methods.

There is an intentionally close relationship between the empirical approach of (4.5) and the theoretical model in (4.3), with $p(t)$ playing the role of $\mu^{(J-1)}$ and $\frac{1}{J}\sum_{i=1}^{J}\varepsilon_i(t)$ that of $\frac{1}{J}\gamma_J$. We thus also test the properties of $\gamma_J$. In econometrics, the key property required for estimation is ergodicity. Ergodicity only makes sense, however, if we consider our observations as being drawn from the true process. Hence, deviations of the true price from the actual price have to be considered small and negligible on average. This is equivalent to the requirement that the average of agents’ pricing converges to the true price, and it covers all the space of ambiguity but not of uncertainty.

It is now easy to operationalise uncertainty.⁷ Instead of letting $\varepsilon_i(t)$ follow a certain distribution with finite variance we simply drop this assumption. Re-arranging (4.5) subject to the new situation reveals that $p(t)$ cannot be given an objective value. Moreover, ergodicity is lost, which highlights the fact that Dequech’s (2011) uncertainty features the key phenomenological property of Keynesian uncertainty (Davidson, 1991). The empirical implications are straightforward. While in the standard situation an increase in $N(t)$ reduces the ambiguity about the true value of $p(t)$, under the alternative we do not learn more about its true value. In other words, if prices get more volatile the more subjects are involved in trading them, prices are ruled by uncertainty and not by risk or ambiguity.

4.2.2 The empirical approach

In the light of the above arguments the empirical analysis has to focus on the relation between realised prices’ variance and the number of market participants.

A positive association would count as evidence supportive of uncertainty; a constant or negative one would lend support to risk. Before actually turning to estimation, we have to get a handle on the problem that $p(t)$ in (4.4) is defined in continuous time. Of course, with $t$ denoting an infinitesimal point, there is no chance to actually observe any price or trader exactly at $t$. From now on we therefore regard $t$ as a discrete, tiny time period of constant duration, say five or ten minutes, and $N(t)$ will be approximated by the number of actual price quotes within the same period. We hence write
$$\bar{p}(t) = \frac{1}{N(t)}\sum_{i=1}^{N(t)} p(t)_i \qquad (4.6)$$
$$\bar{\zeta}(t) = \frac{1}{N(t)}\sum_{i=1}^{N(t)} \bigl(p(t)_i - \bar{p}(t)\bigr)^2 \qquad (4.7)$$

and the estimates $\bar{\zeta}(t)$ should be unrelated to $N(t)$ in the standard case, while we should observe larger $\bar{\zeta}(t)$ the larger $N(t)$ under the uncertainty alternative.⁸ The reason is easy to see. If uncertainty rules, a true price does not exist and, therefore, more opinions will not reveal more information about the true price. Quite to the contrary, the more opinions are voiced, the more volatile prices will get.

One might wonder if the arguments apply to completely irrational agents as well. The answer is yes. As we still suppose the existence of a well-defined process $p(t)$, a positive association of $\bar{\zeta}(t)$ and $N(t)$ would simply indicate that agents are systematically unable to discover the true price process. However, if agents were irrational in that sense, a rational agent would not benefit from his or her superior skills. Knowing the true price when all others are trading on the wrong prices has no practical implications unless the true price will be realised with a non-zero probability. With a positive relationship between $N(t)$ and $\bar{\zeta}(t)$ such a probability cannot be determined, however.

The dependence of $\bar{\zeta}(t)$ on $N(t)$ can be characterised as a subjective interference with objective reality. Looking at the empirical implications discussed above, one might likewise say that subjective interference with objective reality may simply be regarded as subjective reality in its own right. Subjective reality, however, is nothing but uncertainty because humans are all individual subjects. The individual action is determined by creativity, fantasy and conscious decision making subject to, of course, constraints and incentives. Consequently, and by the main assumption, nobody can reliably predict all human action and its outcome. In short, if we can establish a link between $\bar{\zeta}(t)$ and $N(t)$, the distinction between objective reality with subjective interference and subjective reality becomes superfluous, as in both cases uncertainty dominates.

Following the reasoning in the previous section we now turn to the empirical test. We first write down the hypotheses and then discuss the empirical implementation. The following pair of hypotheses is going to be tested:
$$H_0: \frac{\partial \bar{\zeta}(t)}{\partial N(t)} \le 0 \quad \text{v.} \quad H_1: \frac{\partial \bar{\zeta}(t)}{\partial N(t)} > 0$$
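A minimal sketch of the bin statistics in (4.6) and (4.7): given a vector of quote timestamps (in seconds) and the corresponding prices, it returns the bin size $N(t)$, the bin mean and the bin variance for non-overlapping five-minute bins. Everything apart from the equations themselves – names, inputs, the bin width default – is an assumption made for the sake of illustration.

```python
import numpy as np

def bin_statistics(timestamps, prices, bin_seconds=300):
    """Per-bin size N(t), mean (4.6) and variance (4.7) for fixed-length bins."""
    timestamps = np.asarray(timestamps, dtype=float)
    prices = np.asarray(prices, dtype=float)
    bin_ids = np.floor(timestamps / bin_seconds).astype(int)

    sizes, means, variances = [], [], []
    for b in np.unique(bin_ids):
        p = prices[bin_ids == b]
        sizes.append(len(p))                             # N(t)
        means.append(p.mean())                           # eq. (4.6)
        variances.append(((p - p.mean()) ** 2).mean())   # eq. (4.7)
    return np.array(sizes), np.array(means), np.array(variances)

# The test then asks whether the variances rise with the sizes: a significantly
# positive slope in a regression of variances on sizes speaks for H1, uncertainty.
```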


To shed light on the validity of $H_0$ we will proceed as follows. We employ high-frequency data on stocks and exchange rates and (i) regress $\bar{\zeta}(t)$ on $N(t)^j$, $j = 0, 1, 2, \ldots$, in a linear model and (ii) use nonparametric methods to estimate the first derivative of $\bar{\zeta}(t)$ with respect to $N(t)$. The main reason for employing non-parametric methods is to gain independence from potential mis-specification within the parametric approach.⁹ For the sake of brevity all details are deferred to the next section. At this point a summary of the results should suffice.

Notice first that the benchmark of all asset pricing models, the random walk hypothesis, easily passes the test. To demonstrate the validity of the argument we generate data by a simple random walk
$$p_t = p_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim N(0, 1), \quad t = 1, \ldots, T. \qquad (4.8)$$

In order to mimic the actual data we let $T = 1000$ and draw between 2 and more than 950 data points, randomly disregarding the remaining ones. Thus, the choice of the number of observations within each five-minute interval is adopted from the observed data (in this case Credit Suisse share price data, see Table 4.1). It is well known that the variance of a price such as in (4.8) depends on the length of the time series, i.e., the absolute time span within which the process is generated. It does not, however, depend on whether or not we actually observe a realisation. Therefore, if we observe just a handful of price quotes (their number is denoted the bin size) during the five-minute interval (called the bin), it should nevertheless be informative about the variance of the price process in this time interval. If we observed many more quotes this should lead to the same estimates of the true variance on average, albeit with a much higher precision.

Figure 4.1 demonstrates these features by means of a simulated example in which the simulated process is designed as in (4.8) and thus represents the standard assumptions. The two panels report the estimates of the relationship between the number of traders (“bin size”, approximated by the number of price quotes per time interval) in the left-hand panel and the first derivative of this relationship in the right-hand panel. The estimation employs a non-parametric, local linear kernel estimator. If prices were objective we would expect the line in the left-hand panel to be horizontal, as there is no relation between the number of traders and the price variance. Due to random variations the actual line may occasionally slope. Therefore, confidence bands (dotted lines) are also calculated, giving an idea of whether or not these random upward or downward moves are significant. The right-hand panel formalises the problem of the slope. It displays the estimate of the first derivative at every point of the line in the left-hand panel. Under objectivity, the slope should always and everywhere be zero as there is no effect of the number of traders on the variance of the price. Again, confidence bands are calculated.
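The random walk benchmark is easy to reproduce. The sketch below simulates (4.8) bin by bin and draws the number of observed quotes per bin at random between 2 and 950 – in the text the bin sizes are adopted from the Credit Suisse data, so the uniform draw here is a simplifying assumption – and then regresses the per-bin variance on the bin size. Under objectivity the slope should hover around zero.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_bins = 1000, 1000
sizes, variances = [], []

for _ in range(n_bins):
    walk = np.cumsum(rng.normal(size=T))   # random walk within one bin, eq. (4.8)
    n = int(rng.integers(2, 951))          # bin size: number of observed quotes
    obs = rng.choice(walk, size=n, replace=False)
    sizes.append(n)
    variances.append(obs.var(ddof=1))      # per-bin variance (ddof=1 avoids small-bin bias)

# OLS of variance on bin size with a plain t-statistic
x, y = np.asarray(sizes, float), np.asarray(variances)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
cov = np.linalg.inv(X.T @ X) * (resid @ resid) / (len(y) - 2)
print("slope:", beta[1], "t-value:", beta[1] / np.sqrt(cov[1, 1]))
```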

Table 4.1 Data characteristics

Per bin                                                     Mean       Min    Max      Variance
Nestlé share prices, January and February 2007 (total number of bins: 1650)
  no. ticks                                                 49.38      0      188.0    729
  variance                                                  0.09       0.0    2.91     0
Nestlé share prices, January – July 2009 (total number of bins: 14,675)
  no. ticks                                                 89.47      2      773.0    3327.3
  variance                                                  0.0007     0      0.059    1.8e-6
Credit Suisse share prices, January – July 2009 (total number of bins: 14,675)
  no. ticks                                                 88.684     2.0    955.0    4268.9
  variance                                                  0.003      0.0    0.280    348.7e-6

Sources: Swiss stock exchange, own calculations. Note: “no. ticks” indicates the number of price quotes.

[Figure 4.1 about here: two panels based on the simulated random walk. Left panel: “Variance as function of bin size”; right panel: “Estimate of first derivative”; horizontal axes: bin size (rescaled); bandwidth: 1.750; bin size mean: 1.405, median: 1.143.]

On the horizontal axis the bin size is plotted. The bin size is the number of quotes in each bin, with “bin” referring to a five-minute time interval. Under objectivity the lines should be horizontal (left panel) and horizontal around zero (right panel). The dotted lines represent 95 per cent confidence intervals. In order to make the graph more accessible the bin size has been rescaled by a constant factor (see p. 100 for details).

Figure 4.1 Random walk: nonparametric estimation of the bin size–variance relation and its first derivative.
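The non-parametric leg can be sketched as a local linear kernel regression of the per-bin variance on the bin size: the local intercept traces the left-hand panel and the local slope is the first-derivative estimate of the right-hand panel. The Gaussian kernel and the fixed bandwidth are illustrative choices, and the construction of the confidence bands is omitted here.

```python
import numpy as np

def local_linear(x, y, grid, bandwidth):
    """Local linear regression of y on x, evaluated at the points in 'grid'.

    Returns the fitted level and the local slope (the first derivative)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    levels, slopes = [], []
    for g in grid:
        w = np.exp(-0.5 * ((x - g) / bandwidth) ** 2)   # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - g])
        XtW = X.T * w                                   # weighted design matrix
        beta = np.linalg.solve(XtW @ X, XtW @ y)        # weighted least squares
        levels.append(beta[0])
        slopes.append(beta[1])
    return np.array(levels), np.array(slopes)

# usage, e.g. with the per-bin statistics computed earlier:
# grid = np.linspace(sizes.min(), sizes.max(), 50)
# levels, slopes = local_linear(sizes, variances, grid, bandwidth=1.75)
```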


These confidence bands should embrace the slope estimates as well as the zero line under the null hypothesis of objectivity. In the case of the objective random walk example, the first derivative estimate behaves exactly as expected. For just a few observations within the five-minute interval (left part of the right-hand panel) the variance–number-of-traders relation is not estimated very precisely. The more trades are observed (middle section), the tighter the confidence bands around the point estimates (left-hand panel) become. Eventually, when the number of trades increases further (right-hand part of the left panel) but such large bins occur only rarely, the bands get wider again. Throughout, however, there is no indication of any significant impact of the number of trade(r)s on the price variance, as the confidence band for the first derivative estimate (right panel) embraces the zero line.

Using the above definitions the random walk approach belongs to the category of risk models.¹⁰ Figure 4.1 clearly shows that the first derivative (right panel) of $\bar{\zeta}(t)$ with respect to $N(t)$ is not significantly different from zero because the zero line is between the error bands. Hence, $H_0$ is accepted and uncertainty plays no role. In contrast to the objective random walk case, in all data examples $H_0$ can be safely rejected (see Figures 4.6–4.9 on pp. 101–103). It might be worth noticing that by the logic of science a single rejection of $H_0$ is sufficient to make the case for uncertainty. It is shown in the following that in more than six independent examples $H_0$ must be rejected.¹¹

4.2.3 The empirical evidence

In the following, we study three different data sets with two different modelling approaches. We use share price data from two distinct time periods which are separated by the onset of the great financial crisis, as well as foreign exchange rate information. We start with some parametric analyses and then turn to non-parametric estimations. The empirical evidence eventually lends support to the subjective asset pricing model across time, markets and modelling methods.

Evidence across time ...

In the first exercise we use stock prices of frequently and internationally traded stocks: Nestlé and Credit Suisse. Nestlé is a Swiss company which is one of the largest enterprises in Europe. Likewise, Credit Suisse is one of the biggest banks on the continent. The data at hand cover two distinct periods. The first stretches over January and February 2007 (Nestlé only), which can be considered a quiet and ‘normal’ market period. The second sample starts on 1 January and ends on 31 July, 2009. These three data sets comprise more than 84,400 (Nestlé, 2007 sample) and more than 1.3 million (Nestlé and Credit Suisse each, 2009 sample)

The empirics of uncertainty 91 observations. These prices are next aggregated into 1650 (Nestlé, 2007 sample) and 14,675 (Nestlé and Credit Suisse each, 2009 sample) ten- and five-minute bins respectively. Table 4.1 summarizes the data characteristics, while Figure 4.2 provides a plot of the 2007 Nestlé data. In the top panel we see a cross plot of the data while the bottom panel presents a non-parametric density estimate of the number of observations, that is, the sizes of the bin. These two plots already suggest that the variance tends to increase with the number of observed trades. Turning to formal methods this impression is corroborated. The following regressions analysis sheds light on the relationship between variance and number of ticks. Because the functional form of this relation is unknown, we use a seventh-order (i max = 7) Taylor approximation of the true functional relationship between ζt and Nt to begin with: ζ¯t = α0 + α1 Nt + α2 Nt2 + · · · + αimax Ntimax + t

(4.9)

Although the variables exhibit a time subscript, the regressions are essentially cross-section regressions. There may be occasions on which there are periods of generally higher or particularly low $\bar{\zeta}_t$, but under the null hypothesis (the standard approach) this should not be related to $N_t$. Applying standard model reduction techniques such as general-to-specific F-testing and selection criteria (Akaike, Final Prediction Error, Schwarz), we derive a suitable representation of the data. In most cases the optimal order seems to be four. Next, the first derivative with respect to the number of observations within each bin is calculated and evaluated over the data range. Table 4.2 collects the optimally fitting models and in one instance (Nestlé share prices in 2007) also a model variant where a simple linear model is estimated ad hoc.¹² In the case where a simple linear model is estimated, a standard t-test can be used for evaluating the validity of the subjective model. The null hypothesis maintains the standard case while the alternative corresponds to the subjective asset pricing model:
$$H_0: \alpha_1 = 0 \quad \text{v.} \quad H_1: \alpha_1 \neq 0 \quad \text{if } i_{max} = 1. \qquad (4.10)$$
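A sketch of the parametric leg, assuming the bin sizes and bin variances have already been computed: fit the polynomial (4.9) by ordinary least squares, read off the t-value of α1 in the linear specification for the test (4.10), and evaluate the first derivative of the fitted polynomial over the observed range. Rescaling the bin size, as is done for the figures, improves the numerical conditioning of the higher-order fits.

```python
import numpy as np

def fit_polynomial(N, zeta, order):
    """OLS fit of eq. (4.9): regress zeta-bar_t on powers of N_t up to 'order'.
    Returns the coefficient estimates (alpha_0 ... alpha_order) and t-values."""
    N, zeta = np.asarray(N, float), np.asarray(zeta, float)
    X = np.column_stack([N ** j for j in range(order + 1)])
    beta, *_ = np.linalg.lstsq(X, zeta, rcond=None)
    resid = zeta - X @ beta
    sigma2 = resid @ resid / (len(zeta) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se

def first_derivative(beta, N):
    """First derivative of the fitted polynomial with respect to N."""
    return sum(j * beta[j] * np.asarray(N, float) ** (j - 1)
               for j in range(1, len(beta)))

# Linear specification (i_max = 1): the t-value of alpha_1 carries the test (4.10).
# beta, tvals = fit_polynomial(sizes, variances, order=1)
# A significantly positive alpha_1 supports the subjective (uncertainty) model.
```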

The estimation results are reported in Table 4.2. They point strongly to a positive relationship between the number of trades and the variance of the price. This is in stark contrast to the usual conviction that more trades would reveal more information about the true price. Instead of increasing the precision with which we measure the price by using more observations, accuracy does in fact decrease. Because it is not easy to gauge the first derivative with respect to the number of observations from the coefficient estimates, we provide plots of the derivatives. It turns out that the first derivative is positive around the mean. This can be inferred from the lower panels of Figures 4.2 to 4.4, where the dotted line (Figure 4.2) or the smooth solid line (Figures 4.3, 4.4) marks the function of the first derivative.¹³ All in all there is little doubt that instead of increasing the precision of our


Table 4.2 Estimation results: coefficient estimates and residual standard deviation (corresponding t-values in parentheses)

Nestlé share prices, January and February 2007, i_max = 1:
  α0 = −0.059 (−6.45), α1 = 0.003 (19.0); σ̂_ε = 0.178493

Nestlé share prices, January and February 2007, i_max = 4:
  α0 = 0.171 (4.29), α1 = −0.01 (−3.60), α2 = 0.0002 (3.73), α3 = −1.8e-6 (−3.29), α4 = 5.5e-9 (3.41); σ̂_ε = 0.172287

Nestlé share prices, January – July 2009, i_max = 3:
  α0 = 0.054 (1.53), α1 = 0.006 (7.91), α2 = 6.6e-6 (1.60), α3 = 2.1e-8 (3.67); σ̂_ε = 1.22

Credit Suisse share prices, January – July 2009, i_max = 5:
  α0 = 0.0 (–), α1 = 0.0196 (7.75), α2 = 0.0003 (7.39), α3 = −1.96e-6 (−9.93), α4 = 4.5e-9 (12.8), α5 = −2.8e-12 (−14.2); σ̂_ε = 5.16271

[Figure 4.2 about here. Top panel: share price variance × number of investors, with the fitted line Variance = −0.05913 + 0.003088·bin size (standard errors 0.00916 and 0.000163, t-JHCSE: 7.38); mean bin size: 49.375. Bottom panel: density of bin sizes and first derivative of volatility w.r.t. bin size in a fourth-order Taylor approximation.]

Top panel: linear model results (see Table 4.2, first line) with data plot and regression line. Bottom panel: fourth-order Taylor approximation results (see Table 4.2, second line) with density estimates for bin size and first derivative (dotted line). Top and bottom panels refer to different empirical models.

Figure 4.2 Linear and fourth-order Taylor approximation for Nestlé share prices: data plot and graphical estimation results.

price measure, the precision decreases when more trades take place for any fixed information set. Interestingly, Lyons (2001) and Evans and Lyons (2002) observe similar effects when they report the tremendous increase in the measure of fit of their

[Figure 4.3 about here. Top panel: share price variance (× 1000) against bin size (= number of investors) with a parametric function estimate. Bottom panel: density estimate of bin size and first derivative w.r.t. bin size; mean bin size: 89.]

Figure 4.3 Nestlé 2009: data plot and graphical estimation results.

[Figure 4.4 about here. Top panel: share price variance (× 1000) against bin size (= number of investors) with a parametric function estimate. Bottom panel: density estimate of bin size and first derivative w.r.t. bin size (not to scale); mean bin size: 89.]

Figure 4.4 Credit Suisse 2009: data plot and graphical estimation results.

exchange rate model. The key variable they introduce is order flow data, leading to an increase of up to 64 per cent in the measure of regression fit. Moreover, the variables which are in line with economic theory are insignificant on all but one occasion. The result is similar to the present one since (cumulated) order flows


are, under fairly plausible assumptions, proportionate to the number of investors. Given the median model, it is no wonder, therefore, that Evans and Lyons are able to explain a larger share of the variance.

The evidence presented here could be challenged on grounds of endogeneity bias. If the number of investors was dependent on the variance of the price process, then the regression coefficients of Equation (4.9) would not be reliable. Therefore, authors such as Ané and Ureche-Rangau (2008) investigate the hypothesis that the number of trades (rather, trading volume) and volatility are both jointly determined by a latent number of information arrivals. In our context this would imply that the five- (ten-) minute time interval was not short enough for keeping the information set constant. In the particular case of Ané and Ureche-Rangau the data are daily prices and volumes of stocks, which certainly justifies modelling information arrivals. However, the general question whether or not the trading volume/number of traders is exogenous to the volatility remains. In support of our regression approach, we would like to point to the well-known lunchtime volatility decline.¹⁴ In fact, for every major asset market, be it stock markets, foreign exchange markets or bond markets, intra-day volatility assumes a U-shape (see, e.g., Ito, Lyons and Melvin, 1998; Hartmann, Manna and Manzanares, 2001, and the references therein). Thus, following an exogenously determined decline in the number of investors (traders) the volatility decreases, justifying the assumption of weak exogeneity of the number of investors. The same U-shaped pattern can be found in our data.

. . . markets, and . . .

The previous sections provide evidence for abandoning the standard, objective macro finance approach. However, there are at least two possibilities to reconcile the data evidence with the traditional view. One possibility is offered by infinite-variance Lévy processes as price-generating processes. These processes generate data which also feature a higher empirical variance the more data we observe while holding the information set constant. Objective Lévy processes and the subjective asset pricing model therefore probably generate data with very similar basic characteristics, and as regards discriminating between the two there is little one can do except learn from experiments. The second explanation could be that the five-/ten-minute time interval is not short enough for actually keeping the information set constant. If so, the increase in the variance as more observations enter the interval might simply reflect a variation in the information set.

Is this argument sufficient word of comfort for returning to the standard approach? Probably not. The reason is simple. While shares like Nestlé's are traded every other second, many of those assets which can be considered alternative investments, and hence conditioning variables in portfolio models, may be traded far less frequently. As an example consider the Swiss bond market. A safe alternative to Swiss shares would be Swiss government bonds. It can happen that those bonds are not traded at all within hours. Therefore, the earlier assumption that the information set remains constant within the five-/ten-minute time interval finds support. Turning the argument around, we would need to carefully synchronise the data of interest and the information set before we take up the standard approach again. An inevitable test of the macro finance model would therefore have to look at high-frequency data and make sure that during those time spells in which the conditioning variables do not change, the corresponding number of investors has no explanatory power for the variance of the dependent variable. So far, the standard procedure would be to synchronise observation data by using "suitable" time aggregates such as days, weeks, months or quarters. I do hazard the guess that the synchronisation exercise, however laborious, would always produce the same result, namely nonstationarity with respect to the number of trades.

Luckily, high-quality data which permit such a synchronisation exercise are becoming more readily available. For example, Akram, Rime and Sarno (2008) have investigated arbitrage on foreign exchange markets. Their high-frequency data set consists of matched spot, forward (forward swap) and deposit interest rate data for the currency pairs British pound/US dollar, euro/US dollar and Japanese yen/US dollar. These data will be used in the following to corroborate the previous findings. Akram et al.'s (2008) main data source is Reuters, which is an advantage for the British pound but less so for the euro and the yen, as Reuters is not the main trading platform in these latter two cases. Moreover, the Japanese yen is most heavily traded where Reuters does not collect the data. Therefore, we will only look at the British pound and the euro pairs. Even though Akram et al. (2008) collect observations at the highest possible frequency available to them, there are occasions on which quotes for the swap, the spot and the interest rates do not occur simultaneously. Therefore, the variables with the lowest trading activity set the limits. The most important effect on the data sample is a difference in the number of observations despite an exact match of the sample period. Of course, in order to test the model we need to track the market activity as closely as possible. Whenever there are quotes for, say, the spot rate while there are no changes in the interest rate, we lose information. That is why we again restrict our analysis to the largest information sets. The variable of interest is the arbitrage opportunity defined by the covered interest parity condition given below:

fx_t = fx_t^e \frac{i_t}{i_t^*} + e_t \qquad (4.11)

Equation (4.11) has it that the spot exchange rate (denoted fx_t) must equal the forward rate (fx_t^e) up to the deposit interest rate (i_t) on domestic assets discounted by the foreign interest rate (i_t^*) of the same maturity as the forward contract. As regards the actual data, bid and ask prices are available. Using ask and bid quotes provides a much more reliable picture of true arbitrage opportunities. Consequently, for each currency pair we obtain two deviation measures. A nonzero e_t indicates arbitrage opportunities. Akram et al.'s (2008) analysis focusses on the properties of e_t. They show, for example, that sizeable arbitrage opportunities exist but that these are all very short lived. For the sake of brevity the data is not described in detail here. All those details are reported in Akram et al. (2008); the data have been downloaded from Dagfinn Rime's website. Rime also kindly provided advice on handling and interpreting the data.

In what follows we will look at derived values of e_t for the two currency pairs, pound/US dollar and euro/US dollar. For each of these two pairs e_t is calculated for bid and ask spot rates respectively. We investigate forward contracts for twelve and six months because these are the most liquid markets and we therefore most likely obtain a fair picture of the whole market. Taken together, eight data sets are available for analysis (see Table 4.3). The observation period is 13 February to 30 September 2004, weekdays between 07:00 and 18:00 GMT, which provides up to 2.7 million observations per currency pair and quote (bid or ask). This data is again bundled into five-minute bins. After going through the same steps of analysis as before, it turns out that the standard approach can again be rejected in basically all cases. The first derivative of the function describing the relationship between bin size and variance is positive around the mean/median, and, relying on nonparametric analysis, there is convincing evidence for this derivative to be significantly positive, as evidenced in Table 4.4. The result can also be graphically depicted as in Figure 4.5, which provides a representative example.
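For concreteness, a minimal sketch of how the deviation series and its five-minute bins might be constructed is given below. The variable conventions (aligned quote series with a datetime index, interest rates treated as gross returns over the forward horizon) are assumptions for the illustration, not taken from Akram et al.'s (2008) code.

```python
# Illustrative construction of the CIP deviation in Equation (4.11) and its
# five-minute bins; input conventions are assumptions for this sketch.
import pandas as pd

def cip_deviation(spot: pd.Series, forward: pd.Series,
                  i_dom: pd.Series, i_for: pd.Series) -> pd.Series:
    # e_t = fx_t - fx_t^e * (i_t / i_t^*); nonzero values signal arbitrage opportunities.
    return spot - forward * (i_dom / i_for)

def five_minute_bins(e: pd.Series) -> pd.DataFrame:
    # Number of ticks (trading-activity proxy) and within-bin variance per bin.
    grouped = e.resample("5min")
    return pd.DataFrame({"n_ticks": grouped.count(),
                         "variance": grouped.var(ddof=1)}).dropna()
```

Running such a routine separately for bid and ask quotes of each currency pair and maturity yields the eight data sets summarised in Table 4.3.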

. . . econometric methods

Equation (4.9) defines a parametric function of the relation between bin size (the approximation of the number of traders) and the variance of the asset price within those five-/ten-minute time bins. The corresponding results lend support to the hypothesis of a positive association between the number of trades and the variance of the asset price. Nevertheless, one may wonder to what extent these results depend on the specific parametric functional forms used. In order to gain independence from the functional form we now report the outcome of a nonparametric, local quadratic estimation of the relation between bin size and variance. The estimation is based on the software XploRe, which is specifically suitable for analysing financial market data by means of non- and semi-parametric functions.15 In particular, we make use of the procedure "lplocband" of the "smoother" library, applying the Epanechnikov kernel. The kernel bandwidth is chosen manually because the automatic procedure always selects the lowest possible bandwidth within the pre-defined range. These lower bounds are close

Table 4.3 CIP deviation data characteristics, Feb–Sep 2004 (per bin)

POUND/USD ask 12 months (total number of bins: 19,727)
  no. ticks: mean 139.21, min 1.0, max 1524.0, variance 7323.25
  variance: mean 9.25, min 0.0, max 1772.89, variance 519.85

POUND/USD bid 12 months (total number of bins: 19,727)
  no. ticks: mean 139.21, min 1.0, max 1524.0, variance 7323.25
  variance: mean 10.38, min 0.0, max 1649.45, variance 531.97

POUND/USD ask 6 months (total number of bins: 19,711)
  no. ticks: mean 131.61, min 1.0, max 1521.0, variance 7239.00
  variance: mean 2.05, min 0.0, max 2282.09, variance 358.70

POUND/USD bid 6 months (total number of bins: 19,711)
  no. ticks: mean 131.61, min 1.0, max 1521.0, variance 7239.00
  variance: mean 2.09, min 0.0, max 2182.55, variance 337.41

EURO/USD ask 12 months (total number of bins: 19,735)
  no. ticks: mean 129.74, min 1.0, max 558.0, variance 5955.00
  variance: mean 6.20, min 0.0, max 9695.46, variance 4858.83

EURO/USD bid 12 months (total number of bins: 19,735)
  no. ticks: mean 129.74, min 1.0, max 558.0, variance 5955.00
  variance: mean 5.14, min 0.0, max 9594.91, variance 4728.03

EURO/USD ask 6 months (total number of bins: 19,713)
  no. ticks: mean 117.14, min 1.0, max 559.0, variance 5781.86
  variance: mean 0.91, min 0.0, max 33.08, variance 2.67

EURO/USD bid 6 months (total number of bins: 19,713)
  no. ticks: mean 117.14, min 1.0, max 559.0, variance 5781.86
  variance: mean 0.91, min 0.0, max 29.80, variance 2.06

Sources: Akram et al. (2008), own calculations.

to the minimum distance between any two explanatory variable data points. The results do not change qualitatively, however, within a large range of bandwidths. In order to ensure that our results are robust with respect to the specific financial product we again consider two different markets, the stock market and


Table 4.4 Estimating CIP deviation variance: coefficient estimates and residual standard deviation

POUND/USD ask 12 months: i_max = 4; α0 = 7485.59 (522.3); α1 = −15.31 (8.94); α2 = 0.2237 (0.0456); α3 = −0.0004 (7.78e-7); α4 = 2.36e-7 (3.558e-8); σ̂ = 18705.7
POUND/USD bid 12 months: i_max = 1; α0 = 7095.85 (380.6); α1 = 20.677 (2.51); α2 = n.a.; α3 = n.a.; α4 = n.a.; σ̂ = 16786.22
POUND/USD ask 6 months: i_max = 1; α0 = 1478.29 (74.46); α1 = 3.562 (0.52); α2 = n.a.; α3 = n.a.; α4 = n.a.; σ̂ = 3466.82
POUND/USD bid 6 months: i_max = 3; α0 = 1008.62 (196.1); α1 = 12.875 (3.451); α2 = −0.0455 (−0.017); α3 = 7.0e-5 (2.41e-5); α4 = n.a.; σ̂ = 3718.51
EURO/USD ask 12 months: i_max = 2; α0 = 1682.37 (86.96); α1 = 6.86 (2.320); α2 = 6.05 (0.012); α3 = n.a.; α4 = n.a.; σ̂ = 6799.21
EURO/USD bid 12 months: i_max = 3; α0 = 1667.63 (188.8); α1 = 38.67 (3.95); α2 = −0.1445 (0.0233); α3 = 0.0003 (0.839e-5); α4 = n.a.; σ̂ = 7269.83
EURO/USD ask 6 months: i_max = 4; α0 = 240.727 (45.56); α1 = 14.25 (1.476); α2 = −0.0955 (0.0144); α3 = 0.0003 (5.11e-5); α4 = −2.19e-7 (5.765e-8); σ̂ = 1590.91
EURO/USD bid 6 months: i_max = 4; α0 = 229.241 (40.21); α1 = 13.285 (1.303); α2 = −0.086 (0.0127); α3 = 0.0002 (4.51e-5); α4 = −2.03e-7 (5.088e-8); σ̂ = 1404.12

Regressions of variance on the number of traders (ticks). Coefficient estimates with corresponding t-values in parentheses.

the foreign exchange market and, in the case of the stock data, two different time periods. Before turning to the empirical evidence let us reconcile the results which should be expected under the null hypothesis which corresponds to the standard approach. Figure 4.1 (p. 89) plots observations that are generated by simulating 14,675 random walks of length 955. In the next step, between 2 and 955 data points of these random walks are selected randomly from each of the 14,675 data sets. These observations mimic the five-minute bins. Accordingly, the variance of these bins is estimated and set in relation to the number of artificial observations entering the bin. This simulation procedure thus draws on the actual Credit Suisse data (see Table 4.1) and clearly demonstrates that even under the random walk hypothesis for price data the relationship between bin size and variance should be completely stochastic; the first derivative

Each data set is shown as variance (×1000) against bin size with the parametric function estimate, together with the density estimate of bin size and the first derivative w.r.t. bin size (mean bin sizes: 130 and 117).

Figure 4.5 UIP one-year bid (first and second panel from top) and six-months ask (third and fourth panel from top) euro/US dollar, 2004: data plot and graphical estimation results.

estimate frequently crosses the zero line, and the 95-per cent confidence bands safely enclose zero. By contrast, the empirical relationships do look pretty different. For example, Figure 4.6 (p. 100) shows that the estimated first derivative is significantly larger

Left panel: variance as a function of bin size; right panel: estimate of the first derivative (bin size rescaled; bin size mean: 0.119, median: 0.102; bandwidth: 0.225). In apparent contrast to the random walk example (see Figure 4.1), an upward-sloping line describes the relation between trading intensity and variance (left-hand panel). The positive slope is confirmed on the right-hand side, where the estimate of the first derivative is plotted together with the corresponding confidence bands.

Figure 4.6 Nestlé tick data 2007: nonparametric estimation of bin size-variance relation and its first derivative.

than zero around the mean bin size in the case of the 2007 Nestlé data. Very similar pictures emerge for the other data sets. In order to render the estimation feasible, i.e., to avoid numerical problems, the independent variable was divided by twice the maximum value of the bin size. If that was not sufficient to overcome numerical problems, both variables were normalised by their respective empirical standard deviations. This linear transformation cannot affect the relation between the independent and the dependent variable. The figures show that, pretty much in line with the parametric estimation, the variance increases with the bin size. The panel on the left shows an upward trend in the variance for growing bin sizes, and the panel to the right confirms that the first derivative of the relationship is significantly larger than zero around the mean bin size and for sizes larger than the mean. Therefore, the hypothesis derived from the median model receives support once more.

The following graphs depict the results for the data compiled by Akram et al. (2008). For the sake of brevity, we present only a selection of all results. The complete set is available upon request. The data is always standardised such that the empirical variance of dependent and independent variables is one. As before, this linear transformation cannot affect the relationship between trading activity and volatility.
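The estimates reported here were produced with XploRe's "lplocband". Purely for illustration, a comparable local quadratic estimator with an Epanechnikov kernel can be sketched as follows; this is a generic weighted-least-squares implementation with a hand-picked bandwidth h, and the function and variable names are hypothetical.

```python
# Illustrative local quadratic (Epanechnikov kernel) fit of the bin size-variance
# relation and its first derivative; not the XploRe "lplocband" routine.
import numpy as np

def epanechnikov(u: np.ndarray) -> np.ndarray:
    # Epanechnikov kernel weights.
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def local_quadratic(x: np.ndarray, y: np.ndarray, grid: np.ndarray, h: float):
    # Weighted least squares around each grid point; returns fitted value and
    # first derivative at every grid point. Confidence bands are not reproduced.
    fit, slope = np.empty(len(grid)), np.empty(len(grid))
    for j, x0 in enumerate(grid):
        w = epanechnikov((x - x0) / h)
        X = np.column_stack([np.ones_like(x), x - x0, (x - x0) ** 2])
        XtW = X.T * w                                  # weight the design matrix
        beta = np.linalg.pinv(XtW @ X) @ (XtW @ y)
        fit[j], slope[j] = beta[0], beta[1]
    return fit, slope
```

A first-derivative estimate that stays above zero around the mean bin size is the pattern reported for the empirical data sets.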


Figure 4.7 Nestlé tick data 2009 (top panel) and Credit Suisse tick data 2009 (bottom panel): nonparametric estimation of bin size-variance relation and its first derivative.

Apparently, the empirical evidence for many different data sets and for different econometric approaches is much more supportive of subjective asset pricing than of what we call the “standard”, objective approach.
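The contrast with the objective benchmark can be made concrete. The random-walk simulation behind Figure 4.1 can be reproduced in spirit along the following lines; the number and length of the walks mirror the figures quoted above, while the sampling details and variable names are assumptions for the illustration.

```python
# Sketch of the random-walk benchmark behind Figure 4.1: simulate random walks,
# draw a bin of random size from each, and relate bin variance to bin size.
import numpy as np

rng = np.random.default_rng(0)
N_WALKS, LENGTH = 14_675, 955            # dimensions quoted for the Credit Suisse sample

bin_sizes, bin_vars = [], []
for _ in range(N_WALKS):
    walk = np.cumsum(rng.standard_normal(LENGTH))     # one simulated price path
    n = int(rng.integers(2, LENGTH + 1))              # random bin size between 2 and 955
    sample = rng.choice(walk, size=n, replace=False)  # observations entering the bin
    bin_sizes.append(n)
    bin_vars.append(sample.var(ddof=1))

# Relating bin_vars to bin_sizes (parametrically or nonparametrically) should
# produce a first derivative that hovers around zero, as in Figure 4.1.
```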

Test decision, caveats, reconciliation and directions of future research

Under the assumption that financial market prices follow processes of risk or ambiguity, prices should have converged with respect to the number of individuals involved in the search for the efficient market price. The empirical investigation showed, however, that the data has properties which one would expect if investors behaved as if no objective price process existed. Although this absence is impossible to prove, since the non-existing cannot be proven not to exist, the empirical evidence can be interpreted as "as if" behaviour: investors behave as if there was no objective price process.16 This interpretation also holds the key for reconciling experimental evidence of investors' behaviour.

Panels show the variance as a function of bin size and the estimate of the first derivative (variance and bin size rescaled; bin size mean: 1.630, median: 1.474; bandwidth: 1.750).

Figure 4.8 Pound 12-month ask (top panel) and bid (bottom panel): nonparametric estimation of bin size-variance relation and its first derivative.


Figure 4.9 Euro 12-month ask (top panel) and bid (bottom panel): nonparametric estimation of bin size-variance relation and its first derivative.

The first stylised fact of experimental evidence is the so-called irrational behaviour in artificial asset markets (see inter alia Smith, Suchanek and Williams, 1988; Cipriani and Guarino, 2005). The literature has also demonstrated that experienced traders can push the market price towards its fundamental value and hence eradicate irrational prices (see, e.g., Dufwenberg, Lindqvist and Moore, 2005; Drehmann, Oechsler and Roider, 2005; Hussam, Porter and Smith, 2008; Hommes, 2013, to name but a few). Notably, all these experiments use a design in which an (implicit) objective price process is induced. For example, the traded asset may yield a return with a given probability each period. Therefore, irrationality in such a situation might be used as an argument against uncertainty.

Uncertainty begs a different interpretation, however. The participants in these experiments behave exactly as they would in the real world; they trade as if there was no objective price process, which by no means implies that they are irrational. By contrast, expert traders are able to discover the induced pricing rule and hence tend to behave "rationally". Using the notation of 2.2.2, we would say that in these experiments participants start with d_u until they realise that they can profitably switch to d_r because uncertainty is not part of the experiment. Therefore, these experiments do not lend support to the objective price approach. The decisive question instead is: how do experts trade in the absence of an objective price process? Thus, the need for an accordingly set-up experiment remains, and economists might have a closer look at optimal decision making under the subjective approach in general. Therefore, in the light of the more realistic properties of the theoretical model, the conclusion of rational "irrationality" of asset markets seems to be the more plausible one.

Alternatively, one may regard each price tick as a piece of information itself. In the logic of our argument every investor would represent an indispensable piece of information. The standard rational expectations hypothesis would thus have to include all investors in the information set.17 Then, the standard approach and our model would generate data which would be observationally equivalent. Interestingly, research into the impact of public announcements on asset markets (Carlson and Lo, 2006; Omrane and Heinen, 2009, among others) shows that the breaking of publicly available news clearly leaves its traces in the data, by increased trading activity or by generating an "uncertainty" premium (Frydman, Goldberg and Mangee, 2015), for instance. At the same time, however, similar traces are also found when no new information arrives. Researchers very often link this phenomenon to the presence of private information and other market frictions.

The remarkable distinction of the subjective modelling approach hence is that it does not rely on two unproven assumptions. The standard approach starts by positing the existence of an objective pricing process and, when implausible consequences arise, yet another assumption is made to fix the problem. The subjective model requires fewer such assumptions while yielding more explanatory power. It should therefore be regarded as superior to the objective modelling approach (Friedman, 1953c, p. 14). The key difference between these two approaches is that the latter assumes risk or ambiguity while the former assumes uncertainty to prevail. Therefore, rejecting the hypothesis of convergence is equivalent to rejecting ambiguity in favour of uncertainty.

Notes

1 Bewley (2002 [1988]) provides another interesting explanation for inaction under Knightian uncertainty (ambiguity).

2 If M_t(p_{t+1} | I_t) is an economic model, m_{I_t}(p_{t+1}) would represent the forecast value, for example.
3 These markets finally died when it became apparent that all models generated too-high prices. As a result only one model (J = 1) survived. This model made all owners of collateralised debt obligations try to sell.
4 Note that individual investors may 'swap' positions. The argument is made about large numbers, not about individual behaviour, which would be much more restrictive.
5 Strictly speaking, γ ≠ 0 only holds for sure if the new investor's investment exceeds x_{t,*}, the median investor's investment. Otherwise, several investors have to enter. The line of argument is neither affected by this special case nor by conditioning γ on J.
6 Instead of a single agent we might also allow for finite heterogeneity of agents as long as there exists a fixed weighted average of decision making principles.
7 See also section 2.2 for a more detailed treatment.
8 Note that the use of the more popular notion of realised volatility due to Barndorff-Nielsen and Shepard (2002) and Andersen, Bollerslev, Diebold and Labys (2003), among others, instead of the simple variance formula is contingent on the existence of an objective pricing process and hence is not appropriate here.
9 Alternative approaches are due, for example, to Gopikrishnan, Plerou, Liu, Amaral, Gabaix and Stanley (2000), Spada, Farmer and Lillo (2010), Zumbach (2004) or Plerou, Gopikrishnan, Amaral, Gabaix and Stanley (1999).
10 For the sake of brevity, we will subsume models of risk and ambiguity under the term "risk models" because they are equivalent under the null hypothesis.
11 This investigation into the matter is not unique. For example, Gopikrishnan et al. (2000) and Plerou et al. (1999) have also established a positive association between trading activity and volatility.
12 Yet another nonparametric estimation of the relationship between bin size and variance is reported below. All methods deliver the same qualitative results.
13 The first derivative is normalised to match the density estimate scale. This adjustment does not affect its position relative to the zero line.
14 Similar observations have been made with respect to the holiday season. See, e.g., Hong and Yu (2009) on the so-called gone-fishing effect.
15 The code and the data are available on request from the author.
16 One should not feel compelled to prove that something does not exist in the first place. The assumption of existence of objective prices would require sound evidence instead.
17 Arguably, with such modification the idea of a representative agent disappears.

5

Conclusion

"Why did nobody notice it?", the Queen of England famously inquired about the 2007/2008 financial crisis.1 The royal question was well placed and also addressed to the right people because it was asked during Her visit to the London School of Economics. Who, if not economists, should have been able to anticipate a crisis of the scale of the "U.S. subprime meltdown"?2 As everyone now knows all too well, economists were not prepared, and the nagging question of whether or not they could have known better remains.

Part of the answer rests with economists themselves. Too many of them have for too long clung to an overly simplistic worldview that serves the desire for mathematical beauty and rigour very well but has little to say about an economy which actually does not fit into the straitjacket of tractable calculus. Partly in response to the economists' crisis that soon followed the economic crisis, "uncertainty" gained currency among leading professional journals. This trend was triggered not least by the feeling that the ignorance of "uncertainty" might somehow have made models so rigorous that they would not even allow for events such as crises.

Indeed, "uncertainty" does hold the key for answering the Queen's question. But it does so in a way that the new "uncertainty" fashion in economics hardly embraces. This is because "uncertainty" is much more than a buzz word or publication leverage for ambitious economists. Uncertainty must instead be seen as the engine of the economy and, therefore, as a central part of our economic analysis and understanding of real economics.

It is actually rather surprising that uncertainty has arguably been pushed to the outskirts of economics when considering that most economists would certainly agree that methodological individualism is the cornerstone of economics as a science. From methodological individualism to the main assumption of this book, that humans cannot be emulated by any means, it is only a very short and inevitable step. The proof is by contradiction; if it was possible to emulate humans in all relevant aspects, then individualism would no longer be. Therefore, this book's assumptions immediately imply that uncertainty does indeed rule. Although individualism is widely quoted as the foundation of economics, economists very often only pay lip service to it. In their deeds, economists more often than not put their model agents and whole economies in nice little boxes of collectivist stochastic calculus instead. This is in striking contrast to the amazing human capacities to reflect, transform, feel and think.

These genuine skills force us to accept that the economic sphere is indeed ruled by uncertainty and not by risk or by other, higher orders of knowledge (2.1.2). The contrast between nominal individualism and factual collectivism in economics is certainly reason enough to invoke a more sincere use of uncertainty.

Unfortunately, in striking contrast to the now widespread use of the word, there is no proper consensus on what "uncertainty" actually means. In order to correctly address uncertainty we therefore need a definition of what it actually is. Since uncertainty affects human decision making, a formal definition of uncertainty should refer to the decision making process. Therefore, we suggest a definition that centres around the necessary deductions in the face of not-knowable knowledge (1.2.3). Put simply, uncertainty is making predictions about the not-knowable future.

To a certain degree uncertainty thus is a demon that lashes out at scientific rigour, or so it seems. Indeed, the larger part of the economic literature deals with decision making under risk and ambiguity at best (2.2.1). Risk and ambiguity afford mathematically elegant solutions and descriptions of human decision making which are lost, however, when allowing for uncertainty. No wonder, therefore, that economists tiptoe around genuine uncertainty rather than embracing its potential. Since uncertainty prevails in the economy, economists nevertheless have to deal with it and live up to the challenge uncertainty poses.

For its better half, uncertainty is not a demon at all. Although it is still way too early to welcome all potential benefits, it seems quite obvious that uncertainty very well explains seemingly odd or "irrational" behaviour on the part of humans. In fact, what is commonly called "irrational" may not be irrational at all when considering uncertainty. The reason for this simply is that telling apart rational and irrational behaviour requires knowledge about what the rational behaviour would be. Under uncertainty, such knowledge is simply not available. Therefore, "irrational" choices or decision making strategies such as heuristics, rules of thumb, et cetera must be appreciated and become part of economic scrutiny as valuable and valid means of coping with uncertainty in a structured way (2.2.2).

It is also possible to turn the argument around and ask what the value added of so-called rational expectations in concurrent economics is. Rational expectations are at the heart of modern micro- and macroeconomics and finance, for example, and, therefore, they should serve a purpose. With uncertainty, however, no such rational expectations can exist, at least not for the vast majority of problems. Therefore, adding rational expectations to an economic argument is of little use, if of any use at all, beyond the rational recognition that uncertainty prevails. Maybe ironically, economists attest to this conclusion by amending and improving their very models, which are supposed to be the basis for expectation formation. As these amendments are genuine and not predictable, economists' own actions invalidate the expectations previously obtained (2.5). While the specifics of the amendments are not predictable, the amendment as a fact is very well predictable and should thus be part of the formation of expectations in the first place. Alas, it is not (yet). Therefore, what are usually dubbed rational expectations are very irrational expectations after all, and economists who ignore the built-in irrationality of rational expectations only force the Queen to ask her question over and over again.


As a consequence of this irrationality of rational expectations, the dominant model class of real business cycle (RBC) and dynamic stochastic general equilibrium (DSGE) models is of little use at best, and harmful in the worst case. Of course, reasons to abandon rational expectations models, and DSGE or RBC models in particular, are not hard to come by (2.3.2). Romer's (2016) surprise attack not so long ago even promoted the discussion to the Ivy League level. However, uncertainty's contribution to the list of reasons is Janus-faced. Janus' one face looks at DSGE models with all due repudiation because of the inherent contradiction just described. The other face, however, acknowledges that uncertainty may pose non-solvable problems, leaving humans somewhat helpless. Hence, the DSGE approach can be accepted as one method of overcoming the helplessness uncertainty imposes. This second face, therefore, implies that it is not the mathematical method or the striving for formal rigour per se that causes "the trouble with macroeconomics" (Romer, 2016) but its application in a pretence-of-knowledge way. The real challenge thus remains to use mathematics and formal rigour for addressing uncertainty rather than assuming it away.

With rational expectations gone and stochastic calculus exposed, not much is left for concurrent economic theory to pride itself on. In fact, the case for uncertainty unveils that current mainstream economics' (nominal) ontology is a bad match for economic reality (2.3.2). Economists have for too long nurtured the illusion of being scientific chiefly because they apply science's methodology, especially positivist analysis. However, science is concerned with matters of risk (randomness) or even higher orders of knowledge while the economy is pushed by uncertainty. Therefore, positivism is in fact not applicable to economic problems. Despite all evidence to the contrary, economists frequently nonetheless assume a positivist guise. Consequently, owing to the ontological mismatch, it does not come as a surprise that actual economic discussions (2.3.2) only superficially refer to positivist methodology while they factually employ constructivist methods. The latter is good news because constructivism is much better suited to cope with uncertainty.

Moreover, actual mainstream economic approaches can also be understood as yet another out of many equally valid ways to cope with uncertainty. Heuristics and rules of thumb have already been mentioned as uncertainty management strategies, but this list certainly also comprises belief, ignorance and pretence of knowledge (2.2.2). The still-dominant dynamic stochastic general equilibrium approach to macromodelling, for example, can thus be understood as a strategy of more or less conscious pretence of knowledge for answering the challenge of the demon. But having said this, we must be very clear on this point: pretence of knowledge can never be an appropriate, conclusive answer to the Queen's question. Other than that, pretending that uncertainty just does not exist is one out of several more or less equally valid ways of handling uncertainty, and concurrent economics has excelled at perfecting this strategy.

Therefore, uncertainty does not imply refuting New Keynesian-neoclassical (mainstream) economics as a whole but giving it a place on the shelf among the largely equally relevant approaches of competing, so-called heterodox economics such as post-Keynesian, institutional or behavioural economics, to name but a few. Of course, it would be a serious mistake to take comfort in this state of affairs. Quite to the contrary, the prevalence of uncertainty demands a paradigmatic shift in economics towards paradigmatic uncertainty (2.4). Under this new paradigm the ontological foundations of economics have to change too. The appropriate ontology has it that the nature of truth in economics is the result of genuinely individual action that generates ex-ante not-knowable states of nature and therefore gives rise to uncertainty. With this paradigmatic shift, problems of risk and ambiguity turn into very special cases of actual human behaviour in an uncertain environment. The new paradigm helps, for example, to eventually resolve many seasoned economic "puzzles" (2.4.1), which truly expands our understanding of the economy.

What is more, anchoring uncertainty at the ontological level yields the promise of moving beyond current mainstream economics (2.4.4). This is because people do make decisions under uncertainty even though we so far only partly understand how. But corresponding rules and techniques hence obviously exist. A fine example of these tools has been documented by psychologists who have identified the decisive role of emotions for decision making. Religion, prejudice, inattention, heuristics and several others have been suggested, but their proper identification, empirical confirmation and statistical discrimination against weaker explanations (2.4.4) remain a considerable challenge for economists. In general, under the uncertainty paradigm economics apparently must become even more empirical than it has lately already become.

The uncertainty paradigm would be worthless, however, if it did not also offer new insights into economic problems beyond resolving "puzzles". Although there is reason to assume that economists have yet to cover the larger part of paradigmatic uncertainty, some benefits can already be identified. For example, setting up and designing institutional choice can be viewed as a means of coping with uncertainty (3.1). In the domain of fiscal policy, we conclude that uncertainty should be tamed, yet not perfectly so, because perfect taming would turn the economy from an exciting, open system into a rather boring New Keynesian-neoclassical mechanical system (3.3). Furthermore, under the paradigm of uncertainty, money can be understood as trust that counters uncertainty and thereby facilitates exchange to mutual advantage (3.2). Inflation thus turns into a sign of a loss of trust in those institutions and processes that generate the trust necessary for exchange. Monetary policy should therefore keep an eye on those institutions and processes that cater for this confidence. By the same token, tasking central banks with keeping inflation in check inevitably overburdens them because a central bank is in no way the only institution that supports this confidence. It thus pays to keep the central banks' capabilities and limits for fighting inflation in perspective to avoid disappointment.


If uncertainty is not to remain exclusively in the theoretical domain, we also need ways of hunting it down empirically. Fortunately, uncertainty can indeed be detected by careful data analysis (4). Without uncertainty, data properties are well defined and it is also possible to recover them empirically. A feasible test strategy, therefore, is to hypothesise the absence of uncertainty and to relegate uncertainty to the alternative hypothesis. Financial market data is a case in point. The uncertainty test exploits the restrictions on the variance of financial market prices that are implied by risk and ambiguity. These restrictions allow the application of standard econometric tools. Under the alternative of uncertainty, the estimated relationship between sample size and variance exhibits distinctive patterns whose significance can be tested. It turns out that subjective views do indeed matter on financial markets, which lends convincing support to the alternative hypothesis, that is, the prevalence of uncertainty in economics and in the economy.

Uncertainty in economics destroys a great many certainties in concurrent economics. It easily proves that mainstream ontology is ill-fitted for answering the most pressing questions about the course of the economy and the appropriate design of its institutions. This deficit has already led to a situation in which economists only nominally honour natural science's epistemology and ontology while, unconsciously or not, adopting more suitable ontologies such as constructivism. However, in order not to get lost in ontological arbitrariness, economists must enter a deliberate discussion about the best-fitting approach for economics. Putting uncertainty in the centre of this discussion holds the promise of considerable scientific returns on investment; and on top of that, the monarch need not ask Her question again.

Notes

1 The Telegraph, 5 May 2008.
2 Reinhart and Rogoff (2009, ch. 13).

Bibliography

Akram, Q. F., Rime, D. and Sarno, L. (2008). Arbitrage in the foreign exchange market: Turning on the microscope, Journal of International Economics 76(2): 237–253. Albert, H. (1965). Modell-Platonismus: der neoklassische Stil des ökonomischen Denkens in kritischer Beleuchtung, Kiepenheuer & Witsch, Köln. Albert, H., Arnold, D. and Maier-Rigaud, F. (2012). Model platonism: Neoclassical economic thought in critical light, Journal of Institutional Economics 8(3): 295–323. Alchian, A. A. (1950). Uncertainty, evolution, and economic theory, Journal of Political Economy 58: 211–221. Andersen, T. G., Bollerslev, T., Diebold, F. X. and Labys, P. (2003). Modeling and forecasting realized volatility, Econometrica 71(2): 579–625. Ané, T. and Ureche-Rangau, L. (2008). Does trading volume really explain stock returns volatility?, Journal of International Financial Markets, Institutions and Money 18(3): 216–235. Aristoteles (1951). Nikomachische Ethik, Artemis-Verlag, Zürich. Aristoteles (1971). Politik, 2nd edn, Artemis-Verlag, Zürich. Arrow, K. J. and Debreu, G. (1954). Existence of an equilibrium for a competitive economy, Econometrica 22(3): 265–290. Bacchetta, P. and van Wincoop, E. (2005). Rational inattention: solution to the forward discount puzzle, Research Paper 156, International Center for Financial Asset and Engineering. Baker, S. R., Bloom, N. and Davis, S. J. (2015). Measuring economic policy uncertainty, CEP Discussion Papers dp1379, Centre for Economic Performance, LSE. Barndorff-Nielsen, O. E. and Shepard, N. (2002). Econometric analysis of realised volatility and its use in estimating stochastic volatility models, Journal of the Royal Statistical Society, Series B 64: 253–280. Bekaert, G., Hoerova, M. and Lo Duca, M. (2013). Risk, uncertainty and monetary policy, Journal of Monetary Economics 60(7): 771–788. Bewley, T. F. (1988). Knightian decision theory and econometric inference, Cowles Foundation Discussion Papers 868, Cowles Foundation, Yale University. Bewley, T. F. (2002). Knightian decision theory. Part I, Decisions in Economics and Finance 25: 79–110. Bhaskar, R. (1978). A realist theory of science, Harvester Press Humanities Press, Hassocks, Sussex Atlantic Highlands, NJ. Bianco, M., Bontempi, M., Golinelli, R. and Parigi, G. (2013). Family firms’ investments, uncertainty and opacity, Small Business Economics 40(4): 1035–1058.


Blanchard, O. (2016). Do DSGE models have a future?, Policy Briefs PB16-11, Peterson Institute for International Economics. Boero, G., Smith, J. and Wallis, K. (2008). Uncertainty and disagreement in economic prediction: the Bank of England Survey of External Forecasters, Economic Journal 118(530): 1107–1127. Böhle, F. and Busch, S. (2012). Von der Beseitigung und Ohnmacht zur Bewältigung und Nutzung, in F. Böhle and S. Busch (eds), Management von Ungewissheit: Neue Ansätze jenseits von Kontrolle und Ohnmacht, Transcript, Bielefeld, Germany, pp. 13–33. Bomberger, W. A. (1996). Disagreement as a measure of uncertainty, Journal of Money, Credit and Banking 28(3): 381–392. Carlson, J. A. and Lo, M. (2006). One minute in the life of the DM/US$: Public news in an electronic market, Journal of International Money and Finance 25: 1090–1102. Cass, D. and Shell, K. (1983). Do sunspots matter?, Journal of Political Economy 91(2): 193–227. Cheung, Y.-W., Chinn, M. D. and Pascual, A. G. (2005). Empirical exchange rate models of the nineties: Are any fit to survive?, Journal of International Money and Finance 19: 1150–1175. Cipriani, M. and Guarino, A. (2005). Herd behavior in a laboratory financial market, American Economic Review 95(5): 1427–1443. Comte, A. (1865). A General View of Positivism, Trübner and Co., New York. Conlisk, J. (1996). Why bounded rationality?, Journal of Economic Literature 34(2): 669–700. Cross, R., Hutchinson, H., Lamba, H. and Strachan, D. (2013). Reflections on soros: Mach, quine, arthur and far-from-equilibrium dynamics, Journal of Economic Methodology 20(4): 357–367. Crotty, J. (1994). Are Keynesian uncertainty and macrotheory compatible? conventional decision making, institutional structures, and conditional stability in keynesian macromodels, in G. Dymski and R. Polin (eds), New Perspectives in Monetary Macroeconomics, University of Michigan Press, Ann Arbor, pp. 105–139. Damasio, A. (1995). Descartes’ error: emotion, reason, and the human brain, Avon Books, New York. Damasio, A. (2012). Self comes to mind: constructing the conscious brain, Vintage, London. Davidson, P. (1991). Is probability theory relevant for uncertainty? A post Keynesian perspective, Journal of Economic Perspectives 5(1): 129–143. Debreu, G. (1974). Excess demand functions, Journal of Mathematical Economics 1(1): 15–21. Dequech, D. (2011). Uncertainty: a typology and refinements of existing concepts, Journal of Economic Issues 45(3): 621–640. Derman, E. (2011). Models.Behaving.Badly.: Why Confusing Illusion with Reality Can Lead to Disaster, on Wall Street and in Life, Free Press, New York. Draghi, M. (2012). Verbatim of the remarks made by Mario Draghi, Website. Speech at the Global Investment Conference in London, 26 July 2012. www.ecb.europa.eu/ press/key/date/2012/html/sp120726.en.html; accessed 4 March 2017. Drehmann, M., Oechsler, J. and Roider, A. (2005). Herding and contrarian behavior in financial markets: an internet experiment, American Economic Review 95(5): 1403–1426.

Driver, C., Imai, K., Temple, P. and Urga, G. (2004). The effect of uncertainty on UK investment authorisation: Homogenous vs. heterogeneous estimators, Empirical Economics 29(1): 115–128. Druckman, J. N. (2001). Evaluating framing effects, Journal of Economic Psychology 22(1): 91–101. Dufwenberg, M., Lindqvist, T. and Moore, E. (2005). Bubbles and experience: An experiment, American Economic Review 95(5): 1731–1737. Ebeling, R. (1983). An interview with G.L.S. Shackle, The Austrian Economics Newsletter 4(1): 1–7. Edge, R. M. and Gurkaynak, R. S. (2010). How useful are estimated DSGE model forecasts for central bankers?, Brookings Papers on Economic Activity 41(2 (Fall)): 209–259. Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics 75: 643–669. Engel, C. (2016). Exchange rates, interest rates, and the risk premium, American Economic Review 106(2): 436–474. Engle, C. (1996). The forward discount anomaly and the risk premium: A survey of recent evidence, Journal of Empirical Finance 3(2): 123–192. Engle, R. F. and Granger, C. W. J. (1987). Co-integration and error correction: representation, estimation and testing, Econometrica 55(2): 251–276. European Central Bank (2016). A guide to the Eurosystem/ECB staff macroeconomic projection exercises, Online. Evans, M. D. D. and Lyons, R. K. (2002). Order flow and exchange rate dynamics, Journal of Political Economy 110(1): 170–180. Fama, E. F. (1970). Efficient capital markets: a review of theory and empirical work, Journal of Finance 25(2): 383–417. Farmer, J. D. (2013). Hypotheses non fingo: problems with the scientific method in economics, Journal of Economic Methodology 20(4): 377–385. Friedman, M. (1953a). The case for flexible exchange rates, in M. Friedman (ed.), Essays in Positive Economics, University of Chicago Press, Chicago, pp. 157–203. Friedman, M. (1953b). Essays in positive economics, University of Chicago Press, Chicago, Ill. Friedman, M. (1953c). The Methodology of Positive Economics, in M. Friedman (ed.), Essays in Positive Economics, University of Chicago Press, Chicago, pp. 3–42. Frydman, R. and Goldberg, M. D. (2011). Beyond mechanical markets: asset price swings, risk, and the role of the state, Princeton University Press, Princeton. Frydman, R., Goldberg, M. D. and Mangee, N. (2015). Knightian uncertainty and stock-price movements: why the REH present-value model failed empirically, Discussion Paper 2015-38, World Economic Institute, Kiel University, Kiel. Fullbrook, E. (2009). Introduction: Lawson's reorientation, in E. Fullbrook (ed.), Ontology and economics: Tony Lawson and his critics, Routledge, New York, pp. 1–12. Galton, F. (1907). Vox populi, Nature 75(1949): 450–451. Gilboa, I. and Schmeidler, D. (1989). Maxmin expected utility with non-unique prior, Journal of Mathematical Economics 18(2): 141–153. Goldstein, D. G. and Gigerenzer, G. (2002). Models of ecological rationality: the recognition heuristic, Psychological Review 109(1): 75–90. Good, I. J. (1966). Speculations concerning the first ultraintelligent machine, Vol. 6 of Advances in Computers, Elsevier, pp. 31–88.


Gopikrishnan, P., Plerou, V., Liu, Y., Amaral, L. A. N., Gabaix, X. and Stanley, H. (2000). Scaling and correlation in financial time series, Physica A: Statistical Mechanics and its Applications 287(3): 362–373. Govorkhin, S., Ganina, M., Lavrov, K. and Dudinstev, V. (1989). Homo Sovieticus, World Affairs 152(2): 104–108. Grauwe, P. D. and Kaltwasser, P. R. (2007). Modeling optimism and pessimism in the foreign exchange market, CESifo Working Paper Series 1962, CESifo GmbH. Gudkov, L. (2010). Conditions necessary for the reproduction of “Soviet Man”, Sociological Research 49(6): 50–99. Hajnal, J. (1965). European marriage patterns in perspective, in D. V. Glass and D. E. C. Eversley (eds), Population in History, Edward Arnold, London, pp. 101–143. Hansen, P. R. and Lunde, A. (2006). Realized variance and market microstructure noise, Journal of Business & Economic Statistics 24: 127–161. Hartmann, P., Manna, M. and Manzanares, A. (2001). The microstructure of the euro money market, Journal of International Money and Finance 20(6): 895–948. Hendry, D. F. and Mizon, G. E. (1998). Exogeneity, causality, and co-breaking in economic policy analysis of a small econometric model of the money in the UK, Empirical Economics 23(3): 267–294. Hendry, D. F. and Richard, J.-F. (1982). On the formulation of empirical models in dynamic econometrics, Journal of Econometrics 20: 3–33. Hodgson, G. M. (2006). What are institutions?, Journal of Economic Issues 40(1): 1–25. Hommes, C. (2013). Reflexivity, expectations feedback and almost self-fulfilling equilibria: economic theory, empirical evidence and laboratory experiments, Journal of Economic Methodology 20: 406–419. Hong, H. and Yu, J. (2009). Gone fishin’: seasonality in trading activity and asset prices, Journal of Financial Markets 12(4): 672–702. Hussam, R. N., Porter, D. and Smith, V. L. (2008). Thar she blows: can bubbles be rekindled with experienced subjects? American Economic Review 98(3): 924–937. Ito, T., Lyons, R. K. and Melvin, M. T. (1998). Is there private information in the FX market? The Tokyo Experiment, Journal of Finance 53(3): 1111–1130. Johansen, S. (1988). Statistical analysis of cointegration vectors, Journal of Economic Dynamics 12: 231–254. Kahneman, D., Schkade, D. and Sunstein, C. R. (1998). Sharedoutrage and erratic awards: the psychology of punitive damages, Journal of Risk and Uncertainty 16(1): 49–86. Kant, I. (2004). Die Kritiken, Suhrkamp Verlag KG, Frankfurt/Main. Kapeller, J. (2013). ‘Model-platonism’ in economics: on a classical epistemological critique, Journal of Institutional Economics 9(2): 199–221. Keynes, J. M. (1921). A treatise on probability, Cambridge University Press, Cambridge. Keynes, J. M. (1936). The general theory of employment, Interest, and Money, Macmillan, London and New York. Keynes, J. M. (1937). The General Theory of Employment, The Quarterly Journal of Economics 51(2): 209–223. Knight, F. H. (1921). Risk, Uncertainty, and Profit, Schaffner and Marx; Houghton Mifflin Company, Boston, MA. Kuhn, T. S. (1970). The structure of scientific revolutions, University of Chicago Press, Chicago. Lawson, T. (1997). Economics and reality, Routledge, London, New York. Levda, I. (2001). Homo post-sovieticus, Sociological Research 40(6): 6–41.

Lichter, A., Löffler, M. and Siegloch, S. (2016). The long-term costs of government surveillance: insights from Stasi spying in East Germany, CESifo Working Paper Series 6042, CESifo Group, Munich. List, J. A. (2004). Neoclassical theory versus prospect theory: evidence from the marketplace, Econometrica 72(2): 615–625. Lothian, J. R. (2016). Uncovered interest parity: the long and the short of it, Journal of Empirical Finance 36: 1–7. Lucas, R. E. (2003). Macroeconomic priorities, American Economic Review 93: 1–14. Lucas, R. J. (1976). Econometric policy evaluation: a critique, Carnegie-Rochester Conference Series on Public Policy 1(1): 19–46. Lyons, R. K. (2001). Foreign exchange: macro puzzles, micro tools, Pacific Basin Working Paper Series 01-10, Federal Reserve Bank of San Francisco. MacKenzie, D. (2006). An engine, not a camera: how financial models shape markets, MIT Press, Cambridge, MA. Malkiel, B. G. (2003). The efficient market hypothesis and its critics, Journal of Economic Perspectives 17(1): 59–82. Mankiw, N. G. (2011a). Economics, 2nd edn, Cengage Learning, Andover. Mankiw, N. G. (2011b). Principles of Macroeconomics, 6th edn, Cengage Learning, Andover. Mankiw, N. G. (2014). Economics, 3rd edn, Cengage Learning, Andover. Mantel, R. R. (1974). On the characterization of aggregate excess demand, Journal of Economic Theory 7(3): 348–353. Mason, J. W. (2016). James Crotty and the responsibilities of the heterodox. www.ineteconomics.org/perspectives/blog/james-crotty-and-the-responsibilities-ofthe-heterodox. 2017-02-08. McLean, R. D. and Pontiff, J. (2016). Does academic research destroy stock return predictability?, The Journal of Finance 71(1): 5–32. Meese, R. A. and Rogoff, K. (1983). Empirical exchange rate models of the seventies: do they fit out of sample?, Journal of International Economics 14(1–2): 3–24. Milnor, J. W. (1954). Games against nature, in C. H. Coombs, R. L. Davis and R. M. Thrall (eds), Decision Processes, Wiley, New York, pp. 49–60. Minsky, H. P. (1982). Can "it" happen again?: essays on instability and finance, M.E. Sharpe, Armonk, N.Y. Minsky, H. P. (2008[1986]). Stabilizing an unstable economy, McGraw-Hill, NY. Mises, L. (1998 [1949]). Human action: a treatise on economics, Ludwig Von Mises Institute, Auburn. Müller, C. (2010). You CAN Carlson-Parkin, Economics Letters 108(1): 33–35. Müller, C. and Busch, U. (2005). The new German transfer problem, Jahrbuch für Wirtschaftswissenschaften 56(3): 307–326. Müller, C. and Köberl, E. (2012). Catching a floating treasure: a genuine ex-ante forecasting experiment in real time, KOF Working papers 12-297, KOF Swiss Economic Institute, ETH Zurich. Müller-Kademann, C. (2008). Rationally 'irrational': theory and empirical evidence. https://ssrn.com/abstract=1162790. Müller-Kademann, C. (2009). The information content of qualitative survey data, Journal of Business Cycle Measurement and Analysis 2019(1): 1–12. Müller-Kademann, C. (2016). The puzzle that just isn't. https://arxiv.org/pdf/1604.08895.


Müller-Kademann, C. and Köberl, E. M. (2015). Business cycle dynamics: a bottom-up approach with markov-chain measurement, Journal of Business Cycle Measurement and Analysis (1): 41–62. Muth, J. F. (1961). Rational expectations and the theory of price movements, Econometrica 29: 315–335. Nerlove, M. (1983). Expectations, plans, and realizations in theory and practice, Econometrica 51(2): 1251–1280. von Neumann, J. and Morgenstern, O. (1947). Theory of Games and Economic Behavior, Princeton University Press, Princeton. Nowzohour, L. and Stracca, L. (2017). More than a feeling: confidence, uncertainty and macroeconomic fluctuations, Working Paper Series 2100, European Central Bank. Obstfeld, M. and Rogoff, K. (1995). The mirage of fixed exchange rates, Journal of Economic Perspectives 9(4): 73–96. Obstfeld, M. and Rogoff, K. (2000). The six major puzzles in international macroeconomics: Is there a common cause?, Working Paper 7777, National Bureau of Economic Research. Omrane, W. B. and Heinen, A. (2009). Is there any common knowledge news in the euro/dollar market?, International Review of Economics & Finance 18(4): 656–670. Ormerod, P. (1999). Butterfly economics: a new general theory of social and economic behavior, Pantheon Books, New York. Pesaran, H. M. (1987). The Limits to Rational Expectations, Basil Blackwell, Oxford. Piaget, J. (1969). The mechanisms of perception, Basic Books, New York. Piaget, J. (1971). Genetic epistemology, W.W. Norton & Co, New York. Plerou, V., Gopikrishnan, P., Amaral, L. A. N., Gabaix, X. and Stanley, H. E. (1999). Economic fluctuations and diffusion, Papers, arXiv.org. Polanyi, M. (1962). Personal knowledge: towards a post-critical philosophy, University of Chicago Press, Chicago. Popper, K. R. (1959). The Logic of Scientific Discovery, Routledge, Abingdon, Oxon and New York. Reinhart, C. M. and Rogoff, K. S. (2009). This Time Is Different: Eight Centuries of Financial Folly, Vol. 1, 1 edn, Princeton University Press, Princeton. Romer, P. (2016). The trouble with macroeconomics. https://paulromer.net/wp-content/ uploads/2016/09/WP-Trouble.pdf. Rossiter, C. M. (1977). Models of paradigmatic change, Communication Quarterly 25(1): 69–73. Russell, B. (1918). The problems of philosophy, new and rev. ed. edn, Williams and Norgate; Henry Holt London, New York. Salvatore, D. (2005). The euro-dollar exchange rate defies prediction, Journal of Policy Modelling 27(2): 455–464. Samuelson, W. and Zeckhauser, R. (1988). Status quo bias in decision making, Journal of Risk and Uncertainty 1(1): 7–59. Sargent, T. J. (1982). The end of four big inflations, in R. E. Hall (ed.), Inflation: Causes and Effects, University of Chicago Press, Chicago, pp. 41–98. du Sautoy, M. (2016). What We Cannot Know: Explorations at the Edge of Knowledge, HarperCollins Publishers Limited, London. Savage, L. J. (1951). The theory of statistical decision, Journal of the American Statistical Association 46(253): 55–67. Savage, L. J. (1972). The foundations of statistics, Dover Publications, New York.

Schmeidler, D. (1989). Subjective probability and expected utility without additivity, Econometrica 57(3): 571–587. Schumpeter, J. A. (1912). Theorie der wirtschaftlichen Entwicklung, 1st ed. edn, Duncker & Humblot. Schwarz, F. (2008). Das Experiment von Wörgl: ein Weg aus der Wirtschaftskrise, Synergia, Darmstadt. Schwert, G. W. (2003). Anomalies and market efficiency, in G. Constantinides, M. Harris and R. M. Stulz (eds), Handbook of the Economics of Finance, Vol. 1 of Handbook of the Economics of Finance, Elsevier, chapter 15, pp. 939–974. Shackle, G. L. S. (1972). Epistemics and Economics, Cambridge University Press, Cambridge. Shleifer, A. and Summers, L. H. (1990). The noise trader approach to finance, Journal of Economic Perspectives 4(2): 19–33. Sims, C. (2005). Rational inattention: a research agenda, Discussion Paper, Series 1: Economic Studies 34, Deutsche Bundesbank. Skidelsky, R. J. A. (2009). Keynes: the return of the master, 1st edn, Allen Lane, London. Smets, F. and Wouters, R. (2003). An estimated dynamic stochastic general equilibrium model of the Euro area, Journal of the European Economic Association 1(5): 1123–1175. Smets, F. and Wouters, R. (2007). Shocks and frictions in US business cycles: a Bayesian DSGE approach, American Economic Review 97(3): 586–606. Smith, V. L., Suchanek, G. L. and Williams, A. W. (1988). Bubbles, crashes, and endogenous expectations in experimental spot asset markets, Econometrica 56(5): 1119–1151. Snower, D. J. and Merkl, C. (2006). The caring hand that cripples: the East German labor market after reunification, The American Economic Review 96(2): 375–382(8). Sonnenschein, H. (1973). Do Walras' identity and continuity characterize the class of community excess demand functions?, Journal of Economic Theory 6(4): 345–354. Soros, G. (2013). Fallibility, reflexivity and the human uncertainty principle, Journal of Economic Methodology 20: 309–329. Spada, G. L., Farmer, J. D. and Lillo, F. (2010). Tick size and price diffusion, Papers 1009.2329, arXiv.org. Syll, L. P. (2016). On the Use and Misuse of Theories and Models in Mainstream Economics, College Publications, London. Taleb, N. (2005). Fooled by Randomness: the hidden role of chance in life and in the markets, Random House, New York. Taylor, M. P. (1995). The economics of exchange rates, Journal of Economic Literature 33(1): 13–47. Thaler, R. H. (2016). Behavioral economics: past, present, and future, American Economic Review 106(7): 1577–1600. Thomas, A. K. and Millar, P. R. (2012). Reducing the framing effect in older and younger adults by encouraging analytic processing, Journals of Gerontology: Series B 67(2): 139–149. Tirole, J. (2002). Rational irrationality: some economics of self-management, European Economic Review 46(4–5): 633–655. Tversky, A. and Griffin, D. (1991). Endowment and contrast in judgments of well-being, in F. Strack, M. Michael Argyle and N. Schwarz (eds), Subjective well-being: An interdisciplinary perspective, Pergamon Press, Elmsford, NY, pp. 101–118.


Tversky, A. and Kahneman, D. (1981). The framing of decisions and the psychology of choice, Science 211(4481): 453–458.
Vinge, V. (1993). The coming technological singularity: how to survive in the post-human era, in NASA (ed.), Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace, Vol. 10129 of NASA Conference Publication, NASA Lewis Research Center, Cleveland, OH, pp. 11–22.
Wang, P. and Jones, T. (2003). The impossibility of meaningful efficient market parameters in testing for the spot–forward relationship in foreign exchange markets, Economics Letters 81: 81–87.
Wilkinson, N. and Klaes, M. (2012). An Introduction to Behavioral Economics, Palgrave Macmillan.
Wyplosz, C. (2017). When the IMF evaluates the IMF. http://voxeu.org/article/when-imf-evaluates-imf. Accessed: 20/02/2017.
Zumbach, G. (2004). How the trading activity scales with the company sizes in the FTSE 100, Papers cond-mat/0407769, arXiv.org.

Index

action i, 3–4, 10, 13, 16, 19, 22, 24, 27, 40, 45–6, 55, 60, 65–6, 76, 87, 107, 109
AI 16
AI-singularity 16
ambiguity 7–8, 39, 46, 53
anchoring 48
anchor values 26, 48
artificial intelligence 16
Asian disease 20, 25
asset market 81, 94, 103–4
Austria 71
auxiliary hypotheses 34–5
Bank of Sweden 69
Bayesian model averaging 58
behavioural economics 55
belief 26, 29, 38, 108
Bitcoin 68
bounded rationality 44
Bretton Woods 17, 65, 75
business tendency surveys 51
chance 27
choice 24, 27
Choquet Expected Utility 8
cointegration 54
commodity money 68, 70
confidence 3, 46–7, 50–1, 52–3, 66–8, 70–3, 75, 77, 109
constructivism 2, 31–2, 38, 40, 54, 108
credible information 26
critical realism 31
decision making 19, 20, 22–23, 25, 29, 39, 109
deduction 11, 37, 40, 54–5

determinism 3, 5–6, 9–10, 14, 46, 64, 80
deliberate ignorance 26, 28
double coincidence 67
DSGE 2, 34, 36, 56, 108
ECB 47–8
econometrics 55
economic law 2, 22, 30
efficient market hypothesis 13, 24
efficient markets 24
Einstein’s law 39
Ellsberg paradox 8
emergence 53
emotions 26–8, 109
empirical analysis 53
endowment 26
epistemology 2, 29, 31–2, 34, 38, 40, 44, 54–6, 110
equilibrium 39–40, 53
equilibrium concept 42
ergodicity 39–40, 53
event 1, 3, 4–5, 7–11, 16, 46–50, 78, 106
explainawaytions 27
experiments 25–6, 29–30, 37, 103–4
explanatory hypotheses 34, 39
falsifiability 30–1
falsification 34, 36–7, 45, 55
fiscal policy 2, 58, 63, 73, 75, 109
fiscal stimulus 75, 79
frontal lobe 27–8
general equilibrium 42–3, 53, 55
Germany 71
gold 67


heuristics 26, 108
homo oeconomicus 23, 27
Hungary 71
hyperinflation 71
induction 36, 54
inflation 49, 71–3, 109
informational content 33–4, 35–6, 51
institutions 3, 26, 49, 63, 68, 76–7, 109
irrational behaviour 25, 28, 46, 103, 107
irrational bubbles 44
irrational exuberance 44
irrationality 15, 20, 25, 41, 61
Keynesian-neoclassical symbiosis 76
learning 25, 44
Lucas critique 2, 13, 17, 56, 59, 75
main assumption 2, 15–18, 22, 29
mainstream macroeconomics 34, 37–8
markets 18
microfoundations 40–41
minimum wage 64–5
model choice 57
model consistent expectations 57
money 49, 67, 109
natural law 7, 17–18, 37
network externalities 68
New Keynesian models 56
Newton’s law 39
noise trading 44
non-stationarity 54
objectivity 21–2
ontology 2, 30, 32, 108, 110
optimality 39, 53
optimisation 23, 29, 53
paper money 67, 69–70
paradigm 31, 38, 40–1, 45, 53, 109
paradigmatic change 42
peer effect 49
perfect foresight 37
persuasion 38
Poland 71
policy evaluation 59
positivism 13, 17, 30, 40, 54, 108
positivist epistemology 3–12, 34, 44

positivist-falsificationist epistemology 32, 36
prejudice 27, 59, 109
pretence of knowledge 47–8, 50
proof 13, 30, 71, 106
puzzles 20, 39, 41, 43, 45, 109
quantum theory 12
rational choice 8, 21, 28, 56, 63
rational expectations 37, 40–1, 43, 45–6, 57–9, 75
rationality 15, 19–20, 22–3, 29, 39, 41, 45, 53, 58
rationality test 25
real business cycle 59
real business cycle models 56
reality 22, 26, 31, 33–4, 36–7, 39–40, 48–9, 52–3, 54–5, 61, 65, 86–7, 108
reductionism 40
reflexivity 16, 24
Reichsbank 72
relatedness to reality 33–4, 36
Riksbank 69
risk 7, 20, 39, 41, 43, 53
science 26, 29, 31
seignorage 68, 79
self-confirmation 51
sentiment 26, 44
singularity 16
space-time 39
state of confidence 46
status quo 26
subjectivity 21–2
subjective rationality 39
subjective utility maximisation 39
surprise indicator 51
tautology 33
taxonomy of uncertainty 2, 11–12, 32–3, 80
theorem 33–4, 39–40, 55
transcendental realism 31
transformitivity 17, 19, 24
trust 67–8, 70–1, 77, 109
truth 1, 21–2, 29–30, 31–2, 34–5, 38, 41, 54–5, 60, 109
uncertainty 2, 23, 43, 45, 50, 53, 64, 68, 75, 108

uncertainty measurement 51
uncertainty paradigm 53
uncertainty shock 11, 76, 78
uncovered interest parity 44
uncovered interest parity condition 20–1, 44
utility 23, 25
utility maximisation 39–40, 43, 45, 53

value at risk 28
vector autoregressive models 36
verification 37
welfare 1, 39, 36, 65
welfare state 65
whim 26
Wörgl 70
