EXPERIMENTS IN MACROECONOMICS
RESEARCH IN EXPERIMENTAL ECONOMICS
Series Editors: R. Mark Isaac and Douglas A. Norton

Recent Volumes:

Volume 7: Emissions Permit Experiments, 1999
Volume 8: Research in Experimental Economics, 2001
Volume 9: Experiments Investigating Market Power, 2002
Volume 10: Field Experiments in Economics, 2005
Volume 11: Experiments Investigating Fundraising and Charitable Contributors, 2006
Volume 12: Risk Aversion in Experiments, 2008
Volume 13: Charity with Choice, 2010
Volume 14: Experiments on Energy, the Environment, and Sustainability, 2011
Volume 15: New Advances in Experimental Research on Corruption, 2012
Volume 16: Experiments in Financial Economics, 2013
RESEARCH IN EXPERIMENTAL ECONOMICS VOLUME 17
EXPERIMENTS IN MACROECONOMICS

EDITED BY
JOHN DUFFY Department of Economics, University of California, Irvine, CA, USA
United Kingdom · North America · Japan · India · Malaysia · China
Emerald Group Publishing Limited
Howard House, Wagon Lane, Bingley BD16 1WA, UK

First edition 2014

Copyright © 2014 Emerald Group Publishing Limited

Reprints and permission service
Contact: [email protected]

No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a licence permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every effort to ensure the quality and accuracy of its content, Emerald makes no representation, implied or otherwise, as to the chapters' suitability and application and disclaims any warranties, express or implied, to their use.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-78441-195-4
ISSN: 0193-2306 (Series)
ISOQAR certified Management System, awarded to Emerald for adherence to Environmental standard ISO 14001:2004. Certificate Number 1985.
CONTENTS

LIST OF CONTRIBUTORS ... vii

MACROECONOMICS IN THE LABORATORY
John Duffy ... 1

EXPERIMENTS ON EXPECTATIONS IN MACROECONOMICS AND FINANCE
Tiziana Assenza, Te Bao, Cars Hommes and Domenico Massaro ... 11

PERSISTENCE OF SHOCKS IN AN EXPERIMENTAL DYNAMIC STOCHASTIC GENERAL EQUILIBRIUM ECONOMY
Charles N. Noussair, Damjan Pfajfar and Janos Zsiros ... 71

FORECAST ERROR INFORMATION AND HETEROGENEOUS EXPECTATIONS IN LEARNING-TO-FORECAST MACROECONOMIC EXPERIMENTS
Luba Petersen ... 109

AN EXPERIMENT ON CONSUMPTION RESPONSES TO FUTURE PRICES AND INTEREST RATES
Wolfgang J. Luhan, Michael W. M. Roos and Johann Scharler ... 139

EXPERIMENTS ON MONETARY POLICY AND CENTRAL BANKING
Camille Cornand and Frank Heinemann ... 167

EVOLVING BETTER STRATEGIES FOR CENTRAL BANK COMMUNICATION: EVIDENCE FROM THE LABORATORY
Jasmina Arifovic ... 229

EXPERIMENTAL EVIDENCE ON THE ESSENTIALITY AND NEUTRALITY OF MONEY IN A SEARCH MODEL
John Duffy and Daniela Puzzello ... 259
LIST OF CONTRIBUTORS

Jasmina Arifovic
Department of Economics, Simon Fraser University, Burnaby, BC, Canada
Tiziana Assenza
Department of Economics and Finance, Università Cattolica del Sacro Cuore, Milan, Italy; CeNDEF, University of Amsterdam, The Netherlands
Te Bao
Department of Economics, Econometrics and Finance, University of Groningen, Groningen, The Netherlands
Camille Cornand
Université de Lyon, Lyon, France; CNRS, GATE Lyon Saint-Etienne, Ecully, France
John Duffy
Department of Economics, University of California, Irvine, CA, USA
Frank Heinemann
Department of Economics and Management, Technische Universität Berlin, Germany
Cars Hommes
CeNDEF, University of Amsterdam and Tinbergen Institute, Amsterdam, The Netherlands
Wolfgang J. Luhan
Department of Economics, Ruhr-Universität Bochum, Bochum, Germany
Domenico Massaro
CeNDEF, Department of Quantitative Economics, University of Amsterdam and Tinbergen Institute, Amsterdam, The Netherlands
Charles N. Noussair
Department of Economics, Tilburg University, Tilburg, The Netherlands
Luba Petersen
Department of Economics, Simon Fraser University, Burnaby, BC, Canada
Damjan Pfajfar
Department of Economics, Tilburg University, Tilburg, The Netherlands
Daniela Puzzello
Department of Economics, Indiana University, Bloomington, IN, USA
Michael W. M. Roos
Department of Economics, Ruhr-Universität Bochum, Bochum, Germany
Johann Scharler
Department of Economics, University of Innsbruck, Innsbruck, Austria
Janos Zsiros
Department of Economics, Cornell University, Ithaca, NY, USA
MACROECONOMICS IN THE LABORATORY

John Duffy

ABSTRACT

This article discusses the methodology of using laboratory methods to address macroeconomic questions. It also provides summaries of the articles in this volume.

Keywords: Experimental economics; macroeconomics
It is often asserted that macroeconomics is a nonexperimental science: that macroeconomic theories can only be evaluated using the available historical time-series data, and that the only relevant "experiments" are the "natural" ones implemented by policymakers or happenstance.1 However, as controlled laboratory experimentation has become established as a mainstream empirical methodology in microeconomics, researchers have recently turned their attention toward using those same laboratory methods to evaluate the assumptions and predictions of macroeconomic models and theories.2
Experiments in Macroeconomics
Research in Experimental Economics, Volume 17, 1–10
Copyright © 2014 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0193-2306/doi:10.1108/S0193-230620140000017001
The blossoming of experimental research in macroeconomics owes much to the careful, micro-founded modeling of modern macroeconomic research, which enables laboratory testing. But it is also driven by the lack of field data necessary to test those micro-founded macroeconomic models and their assumptions, and it is further motivated by related concerns about how to select an equilibrium in environments with multiple equilibria. Implementation of macroeconomic models in the laboratory necessarily involves many simplifications so that the structure and incentives of the economy can be quickly and transparently conveyed to human subjects. However, the control of the laboratory allows one to make the type of causal inference that eludes macro-econometricians, who can never be certain whether their specification of the model environment is the correct one. For instance, if one wants to evaluate whether or not agents are forming rational expectations, it is helpful to know the true data-generating process!

To date, macroeconomic experiments have shed light on how agents form expectations of future, endogenously determined variables and how they solve coordination problems in environments with multiple equilibria (e.g., bank runs), as well as on the impact of various fiscal and monetary policy interventions on macroeconomic measures such as inflation and output.

Skeptics of the application of laboratory methods to macroeconomic questions might object that it is difficult to generalize from the small-scale, low-stakes setting of laboratory experiments with (typically) small numbers of student subjects to macroeconomic phenomena involving decisions made by millions (or billions) of highly motivated economic actors all interacting with one another. However, there are good reasons to think that behavior observed in the small scale of the laboratory may indeed generalize to the field, since we often observe similar patterns of behavior in both field and laboratory data. For example, in laboratory lifecycle consumption/savings experiments, subjects often fail to smooth their consumption in the manner predicted by the permanent income hypothesis; instead, subjects' consumption is found to be excessively sensitive to their current income, a phenomenon that is well documented in studies using field data (see, e.g., Carbone & Hey, 2004). Other examples of a correspondence between laboratory and field data are found in several of the articles of this book. Research has also shown that "professional" subjects (defined as those having more life experience with the tasks that are the focus of the experiment) often behave no differently in laboratory studies than the convenience sample of university student subjects (see, e.g., Fréchette, 2011).

As for the study of macroeconomic policy interventions, it seems that a strong case can be made for exploring the impact (and possibly
unanticipated consequences) of various macroeconomic policy interventions in the small scale of the laboratory and making any necessary adjustments before rolling those same policies out on a much larger scale in the field.

Finally, we note that researchers in other, seemingly nonexperimental fields have also resorted to controlled laboratory methods as a window to a better understanding of aggregate phenomena. For instance, experimental astrophysicists conduct controlled laboratory tests with custom instruments to better understand the structure of galaxies and the nature of interstellar space; experimental evolution researchers apply laboratory methods to multiple generations of bacteria to evaluate theories of evolution; and experimental political scientists use laboratory methods with human subjects to assess theories of turnout in elections.

While caution is clearly warranted in extrapolating from experimental evidence to the field, this book offers a compelling case for the new methodology of studying macroeconomics in the laboratory. Indeed, the articles in this volume represent the first-ever collection of studies and surveys focusing on the use of laboratory methods to address macroeconomic questions. The aim of this book is to demonstrate, via examples, the kinds of macroeconomic questions that can be addressed using laboratory methods and thereby to encourage macroeconomists to add controlled laboratory experiments to their set of tools. My hope is that one day in the near future, laboratory methods will be on a par with computational experiments and macro-econometric estimation methods as just another means of empirically validating macroeconomic models.
ARTICLE SUMMARIES

The article "Experiments on Expectations in Macroeconomics and Finance" by Tiziana Assenza, Te Bao, Cars Hommes, and Domenico Massaro provides a comprehensive survey of experiments exploring how individuals form expectations of future macroeconomic variables. Expectation formation is an important issue in modern macroeconomic models, where endogenous economic variables are usually determined in part by the forward-looking expectations of agents. Assenza et al. distinguish between early experiments, where the data-generating process is exogenous to subjects' forecasts, and more recent experiments, in which the data-generating process is determined in part by subjects' expectations so that there is a belief–outcome interaction. They further highlight the
methodological difference between learning-to-forecast and learning-to-optimize experiments. In learning-to-forecast experiments, subjects are tasked only with the objective of forecasting endogenous variables accurately; given each subject's forecast, a computer program then solves for the agent's optimal quantity choices. By contrast, in learning-to-optimize experiments, subjects are tasked with determining quantity choices on their own, so that their forecasts are implicit. The distinction between learning-to-forecast and learning-to-optimize experimental designs originates in the work of Marimon and Sunder (1994) and can be regarded as a methodological dichotomy that is native to experimental macroeconomic research; macroeconomic models (in contrast to microeconomic models) can be quite complicated, and so it is often useful to decompose the tasks that subjects face into separate forecasting and optimizing exercises. Assenza et al. also report on some experiments that combine both approaches. Finally, Assenza et al. discuss some of their recent work applying learning-to-forecast experimental designs in the context of New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models, where the focus is on the role played by monetary policy rules in stabilizing private sector expectations.

The article "Persistence of Shocks in an Experimental Dynamic Stochastic General Equilibrium Economy" by Charles N. Noussair, Damjan Pfajfar, and Janos Zsiros also implements a version of a New Keynesian DSGE model in the laboratory; however, they use a learning-to-optimize experimental design and are interested in issues concerning the persistence of shocks to output, inflation, and interest rates. These authors acknowledge the difficulties involved in implementing an experiment that "fully conforms" to theoretical New Keynesian DSGE models, but they nevertheless forge ahead with creative solutions to implementing monopolistically competitive firms and menu costs of price adjustment. Subjects in their experiment are divided between consumers and producers, and in one treatment subjects also operate as the central bank in setting interest rates. Their laboratory implementation allows them to turn on (or off) monopolistically competitive firms and menu cost frictions, enabling them to assess the marginal contributions of these devices to the persistence of macroeconomic shocks. They also study a low-friction environment where firms have no monopoly price-setting power and there are no menu costs of price adjustment, so that the only frictions come from the bounded rationality of the subject participants, who must make labor and output supply and demand decisions in their roles as consumers and firms. Their main finding is that the presence of monopolistically competitive firms does
indeed lead to additional persistence of shocks to output, though there are no persistent effects on inflation. Further, the addition of menu costs does not generate much persistence of monetary policy shocks to output or inflation, though policy shocks exhibit greater persistence when the central bankers are human rather than robots.

The article "Forecast Error Information and Heterogeneous Expectations in Learning-to-Forecast Macroeconomic Experiments" by Luba Petersen explores how information feedback affects expectation formation in the context of a New Keynesian DSGE model where subjects are tasked with forecasting inflation and the output gap. Petersen designs, implements, and reports on two learning-to-forecast experiments. The first, baseline treatment is one in which historical information on inflation, the output gap, past forecasts of these variables, as well as realizations of macroeconomic shocks, is available for look-up by subjects, but where forecast errors have to be inferred by comparing past forecasts of inflation and the output gap with ex-post realizations of those variables. This baseline treatment is contrasted with a forecast error information (FEI) treatment, where the immediate past realizations of those same variables are presented on the same decision screen where subjects input their forecasts for inflation and output, and where immediate past forecast errors are also calculated and presented to subjects. Petersen's main finding is that forecast errors are more tightly distributed around zero in the FEI treatment as compared with the baseline treatment, indicating that the immediate availability and calculation of past FEI yields significant improvements in subjects' inflation and output gap forecasts. Further, since the FEI focuses subjects' attention on minimizing forecast errors, they tend to give less weight to macroeconomic shocks, with the result that the FEI economy involves less heterogeneity in expectations and is less volatile as compared with the baseline treatment. Petersen also reports that a constant gain error correction model provides the best fit to the experimental data relative to several other candidates, including rational expectations. Petersen's article provides clear evidence that the way in which historical information is provided matters. It is also a methodological contribution on how to better implement learning-to-forecast experiments in the laboratory.

The article "An Experiment on Consumption Responses to Future Prices and Interest Rates" by Wolfgang J. Luhan, Michael W. M. Roos, and Johann Scharler reports on an experiment designed to understand whether and how consumers react to announced future changes in interest rates and/or prices. Specifically, they study the discounted utility model of lifecycle consumption planning, which serves as the foundation of
household behavior in all DSGE models, real business cycle or New Keynesian alike. For simplicity, the consumption planning problem they study involves no uncertainty. However, consumers are asked to formulate consumption paths for their entire five-period lifetime (as opposed to simply choosing consumption amounts sequentially in each period). This choose-a-path approach represents a new type of learning-to-optimize experimental design. They note that the optimal lifetime consumption path depends on all future prices and interest rates. Their main experimental intervention is to announce, at the start of a five-period lifetime, changes that will occur in exogenously determined prices or interest rates midway through that lifetime, in period 3 (in other treatments, there is either no change in prices or interest rates, or changes that are perfectly off-setting). The discounted utility model predicts both anticipation effects of these announced changes on consumption from the very first period of the lifetime and impact effects on consumption when the change actually takes place in period 3, and these two effects move in opposite directions. Luhan et al. report that, in the absence of any changes in prices or interest rates, most subjects' consumption plans depart from the optimal path but get closer to it with repeated lifetime experience. They also find some evidence that subjects change their consumption plans in response to known future changes in prices and interest rates in a manner that is qualitatively (though not quantitatively) similar to the predicted optimal adjustment path. However, most of this adjustment comes from impact effects, which are far greater than optimal, as opposed to anticipation effects, which are almost nonexistent. The latter finding provides a more nuanced explanation for the excessive sensitivity of consumption to current economic conditions reported in many studies using field data.

The article "Experiments on Monetary Policy and Central Banking" by Camille Cornand and Frank Heinemann provides a survey of experiments focusing on central bank behavior and monetary policy. The authors point out the many advantages to central bankers of being able to "bench test" alternative policies in the small scale of the laboratory before rolling those policies out on the larger macro-economy, avoiding costly mistakes in the process. The survey begins with laboratory evidence in support of the effectiveness or "non-neutrality" of monetary policy. Experimental studies provide ample evidence for the non-neutrality of money owing to money illusion (nominal anchoring), adaptive learning behavior, and strategic uncertainty. These experimentally validated behavioral explanations for the effectiveness of monetary policies have been largely overlooked by researchers working in a purely rational choice framework. Cornand and
Heinemann next discuss laboratory studies where human subjects play the role of central bankers. They report that laboratory central bankers succumb to discretionary temptations despite reputational considerations, but that they tend to adopt heuristics for stabilizing the economy that approximate those estimated using field data, and that monetary policy committees generally make better decisions than individuals acting alone. The control provided by laboratory studies also enables evaluation of various policy rules as well as central bank strategies with regard to policy transparency and communication. Such studies confirm that the details of policy implementation and communication can matter for policy effectiveness, and that policymakers should consider the impact of their policies and announcements on private sector agents, who may update their beliefs in a non-Bayesian manner. Cornand and Heinemann also show how laboratory experiments can be useful for resolving questions of equilibrium selection that arise in various coordination game models of speculative currency attacks and bank runs. Finally, they helpfully point to some open research questions of interest to central bankers that could be addressed using laboratory experiments.

The article "Evolving Better Strategies for Central Bank Communication: Evidence from the Laboratory" by Jasmina Arifovic explores the role played by central bank communication in inflation and inflationary expectations in a Kydland–Prescott (1977) type economy. In such environments, discretionary monetary policies (i.e., those that are a best response to current conditions) are time-consistent but suboptimal; when the central bank attempts to optimally exploit an inflation–unemployment trade-off in each period, the result, under rational expectations, is no reduction in unemployment and only higher inflation, as the private sector perfectly anticipates the central bank's policy moves in each period. The dynamically optimal (but potentially time-inconsistent) Ramsey policy is to pre-commit to a constant monetary policy for all time, which eliminates the inflationary bias of a discretionary monetary policy. Arifovic studies the question of whether policymakers can do better than the suboptimal discretionary Nash equilibrium policy regime when they lack the ability to commit but can send nonbinding cheap-talk messages about their intended policy actions in advance of the formation of private sector inflation expectations. In her article, the central bank is modeled using an individual evolutionary learning algorithm that chooses both inflation policy and cheap-talk announcements. The private sector is populated by human subjects attempting to correctly guess the actual inflation rate or by an endogenously varying combination of human subjects and robot players;
the latter are programmed to (naively) forecast that inflation will precisely equal the central bank's announcement. The inflation forecasts of the private sector agents (humans and robots) then determine the realizations of actual inflation and unemployment according to the model's equations in this learning-to-forecast design. Arifovic reports that inflation and inflationary expectations are intermediate between the Nash and Ramsey equilibrium predictions of the model, indicating that cheap talk is an effective policy tool. Further, inflation is higher when the private sector is composed entirely of human subjects. When there are robot players who believe the central bank's announcements, there is more scope for the policy to be effective and consequently inflation is lower, but more variable. This article is the first-ever demonstration of robot–human subject interactions in the context of central bank decision making and provides a new platform on which central bank communication strategies might be evaluated in the future.

The article "Experimental Evidence on the Essentiality and Neutrality of Money in a Search Model" by John Duffy and Daniela Puzzello reports on experiments involving a version of Lagos and Wright's (2005) search model of exchange behavior with or without the existence of a fixed supply of "tokens" or fiat money. In the indefinitely repeated environment they implement in the laboratory, agents meet randomly in pairs in each period. One member of each pair is randomly designated as the consumer and the other as the producer. Pairs bargain over the quantity of a nonstorable good the producer can produce for the consumer and, in environments with money, over the amount of money the consumer gives to the producer in exchange. The utility benefit of consumption exceeds the cost of production, so there are gains from exchange. Following such decentralized pairwise meetings, all agents have the opportunity to participate in a centralized meeting that enables re-balancing of money holdings and can also serve as a coordinating mechanism for quantities and prices. Duffy and Puzzello use a within-subject, learning-to-optimize experimental design to consider whether the existence or absence of fiat money affects exchange behavior and welfare. They report that in economies that start out with a supply of fiat money, welfare is higher than when money is suddenly (and without prior notice) taken away. However, in economies that start out without money, exchange and welfare remain low even after the later surprise introduction of fiat money. They also use a within-subjects design to explore the impact of changes in the money supply on prices. They report that, consistent with the quantity theory of money, a doubling of the money supply leads to a doubling of the price level and no real effects
(i.e., money is neutral); however, in the reverse experiment, a reduction of the money supply by one half does not result in a decrease in prices and consequently has some real effects. This study is one of the first to consider the impact of changes in the quantity of fiat money on same-subject behavior, as opposed to the more commonly used between-subject experimental design.

In summary, controlled laboratory experimentation is a relatively new methodology for doing research in macroeconomics, though it has a much longer history of use in microeconomics. The best way to learn how to apply an existing methodology to a new field is by example, and the articles of this book, written by pioneers in the field of experimental macroeconomics, provide an excellent road-map for the interested researcher. While this is the first book devoted to experimental macroeconomics, my hope is that it will not be the last.

I am very grateful to all of the authors who participated in this project as well as the referees who provided timely and valuable reports on all of the articles in this volume. I also thank the series editors, R. Mark Isaac and Douglas A. Norton, for enthusiastically supporting a book on experimental macroeconomics from the outset, and Sarah Roughley at Emerald Publishing for putting it all together.
NOTES

1. For example, Friedman (1953, p. 10) writes: "Unfortunately, we can seldom test particular predictions in the social sciences by experiments explicitly designed to eliminate what are judged to be the most important disturbing influences. Generally, we must rely on evidence cast up by the 'experiments' that happen to occur." Regarding macroeconomics more specifically, Farmer (1999) asserts that "Unlike many of the natural sciences, in macroeconomics we cannot conduct experiments."
2. See Duffy (2014) for a broad survey of experimental macroeconomics.
REFERENCES

Carbone, E., & Hey, J. D. (2004). The effect of unemployment on consumption: An experimental analysis. Economic Journal, 114, 660–683.
Duffy, J. (2014). Macroeconomics: A survey of laboratory research. In J. H. Kagel & A. E. Roth (Eds.), Handbook of experimental economics (Vol. 2). Princeton, NJ: Princeton University Press (forthcoming).
Farmer, R. E. A. (1999). Macroeconomics. New York, NY: South-Western College Publishing.
Fréchette, G. R. (2011). Laboratory experiments: Professionals versus students. Working Paper. Department of Economics, NYU, SSRN. Retrieved from http://ssrn.com/abstract=1939219
Friedman, M. (1953). Essays in positive economics. Chicago, IL: Chicago University Press.
Kydland, F. E., & Prescott, E. C. (1977). Rules rather than discretion: The inconsistency of optimal plans. Journal of Political Economy, 85, 473–492.
Lagos, R., & Wright, R. (2005). A unified framework for monetary theory and policy analysis. Journal of Political Economy, 113, 463–484.
Marimon, R., & Sunder, S. (1994). Expectations and learning under alternative monetary regimes: An experimental approach. Economic Theory, 4, 131–162.
EXPERIMENTS ON EXPECTATIONS IN MACROECONOMICS AND FINANCE

Tiziana Assenza, Te Bao, Cars Hommes and Domenico Massaro

ABSTRACT

Expectations play a crucial role in finance, macroeconomics, monetary economics, and fiscal policy. In the last decade a rapidly increasing number of laboratory experiments have been performed to study individual expectation formation, the interactions of individual forecasting rules, and the aggregate macro behavior they co-create. The aim of this article is to provide a comprehensive literature survey on laboratory experiments on expectations in macroeconomics and finance. In particular, we discuss the extent to which expectations are rational or may be described by simple forecasting heuristics, at the individual as well as the aggregate level.

Keywords: Expectation feedback; self-fulfilling beliefs; heuristic switching model; experimental economics
Experiments in Macroeconomics
Research in Experimental Economics, Volume 17, 11–70
Copyright © 2014 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0193-2306/doi:10.1108/S0193-230620140000017002
MOTIVATION AND INTRODUCTION

Modeling of expectation formation is ubiquitous in modern macroeconomics and finance, where agents face intertemporal decisions under uncertainty. Households need to form expectations about future interest rates and housing prices when deciding whether to buy a new apartment and how to finance it with a mortgage loan. Similarly, fund managers need to form expectations about future stock prices to build an optimal investment portfolio. Based on their expectations, individual agents make their economic decisions, which then, via market clearing, determine the realization of the aggregate variables that agents attempt to forecast. The market/economy can thus be modeled as an expectation feedback system, where agents base their expectations on available information and past realizations of the variables, and the future realization of the aggregate variables depends on those expectations. How the individual expectation formation process is modeled is therefore of crucial importance.

Economists have different views of expectation formation. Since the seminal works of Muth (1961) and Lucas (1972), the rational expectations hypothesis (REH) has become the mainstream approach to modeling expectation formation. The REH assumes that agents use all available information and, on average, are able to make unbiased predictions of future economic variables, that is, without systematic errors. When all agents form rational expectations, the economy will reach the rational expectations equilibrium (REE), the solution obtained after substituting the REH condition into the model of the economy. In dynamic economic models, the REH means that expectations are assumed to be model-consistent. The REH has been applied widely because of its simplicity and the strong discipline it imposes on the number of free parameters, but it has been criticized for making highly demanding assumptions about agents' knowledge of the law of motion of the economy and about their computing capacity.

There is an alternative, behavioral view that assumes agents are boundedly rational. This view dates back to Simon (1957), and has been advocated more recently by Shiller (2000), Akerlof and Shiller (2009), Colander et al. (2009), De Grauwe (2009, 2012), Hommes (2013), and Kirman (2010), among others. A strong motivation for this approach is that expectation formation observed among participants in real markets does not seem to be rational. For example, Shiller (1990) and Case, Shiller, and Thompson (2012) find that during the housing market boom, investors in the US housing market expected housing prices to grow at an extremely high rate that cannot be supported by reasonable estimates of
fundamental factors, and they also expected an even higher long-run growth rate, although the growth was obviously not sustainable, and the market collapsed soon afterwards. An alternative theory of adaptive learning (Evans & Honkapohja, 2001; Sargent, 1993) has been developed, and there are also empirical works assuming agents have heterogeneous expectations (Branch, 2004; Brock & Hommes, 1997; Harrison & Kreps, 1978; Hommes, 2011; Xiong & Yan, 2010). The bounded rationality approach sometimes confirms that the market will converge to the rational expectations equilibrium when the equilibrium can be found by the agents via learning, but it often leads to non-RE equilibria (Bullard, 1994) or bubble-bust phenomena, for example, when evolutionary selection leads agents to use trend-extrapolating forecasting strategies (Anufriev & Hommes, 2012; Brock & Hommes, 1998). Parallel to the literature on bounded rationality, Soros (2003, 2009) used the terminology "reflexivity" to describe trading behavior in real financial markets, where the realized price goes up when the average expectation goes up, and agents have a tendency to ignore information about the fundamental value of assets and instead base speculative demand on trend-following expectations, which creates an intrinsic tendency for the market to destabilize.

Understanding the way agents form expectations is not only an interesting topic for academic discussion, but also a relevant issue for policy design. For example, if agents are able to form rational expectations and reach the REE immediately, there would be no need for policy makers to take into account time lags in the effect of a policy. On the other hand, if agents adjust their expectations adaptively, it would be very important for policy makers to know how quickly agents can learn the effect of the policy in order to decide the optimal timing of implementation.

While expectation formation plays a very important role in modern dynamic macroeconomic modeling, there are usually no empirical data on agents' expectations. Occasionally survey data on expectations of future macroeconomic variables (e.g., inflation) are available, but the surveys typically pay a fixed reward, which generates no incentive to provide an answer with careful consideration. When data on expectations are missing, empirical work on dynamic macroeconomic models faces the difficulty of "testing joint hypotheses": namely, when a model is rejected, it is not clear whether this is because of misspecification of the model or an incorrect assumption about the expectation formation rules. In this article, we review laboratory experiments with human subjects on expectation formation in dynamic macroeconomic and asset pricing environments. The advantages of using data from the lab include: (1) the agents' expectations are explicitly
elicited and properly incentivized, which makes expectation formation directly observable; (2) while it is very difficult to find the "true model" of the real macroeconomy, and hence the rational expectations equilibrium/equilibria, with empirical data, the model of the experimental economy is fully known and controlled by the experimenter; and (3) it is very difficult to get empirical data on expectations of macroeconomic variables with a large number of observations or at high frequency, while in experiments it is easy to elicit expectations for many periods within a short period of time.

This article surveys three types of macroeconomic experiments1 with elicitation of expectations/forecasts.

(i) Experiments where agents predict time series from field data or generated by a random process (e.g., a random walk). These experiments show mixed results as to whether agents are able to form rational expectations about exogenously generated time series. Some studies show that agents are able to form optimal predictions (e.g., use naive expectations to predict a random walk process), while others find that agents' forecasting behavior is better explained by models of bounded rationality (e.g., models with "regime shifting beliefs").

(ii) Learning-to-Forecast Experiments (LtFEs), an experimental design that dates back to a series of papers by Marimon and Sunder (1993, 1994, 1995) and Marimon, Spear, and Sunder (1993). The key feature of the design is that the subjects of the experiment play the role of professional forecasters, whose only task is to submit their expectation about an economic variable, for example, the market price. After collecting individual forecasts, the conditionally optimal quantity decisions (e.g., production, trading, and saving) of the agents are calculated by a computer program, for example, derived from utility and profit maximization, and these then determine the aggregate variables, for example, the market price, via market clearing. Unlike the experiments in (i), the time series in the LtFEs are a function of the agents' expectations: the LtFEs are forecasting experiments with feedback. A general conclusion from this literature, to be discussed below, is that agents learn to play the REE when the market is a negative feedback system, where the realized value of the economic variable (e.g., the price) is low when the average expectation is high (as in a traditional cobweb market), but agents fail to learn the rational expectations equilibrium when the market is a positive feedback system, where the realized value is high when the average expectation is high (as in a speculative asset market).

(iii) Works that compare the LtFEs with the Learning-to-Optimize Experiments (LtOEs) design, where the subjects submit their quantity decisions directly. The main conclusion is that the central result of the LtFEs is robust: under the LtOE design there is also a tendency for negative feedback markets to
converge to the REE and for positive feedback markets to deviate from it. When a learning-to-forecast experiment is re-run using the learning-to-optimize design, the aggregate market outcome deviates further from the REE: the negative feedback markets converge more slowly to the REE (i.e., after a larger number of periods), and the positive feedback markets experience more severe boom-bust cycles.

Within the experimental economics literature, there is a parallel literature in microeconomics and game theory on belief/expectation elicitation in games. Since in many game-theoretic models people form a belief/expectation about their opponents' actions before choosing their own actions, when agents deviate from the optimal response to their opponent it is important to understand whether this is because they do not form the right belief about their opponent or because they fail to make decisions that are conditionally optimal given their belief. Important studies in this field include those by Nyarko and Schotter (2002), Rutström and Wilcox (2009), Blanco, Engelmann, Koch, and Normann (2010), and Gächter and Renner (2010); see the recent survey by Schotter and Trevino (2014). The main conclusion is that belief elicitation provides a lot of useful information about agents' decision processes, but it can be intrusive to the decisions themselves.

Unlike subjects in the belief elicitation experiments in game theory, subjects in the LtFEs form an expectation about the aggregate market outcome, because they are unable to recognize who their direct opponents are. Game theory experiments typically do not elicit the forecast alone without also asking for agents' decisions or actions, while many LtFEs ask only for the forecast. Moreover, many macroeconomic models also assume that the market is fully competitive and individuals have no market power. For this reason, subjects in LtFEs are typically paid according to their prediction accuracy instead of the profit from the quantity decision, so that they have no incentive to use market power to manipulate the price. Furthermore, macroeconomic experiments also have a larger group size (6–12 in each experimental market) than game theory experiments (2–6 in each group) in order to mimic a competitive market. Finally, another characteristic feature of many macro experiments is that subjects typically have only qualitative information about the economy, for example, whether the feedback structure is positive or negative, but lack detailed quantitative knowledge about the law of motion of the economy.

The article is organized as follows. The section "Predicting Exogenous Processes and Field Data" reviews forecasting experiments of field data or exogenous stochastic processes whose realizations are not affected by forecasts. The section "Learning-to-Forecast" reviews LtFEs with expectation
feedbacks, where realizations of aggregate variables depend upon expectations of a group of individuals. The section “Learning to Optimize” compares learning-to-forecast and learning-to-optimize experiments. Finally, “Concluding Remarks” concludes.
PREDICTING EXOGENOUS PROCESSES AND FIELD DATA

Early work on laboratory experiments on individual forecasting behavior focused on exogenously generated time series, taken either from real-world market data or from simple stochastic processes. In this setup there is no feedback from forecasts: it is like predicting the weather, where forecasts do not affect the probability of rain or the laws of motion of the atmosphere. An advantage of real-world data is obviously its realism and relevance for economic forecasting. However, in this framework defining rational expectations is not immediately obvious, as the data-generating process of the real-world time series is not known. If, on the other hand, the time series to be forecasted is generated by a (simple) exogenous stochastic process that is known to the experimenter, deviations from rational, model-consistent expectations can be measured more easily. This section reviews a number of early contributions in the experimental literature on how agents behave when forecasting an exogenous process.
Forecasting Field Data

One of the first contributions, by Richard Schmalensee (1976), building on earlier work of Fisher (1962), goes back to the late 1970s. The author describes an experimental analysis of how subjects form expectations of deflated British wheat prices. In particular, Schmalensee focuses on the effects of turning points on forecasting behavior. Each participant could observe 25 realizations of the time series and was told that the series referred to actual yearly wheat prices in a period with no major political changes. Subjects could observe plots of the time series and of five-year averages (periods 1–5, 2–6, …, 21–25). The individuals had to provide their best forecast for the next five-year average (i.e., periods 26–30).2 The article aims at investigating whether turning points in the time series are points at which important changes in expectation formation take place. In order to
conduct the analysis, the author applies alternative expectation formation rules, including trend-following and adaptive expectations rules, and analyzes whether there are differences in agents' behavior around turning-point periods. Schmalensee finds that the adaptive model performs much better than the extrapolative model. Similar to Fisher (1962), Schmalensee finds that turning points in the time series are special to the participants in the experiment. He introduces in his analysis a parameter that captures the speed of response in the adaptive model (in order to check whether this parameter changes at turning points) and finds that the parameter drops during turning points.

More recently, Bernasconi, Kirchkamp, and Paruolo (2009) conducted a laboratory experiment to study expectations about fiscal variables using real-world time series data from 15 European countries. The authors use an estimated VAR of the real-world data as a benchmark for comparison. Participants in the experiment are shown graphical representations of annual data (as percentages of GDP) of gross total taxes ($T_t$), total public expenditure ($G_t$), public debt ($B_t$), and the change in the debt level ($\Delta B_t = B_t - B_{t-1}$). Subjects are aware that they are observing data from European countries, but they have no information about which country they are observing or the time period. At the beginning of the experiment subjects observe the first seven realizations of the time series (for most of the countries this coincides with the period between 1970 and 1976); then they have to give their forecast for the next period, repeatedly, until the end of the time series (in 1998). Once the first run ended, each participant was randomly assigned to another country. The authors conduct three different treatments. The first is the benchmark treatment, where participants are asked to predict both $T_t$ and $G_t$. The second treatment is labeled the "neutral" treatment, where the time series are the same as in the baseline treatment but any economic framing is removed and the time series are just labeled "A" and "B." The neutral treatment is useful to check whether the participants have understood the economic context of the baseline treatment and whether this is helpful in forecasting. Finally, the third treatment is a "control" treatment where subjects have to predict $T_t$ only; this treatment is useful to understand whether forecasting two variables simultaneously is too demanding for subjects, and whether asking them to predict only one variable improves their forecasting performance. By comparing expectations schemes in the experimental data with the estimated VAR benchmark, the authors find that subjects violate the REH and that their expectations follow an "augmented-adaptive" model. That is, subjects do not always
follow a pure, univariate adaptive expectations scheme, but other (e.g., fiscal) variables enter the adaptive updating rule. They also find that forecasts in the neutral case are less precise, so that economic context improves forecasting performance in this setting.
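To make the "augmented-adaptive" scheme concrete, a pure adaptive rule updates the forecast using only the agent's own past forecast error, while the augmented rule lets other observed fiscal variables enter the update. A minimal sketch for the tax forecast (the gain $\lambda$, the coefficient $\beta$, and the choice of $\Delta B_{t-1}$ as the augmenting variable are illustrative assumptions, not the authors' estimated specification):

$$T^e_t = T^e_{t-1} + \lambda\left(T_{t-1} - T^e_{t-1}\right) + \beta\, \Delta B_{t-1}$$

Setting $\beta = 0$ recovers the pure, univariate adaptive scheme; the experimental finding above corresponds to $\beta \neq 0$, that is, subjects also condition on other fiscal variables.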
Forecasting Exogenous Stochastic Time Series

There have been quite a number of experiments on forecasting behavior for time series generated by an exogenous stochastic process. Perhaps the simplest example has been investigated by Dwyer, Williams, Battalio, and Mason (1993), where the time series were generated by a simple random walk, that is,

$$x_t = x_{t-1} + \epsilon_t \qquad (1)$$

where $\epsilon_t$ are IID stochastic shocks. Subjects were asked to forecast different series of "events" and were informed that past observations were informative in order to learn the exogenous generating process of the time series and that "forecasts have NO effect" on the realized values. Dwyer et al. did not find that participants were affected by systematic bias or that they were using available information inefficiently. They conclude that subjects' forecasts can be reasonably described by the rational expectation plus or minus a random error, and thus they find support for rational expectations. For a random walk, the rational forecast coincides with naive expectations, that is, the forecast equals the last observation. Apparently, with time series generated by a random walk, subjects learn to use the simple naive forecast, which coincides with the rational forecast.

Hey (1994) conducted a laboratory experiment in which subjects were asked to give forecasts for stochastic AR(1) time series3

$$x_t = 50 + \rho\,(x_{t-1} - 50) + \epsilon_t \qquad (2)$$

all with the same mean value of 50, but with different persistence coefficients $\rho$, where $\epsilon_t$ are IID stochastic shocks from a normal distribution with mean 0 and variance 5. Agents, at any time during the experiment, could choose in which form past values of the time series were displayed, that is, in tabular or graphical representation or both, and the time window. The main focus of the article is to analyze whether agents use rational or adaptive expectations. In contrast to Dwyer et al. (1993), who found that rational expectations perform quite well, the main finding
of Hey's paper is that "subjects were trying to behave rationally, but frequently in a way that appears adaptively." The author estimated a general expectation rule and conducted an F test on the coefficients. In this way, the subjects could be categorized as users of adaptive expectations, users of rational expectations, or users of a mixture of the two. The distribution of the subjects over the rules differs depending on which time series the subjects had to predict (series 1, 2, or 3).

Beckman and Downs (2009) conducted experiments where subjects also had to forecast a random walk as in Eq. (1), but with varying levels of the noise $\epsilon_t$. Each participant took part in four treatments, with the noise drawn from uniform distributions of different size, and had to provide 100 predictions for each noise level. Once the experimental data were collected, the authors compared them with a survey among professional forecasters conducted by the Philadelphia Federal Reserve Bank. The main finding of the article is that, for both the experimental data and the survey data, as the variance of the random walk increases, deviations from the theoretically correct prediction strategy (i.e., naive expectations) increase as well. Indeed, a 1% increase in the standard deviation of the random error implies a 0.9% increase in the standard deviation of forecasts around the rational expectations rule.

Bloomfield and Hales (2002) conducted an experiment in which MBA students were shown time series generated by a random walk process. They used the data to test the "regime shifting beliefs" model of Barberis, Shleifer, and Vishny (1998), which predicts that individuals use the number of past trend reversals to evaluate the likelihood of future reversals. Their results support the model: the subjects did not seem to perceive the random walk time series as randomly generated, and tended to predict more reversals if they had experienced more reversals in the past. However, Asparouhova, Hertzel, and Lemmon (2009) found evidence against "regime shifting beliefs" models in favor of the "law of small numbers" of Rabin (2002), namely, that subjects do not predict a continuation of the current streak when the streak is longer.

Kelley and Friedman (2002) consider learning in an Orange Juice Futures price forecasting experiment, where subjects must learn the coefficients of two independent variables in a stationary linear stochastic process. They find that learning is fairly consistent with respect to the objective values, but with a slight tendency toward over-response. Moreover, learning is noticeably slower than under adaptive learning. Two striking treatment effects are tendencies toward over-response with high background noise and under-response with asymmetric coefficients.
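The benchmark logic in these exogenous-process studies is easy to reproduce. The following sketch, a hypothetical illustration rather than a replication of any of the designs above, simulates the random walk of Eq. (1) and the AR(1) process of Eq. (2) and compares the mean squared error (MSE) of naive forecasts against the rational conditional-mean forecast; the persistence value and sample length are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000

# Eq. (1): random walk x_t = x_{t-1} + eps_t.
# Here the rational one-step-ahead forecast IS the naive forecast x_{t-1}.
x_rw = 50 + np.cumsum(rng.normal(0.0, 1.0, T))
mse_naive_rw = np.mean((x_rw[1:] - x_rw[:-1]) ** 2)

# Eq. (2): x_t = 50 + rho*(x_{t-1} - 50) + eps_t, eps_t ~ N(0, 5) as in Hey (1994).
# Now the rational forecast is 50 + rho*(x_{t-1} - 50); the naive forecast
# ignores mean reversion and is no longer optimal.
rho = 0.5                                # illustrative persistence coefficient
x = np.empty(T)
x[0] = 50.0
eps = rng.normal(0.0, np.sqrt(5.0), T)   # variance 5 => standard deviation sqrt(5)
for t in range(1, T):
    x[t] = 50 + rho * (x[t - 1] - 50) + eps[t]

rational = 50 + rho * (x[:-1] - 50)
print("random walk, naive forecast MSE:", round(mse_naive_rw, 2))
print("AR(1), rational forecast MSE:  ", round(float(np.mean((x[1:] - rational) ** 2)), 2))
print("AR(1), naive forecast MSE:     ", round(float(np.mean((x[1:] - x[:-1]) ** 2)), 2))
```

For the AR(1) process the rational MSE is close to the shock variance (5), while the naive MSE is larger, which is the margin along which "adaptive-looking" behavior can be detected in such data.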
Becker, Leitner, and Leopold-Wildburger (2009) conducted a laboratory experiment in which participants had to predict three time series subject to regime switching. First, a stationary stochastic time series with integer values was generated, after which regime switches were applied by adding a different constant mean in different subperiods. The main focus of the article is to explain the average forecasts by means of the bound and likelihood heuristics model (B&L heuristic hereafter) of Becker and Leopold-Wildburger (1996), according to which two features of the time series are most important in forecasting, namely turning points and the average variation. The authors find that after a regime switch the agents' forecasts show a higher variance and less accuracy for several periods following the structural break, and the heuristic performs slightly better than the Rational Expectations Hypothesis. In order to explain the average forecast, the B&L heuristic and the Rational Expectations Hypothesis are applied to the three different treatments of the experiment. The authors find that if the periods immediately after a break are treated as a transition phase, then the B&L heuristic explains subjects' forecasting behavior even when the time series is affected by several breaks; indeed, individuals retain a memory of the pre-break periods.

Beshears, Choi, Fuster, Laibson, and Madrian (2013) ask subjects to forecast an autoregressive integrated moving average (ARIMA) process with short-run momentum and long-run mean reversion. Subjects make forecasts at different time horizons. They find that subjects have difficulty in correctly perceiving the degree of mean reversion, especially when the process mean-reverts slowly.
LEARNING-TO-FORECAST

In this section we consider Learning-to-Forecast Experiments (LtFEs) with human subjects. The Learning-to-Forecast design was pioneered in a series of papers by Marimon and Sunder (1993, 1994, 1995) and Marimon et al. (1993) within dynamic Overlapping Generations Models; an earlier survey of LtFEs is given in Hommes (2011). Subjects have to forecast a price whose realization depends endogenously on their average forecast. The key difference from the previous section is the expectations feedback in these systems. Subjects are forecasting within a self-referential system: their individual forecasts affect and co-create aggregate behavior,
which then leads to adaptations of individual forecasts. The main goal of these experiments is to study how, within a dynamic self-referential economic system, individual expectations are formed, how these interact, and which structure emerges at the aggregate level. Will agents coordinate on a common forecast, and will the price converge to the rational expectations benchmark, or will other, learning equilibria arise? As already noted in Muth's classical paper introducing rational expectations, a crucial feature for the aggregation of individual expectations is whether the deviations of individual expectations from the rational forecast are correlated or not. To quote Muth (1961, p. 321, emphasis added):

Allowing for cross-sectional differences in expectations is a simple matter, because their aggregate effect is negligible as long as the deviation from the rational forecast for an individual firm is not strongly correlated with those of the others. Modifications are necessary only if the correlation of the errors is large and depends systematically on other explanatory variables.
Laboratory experiments are well suited to study correlation of individual expectations in a controlled self-referential environment. It turns out that the type of expectations feedback, positive or negative, is crucial. In general, the market price quickly converges to the REE in the negative feedback markets, and fails to converge in the positive feedback markets.
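The role of the feedback sign can be illustrated with a stylized linear expectation feedback map iterated under naive expectations. This is a deliberately simple sketch with made-up parameters, not the actual law of motion of any LtFE: both maps below share the REE $p^* = a/(1-b) = 60$, but the negative feedback map converges there within a few periods, while the positive feedback map lingers far from it.

```python
import numpy as np

def simulate(a, b, p0, periods=50):
    """Iterate the feedback map p_t = a + b * pe_t under naive
    expectations pe_t = p_{t-1}. The REE is p* = a / (1 - b)."""
    path = [p0]
    for _ in range(periods):
        path.append(a + b * path[-1])
    return np.array(path)

neg = simulate(a=90.0, b=-0.5, p0=50.0)  # negative feedback (cobweb-like), p* = 60
pos = simulate(a=3.0, b=0.95, p0=50.0)   # positive feedback (asset-market-like), p* = 60

for k in (1, 5, 10, 25):
    print(f"period {k:2d}: |neg - 60| = {abs(neg[k] - 60):7.4f}, "
          f"|pos - 60| = {abs(pos[k] - 60):7.4f}")
```

In the experiments, trend-extrapolating subjects amplify the positive feedback case further, producing the oscillations and bubbles described below.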
Asset Pricing Experiments

This section reviews two closely related LtFEs by Hommes, Sonnemans, Tuinstra, and van de Velden (2005, 2008) on a speculative asset market. These experiments are based on a dynamic asset pricing model (e.g., Campbell et al., 1997), where the investor allocates his wealth between two assets. One asset is riskless and pays a fixed gross return $R$; the other is risky, paying an uncertain dividend $y_t$ in each period, where $y_t$ is i.i.d. with mean dividend $\bar{y}$. The price of the risky asset is determined by market clearing, and the supply of the asset is normalized to 0. The demand for the risky asset by individual $i$ in period $t$ is denoted by $z_{i,t}$. This demand function is derived from mean-variance maximization of next period's expected wealth:

$$\max_{z_{i,t}} U_{i,t+1}(z_{i,t}) = \max_{z_{i,t}} \left\{ E_{i,t}\!\left[W_{i,t+1}(z_{i,t})\right] - \frac{a}{2}\, V_{i,t}\!\left(W_{i,t+1}(z_{i,t})\right) \right\} \qquad (3)$$
E_{i,t} and V_{i,t} are the subjective beliefs of agent i about the mean and variance of next period's wealth. The maximization problem can be rewritten in terms of the demand z_{i,t} as

  \max_{z_{i,t}} \Big\{ z_{i,t}\, E_{i,t}(p_{t+1} + y_{t+1} - R p_t) - \frac{a \sigma^2 z_{i,t}^2}{2} \Big\} \qquad (4)
where we assumed homogeneous and constant beliefs about the variance of excess returns, that is, V_{i,t}(p_{t+1} + y_{t+1} - R p_t) = \sigma^2 for all agents. The optimal demand is:

  z_{i,t} = \frac{E_{i,t} p_{t+1} + \bar{y} - R p_t}{a \sigma^2} \qquad (5)
Imposing the market clearing condition:

  \sum_i z_{i,t} = \sum_i \frac{E_{i,t} p_{t+1} + \bar{y} - R p_t}{a \sigma^2} = z^s_t = 0 \qquad (6)
The market clearing price then becomes:

  p_t = \frac{1}{R}\big(\bar{p}^e_{t+1} + \bar{y}\big) + \varepsilon_t \qquad (7)

where \bar{p}^e_{t+1} = \sum_i E_{i,t} p_{t+1} / I is the average prediction of the investors (I = 6 in the experiments) and ε_t ∼ NID(0, 1) is a small i.i.d. normal noise term added to the pricing equation (representing, e.g., a small fraction of noise traders). Both experiments use the parameter setting R = 1 + r = 21/20 (equivalently, the risk-free interest rate is 5%) and \bar{y} = 3. Substituting the rational expectations condition, and ignoring the noise term ε_t with zero mean, the REE or fundamental price of the experimental markets is p^f = \bar{y}/r = 60. In both experiments, the subjects play the role of investment advisors to pension funds and repeatedly submit a price forecast for period t + 1. The market price at t is a function of the average price forecast of the subjects. The key difference between Hommes et al. (2005) and Hommes et al. (2008) is the presence, in the former, of a computerized fundamental robot trader that always trades based upon the forecast that the price equals its fundamental value. The fundamental trader acts as a "far from equilibrium stabilizing force," pushing
prices back toward the fundamental value. More precisely, Hommes et al. (2005) used the pricing rule

  p_t = \frac{1}{R}\big[(1 - n_t)\bar{p}^e_{t+1} + n_t p^f + \bar{y}\big] + \varepsilon_t \qquad (8)

with the weight assigned to the robot trader n_t given by

  n_t = 1 - \exp\Big(-\frac{1}{200}\big|p_{t-1} - p^f\big|\Big) \qquad (9)
The fraction of the fundamental robot trader is 0 at the fundamental price p^f = 60 and becomes larger when the price deviates further from the fundamental price, with an upper limit of 0.26.4 Fig. 1 illustrates market prices and individual forecasts in three groups in Hommes et al. (2005). Prices differ markedly from the rational expectations equilibrium benchmark p^f = 60. Within the same treatment three different types of aggregate price behavior are observed: (i) slow monotonic convergence to the fundamental price (top panel), (ii) persistent oscillations (middle panel), and (iii) dampened price oscillations (bottom panel). Another striking feature is that individual expectations are strongly coordinated, despite the fact that subjects have no information about other subjects' forecasts and communicate only through the observed price realization. Hommes et al. (2008) ran similar experiments without fundamental robot traders; that is, the realized price is generated by Eq. (7), where \bar{p}^e_{t+1} = \sum_i E_{i,t} p_{t+1} / I is the average prediction of the six subjects in the experiment. Fig. 2 illustrates the aggregate price behavior in six groups when there are no robot traders in the market. The market price can rise to very high levels, sometimes more than 10 times the REE, before it crashes. These bubbles are driven by strong coordination of individual expectations on trend-following behavior.
Heuristics Switching Model

Prices in the asset pricing LtFEs clearly deviate from the RE benchmark. What would be a good theory of expectations that fits these laboratory data? In order to explain all of the observed patterns (convergence, persistent oscillations, dampened oscillations, and large bubbles and crashes), heterogeneity of expectations may play a key role.
Fig. 1. Market Prices, Individual Forecasts, and Forecasting Errors in Three Groups in Hommes et al. (2005). The Rational Expectation Equilibrium (Flat Line) is pf = 60. Three Different Types of Aggregate Price Behavior Are Observed: Slow Monotonic Convergence to the Fundamental Price (Top Panel), Persistent Oscillations (Middle Panel), and Dampened Price Oscillations (Bottom Panel). Individual Expectations Are Strongly Coordinated.
Fig. 2. The Market Prices in Six Different Groups (Top Panel) in Hommes et al. (2008). Without Fundamental Robot Traders in the Market Large Bubbles and Crashes Arise, Due to Strong Coordination of Individual Forecasts (Bottom Panel).
Anufriev and Hommes (2012) have developed a Heuristics Switching Model (HSM), extending the heterogeneous expectations framework of Brock and Hommes (1997, 1998) to fit the experimental data. Agents choose from a list of simple rules of thumb to form predictions, for example, naive expectations,
adaptive expectations, or trend-following rules, and choose their forecast rule based upon its relative success. There is thus evolutionary selection of rules: heuristics that performed better in the recent past attract more followers in the future. Hommes et al. (2005) and Heemeijer, Hommes, Sonnemans, and Tuinstra (2009) estimated linear forecasting rules for the individual forecasts and showed that simple linear rules with only one or two time lags fit individual forecasting behavior quite well. Based on these estimations they could classify the rules into simple classes, such as adaptive or trend-following expectations. Anufriev and Hommes (2012) fitted an HSM with only four heuristics (a code sketch follows the list):

• Adaptive expectations (ADA): p^e_{t+1,1} = p^e_{t,1} + 0.65(p_{t-1} - p^e_{t,1}).
• Weak trend rule (WTR): p^e_{t+1,2} = p_{t-1} + 0.4(p_{t-1} - p_{t-2}).
• Strong trend rule (STR): p^e_{t+1,3} = p_{t-1} + 1.3(p_{t-1} - p_{t-2}).
• Learning, Anchoring, and Adjustment heuristic (LAA): p^e_{t+1,4} = 0.5(p^{av}_{t-1} + p_{t-1}) + (p_{t-1} - p_{t-2}).
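To make the mechanics of the model concrete, the following minimal Python sketch implements the four heuristics together with the performance-based switching rule given in Eqs. (10) and (11) below, using the benchmark parameters β = 0.4, η = 0.7, δ = 0.9 reported later in this section. The sketch simplifies the experiment's two-period-ahead timing to one-period-ahead and uses the pricing rule (7) without robot traders; it is an illustration of the mechanism, not a replication of the published simulations.

```python
import numpy as np

R, ybar = 21 / 20, 3.0                    # asset market parameters
beta, eta, delta = 0.4, 0.7, 0.9          # benchmark HSM parameters

def hsm_simulation(T=50, seed=0):
    rng = np.random.default_rng(seed)
    p = [55.0, 56.0]                      # arbitrary initial prices
    fc = np.full(4, 56.0)                 # previous forecasts of the 4 rules
    U = np.zeros(4)                       # fitness measures, Eq. (10)
    n = np.full(4, 0.25)                  # fractions of followers, start uniform
    for _ in range(2, T):
        last, prev, avg = p[-1], p[-2], np.mean(p)
        fc = np.array([
            fc[0] + 0.65 * (last - fc[0]),        # ADA
            last + 0.4 * (last - prev),           # WTR
            last + 1.3 * (last - prev),           # STR
            0.5 * (avg + last) + (last - prev),   # LAA
        ])
        price = (n @ fc + ybar) / R + rng.normal()   # pricing rule, Eq. (7)
        U = -(price - fc) ** 2 + eta * U             # fitness update, Eq. (10)
        w = np.exp(beta * (U - U.max()))             # overflow-safe logit weights
        n = delta * n + (1 - delta) * w / w.sum()    # switching rule, Eq. (11)
        p.append(price)
    return np.array(p), n
```

Depending on the initial prices and the realized noise, the fractions n self-organize so that either the stabilizing ADA rule or the oscillation-sustaining LAA rule comes to dominate, which is the mechanism behind the different aggregate patterns discussed below.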
The LAA rule builds on the anchoring-and-adjustment heuristic of Tversky and Kahneman (1974); p^{av}_{t-1} stands for the sample average of past market prices up to period t − 1. The key difference between the LAA and the simple trend-following rules lies in the anchor from which price extrapolations occur. For the simple trend-following rules, the anchor is simply the last observed price. Such a rule can easily follow a price trend, but performs poorly at turning points. In contrast, the LAA rule uses an average of the last observed price p_{t-1} and the long-run sample average p^{av}_{t-1}, which serves as a proxy of the average price level. By giving 50% weight to the long-run average, the LAA heuristic predicts turning points when the price moves away from its fundamental or long-run value. In the HSM, consistent with the incentives in the LtFEs, the performance of heuristic h, h ∈ {1, 2, 3, 4}, is measured by the squared prediction error of the rule in each period t:

  U_{t,h} = -(p_t - p^e_{t,h})^2 + \eta U_{t-1,h} \qquad (10)

n_{h,t} is the fraction of agents using heuristic h, with \sum_h n_{h,t} = 1, and η ∈ [0, 1] is a parameter measuring memory. The updating rule for the weights given to forecast strategy h is given by a discrete choice model with asynchronous updating:

  n_{t,h} = \delta n_{t-1,h} + (1 - \delta)\,\frac{\exp(\beta U_{t-1,h})}{\sum_{i=1}^{4} \exp(\beta U_{t-1,i})} \qquad (11)
β ≥ 0 is the intensity of choice parameter: the larger β, the more quickly agents switch to heuristics that performed well in the recent past. δ ∈ [0, 1] is a parameter for inertia. With the benchmark parameter setting β = 0.4, η = 0.7, δ = 0.9, the model fits each of the three patterns in the data of Hommes et al. (2005) remarkably well. Fig. 3 shows the actual and simulated one-step-ahead forecasts of market prices using the HSM in Anufriev and Hommes (2012). These simulations show that convergence to the REE is driven by coordination of subjects on the stabilizing adaptive expectations rule. In markets with persistent oscillations, evolutionary selection leads most subjects to coordinate on the LAA rule, with almost 90% of subjects eventually using it. In the market with dampened oscillations the dominating forecasting strategy changes over time: initially most subjects use the strong trend rule, a dominating LAA rule follows in the middle phase, and most subjects eventually switch to the stabilizing adaptive expectations rule in the final periods.

As for the price bubbles in Hommes et al. (2008), there is also a competing theory to explain the emergence of large bubbles. Hommes et al. (2008) discussed the possibility of "rational bubbles," as in Tirole (1982), where the price grows at the same rate as the interest rate. They find, however, that in most markets (4 out of 6) the growth rate of the price is much higher than the interest rate (r = R − 1). Hüsler et al. (2013) explore the possibility that the bubbles in this experiment can be described as "super-exponential bubbles," with an accelerating growth rate of the price. They discuss two possibilities: (1) the growth rate is an increasing function of the price deviation from the fundamental price,

  \log\frac{p_t}{p_{t-1}} = a + b\,\tilde{p}_{t-1}, \quad \text{where } \tilde{p}_{t-1} = p_{t-1} - p^f

meaning that a larger price deviation makes investors overly optimistic/pessimistic, so that they expect the deviation to grow faster than the interest rate; (2) the growth rate is an increasing function of the growth rate in the last period,

  \log\frac{p_t}{p_{t-1}} = c + d\,\log\frac{p_{t-1}}{p_{t-2}}

meaning that it is the return rate, rather than the price level, that makes investors overly optimistic/pessimistic. They run estimations and find that specification (1) provides the best description of the experimental data.
Fig. 3. Left Panel: Top of Each Graph: The Actual and Simulated Market Prices (Hollow Squares) Using the HSM Model (Dots) in a Typical Market of Each Treatment in Anufriev and Hommes (2012); Bottom of Each Graph: Simulated Price Predictions (Main Figure) and Variances (Subfigure) by the Four Rules in the HSM: ADA for Adaptive Expectations, WTR for Weak Trend Rule, STR for Strong Trend Rule, and LAA for Learning, Anchoring and Adjustment Rule. Right Panel: The Simulated Fraction of Agents Who Use Each Kind of Heuristics (Hollow Circle for ADA, Hollow Square for WTR, Square for STR and + for LAA) by the HSM in Anufriev and Hommes (2012).
Positive versus Negative Feedback

The asset pricing experiments are characterized by positive expectations feedback: an increase of the average forecast, or of an individual forecast, causes the realized market price to rise. Heemeijer et al. (2009) investigate how the expectations feedback structure affects individual forecasting behavior and aggregate market outcomes by considering market environments that differ only in the sign of the expectations feedback but are equivalent along all other dimensions. The realized price is a linear map of the average of the individual price forecasts p^e_{i,t} of six subjects. The (unknown) price generating rules in the negative and positive feedback systems were respectively:5

  p_t = 60 - \frac{20}{21}\Big[\frac{1}{6}\sum_{h=1}^{6} p^e_{h,t} - 60\Big] + \varepsilon_t \qquad \text{(negative feedback)} \qquad (12)

  p_t = 60 + \frac{20}{21}\Big[\frac{1}{6}\sum_{h=1}^{6} p^e_{h,t} - 60\Big] + \varepsilon_t \qquad \text{(positive feedback)} \qquad (13)

where ε_t is a (small) exogenous random shock to the pricing rule. The positive and negative feedback systems (12) and (13) have the same unique RE equilibrium steady state p* = 60 and differ only in the sign of the expectations feedback map. Both are linear near-unit-root maps, with slopes −20/21 ≈ −0.95 and +20/21 ≈ +0.95, respectively.6 Fig. 4 (top panels) illustrates the dramatic difference between the negative and positive expectations feedback maps. Both have the same unique RE fixed point, but for the positive feedback map the graph almost coincides with the diagonal, so that every point is almost a steady state. Under near-unit-root positive feedback, as is typical in asset pricing models, each point is in fact an almost self-fulfilling equilibrium. Will subjects in LtFEs be able to coordinate on the unique RE fundamental price, the only equilibrium that is perfectly self-fulfilling?

Fig. 4 (bottom panels) shows realized market prices as well as six individual predictions in two typical groups. Aggregate price behavior is very different under positive than under negative feedback. In the negative feedback case, the price settles down to the RE steady state price of 60 relatively quickly (within 10 periods), but in the positive feedback treatment the market price does not converge and instead oscillates around its fundamental value. Individual forecasting behavior is also very different: in the case of positive feedback, coordination of individual forecasts occurs extremely fast, within 2–3 periods. The coordination, however, is on a "wrong," that is, non-RE, price around 30, and thereafter the price starts oscillating. In contrast, in the negative feedback case coordination of individual forecasts is slower and takes about 10 periods. Greater heterogeneity of individual forecasts, however, ensures that the realized price quickly converges to the RE benchmark of 60 (within 5–6 periods), after which individual predictions coordinate on the correct RE price.

In his seminal paper introducing RE, Muth (1961) considered a negative expectations feedback framework, the cobweb "hog-cycle" model. Previous LtFEs on cobweb models show that under negative expectations feedback, heterogeneity of individual forecasts around the rational forecast persists in the first 10 periods, correlated individual deviations from the RE fundamental forecast do not arise (in line with Muth's observations as quoted in the Introduction), and the realized market price converges quickly to the RE benchmark.
Fig. 4. Negative (Left Panels) versus Positive (Right Panels) Feedback Experiments. Linear Feedback Maps (Top Panels) Share the Unique RE Price at the Fixed Point 60. The Positive Feedback Map Is Very Close to the Diagonal and Therefore Has a Continuum of Almost Self-Fulfilling Steady State Equilibria. Realized Market Prices (Upper Parts of Bottom Panels), Six Individual Predictions (Middle Parts), and Individual Errors (Bottom Parts). In the Negative Expectations Feedback Market the Realized Price Quickly Converges to the RE Benchmark 60. In All Positive Feedback Markets Individuals Coordinate on the "Wrong" Price Forecast and as a Result the Realized Market Price Persistently Deviates from the RE Benchmark 60.
In contrast, in an environment with positive expectations feedback, the LtFEs show that, within 2–3 periods, individual forecasts become strongly coordinated and all deviate from the rational, fundamental forecast. As a result, in positive expectations feedback markets the market price may, at the aggregate level, persistently deviate from
the rational, fundamental price. Individual forecasts then coordinate on an almost self-fulfilling equilibrium, very different from the perfectly self-fulfilling RE price.7 Coordination on almost self-fulfilling equilibria has also been obtained in laboratory experiments in a Lucas asset pricing model (Asparouhova, Bossaerts, Eguia, & Zame, 2014; Asparouhova, Bossaerts, Roy, & Zame, 2013).

Bao, Hommes, Sonnemans, and Tuinstra (2012) combine Heemeijer et al. (2009) and Hey (1994) and consider positive and negative feedback experiments with large permanent shocks to the fundamental price level.8 More precisely, these shocks have been chosen such that, in both the negative and positive feedback treatments, the fundamental equilibrium price p*_t changes over time according to:

  p^*_t = 56 \text{ for } 0 ≤ t ≤ 21, \quad p^*_t = 41 \text{ for } 22 ≤ t ≤ 43, \quad p^*_t = 62 \text{ for } 44 ≤ t ≤ 65 \qquad (14)
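The role of the feedback sign in these experiments can be illustrated with a small simulation of the feedback maps (12)-(13), generalized to the time-varying fundamental of Eq. (14). The assumption that all subjects forecast naively, p^e_{h,t} = p_{t−1}, is made only for illustration and does not describe the subjects' actual behavior.

```python
import numpy as np

def fundamental(t):
    # Piecewise-constant fundamental price path of Eq. (14)
    return 56.0 if t <= 21 else (41.0 if t <= 43 else 62.0)

def simulate_feedback(sign=1, T=66, seed=0):
    """sign=-1: negative feedback; sign=+1: positive feedback.
    All six subjects are assumed to use naive forecasts p[t-1]."""
    rng = np.random.default_rng(seed)
    p = [50.0]                             # arbitrary initial price
    for t in range(1, T):
        pstar = fundamental(t)
        p.append(pstar + sign * (20 / 21) * (p[-1] - pstar)
                 + rng.normal(scale=0.5))  # Eqs. (12)-(13) around p*_t
    return np.array(p)
```

Even under this crude forecasting assumption the simulation reproduces the asymmetry described below: with sign=-1 the deviation from p*_t flips sign and shrinks each period, so the price locks onto the new fundamental within a few periods of each shock, while with sign=+1 the deviation decays only at rate 20/21 per period, so price discovery is slow.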
The purpose of these experiments was to investigate how the type of expectations feedback affects the speed of learning of a new steady state equilibrium price after a relatively large unanticipated shock to the economy. Fig. 5 shows the average price behavior for positive and negative feedback (top panels), realized prices in all groups (middle panels), and an example of individual forecasts in a positive as well as a negative feedback group (bottom panels). Aggregate behavior under positive and negative feedback is strikingly different. Negative feedback markets tend to be rather stable, with the price converging quickly to the new (unknown) equilibrium level after each unanticipated large shock. In contrast, under positive feedback prices are sluggish, converging only slowly in the direction of the fundamental value and subsequently overshooting it by large amounts.

Fig. 6 reveals some other striking features of aggregate price behavior and individual forecasts. The top panel shows the time variation of the median distance to the RE benchmark price over all (eight) groups in both treatments. For the negative feedback treatment, after each large shock the distance spikes but converges quickly back (within 5–6 periods) to almost 0. In the positive feedback treatment, after each shock the distance to the RE benchmark shows a similar spike, but falls back only slowly and does not converge to 0. The bottom panel shows how the degree of heterogeneity, that is, the median standard deviation of individual forecasts, changes over time.
Fig. 5. Positive Feedback (Left Panels) and Negative Feedback (Right Panels) Experimental Data. Top Panels: The Average Realized Price Averaged Over all Eight Groups; Middle Panels: The Market Prices for Eight Different Groups; Bottom Panels: Predictions of Six Individuals in Group P8 (Left) and Group N8 (Right) Plotted Together with Fundamental Price (Dotted Lines).
Fig. 6. Positive/Negative Feedback Markets with Large Shocks. These Plots Illustrate Price Discovery (Top Panel) and Coordination of Individual Expectations (Bottom Panel). The Top Panel Shows the Median Absolute Distance to RE Fundamental Price, While the Bottom Panel Shows the Median Standard Deviation of Individual Predictions. In Positive Feedback Markets Coordination Is Quick, but on the “Wrong,” that is, Non-RE, Price.
For the positive feedback treatment, after each large shock heterogeneity decreases very quickly and converges to (almost) 0 within 3–4 periods. Under positive feedback, individuals thus coordinate expectations quickly, but they all coordinate on the "wrong," that is, a non-RE, price. In the negative feedback treatment heterogeneity is more persistent, lasting for about 10 periods after each large shock. Persistent heterogeneity stabilizes price fluctuations, and after convergence of the price to its RE fundamental, individual expectations coordinate on the correct RE price. One may summarize these results by saying that in the positive feedback treatment individuals quickly coordinate on a common prediction, but coordination occurs on a "wrong," nonfundamental price; as a result, price behavior is very different from the perfect REE price. In the negative feedback treatment, on the other hand, coordination is much slower and heterogeneity more persistent, but price convergence is quick. Stated differently, positive feedback markets are characterized by quick coordination and slow price discovery, while negative feedback markets are characterized by slow coordination, more persistent heterogeneity, and quick price discovery. Notice also that under positive feedback, coordination on a non-RE-fundamental price is almost self-fulfilling, with small individual forecasting errors. The positive feedback market is thus characterized by coordination on almost self-fulfilling equilibria with prices very different from the perfectly rational self-fulfilling equilibrium.9

Similar to Anufriev and Hommes (2012), Bao et al. (2012) fit a heuristics switching model with four rules to these experimental data.10 The rules are an adaptive expectations (ADA) rule:

  p^e_{t+1,1} = p^e_{t,1} + 0.85(p_t - p^e_{t,1}) \qquad (15)

a contrarian rule (CTR):11

  p^e_{t+1,2} = p_t - 0.3(p_t - p_{t-1}) \qquad (16)

and a trend-extrapolating rule (TRE):

  p^e_{t+1,3} = p_t + 0.9(p_t - p_{t-1}) \qquad (17)
The coefficients of the first three rules are the medians of the estimated individual linear rules in Bao et al. (2012). The fourth rule is
a learning, anchoring, and adjustment heuristic (LAA) (Tversky & Kahneman, 1974):

  p^e_{t+1,4} = 0.5(p^{av}_t + p_t) + (p_t - p_{t-1}) \qquad (18)
As before, subjects switch between these rules depending upon their relative performance. Fig. 7 shows realized market prices and the one-period-ahead simulated market prices (left panels), together with the evolution of the fractions of the four strategies of the heuristics switching model (right panels), for a typical group of the positive feedback (top panels) and the negative feedback treatment (bottom panels). The heuristics switching model matches the aggregate behavior of both the positive and the negative feedback treatment quite nicely and provides an intuitive, behavioral explanation of why these different aggregate patterns occur. In the negative feedback market, trend-following strategies perform poorly and the contrarian strategy quickly dominates the market (more than 70% within 20 periods), enforcing quick convergence to the RE benchmark after each large shock. In contrast, in the positive feedback treatment the trend-following strategy performs well and dominates the market (with more than 50% trend-followers after 10 periods). The survival of trend-following strategies in the positive feedback markets causes persistent deviations from the RE steady states, overreaction, and persistent price fluctuations. The difference in aggregate behavior across these experiments is thus explained by the fact that trend-following rules are successful in a positive feedback environment, amplifying price oscillations and persistent deviations from the rational equilibrium benchmark price, while the same trend-following rules are driven out by the CTR in the case of negative feedback. Coordination of individual expectations on trend-following rules and almost self-fulfilling equilibria in a positive expectations feedback environment has a large aggregate effect, with realized market prices deviating significantly from the perfectly self-fulfilling RE benchmark.
Fig. 7. Experimental and Simulated Prices Using the HSM Model in One Typical Group from the Positive (Top Left, Group P8) and Negative Feedback Treatment (Bottom Left, Group N8) Respectively. Experimental Data (Squares) and One-Step Ahead Simulated Prices from the HSM Model (Circles) Almost Overlap. The Right Panels Show the Evolution of the Four Market Heuristics in the Positive (Top Right) and Negative Feedback Treatments (Bottom Right). The Trend-Following Rule Dominates in the Positive Feedback Markets, While the CTR Dominates in the Negative Feedback Markets. Parameters Are: β = 0.4, η = 0.7, δ = 0.9, as in Anufriev and Hommes (2012).
Overlapping Generations Economies

This section reviews the main contributions in experimental overlapping generations (OLG) economies. In a series of papers, Marimon and Sunder (1993, 1994, 1995) and Marimon et al. (1993) pioneered the learning-to-optimize and learning-to-forecast designs to study dynamic macroeconomic models in the laboratory, using the framework of OLG economies.12

Marimon and Sunder (1993) consider an OLG experimental economy in which participants face a monetary economy where a constant deficit is financed by means of seigniorage. This is essentially a learning-to-optimize experiment, since subjects must submit supply schedules, but the subjects also take part in a forecasting contest, which makes it possible to end the experimental OLG economy after finitely many periods without affecting
the equilibria of the infinite OLG model. This OLG economy has two stationary steady states: a low inflationary stationary state (Low ISS) and a high inflationary stationary state (High ISS), as illustrated in Fig. 8. Under RE the equilibrium path converges to the High ISS, while under adaptive learning the economy converges to the Low ISS. In the experiments coordination on the Low ISS occurs in all cases. These experiments thus strengthen the view that economic agents are more likely to follow adaptive learning based on observed data.

Marimon et al. (1993) design an experimental OLG economy with an RE period-2 cycle and sunspot equilibria. They use a learning-to-forecast design to study whether coordination on a 2-cycle or on sunspots can arise in the lab. As shown by Woodford (1990), sunspot equilibria are learnable and hence cannot be ruled out a priori. Marimon et al. (1993) find that in their experimental economy a 2-cycle does not emerge spontaneously, but coordination on (approximate) 2-cycle equilibria may arise when they are correlated with an extrinsic sunspot signal, as illustrated in Fig. 9. In the first 17 periods of this OLG economy the generation size oscillates between 3 and 4, while after period 17 the generation size is held fixed at 4. As a result, the OLG economy has a large-amplitude RE 2-cycle in the first 17 periods and a small-amplitude RE 2-cycle thereafter. The extrinsic shocks to generation size in the first 17 periods facilitate coordination on an (approximate) 2-cycle. After the extrinsic shocks disappear in period 17, coordination on the RE 2-cycle remains. This OLG experimental economy thus shows the possibility of coordination on expectations-driven price volatility, but only after subjects have been exposed to a sequence of extrinsic sunspot signals correlated with a real cycle.

Marimon and Sunder (1994) studied the robustness of these earlier results and the effect of changes in policy specifications. They find persistence of expectations-driven fluctuations in the experimental economy characterized by sunspots, and an anticipation mechanism in the economy with repeated pre-announced policy regime shifts. Marimon and Sunder (1995) investigate the Friedman prescription of whether a (simple) constant money growth rule helps stabilize prices (inflation rates). They find that the price volatility observed in the experimental data is due more to the use of adaptive learning rules than to differences between monetary regimes: inflation volatility is broadly equivalent for the two monetary rules.
Fig. 8. Experimental OLG Economy in Marimon and Sunder (1993), with a Low Inflationary Stationary State (Low ISS) and a High Inflationary Stationary State (High ISS). Under RE the Time Path Converges to the High ISS, While Adaptive Learning Converges to the Low ISS. Experimental Data are Consistent with Adaptive Learning. (Figs. 1 and 3, Panels A and C, from Marimon and Sunder, Econometrica 1993. Reprinted by Permission of the Econometric Society.)
Fig. 9. Experimental OLG Economy in the work by Marimon et al. (1993) with RE 2-Cycle Driven by an Extrinsic Sunspot Signal. In the First 17 Periods the Generation Size Oscillates between 3 and 4; After Period 17 the Generation Size Is Fixed at 4. The OLG Economy Coordinates on the Large Amplitude RE 2-Cycle Correlated with the Sunspot Signal in the First 17 Periods. After the Extrinsic Shocks Disappear after Period 17, Coordination on the RE 2-Cycle Remains. Fig. 3, Economy 1, and Fig. 4 from Marimon, Spear and Sunder, Journal of Economic Theory 1993. Reprinted by Permission of Elsevier.
Bernasconi and Kirchkamp (2000) build a very similar experiment and, differently from Marimon and Sunder, find that: (i) agents do not use first-order adaptive rules; (ii) agents over-save due to precautionary motivations; and (iii) the Friedman conjecture holds. Three main differences may be highlighted between Marimon and Sunder (1995) and Bernasconi and Kirchkamp (2000). First, subjects in Bernasconi and Kirchkamp forecast prices but also decide savings,
that is, they use both a learning-to-forecast and a learning-to-optimize design. Second, agents do not hold a quasi-point forecast by construction; they can test the consequences of different forecast decisions for various periods ahead before submitting their final decision. Finally, monetary policies are distinguished by labels, and participants vote for them. Within this experimental setup Bernasconi and Kirchkamp are able to analyze the subjects' expectation formation process independently of their saving behavior. In contrast to Marimon and Sunder, they find that the two implemented monetary rules are not equivalent: there are significant differences both in inflation levels and in volatility. In particular, they find support for Friedman's conjecture, as the experiments show higher inflation volatility under a real deficit regime than under a simple money growth rule.

Finally, Heemeijer et al. (2012) conduct an individual learning-to-forecast experiment within a standard OLG framework in which two monetary policy regimes are implemented, namely a low monetary growth rule and a high monetary growth rule. Subjects are asked to forecast inflation rates. This learning-to-forecast experiment has a more complicated structure:

  \pi_t = \theta\,\frac{S(\pi^e_t)}{S(\pi^e_{t+1})} \qquad (19)
where S is the (nonmonotonic) savings function and realized inflation π_t depends on a single subject's inflation forecasts for periods t and t + 1. The authors find wide variation in participants' forecasting ability, both among participants in the same experimental session and across treatments. The rational expectations hypothesis cannot explain the experimental results. The authors identify essentially three types of individual forecasting behavior: accurate forecasting, which stabilizes the inflation rate; learning behavior, with inflation stabilizing after a highly volatile initial phase; and, finally, a set of subjects who never learn to predict the inflation rate with any accuracy. The Heemeijer et al. (2012) OLG experimental results are consistent with subjects using constant-gain algorithms (e.g., adaptive expectations) as forecasting rules, or average expectations, when they learn to stabilize inflation. Moreover, analyzing the experimental data, they find evidence of agents switching among different forecasting rules on the basis of the rules' forecasting performance. Hence, even if agents do not use least squares learning, they try to improve their forecasting ability by learning, eventually ending up close to the rational expectations steady state equilibrium.
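Because the chapter does not report the savings function used in the experiment, the following sketch implements Eq. (19) with a hypothetical hump-shaped S and an assumed money growth factor θ, only to make the feedback structure concrete: a single subject's forecasts for periods t and t + 1 jointly determine realized inflation.

```python
import numpy as np

theta = 1.05           # assumed money growth factor, for illustration only

def S(pi):
    """Hypothetical nonmonotonic (hump-shaped) savings function; the
    actual function used in the experiment is not reported here."""
    return np.maximum(0.1, 2.0 - 0.5 * (pi - 1.0) ** 2)

def realized_inflation(pi_e_t, pi_e_t1):
    """Eq. (19): realized inflation implied by one subject's inflation
    forecasts for period t and period t + 1."""
    return theta * S(pi_e_t) / S(pi_e_t1)
```

Because realized inflation depends on the ratio of savings at the two forecasts, and S is nonmonotonic, small changes in either forecast can move realized inflation in a nonobvious direction, which is part of what makes the forecasting task difficult.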
New Keynesian DSGE

This section surveys LtFEs framed in New Keynesian (NK) macro environments. In its basic formulation, the NK model consists of an IS curve derived from households' intertemporal optimization of consumption expenditures, representing the demand side of the economy, and a Phillips curve derived from firms' intertemporal optimization under monopolistic competition and nominal rigidities, representing the supply side of the economy. The IS curve and the Phillips curve are respectively specified as

  y_t = E_t y_{t+1} - \varphi(i_t - E_t \pi_{t+1}) + g_t \qquad (20)

  \pi_t = \lambda y_t + \beta E_t \pi_{t+1} + u_t \qquad (21)
where y_t denotes the output gap, i_t the interest rate, and π_t the inflation rate, while g_t and u_t are exogenous shocks. The terms E_t y_{t+1} and E_t π_{t+1} denote subjective (possibly nonrational) expected values of the future output gap and inflation, respectively.13 The model is closed by specifying a policy rule for the nominal interest rate. Experimental implementations of the NK model have been targeted at shedding light on two important issues, namely:

• the nature of the expectation formation process and its impact on aggregate dynamics in a more complicated framework, with inflation and output depending on expectations of both variables;
• the effectiveness of alternative monetary policies in stabilizing experimental economies in which the expectational terms in Eqs. (20) and (21) are replaced by subjects' average (or median) forecasts.

This section discusses the LtFEs presented in Pfajfar and Zakelj (2011) (and the companion paper Pfajfar & Zakelj, 2014), Assenza, Heemeijer, Hommes, and Massaro (2011) (and the revised and extended paper of Assenza, Heemeijer, Hommes, & Massaro, 2014), and Kryvtsov and Petersen (2013), which contributed to the issues mentioned above within experimental economies described by Eqs. (20) and (21), and touches upon the LtFEs described in Adam (2007).14 The small-scale NK model described by the aggregate demand equation (20) and the aggregate supply equation (21) is widely used for policy analysis, and its popularity is based on its ability to replicate a number of stylized facts. However, the implementation of such a model in an experimental setup is more complicated, because subjects have to submit two-period-ahead forecasts for two variables, inflation as well as the output gap.
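Before turning to the individual designs, a minimal simulation may help fix ideas about how Eqs. (20) and (21) behave when expectations are nonrational. The sketch below closes the model with a forward-looking Taylor-type rule of the kind used by Pfajfar and Zakelj (2011) further on; the coefficient values and the adaptive-expectations forecasting rule are assumptions made for illustration, not values reported from the experiments.

```python
import numpy as np

# Illustrative coefficients (assumed, not taken from the experiments)
phi, lam, beta_d = 1.0, 0.3, 0.99      # IS and Phillips curve parameters
phi_pi, pi_bar = 1.5, 3.0              # policy reaction and target (in %)

def simulate_nk(T=70, w=0.75, seed=0):
    """Eqs. (20)-(21) closed with i_t = pi_bar + phi_pi*(E_t pi_{t+1} - pi_bar),
    where subjective forecasts follow adaptive expectations with weight w
    (an assumption standing in for the subjects' behavior)."""
    rng = np.random.default_rng(seed)
    Epi, Ey = pi_bar, 0.0              # initial forecasts
    pi_path, y_path = [], []
    for _ in range(T):
        g, u = rng.normal(scale=0.1, size=2)   # i.i.d. demand and supply shocks
        i = pi_bar + phi_pi * (Epi - pi_bar)   # forward-looking policy rule
        y = Ey - phi * (i - Epi) + g           # IS curve, Eq. (20)
        pi = lam * y + beta_d * Epi + u        # Phillips curve, Eq. (21)
        Epi += w * (pi - Epi)                  # adaptive updating of forecasts
        Ey += w * (y - Ey)
        pi_path.append(pi)
        y_path.append(y)
    return np.array(pi_path), np.array(y_path)
```

With these (assumed) values the simulated economy displays noisy, damped oscillations of inflation around the target, qualitatively in line with the cyclical behavior reported in the experiments discussed below.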
In order to simplify subjects' cognitive task, Pfajfar and Zakelj (2011) only ask for inflation expectations and assume E_t y_{t+1} = y_{t-1}.15 This scenario corresponds to a situation in which subjects have naive expectations about the output gap, or to an extreme case of habit persistence. To deal with the same issue, Assenza et al. (2011) elicit forecasts of the endogenous variables from different groups of subjects, one group forecasting inflation and another group forecasting the output gap. In more recent experiments Kryvtsov and Petersen (2013) ask subjects to forecast both inflation and the output gap.

Formation of Individual Expectations. The information set available to participants in the experiments by Assenza et al. (2011) and Pfajfar and Zakelj (2011) includes realizations of inflation, the output gap, and the interest rate up to period t − 1. Subjects also have information about their own past forecasts, but they do not observe the forecasts of other individuals. Overall, Assenza et al. (2011) and Assenza, Heemeijer, et al. (2014) find that the predictions of the model with homogeneous rational expectations can hardly describe the experimental outcomes. The authors find evidence for heterogeneity in individual expectations and estimate simple first-order forecasting heuristics using the time series of individual predictions.16 A stylized fact that emerges from the analysis of Assenza et al. (2011) is that individual learning takes the form of switching from one heuristic to another. The authors use the HSM developed by Anufriev and Hommes (2012) to explain both individual forecasting behavior and the aggregate macro outcomes observed in the experiments. Using the same set of heuristics as Anufriev and Hommes (2012) (described in detail in the section "Asset Pricing Experiments"), Assenza et al. (2011) and Assenza, Heemeijer, et al. (2014) show that the HSM explains how the different macro patterns observed in the experiment, that is, convergence to the target equilibrium level, inflationary/deflationary spirals, persistent oscillations, and dampened converging oscillations, emerge out of a self-organization process of heterogeneous expectations driven by their relative past performance. Convergence to equilibrium is explained by coordination on adaptive expectations; inflationary/deflationary spirals arise due to coordination on strongly extrapolating trend-following rules; persistent oscillations arise after coordination on an anchor and adjustment rule; and dampened converging oscillations arise when initially dominating (weak) trend-following rules are eventually driven out by adaptive expectations.

Pfajfar and Zakelj (2014) focus on the analysis of the individual data on inflation expectations collected by Pfajfar and Zakelj (2011).17 The authors fit 12 alternative models of expectation formation to individual prediction
series and find significant heterogeneity in subjects' forecasting strategies. The article develops a new test for rational expectations that checks the consistency of expectation formation rules with the actual laws of motion, explicitly allowing for the possibility of heterogeneous expectations. In other words, the test allows the perceived law of motion (PLM) of a rational agent to differ from the one implied by the assumption of homogeneous rational expectations, and to include additional state variables as a result of the presence of heterogeneous forecasters. Using this test, Pfajfar and Zakelj (2014) find that for 30–45% of subjects it is not possible to reject rationality. Moreover, 20–25% of subjects' forecasting strategies are described by adaptive learning algorithms. The authors also find evidence for simple heuristics: roughly 25–35% of subjects can be described by trend extrapolation rules, and an additional 10–15% by adaptive expectations or by a sticky-information type of model. Finally, Pfajfar and Zakelj (2014) find evidence for switching between forecasting models. The authors study "unrestricted" switching, that is, they re-estimate all alternative models in each period and, for each individual, select the best performing model in each period, finding that switching between alternative models better describes subjects' behavior.

The experimental setup of Adam (2007) is closely related to the NK framework described in this section. The author implements a sticky price environment where inflation and output depend on expected inflation. As in Pfajfar and Zakelj (2011) and Assenza et al. (2011), the information set available to subjects includes past realizations of the endogenous variables through period t − 1 and, in each experimental economy, a group of five subjects is asked to provide one- and two-step-ahead forecasts of inflation for 45–55 periods. The results show cyclical patterns of inflation around its steady state. Although the rational forecast for inflation should condition on lagged output, Adam finds that in most of the experimental sessions the forecast of the "representative subject," that is, the average forecast entered by subjects in any given period, follows a simple AR(1) model. He shows that such behavior can result in a restricted perceptions equilibrium (RPE) in which the autoregressive inflation model outperforms the rational forecast model. Adam further notes that mis-specified forecasting rules provide a source of inflation and output persistence, thereby explaining the observed persistence of inflation cycles.

Kryvtsov and Petersen (2013) focus on measuring the strength of the expectations channel for macroeconomic stabilization. The main difference between the experimental setup developed by Kryvtsov and Petersen and the ones described above in this section consists in the information set
available to subjects in the experiment. Kryvtsov and Petersen provide subjects with full information about the only exogenous shock process, that is, g_t in the IS equation,18 and about the model underlying the experimental economy. Moreover, information about histories of past outcomes and shocks, as well as a detailed model description, is available at a small time cost. This setup allows estimating forecasts as a function of the observed shock history, which is then used to quantify the contribution of expectations to macroeconomic stabilization via counterfactual analysis. Kryvtsov and Petersen show that a model with a weak form of adaptive expectations, attributing a significant weight to the t − 1 realizations of inflation and the output gap, fits best both the magnitude and the timing of the aggregate fluctuations observed in the experiment.

The Effectiveness of Monetary Policies. An important contribution of LtFEs cast in the NK framework is to analyze the effectiveness of alternative monetary policy rules in stabilizing the variability of inflation in a setting where expectations about endogenous variables are potentially nonrational and heterogeneous across subjects. Pfajfar and Zakelj (2011) close the NK model described by Eqs. (20) and (21) with a forward-looking interest rate rule of the form

  i_t = \phi_\pi (E_t \pi_{t+1} - \bar{\pi}) + \bar{\pi}

where the monetary authority responds to deviations of subjects' inflation expectations from the target π̄, set at 3%. The authors vary the value of ϕπ, measuring the strength of the policy reaction, across treatments. Pfajfar and Zakelj (2011) also consider an alternative contemporaneous policy rule of the form

  i_t = \phi_\pi (\pi_t - \bar{\pi}) + \bar{\pi}

where the central bank responds to deviations of current inflation from the target. The different treatments implemented in Pfajfar and Zakelj (2011) are summarized in Table 1. Fig. 10 displays the experimental outcomes of Pfajfar and Zakelj (2011). A cyclical behavior of inflation and the output gap around their steady states can be observed in all treatments.
Table 1. Treatments in Pfajfar and Zakelj (2011).

  Treatment                  Parameter
  1. Forward-looking rule    ϕπ = 1.5
  2. Forward-looking rule    ϕπ = 1.35
  3. Forward-looking rule    ϕπ = 4
  4. Contemporaneous rule    ϕπ = 1.5
Fig. 10. Realized Inflation by Treatments in Pfajfar and Zakelj (2011). Each Line Represents One of the 24 Experimental Economies.
Among the forward-looking policy rules, a reaction coefficient ϕπ = 4 (Treatment 3) results in lower inflation variability than reaction coefficients ϕπ = 1.35 (Treatment 2) and ϕπ = 1.5 (Treatment 1); the authors report no statistical difference between Treatments 1 and 2. Comparing Treatments 4 and 1, Pfajfar and Zakelj find that the inflation variance under the contemporaneous rule is significantly lower than under the forward-looking rule with the same reaction coefficient ϕπ = 1.5. The intuition provided by the authors for this result is that the variability of the interest rate is generally lower under the contemporaneous rule.
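In a simulation, the contemporaneous rule differs from the forward-looking rule in one mechanical respect: π_t enters the rule itself, so the Phillips curve must be solved for π_t before the interest rate is set. A hedged sketch of one period, reusing the assumed parameter values from the earlier NK snippet:

```python
def nk_step_contemporaneous(Epi, Ey, g, u):
    """One period of Eqs. (20)-(21) under i_t = pi_bar + phi_pi*(pi_t - pi_bar);
    pi_t appears on both sides and is solved in closed form. Uses the
    illustrative parameters phi, lam, beta_d, phi_pi, pi_bar defined above."""
    pi = (lam * (Ey + g) + lam * phi * (Epi - pi_bar)
          + lam * phi * phi_pi * pi_bar + beta_d * Epi + u) / (1 + lam * phi * phi_pi)
    i = pi_bar + phi_pi * (pi - pi_bar)       # contemporaneous policy rule
    y = Ey - phi * (i - Epi) + g              # IS curve, Eq. (20)
    return pi, y, i
```

The denominator 1 + λφϕπ offers one intuition for the experimental finding above: under the contemporaneous rule the pass-through of expectation errors and shocks into realized inflation, and hence into the interest rate, is dampened.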
Assenza, Heemeijer, et al. (2014) implement different versions of the contemporaneous interest rate rule, which takes, as in Pfajfar and Zakelj (2011), the form

  i_t = \phi_\pi (\pi_t - \bar{\pi}) + \bar{\pi}

In particular, the authors set an inflation target π̄ = 2% and consider a case in which ϕπ = 1, so that the Taylor principle (which in this setting corresponds to ϕπ > 1) does not hold and policy plays no stabilizing role, and compare it with a case in which the Taylor principle does hold, setting ϕπ = 1.5. Moreover, since π̄ = 2% could be a focal point for subjects' forecasts, the authors run an additional treatment with π̄ = 3.5% to check the robustness of the policy rule obeying the Taylor principle to alternative target values. The different treatments implemented in Assenza, Heemeijer, et al. (2014) are summarized in Table 2.

Table 2. Treatments in Assenza, Heemeijer, et al. (2014).

  Treatment                  Policy      Target
  a. Contemporaneous rule    ϕπ = 1      π̄ = 2%
  b. Contemporaneous rule    ϕπ = 1.5    π̄ = 2%
  c. Contemporaneous rule    ϕπ = 1.5    π̄ = 3.5%

Fig. 11 illustrates the experimental results of Assenza, Heemeijer, et al. (2014). The evidence suggests that a monetary policy that reacts aggressively to deviations of inflation from the target (Treatments b and c) stabilizes macroeconomic fluctuations and leads the economy to the desired target. The specific value of the target seems to have little influence on the stabilizing properties of the policy rule. On the other hand, when the interest rate reacts weakly to inflation fluctuations, Assenza, Heemeijer, et al. (2014) observe convergence to nonfundamental equilibria (Treatment a, Groups 1–3) or exploding behavior (Treatment a, Groups 4–6). Overall, the results of Assenza et al. (2011) and Assenza, Heemeijer, et al. (2014) are in line with those of Pfajfar and Zakelj (2011). Treatment 4 in Pfajfar and Zakelj (2011) uses the same contemporaneous policy rule adopted by Assenza et al. (2011) and Assenza, Heemeijer, et al. (2014), with reaction coefficient ϕπ = 1.5.
Fig. 11. Realized Inflation by Treatments in Assenza, Heemeijer, et al. (2014). Each (Thin) Line Represents One of the 18 Experimental Economies. Dashed Thick Lines Depict the Inflation Targets.
A qualitative comparison of the outcomes of the two experiments shows sustained inflation oscillations around the steady state in Pfajfar and Zakelj (2011), while in Assenza et al. (2011) inflation seems to converge to the target value, at least in the late stages of the experiment. The different behavior might be due to differences in the two experimental setups. While Pfajfar and Zakelj (2011) only elicit inflation expectations, assuming that expectations of the future output gap are given by lagged output, that is, E_t y_{t+1} = y_{t-1}, Assenza et al. (2011) elicit forecasts of both future inflation and the future output gap, in accordance with the NK model. Moreover, Pfajfar and Zakelj (2011) assume AR(1) processes for the exogenous shocks g_t and u_t, while Assenza et al. (2011) use IID shocks, with the consequence that the rational expectations fundamental solution is an IID process and any observed fluctuations in aggregate variables are endogenously driven by individual expectations.
The experimental results in Assenza et al. (2011) therefore suggest that a policy rule reacting more than point-for-point to deviations of inflation from the target can stabilize endogenous expectations-driven fluctuations in the aggregate variables. Both Pfajfar and Zakelj (2011) and Assenza, Heemeijer, et al. (2014) further investigate the relationships between expectations, monetary policy, and macroeconomic stability. Using panel data regressions, Pfajfar and Zakelj (2011) show that a higher proportion of agents using trend-extrapolating (TRE) rules increases the volatility of inflation. In contrast, having more agents who behave according to adaptive expectations models has a stabilizing effect on the experimental economies. Moreover, the regression results show that monetary policy also has an impact on the composition of forecasting rules in each treatment: Pfajfar and Zakelj find that the percentage of destabilizing trend extrapolation rules and the variability of inflation are lowest in Treatment 3, where the strength of the positive expectational feedback is lowest. Interestingly, the explanation of how different macro patterns emerge out of a process of self-organization of heterogeneous expectations, provided by the HSM in Assenza, Heemeijer, et al. (2014), delivers comparable insights. Assenza, Heemeijer, et al. (2014) find that macroeconomic instability arises due to coordination on strongly extrapolating trend-following rules, while convergence to equilibrium is associated with coordination on adaptive expectations. Moreover, Assenza, Heemeijer, et al. (2014) show that an aggressive policy rule can prevent almost self-fulfilling coordination on destabilizing trend-following expectations by reducing the degree of positive feedback in the system.

Kryvtsov and Petersen (2013) assume a policy rule of the form

  i_t = \phi_\pi E_{t-1} \pi_t + \phi_y E_{t-1} y_t

where the reaction coefficients take the values ϕπ = 1.5 and ϕy = 0.5 in the Benchmark treatment and ϕπ = 3 and ϕy = 1 in the Aggressive Monetary Policy treatment. They find that inflation and the output gap predominantly exhibit stable cyclical behavior, with both variables displaying less volatility and less persistence in the Aggressive Monetary Policy treatment.19 The experiment of Kryvtsov and Petersen is designed to identify the contribution of expectations to the macroeconomic stability achieved by a systematic monetary policy. The authors find that, despite some nonrational component in individual expectations, monetary policy is quite powerful in stabilizing the experimental economies, thus confirming the results of Pfajfar and Zakelj (2011) and Assenza et al. (2011). They
report that monetary policy accounts for roughly half of business cycle stabilization.
LEARNING TO OPTIMIZE

"Learning-to-optimize experiment" (LtOE) refers to experiments in which subjects submit their economic decisions (i.e., consumption, trading, production) directly, without elicitation of their forecasts of the market price. This literature has a longer history than that on LtFEs. Some examples of this approach include Smith et al. (1988), Lim, Prescott, and Sunder (1994), Arifovic (1996), Noussair, Plott, and Riezman (2007), and Crockett and Duffy (2013), and several surveys of studies using this approach already exist (Noussair & Tucker, 2013). In this section, we limit our attention to a few experiments with parallel learning-to-forecast and learning-to-optimize treatments (based on the same model of the experimental market) in order to compare them.20 These experiments are helpful in answering the robustness question: will the results of the LtFEs change if the subjects make a quantity decision directly (instead of making a forecast only)?

There are two potential sources that may lead to different results under the LtFE versus the LtOE design: the nature of the task and the payoff structure. In terms of the nature of the task: (1) the subjects in LtFEs are aided by a computer in making calculations, which should facilitate learning of the REE; (2) on the other hand, since the price determination equation is ultimately a function of quantity decisions, it should be easier for subjects in the LtOE design to understand how their decisions translate into the market price, which helps them learn to play rationally. In terms of the payoff structure, subjects are typically paid according to forecasting accuracy in the LtFE design, and according to the profitability of the quantity decision in the LtOE design. We speculate that the market price should be closer to the REE when subjects are paid according to forecasting accuracy rather than profit. The reason is twofold: (1) when subjects are paid according to forecasting accuracy, predicting the REE is the unique symmetric Nash equilibrium of the "prediction game" (the payoff of every subject is maximized when they predict the REE), whereas when they are paid according to profit, they may earn a higher payoff by deviating from the REE; for example, in a finite-player cobweb market, subjects can earn a higher payoff by playing the collusive equilibrium instead of the REE (the competitive equilibrium). (2) Some studies show that emotions can influence the
optimality of individual decisions and price stability in asset market experiments (Breaban & Noussair, 2013), and the emotions of subjects can be heavily driven by past gains and losses from their trading behavior (quantity decisions). The accuracy of predictions, on the other hand, should have less influence on subjects' emotional state; payment based on prediction accuracy should therefore generate less fluctuation in emotions and more stable market prices. According to the experimental results, the answer to this question is:

1. Holding other things equal, there is indeed a difference between aggregate price behavior in the learning-to-forecast and the learning-to-optimize markets.
2. The markets in the learning-to-optimize treatment deviate more from the rational expectations equilibrium than the markets in the learning-to-forecast treatments. More specifically, in negative feedback markets, it takes longer for the market price to converge to the REE in the learning-to-optimize treatment than in the learning-to-forecast treatment. In positive feedback markets, there are larger bubble-crash patterns in the asset price in the learning-to-optimize treatment than in the learning-to-forecast treatment.

To conclude, the aggregate market outcome is closer to the REE in the LtFE design than in the LtOE design. This suggests that researchers interested in testing a dynamic macroeconomic model against the rational expectations benchmark should probably use the LtFE design. Meanwhile, the larger deviation in the LtOE design is probably a result of agents' failure to solve the optimization problem after they form their expectations. Since there is already a large literature on bounded rationality in expectation formation, modeling bounded rationality in solving the optimization problem given one's own expectations may be a good direction for future research.
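As a concrete illustration of the two payment schemes contrasted above, the following sketch implements a stylized forecasting payoff and a profit payoff. The quadratic form and the constants A and B are hypothetical; the cost function c(q) = H*q^2/2 anticipates the cobweb experiment discussed next.

```python
def ltfe_payoff(p, pe, A=1300.0, B=49.0):
    """Stylized forecasting payoff, decreasing in the squared prediction
    error; the functional form and constants are illustrative only."""
    return max(A - B * (p - pe) ** 2, 0.0)

def ltoe_payoff(price, q, H=6):
    """Realized profit of a firm: revenue minus the quadratic cost
    c(q) = H*q**2/2 used in the cobweb experiment below."""
    return price * q - H * q ** 2 / 2
```

The forecasting payoff is maximized by predicting the market price whatever that price turns out to be, whereas the profit payoff depends on the price level itself, which is why only the latter creates incentives to push the market away from the REE.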
Cobweb Market

This section discusses the experiment conducted by Bao, Duffy, and Hommes (2013). The model behind this experiment is a traditional "cobweb" economy, as studied by Muth (1961) when he proposed the famous rational expectations hypothesis. Before this experiment, there had been pure LtFEs on cobweb markets (Hommes, Sonnemans, & van de Velden, 2000; Hommes, Sonnemans, Tuinstra, & van de Velden, 2007), but those experiments used nonlinear models. The model used in this
experiment is similar to the one used in the negative feedback treatment of Heemeijer et al. (2009), who found that the price converges quickly to the REE in negative feedback markets like the cobweb economy. One of the main goals of Bao et al. (2013) is to investigate whether this result still holds if the subjects submit a production quantity instead of a price forecast.

The model concerns a nonstorable commodity. Let $p_t$ be the price of the good in period $t$. Demand is linear and decreasing in $p_t$: $D(p_t) = a - bp_t$, where $a = 63$ and $b = 21/20$. The subjects play the role of advisors to firms that produce the good. The supply of firm $h$ in period $t$ is denoted by $S_{h,t}$. Let $p^e_{h,t}$ be the price forecast made by firm $h$ in period $t$, so that the supply function may be written as $S(p^e_{h,t})$. This should be the solution of the expected profit maximization problem:

$$\max \pi^e_{h,t} = \max_{q_{h,t}} \left[ p^e_{h,t} q_{h,t} - c(q_{h,t}) \right] \qquad (22)$$

Each firm has a quadratic cost function $c(q) = Hq^2/2$, where $H$ is the number of firms in the market. Taking the first-order condition of the expected profit, it is not difficult to find

$$S(p^e_{h,t}) = \frac{p^e_{h,t}}{H} \qquad (23)$$
The total supply of the good equals the sum of the supplies of the individual firms. If every firm supplies according to Eq. (23), the total supply in the market coincides with the average price forecast, $\sum_h S(p^e_{h,t}) = \bar{p}^e_t$. The market price of the good is determined by the market clearing condition (supply equals demand):

$$p_t = D^{-1}\left( \sum_h S_{h,t} \right) + \epsilon_t \qquad (24)$$

Plugging in the parameters, the price determination equation becomes:

$$p_t = \max\left\{ \frac{20}{21}\left(63 - \bar{p}^e_t\right) + \epsilon_t,\; 0 \right\} \qquad (25)$$

where $\epsilon_t \sim N(0, 1)$. Imposing the rational expectations assumption $p^e_t = E p_t = E \max\{\frac{20}{21}(63 - p^e_t) + \epsilon_t,\, 0\}$, and noting that the expected value of $\epsilon_t$ is zero, the REE price of this economy is $p^e_t = p^* = 30.73$, with $p_t = 30.73 + \epsilon_t$. The optimal individual supply in the REE is 5.12.
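As a quick check of the derivation above, the following Python sketch computes the REE price and individual supply from the stated parameters, and simulates the price dynamics of Eq. (25) under naive expectations (forecasting last period's price); the naive rule is our illustrative choice, not a description of subjects' behavior.

```python
# Cobweb economy of Bao, Duffy, and Hommes (2013): demand D(p) = a - b*p with
# a = 63, b = 21/20, H = 6 firms, individual supply S(p^e) = p^e / H, so total
# supply equals the average price forecast.
import numpy as np

a, b, H = 63.0, 21.0 / 20.0, 6

def market_price(avg_forecast, eps=0.0):
    """Price from market clearing, Eq. (25): p = max{(20/21)(63 - pbar^e) + eps, 0}."""
    return max((a - avg_forecast) / b + eps, 0.0)

# REE: solve p = (20/21)(63 - p)  =>  p* = 60 * 21 / 41
p_ree = 60.0 * 21.0 / 41.0
print(round(p_ree, 2), round(p_ree / H, 2))  # 30.73 and 5.12, as in the text

# Under naive expectations (p^e_t = p_{t-1}) the cobweb map here has slope
# -20/21, a contraction, so the price spirals into the REE.
rng = np.random.default_rng(0)
p = 50.0
for t in range(10):
    p = market_price(p, rng.normal(0.0, 1.0))
    print(t, round(p, 2))
```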
Five treatments were designed in the experiment; we focus on the first three of them.
1. Treatment 1: the LtFE treatment. Subjects (firms) only make a price forecast $p^e_{h,t}$ in each period $t$. Their implicit quantity decision, $S(p^e_{h,t})$, is calculated from Eq. (23) by the experimental computer program. They are paid according to the prediction error $|p_t - p^e_{h,t}|$: the larger the prediction error, the smaller the payoff.
2. Treatment 2: the LtOE treatment. Subjects (firms) make the quantity decision $S_{h,t}$ directly, with no assistance from the computer. Each subject is paid according to the profit his firm makes in each period as defined by Eq. (22), namely, revenue minus cost.21
3. Treatment 3: the LtFE + LtOE treatment. Each subject makes both a price forecast $p^e_{h,t}$ and a quantity decision $S_{h,t}$ in each period. The market price is again determined by the production decisions submitted by the firms, as in Treatment 2. Subjects are paid according to an equally weighted linear combination of the payoff functions used in the LtFE and LtOE treatments.

If agents are able to form rational expectations, the experimental results should be exactly the same in all three treatments. From the results of former LtFEs on cobweb markets, we know that this kind of market tends to converge to the REE reliably. On the one hand, one can argue that convergence to the REE should take fewer periods in Treatment 1 than in Treatment 2, because subjects in Treatment 1 are helped by a computer program that calculates the conditionally optimal quantity, while subjects in Treatment 2 are not. On the other hand, one could argue that because people make quantity decisions in their economic activities on a daily basis, but have less experience with making forecasts, the quantity decision task should be more familiar and easier for them. In Treatment 3, subjects face a situation that resembles a theoretical RE model: they first make a forecast and then make a quantity decision. From this theoretical point of view they should learn faster, because doing the two tasks at the same time should make them think more about how the economy works.

It turns out that, in terms of the number of periods it takes for the market price to converge to the REE, convergence is fastest in Treatment 1 and slowest in Treatment 3. Fig. 12 plots the average market price in the three treatments. It can be seen that the market price is most stable in Treatment 1, and most unstable in Treatment 3.
Fig. 12. The Average Market Price and the REE Price in Each of the Three Treatments of the LtFE and LtOE in Bao et al. (2013).
The market price deviates from the REE to the largest extent in Treatment 3, and there it also takes longest for even the average market price to get close to the REE. The authors declare convergence to have occurred in the first period for which the absolute difference between the market price and the REE price is less than 5 and stays below 5 in every period thereafter; if a market fails to converge, the number of periods before convergence is counted as 50. The results show that the median number of periods before convergence is only 3 in Treatment 1 and 13 in Treatment 2, but 50 in Treatment 3. In line with the arguments above, this finding suggests that the LtFE indeed yields the fastest convergence because the subjects are helped by computers. Subjects in Treatment 3 seem to be cognitively overloaded by the complexity of the decision problem, which slows their discovery of the REE. Following Rubinstein (2007), the authors used decision time as a proxy for cognitive load, and found that subjects in Treatment 3 indeed took significantly longer to make each decision.
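The convergence criterion just described is straightforward to implement; a minimal version, with a hypothetical price path for illustration, is:

```python
# Convergence occurs at the first period after which |p_t - p_REE| < 5 holds
# for every remaining period; non-converging markets are coded as 50.
import numpy as np

def periods_before_convergence(prices, p_ree=30.73, band=5.0, cap=50):
    dev_ok = np.abs(np.asarray(prices) - p_ree) < band
    for t in range(len(dev_ok)):
        if dev_ok[t:].all():
            return t + 1  # periods are numbered from 1
    return cap

# Hypothetical price path for illustration only
prices = [45, 22, 36, 31.2, 29.5, 30.1] + [30.73] * 44
print(periods_before_convergence(prices))  # 4
```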
Asset Pricing Market

Similar to Bao et al. (2013), Bao, Hommes, and Makarewicz (2014) set up an experiment with comparable learning-to-forecast (Treatment 1), learning-to-optimize (Treatment 2), and forecast + optimize (Treatment 3) treatments for an experimental asset market similar to Hommes et al. (2005, 2008) and the positive feedback treatment of Heemeijer et al. (2009).22 The purpose is again to see whether the price deviations in positive feedback markets found in the previous LtFEs are a robust finding under the learning-to-optimize or forecasting + optimizing design. In the LtFE treatment subjects submit a price forecast $p^e_{i,t+1}$ and are paid according to forecasting accuracy. This treatment is a replication of the positive feedback treatment of Heemeijer et al. (2009). The pricing rule in the LtFE is given by

$$p_{t+1} = 66 + \frac{20}{21}\left( \bar{p}^e_{t+1} - 66 \right) + \epsilon_t \qquad (26)$$
where 66 is the fundamental price ($\bar{y} = 3.3$ and $r = 0.05$), $\bar{p}^e_{t+1} = \frac{1}{6}\sum_{i=1}^{6} p^e_{i,t+1}$ is the average prediction of price $p_{t+1}$, and $\epsilon_t$ is a small IID noise term.23 In the second, LtOE treatment subjects submit the amount of the asset they want to buy or sell, $z_{i,t}$, directly, and are paid according to their trading profit. The price adjustment thus takes the form

$$p_{t+1} = p_t + \frac{20}{21}\sum_{i=1}^{6} z_{i,t} + \epsilon_t \qquad (27)$$
where $z_{i,t}$ is the demand of subject $i$ (a quantity choice between −5 and +5). In Treatment 3, subjects submit both a forecast and a trading quantity, and are paid randomly, with equal chance, according to either their forecast accuracy or their trading profit. Since the LtFEs uncovered many bubble-crash patterns in the market price, a natural question is whether these patterns persist under the learning-to-optimize or forecast + optimize design. The results show that the bubble-crash pattern is not only robust, but even stronger in the learning-to-optimize and forecast + optimize treatments. Fig. 13 shows the market price in a typical market in each of the three treatments. The market price is most stable in Treatment 1, where it goes up slowly and steadily. There is some mild oscillation in the market price in Treatment 2. Treatment 3 is the only treatment where the market price can go above 100 (in 3 out of 6 markets).
Fig. 13. The Price (Squares) and Individual Expectations if Applicable (Lines) in a Typical Market (Market 1) against the REE Price (REE = 66, Dashed Line) in Each of the Three Treatments in Bao et al. (2014).
In the market that generates the largest bubble, the price reaches 215 at the peak, more than three times the fundamental (REE) price of the asset. The deviation of the asset price from the REE is 10.8% in Treatment 1, 23% in Treatment 2, and 36% in Treatment 3 in terms of the Relative Absolute Deviation (RAD) measure defined by Stöckl et al. (2010); a minimal implementation of this measure is sketched below. The results of this study confirm that the price deviations from the REE in the LtFEs with positive feedback, as in Hommes et al. (2005, 2008) and Heemeijer et al. (2009), are robust to the experimental design. The learning-to-forecast design provides the result that is closest to the REE in the laboratory.
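For reference, a minimal implementation of the RAD measure, written to match the definition of Stöckl et al. (2010) as we read it (average absolute deviation of prices from fundamentals, normalized by the average fundamental value), is:

```python
# Relative Absolute Deviation (RAD) bubble measure; the example data are
# hypothetical, with a constant fundamental of 66 as in Bao et al. (2014).
import numpy as np

def rad(prices, fundamentals):
    p, f = np.asarray(prices, float), np.asarray(fundamentals, float)
    return np.mean(np.abs(p - f)) / np.mean(f)

prices = [60, 70, 90, 110, 80, 55]
print(round(rad(prices, [66.0] * len(prices)), 3))
```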
Besides this experiment, there are also learning-to-optimize experiments with elicitation of subjects' price forecasts. For example, Haruvy, Lahav, and Noussair (2007) study an asset market with the double auction trading mechanism of Smith, Suchanek, and Williams (1988). The subjects trade an asset that lives for 15 periods, and the fundamental value of the asset at each point in time is the sum of its remaining expected dividends. The typical finding in this kind of asset market is that the subjects fail to trade according to the fundamental value of the asset. There is a bubble-crash pattern in which the market price first rises above the fundamental value, and then crashes towards the end of the experiment, when the fundamental price goes to 0. Haruvy et al. (2007) ask the subjects to provide price predictions for every future period at the beginning of each period (namely, in period 1 they predict the prices in each of periods 1–15, in period 2 the prices in each of periods 2–15, and so on). They find that the subjects do not predict the fundamental value of the asset. When the subjects play in the market for the first time, they tend to predict that the past price trend will continue; once they have become experienced, they are able to predict the downturn of the price at the end, but they still overestimate the number of periods before the downturn happens. Peterson (1993) elicits one-period-ahead forecasts in a similar experimental market. He finds that the subjects also fail to form rational expectations, but there is evidence of learning over the periods. More recently, Akiyama, Hanaki, and Ishikawa (2014) set up a forecast only (FO) treatment in which both experienced and inexperienced subjects are invited to make one-period-ahead forecasts for a similar market populated with other subjects. They find that the initial price predictions of the experienced subjects are significantly closer to the fundamental price than those of the inexperienced subjects, but the difference becomes smaller in later periods.
Price–Quantity Setting under Monopolistic Competition

Assenza, Grazzini et al. (2014) present results from 50-round learning-to-optimize experimental markets in which firms repeatedly decide on both the price and the quantity of a perishable good. The experiment is designed to study the price–quantity setting behavior of subjects acting as firms under monopolistic competition. In particular, each firm $i$ in a market of $N = 10$ firms faces in every period a demand curve of the form

$$q_i = \alpha - \beta p_i + \theta \bar{p}$$

where $q_i$ represents the demand for the good produced by firm $i$, $p_i$ is the price set by firm $i$, and $\bar{p} = \frac{1}{N}\sum_{i=1}^{N} p_i$ is the average market price. All firms face a common constant marginal cost $c$.24 Subjects are endowed with qualitative information about the market structure but they do not know the
functional form of the demand for their product. Assenza, Grazzini et al. (2014) are interested in understanding whether subjects in the experiment converge to the monopolistically competitive (MC) outcome without advance knowledge of the demand function and production set. Moreover, they analyze the price–quantity setting strategies used by subjects in response to signals from the firms' internal conditions (individual profits and excess demand/supply) and from the market environment (the aggregate price level). Assenza et al. implement two treatments that differ in the information sets available to subjects. In Treatment 1, subjects observe the average market price, their own price, production, sales, profits, and excess supply up to period t − 1. In Treatment 2, subjects have the same informational structure, but in addition firms can also observe excess demand, that is, the portion of demand they were not able to satisfy given their price–quantity decisions and the average market price. The comparison between Treatments 1 and 2 makes it possible to assess the impact of alternative information sets and, ultimately, of different market structures. Finally, given that the expected market price is an important variable in firms' decisions on how much to produce and at which price to sell, the authors also elicit expectations of the average market price. Overall, Assenza et al. report convergence of average prices and quantities to (a neighborhood of) the MC equilibrium in both treatments.
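Although the paper's exact parameter values are not reproduced here, the symmetric MC equilibrium implied by the stated demand system is easy to derive: each firm maximizes $(p_i - c)(\alpha - \beta p_i + \theta \bar{p})$ taking $\bar{p}$ as given, and imposing symmetry in the first-order condition gives $p^* = (\alpha + \beta c)/(2\beta - \theta)$. The sketch below, with hypothetical parameters, illustrates the computation.

```python
# Symmetric MC equilibrium under q_i = alpha - beta*p_i + theta*pbar with
# common marginal cost c. FOC (taking pbar as given):
# alpha - 2*beta*p_i + theta*pbar + beta*c = 0; symmetry (p_i = pbar) gives
# p* = (alpha + beta*c) / (2*beta - theta). Parameter values are hypothetical.
def mc_equilibrium(alpha, beta, theta, c):
    p_star = (alpha + beta * c) / (2.0 * beta - theta)
    q_star = alpha - beta * p_star + theta * p_star  # demand at symmetric prices
    return p_star, q_star

print(mc_equilibrium(alpha=30.0, beta=2.0, theta=1.0, c=4.0))
```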
Fig. 14 reports the median (over four markets per treatment) of the absolute difference between the MC equilibrium and the realized prices and quantities.

Fig. 14. Left Panel: Median of the Absolute Difference between the Average Price and the MC Equilibrium Price. Right Panel: Median of the Absolute Difference between the Average Quantity and the MC Equilibrium Quantity.
In the case of prices, the authors report a significant difference between treatments, with a higher observed degree of convergence in Treatment 1. Quantities show no significant difference in the degree of convergence between treatments. Although average prices and quantities show a tendency towards equilibrium, Assenza, Grazzini et al. (2014) find substantial heterogeneity among individual price and quantity decisions. Fig. 15 shows the median of the standard deviations of individual decisions for each period over the four markets of each treatment. A low standard deviation implies a high level of coordination among the subjects. For both price and quantity, the authors report a statistically significantly higher degree of coordination among individual decisions in Treatment 2 than in Treatment 1.

In order to gain further insights into aggregate market behavior and to explain individual price and quantity setting decisions, the authors estimate the following behavioral model:

$$p^e_{i,t} = c + \alpha_1 p_{t-1} + \alpha_2 p^e_{i,t-1} + \alpha_3 p_{t-2} + \epsilon_t$$
$$p_{i,t} = c + \beta_1 p_{i,t-1} + \beta_2 p^e_{i,t} + \beta_3 \Pi_{i,t-1} + \beta_4 S_{i,t-1} + u_t$$
$$q_{i,t} = c + \gamma_1 q_{i,t-1} + \gamma_2 p_{i,t} + \gamma_3 p^e_{i,t} + \gamma_4 S_{i,t-1} + \eta_t$$

where $p$ refers to realizations of the aggregate price, $p^e_i$ to individual forecasts of the aggregate price, $p_i$ to individual prices, $S_i$ to individual excess supply/demand, and $q_i$ to individual quantities. The variable $\Pi_i$ is a profit-feedback proxy defined as $\Pi_i = \Delta p_i \cdot \mathrm{sign}(\Delta \pi_i)$, where $\pi_i$ are individual profits and $\Delta$ is the first-order difference operator.
Fig. 15. Left Panel: Median of the Standard Deviations of Individual Prices. Right Panel: Median of the Standard Deviations of Individual Quantities.
Assenza et al. identify three types of behavioral strategies on the basis of the estimated price-setting rules: market followers, that is, subjects for whom $\beta_3 = \beta_4 = 0$; profit-adjusters, that is, subjects for whom $\beta_3 > 0$ and $\beta_4 = 0$; and demand-adjusters, that is, subjects for whom $\beta_3 = 0$ and $\beta_4 > 0$. Overall, 46% of the subjects are market followers, 28% are profit-adjusters, and 26% are demand-adjusters. The authors investigate the impact of each behavioral type on market dynamics by means of simulations. The main findings can be summarized as follows: (a) profit-adjusters play a key role in pushing the experimental market towards the MC equilibrium; (b) the anchor term, that is, the first three components of the price-setting rule, is important in determining the (long run) equilibrium price and its stability; (c) demand-adjusters move their price in the direction that reduces excess supply, acting as loss-minimizers.

Although firms display a higher level of coordination in Treatment 2, Fig. 14 shows that, in the same treatment, the difference between the market price and the MC equilibrium is also higher. Assenza et al. attribute the different market behavior in the two treatments to the different information sets available to firms. In particular, the limited information in Treatment 1 gives subjects an incentive to “explore” the demand function by experimenting with different prices. To support this hypothesis, the authors construct a proxy for individual exploration via price experimentation and confirm their conjecture. The lower level of exploration in Treatment 2 leads firms to rely on the information conveyed by market prices in their price-setting decisions. This results in a higher level of coordination among firms, but at the same time it slows down convergence to the MC equilibrium. Inertia in the price-setting behavior of a significant share of firms (i.e., demand-adjusters with less incentive to explore, and market followers) prevents profit-adjusters from pushing the market towards the MC equilibrium: profit-adjusters may realize low profits if they deviate too much from the average price, even if they move in a direction corresponding to a positive slope of the profit function, causing the market to lock into suboptimal regions. This explains the stylized fact of a lower degree of convergence to equilibrium in Treatment 2.
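The classification into the three types can be illustrated with a simple OLS estimation of the price-setting rule. The sketch below uses statsmodels and a 5% significance cutoff; the variable names, the cutoff, and the synthetic data are our assumptions, not the authors' code.

```python
# Estimate p_it = c + b1*p_{i,t-1} + b2*pe_it + b3*Pi_{i,t-1} + b4*S_{i,t-1}
# and classify a subject by the significance and sign of b3 and b4.
import numpy as np
import statsmodels.api as sm

def classify(p_lag, pe, profit_proxy_lag, excess_supply_lag, p):
    X = sm.add_constant(np.column_stack([p_lag, pe, profit_proxy_lag, excess_supply_lag]))
    fit = sm.OLS(p, X).fit()
    sig = fit.pvalues < 0.05  # columns: const, beta1, beta2, beta3, beta4
    b3, b4 = fit.params[3], fit.params[4]
    if not sig[3] and not sig[4]:
        return "market follower"
    if sig[3] and b3 > 0 and not sig[4]:
        return "profit-adjuster"
    if sig[4] and b4 > 0 and not sig[3]:
        return "demand-adjuster"
    return "unclassified"

# Synthetic illustration: a subject whose price reacts to the profit proxy
rng = np.random.default_rng(1)
T = 50
p_lag = rng.normal(10, 1, T); pe = p_lag + rng.normal(0, 0.2, T)
profit = rng.normal(0, 1, T); supply = rng.normal(0, 1, T)
p = 1 + 0.5 * p_lag + 0.4 * pe + 0.6 * profit + rng.normal(0, 0.1, T)
print(classify(p_lag, pe, profit, supply, p))  # likely "profit-adjuster"
```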
CONCLUDING REMARKS

This article surveys laboratory experiments on expectation formation in macroeconomics and finance. We summarize three important findings of this literature, focusing on key differences in the experimental designs:
(1) exogenous time series versus endogenous expectations feedback, (2) positive versus negative feedback, and (3) learning-to-optimize versus learning-to-forecast.

To the best of our knowledge, there is no systematic study comparing forecasts of an exogenously given time series with forecasts of a time series generated within an endogenous expectations feedback environment. The following comparison may be instructive, however. In one of the treatments of Hey (1994), subjects must forecast an exogenous stochastic AR(1) process $x_t = 50 + \rho(x_{t-1} - 50) + \epsilon_t$, with mean 50 and persistence coefficient $\rho = +0.9$. Using individual forecast series, Hey estimates simple trend-extrapolating rules of the form

$$x^e_t = \alpha_1 x_{t-1} + \alpha_2 (x_{t-1} - x_{t-2}) \qquad (28)$$
and for most subjects finds a coefficient $\alpha_1$ not significantly different from 1. The coefficient $\alpha_2$ varies across subjects and takes positive as well as negative values for different subjects. For about one third of the subjects the coefficient is significantly different from 0, either positive or negative, and Hey presents examples of −0.27 and +0.21.25 This means that for a simple exogenous AR(1) process, subjects learn a simple trend-extrapolating rule, but they disagree about the sign of the trend coefficient: some subjects are trend-followers, while others are contrarians and go against the trend. Heemeijer et al. (2009) run LtFEs with endogenous expectations feedback and a linear feedback map

$$x_t = 60 + \rho\left( \bar{x}^e_t - 60 \right) + \epsilon_t \qquad (29)$$
where $\bar{x}^e_t$ is the average forecast of a group of individuals, $\epsilon_t$ is a (small) noise term, and the slope coefficient of the linear feedback map is either positive ($\rho = 0.95$) or negative ($\rho = -0.95$). Heemeijer et al. (2009) also fit first-order linear rules to the individual forecast series. In the positive feedback treatment, for most subjects they find a positive trend coefficient, ranging from 0.27 to 0.94. Apparently, in self-referential positive feedback systems, subjects learn to coordinate on (strong) trend-extrapolating rules. In the negative feedback treatment, the estimated trend coefficient is in most cases not significant, and in the few significant cases it is negative.
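A small simulation illustrates the contrast drawn here. Under the feedback map (29) with ρ = 0.95, naive forecasts converge to the steady state, whereas a trend-extrapolating rule in the spirit of Eq. (28) keeps the near-unit-root system oscillating; the trend coefficient of 1 below is an illustrative choice, not an estimate from the experiments.

```python
# Positive feedback map x_t = 60 + 0.95*(x^e_t - 60) + eps_t under two rules:
# naive forecasts (x^e_t = x_{t-1}) versus trend extrapolation
# x^e_t = x_{t-1} + trend*(x_{t-1} - x_{t-2}).
import numpy as np

def simulate(rho=0.95, trend=0.0, T=50, seed=7):
    rng = np.random.default_rng(seed)
    x = [50.0, 52.0]  # arbitrary starting values away from the steady state 60
    for _ in range(T):
        forecast = x[-1] + trend * (x[-1] - x[-2])
        x.append(60.0 + rho * (forecast - 60.0) + rng.normal(0.0, 0.25))
    return np.array(x)

naive, extrapolators = simulate(trend=0.0), simulate(trend=1.0)
print(np.round(naive[-5:], 1))          # settles near 60
print(np.round(extrapolators[-5:], 1))  # keeps oscillating around 60
```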
This brings us to the second important finding: the difference in aggregate behavior between positive and negative expectations feedback systems. A frequently heard argument in macroeconomics and finance is that at the aggregate level expectations should be rational, because individual errors wash out on average at the aggregate level. The evidence from laboratory experiments, however, shows that this is only true under negative expectations feedback, not under positive feedback. LtFEs show that under positive feedback (small) individual errors may become strongly correlated, individual expectations may coordinate on prices very different from the rational expectations benchmark, and prices do not converge but rather fluctuate around the fundamental. Surprisingly, oscillating prices already arise in positive feedback systems with a slope coefficient less than 1, for example, 0.95. Most adaptive learning algorithms, including the simple naive expectations rule, would predict convergence to equilibrium in this case, but the experiments show that at the aggregate level prices may oscillate persistently. For an intuitive explanation we refer the reader once more to the graph of the positive feedback map (Fig. 4): for a near-unit-root linear feedback map, the graph almost coincides with the diagonal, so that every point is almost a steady state. As a consequence, at any moment in time, any price forecast is almost self-fulfilling. Subjects may then easily coordinate, with small forecasting errors, on a dynamic price pattern very different from the unique RE price steady state. In the case of 2-period-ahead forecasts, as is common in temporary equilibrium models in macroeconomics and finance such as the NK DSGE and asset pricing frameworks, the LtFE takes the form (cf. Eq. (7))
$$x_t = 60 + \rho\left( \bar{x}^e_{t+1} - 60 \right) + \epsilon_t \qquad (30)$$
where $\bar{x}^e_{t+1}$ is the average 2-period-ahead forecast of a group of individuals. In this setup price volatility increases strongly, with large bubbles and crashes (as in Fig. 2). In this type of temporary equilibrium framework the coefficient $\rho$ often represents a discount factor close to 1, so that the system exhibits strong positive feedback.

Finally, let us discuss the differences between the learning-to-forecast and learning-to-optimize designs. Price oscillations, with bubbles and crashes, have been observed in many experimental studies within a learning-to-forecast design. This design fits models in which consumption, savings, production, and investment quantity decisions are optimal, given subjective forecasts. Recent experimental studies with a learning-to-optimize design show that convergence to RE may be even slower and instability under positive feedback may be even stronger. For subjects in laboratory experiments, learning to optimize seems even more difficult than learning to forecast. This experimental evidence calls for relaxing the rationality
assumption in utility, profit, and portfolio optimization, and for more realistic modeling of boundedly rational, heterogeneous decision heuristics in macroeconomics and finance. Much work remains to be done on the empirical validation of individual expectations and aggregate behavior in experimental economic feedback systems. An important open question, for example, is the robustness of these results in large groups. The fact that expectations at the aggregate level may persistently deviate from rationality has important policy implications. Policy analysis is often based on RE models, but if RE fails the empirical test of simple laboratory environments, can we trust macroeconomic and financial policy analysis based on the rational paradigm? This survey indicates a potentially successful strategy for policy to manage self-referential systems of heterogeneous boundedly rational agents: in order to stabilize macroeconomic or financial expectations feedback systems, a policy should add negative feedback to the system and weaken the positive feedback, so that coordination on destabilizing trend-following behavior becomes less likely and the system is more likely to coordinate on stabilizing adaptive expectations.
NOTES

1. See Duffy (2008) for a survey on experiments in macroeconomics.
2. Subjects also had to provide a forecast of their confidence in their own forecast and faced higher costs (i.e., lower earnings) when their forecasts were outside their confidence intervals.
3. One treatment had a structural break, with the coefficient b switching from 0.1 to 0.8.
4. The robot traders can be considered fundamental traders who always buy when the price is below, and sell when the price is above, the fundamental price. Their weight nt increases when the price deviates more from the REE, so that the price does not “explode.” The intuition behind the increasing weight is that the more the price deviates from the REE, the less likely it is that the deviation will persist. Knowing this, more fundamental traders will join because they think that mean reversion of the price has become more likely.
5. Our positive and negative feedback LtFEs may be viewed as repeated guessing games or beauty contest games as introduced in Nagel (1995). Sutan and Willinger (2009) and Williams (1987) study beauty contest games with negative feedback (i.e., players' actions are strategic substitutes). Moreover, positive feedback is similar to strategic complements, while negative feedback is similar to strategic substitutes in terms of the strategic environment. Fehr and Tyran (2005, 2008) show that, after an exogenous shock, the market price converges faster to the new fundamental price in an environment with strategic substitutes than with strategic complements.
Potters and Suetens (2009) show that it is easier for a group to achieve full cooperation under strategic complements than under strategic substitutes.
6. In both treatments, the absolute value of the slope is 0.95, implying in both cases that the feedback system is stable under naive expectations. Leitner and Schmidt (2007) study a LtFE in an experimental foreign exchange market, which is a positive feedback system with a slope of +1. For all markets, the realized exchange rate is highly correlated with the (small) noise shocks. Similar to Heemeijer et al. (2009), they fit simple linear expectations rules to subjects' forecast series and find evidence for adaptive, naïve, and trend-following expectations rules amplifying fluctuations. Sonnemans and Tuinstra (2010) show that the feedback strength is important for the (in)stability of the positive feedback experiments and that fast convergence to the fundamental is obtained for feedback strength +0.67.
7. See Hommes (2013, 2014) for further discussion of coordination on almost self-fulfilling equilibria in positive feedback systems and the relation to Soros' notion of reflexivity.
8. Hommes et al. (2000) conduct individual learning-to-forecast experiments with large permanent shocks to the fundamental price in the cobweb model. More recently, Bao and Duffy (2014) conduct a learning-to-forecast experiment with both individual and group settings where the subjects have complete information about the model of the economy.
9. Wagener (2014) uses the same experimental data and shows weak individual rationality (i.e., unbiased forecast errors without autocorrelations) for both the negative and positive feedback treatments, but strong rationality (i.e., prices converging to the homogeneous REE price) only under negative feedback.
10. Anufriev, Hommes, and Philipse (2013) fit a HSM with two heuristics, adaptive expectations versus a trend-following rule, to the positive–negative expectations feedback experiments of Heemeijer et al. (2009).
11. Anufriev and Hommes (2012) used two different trend-following rules in their model, a weak and a strong trend-following rule, to describe asset pricing experiments with positive feedback. Because of the negative feedback treatment in Bao et al. (2012), one trend-following rule was replaced by a contrarian rule, that is, a rule with a negative coefficient (−0.3), which is able to detect the (short run) up-and-down price oscillations characteristic of negative feedback markets.
12. Marimon et al. (1993) is also a pioneering work in experiments on sunspot equilibria, followed by Duffy and Fisher (2005).
13. Detailed derivations of the NK model can be found in Woodford (2003), among others.
14. Other contributions to LtFEs are Arifovic and Sargent (2003) and Cornand and M'baye (2013), which are discussed in detail in the survey on experiments on monetary policy and central banking by Cornand and Heinemann (2014).
15. The analysis performed in Pfajfar and Zakelj (2014) uses the experimental data collected in Pfajfar and Zakelj (2011).
16. In total 216 subjects participated in the experiment of Assenza et al. (2011), divided into 18 experimental economies with 12 subjects each (6 subjects forecasting inflation and 6 subjects forecasting the output gap). Each participant submitted forecasts for 50 consecutive periods.
17. In total 216 subjects participated in the experiment of Pfajfar and Zakelj (2011), divided into 24 independent groups of 9 subjects each. Each participant submitted inflation forecasts for 70 consecutive periods.
18. In Kryvtsov and Petersen (2013) the cost-push shock ut = 0 in every period t.
19. Kryvtsov and Petersen (2013) elicit expectations on both inflation and the output gap, and they assume an AR(1) process for the exogenous driving process.
20. See also Roos and Luhan (2013) for a recent LtOE and LtFE in an experimental macroeconomy with monopolistic firms and labor unions.
21. Note that the difference between Treatments 1 and 2 can be the result of the joint force of the difference in tasks and the difference in payoff structures. In order to better isolate the effect of each of them, there was another treatment (Treatment 5) in Bao et al. (2013) where the subjects make forecasts but are paid according to profits. The result of that treatment lies between those of Treatments 1 and 2. It seems that the nature of the task and the payoff structure play equally important roles in causing the treatment difference.
22. Bostian and Holt (2013) developed web-based experimental software for LtO classroom experiments on asset bubbles in an environment with a constant fundamental value.
23. Notice that this is a one-period-ahead LtFE, in contrast to the two-period-ahead asset pricing experiments in Hommes et al. (2005, 2008). The reason is that for a one-period-ahead design the payoff table in the corresponding LtOE is two-dimensional, depending on the quantity and the realized return. For a two-period-ahead LtFE, the corresponding payoff table for the LtOE would depend on three variables due to an extra time lag. Therefore, the LtFE treatment in Bao et al. (2014) is not directly comparable to Hommes et al. (2005, 2008), but more comparable to the positive feedback treatment in Heemeijer et al. (2009), which is more stable than Hommes et al. (2005, 2008). Instead of very large bubbles and crashes, the asset price in many markets in Heemeijer et al. (2009) shows a mild upward trend, which typically overshoots the REE.
24. Davis and Korenok (2011) use a similar experimental monopolistically competitive market setup to investigate the capacity of price and information frictions to explain real responses to nominal price shocks.
25. Hey (1994) does not report all estimates.
ACKNOWLEDGMENT

We would like to thank the Editor John Duffy and two anonymous referees for detailed and helpful comments on an earlier draft. We gratefully acknowledge financial support from the EU FP7 projects “Complexity Research Initiative for Systemic Instabilities” (CRISIS, Grant No. 288501), “Macro-Risk Assessment and Stabilization Policies with New Early Warning Signals” (Rastanews, Grant No. 320278),
“Integrated Macro-Financial Modelling for Robust Policy Design” (MACFINROBODS, Grant No. 612796) and the INET-CIGI Research Grant “Heterogeneous Expectations and Financial Crises” (HExFiCs, Grant No. INO1200026).
REFERENCES

Adam, K. (2007). Experimental evidence on the persistence of output and inflation. Economic Journal, 117, 603–635.
Akerlof, G. A., & Shiller, R. J. (2009). Animal spirits: How human psychology drives the economy, and why it matters for global capitalism. Princeton, NJ: Princeton University Press.
Akiyama, E., Hanaki, N., & Ishikawa, R. (2014). How do experienced traders respond to inflows of inexperienced traders? An experimental analysis. Journal of Economic Dynamics and Control, 45, 1–18.
Anufriev, M., & Hommes, C. H. (2012). Evolutionary selection of individual expectations and aggregate outcomes in asset pricing experiments. American Economic Journal: Microeconomics, 4(4), 35–64.
Anufriev, M., Hommes, C. H., & Philipse, R. (2013). Evolutionary selection of expectations in positive and negative feedback markets. Journal of Evolutionary Economics, 23, 663–688.
Arifovic, J. (1996). The behavior of the exchange rate in the genetic algorithm and experimental economies. Journal of Political Economy, 104, 510–541.
Arifovic, J., & Sargent, T. (2003). Laboratory experiments with an expectational Phillips curve. In D. Altig & B. Smith (Eds.), The origins and evolution of central banking: Volume to inaugurate the institute on central banking of the Federal Reserve Bank of Cleveland (pp. 23–56). United Kingdom: Cambridge University Press.
Asparouhova, E., Bossaerts, P., Eguia, J., & Zame, W. (2014). Asset prices and asymmetric reasoning. Working Paper No. 14/640. Department of Economics, University of Bristol, UK.
Asparouhova, E., Bossaerts, P., Roy, N., & Zame, W. (2013). ‘Lucas’ in the laboratory. National Bureau of Economic Research Working Paper No. w19068.
Asparouhova, E., Hertzel, M., & Lemmon, M. (2009). Inference from streaks in random outcomes: Experimental evidence on beliefs in regime shifting and the law of small numbers. Management Science, 55(11), 1766–1782.
Assenza, T., Grazzini, J., Hommes, C. H., & Massaro, D. (2014). PQ strategies in monopolistic competition: Some insights from the lab. Journal of Economic Dynamics and Control. doi:10.1016/j.jedc.2014.08.017
Assenza, T., Heemeijer, P., Hommes, C. H., & Massaro, D. (2011). Individual expectations and aggregate macro behavior. CeNDEF Working Paper No. 2011-01. University of Amsterdam.
Assenza, T., Heemeijer, P., Hommes, C. H., & Massaro, D. (2014). Managing self-organization of expectations through monetary policy: A macro experiment. CeNDEF Working Paper No. 14-07. University of Amsterdam.
Bao, T., & Duffy, J. (2014). Adaptive vs. educative learning: Theory and evidence. SOM Research Report, University of Groningen.
Bao, T., Duffy, J., & Hommes, C. H. (2013). Learning, forecasting and optimizing: An experimental study. European Economic Review, 61, 186–204.
Bao, T., Hommes, C. H., & Makarewicz, T. A. (2014). Bubble formation and (in)efficient markets in learning-to-forecast and -optimize experiments. CeNDEF Working Paper No. 14-01. Universiteit van Amsterdam.
Bao, T., Hommes, C. H., Sonnemans, J., & Tuinstra, J. (2012). Individual expectations, limited rationality and aggregate outcomes. Journal of Economic Dynamics and Control, 36, 1101–1120.
Barberis, N., Shleifer, A., & Vishny, R. (1998). A model of investor sentiment. Journal of Financial Economics, 49(3), 307–343.
Becker, O., Leitner, J., & Leopold-Wildburger, U. (2009). Expectations formation and regime switches. Experimental Economics, 12, 350–364.
Becker, O., & Leopold-Wildburger, U. (1996). Some new Lotka-Volterra experiments. In Operations Research Proceedings 1995 (pp. 482–486). Berlin: Springer.
Beckman, S. R., & Downs, D. H. (2009). Forecasters as imperfect information processors: Experimental and survey evidence. Journal of Economic Behaviour and Organization, 32, 89–100.
Bernasconi, M., & Kirchkamp, O. (2000). Why do monetary policies matter? An experimental study of saving and inflation in an overlapping generations model. Journal of Monetary Economics, 46(2), 315–343.
Bernasconi, M., Kirchkamp, O., & Paruolo, P. (2009). Do fiscal variables affect fiscal expectations? Experiments with real world and lab data. Journal of Economic Behaviour and Organization, 70, 253–265.
Beshears, J., Choi, J. J., Fuster, A., Laibson, D., & Madrian, B. C. (2013). What goes up must come down? Experimental evidence on intuitive forecasting. American Economic Review, 103, Papers and Proceedings, 570–574.
Blanco, M., Engelmann, D., Koch, A. K., & Normann, H. T. (2010). Belief elicitation in experiments: Is there a hedging problem? Experimental Economics, 13(4), 412–438.
Bloomfield, R., & Hales, J. (2002). Predicting the next step of a random walk: Experimental evidence of regime-shifting beliefs. Journal of Financial Economics, 65, 397–414.
Bostian, A. J. A., & Holt, C. A. (2013). Price bubbles with discounting: A web-based classroom experiment. Journal of Economic Education, 40, 27–37.
Branch, W. A. (2004). The theory of rationally heterogeneous expectations: Evidence from survey data on inflation expectations. Economic Journal, 114, 592–621.
Breaban, A., & Noussair, C. N. (2013). Emotional state and market behavior. CentER Working Paper No. 2013-031. Tilburg University.
Brock, W. A., & Hommes, C. H. (1997). A rational route to randomness. Econometrica, 65, 1059–1095.
Brock, W. A., & Hommes, C. H. (1998). Heterogeneous beliefs and routes to chaos in a simple asset pricing model. Journal of Economic Dynamics & Control, 22, 1235–1274.
Bullard, J. (1994). Learning equilibria. Journal of Economic Theory, 64, 468–485.
Campbell, J. Y., Lo, A. W., MacKinlay, A. C., & Lo, A. Y. (1997). The econometrics of financial markets. Princeton, NJ: Princeton University Press.
Case, K. E., Shiller, R. J., & Thompson, A. (2012). What have they been thinking? Home buyer behavior in hot and cold markets. Working Paper No. w18400. National Bureau of Economic Research.
Colander, D., Goldberg, M., Haas, A., Juselius, K., Kirman, A., Lux, T., & Sloth, B. (2009). The financial crisis and the systemic failure of the economics profession. Critical Review, 21(2–3), 249–267.
Cornand, C., & Heinemann, F. (2014). Experiments on monetary policy and central banking. In R. M. Isaac, D. Norton, & J. Duffy (Eds.), Experiments in macroeconomics (Vol. 17, pp. 167–227). Research in Experimental Economics. Bingley, UK: Emerald Group Publishing Limited.
Cornand, C., & M'baye, C. K. (2013). Does inflation targeting matter? An experimental investigation. GATE Working Paper No. 2013-30.
Crockett, S., & Duffy, J. (2013). An experimental test of the Lucas asset pricing model. Working Paper No. 504. Department of Economics, University of Pittsburgh.
Davis, D., & Korenok, O. (2011). Nominal price shocks in monopolistically competitive markets: An experimental analysis. Journal of Monetary Economics, 58, 578–589.
De Grauwe, P. (2009). What's wrong with modern macroeconomics. Top-down versus bottom-up macroeconomics. CESifo Economic Studies, 56(4), 465–497. doi:10.1093/cesifo/ifq014
De Grauwe, P. (2012). Lectures on behavioral macroeconomics. Princeton, NJ: Princeton University Press.
Duffy, J. (2008). Experimental macroeconomics. In S. Durlauf & L. Blume (Eds.), The new Palgrave dictionary of economics (2nd ed.). New York, NY: Palgrave Macmillan.
Duffy, J., & Fisher, E. O. (2005). Sunspots in the laboratory. American Economic Review, 95, 510–529.
Dwyer, G. P., Williams, A. W., Battalio, R. C., & Mason, T. I. (1993). Tests of rational expectations in a stark setting. Economic Journal, 103, 586–601.
Evans, G. W., & Honkapohja, S. (2001). Learning and expectations in macroeconomics. Princeton, NJ: Princeton University Press.
Fehr, E., & Tyran, J. R. (2005). Individual irrationality and aggregate outcomes. Journal of Economic Perspectives, 19, 43–66.
Fehr, E., & Tyran, J. R. (2008). Limited rationality and strategic interaction: The impact of the strategic environment on nominal inertia. Econometrica, 76(2), 353–394.
Fisher, F. M. (1962). A priori information and time series analysis. Amsterdam: North-Holland.
Gaechter, S., & Renner, E. (2010). The effects of (incentivized) belief elicitation in public goods experiments. Experimental Economics, 13(3), 364–377.
Harrison, J. M., & Kreps, D. M. (1978). Speculative investor behavior in a stock market with heterogeneous expectations. The Quarterly Journal of Economics, 92(2), 323–336.
Haruvy, E., Lahav, Y., & Noussair, C. N. (2007). Traders' expectations in asset markets: Experimental evidence. American Economic Review, 97(5), 1901–1920.
Heemeijer, P., Hommes, C. H., Sonnemans, J., & Tuinstra, J. (2009). Price stability and volatility in markets with positive and negative expectations feedback. Journal of Economic Dynamics and Control, 33, 1052–1072.
Heemeijer, P., Hommes, C., Sonnemans, J., & Tuinstra, J. (2012). An experimental study on expectations and learning in overlapping generations models. Studies in Nonlinear Dynamics & Econometrics, 16(4).
Hey, J. (1994). Expectations formation: Rational or adaptive or …? Journal of Economic Behavior & Organization, 25, 329–349.
Hommes, C., Sonnemans, J., & van de Velden, H. (2000). Expectation formation in a cobweb economy: Some one person experiments. In D. Delli Gatti, M. Gallegati, & A. P. Kirman (Eds.), Interaction and market structure (pp. 253–266). Berlin: Springer Verlag.
Hommes, C., Sonnemans, J., Tuinstra, J., & van de Velden, H. (2007). Learning in cobweb experiments. Macroeconomic Dynamics, 11(S1), 8–33.
Hommes, C. H. (2011). The heterogeneous expectations hypothesis: Some evidence from the lab. Journal of Economic Dynamics and Control, 35, 1–24.
Hommes, C. H. (2013). Behavioral rationality and heterogeneous expectations in complex economic systems. Cambridge, MA: Cambridge University Press.
Hommes, C. H. (2013). Reflexivity, expectations feedback and almost self-fulfilling equilibria: Economic theory, empirical evidence and laboratory experiments. Journal of Economic Methodology, 20, 406–419.
Hommes, C. H. (2014). Behaviorally rational expectations and almost self-fulfilling equilibria. Review of Behavioral Economics, 1, 75–97.
Hommes, C. H., Sonnemans, J. H., Tuinstra, J., & van de Velden, H. (2005). Coordination of expectations in asset pricing experiments. Review of Financial Studies, 18(3), 955–980.
Hommes, C. H., Sonnemans, J. H., Tuinstra, J., & van de Velden, H. (2008). Expectations and bubbles in asset pricing experiments. Journal of Economic Behavior and Organization, 67, 116–133.
Hüsler, A., Sornette, D., & Hommes, C. H. (2013). Super-exponential bubbles in lab experiments: Evidence for anchoring over-optimistic expectations on price. Journal of Economic Behavior and Organization, 92, 304–316.
Kelley, H., & Friedman, D. (2002). Learning to forecast price. Economic Inquiry, 40, 556–573.
Kirman, A. (2010). Complex economics: Individual and collective rationality. Oxford: Routledge.
Leitner, J., & Schmidt, R. (2007). Expectations formation in an experimental foreign exchange market. Central European Journal of Operations Research, 15, 167–184.
Lim, S. S., Prescott, E. C., & Sunder, S. (1994). Stationary solution to the overlapping generations model of fiat money: Experimental evidence. Empirical Economics, 19, 255–277.
Lucas, R. (1972). Expectations and the neutrality of money. Journal of Economic Theory, 4, 103–124.
Marimon, R., & Sunder, S. (1993). Indeterminacy of equilibria in a hyperinflationary world: Experimental evidence. Econometrica, 61(5), 1073–1107.
Marimon, R., & Sunder, S. (1994). Expectations and learning under alternative monetary regimes: An experimental approach. Economic Theory, 4, 131–162.
Marimon, R., & Sunder, S. (1995). Does a constant money growth rule help stabilize inflation? Carnegie-Rochester Conference Series on Public Policy, 43, 111–156.
Marimon, R., Spear, S. E., & Sunder, S. (1993). Expectationally driven market volatility: An experimental study. Journal of Economic Theory, 61, 74–103.
Muth, J. F. (1961). Rational expectations and the theory of price movements. Econometrica, 29, 315–335.
Nagel, R. (1995). Unraveling in guessing games: An experimental study. American Economic Review, 85, 1313–1326.
Noussair, C., Plott, C., & Riezman, R. (2007). Production, trade, prices, exchange rates and equilibration in large experimental economies. European Economic Review, 51(1), 49–76.
Noussair, C. N., & Tucker, S. (2013). Experimental research on asset pricing. Journal of Economic Surveys, 27(3), 554–569.
Nyarko, Y., & Schotter, A. (2002). An experimental study of belief learning using elicited beliefs. Econometrica, 70(3), 971–1005.
Peterson, S. P. (1993). Forecasting dynamics and convergence to market fundamentals: Evidence from experimental asset markets. Journal of Economic Behavior and Organization, 22(3), 269–284.
Pfajfar, D., & Zakelj, B. (2011). Inflation expectations and monetary policy design: Evidence from the laboratory. CentER Discussion Paper No. 2011-091. Tilburg University.
Pfajfar, D., & Zakelj, B. (2014). Experimental evidence on inflation expectation formation. Journal of Economic Dynamics and Control, 44, 147–168.
Potters, J., & Suetens, S. (2009). Cooperation in experimental games of strategic complements and substitutes. Review of Economic Studies, 76(3), 1125–1147.
Rabin, M. (2002). Inference by believers in the law of small numbers. Quarterly Journal of Economics, 117, 775–816.
Roos, M. W. M., & Luhan, W. J. (2013). Information, learning and expectations in an experimental model economy. Economica, 80, 513–531.
Rubinstein, A. (2007). Instinctive and cognitive reasoning: A study of response times. Economic Journal, 117, 1243–1259.
Rutström, E. E., & Wilcox, N. T. (2009). Stated beliefs versus inferred beliefs: A methodological inquiry and experimental test. Games and Economic Behavior, 67(2), 616–632.
Sargent, T. J. (1993). Bounded rationality in macroeconomics. New York, NY: Oxford University Press.
Schmalensee, R. (1976). An experimental study of expectation formation. Econometrica, 44, 17–41.
Schotter, A., & Trevino, I. (2014). Belief elicitation in the lab. Annual Review of Economics, 6, 103–128.
Simon, H. A. (1957). Models of man: Social and rational-mathematical essays on rational human behavior in a social setting. New York, NY: Wiley.
Shiller, R. J. (1990). Speculative prices and popular models. Journal of Economic Perspectives, 4, 55–65.
Shiller, R. J. (2000). Irrational exuberance. Princeton, NJ: Princeton University Press.
Smith, V. L., Suchanek, G. L., & Williams, A. W. (1988). Bubbles, crashes and endogenous expectations in experimental spot asset markets. Econometrica, 56, 1119–1151.
Sonnemans, J., & Tuinstra, J. (2010). Positive expectations feedback experiments and number guessing games as models of financial markets. Journal of Economic Psychology, 31(6), 964–984.
Soros, G. (2003). The alchemy of finance. Hoboken, NJ: Wiley.
Soros, G. (2009). The crash of 2008 and what it means: The new paradigm for financial markets. Public Affairs.
Stöckl, T., Huber, J., & Kirchler, M. (2010). Bubble measures in experimental asset markets. Experimental Economics, 13(3), 284–298.
Sutan, A., & Willinger, M. (2009). Guessing with negative feedback: An experiment. Journal of Economic Dynamics and Control, 33, 1123–1133.
Tirole, J. (1982). On the possibility of speculation under rational expectations. Econometrica, 50(5), 1163–1181.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Wagener, F. O. O. (2014). Expectations in experiments. Annual Review of Economics, 6, 421–443.
Williams, A. W. (1987). The formation of price forecasts in experimental markets. Journal of Money, Credit and Banking, 19, 1–18.
Woodford, M. (1990). Learning to believe in sunspots. Econometrica, 58, 277–307.
Woodford, M. (2003). Interest and prices: Foundations of a theory of monetary policy. Princeton, NJ: Princeton University Press.
Xiong, W., & Yan, H. (2010). Heterogeneous expectations and bond markets. Review of Financial Studies, 23(4), 1433–1466.
PERSISTENCE OF SHOCKS IN AN EXPERIMENTAL DYNAMIC STOCHASTIC GENERAL EQUILIBRIUM ECONOMY

Charles N. Noussair, Damjan Pfajfar and Janos Zsiros

ABSTRACT

We design experimental economies based on a New Keynesian Dynamic Stochastic General Equilibrium (DSGE) model. We apply shocks to tastes, productivity, and interest rate policy, and measure the persistence of these shocks. We find that, in a setting where goods are perfect substitutes, there is little persistence of output shocks compared to treatments with monopolistic competition, which perform similarly irrespective of whether or not menu costs are present. Discretionary central banking is associated with greater persistence than automated instrumental rules.

Keywords: Experimental economics; DSGE economy; monetary policy; menu costs
Experiments in Macroeconomics
Research in Experimental Economics, Volume 17, 71–108
Copyright © 2014 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0193-2306/doi:10.1108/S0193-230620140000017003
INTRODUCTION

New Keynesian dynamic stochastic general equilibrium (DSGE) models (see Clarida, Galí, & Gertler, 1999) are the principal paradigm currently employed for central bank policymaking. Price frictions are necessary to reproduce the persistence of shocks to output, productivity, and interest rates that appear in empirical data (see e.g., Christiano, Eichenbaum, & Evans, 2005; Clarida et al., 1999; Rotemberg & Woodford, 1997; Smets & Wouters, 2007). Assuming that firms have menu costs under monopolistic competition is one way of generating the requisite price frictions (Ball & Mankiw, 1995; Barro, 1972; Mankiw, 1985; Rotemberg, 1982). Monopolistic competition ensures that firms earn profits, and thus that they have some discretion in the timing and magnitude of changes in the prices they set. This structure reconciles the empirical data with the assumptions of optimizing representative households and firms who have rational expectations.

In the work reported in this article, we construct laboratory economies with a structure based on the DSGE model. We then conduct an experiment in which we introduce shocks to productivity, to preferences, and to interest rate policy. Stylized facts from empirical studies motivate the specific questions we consider. A first set of issues concerns how two particular frictions influence the persistence of shocks (Chari, Kehoe, & McGrattan, 2000; Jeanne, 1998). The frictions are (1) the presence of monopolistic rather than perfect competition, and (2) the existence of menu costs, in the output market. Specifically, we study whether a number of empirical stylized facts can be replicated in our experimental economies. Empirical vector autoregression (VAR) studies show that policy innovations typically generate an inertial response in inflation and a persistent, hump-shaped response in output after a policy shock (see, e.g., Christiano, Eichenbaum, & Evans, 1997; Leeper, Sims, Zha, Hall, & Bernanke, 1996). Moreover, hump-shaped responses in consumption, employment, profits, and productivity, as well as a limited response in the real wage, are robust findings. To match the empirical (conditional) moments of the data, as derived from structural VARs, nominal and real rigidities must be introduced. One way this has been done is through monopolistic competition and menu costs in the output market. Three of our experimental treatments isolate these specific rigidities in our economy. Our Baseline treatment features monopolistic competition, but no menu costs. The Menu Cost treatment includes both monopolistic competition and menu costs.1 The Low Friction treatment has the features that outputs are perfect substitutes and that there are
no menu costs. The design allows us to separate the effects of monopolistic competition and menu costs on shock persistence. The experiment also permits the bounded rationality of agents to create frictions of its own that generate shock persistence. In all of our treatments, human agents act as consumers and producers. Furthermore, in a fourth treatment, Human Central Banker, experimental subjects also act as central bankers and set interest rates.2

It is impossible to implement a model that fully conforms to the NK DSGE model in the laboratory. Several modifications, and the imposition of assumptions on the timing of events, are required to make the model implementable in the laboratory. The modifications we made were guided by evidence in the empirical literature and by the functioning of field economies. The most important differences from the standard NK DSGE model relate to the existence of multiple agents, the explicit sequencing of events within a period, the structure of the demand side of the economy and the creation of monopolistic competition, the possibility of a positive level of savings in the short run, and demand uncertainty on the part of sellers. Due to these modifications, we cannot claim that we put the NK DSGE model itself under scrutiny in the laboratory. However, we believe that several of the changes outlined above could represent important avenues for further development of the theoretical DSGE model.

To investigate the persistence of shocks, we employ both structural VAR (SVAR) and Romer and Romer (2004) analyses. SVAR is a very common and flexible technique in monetary economics, and can be used to study the effects of all of the shocks in our experiment within the same framework. We estimate a trivariate VAR with output, inflation, and interest rates. The Romer and Romer approach is particularly well-suited to some of our treatments, because the shock realizations are observable in our experiment and do not need to be identified. Under this approach we estimate a bivariate VAR with output and inflation, and with the actual shocks as exogenous variables.3

The principal findings are the following. In the setting where goods are perfect substitutes, there is little persistence of output shocks compared to the treatments with monopolistic competition. In all treatments, there is little or no effect of a monetary policy shock. Under the SVAR analysis, output shocks tend to be persistent in both the Baseline and Menu Cost treatments, while inflation shocks are not persistent in either treatment; thus, the presence of menu costs does not generally significantly affect shock persistence under the SVAR analysis. Under the Romer and Romer approach, taste shocks generate persistent effects on inflation, as well as persistent and hump-shaped effects on the output gap. Introducing
menu costs induces a modest increase in the persistence of the effect of a taste shock on inflation. Other findings from the same experiment, unrelated to the study of shock persistence, are reported in the companion papers by Noussair, Pfajfar, and Zsiros (2013a, 2013b).
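To give a sense of the SVAR exercise, the sketch below estimates a trivariate VAR and computes orthogonalized impulse responses with statsmodels; the lag length and the synthetic data standing in for an experimental economy's output, inflation, and interest rate series are illustrative assumptions only.

```python
# Hedged sketch of a trivariate VAR on per-period observations of output,
# inflation, and the nominal interest rate from one experimental economy.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)
T = 60  # periods in one hypothetical experimental economy
data = pd.DataFrame({
    "output": np.cumsum(rng.normal(0, 0.3, T)) * 0.1,
    "inflation": 2.0 + rng.normal(0, 0.2, T),
    "interest": 3.0 + rng.normal(0, 0.2, T),
})

res = VAR(data).fit(maxlags=2)
irf = res.irf(periods=10)   # orthogonalized impulse responses
print(irf.orth_irfs.shape)  # (11, 3, 3): horizon x responding variable x shock
```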
EXPERIMENTAL DESIGN

The actual structure of the experimental economy incorporated some changes from the DSGE structure as commonly understood in macroeconomics. The modifications reflected what we thought was feasible while keeping the complexity manageable for subjects. Some changes concerned the sequence of events within each period. For example, the standard DSGE model has no explicit sequencing of events within periods; however, a functioning laboratory economy requires that some decisions be taken after others. As another example, we did not constrain savings to be zero in every period as in the DSGE literature, but rather allowed for positive levels of savings in the short run and gave an initial amount of money to each agent in the economy.4

Subjects were all undergraduate students at Tilburg University. Four sessions were conducted under each of the four treatments, for a total of 16 sessions. Six subjects participated in each session, with the exception of the sessions of the Human Central Banker treatment, in which there were nine participants. Average final earnings to participants were €43.99. No subject participated in more than one session. Only one treatment was in effect in any session. The experiment was implemented with the z-Tree platform (Fischbacher, 2007). The description below applies to the Baseline treatment. The section “Treatments” indicates how the other three treatments differed from the Baseline.
Consumers

The economy was populated by six agents, I = 3 consumers and J = 3 firms. Consumers and firms are indexed by i and j, respectively. Each consumer had an objective function of the following form:

$$u_{it}(c_{i1t}, c_{i2t}, c_{i3t}, 1 - L_{it}) = \beta^{t}\left\{ \sum_{j=1}^{3} H_{ijt}\,\frac{c_{ijt}^{\,1-\theta}}{1-\theta} \;-\; \alpha\,\frac{L_{it}^{\,1+\varepsilon}}{1+\varepsilon} \right\} \qquad (1)$$
Here, c_{ijt} denotes the consumption of the ith consumer of good j, and L_{it} is the labor supplied by i, at time t. H_{ijt} denotes the taste shock of consumer i for good j. The shock takes a new value in each period, and differs across consumers and goods, according to the process:

$$H_{ijt} = \mu_{ij} + \tau H_{ij,t-1} + \varepsilon_{jt} \qquad (2)$$
The preference shocks follow an AR(1) process, and ε_{1t}, ε_{2t}, and ε_{3t} are independent white noise processes, with ε_{jt} ∼ N(0, ζ). Time discounting was implemented by reducing the induced value of consumption of each of the output goods, as well as the utility cost of labor supply, by 1 − β = 1% in each period. Monopolistic competition in the output market was created by making firms' outputs imperfect substitutes from the point of view of consumers. Taste shocks occurred in every period, with varying drifts that depended on both i and j. The budget constraint of consumers was given by:

$$\sum_{j=1}^{3} c_{ijt}\, p_{jt} + B_{it} = w_{it} L_{it} + (1 + i_{t-1}) B_{i,t-1} + \frac{1}{I}\,\Pi^{N}_{t-1} \qquad (3)$$
c_{ijt} denotes subject i's consumption of good j at time t, p_{jt} is the price of good j at time t, w_{it} is the wage of subject i at time t, B_{it} is the saving of subject i in period t, and Π^N_{t−1} is the total nominal profit of firms in period t − 1. Π^N_{t−1} translates into a component of household income, capturing the assumption of the DSGE model that the households own the firms. Thus, at the end of each period in the experiment, the total profits of firms were transferred to, and divided equally among, the three consumers.
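To fix ideas, the following sketch computes a consumer's end-of-period cash position implied by the budget constraint of Eq. (3). The function and variable names are ours, for illustration only; they do not appear in the chapter.

```python
def end_of_period_savings(purchases, prices, wage_income,
                          savings_prev, interest_prev, total_profit,
                          n_consumers=3):
    """Cash carried into the next period, rearranging Eq. (3):
    B_it = w_it L_it + (1 + i_{t-1}) B_{i,t-1} + Pi^N_{t-1} / I
           - sum_j c_ijt p_jt."""
    spending = sum(c * p for c, p in zip(purchases, prices))
    income = wage_income + (1.0 + interest_prev) * savings_prev
    profit_share = total_profit / n_consumers
    return income + profit_share - spending
```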
Producers

Profits of firm j in each period t were given by:

$$\Pi^{R}_{jt} = \left( p_{jt} y_{jt} - w_{jt} L_{jt} \right) \frac{P_0}{P_t} \qquad (4)$$
Π^R_{jt} denotes real profits, p_{jt} is the price, y_{jt} is the quantity of goods sold, w_{jt} is the wage paid, and L_{jt} is the labor employed by firm j in period t. P_t is the price level in period t, P_0 the price level in the initial period, and P_0/P_t a deflator that translates nominal profits into real terms. The subjects in the role of firms received cash payments proportional to their real profits.
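In code, Eq. (4) is a one-liner; the sketch below is a direct transcription, with argument names of our own choosing.

```python
def real_profit(price, sales, wage, labor, p0, p_t):
    """Real profit of firm j in period t, Eq. (4): nominal profit,
    deflated by the ratio of the initial to the current price level."""
    return (price * sales - wage * labor) * (p0 / p_t)
```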
The production technology available to each firm was given by:

$$f_{jt}(L_{jt}) = A_t L_{jt} \qquad (5)$$

where

$$A_t = A + \nu A_{t-1} + \varsigma_t \qquad (6)$$

A_t is a technology shock following an AR(1) process, and ς_t is independent white noise, with ς_t ∼ N(0, δ). In each period, each firm j chose a quantity of labor to use in production, L_{jt}, and a price for its product, p_{jt}.
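Both the taste shocks of Eq. (2) and the technology shock of Eq. (6) are AR(1) processes with drift. The following sketch simulates such a process. It is illustrative only: the chapter does not describe its random number generation or initial conditions, we start the process at its unconditional mean, and we treat the dispersion parameter as a standard deviation; all of these are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(drift, persistence, noise_sd, periods):
    """Simulate x_t = drift + persistence * x_{t-1} + e_t,
    e_t ~ N(0, noise_sd^2), starting from the unconditional mean
    (an assumption; the chapter sets its own initial values)."""
    x = np.empty(periods)
    x[0] = drift / (1.0 - persistence)
    for t in range(1, periods):
        x[t] = drift + persistence * x[t - 1] + rng.normal(0.0, noise_sd)
    return x

# e.g., a technology path with the Table 1 values A = 0.7, nu = 0.8, delta = 0.2
technology = simulate_ar1(drift=0.7, persistence=0.8, noise_sd=0.2, periods=70)
```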
The Labor and Output Markets

Because the standard DSGE model assumes perfect competition in the labor market, we employed a continuous double auction trading mechanism (Plott & Gray, 1990; Smith, 1962). Continuous double auction markets are conducive to attaining competitive equilibria with a small number of agents (Smith, 1982). Trade in both the labor and the output markets took place in terms of an experimental currency, called ECU. Consumers' cost of supplying labor was private information, and current productivity was private information for producers. Previous evidence in the experimental literature has shown that convergence to competitive equilibrium in double auctions occurs more quickly when such asymmetry in information is present (Smith, 1994).

The three outputs were imperfect substitutes, due to the product-specific taste shocks H_{ijt} of consumers. This ensured the monopolistic competition assumed in the DSGE model. The output market followed posted offer rules, with three markets operating simultaneously, one for each firm's product. Producers set prices before observing the prices of their competitors, and consumers could purchase the products on a first-come first-served basis.5 Products were consumed immediately. Producers were required to bring their entire production to market in the current period. Units could not be stored for future periods by either consumers or producers.
Monetary Policy

The nominal interest rate depended on the most recently realized inflation rate and followed a simplified Taylor rule,
$$i_t = \bar{\pi} + \kappa\left( \pi_{t-1} - \bar{\pi} \right) + \rho_t \qquad (7)$$
where ρ_t is i.i.d. and the parameters were set to κ = 1.5 and π̄ = 3%.6 There was a zero lower bound on interest rates.

Parameters

The parameter values chosen for the experiment are given in Table 1. Whenever possible, the values are drawn from empirical estimates, with each period t corresponding to one three-month quarter in the field. The parameter μ is different in the Low Friction treatment (μ_LF) than in the treatments with monopolistic competition (μ_ij).7 ω is the magnitude of the menu cost in the Menu Cost treatment. In each period, each consumer was endowed with 10 units of labor. Furthermore, each consumer received an endowment of 1,500 ECU of cash at the beginning of period 1 that could be used for purchases. Producers had no initial endowment of labor or cash, but could borrow funds at zero interest at the beginning of a period to purchase labor, and thus at no time had binding cash constraints.
Table 1.  Parameters.

β = 0.99   θ = 0.5   ε = 2   α = 15   τ = 0.8   ν = 0.8   A = 0.7   δ = 0.2   ζ = 1   π̄ = 0.03   μ_LF = 120   ω = 0.025

$$\mu_{ij} = \begin{pmatrix} 95 & 62 & 37.8 \\ 38.2 & 93 & 64 \\ 33 & 59.6 & 97 \end{pmatrix}$$
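Combining Eq. (7) with the Table 1 parameters, the interest rate setting in the computerized treatments can be sketched as follows. The rounding to the nearest tenth of a percentage point and the zero lower bound are taken from the chapter's notes; the standard deviation of the policy shock is not reported, so the value used here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def taylor_rate(inflation_prev, kappa=1.5, pi_star=3.0, shock_sd=0.25):
    """Simplified Taylor rule of Eq. (7), in percentage points:
    i_t = pi_star + kappa * (pi_{t-1} - pi_star) + rho_t,
    truncated at zero and rounded to the nearest 0.1 percent.
    shock_sd is an illustrative value, not taken from the chapter."""
    rho = rng.normal(0.0, shock_sd)              # i.i.d. policy shock
    rate = pi_star + kappa * (inflation_prev - pi_star) + rho
    rate = max(rate, 0.0)                        # zero lower bound
    return round(rate, 1)                        # nearest one-tenth of a percent
```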
Timing within a Period

Each period of the experiment corresponded to a time period t in the DSGE model. At the beginning of each period, productivity shocks for the period were realized and observed by producers. The labor market was then opened for two minutes, decreasing to one minute late in the sessions. While the labor market was open, consumers had the following information available on their screens: the history of the wages they received, the average wage in the economy, the quantity of labor they sold, the inflation rate, the interest rate, and the output gap. Producers had available the history of the wages they paid, the average wage in the economy, the quantity of units of labor they hired, and the same macroeconomic data that were shown to consumers.

After the labor market closed for the period, labor was transformed automatically into output according to each producer's individual production function. Then, the output market opened. Producers simultaneously posted their prices. Subsequently, consumers received the posted prices and information on their current budget level, the interest rate, their valuations of each good, and the ratio of their marginal valuation to the posted price for each good. Before setting their prices, producers knew the quantity of labor they had hired, the quantity of output that the labor produced, the total and average cost of production, the interest rate, the history of their own sales, prices, labor expenses, and profits, and a number of macroeconomic variables. After the consumers made their purchases, the period ended. Consumers then received information about their earnings and the budget they would have available for the next period. Producers observed their profits, production, and sales.
Timing of Sessions and Subject Payments

Four sessions were conducted under each of the four treatment conditions, for a total of 16 sessions. The sessions each lasted from 3 3/4 to 4 3/4 hours. After the instructions were read to subjects, which took approximately 45 minutes, a practice sequence of five periods, which did not count toward the subjects' final payment, was conducted. Afterwards, a sequence of 50 to 70 periods was conducted, which determined the final payment of the subjects. A random ending rule was used, with the final period drawn randomly from a uniform distribution on 50 to 70. Subjects were told that the sequence would end randomly after period 50.

Subjects in the role of consumers received a monetary payment for the session, in euros, in proportion to the sum of the values of Eq. (1) they attained over all periods. Valuations for output and costs of labor supply were expressed in terms of 100ths of a euro cent on subjects' screens. The currency used for transactions, ECU, did not translate directly into the earnings that participants in the role of consumers received (see Lian & Plott, 1998 for a similar use of fiat money in a general equilibrium experiment). Currency was required to purchase output, so it retained value as a medium of exchange, and it could be saved, so it also functioned as a store of value. Savings earned interest at rate i_t per period. At the end of the
session, final cash holdings of consumers were converted from ECU to euros.8 Participants in the role of producers received a cash payment for the session, in euros, in proportion to the sum of the values of Eq. (4) they realized over all periods.9
Treatments

The Menu Cost Treatment

The section "Producers" described the Baseline treatment. The Menu Cost treatment was identical to the Baseline except that if a producer set a price in period t that differed from the one he set in period t − 1, he had to pay a cost equal to:

$$M_{jt} = \omega\, p_{j,t-1}\, y_{j,t-1} \qquad (8)$$
where p_{j,t−1} is the price that producer j chose in period t − 1, and y_{j,t−1} is the quantity of sales of producer j in the previous period. The magnitude of the menu cost (ω = 0.025) is calibrated based on Nakamura and Steinsson (2008).

The Low Friction Treatment

The Low Friction treatment was identical to the Baseline treatment, except for the preferences of consumers. The payoffs for consumers in period t were given by:

$$\beta^{t}\left\{ H_t^{LF}\,\frac{\left( \sum_{j=1}^{3} c_{ijt} \right)^{1-\theta}}{1-\theta} \;-\; \alpha\,\frac{L_{it}^{\,1+\varepsilon}}{1+\varepsilon} \right\} \qquad (9)$$
In each period, all consumers experienced an identical preference shock, so that

$$H_t^{LF} = \mu_{LF} + \tau H_{t-1}^{LF} + \varepsilon_t \qquad (10)$$
where ε_t is an independent white noise process, with ε_t ∼ N(0, ζ). The specification of the shocks ensured that consumers valued all three goods as perfect substitutes.
The parameters of the Low Friction treatment were chosen so that the welfare of consumers in the Low Friction and Baseline treatments would be close together.10 Although the products of the three firms were perfect substitutes, there was a separate posted offer market for each firm's output, as in the other treatments.

The Human Central Banker Treatment

The Human Central Banker treatment was identical to the Baseline treatment, except that interest rates were determined by three additional human subjects who were placed in the role of central banker. At the beginning of each period, each of the three central bankers simultaneously submitted a proposed nominal interest rate, required to be non-negative. The median choice was implemented as the interest rate for the current period. Central bankers had an inflation target of 3% in each period. Subjects in the role of central bankers were paid for each period according to the following payoff function:

$$l_t = \max\left\{ a - b\left( \pi_t - \bar{\pi} \right)^2,\; 0 \right\} \qquad (11)$$
where a = 100, b = 1, and π̄ = 3%. The conversion rate from payoffs to euro earnings was 1 to 100. Therefore, if the inflation rate was 3% in a given period, then each central banker earned 100 · (1/100) = 1 euro in that period. This payoff function incentivized central bankers to engage in inflation targeting. Central bankers had the history of interest rates, inflation, and the output gap available on their screens to help them make their decisions.
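As a worked example of Eq. (11), the sketch below computes a central banker's per-period euro earnings, taking inflation in percentage points (the percentage-point units are our assumption).

```python
def central_banker_euro_payoff(inflation_pct, a=100.0, b=1.0, target_pct=3.0):
    """Eq. (11): l_t = max(a - b * (pi_t - pi*)^2, 0), followed by the
    1:100 conversion from payoff points to euro."""
    l_t = max(a - b * (inflation_pct - target_pct) ** 2, 0.0)
    return l_t / 100.0

# at the 3% target the payoff is the maximal 1 euro per period
assert central_banker_euro_payoff(3.0) == 1.0
```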
PREDICTIONS

We estimate a trivariate VAR with two lags of output gap, inflation, and interest rate. The appropriate identification scheme to use for our data is not obvious. In the literature, three options have attracted particular attention: Choleski decomposition, long-run restrictions, and sign restrictions. Each has advantages and disadvantages. If we were to estimate the VAR using a Choleski decomposition, we would fall into the trap described by Carlstrom, Fuerst, & Paustian (2009). They show that IRFs can be severely muted if one assumes a Choleski decomposition and the model does not actually exhibit the assumed timing. This critique does
apply in the case of our experiment, where the demand, supply, and monetary policy shocks contemporaneously influence the realizations of inflation, output gap, and interest rate. Therefore, Choleski decomposition is not an appropriate identification scheme. Long-run and sign restrictions have also been criticized (see, e.g., Chari, Kehoe, & McGrattan, 2008; Faust & Leeper, 1997). Specifically, long-run restrictions tend to suffer from truncation bias, as finite order VARs are not good approximations of infinite order VARs. However, we believe that the truncation bias is less severe than the misspecified timing of the Choleski decomposition. Therefore, we report the impulse responses using long-run restrictions. The restrictions that we implemented were that there be no long-run effects of demand shocks on the output gap and the interest rate, and no long-run effect of the monetary policy shock on the output gap. Very similar results were also obtained with generalized impulse responses (Pesaran & Shin, 1998).

Before proceeding with the estimation of the SVAR using the experimental data, we can also solve the model in the Baseline environment to see the behavior of the theoretical model under the assumption of rational expectations. The model is solved in a multi-agent environment using exactly the same parameter values as in the experiment, and then simulated under the assumption that firms employ a 20% markup over costs. This assumption is necessary because we create monopolistic competition in a different way than is common in the theoretical literature. Fig. 1 shows the IRFs for the simulated data.

The analysis using simulated data shows that an output gap shock exerts a positive and significant effect on the output gap itself, while the effects on inflation and interest rates are not significant. The effect on output is significant for four periods, and is the only persistent effect in this analysis. The shock to inflation has a positive and significant effect on inflation on impact, but does not display any persistence. Similarly, the interest rate shock has a significant and positive impact on itself in the first period. Inflation and interest rate shocks do not produce significant effects on other variables. In particular, we do not see any effects of monetary policy shocks on the output gap or inflation.

We thus predict that, in the Baseline treatment, output shocks will exhibit persistence for multiple periods, while shocks to inflation and the interest rate will not exhibit any effect beyond the current period. The Human Central Banker treatment, which has the same parametric structure, is subject to the same predictions. Simulations for the Low Friction treatment (not reported here) exhibit no persistence of any type of shock, beyond the exogenous persistence embedded in output.11
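The chapter does not report the software used for its SVAR estimation. As a rough sketch of the kind of long-run identification described above, the code below estimates a reduced-form VAR with statsmodels and imposes a lower-triangular matrix of long-run effects (Blanchard-Quah style); encoding the chapter's specific zero restrictions then amounts to choosing a suitable ordering of the variables. Everything here, including the ordering assumption, is ours.

```python
import numpy as np
from statsmodels.tsa.api import VAR

def long_run_irfs(data, lags=2, horizon=10):
    """Structural IRFs under long-run restrictions. `data` is a T x k
    array; the long-run impact matrix is forced to be lower triangular,
    so the column ordering encodes the zero restrictions."""
    res = VAR(data).fit(lags)
    k = data.shape[1]
    psi1 = np.linalg.inv(np.eye(k) - sum(res.coefs))    # long-run multiplier
    lr_impact = np.linalg.cholesky(psi1 @ res.sigma_u @ psi1.T)
    b0 = np.linalg.solve(psi1, lr_impact)               # impact matrix of shocks
    ma = res.ma_rep(horizon)                            # reduced-form MA matrices
    return np.array([phi @ b0 for phi in ma])           # (horizon + 1, k, k)
```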
Fig. 1.  Impulse Responses for Baseline Treatment (Simulated Data). [Orthogonalized IRFs over 10 periods, with 95% confidence intervals.]
RESULTS

We employ two different methodologies to study the responses of the output gap, inflation, and interest rates to exogenous disturbances. The most common methodology employed in empirical monetary economics to assess the persistence of shocks is to estimate a structural vector autoregression (SVAR) and to plot the impulse responses (IRFs). We complement this analysis with a Romer & Romer (2004) approach, as some shocks are directly observable in the experiment.

Figs. 2–5 display the IRFs of one session in each treatment. The session displayed in each figure is chosen on the basis of how representative it is of the overall data from the treatment. The full persistence data for all 16 sessions of the study are shown in Table A1 in the Appendix. In the figures, orthogonalized impulse responses are plotted, and 95% error bands are calculated using bootstrap techniques.
Fig. 2.  Impulse Responses for Baseline Treatment.
The label (IRFX, infX, gapX), for example, denotes the IRF for group X and the effect of an inflation shock on the output gap.

There are a number of regularities that are common to all treatments in the SVAR analysis. An output gap shock induces a positive change in itself. Inflation reacts negatively to the output shock, though the reaction usually dissipates within a few periods. A positive output shock seems to act as a productivity shock and to increase competition in the final product market. The effect of an output shock on the interest rate is rather ambiguous. However, this is in line with the feature that our Taylor rule is set to respond only to inflation, and not to the output gap. Except for the last reaction, which is usually found to be positive, the effects of the output shock correspond to stylized facts for major industrialized economies (for the United States see, e.g., Christiano et al., 1997, 2005).

The inflation shock induces a reaction of inflation that is similar in sign. The persistence of this reaction varies substantially across treatments.
Fig. 3.  Impulse Responses for the Menu Cost Treatment.
It exhibits almost no persistence in the Low Friction treatment, while in other treatments, at least in some sessions, the shock persists for a few periods. In most sessions, the output gap reacts in the same direction as the inflation shock, although in two sessions the reaction is opposite in sign and significant. The inflation shock induces a change in the interest rate that is similar in sign in most of the sessions. This is in line with the stabilizing objective of interest rates that are set in accordance with the Taylor principle. In the Human Central Banker treatment, all four sessions exhibit this property. This behavior of central bankers is further studied in Noussair et al. (2013b).

The last shock that we study is the monetary policy shock. This shock is different in nature in our Human Central Banker treatment, compared to all other treatments, in which the interest rate was set according to the instrumental rule specified in Eq. (7). In the Human Central Banker treatment, the monetary policy shock induces a change in the interest rate that is similar in sign. The persistence of this shock varies considerably across sessions, but it is generally greater than in other treatments.
Fig. 4.  Impulse Responses for the Low Friction Treatment.
Note that we have not exogenously embedded any persistence in the monetary policy shock: the Taylor rule we implemented does not exhibit interest rate smoothing, and the objective function of the human central bankers does not penalize interest rate variability.

The persistence of the response of output to monetary policy shocks has attracted a great deal of attention in the literature over the last 30 years.12 In our experiment, a contractionary monetary policy shock has no persistent effect on the output gap in any treatment. In some cases it even increases the output gap, though not significantly. In our setup, interest rate changes induce both substitution and income effects for consumers, due to their accumulation of savings. Therefore, in principle, it is possible that higher interest rates increase output, although the evidence from empirical macroeconomics supports a negative effect. This difference may also be due to the fact that in the experimental economy, there are no effects of the interest rate that operate through the supply side. In all but three sessions, inflation reacts positively to the contractionary monetary policy shock, although this reaction is often not significant.
Fig. 5.  Impulse Responses for the Human Central Banker Treatment.
A similar pattern is also commonly found in VAR studies of the monetary policy transmission mechanism, and is referred to as the price puzzle (Eichenbaum, 1992; Sims, 1992). The effect of a monetary policy shock on inflation and the output gap displays the least persistence in the Low Friction treatment.

Figs. 2–5 suggest similar persistence of shocks for the output gap and the interest rate in the Menu Cost and Baseline treatments. Moreover, the Low Friction treatment exhibits a very low degree of persistence, and shocks rarely last more than one period. To compare the persistence of shocks between treatments, we construct a simple test. We compute the number of periods for which the output gap, inflation, and the interest rate deviate significantly from their long-run steady states as a result of a positive one-standard-deviation shock. The values are presented in Table 2. We then compare these values between treatments using nonparametric tests, with each session as the unit of observation.
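A minimal version of this counting exercise is sketched below: given an impulse response's 95% error bands, it counts the periods, starting at impact, for which the band excludes zero. Whether the chapter counts consecutive periods from impact or all significant periods is not stated, so the convention here is an assumption.

```python
def significant_periods(lower_band, upper_band):
    """Number of consecutive periods, starting at impact, in which the
    IRF's confidence band excludes zero."""
    count = 0
    for lo, hi in zip(lower_band, upper_band):
        if lo > 0.0 or hi < 0.0:   # zero lies outside the band
            count += 1
        else:
            break
    return count
```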
87
Persistence of Shocks in an Experimental DSGE Economy
Table 2.  Persistence of Shocks: Number of Periods (Sig.).

Treatment                   Output Gap       Inflation       Interest Rate
Baseline                    10,  3, 10,  6   0, 0, 0, 1      1, 0, 0, 1
Menu Cost                   10, 10,  4,  2   1, 6, 1, 0      0, 1, 1, 1
Low Friction                 1,  2,  2,  1   0, 0, 0, 0      0, 0, 0, 0
Human Central Banker         3,  1,  3,  5   0, 8, 0, 0      2, 9, 5, 2

Note: The four values per cell are the four sessions of a treatment; each entry is the number of periods for which the variable deviates significantly from its steady state following its own shock.
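The treatment comparisons can be reproduced in spirit from the session-level counts in Table 2. The sketch below runs a Kruskal-Wallis test on the output gap persistence values; scipy is our choice of tool, as the chapter does not name its statistical software.

```python
from scipy.stats import kruskal

# periods of significant output gap response, by session (Table 2)
baseline = [10, 3, 10, 6]
menu_cost = [10, 10, 4, 2]
low_friction = [1, 2, 2, 1]
human_central_banker = [3, 1, 3, 5]

stat, p_value = kruskal(baseline, menu_cost, low_friction, human_central_banker)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.3f}")
```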
As mentioned above, we do not observe much persistence of monetary policy shocks on interest rates, except in the Human Central Banker treatment. The differences between this and the other three treatments are significant at the 5% level under standard nonparametric tests. The only significant difference regarding the response of inflation to its own shock is between the Menu Cost and Low Friction treatments (5% significance). For the output gap, the Baseline and Menu Cost treatments exhibit more persistence than the other treatments. The Baseline and Menu Cost treatments are significantly different from the other two treatments at the 5% level, using a Kruskal-Wallis test. The Baseline treatment is also significantly different from the Human Central Banker treatment at the 10% level. Estimations on the benchmark simulated data (see Fig. 1) find that the persistence of the output shock on the output gap is about 4 to 5 periods. This suggests that both the Human Central Banker and Low Friction treatments generate significantly lower persistence than the benchmark, while most of the sessions of the Baseline and Menu Cost treatments generate greater persistence of the output gap than the benchmark level.

The relative importance of shocks for the determination of the interest rate, inflation, and the output gap can be measured with a variance decomposition exercise, using our VAR estimations (see Table A2 in the Appendix). We find considerable differences between the Human Central Banker and the other treatments. The inflation shock explains the most variance of the interest rate in the other three treatments. In the Human Central Banker treatment, however, interest rate smoothing explains a greater proportion of the variability of interest rates.

However, the experimental design allows us to directly observe the shocks, except for the monetary policy shock in the Human Central Banker treatment. Therefore, the responses to most shocks can also be studied using a Romer & Romer (2004) framework.
We estimate a bivariate VAR with one lag of output and inflation, and with the shocks as exogenous variables. In the Human Central Banker treatment we estimate a trivariate VAR with one lag of output gap, inflation, and interest rate, with productivity and taste shocks as exogenous variables. We are not able to study all of the shocks within this framework, as we cannot observe the monetary policy shocks in the Human Central Banker treatment. Thus, we treat this analysis as complementary to the SVAR analysis above.

Using the Romer & Romer (2004) analysis, we calculate impulse responses to a 100 basis point increase in the interest rate for one period, a 10% rise in the productivity shock for one period, and a 1% rise in the taste shocks for one period. Due to the discrete nature of the labor hiring decisions, a productivity shock of at least 10% is needed in this empirical analysis to change the behavior of firms. In line with Romer & Romer (2004), 66% confidence intervals are displayed. Figs. 6–9 display the results for one group in each treatment. This analysis suggests that the taste shock is relatively more important than the productivity shock for the dynamics of both output and inflation.
Fig. 6.  Group 2 Baseline Treatment. [Responses of inflation and the output gap to monetary policy, taste, and productivity shocks.]

Fig. 7.  Group 7 Menu Cost. [Responses of inflation and the output gap to monetary policy, taste, and productivity shocks.]

Fig. 8.  Group 17 Low Friction. [Responses of inflation and the output gap to monetary policy, taste, and productivity shocks.]

Fig. 9.  Group 5 Human Central Banker. [Responses of inflation, the output gap, and the interest rate to taste and productivity shocks.]
Taste shocks generally have both significant and persistent hump-shaped effects on inflation and output. While the effect on output is always positive, as expected, the effect on inflation is negative in the Baseline and Low Friction treatments. The effect of the taste shock on output lasts at least seven periods in all of our treatments, and is shortest in the Low Friction treatment. We observe the most persistence in the Baseline and Menu Cost treatments, where the effect lasts for more than 10 periods. Compared to the SVAR results, we observe more persistence in this exercise; however, all of the persistence comes from the taste shock. The effect of the taste shock on inflation is significant for three periods in the Low Friction treatment, while in the Menu Cost treatment it is significant for 10 periods, thus displaying considerable persistence. This differs from the SVAR results, where we observe persistent effects of any shock on inflation in only 2 out of 16 groups.

Monetary policy shocks do not have significant effects on inflation or on output; this has been shown with both the SVAR and the Romer & Romer analyses. Productivity shocks also do not exert a statistically significant effect on any of the variables in the Romer & Romer (2004) analysis.
CONCLUSION

In this study, we construct a laboratory DSGE economy populated with human decision makers. The experiment allows us to create an economy with a structure similar to a standard New Keynesian DSGE economy, without making any assumptions about the behavior of agents. Different treatments allow us to study whether the assumptions of menu costs and monopolistic competition are essential to create the frictions required to make the economy conform to empirical stylized facts. The experiment allows for the possibility that the behavior of human agents alone creates the requisite friction.

The specific focus of our study is the relationship between output market frictions and the persistence of shocks. In the theoretical New Keynesian DSGE model, both monopolistic competition and nominal rigidities are necessary to create persistence of shocks beyond the duration of the shock itself. Comparison of our Baseline and Menu Cost treatments allows us to consider the effect of the addition of a pricing friction, holding all else equal. Comparison of our Baseline and Low Friction treatments isolates the effect of monopolistic competition.

We find that the existence of monopolistic competition, in conjunction with the behavior of human agents, generates additional persistence in output, and this persistence is generally similar whether or not a pricing friction is present. In none of our treatments, however, do any of the shocks systematically generate persistent effects on inflation, with the exception of the taste shock in the Romer & Romer (2004) analysis. Furthermore, the presence of menu costs is not enough to generate persistent effects of a monetary policy shock on output or inflation. Similar results are obtained whether a SVAR or a Romer & Romer (2004) analysis is conducted, although the Romer & Romer analysis suggests that taste shocks are the most important type of shock driving the dynamics of inflation and the output gap. Also, under the Romer & Romer approach, introducing menu costs induces a modest increase in the persistence of the effect of a taste shock on inflation. There is more persistence of policy shocks under the Human Central Banker treatment, in which our subjects set interest rates, than under the instrumental rule in effect in the otherwise identical Baseline treatment. Nonetheless, the Human Central Banker treatment exhibits less persistence of output gap shocks.

All of the results depend on whether we have been able to create a well-functioning economy, from which meaningful data can be extracted. This means that the complexity of the economy is not so great as to be beyond
the capabilities of the participating human agents. The data provide clear evidence that economies with this level of complexity are amenable to experimentation. None of our subjects lost money overall or consistently made poor decisions. The empirical patterns and treatment differences lend themselves to intuitive ex-post explanations, though many of these would not have been anticipated ex-ante. Thus, in our view, experiments, in conjunction with traditional empirical methods, can increase our understanding of how a macroeconomy operates.

Future research can investigate the extent to which the particular design choices that we made for this study to implement a DSGE economy might influence the results. For example, we permitted carryover of cash, but not of goods, from one period to the next. In contrast, we could have allowed no carryover at all, or permitted carryover of both cash and goods. We chose a particular means of implementing a monopolistically competitive output market, but there are other possible ways of doing so (see, e.g., Fenig et al., 2013). Finally, our choice of a double auction market for labor may generate behavior that might not appear if the market were organized with bilateral bargaining or with posted bid rules, under which employers first offer wages that workers can then choose to accept.
NOTES

1. An alternative to the introduction of menu costs would have been to impose Calvo (1983) pricing. However, while elegant, Calvo pricing has received less empirical support than the assumption of menu costs.

2. A number of experimental studies have investigated interest rate setting (see, e.g., Engle-Warnick & Turdaliev, 2010). These studies differ from ours in a number of ways, but perhaps most fundamentally in that their experimental economies do not include an underlying economy populated with human consumers and producers. A number of other experimental studies have studied production economies with interacting input and output markets. See, for example, Noussair, Plott, & Riezman (2007) or Fenig, Mileva, & Petersen (2013).

3. For the Human Central Banker treatment we have to adapt this methodology slightly, as we do not observe the monetary policy shocks.

4. As explained later, savings held at the end of the session were transformed into consumption.

5. Once the output market opened, all producer prices were displayed on consumers' screens. Consumers could then make purchases by selecting a field corresponding to the producer from whom they wished to purchase. Every time they did so, they submitted an order to purchase one unit. Units were allocated in the sequence in which consumers' orders were submitted. When a producer's stock of
output was exhausted, further transactions were refused and a message indicating "out of stock" was displayed on consumers' screens.

6. We rounded the interest rate to the nearest one-tenth of one percent in the experiment. Therefore, the monetary policy shock in the Baseline, Menu Cost, and Low Friction treatments could be identified as the difference between the rounded interest rate and the rate implied by the Taylor rule.

7. Except for this parameter, the only other difference is in the initial values of the taste shocks (H_{ij,t=0}). For the exact values, see the appendix of Noussair et al. (2013a).

8. This was done by assuming that the experiment would continue forever, with the valuations and costs continuing the downward trend they followed during the session. We calculated how much a consumer would have earned if she made the best possible savings, labor sale, and product purchase decisions, given the savings she had at the end of the session. The average prices for labor and products over the course of the session were used for the calculation. The resulting euro earnings were awarded to the participant. Thus, total session payoffs to consumers equalled the sum of the values of Eq. (1) attained over the life of the economy, plus the payout based on final savings. For producers, the conversion rate from their profits in terms of ECU to euro was 100 ECU to 1 euro.

9. Although the ECU profit was removed from the firm's balance and added to the currency balance of the consumers at the end of each period, the profits were credited to the participant on paper. These profits were translated into real monetary payments to the human participant in the role of the firm. This was required to create the same incentives and structure as in the theoretical model.

10. This calibration was conducted in the following manner. The economy was simulated, assuming a markup of 11 percent, under the assumption that firms and consumers optimize for the current period. The resulting welfare was calculated, and the initial shock parameters were chosen so that welfare in Low Friction is equal to that in Baseline.

11. For the Menu Cost treatment, applying the same simulation methodology is infeasible. For this treatment, data on expectations are crucial to be able to solve for prices, and we do not have the requisite data on expectations.

12. Romer & Romer (2004) also present a second framework, in which they use a SVAR, but include the monetary policy shock in the specification instead of the interest rate. We can replicate their analysis for all treatments except Human Central Banker. The results are very similar, except for the reaction of the monetary policy shock to inflation. Under this framework, the impulse response is often in the opposite direction, because of large shocks when the zero lower bound is binding. Due to the limited number of periods, we are unable to proceed as in Iwata & Wu (2006) and add a dummy for cases when the zero lower bound is binding.
ACKNOWLEDGMENT

We would like to thank John Duffy, Shyam Sunder, Oleg Korenok, Steffan Ball, Ricardo Nunes, Michiel De Pooter, Wolfgang Luhan, and participants
at the Federal Reserve Board, the University of Innsbruck, the 1st and 2nd LeeX International Conferences on Theoretical and Experimental Macroeconomics (Barcelona), the 2011 Computational Economics and Finance Conference (San Francisco), the 2011 Midwest Macro Meetings (Nashville), the 2011 SEA Meetings (Washington), the DSGE and Beyond Conference at the National Bank of Poland (Warsaw), the 2010 North American ESA Meetings (Tucson), the WISE International Workshop on Experimental Economics and Finance (Xiamen), the 5th Nordic Conference on Behavioral and Experimental Economics (Helsinki), and the 2010 International ESA Meetings (Copenhagen) for their comments. We are grateful to Blaž Žakelj for his help with programming. Damjan Pfajfar gratefully acknowledges funding from a Marie Curie project PIEF-GA2009-254956 EXPMAC.
REFERENCES

Ball, L., & Mankiw, N. G. (1995). Relative-price changes as aggregate supply shocks. The Quarterly Journal of Economics, 110(1), 161–193.

Barro, R. J. (1972). A theory of monopolistic price adjustment. The Review of Economic Studies, 39(1), 17–26.

Calvo, G. A. (1983). Staggered prices in a utility-maximizing framework. Journal of Monetary Economics, 12(3), 383–398.

Carlstrom, C. T., Fuerst, T. S., & Paustian, M. (2009). Monetary policy shocks, Choleski identification, and DNK models. Journal of Monetary Economics, 56(7), 1014–1021.

Chari, V., Kehoe, P. J., & McGrattan, E. R. (2008). Are structural VARs with long-run restrictions useful in developing business cycle theory? Journal of Monetary Economics, 55(8), 1337–1352.

Chari, V. V., Kehoe, P. J., & McGrattan, E. R. (2000). Sticky price models of the business cycle: Can the contract multiplier solve the persistence problem? Econometrica, 68(5), 1151–1179.

Christiano, L., Eichenbaum, M., & Evans, C. (2005). Nominal rigidities and the dynamic effects of a shock to monetary policy. Journal of Political Economy, 113(1), 1–45.

Christiano, L. J., Eichenbaum, M., & Evans, C. L. (1997). Sticky price and limited participation models of money: A comparison. European Economic Review, 41(6), 1201–1249.

Clarida, R., Galí, J., & Gertler, M. (1999). The science of monetary policy: A new Keynesian perspective. Journal of Economic Literature, 37(4), 1661–1707.

Eichenbaum, M. (1992). Interpreting the macroeconomic time series facts: The effects of monetary policy: By Christopher Sims. European Economic Review, 36(5), 1001–1011.

Engle-Warnick, J., & Turdaliev, N. (2010). An experimental test of Taylor-type rules with inexperienced central bankers. Experimental Economics, 13(2), 146–166.

Faust, J., & Leeper, E. M. (1997). When do long-run identifying restrictions give reliable results? Journal of Business & Economic Statistics, 15(3), 345–353.
Fenig, G., Mileva, M., & Petersen, L. (2013). Asset trading and monetary policy in production economies. Discussion Papers dp13-08, Department of Economics, Simon Fraser University.

Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.

Iwata, S., & Wu, S. (2006). Estimating monetary policy effects when interest rates are close to zero. Journal of Monetary Economics, 53(7), 1395–1408.

Jeanne, O. (1998). Generating real persistent effects of monetary shocks: How much nominal rigidity do we really need? European Economic Review, 42(6), 1009–1032.

Leeper, E. M., Sims, C. A., Zha, T., Hall, R. E., & Bernanke, B. S. (1996). What does monetary policy do? Brookings Papers on Economic Activity, 1996(2), 1–78.

Lian, P., & Plott, C. R. (1998). General equilibrium, markets, macroeconomics and money in a laboratory experimental environment. Economic Theory, 12(1), 21–75.

Mankiw, N. G. (1985). Small menu costs and large business cycles: A macroeconomic model of monopoly. The Quarterly Journal of Economics, 100(2), 529–537.

Nakamura, E., & Steinsson, J. (2008). Five facts about prices: A reevaluation of menu cost models. Quarterly Journal of Economics, 123(4), 1415–1464.

Noussair, C., Plott, C., & Riezman, R. (2007). Production, trade, prices, exchange rates and equilibration in large experimental economies. European Economic Review, 51(1), 49–76.

Noussair, C. N., Pfajfar, D., & Zsiros, J. (2013a). Frictions in an experimental dynamic stochastic general equilibrium economy. Mimeo. Tilburg University.

Noussair, C. N., Pfajfar, D., & Zsiros, J. (2013b). Pricing decisions in an experimental dynamic stochastic general equilibrium economy. Mimeo. Tilburg University.

Pesaran, H. H., & Shin, Y. (1998). Generalized impulse response analysis in linear multivariate models. Economics Letters, 58(1), 17–29.

Plott, C. R., & Gray, P. (1990). The multiple unit double auction. Journal of Economic Behavior and Organization, 13(2), 245–258.

Romer, C. D., & Romer, D. H. (2004). A new measure of monetary shocks: Derivation and implications. American Economic Review, 94(4), 1055–1084.

Rotemberg, J. J. (1982). Monopolistic price adjustment and aggregate output. Review of Economic Studies, 49(4), 517–531.

Rotemberg, J. J., & Woodford, M. (1997). An optimization-based econometric framework for the evaluation of monetary policy. NBER Macroeconomics Annual, 12, 297–346.

Sims, C. A. (1992). Interpreting the macroeconomic time series facts: The effects of monetary policy. European Economic Review, 36(5), 975–1000.

Smets, F., & Wouters, R. (2007). Shocks and frictions in US business cycles: A Bayesian DSGE approach. American Economic Review, 97(3), 586–606.

Smith, V. L. (1962). An experimental study of competitive market behavior. The Journal of Political Economy, 70(2), 111–137.

Smith, V. L. (1982). Microeconomic systems as an experimental science. The American Economic Review, 72(5), 923–955.

Smith, V. L. (1994). Economics in the laboratory. Journal of Economic Perspectives, 8(1), 113–131.
APPENDIX

Table A1.  Persistence of Shocks: Number of Periods (Sig.).

Shock: Output gap
                            Output Gap       Inflation        Interest Rate
Baseline                    10,  3, 10,  6    0, 0, 0, 0       0, 3, 0, 0
Menu Cost                   10, 10,  4,  2    0, 0, 0, 2       0, 0, 2, 1
Low Friction                 1,  2,  2,  1    1, 0, 0, 0      10, 0, 0, 0
Human Central Banker         3,  1,  3,  5   10, 0, 0, 0       8, 0, 0, 4

Shock: Inflation
Baseline                     0, 0, 0, 0       0, 0, 0, 1       0, 0, 0, 0
Menu Cost                    0, 0, 0, 0       1, 6, 1, 0       0, 0, 0, 1
Low Friction                 0, 0, 0, 1       0, 0, 0, 0       7, 0, 0, 0
Human Central Banker         0, 0, 0, 0       0, 8, 0, 0       9, 0, 0, 0

Shock: Interest rate
Baseline                     0, 0, 0, 0       1, 1, 1, 1       1, 0, 0, 1
Menu Cost                    0, 0, 0, 0       1, 2, 1, 1       0, 1, 1, 1
Low Friction                 0, 1, 0, 1       1, 1, 1, 1       0, 0, 0, 0
Human Central Banker         0, 5, 0, 0      10, 10, 1, 1      9, 5, 2, 2

Note: The four values per cell are the four sessions of a treatment; columns give the variable affected by the shock.
Table A2.  Variance Decomposition of Shocks: % of Variance in Period 5.

Effect on Output Gap
Shock                       Baseline         Menu Cost        Low Friction     Human Central Banker
Output gap                  93, 82, 94, 90   44, 97, 80, 83   61, 95, 96, 96   55, 94, 97, 87
Inflation                    6,  5,  4,  7   56,  2, 13, 11   10,  4,  4,  1   34,  3,  2,  3
Interest rate                1, 13,  2,  3    0,  1,  7,  6   29,  1,  0,  3   11,  3,  1, 10

Effect on Inflation
Output gap                  20,  7,  5, 10   51,  9,  2,  7   11, 18,  5, 22    6,  6, 12, 16
Inflation                   79, 93, 94, 87   48, 91, 96, 85   83, 82, 93, 77   77, 91, 84, 82
Interest rate                1,  0,  1,  3    1,  0,  2,  8    6,  0,  2,  1   17,  3,  4,  2

Effect on Interest Rate
Output gap                   7, 10,  3,  5   51,  3,  5,  8    6, 13,  5, 10    2, 28, 10,  6
Inflation                   72, 69, 75, 75   41, 76, 84, 71   61, 71, 75, 70   55, 32, 44, 12
Interest rate               21, 21, 22, 20    8, 21, 11, 21   33, 16, 20, 20   42, 40, 46, 82

Note: The four values per cell are the four sessions of a treatment.
Instructions

This section contains the instructions for the experiment. Each subject received the same instructions during the experiment. The instructions were given to each subject as a paper handout, and an experimenter read them aloud at the beginning of each session. The instructions reprinted here were used in the Human Central Banker treatment.

Overview

You are about to participate in an experiment in the economics of market decision making. The instructions are simple, and if you follow them carefully and make good decisions, you can earn a considerable amount of money, which will be paid to you in cash at the end of the experiment. Trading in the experiment will be in terms of experimental currency units (ECU). You will be paid, in euro, at the end of the experiment. The experiment will consist of a series of at least 50 periods.

You are a consumer, a producer, or a central banker, and will remain in the same role for the entire experiment. If you are a consumer, you can make money by selling labor and buying products. If you are a producer, you can make money by buying labor and selling products that you make with the labor. If you are a banker, you can make money by trying to get the inflation rate as close as possible to a target level. Whether you are a consumer, a producer, or a central banker is indicated at the top of the instructions.

Specific Instructions for Consumers

Selling Labor. At the beginning of each period, you will have the opportunity to sell your labor for ECU. You will see the screen shown on the next page. You can sell units of labor for whatever wage you are able to get for them. To sell a unit, you use the table in the middle of the upper part of your screen entitled "Labor market." There are two ways to sell a unit:

1. You can accept an offer to buy labor that a producer has made: To do this, look in the column labeled "Offers to buy," and highlight the wage at which you would like to sell. Then click on the red button labeled "Sell."

2. You can make an offer to sell, and wait for a producer to accept it. To do so, enter a wage in the field labeled "Your offer," and then select "Offer to sell" to submit it to the market. Your offer will then appear in the column labeled "Offers to sell." It may then be accepted by a
producer. However, it is also possible that it may not be accepted by any producers before the current period ends, since they are free to choose whether or not to accept an offer. When you do not wish to sell any more units in the period, please click the “Stop Selling” key. You must pay a cost, in euro, for each unit you sell. The table in the upper left part of the screen, called “Your cost to sell labor” tells you how much you have to pay for each unit of labor you can sell. The numbers are given in units of 1/100th of a cent, so that a cost of 400, for example, is equal to 4 cents. Each row of the table corresponds to a unit that you are selling. The first row is for the first unit you sell in the current period, the second row is for the second unit, etc. The second column of the table tells you how much it costs you to sell each unit. The numbers in the table will decrease by 1% from one period to the next.
Buying Products. After selling labor in each period, you will have the opportunity to buy products by spending ECU. The screen on the next page will appear to allow you to do so.
In the upper left part of the screen, there is a table which will help you make your purchase decisions. There are three goods, 1, 2, and 3, which each correspond to a column in the table. The row called “price” gives the current price per unit, in ECU, that the producer making the unit is currently charging for it. The next row gives the “Next unit’s value per ECU.” This is calculated in the following way. Your value for the next unit is the amount of money, in euro, that you receive for the next unit you buy. As you buy more units within a period, your value for the next unit you buy will always be less than for the last unit you bought of the same good. Your values will change from one period to the next. They will randomly increase and decrease from one period to the next, but on average, they will decrease by 1% per period. The numbers in the “Next unit’s value per ECU” row give the value for the unit, divided by the price that the producer selling the unit is charging. The last row in the table shows the number of units of each good that you have purchased so far in the current period. To make a purchase of a unit of good 1, click on the button labeled “buy a unit of good 1”. To make a purchase of a unit of good 2 or 3, click on the button corresponding to the good you want to buy. When you do not want to purchase any more units of any of the three goods, click the button labeled “Quit buying.”
Saving Money for Later Periods. Any ECU that you have not spent in the period is kept by you for the next period. It will earn interest at the rate shown at the top of your screen next to the label "Savings interest rate." That means, for example, that if the interest rate is 2%, and you have 100 ECU at the end of the period, it will grow to 102 ECU by the beginning of the next period. Note that saving ECU for later periods involves a trade-off. If you buy more products now, and save less ECU, you can earn more, in euro, in the current period, but you have less ECU to spend in later periods. If you buy fewer products now, you make fewer euros in the current period, but you have more ECU to spend in later periods and can earn more euros then. In a given period, you cannot spend more ECU than you have at that time.

Your Share of Producer Profits. You will also receive an additional payment of ECU at the end of each period. This payment is based on the total profit of producers. Each consumer will receive an amount of ECU equal to 1/3 of the total profit of all three producers. How the profit of producers is determined will be described in the next section. You might think of this as you owning a share in each of the producers, so that you receive a share of their profits.

How You Make Money if You Are a Consumer. Your earnings in a period, in euros, are equal to the valuations of all of the products you have purchased minus the unit costs of all of the units of labor that you sell. For example, suppose that in period 5 you buy two units of good 1 and one unit of good 3. You also sell three units of labor in that period. Your valuation, that is, the amount of euros you receive, for your first unit of good 1 is 400, and your valuation for the second unit of good 1 is 280. Your value of the first unit of good 3 is 350. These valuations can be found on your "Buy Products" screen in the row called "Your valuation for the next unit." The costs of your first, second, and third units of labor are 50, 100, and 150. Then, your earnings for the period equal

400 + 280 + 350 − 50 − 100 − 150 = 730 = 7.3 cents

Note that the ECU that you paid to buy products and those that you received from selling labor are not counted in your earnings. The
ECU you receive from selling labor, saving, and producer profits is important, however, because that is the only money that you can use to buy products. Your euro earnings for the experiment are equal to your total earnings in all of the periods, plus a bonus at the end of the game that is described in the section "Ending the Experiment."

Specific Instructions for Producers

Buying Labor. At the beginning of each period, you will have the opportunity to buy labor with ECU. You will see the following screen. You can buy units of labor for whatever wage in ECU you are able to get them for. To buy a unit, you use the table in the middle of the upper part of your screen entitled "Labor market." There are two ways to buy:

1. Accept an offer to sell that a consumer has made: To do this, look in the column labeled "Offers to sell," and highlight the price at which you would like to buy. Then click on the red button labeled "Buy."

2. Make an offer to buy, and wait for a potential seller to accept it. To do so, enter a wage in the field labeled "Your offer," and then select "Make a new offer" to submit it to the market. Your offer will then appear in the column labeled "Offers to buy." It may then be accepted by a seller. However, it is also possible that it may not be accepted by any sellers before the current period ends.

The table in the upper left of the screen, entitled "You require," can help you make your purchase decisions. In the first column is the number of the unit that you are purchasing: "1st" corresponds to the first unit you buy in the period, "2nd" corresponds to the second unit, and so on. The second column indicates how many units of product are produced with each unit of labor. In the example here, each unit of labor produces 3.4 units of product.
Selling Products. After the market for labor closes, you automatically produce one of the three goods using all of the labor you have purchased in the period. You produce good … and you will always be the only producer of that good. You can make money by selling the good for ECU. You can do so by using the following screen.

In the upper middle portion of the screen, the number of units of labor you have purchased in the period is shown in the field labeled "Number of Units of Labor Purchased." Just below that field is the amount of product that the labor you bought has made. The amount of product that you make with a given amount of labor can change from period to period. Labor expense indicates how much money you spent on labor in the period. In the field labeled "Insert your price," you can type in the price per unit, in ECU, that you wish to charge for each unit of the product you have produced. When you have decided which price to charge and have typed it in, click on the field called "Set price." This price will then be displayed to consumers, who have an opportunity to purchase from you.
How You Make Money as a Producer. If the amount of ECU you receive from sales is more than the amount that you spent on labor, you will earn a profit.

Your profit in ECU in a period = total ECU you get from sales of product − total ECU you pay for labor

In period 1, your profit in ECU will be converted to euro at a rate of … ECU = 1 euro. Therefore:

Your earnings in euro in period 1 = … × [ECU you get from sales of product − ECU you pay for labor]

In later periods, the conversion rate of your earnings from ECU to euro will be adjusted for the inflation rate. Your ECU balance will be set to zero in each period. However, the profit you have earned in each period, in euro, will be yours to keep, and the computer will keep track of how much you have earned in previous periods. Your euro earnings for the experiment are equal to your total earnings in all of the periods.
Specific Instructions for Central Bankers

Setting the Interest Rate. Three of you are in the role of central bankers. In each period, the three of you will set the interest rate that consumers will earn on their savings in the current period. You will see the screen shown on the next page at the beginning of each period. In the field labeled "Interest Rate Decision," you enter the interest rate that you would like to set for the period. Of the three of you who set interest rates, the second highest (i.e., the median choice) will be the one in effect in the period. Higher interest rates might encourage consumers to save rather than spend their money and might lead to lower prices, and therefore a lower rate of inflation. On the other hand, lower interest rates might discourage saving, and lead to more spending and higher prices.

How You Make Money as a Central Banker. Your earnings in each period will depend on the inflation rate in the current period. The inflation rate for a period is calculated in the following way. The average price for the three products is calculated for this period and last period. The percentage by which prices went up or down is determined. This percentage is the inflation rate. For example, if the prices of the three products are 60, 65, and 70 in period 9, the average price in period 9 is 65. If the prices in period 8 were 55, 55, and 70, the average price in period 8 was 60. Prices increased by (65 − 60)/60 = 0.0833 = 8.33% in period 9. Notice that prices could either increase or decrease in each period. You make more money the closer the inflation rate is to …% in each period. Specifically, your earnings in euro will be equal to … − … × (Actual Inflation Rate − …%)² in each period.
Additional Information Displayed on Your Screens
There are graphs on each of the screens described above that give you some additional information about market conditions. You are free to use this information, if you choose, to help you make your decisions. In all of the graphs, the horizontal axis is the period number.
Consumers. If you are a consumer, the graphs show, for each period, histories of:
• the interest rate (that you earn on the ECU you save),
• the inflation rate (the percentage that average prices for the three goods have gone up or down between one period and the next),
• the output gap (a measure of the difference between the most products that could be made and how much are actually made; the smaller the gap, the greater is production),
• the wage you received (for the labor you sold),
• the average wage in the economy (the average amount consumers received for selling labor),
• the number of units of labor you sold,
• your consumption (how much money you spent on products),
• your savings (how much of your money you did not spend on products),
• the price of each of the three products,
• the quantity you bought of each of the three products.
Producers. If you are a producer, the graphs show histories of:
• the interest rate,
• the inflation rate,
• the output gap,
• the wage you paid (for the labor you bought),
• the average wage in the economy,
• the number of units of labor you bought,
• your labor expense (how much you spent on labor),
• your production (how much you have produced),
• your sales (how much you have sold),
• your profits.
Central Bankers. If you are a central banker, the graphs show histories of:
• interest rates,
• your earnings,
• the GDP, a measure of how much the economy is producing,
• the output gap.
Ending the Experiment
The experiment will continue for at least 50 periods. You will not know in advance in which period the experiment will end. At the end of the experiment, any consumer who has ECU will have it converted automatically to euro and paid to him/her. If you are a consumer, we will convert your ECU to euro in the following manner. We will imagine that the experiment would continue forever, with your valuations and costs following the downward trend they had during the experiment. We will then calculate how much you would earn if you made the best possible savings, labor selling, and product buying decisions, given the savings you currently have. We will use the average prices for labor and products during the experiment to make the calculation. We will then take the resulting amount of euros and credit them to you.
Starting the Experiment
In the first two periods of the experiment, we will place limits on the range of wages and prices that can be offered. You will be informed of these limits when the experiment begins. These restrictions will be lifted in period 3.
Differences from the Instructions in Other Treatments
In the Baseline and Low Friction treatments, subjects received the same instructions as those above, except for the section entitled Specific Instructions for Central Bankers. That section was not included in Baseline and Low Friction, because the interest rate was set automatically by the computer.
In the Menu Cost treatment, the section entitled Specific Instructions for Central Bankers was likewise absent, as in the Baseline and Low Friction treatments. In Menu Cost only, the screenshot in the figure above was
displayed in the section entitled Selling Products, instead of the one shown in that section. The screen shown in the Menu Cost treatment was accompanied by the following text: After the market for labor closes, you automatically produce one of the three goods using all of the labor you have purchased in the period. You produce good … and you will always be the only producer of that good. You can make money by selling the good for ECU. You can do so by using the following screen. In the upper middle portion of the screen, the number of units of Labor you have purchased in the period is shown in the field labeled “Number of Units of Labor Purchased.” Just below that field is the amount of your product that the labor you bought has made. The amount of product that you make with a given amount of labor can change from period to period. “Labor expense” indicates how much money you spent on labor in the period. In the field labeled “Insert your price,” you can type in the price per unit, in ECU, that you wish to charge for each unit of the product you have produced. When you have decided which price to charge and typed it in, click on the field called “set price.” This price will then be displayed to consumers who have an opportunity to purchase from you. You can change your price from one period to the next or you can keep it the same as in the last period. However, if you change the price you are charging for your product, you have to pay a cost that is calculated in the following way. Cost to change price = (price you charged last period) × (how many units you have produced this period) × 0.025.
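To illustrate the price-change cost with hypothetical numbers: a producer who charged 40 ECU per unit last period and produces 20 units this period would pay 40 × 20 × 0.025 = 20 ECU to post a new price.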
FORECAST ERROR INFORMATION AND HETEROGENEOUS EXPECTATIONS IN LEARNING-TO-FORECAST MACROECONOMIC EXPERIMENTS

Luba Petersen

ABSTRACT

This article explores the importance of accessible and focal information in influencing beliefs and attention in a learning-to-forecast laboratory experiment where subjects are incentivized to form accurate expectations about inflation and the output gap. We consider the effects of salient and accessible forecast error information and learning on subjects’ forecasting accuracy and heuristics, and on aggregate stability. Experimental evidence indicates that, while there is considerable heterogeneity in the heuristics used, subjects’ forecasts can be best described by a constant gain learning model where subjects respond to forecast errors. Salient forecast error information reduces subjects’ overreaction to their errors and leads to greater forecast accuracy, coordination of expectations,
Experiments in Macroeconomics. Research in Experimental Economics, Volume 17, 109–137. Copyright © 2014 by Emerald Group Publishing Limited. All rights of reproduction in any form reserved. ISSN: 0193-2306/doi:10.1108/S0193-230620140000017004
and macroeconomic stability. The benefits of this focal information are short-lived and diminish with learning. Keywords: Experimental macroeconomics; expectations; availability heuristic; focal points; rational inattention
INTRODUCTION

Expectations are an important driver of economic activity. What households believe about the future state of the economy will shape their decisions on how much to consume, work, and invest. Firms’ pricing decisions depend significantly on expectations of future demand and aggregate price levels. An understanding of how expectations are formed and evolve is key to managing expectations and promoting economic stability. It can be challenging to identify the effects of information, policy, and disturbances on expectations and the overall economy using traditional empirical approaches without making assumptions about the underlying structure of expectation formation and the aggregate data-generating process. As a result, laboratory experiments have become an increasingly popular source of data on expectation formation. In a highly controlled, incentivized environment where the data-generating process of the economy is established by the experimenter, one can more cleanly identify how individuals form beliefs in response to different policies, shocks, or information. Numerous experimental macroeconomic papers have now explored the effects of monetary policy rules, communication, and the structure of the economy on expectation formation and aggregate outcomes. As learning-to-forecast experiments become a more policy-relevant source of data, further research on how certain design decisions influence behavior is warranted. For example, how we place information on subjects’ screens may bias behavior and create potentially unintended focal points. Placing historical information right next to where subjects submit their forecast has the potential to generate adaptive or trend-chasing behavior, especially if the information is presented graphically. Locating that information elsewhere while making the current period shocks more salient may generate increased sensitivity to those shocks and a reduction in adaptive behavior.
This article seeks to begin the discussion on focal information in expectation-driven environments. We construct a macroeconomic environment where subjects’ aggregated forecasts about future output and inflation influence the current state of the economy. We conduct an experiment to understand how focal forecast error information influences forecasting heuristics and accuracy. Improving the salience and availability of forecast error information may encourage subjects to utilize the information when forming their expectations, improve overall coordination of heuristics, and lead to greater economic stability. We also explore how forecasting strategies, accuracy, and economic stability change with learning. This article addresses not only an important question for the design of experiments, but also for the design of central bank communication. We are among the first to investigate experimentally the role of information in the heterogeneity of expectations. Our laboratory experiment provides evidence of how the negative consequences of rational inattention can be ameliorated through public announcements. Our main finding is that increasing the salience of forecast errors encourages inexperienced subjects to correct their forecasting behavior and results in significantly smaller forecast errors. Moreover, the improved coordination of forecasting behavior leads to greater output and inflation stability. Over time, this information becomes less useful as a coordinating device as subjects continue to increase their usage of and overreact to past forecast errors when forming their expectations, leading to larger forecast errors and increased volatility. The fact that expectations can be influenced by focal information, at least in the short run, suggests that what policy makers emphasize when communicating to the public can be very important in influencing economic stability.
RELATED LITERATURE

Dozens of experiments have been conducted to understand how expectations are formed as structural features of an environment or information sets change. Duffy (2014) provides an extensive literature review discussing the evolution of the experimental expectation-formation literature. This article contributes most directly to the learning-to-forecast New Keynesian experiments pioneered by Adam (2007). These experiments involve subjects forming output gap and/or inflation expectations in a multivariate, multi-equation linearized environment where current output and inflation depend
on aggregate expectations. Subjects are paid based on their forecast error, inducing an incentive to form accurate forecasts. Pfajfar and Zakelj (2013) vary the type of nominal interest rate rule to explore the relationship between expectation formation and monetary policy. They find that forward-looking rules tend to generate expectational cycles and higher inflation variability than contemporaneous rules. Within the set of forward-looking policy rule treatments, they vary the sensitivity of interest rates to future expected inflation and output, and find that more aggressive policy leads to greater economic stability. In a companion paper, Pfajfar and Zakelj (2014) utilize their experimental data to test the rational expectations hypothesis in 70-period temporally linked economies. They find that they cannot reject rationality for 40% of their sample, while many subjects’ expectations can be modeled by some form of adaptive behavior. Changes in aggregate variables influence the likelihood of switching. For example, subjects are more likely to switch their strategies during recessions. Assenza, Heemeijer, Hommes, and Massaro (2013) also study heterogeneous expectations in a New Keynesian experiment where subjects only forecast inflation, while the output gap expectation is either set to the steady state, formed naively based on past realized values, or formed by another human subject in the group. The authors also vary the central bank’s reaction function between passive and active monetary policy. Like Pfajfar and Zakelj, they observe subjects frequently switching between forecasting heuristics. They find that an estimated evolutionary switching model of heterogeneous expectations can better describe expectation formation than a homogeneous expectations model. A related paper by Roos and Luhan (2013) investigates how subjects gather and utilize information in a combined forecasting and optimization experiment. Subjects played the roles of either workers or firms who were incentivized to form accurate forecasts of wages and prices and maximize their utility or profits, respectively. A “rudimentary” description of the data-generating process was provided to subjects at the beginning of the experiment. In each period, subjects could purchase, for a small cost, market information presented either cross-sectionally or in time-series form. The authors observe a very low demand for information and that the majority of the information requests come from a small subset of subjects. Information purchases lead firms to earn higher profits but do not improve forecast accuracy for either type. Average absolute forecast errors do diminish over time and are attributed to learning. This experiment extends the experimental design of Kryvtsov and Petersen (2013), in which subjects interact in a learning-to-forecast New
Keynesian economy similar to the ones developed by Pfajfar and Zakelj (2013, 2014) and Assenza et al. (2013). Kryvtsov and Petersen study how the strength of the expectations channel of monetary policy changes in response to increased persistence of shocks, more aggressive monetary policy, and central bank forward guidance. Among other things, they find that providing focal central bank forecasts of the nominal interest rate to subjects has mixed effects. Inexperienced subjects condition on the information, which leads to improved economic stability. With learning, however, the aggregate economies either strongly condition on the forward guidance, resulting in high aggregate stability, or the guidance creates increased confusion and greater instability. New Keynesian environments can become fraught with multiple equilibria when agents’ expectations are heterogeneous. To successfully forecast, subjects in these learning-to-forecast experiments must coordinate their expectations, and more specifically, their forecasting rules. Schelling (1960) argues that information that focuses players’ attention on one equilibrium can facilitate coordination and that it may be rational to condition one’s decision on the focal information. Numerous laboratory experiments have since shown that, in games with multiple Nash equilibria, focal points can facilitate and improve coordination (Mehta, Starmer, & Sugden, 1994, generate focal points through variations in labeling; Blume & Gneezy, 2000, study endogenously generated focal points). Nagel (1995) observes high levels of coordination on focal points in a Keynesian-inspired “beauty contest” game where subjects are rewarded for guessing closest to p times the mean of all numbers submitted. Recent theoretical work by Demertzis and Viegi (2008, 2009) has shown that the communication of an inflation target can serve as an effective focal point for coordinating and stabilizing expectations in Morris and Shin (2002) environments where there is poor or ambiguous public information. Thus, it is reasonable to conclude that communicating focal forecast error information in learning-to-forecast experiments would generate improved coordination and improved payoffs. Extensive survey and experimental evidence suggests that individuals use heuristics to form their beliefs about the economy. Pfajfar and Santoro (2010) use the University of Michigan’s Survey of Consumer Attitudes and Behavior to study expectation formation. They observe considerable heterogeneity in forecasting behavior, including rational forecasters, highly adaptive forecasters, and constant gain learners. Milani (2007) shows that adding constant gain learning, where agents update their forecasts based on previous forecast errors, can improve the fit of monetary DSGE models with alternative expectation specifications, including those with rational
expectations. On the other hand, Keane and Runkle (1990) find strong support for rational expectation formation using price forecast data from the ASA-NBER Survey of Professional Forecasters. Using experimental evidence, Pfajfar and Santoro (2013) find that 37% of their subject pool can be described as using a general model that employs all available information, while 38% extrapolate trends in some form. Another 9% form adaptive expectations while the remaining 16% exhibit behavior consistent with sticky information and adaptive learning models. Finally, Kryvtsov and Petersen find strong support for an adaptive-lagged expectation formation rule where subjects condition both on current shocks and lagged realized values of inflation and output. Through a series of related experiments, Tversky and Kahneman (1973) demonstrate how individual behavior can be biased by information that is easily accessible. In the context of this experiment, the availability heuristic would suggest that subjects would condition their expectations more on past forecast errors when this information is made more readily available.
EXPERIMENTAL DESIGN

The experiments were conducted at CIRANO in Montreal, Quebec. Both nonstudent and student subjects were invited to participate in sessions that involved 30 minutes of instruction and 90 minutes of game participation. Each session consisted of nine subjects interacting together as a single group. Earnings, including a $10 show-up fee, ranged from $18 to $47, and averaged $35.25 for two hours of participation. The experiment took place within a simplified New Keynesian economy where households and firms make optimal decisions given their expectations. The theoretical framework is derived in the work of Woodford (2003).
The aggregate economy implemented in the experiment can be described by the following four equations, calibrated to match three moments in the Canadian data under the assumption of rational expectations: standard deviation of inflation deviations (0.44%), serial correlation of inflation deviations (0.4), and the ratio of standard deviations of output gap and inflation (4.4).
$$x_t = E_t x_{t+1} - (i_t - E_t \pi_{t+1} - r_t^n) \qquad (1)$$

$$\pi_t = 0.989\, E_t \pi_{t+1} + 0.13\, x_t \qquad (2)$$

$$i_t = 1.5\, E_{t-1} \pi_t + 0.5\, E_{t-1} x_t \qquad (3)$$

$$r_t^n = 0.57\, r_{t-1}^n + \varepsilon_t \qquad (4)$$
Equation (1) is the Investment–Saving (IS) curve and describes how the output gap $x_t$, a measure of aggregate demand above its natural level, responds to current aggregate expectations of the future output gap and to deviations of the real interest rate, $i_t - E_t \pi_{t+1}$, from the natural rate of interest, $r_t^n$. As the real interest rate rises above the natural rate of interest, contractionary pressures cause the output gap to decrease. Equation (2) is the New Keynesian Phillips curve and describes the supply side of the economy. The equation is derived from the monopolistically competitive firms’ intertemporal optimization problem. As aggregate expectations of future inflation, $E_t \pi_{t+1}$, or aggregate demand, $x_t$, increase, current inflation will increase. Firms are able to update their prices randomly, leading to sluggish adjustment in prices. Equation (3) is the reaction function of the central bank and describes how the nominal interest rate is set. According to this specification, the central bank increases nominal interest rates in response to higher expected inflation and output gap formed in the previous period for the current period. This specification allows the period t nominal interest rate to be provided to subjects when they form their period t + 1 forecasts. Under rational expectations, this formulation is equivalent to a standard central bank function that targets current period realized output and inflation. Finally, Eq. (4) describes the stochastic process of the natural rate of interest as an AR(1) process, where $\varepsilon_t$ is assumed to be drawn randomly from a normal distribution with mean zero and variance $\sigma_r^2$, where $\sigma_r = 1.13$.
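To see how Eqs. (1)–(4) interact, the economy can be simulated directly. The following is a minimal Python sketch, not the experimental software: the forecasting rule is a placeholder standing in for the medians of subjects’ submitted forecasts, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=50, sigma_r=1.13):
    """Simulate Eqs. (1)-(4) under a placeholder forecasting rule.
    In the experiment, Ex and Epi are the medians of nine subjects'
    submitted forecasts; here a simple rule stands in for them."""
    x, pi, i, r_n = (np.zeros(T) for _ in range(4))
    Ex, Epi = np.zeros(T + 1), np.zeros(T + 1)  # forecasts of t+1 made at t
    for t in range(1, T):
        # Eq. (4): AR(1) natural rate of interest.
        r_n[t] = 0.57 * r_n[t - 1] + sigma_r * rng.standard_normal()
        # Placeholder forecasts conditioning on the current shock.
        Ex[t + 1] = 0.6 * r_n[t]
        Epi[t + 1] = 0.2 * r_n[t]
        # Eq. (3): policy rate set from period t-1 expectations of period t.
        i[t] = 1.5 * Epi[t] + 0.5 * Ex[t]
        # Eq. (1): IS curve.
        x[t] = Ex[t + 1] - (i[t] - Epi[t + 1] - r_n[t])
        # Eq. (2): New Keynesian Phillips curve.
        pi[t] = 0.989 * Epi[t + 1] + 0.13 * x[t]
    return x, pi, i

x, pi, i = simulate()
```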
Each experimental session consisted of two stationary repetitions of 50 periods each. These repetitions were initialized at the long-run steady state of zero inflation, output gap, and nominal interest rate. In each period, subjects were provided information about the current period’s interest rate, shock to the natural rate of interest, and the expected shock size in the following period. They were then asked to provide forecasts for the next period’s inflation and output gap in basis points (e.g., 1% would be submitted as 100 basis points). We described the output gap as the extent to which the economy is over- or under-demanding output relative to its long-run steady-state level. Subjects were allowed to submit positive or negative numbers, and there was no limit on the values they could submit. Each period lasted up to 1 minute in the first 10 periods of each repetition, and 45 seconds thereafter. If all subjects submitted their decisions before time elapsed, which was generally the case, the experiment immediately moved on to the next period. Before moving on, the current period’s inflation and output, as well as the next period’s nominal interest rate, were computed using the median forecasts for inflation and output. The motivation for using the median, rather than the average forecast as in similar experiments, was to minimize the ability of a single subject to manipulate the economy and because the median is a more robust measure of central tendency. Two information treatments are considered in this experiment. We analyze behavior in a benchmark environment (abbreviated as the “B” treatment) where subjects must actively obtain historical information and compare this to a treatment where subjects are provided with additional forecast error information (the “FEI” treatment) on the main screen. The purpose of this alternative information environment is to identify whether the focal forecast error information influences how inexperienced and experienced subjects form forecasts. Fig. 1 is a screenshot of the main screen that subjects interacted with during the FEI sessions. In the B treatment, subjects did not have immediately accessible information on previous-period forecast errors. Mehta et al. (1994) identify closeness or proximity as a feature that enhances the salience of a specific strategy and its usefulness as a coordinating device. In this experiment, forecast error information in the FEI treatment is made salient primarily because of its free availability and its placement right next to where subjects submit their forecasts. We also study the effect of experience on forecasting, disagreements, and macroeconomic stability. By resetting the environment and conducting a second stationary repetition, we can observe whether forecasting behavior, forecast errors, and aggregate outcomes are significantly different over time with learning. We conducted five sessions of the Benchmark treatment and four sessions of the Forecast Error Information treatment.
Fig. 1. Screenshot from the Forecast Error Information Treatment. Note: This figure shows the main screen of the interface that subjects interacted with in the FEI treatment. “Previous Period” information was not included in the Benchmark treatment.
The experimental design builds on the work of Kryvtsov and Petersen (2013) and differs in a number of dimensions from the previous literature. First and most importantly, the experimental interface is considerably different. The only information available on the main screen is the current nominal interest rate, the shock to the natural rate of interest occurring in the current period, and a forecast of the next period’s shock. Historical information is placed on a secondary screen, which subjects must actively click on in order to obtain information about their past forecasts and realized aggregate variables. This differs from the interfaces of previous experiments, which place all current and historical information on a single screen. The purpose of this modification is to minimize the degree to which subjects are “primed” to focus on past information when forming their forecasts and to create a more realistic environment where subjects must “look up” past information if they are interested in utilizing it to make forecasts. As in the work of Roos and Luhan (2013), the data-generating process of the economy is provided to subjects in a supplementary technical instructions screen that subjects could access if they wanted more information. Providing subjects with the data-generating process makes it easier to identify the set of information that may be used in forming forecasts.
However, unlike Roos and Luhan, we do not charge subjects for this information. Instead, they must utilize their limited time to look up the information. Finally, subjects submitted forecasts for both output and inflation, whereas in the earlier literature, subjects either forecasted one of the two variables or forecasted inflation for one and two periods ahead. Participants were presented with detailed instructions before the experiment began. We explained using non-technical language how the output gap, inflation, and nominal interest rate would evolve given their forecasts and exogenous shocks. Subjects were informed that their only task would be to submit forecasts for the following period’s output gap and inflation, and that their score would depend on the accuracy of their forecast. Specifically, their score would be computed for each period according to the following payoff function:
$$\text{Score}_t = 0.3\left( e^{-0.01\,|E_{t-1}\pi_t - \pi_t|} + e^{-0.01\,|E_{t-1}x_t - x_t|} \right) \qquad (5)$$
where $E_{t-1}\pi_t - \pi_t$ and $E_{t-1}x_t - x_t$ are the subject’s forecast errors associated with forecasts submitted in period t − 1 for period t variables. With more than 100 periods of play, a subject had the potential to earn over $70 by making accurate forecasts. This scoring rule incentivizes subjects to form accurate forecasts. It is very similar to those used in the previous experimental literature in that scores decrease monotonically with forecast errors and the minimum score a subject can earn in any period is zero. In the rules used by Assenza et al. (2013) and Pfajfar and Zakelj (2014), there is diminishing marginal loss from forecast errors. Under our rule, by contrast, the per-period score is reduced by 50% for every 100 basis point forecast error for both inflation and output, continually incentivizing subjects to make the most accurate forecasts possible. We also clearly explained that the median forecast for inflation and output in each period would be used in the calculation of the output gap, inflation, and the nominal interest rate. We explained both quantitatively and qualitatively the relative importance of forecasts in the calculation of the three aforementioned variables. Subjects never directly observed each other’s forecasts or the median forecasts. We also explained to subjects how they could access detailed information about the economy in the technical instructions. Subjects were given a four-period practice phase of approximately 10 minutes to learn the interface and better understand the timing of the game.
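For concreteness, the scoring rule in Eq. (5) can be implemented directly. The sketch below assumes, as in the experiment, that forecasts and realizations are expressed in basis points; the function name is illustrative.

```python
import math

def score(pi_forecast, pi_realized, x_forecast, x_realized):
    """Per-period score from Eq. (5); all inputs in basis points."""
    return 0.3 * (math.exp(-0.01 * abs(pi_forecast - pi_realized))
                  + math.exp(-0.01 * abs(x_forecast - x_realized)))

print(score(100, 100, -50, -50))  # perfect forecasts: maximum score of 0.6
print(score(0, 100, -50, -50))    # a 100 bp inflation error: approx. 0.41
```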
Forecasting Models

As a starting point, we begin with a rational expectations forecasting model. Given the parameterization of the environment, the rational expectations solution does not depend on any endogenous state variables but only on exogenous state variables. The rational expectations solution for the output gap is simply a function of the current period shock and parameters of the model:

$$E_t x_{t+1} = \Phi\, r_t^n \qquad (6)$$
The rational expectations solution for inflation forecasts follows an identical structure. We also consider a variety of alternative forecasting models. The simplest is the naive expectations model, where agents form their expectation of a variable based on its previous realized value. We consider an adaptive model of the form:

$$E_t x_{t+1} = \beta\, x_{t-1} \qquad (7)$$
It is not immediately obvious whether presenting subjects with forecast error information would increase the importance they place on past realized values when forming their expectations. On one hand, the forecast error information reduces the need to utilize historical information when forming forecasts, and should reduce the reliance on past values. On the other hand, past output and inflation are clearly presented on the main screen and also become more focal. We consider the possibility that subjects’ forecasts respond to trends in inflation and output, as in the model for output expectations below:

$$E_t x_{t+1} - x_{t-1} = \alpha + \eta\,(x_{t-1} - x_{t-2}) \qquad (8)$$
If the estimated $\hat{\eta} \geq 0$, agents expect that the previous upward or downward movements in the variable that they are forecasting will continue in the next period; that is, the subjects are trend-chasing. If $\hat{\eta} < 0$, agents expect that the movement in the variable of interest will reverse its trend, and we describe this as contrarian expectations. In order to observe the trend, subjects would need to review the historical screen or else remember values from two periods prior. Given the additional information presented on the
main screen in the FEI treatment, we would expect less time spent on the history screen and generally a reduction in trend-chasing behavior compared to the B treatment. Presenting salient forecast error information may prime subjects to condition on their forecast errors when forming expectations about the future. This type of behavior for output forecasts can be described by a constant gain adaptive expectations model:

$$E_t x_{t+1} = E_{t-1} x_t + \gamma\,(E_{t-1} x_t - x_{t-1}) \qquad (9)$$
with a similarly structured model for inflation forecasts. The dependent variable in this estimation is the change in expectations, $E_t x_{t+1} - E_{t-1} x_t$. An estimated $\hat{\gamma} < 0$ suggests that when subjects over-forecast a variable, they will correct their forecast downward in the next period. If the focal forecast error information is important in influencing forecasting behavior, we should expect the estimated $\hat{\gamma}$s in the two treatments to be significantly different from one another.
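The constant gain model maps directly into a regression of the change in expectations on the lagged forecast error. The following is a minimal sketch of such an estimation, assuming a hypothetical long-format data file with illustrative column names; the article’s estimates additionally distinguish repetitions and report several goodness-of-fit statistics.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per subject-period, with
# illustrative columns: subject, session, fei (treatment dummy),
# Ex_next = E_t x_{t+1}, Ex_lag = E_{t-1} x_t, x_lag = x_{t-1}.
df = pd.read_csv("forecasts.csv")

df["d_forecast"] = df["Ex_next"] - df["Ex_lag"]  # change in expectations
df["fe_lag"] = df["Ex_lag"] - df["x_lag"]        # lagged forecast error

# Constant gain model with a treatment interaction and subject fixed
# effects; standard errors clustered at the session level, as in the text.
fit = smf.ols("d_forecast ~ fe_lag + fe_lag:fei + C(subject)", data=df) \
         .fit(cov_type="cluster", cov_kwds={"groups": df["session"]})
print(fit.params[["fe_lag", "fe_lag:fei"]])
```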
FINDINGS

Forecast Errors

Does making forecast errors more salient improve forecasting ability? In the Benchmark treatment, to identify one’s forecast error, a subject would need to review the history screen and compare the time series graphs of realized variables to those of forecasted variables. This task was made simpler in the FEI treatment, where subjects’ forecast errors were presented on the main screen. If subjects were to condition on their past forecast errors and could successfully correct under- or over-forecasting, then we should expect to see smaller forecast errors in the FEI treatment. The kernel densities of the forecast errors for each treatment are displayed in Fig. 2 by repetition. Summary statistics on the absolute forecast errors are given in Table 1. We also report the treatment effect size using Glass Δ, which is the difference between the mean B forecast error and the mean FEI forecast error, measured in standard deviations.1 The difference across treatments is stark in the first repetition. The density function for the FEI treatment is more tightly centered around zero.
Fig. 2. Kernel Densities of Output Gap and Inflation Forecast Errors. Note: Kernel densities of output gap forecast errors (top panels) and inflation forecast errors (bottom panels), by repetition, for the Benchmark and Forecast Error Information treatments.
The median and mean absolute forecast errors in the FEI treatment are generally smaller for both output and inflation in both repetitions (the only exception is inflation forecasts, which are modestly higher in the second repetition of the FEI treatment).
Table 1. Absolute Forecast Errors for Output Gap and Inflation.a

                          Output Gap                    Inflation
Treatment          Rep. 1         Rep. 2         Rep. 1         Rep. 2
B
  Median           278            219            75             52
  Mean             363.75         295.86         102.73         69.78
  Std. Dev.        319.32         278.67         98.85          64.12
  Glass Δb                        0.21                          0.33
                                  (0.15–0.27)                   (0.27–0.39)
FEI
  Median           166            178            37             47
  Mean             224.30         235.61         75.94          72.32
  Std. Dev.        475.62         235.52         272.11         224.44
  Glass Δc                        −0.02                         0.01
                                  (−0.09, 0.04)                 (−0.05, 0.08)
Glass Δd           0.44           0.22           0.27           −0.04
                   (0.37–0.50)    (0.15–0.28)    (0.21–0.33)    (−0.10, 0.02)

a Summary statistics for forecast errors.
b, c Effect sizes associated with a Glass Δ test across repetitions.
d Effect sizes associated with a Glass Δ test across treatments. Values in brackets are the 95% confidence intervals of the estimated effect size. The estimates are calculated using the Benchmark group’s standard deviation.
These differences across treatments diminish with learning in the second repetition. Relative to the first repetition of the Benchmark treatment, median and mean forecast errors for both output and inflation decrease with learning, and the variance of forecast errors also declines. Forecast errors somewhat worsen with learning in the FEI treatment. For output forecast errors, the median and mean error increase but the standard deviation decreases. This suggests that the tails of the distribution of forecast errors are getting fatter but less extreme. For inflation forecast errors, the median error increases, but the mean and standard deviation decrease. The changes in FEI forecast errors are negligible and statistically insignificant. The null hypothesis that the forecast errors under the B and FEI treatments are drawn from the same distribution is rejected by a two-sample Kolmogorov–Smirnov test (p < 0.001 for inflation and output forecasts in both repetitions). Comparing within a treatment, the distributions of the output gap and inflation forecast errors are significantly different across repetitions in both the B treatment (p < 0.001) and the FEI treatment (p < 0.01).
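Both statistics used in this comparison are simple to compute. A minimal sketch with NumPy and SciPy follows, using placeholder arrays in place of the experimental forecast-error series; the confidence intervals reported in Table 1 require an additional effect-size procedure not reproduced here.

```python
import numpy as np
from scipy import stats

def glass_delta(b_abs_errors, fei_abs_errors):
    """Glass delta: (mean B error - mean FEI error) / std. dev. of B errors."""
    b = np.asarray(b_abs_errors, dtype=float)
    f = np.asarray(fei_abs_errors, dtype=float)
    return (b.mean() - f.mean()) / b.std(ddof=1)

# Placeholder arrays standing in for pooled absolute forecast errors.
b_err = np.abs(np.random.default_rng(1).normal(0, 320, 4000))
f_err = np.abs(np.random.default_rng(2).normal(0, 240, 3400))

print(glass_delta(b_err, f_err))
print(stats.ks_2samp(b_err, f_err))  # two-sample Kolmogorov-Smirnov test
```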
Evaluation of Forecasting Models

We study the fit of the rational expectations, adaptive expectations, trend-chasing, and constant gain adaptive expectations models across repetitions. Each model is estimated as a fixed-effect regression with standard errors clustered at the session level. We consider the effects of information and learning in separate regressions. The results for inflation forecasts are presented in Tables 2 and 3, while output forecasts are presented in Tables 4 and 5. As measures of fit, we compute R², Akaike Information Criterion (A.I.C.), and Bayesian Information Criterion (B.I.C.) statistics. The rational expectations model is presented in columns (1) and (5) for all tables. Subjects significantly condition both their inflation and output forecasts on the current period shock in both repetitions. While subjects with forecast error information utilize the shock less in their forecasts, the differences across treatments are not statistically significant. With experience, subjects in both treatments learn to place more weight on the shock in their forecasts. The learning effect is statistically significant for subjects in the FEI treatment (p < 0.05). The adaptive expectations model is presented next in columns (2) and (6). Lagged values of output and inflation play a quantitatively large and significant role in the forecasts made by subjects, across levels of experience and information. There are no significant differences in adaptive behavior across treatments when inexperienced subjects form their forecasts. In the second repetition, the experienced FEI subjects place significantly more weight than B subjects on past inflation when forming their inflation forecasts (p < 0.05). The role of past output levels in output forecasts does not significantly differ across treatments; however, B subjects become significantly less adaptive in their output forecasts with experience. Columns (3) and (7) present the results from the trend-chasing expectations models. Generally, the model does not perform well in describing the variability in inflation and output expectations. Past trends in inflation do not generate large or significant trend-chasing behavior among inexperienced subjects. Experienced subjects do exhibit weakly significant contrarian expectations, and there are no considerable differences across treatments. In forming their output forecasts, subjects in both treatments exhibit contrarian heuristics. This behavior is only statistically significant among those in the Benchmark treatment and does not change significantly with learning. Finally, the constant gain expectations model addresses how subjects update their forecasts in response to the previous period’s forecast errors.
Table 2. Comparison of Estimated Expectation Models: Inflation Forecasts.

Repetition 1
Dep. var.:                (1) E_tπ_{t+1}    (2) E_tπ_{t+1}    (3) E_tπ_{t+1} − π_{t−1}    (4) E_tπ_{t+1} − E_{t−1}π_t
r^n_t                     0.235** (0.05)
r^n_t × FEI               −0.139 (0.08)
π_{t−1}                                     0.559*** (0.04)
π_{t−1} × FEI                               0.207 (0.21)
π_{t−1} − π_{t−2}                                             −0.051 (0.02)
(π_{t−1} − π_{t−2}) × FEI                                     −0.045 (0.11)
E_{t−2}π_{t−1} − π_{t−1}                                                                  −0.871*** (0.02)
(E_{t−2}π_{t−1} − π_{t−1}) × FEI                                                          0.255*** (0.05)
α                         21.224*** (2.55)  39.604*** (0.13)  10.015*** (0.01)            6.861*** (0.25)
N                         3,981             3,880             3,780                       3,767
R²                        0.0163            0.0672            0.000545                    0.415
A.I.C.                    53332.3           51856.5           50217.7                     49464.8
B.I.C.                    53344.9           51869.0           50230.2                     49477.3

Repetition 2
Dep. var.:                (5) E_tπ_{t+1}    (6) E_tπ_{t+1}    (7) E_tπ_{t+1} − π_{t−1}    (8) E_tπ_{t+1} − E_{t−1}π_t
r^n_t                     0.251*** (0.03)
r^n_t × FEI               −0.035 (0.05)
π_{t−1}                                     0.580*** (0.02)
π_{t−1} × FEI                               0.275** (0.06)
π_{t−1} − π_{t−2}                                             −0.055* (0.02)
(π_{t−1} − π_{t−2}) × FEI                                     0.188 (0.09)
E_{t−2}π_{t−1} − π_{t−1}                                                                  −0.880*** (0.04)
(E_{t−2}π_{t−1} − π_{t−1}) × FEI                                                          −0.142** (0.04)
α                         11.721*** (0.81)  3.395*** (0.28)   6.629*** (0.01)             6.044*** (0.24)
N                         3,960             3,862             3,765                       3,755
R²                        0.0333            0.121             0.00107                     0.534
A.I.C.                    51513.5           49938.4           48860.0                     48730.9
B.I.C.                    51526.1           49950.9           48872.5                     48743.3

Significance levels: *p < 0.10, **p < 0.05, ***p < 0.01. Standard errors in parentheses.
Table 3. Comparison of Estimated Expectation Models: Inflation Forecasts, by Treatment.

Benchmark
Dep. var.:                (1) E_tπ_{t+1}    (2) E_tπ_{t+1}    (3) E_tπ_{t+1} − π_{t−1}    (4) E_tπ_{t+1} − E_{t−1}π_t
r^n_t                     0.235** (0.05)
r^n_t × EXP               0.017 (0.05)
π_{t−1}                                     0.559*** (0.04)
π_{t−1} × EXP                               0.021 (0.04)
π_{t−1} − π_{t−2}                                             −0.051 (0.02)
(π_{t−1} − π_{t−2}) × EXP                                     −0.004 (0.02)
E_{t−2}π_{t−1} − π_{t−1}                                                                  −0.871*** (0.02)
(E_{t−2}π_{t−1} − π_{t−1}) × EXP                                                          −0.009 (0.03)
α                         23.767*** (0.79)  12.239*** (0.54)  9.521*** (0.01)             8.345*** (0.27)
N                         4,511             4,421             4,331                       4,308
R²                        0.0949            0.268             0.00192                     0.539
A.I.C.                    54019.3           52033.5           51868.0                     51509.0
B.I.C.                    54032.1           52046.3           51880.7                     51521.7

FEI
Dep. var.:                (5) E_tπ_{t+1}    (6) E_tπ_{t+1}    (7) E_tπ_{t+1} − π_{t−1}    (8) E_tπ_{t+1} − E_{t−1}π_t
r^n_t                     0.096** (0.03)
r^n_t × EXP               0.121** (0.03)
π_{t−1}                                     0.766** (0.20)
π_{t−1} × EXP                               0.089 (0.18)
π_{t−1} − π_{t−2}                                             −0.096 (0.10)
(π_{t−1} − π_{t−2}) × EXP                                     0.229 (0.17)
E_{t−2}π_{t−1} − π_{t−1}                                                                  −0.616*** (0.04)
(E_{t−2}π_{t−1} − π_{t−1}) × EXP                                                          −0.406*** (0.04)
α                         28.241*** (0.19)  12.453** (3.16)   6.715*** (0.01)             3.917*** (0.31)
N                         3,430             3,321             3,214                       3,214
R²                        0.00788           0.0509            0.000516                    0.460
A.I.C.                    47619.3           46052.0           44353.5                     44020.7
B.I.C.                    47631.6           46064.2           44365.7                     44032.9

Significance levels: *p < 0.10, **p < 0.05, ***p < 0.01. Standard errors in parentheses. EXP indicates the second (experienced) repetition.
Table 4. Comparison of Estimated Expectation Models: Output Forecasts.

Repetition 1
Dep. var.:                (1) E_tx_{t+1}    (2) E_tx_{t+1}    (3) E_tx_{t+1} − x_{t−1}    (4) E_tx_{t+1} − E_{t−1}x_t
r^n_t                     0.629** (0.16)
r^n_t × FEI               −0.155 (0.10)
x_{t−1}                                     0.511*** (0.03)
x_{t−1} × FEI                               −0.015 (0.15)
x_{t−1} − x_{t−2}                                             −0.107* (0.04)
(x_{t−1} − x_{t−2}) × FEI                                     −0.117 (0.15)
E_{t−2}x_{t−1} − x_{t−1}                                                                  −0.792*** (0.04)
(E_{t−2}x_{t−1} − x_{t−1}) × FEI                                                          −0.152 (0.08)
α                         31.518*** (0.26)  31.015*** (0.25)  34.902*** (0.18)            30.143*** (0.49)
N                         3,981             3,880             3,780                       3,767
R²                        0.0360            0.154             0.0127                      0.568
A.I.C.                    58683.4           56778.1           55666.4                     55373.9
B.I.C.                    58696.0           56790.7           55678.9                     55386.3

Repetition 2
Dep. var.:                (5) E_tx_{t+1}    (6) E_tx_{t+1}    (7) E_tx_{t+1} − x_{t−1}    (8) E_tx_{t+1} − E_{t−1}x_t
r^n_t                     0.727*** (0.08)
r^n_t × FEI               −0.020 (0.06)
x_{t−1}                                     0.465*** (0.02)
x_{t−1} × FEI                               0.104 (0.08)
x_{t−1} − x_{t−2}                                             −0.133** (0.04)
(x_{t−1} − x_{t−2}) × FEI                                     −0.027 (0.15)
E_{t−2}x_{t−1} − x_{t−1}                                                                  −0.806*** (0.05)
(E_{t−2}x_{t−1} − x_{t−1}) × FEI                                                          0.012 (0.14)
α                         26.738*** (3.22)  20.602*** (0.82)  46.649*** (0.02)            37.263*** (1.72)
N                         3,960             3,862             3,765                       3,755
R²                        0.130             0.335             0.0266                      0.605
A.I.C.                    54538.8           51742.8           51827.0                     51420.6
B.I.C.                    54551.3           51755.4           51839.5                     51433.1

Significance levels: *p < 0.10, **p < 0.05, ***p < 0.01. Standard errors in parentheses.
Table 5. Comparison of Estimated Expectation Models: Output Forecasts, by Treatment.

Benchmark
Dep. var.:                (1) E_tx_{t+1}    (2) E_tx_{t+1}    (3) E_tx_{t+1} − x_{t−1}    (4) E_tx_{t+1} − E_{t−1}x_t
r^n_t                     0.629** (0.16)
r^n_t × EXP               0.098 (0.11)
x_{t−1}                                     0.511*** (0.03)
x_{t−1} × EXP                               −0.046* (0.02)
x_{t−1} − x_{t−2}                                             −0.107* (0.04)
(x_{t−1} − x_{t−2}) × EXP                                     −0.026 (0.02)
E_{t−2}x_{t−1} − x_{t−1}                                                                  −0.792*** (0.04)
(E_{t−2}x_{t−1} − x_{t−1}) × EXP                                                          −0.013 (0.03)
α                         34.625*** (2.57)  29.830*** (0.68)  51.497*** (0.07)            41.439*** (2.35)
N                         4,511             4,421             4,331                       4,308
R²                        0.0976            0.321             0.0204                      0.612
A.I.C.                    63109.6           60662.2           61133.2                     60460.2
B.I.C.                    63122.4           60675.0           61145.9                     60473.0

FEI
Dep. var.:                (5) E_tx_{t+1}    (6) E_tx_{t+1}    (7) E_tx_{t+1} − x_{t−1}    (8) E_tx_{t+1} − E_{t−1}x_t
r^n_t                     0.474** (0.09)
r^n_t × EXP               0.232** (0.05)
x_{t−1}                                     0.496** (0.14)
x_{t−1} × EXP                               0.072 (0.08)
x_{t−1} − x_{t−2}                                             −0.225 (0.12)
(x_{t−1} − x_{t−2}) × EXP                                     0.065 (0.08)
E_{t−2}x_{t−1} − x_{t−1}                                                                  −0.944*** (0.05)
(E_{t−2}x_{t−1} − x_{t−1}) × EXP                                                          0.151 (0.07)
α                         21.329*** (0.58)  21.070*** (0.49)  26.300*** (0.15)            23.319*** (1.99)
N                         3,430             3,321             3,214                       3,214
R²                        0.0410            0.118             0.0135                      0.549
A.I.C.                    50485.8           48502.8           46967.5                     46943.0
B.I.C.                    50498.1           48515.0           46979.6                     46955.1

Significance levels: *p < 0.10, **p < 0.05, ***p < 0.01. Standard errors in parentheses. EXP indicates the second (experienced) repetition.
The results of this set of regressions are presented in columns (4) and (8). This model of forecasting fits the data best according to all of our goodness-of-fit measures. Across all treatments and repetitions, subjects significantly respond to their past errors when forming their forecasts of future inflation and output. Inexperienced B subjects react significantly more to their inflation forecast errors when forming their forecasts than their FEI counterparts (p < 0.01). The salient forecast error information works to stabilize FEI subjects’ responsiveness to their forecast errors. With experience, the FEI subjects increase their reaction to their inflation forecast errors by more than 65%, while the B subjects’ responsiveness is largely unchanged. Experienced FEI subjects significantly overreact to their errors relative to the B subjects. The weight that subjects place on past errors in their output forecasts does not differ significantly across treatments or with learning. On average, inexperienced FEI subjects exhibit a more aggressive reaction to their past forecast errors; however, there is considerable heterogeneity among subjects and the difference between treatments is only significant at the 15% level.
Heterogeneity in Forecasts

Forecast error information may reduce the heterogeneity in subjects’ forecasts by providing a common focal point. As a measure of heterogeneity in expectations, we calculate the standard deviation (in basis points) of forecasts in each period at the session level. Histograms and kernel density functions depict the distribution of heterogeneity in Fig. 3 by treatment and repetition. The distributions of forecast heterogeneity are relatively skewed toward zero when subjects are presented with salient forecast error information. That is, there is considerably less disagreement in inflation and output forecasts when subjects have common information to coordinate on. Two-sample Kolmogorov–Smirnov tests reject the null hypothesis that the distribution functions are identical across treatments for either of the repetitions (p < 0.01 for both inflation and output gap disagreements). The median inflation disagreement in Repetition 1 (Repetition 2) is 70 (50) bps in the Benchmark treatment and 38 (35) bps in the FEI treatment. Similarly, the median output disagreement in Repetition 1 (Repetition 2) is 159 (191) bps in the Benchmark treatment and 119 (131) bps in the FEI treatment. While inflation disagreements lessen over time, output disagreements worsen for both treatments. This is consistent with our finding from Table 5,
where we observe relatively large standard errors when we estimate the various models using output gap forecasts for experienced subjects.

Fig. 3. Heterogeneity in Inflation and Output Gap Forecasts. Note: Histograms of the within-period standard deviation of inflation forecasts (top panels) and output gap forecasts (bottom panels), by treatment and repetition.
Macroeconomic Stability

We now turn our attention to aggregate outcomes and compare the volatility of the output gap and inflation across treatments. Figs. 4 and 5 present time series of the output gap and inflation across sessions and repetitions for each treatment, while Table 6 provides the associated summary statistics. Consider first the behavior of inexperienced subjects in Repetition 1. Visually, we can detect significant differences in both output and inflation across treatments. In the Benchmark economies, the aggregate variables appear more volatile and reach greater extremes than in the FEI economies. The mean standard deviation of the output gap (inflation) is 149.89 (58.43) basis points higher in the Benchmark treatment. Wilcoxon rank-sum tests reject the null hypothesis that the distributions of output gap and inflation variability across the two treatments are identical (p = 0.014 for both output gap and inflation). This coincides with our earlier finding that inexperienced subjects in the B treatment are relatively more responsive to their forecast errors and more adaptive than subjects in the FEI treatment. The average autocorrelation of output in the first repetitions of B and FEI is 0.46 and 0.26, respectively. This highly reactive behavior in the Benchmark treatment dampens on average with learning in the second repetition. The mean standard deviation of output (inflation) falls by 84.7 (32.12) basis points, and a signed-rank test only weakly rejects the null hypothesis of no difference across repetitions (p = 0.138 for the output gap and p = 0.08 for inflation). This is consistent with the findings in the previous section that, with learning, there are minimal differences across repetitions for any of the learning models. Subjects somewhat decrease their reliance on lagged output in favor of lagged forecast errors and contrarian beliefs when forming their expectations, resulting in increased mean reversion. The opposite occurs in the FEI treatment. In the second repetition, the mean standard deviation of output (inflation) significantly increases by 63.14 (30.77) basis points (p = 0.068 for both variables). This increase in volatility is generated by a more extreme reaction to forecast errors. Given the considerable changes across repetitions in both treatments, there are no significant differences between the B and FEI treatments in Repetition 2 (p = 0.806).
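The nonparametric tests used here can be run directly on the session-level standard deviations. The sketch below uses placeholder values; Table 6 reports only the means, minima, and maxima of the actual session-level series.

```python
from scipy import stats

# Placeholder session-level standard deviations of the output gap in
# Repetition 1 (five Benchmark sessions, four FEI sessions); substitute
# the actual session-level series to reproduce the reported p-values.
b_rep1 = [224.4, 301.2, 355.0, 413.8, 459.8]
fei_rep1 = [182.6, 196.4, 204.5, 220.3]

# Across treatments: Wilcoxon rank-sum test.
print(stats.ranksums(b_rep1, fei_rep1))

# Across repetitions within the Benchmark treatment: signed-rank test
# on the five matched session pairs (Repetition 2 values also placeholders).
b_rep2 = [204.0, 240.7, 268.9, 302.1, 370.3]
print(stats.wilcoxon(b_rep1, b_rep2))
```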
Fig. 4. Time Series of the Output Gap by Session and Repetition. Note: Each panel plots the output gap (in basis points) over the course of a session; the five Benchmark sessions appear in the top block and the four FEI sessions in the bottom block, by repetition.
Fig. 5. Time Series of Inflation by Session and Repetition. Note: Each panel plots inflation (in basis points) over the course of a session; the five Benchmark sessions appear in the top block and the four FEI sessions in the bottom block, by repetition.
Table 6. Standard Deviation of Output and Inflation.a

                          Output Gap                    Inflation
Treatment          Rep. 1         Rep. 2         Rep. 1         Rep. 2
B
  Mean             350.84         266.14         106.03         73.91
  Min              224.41         204.04         76.91          53.38
  Max              459.76         370.30         126.31         91.15
  p-valueb                        0.138                         0.08
FEI
  Mean             200.95         264.09         47.60          78.37
  Min              182.59         195.55         37.32          55.12
  Max              220.33         355.73         54.89          127.78
  p-valuec                        0.068                         0.068
p-valued           0.014          0.806          0.014          0.806

a Summary statistics for the standard deviation of output gap and inflation, calculated at the session-repetition level.
b, c p-values associated with a signed-rank test across repetitions.
d p-values associated with a rank-sum test across treatments.
DISCUSSION

This article reports the findings from a laboratory experiment that explores the effects of experimental design features on forecasting behavior. The experimental environment is modeled as a reduced-form New Keynesian economy, where aggregate expectations formed by subjects are used to generate macroeconomic dynamics. This experiment specifically studies how forecast error information and learning influence expectation formation. In the benchmark environment, subjects must look up historical information and infer their forecast errors by comparing time series of their forecasts to realized values. The results of this treatment are compared to a second environment where subjects are provided salient information on their forecast errors in the previous period. Four heuristics of expectation formation are compared to identify well-fitting models under different information structures: rational, adaptive, trend-chasing, and constant gain learning. While subjects do significantly utilize random shocks and past outcomes in their forecasts, forecasting behavior is best described by constant gain learning under both the benchmark and the forecast error information environments. Inexperienced
subjects generally attempt to correct past forecast errors by significantly raising (lowering) their forecasts in response to past under- (over-) forecasting. However, when it comes to inflation forecasting, the reactions are significantly less extreme when subjects are provided with precise information about their forecast errors. In other words, inexperienced subjects with only visual information overreact to their forecast errors compared to those with additional numerical information. Presenting inexperienced subjects with accessible and salient forecast error information also draws their attention away from aggregate shocks when forming both forecasts. While these subjects are less “rational” than what would be predicted by the rational expectations model, they incur smaller forecast errors because they receive immediate, more precise feedback and correct any trend-chasing heuristics. This behavior results in significantly lower forecast errors and volatility. After extensive learning, experienced subjects continue to utilize forecast error information. Those with salient forecast error information significantly increase their usage of the aggregate shock in forming their forecasts, leading to more extreme forecasts, outcomes, and forecast errors. As a result, they become increasingly overreactive to their errors, perpetuating greater volatility. The benchmark treatment can be viewed as an environment with informational frictions. Subjects must actively seek out and interpret relevant information about their forecast accuracy on a second screen, leaving them prone to inattentive, extrapolative, or overreactive behavior that generates disagreement. Similar to Kryvtsov and Petersen (2013) and Roos and Luhan (2013), we find that most subjects will not utilize information if it comes at a cognitive cost. Instead, they rely heavily on easy-to-interpret information such as historical values and trends. With limited time and capacity to interpret information, we observe that subjects rationally select a coarse subset of variables and heuristics to condition their expectations on, a finding consistent with the notion of rational inattention developed by Sims (2003). By providing a common and accessible forecasting heuristic to all subjects, heterogeneity in expectations is efficiently and effectively reduced. The findings of this experiment suggest that the design of an experimental interface matters. Providing salient forecast error information will encourage subjects to utilize that information and will alter how a subject forms beliefs. Indeed, the focal information can potentially serve as an effective coordinating device. Over time, however, some subjects reduce their reliance on the supplementary information, leading other subjects to
also find it less useful. Consistent with Assenza et al. (2013) and Pfajfar and Zakelj (2014), we find that providing subjects the opportunity to learn matters. Subjects in the Benchmark treatment are able to reduce their forecast errors substantially by altering their reliance on various pieces of information. It is worth emphasizing that, had we only conducted one repetition per session, it would have been easy to conclude that behavior across the information treatments is significantly different. A second repetition shows that forecasting behavior changes with learning and that the relative benefits of focal information are reduced. Switching between forecasting rules over long horizons has been well documented by Hommes (2011) and Pfajfar and Zakelj (2014), but these experiments are typically conducted as one long repetition. Given that this is a coordination game that rewards forecast accuracy, subjects will mimic the behavior they believe is driving historical aggregate outcomes, which can result in long stretches of nonrational forecasting. Stationary repetition allows subjects to more effectively “learn away” suboptimal forecasting rules that may have emerged in the beginning of the session when they experimented with various strategies. Our experiment demonstrates the ability of policy makers to influence expectations and overall economic activity. Practically speaking, policy makers can encourage constant gain learning by making forecast errors more salient. This can be accomplished by encouraging both firms and households to update their expectations more frequently and by effectively communicating current inflation and demand statistics in a way that is retained by the general public. Financial planning and commercial bank websites can play an important role by providing an application that allows individuals to track their expectations and forecast accuracy over time. More generally, central bank communication is an area where laboratory experiments have the potential to be particularly insightful. Further experiments can shed light on what information subjects are more likely to respond to and coordinate on. Filardo and Hofmann (2014) have recently observed in the United States that, while qualitative and calendar-based forward guidance on monetary policy has been effective in influencing interest rate expectations, communication of more complex threshold-based policies beginning in December 2012 is associated with increased volatility and disagreement in financial markets. This is just one example where the clarity and ease of understanding of information can lead to better coordination of expectations. Finally, our treatment variation in information was conducted across different groups. Instead, one could consider an experiment where focal information is presented unexpectedly. How and whether
subjects would respond to new information after learning to coordinate their beliefs with others is an open question that is particularly relevant in a world where policy makers are increasingly communicating with the public.
NOTE

1. A Glass's Δ of 0.44 for output forecast errors in the first repetition implies that the mean B (Benchmark) forecast error was 0.44 standard deviations larger than the mean FEI forecast error.
ACKNOWLEDGMENT

This article has benefited greatly from the helpful comments and suggestions of Jasmina Arifovic, David Freeman, Oleksiy Kryvtsov, and an anonymous referee.
REFERENCES

Adam, K. (2007). Experimental evidence on the persistence of output and inflation. The Economic Journal, 117(520), 603–636.
Assenza, T., Heemeijer, P., Hommes, C., & Massaro, D. (2013). Individual expectations and aggregate macro behavior. Tinbergen Institute Discussion Paper No. 13-016/II.
Blume, A., & Gneezy, U. (2000). An experimental investigation of optimal learning in coordination games. Journal of Economic Theory, 90(1), 161–172.
Demertzis, M., & Viegi, N. (2008). Inflation targets as focal points. International Journal of Central Banking, 4(1), 55–87.
Demertzis, M., & Viegi, N. (2009). Inflation targeting: A framework for communication. The B.E. Journal of Macroeconomics, 9(1), Article 44.
Duffy, J. (2014). Macroeconomics: A survey of laboratory research. Working Paper. University of California, Irvine.
Filardo, A., & Hofmann, B. (2014). Forward guidance at the zero lower bound. International Banking and Financial Market Developments, 3, 37–53.
Hommes, C. (2011). The heterogeneous expectations hypothesis: Some evidence from the lab. Journal of Economic Dynamics and Control, 35(1), 1–24.
Keane, M. P., & Runkle, D. E. (1990). Testing the rationality of price forecasts: New evidence from panel data. The American Economic Review, 80(4), 714–735.
Kryvtsov, O., & Petersen, L. (2013). Expectations and monetary policy: Experimental evidence. Discussion Paper dp14-05, Department of Economics, Simon Fraser University.
Mehta, J., Starmer, C., & Sugden, R. (1994). Focal points in pure coordination games: An experimental investigation. Theory and Decision, 36(2), 163–185.
Milani, F. (2007). Expectations, learning and macroeconomic persistence. Journal of Monetary Economics, 54(7), 2065–2082.
Morris, S., & Shin, H. S. (2002). Social value of public information. American Economic Review, 92(5), 1521–1534.
Nagel, R. (1995). Unraveling in guessing games: An experimental study. American Economic Review, 85(5), 1313–1326.
Pfajfar, D., & Santoro, E. (2010). Heterogeneity, learning and information stickiness in inflation expectations. Journal of Economic Behavior & Organization, 75(3), 426–444.
Pfajfar, D., & Santoro, E. (2013). News on inflation and the epidemiology of inflation expectations. Journal of Money, Credit and Banking, 45(6), 1045–1067.
Pfajfar, D., & Zakelj, B. (2014). Inflation expectations and monetary policy design: Evidence from the laboratory. Working Paper. Board of Governors of the Federal Reserve System.
Pfajfar, D., & Zakelj, B. (2014). Experimental evidence on inflation expectation formation. Journal of Economic Dynamics and Control, 44, 147–168.
Roos, M. W., & Luhan, W. J. (2013). Information, learning and expectations in an experimental model economy. Economica, 80(319), 513–531.
Schelling, T. (1960). The strategy of conflict. Cambridge, MA: Harvard University Press.
Sims, C. (2003). Implications of rational inattention. Journal of Monetary Economics, 50(3), 665–690.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.
Woodford, M. (2003). Interest and prices: Foundations of a theory of monetary policy. Princeton, NJ: Princeton University Press.
AN EXPERIMENT ON CONSUMPTION RESPONSES TO FUTURE PRICES AND INTEREST RATES

Wolfgang J. Luhan, Michael W. M. Roos and Johann Scharler

ABSTRACT

We design an experiment to investigate the influence of announced future variations in interest rates and prices on consumption decisions. In an experimental implementation of the discounted utility model, the subjects learn the entire paths of inflation and interest rates prior to deciding on a consumption path. We decompose the total change in consumption that results from changes in either interest rates or inflation rates into anticipation and impact effects. While impact effects are of similar orders of magnitude as in the model, future changes in inflation or interest rates exert substantially smaller effects on current consumption than predicted by the model.

Keywords: Consumption; saving; intertemporal utility maximization; macroeconomic experiment
Experiments in Macroeconomics
Research in Experimental Economics, Volume 17, 139–166
Copyright © 2014 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0193-2306/doi:10.1108/S0193-230620140000017005
INTRODUCTION

The discounted utility model is the backbone of essentially all dynamic stochastic general equilibrium (DSGE) models, no matter whether they are in the real business cycle tradition or New Keynesian. Although these models sometimes also include non-forward-looking elements, such as rule-of-thumb behavior, typically a fraction of the household sector is assumed to maximize lifetime utility by choosing a consumption path for all periods of life subject to the intertemporal budget constraint and contingent on currently available information. An interesting implication of this assumption is that households adjust consumption in response to future changes in interest rates and prices as soon as they learn about those changes, and do not wait until the changes have actually happened. We refer to the former adjustment as the anticipation effect, the latter as the impact effect.

In fact, Gali and Gertler (2007) argue that one of the main differences between New Keynesian DSGE models and earlier contributions arises precisely because of this theoretical result. In New Keynesian DSGE models, consumption decisions, and thus aggregate output and inflation, do not only depend on the interest rate set by the central bank in the current period, but also, and perhaps even more importantly, on the entire expected future path of the interest rate. It follows that the overall effectiveness of monetary policy depends crucially on its ability to steer private sector expectations. Along similar lines, Walsh (2010) argues that the zero lower bound on nominal interest rates is not necessarily a constraint on the effectiveness of monetary policy. Central banks can still influence economic activity via the expectation of future real interest rates. That is, by committing to keep nominal interest rates low for a substantial period of time, which is equivalent to promising to keep inflation high, aggregate demand can be stimulated even if the nominal interest rate is essentially zero. Again, this argument relies precisely on the idea that future interest and inflation rates influence current consumption.

The purpose of this article is to explore this link between current consumption choices and future changes in interest and inflation rates in an experimental setting. We design an experiment that allows us to draw clear inference about the causal effect of announced future changes in the real interest rate on consumption. Our experimental environment is a minimal implementation of the standard theoretical model of intertemporal utility maximization. More specifically, the subjects' task is to choose consumption paths for a given level of initial wealth and certain future price levels and interest rate paths. By varying future price levels and interest rates, we
can observe whether subjects adjust their consumption choices as predicted by the standard discounted utility model. This experimental setting allows us to decompose the total effect of variations in future interest rates and prices into the anticipation effect and the impact effect, and thereby to isolate and quantify the adjustment of consumption that occurs in anticipation of future changes in interest rates and prices.

For our purposes, the experimental method has a decisive advantage over more standard empirical studies. A problem of econometric studies is that it is essentially impossible to control for all present and future factors that might affect current consumption expenditure, which complicates the isolation of causal effects with field data. When analyzing field data, we simply do not know which information subjects used and whether they understood the implications of such information when making spending decisions. The lab, in contrast, offers a controlled environment where we can focus on a small number of influential factors (for a similar argument, see Fehr & Zych, 2008). It is likely that anomalies observed in an environment that eliminates all sources of confusion and reduces the task to its simplest core are also present in the much more complex world outside the lab.

Our analysis is related to several strands of the literature. First, consumption and saving behavior has been analyzed experimentally in a number of studies (e.g., Carbone & Duffy, 2014; Carbone & Hey, 2004; Chua & Camerer, 2007; Hey & Dardanoni, 1988; Meissner, 2013). The main differences between our experiment and those previous studies are that we (i) eliminate any effects of uncertainty and (ii) ask for the choice of consumption paths instead of consumption levels in individual periods. Both features serve to mirror the theoretical optimization problem as closely as possible. By eliminating uncertainty, we circumvent all potential problems related to risk attitudes and the formation of expectations. We ask for complete consumption paths over an experimental "life" instead of sequential consumption choices in each period. This way we elicit the complete ex ante solution of the utility maximization problem.

Second, previous research has demonstrated that the discounted utility model may not be a good description of actual behavior due to time-dependent discount rates (Frederick, Loewenstein, & O'Donoghue, 2002). We are not so much interested in the effect of discounting per se, but focus on the distinction between anticipation and impact effects.

Finally, our study is closely related to the empirical literature on the "excess sensitivity of consumption to current income" and "excess smoothness of consumption to future events" (e.g., Campbell & Deaton, 1989; Flavin, 1985; Luengo-Prado & Sørensen, 2008; Pasini, 2009; West, 1988).
While this literature studies the response of consumption to changes in income, we focus on the effects of changes in interest rates and prices, which have not been analyzed before. We conjecture that if subjects do not respond to income changes as predicted by the standard model, they may also fail to respond correctly to changes in prices and interest rates.

Our main result is that subjects' responses to announced changes in future interest rates and prices differ markedly from the assumptions about behavior typically made in DSGE models. In particular, the anticipation effect is essentially absent: Subjects hardly adjust consumption paths in advance of known future changes in interest rates or prices. Thus, we argue that monetary policy makers cannot rely on the effects of announcements about future policy. In addition, our findings confirm previous studies (e.g., Ballinger, Palumbo, & Wilcox, 2003; Chua & Camerer, 2007) showing that subjects do not smooth consumption as predicted by theory. Subjects tend to consume too much in early periods and too little toward the end of an experimental life.

The remainder of this article is structured as follows: The section "Theory" discusses the theoretical basis for our experiment, while the section "Design and Experimental Procedure" presents the experimental design and procedure. In the section "Results" we discuss our results, and the "Conclusions" section concludes this article.
THEORY

DSGE models are currently the workhorse models in modern macroeconomics. One of the basic building blocks of essentially all of these models is the so-called forward-looking IS relationship. Although this relationship shares some similarities with the traditional IS curve familiar from introductory textbooks, there are several differences. Most importantly, the forward-looking IS relationship is derived from the equations that characterize an optimal solution to the intertemporal optimization problem of a representative household in the framework of the discounted utility model. In its simplest version, the forward-looking IS relationship describes how the household reallocates consumption over time depending on the nominal interest rate as well as the inflation rate, both captured in the real interest rate.

In this section, we present the version of an intertemporal optimization problem which we implement in the laboratory experiment. Note that we
only implement this optimization problem as a partial equilibrium approach with exogenously determined prices. Households' lifetime utility function is modeled as

$$U = \sum_{t=1}^{T} \beta^{t} u(C_t) \qquad (1)$$
where β = 1/(1 + ρ) is the constant discount factor and ρ is the rate of time preference.¹ The period utility function² is defined as

$$u(C_t) = \frac{C_t^{1-\sigma}}{1-\sigma} \qquad (2)$$
We simplify the implementation in the lab by concentrating on the case with a known finite horizon T = 5. The choice of five periods is motivated by the tradeoff between simplicity for the subjects in the lab and the need to have enough observations for the empirical analysis. In principle, it would be desirable to have more periods, but we felt that this would make the problem harder for the subjects and the use of the computer interface in our experiment less convenient. While it is not important for our conclusions that the experimental time has a real-world interpretation, we think of a period as consisting of about 10–12 years. In this case, a young consumer of about 20 years would make the planning for the next 50–60 years, which roughly corresponds to the life expectancy in developed countries. We abstract from potential problems related to the formation of expectations by considering only the perfect foresight case here. We assume that the household does not earn any income but has an initial nominal wealth endowment of A_0 in the experimental currency "taler." This way, we avoid two problems related to credit. On the one hand, credit constraints constitute a severe complication of the task, which is not necessary for the theoretical model. On the other hand, unobservable psychological factors such as debt aversion might influence behavior and thereby blur the empirical results. The experimental currency is like fiat money without any redemption value at the end of the experiment. In each period t, the household can buy and consume C_t units of a single consumption good at an exogenously determined price P_t in talers. Any part of the initial endowment not consumed in period t is saved at a
risk-free nominal interest rate R_t. At date 0, all future prices and interest rates are known with certainty. Therefore, wealth evolves according to

$$A_{t+1} = (1 + R_t)(A_t - P_t C_t) \qquad (3)$$
Maximizing the utility function (1) subject to the set of wealth equations (3) by choice of consumption for each period yields the Euler equation

$$\frac{C_t^{-\sigma}}{P_t} = \beta (1 + R_t) \frac{C_{t+1}^{-\sigma}}{P_{t+1}} \qquad (4)$$
The Euler equation together with the boundary condition C_5 = A_5/P_5 characterizes optimal consumption in periods 1, ..., 5. An important implication of (4) is that current consumption depends on all future prices and interest rates. Rearranging (4) gives the consumption growth rule C_{t+1} = Ω_t^{1/σ} C_t, which, combined with the intertemporal budget constraint, yields the solution for optimal consumption in the first period:

$$C_1 = \frac{A_0}{P_1 + \sum_{t=2}^{T} P_t \prod_{\tau=2}^{t} \dfrac{\Omega_{\tau-1}^{1/\sigma}}{1 + R_{\tau-1}}} \qquad (5)$$

with

$$\Omega_t \equiv \frac{1 + r_t}{1 + \rho} \qquad (6)$$
and where (1 + r_t) ≡ (1 + R_t)P_t/P_{t+1} is the real interest rate factor. Consumption in all other periods can be determined recursively using (5) and the Euler equation. Our experiment generates data on how consumption depends on future real interest rates. To see how subjects respond to announced changes in the real interest rate, we will vary future prices as well as the nominal interest rate to generate changes in the real interest rate. Note that since we treat the nominal interest rate as well as the price levels as exogenous, we can vary both of these variables independently. The Euler equation implies that C_{t+1} = C_t if R_t = ρ and P_{t+1} = P_t: the optimal consumption profile is flat if the interest rate is equal to the rate of time preference and the price level is constant. For our experiment, we choose the following baseline calibration: ρ = 0.2, R_t = 0.2, P_t = 1, A_0 = 1,000, and σ = 0.5. This calibration is chosen for
practicality reasons and is not meant to have any external validity. If a period is interpreted as about 10 years, the discount rate and the interest rate are close to 0.02 per year, which is not unrealistic. The chosen elasticity of intertemporal substitution ensures that optimal savings increase with the interest rate. With this calibration, we obtain a flat baseline consumption path: C_t = 278.65 for t = 1, ..., 5.
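The baseline number can be verified directly from Eqs. (4)–(6). The following short script is a minimal sketch of that computation (ours, not part of the original study; all function and variable names are our own): it builds the consumption growth factors from the Euler equation and solves the intertemporal budget constraint for C_1, reproducing the flat path C_t = 278.65 under the baseline calibration.

```python
import numpy as np

def optimal_path(P, R, A0=1000.0, rho=0.2, sigma=0.5):
    """Closed-form optimal consumption path from Eqs. (4)-(6).

    P : list of prices P_1..P_T;  R : list of nominal rates R_1..R_T.
    """
    T = len(P)
    beta = 1.0 / (1.0 + rho)
    # Omega_t = beta * (1 + r_t), with (1 + r_t) = (1 + R_t) P_t / P_{t+1}
    omega = [beta * (1 + R[t]) * P[t] / P[t + 1] for t in range(T - 1)]
    # Euler equation: C_{t+1} = Omega_t^(1/sigma) * C_t
    growth = np.cumprod([1.0] + [om ** (1.0 / sigma) for om in omega])
    # Nominal discount factors back to period 1: 1 / prod(1 + R)
    disc = np.cumprod([1.0] + [1.0 / (1 + R[t]) for t in range(T - 1)])
    C1 = A0 / sum(P[t] * growth[t] * disc[t] for t in range(T))
    return C1 * growth

print(optimal_path(P=[1.0] * 5, R=[0.2] * 5))
# -> approx. [278.65 278.65 278.65 278.65 278.65]
```

Feeding in the prices and rates of any treatment row of Table 1 below yields the corresponding optimal path plotted in Fig. 2.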
DESIGN AND EXPERIMENTAL PROCEDURE

We implement the theoretical model as closely as possible in the lab, simplifying the decision task to meet all theoretical assumptions. We change interest rates and prices between treatments to test whether subjects respond as predicted by theory. In the baseline calibration, interest rates and prices are such that the optimal consumption path is flat, as the real interest rate is equal to the rate of time preference. In the treatments, we systematically vary prices and interest rates to see how subjects deviate from the baseline in response to the changes.

Fig. 1 illustrates the idea with the example of a perfectly anticipated future increase in the real interest rate. If the real interest rate changes at time t_I in the future, the overall consumption response can be decomposed into an anticipation effect, which occurs when the future change becomes known, and an impact effect, which occurs when the change actually takes place, that is, at t_I and later.
[Fig. 1. Theoretical Anticipation and Impact Effect. Consumption C_t over time t: relative to the flat Benchmark path, the Treatment path deviates before t_I (anticipation effect) and from t_I onward (impact effect).]
Note that in our calibration the anticipation and impact effects move in opposite directions. In the case of, for example, an anticipated future increase in the real interest rate, it is optimal to postpone consumption prior to t_I. Since the increase in the interest rate is certain, the substitution effect leads to lower consumption already before the change in the interest rate takes place. This lower consumption in early periods and the higher interest rate payments after t_I generate higher wealth, allowing higher consumption toward the end of the planning horizon.

We are primarily interested in measuring the observed anticipation and impact effects of real interest rate changes on chosen consumption. First, we want to see whether and in which direction subjects adjust their consumption levels relative to the benchmark. Second, we want to compare whether the size of those changes corresponds to the size of the optimal adjustments predicted by the theoretical model.

Using a within-subject design, we vary nominal interest rates and price levels in future periods to change the real interest rate. We can categorize our individual treatments into three different treatment groups. In treatment group T1, we vary only the price level; in treatment group T2, we vary only the nominal interest rate; and in treatment group T3, we vary both. Table 1 summarizes the 16 treatments. Note that for each price level change in T1 there is an equivalent change in the nominal interest rate in T2, which generates the same change in the real interest rate. In T3 these changes are created by changing both the price and the nominal interest rate. All changes are timed so that the real interest rate changes in the third period; they fall into one of five categories: an increase by 10 (r↑10) or 20 percentage points (r↑20); a decrease by 10 (r↓10) or 20 percentage points (r↓20); or simultaneous changes of the inflation rate and the nominal interest rate that cancel out, leaving the real interest rate unchanged (r=). These treatments allow us to observe directly whether subjects respond differently to nominal interest rates and prices.

Each subject completed all 23 consecutive lives. The first five lives exhibited constant nominal and real interest rates and prices. These lives were intended as a training phase. Fig. 2 shows the theoretically optimal consumption paths for the 23 lives. The treatment with a flat consumption path appears four times (life 1, life 5, life 14, and life 23). The flat paths in lives 7, 9, 16, and 18 are the result of offsetting price and interest rate changes keeping the real interest rate fixed. To see the optimal consumption changes relative to the flat baseline path, we incorporate the baseline path in all graphs.
Table 1. Treatment Overview.

Group  Δr    Life  Var  Period 1  Period 2  Period 3  Period 4  Period 5
T1     r↓10    8   P_t     1         1         1        1.09      1.09
                   R_t     0.2       0.2       0.2      0.2       0.2
T1     r↑20   12   P_t     1         1         1        0.86      0.86
                   R_t     0.2       0.2       0.2      0.2       0.2
T1     r↓20   13   P_t     1         1         1        1.2       1.2
                   R_t     0.2       0.2       0.2      0.2       0.2
T1     r↑10   19   P_t     1         1         1        0.92      0.92
                   R_t     0.2       0.2       0.2      0.2       0.2
T2     r↑10   10   P_t     1         1         1        1         1
                   R_t     0.2       0.2       0.3      0.2       0.2
T2     r↓20   17   P_t     1         1         1        1         1
                   R_t     0.2       0.2       0        0.2       0.2
T2     r↓10   20   P_t     1         1         1        1         1
                   R_t     0.2       0.2       0.1      0.2       0.2
T2     r↑20   22   P_t     1         1         1        1         1
                   R_t     0.2       0.2       0.4      0.2       0.2
T3     r↓20    6   P_t     1         1         1        1.1       1.1
                   R_t     0.2       0.2       0.1      0.2       0.2
T3     r=      7   P_t     1         1         1        0.92      0.92
                   R_t     0.2       0.2       0.1      0.2       0.2
T3     r=      9   P_t     1         1         1        1.08      1.08
                   R_t     0.2       0.2       0.3      0.2       0.2
T3     r↑20   11   P_t     1         1         1        0.93      0.93
                   R_t     0.2       0.2       0.3      0.2       0.2
T3     r↓10   15   P_t     1         1         1        1.2       1.2
                   R_t     0.2       0.2       0.32     0.2       0.2
T3     r=     16   P_t     1         1         1        0.83      0.83
                   R_t     0.2       0.2       0        0.2       0.2
T3     r=     18   P_t     1         1         1        1.17      1.17
                   R_t     0.2       0.2       0.4      0.2       0.2
T3     r↑10   21   P_t     1         1         1        1.08      1.08
                   R_t     0.2       0.2       0.4      0.2       0.2

Note: Each subject had to make decisions in 23 "lives." Each life consists of five periods with potentially different interest rates and prices. Lives 1–5, 14, and 23 are not contained in the table as they are baseline and training lives in which there were no changes in the focus variables.
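As a quick consistency check on Table 1 (a sketch in our own notation, not part of the original materials), the period-3 real interest rate implied by any treatment follows from (1 + r_3) = (1 + R_3)P_3/P_4:

```python
# Implied period-3 real rate for a few Table 1 treatments
for life, R3, P3, P4 in [(8, 0.2, 1.0, 1.09),    # T1: price change only
                         (22, 0.4, 1.0, 1.00),   # T2: nominal rate only
                         (16, 0.0, 1.0, 0.83)]:  # T3: offsetting changes
    r3 = (1 + R3) * P3 / P4 - 1
    print(f"life {life}: r3 = {r3:.2f}")
# life 8: r3 = 0.10 (down 10 pp), life 22: r3 = 0.40 (up 20 pp),
# life 16: r3 = 0.20 (unchanged)
```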
In our empirical analysis, we will test whether actual consumption choices conform to the optimal paths shown in Fig. 2. We conducted four sessions at the RUBex laboratory at the Ruhr-Universität Bochum. The 50 participants were students from several departments.
[Fig. 2. Optimal Consumption Paths in all Lives. A grid of 23 panels, one per life, plotting optimal consumption (roughly 200–350) against periods 1–5, with the flat baseline path shown for comparison.]
The experiment was conducted using z-Tree (Fischbacher, 2007) and lasted about two hours. Upon arrival in the lab, subjects were randomly seated at workstations separated by blinds. Instructions were read aloud, and subjects were encouraged to ask questions at any point of the experiment. The instructions contained Eqs. (1), (2), and (3) and verbal explanations. Only one life was randomly chosen for the final payment in order to avoid supergame effects.

Subjects had to choose a consumption profile for each life. As we wanted subjects to choose the complete path at the beginning of a life, they could enter preliminary consumption choices for each period and change them again if they wanted to. Once a preliminary consumption choice for the first period in a life was made, interest earnings were automatically calculated by the program and displayed together with the remaining endowment for the subsequent periods. The same happened in the remaining four periods. If a subject was not satisfied with the preliminary choices made, he or she could reset all the consumption choices and start over in period 1 of that life. Subjects could experiment with the consumption path as often as they wished within the given time limit. Only after a subject confirmed the complete consumption path for all five periods of a life were those choices finalized and payoff relevant. Subjects did not receive
feedback on utility during a life, but only after they had confirmed the finally chosen path.³ A more detailed description of this procedure can be found in the instructions in the appendix.

By eliciting the complete consumption path and providing a calculator⁴ as well as the possibility for trial and error, we tried to simplify the intertemporal consumption task. In a sequential setting, errors of previous periods cannot be corrected and impede optimal behavior in subsequent periods. In our setting the task is still "quasi-sequential," as our subjects enter the consumption levels for each period, observe interest earnings, and proceed to the next period. As they observe all previous periods while making their decisions and can reset these at any time, errors can be identified and corrected before confirming the consumption path. We deliberately excluded an automatic calculation of the period utility during a life, as we wanted to separate learning within a life from learning between lives. While the former fosters the search for the highest possible consumption, the latter should lead to an (observable) increase of the overall utility over repetitions of similar treatments.

One experimental session lasted for an average of two hours. Total payoffs ranged from €18 to €34, with an average of €30.3 (about $42.50), including a €4 show-up fee. We implemented a CRRA utility function as described in Eqs. (1) and (2), which were also contained in the instructions. Given our calibration (see the section "Theory"), the payoff function can simply be described as

$$U = 0.83\,u_1 + 0.69\,u_2 + 0.58\,u_3 + 0.48\,u_4 + 0.40\,u_5 \qquad (7)$$
To foster comprehension, we displayed Eq. (7) on screen and pointed out that holdings of the experimental currency at the end of the experiment would not generate any monetary payoff. The conversion rate was €0.28 (approximately $0.39) per utility point.
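The weights in Eq. (7) are simply the rounded discount factors β^t with β = 1/1.2; a one-line check (a sketch of ours, not from the original materials):

```python
beta = 1 / 1.2   # beta = 1/(1 + rho) with rho = 0.2
print([round(beta ** t, 2) for t in range(1, 6)])
# -> [0.83, 0.69, 0.58, 0.48, 0.4]
```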
RESULTS

As a first step in our analysis, we study the properties of the actually chosen consumption profiles in the baseline treatment (lives 1, 5, 14, and 23). The actually chosen profiles will serve as a benchmark from which we calculate the anticipation and the impact effects. The next step is to relate the observed consumption profiles to present and future interest rates and
prices. Then, in a qualitative exercise, we check how often consumption is adjusted in the right direction.
Benchmark Case: No Change in the Real Interest Rate
400
We first focus on the four baseline treatments in which prices and interest rates are constant and equal to the rate of time preference over all four periods of a life. In this baseline case, the utility-maximizing consumption path is flat. Fig. 3 shows the average consumption of all 50 subjects in each period averaged over the four baseline treatments. The figure clearly shows that subjects do not choose flat consumption profiles but rather declining consumption levels in the course of time. Table 2 compares the actual mean consumption level in each period with the optimal ones and contains the significance levels of t-tests on the equality of the means with the predicted values. On average, subjects consume too much in the first period and too little in the later ones. This outcome can be interpreted as a result of nonstandard discounting or disregard of the compound effects of interest
Mean consumption
0
200
250
300
350
Optimal consumption
1
2
Fig. 3.
3 Period
4
Baseline Consumption.
5
Table 2. Mean Consumption by Periods.

t         1        2        3        4        5        All
C_t^opt   278.65   278.65   278.65   278.65   278.65   278.65
C_t       367.53   256.53   245.60   207.24   219.43   259.27
p         0.000    0.003    0.001    0.000    0.001    0.001

Note: C_t^opt is optimal consumption for the parameters chosen in the section "Theory." C_t is the average consumption of all subjects in the baseline treatments in period t. The p-value is the empirical significance level of a t-test on the equality of average and optimal consumption.
Focusing on average consumption can, however, be misleading, since individual heterogeneity may hide behind the average. While some subjects might have declining baseline paths, others might satisfy the basic rationality criterion that the baseline paths should be flat. Therefore, we apply the following procedure: Using OLS, we first regress each individual's consumption in the four lives of the baseline treatment on a linear and a quadratic time trend and a constant. We then test whether the coefficients of the linear and the quadratic trend term are jointly different from zero using an F-test. At the 5% level, we cannot reject the null for 17 subjects. For these subjects, we check whether the constant is in the interval [200, 360], which is the case for five subjects. The interval follows from a basic approximation of what a flat consumption path in the baseline treatment should be. If the interest rate is neglected and the initial endowment of 1,000 is equally divided across all five periods, consumption in each period is 200, which is the lower bound. Assuming instead that the initial endowment yields an interest earning of 200 in each of periods 1–4 at the given interest rate of 20%, the lifetime budget is 1,800. If this is allocated evenly to all five periods, the resulting period consumption is 360. Of course, both calculations are wrong but can serve as a rough estimate of what the true optimal consumption path should be. In fact, the optimal consumption level of 278.65 is almost exactly in the middle of this interval. We hence conclude that five subjects out of the total of 50 have a consumption path in the baseline treatments which is flat and close to the optimal one.
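A minimal sketch of this flatness test (ours; the data layout and column names are assumptions, not the authors' code): for each subject, regress baseline consumption on a linear and quadratic trend and jointly F-test the two trend coefficients.

```python
import numpy as np
import statsmodels.formula.api as smf

def flatness_test(df_subj):
    """df_subj: one subject's baseline observations with columns
    'C' (chosen consumption) and 't' (period 1..5, pooled over the
    baseline lives 1, 5, 14, and 23)."""
    fit = smf.ols("C ~ t + I(t**2)", data=df_subj).fit()
    # Joint F-test that the linear and quadratic trend terms are zero
    R = np.array([[0, 1, 0],   # coefficient on t
                  [0, 0, 1]])  # coefficient on t**2
    p_joint = fit.f_test(R).pvalue
    return p_joint, fit.params["Intercept"]

# A baseline path counts as flat and plausible if p_joint > 0.05
# and the constant lies in the paper's rough interval [200, 360].
```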
The average results also hide the fact that subjects appear to learn to get closer to the optimal flat consumption profile during the experiment. Subjects had to choose consumption in the baseline treatment in lives 1, 5, 14, and 23. As shown in Fig. 4, the steep decline in the intertemporal consumption profile observed in Fig. 3 is mainly due to the early lives 1 and 5. Especially in the last baseline treatment in life 23, the average consumption path of all subjects is still falling, but much flatter than the first one. We can use the standard deviation over the consumption levels in the five periods of an individual’s life as a measure for the flatness of the baseline consumption path. Averaging this standard deviation over all individuals results in values of 288.3 for life 1, 191.1 for life 5, 134.9 for life 14, and 113.8 for life 23.
[Fig. 4. Baseline Consumption by Life. Mean consumption C1–C5 (scale 0–500) in the baseline lives 1, 5, 14, and 23.]

Comparative Statics

So far, we have demonstrated that most subjects do not allocate consumption intertemporally in a way consistent with the point predictions of the discounted utility model. Nevertheless, it still appears conceivable that responses to changes in interest rates and prices are more in line with the predictions of the model.
Despite the observation that the consumption paths chosen by the subjects deviate from the paths predicted by the model, the comparative statics may be more in line with the theory. In addition to being another dimension along which we can evaluate the theory, analyzing the comparative statics also allows us to quantify the anticipation and impact effects, which is our primary goal. Even if the model fails to predict the intertemporal allocation of consumption and savings, it may still be an appropriate framework for the evaluation of the effects of monetary policy on private consumption, as long as it correctly predicts how people respond to changes in important variables such as nominal interest rates.

More specifically, we look at changes in actual consumption profiles relative to the individual baselines, ΔC_{it} = C_{it} − C̄_{it}, and compare these deviations to the corresponding, theoretically predicted deviations, ΔC_t^opt = C_t^opt − 278.65. The individual baselines, C̄_{it}, are the chosen consumption levels in each period averaged over the four treatments with constant prices and interest rates (lives 1, 5, 14, and 23). Note that the construction of ΔC_{it} takes into account that subjects may already deviate from the optimal path in the baseline case of a flat consumption profile. We are therefore able to isolate the size of the reaction to changes in interest rates and prices.

A first approach to summarizing the data is to regress ΔC_{it} on ΔC_t^opt. Fig. 5 shows the scatter plot with the regression line and the 45° line.⁵ The figure shows that the observed deviations of consumption from the individual benchmarks are in some cases huge. Some of these deviations even exceed the initial wealth endowment of 1,000, which can only happen if a subject does not consume anything in early periods and consumes most of the total endowment, including interest, in single periods at the end of a life. But even apart from these extreme outliers, many observations are far away from the 45° line, which indicates large individual deviations from optimal behavior. Clearly, if ΔC_{it} and ΔC_t^opt were equal, the regression line would coincide with the 45° line. The regression line has a slope of 1.02, which is not significantly different from 1 (t-test), and the constant is slightly positive (6.68; p < 0.01). Thus, in the aggregate, the actual changes are practically identical to the changes predicted by the model. Individual deviations from optimal behavior are not systematic. From a macroeconomic point of view, this could be seen as good news, as macroeconomic models serve to explain and predict aggregate behavior.
[Fig. 5. Actual versus Optimal Changes in Consumption. Scatter plot of observed ΔC (roughly −1,000 to 2,000) against optimal ΔC (−100 to 100) with the fitted regression line and the 45° line, which are practically identical.]
However, the low R² of 0.05 indicates that while those deviations are not systematic, they are nevertheless very large. This is visible in the enormous variation in the scatter plot in Fig. 5. Note that in the previous analysis, we pooled the consumption responses over time. A more disaggregated view is provided by the following regression, which is run for every life:

$$\Delta C_{it} - \Delta C_t^{opt} = \sum_{t=1}^{5} \alpha_t d_t^{period} + \varepsilon_{it} \qquad (8)$$
We subtract the adjustment of consumption predicted by the model from the observed adjustment of the consumption profile and regress this difference on a set of period dummies. Note that here we compare deviations in each period of an experimental life. Running this regression provides a simple way to compare actual and predicted deviations of consumption from the baseline case involving a constant real interest rate. With t-tests and F-tests, we can analyze whether the actual changes are
different from the optimal ones in individual periods and over all periods jointly. Table 3 contains the results from separate OLS regressions for each life. Δr indicates whether and how the real interest rate changed in the individual lives (treatments). The last column shows the significance level of the F-test that all five period dummies are jointly different from zero. While we find no significant difference between the actual and the optimal consumption changes in lives 6–12, there are significant deviations in the later lives⁶ 13–22. In lives 9, 11, and 12, the t-tests show that in one period the difference between the actual consumption change and the optimal one is significant; however, the F-test does not indicate a joint difference from zero. Remarkably, the significant differences typically occur in periods 1 and 5 and are negative in the former and positive in the latter. The message from this analysis is that while in the majority of the treatments the theoretical model does not predict correctly how subjects respond to changes in prices and interest rates, for some treatments we cannot reject optimal consumption changes.
Table 3. Difference between Actual and Optimal Changes.

Group  Δr     Life  Period 1   Period 2  Period 3  Period 4  Period 5  p(F)
T1     r↓10    8    −15.29     24.78     −6.71     10.78     1.98      .89
T1     r↑20   12    −47.88*    24.34     14.76     −14.54    37.63     .17
T1     r↓20   13    −23.32     14.07     21.92     33.03*    −17.35    .03
T1     r↑10   19    −49.06**   18.40     21.48     .58       30.93*    .00
T2     r↑10   10    −24.86     2.19      23.35     .43       6.69      .90
T2     r↓20   17    −54.45**   −1.40     −1.69     60.43**   54.23**   .00
T2     r↓10   20    −52.76**   6.93      14.19     38.49**   41.39**   .00
T2     r↑20   22    −88.82**   13.34     32.37     50.34*    55.02*    .00
T3     r↓20    6    2.57       1.39      5.54      19.98     −11.65    .94
T3     r=      7    −14.76     19.47     −43.02    24.46     7.85      .43
T3     r=      9    −58.27*    33.59     1.58      27.09     32.85     .11
T3     r↑20   11    −42.70     11.15     7.21      −5.47     53.43*    .18
T3     r↓10   15    −45.42**   −4.85     6.72      82.29**   3.35      .00
T3     r=     16    −63.52**   −6.26     −7.44     34.61     108.82**  .00
T3     r=     18    −62.08**   8.15      0.80      72.59**   31.87     .00
T3     r↑10   21    −67.45**   13.52     7.78      61.55**   33.27*    .00

Note: The table shows the estimated coefficients from Eq. (8). (*, **) significantly different from zero at (5%, 1%) in the t-test; p(F) is the significance level of F-tests that all dummies are jointly different from zero; 250 observations in each regression. In all estimations, the constant was omitted. This allows us to include dummy variables for all periods without generating perfect multicollinearity.
Overall, we find some evidence in favor of the hypothesis that subjects react in anticipation of future changes in interest rates and prices, though only to a limited extent.
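The life-by-life regression in Eq. (8) can be sketched as follows (our code and column names, not the authors'; we assume a long-format data set with one row per subject and period):

```python
import numpy as np
import statsmodels.formula.api as smf

def eq8_for_life(df_life):
    """df_life: one life's data with columns 'gap' (actual minus
    optimal consumption change) and 'period' (1..5)."""
    # '- 1' drops the constant so that all five period dummies can
    # be included, as in the note to Table 3.
    fit = smf.ols("gap ~ C(period) - 1", data=df_life).fit()
    # Joint F-test that all five period dummies are zero
    p_joint = fit.f_test(np.eye(len(fit.params))).pvalue
    return fit.params, p_joint
```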
Anticipation and Impact Effects

The main focus of this study lies in how subjects respond to changes in prices and interest rates, both in anticipation and on impact. To explore this issue, we examine in this section whether there are systematic responses to interest rates and prices and when these systematic responses occur. To do so, we run the following regression separately for each period:

$$C_i^{period} = \beta_0 + \beta_1 R_3 + \beta_2 P_4 + \varepsilon_i \qquad (9)$$
Theoretically, the optimal consumption levels in each period are functions of interest rates and prices. Although the relations are nonlinear (see Eq. (5)), we can approximate them by linear regressions. Since the interest rate changes only in period 3 and prices in periods 4 and 5 are always identical, it is sufficient to include R_3 and P_4 in the regression.⁷ All the other variables are captured by β_0.
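Eq. (9) can be run period by period as in the following sketch (our column names; we assume one row per subject-life):

```python
import statsmodels.formula.api as smf

# df: one row per subject-life with columns 'C1'..'C5' (chosen
# consumption), 'R3' (period-3 nominal rate), and 'P4' (period-4 price)
def eq9_by_period(df):
    return {p: smf.ols(f"C{p} ~ R3 + P4", data=df).fit()
            for p in range(1, 6)}
```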
Table 4. OLS Regression of Consumption in Each Period on Prices and Interest Rates.

            C1           C2          C3          C4           C5
Constant    211.11**     205.64**    184.91**    455.37**     816.68**
            (63.54)      (35.06)     (44.59)     (52.00)      (91.72)
R3_opt      −65.96       −66.41      −65.37      377.20       376.93
R3          −166.93**    −35.74      −5.61       371.83**     404.35**
            (55.46)      (30.60)     (38.92)     (45.39)      (80.05)
P4_opt      80.05        80.65       79.12       −457.93      −458.41
P4          154.08**     66.44       65.71       −295.95**    −646.09**
            (65.91)      (36.37)     (46.25)     (53.94)      (95.14)
adj. R²     .008         .002        .000        .066         .048
p(F)        .005         .17         .32         .000         .000
#           1000         1000        1000        1000         1000

Notes: R3_opt and P4_opt stand for the coefficients in a regression of optimal consumption on the interest rate in period 3 and the price in period 4. We do not show the very small standard errors of these regressions, as the relations are linear approximations of the deterministic nonlinear relationships. (*, **) different from 1 at (5%, 1%) in the t-test; p(F) is the significance level of F-tests that the coefficients are different from zero.
Directions of Adjustment

We have demonstrated that the size of the consumption adjustments in response to interest rates and prices is not as predicted in the majority of the treatments. A more lenient test of the model's predictive power is to check whether it predicts at least the direction of consumption changes correctly.⁹ We coded each individual deviation from the individual's baseline consumption as "−1" if it was negative and "+1" if it was positive. Table 5 summarizes the proportions of adjustments in the predicted direction in the different treatments for the anticipation phase and the impact phase. Since virtually all observed consumption paths deviate from the baseline, we omit the treatments with flat optimal consumption paths. As consumption either goes up or down, we use the binomial test to see whether the proportion of positive or negative changes is significantly larger than 50%, which would be the random proportion.

Table 5 shows that subjects do not respond in anticipation of known changes. The exception is life 13, in which 58% of the changes were in the theoretically expected positive direction. On impact, subjects' responses are more in line with theory, as in 9 out of 12 treatments more than 50% of the adjustments are correct. Notice that impact responses always have the correct sign in the price change treatments T1, but only in two of the four interest rate treatments T2.
Table 5. Directions of Consumption Changes.

                          Anticipation              Impact
Group  Δr     Life   Sign   Prop   p         Sign   Prop   p
T1     r↓10    8     +      .53    .23       −      .70    .00
T1     r↑20   12     −      .51    .47       +      .68    .00
T1     r↓20   13     +      .58    .03       −      .76    .00
T1     r↑10   19     −      .45    .92       +      .72    .00
T2     r↑10   10     −      .52    .34       +      .56    .14
T2     r↓20   17     +      .43    .96       −      .59    .04
T2     r↓10   20     +      .53    .34       −      .51    .46
T2     r↑20   22     −      .54    .18       +      .84    .00
T3     r↓20    6     +      .49    .60       −      .81    .00
T3     r↑20   11     −      .53    .28       +      .77    .00
T3     r↓10   15     +      .49    .62       −      .50    .54
T3     r↑10   21     −      .56    .08       +      .77    .00

Note: "Sign" indicates whether consumption should increase (+) or decrease (−) relative to the benchmark. "Prop" is the proportion of consumption changes in the theoretically predicted direction. "p" is the significance level of a one-sided binomial test that the proportion is smaller (larger) than 0.5. The treatments in which consumption should be constant are omitted because observed consumption always changed.
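The one-sided binomial test behind the p columns of Table 5 can be reproduced as in the following sketch (ours, with made-up counts; it assumes SciPy's binomtest):

```python
from scipy.stats import binomtest

# Hypothetical example: 58 of 100 adjustments go in the predicted
# direction; test whether this exceeds the 50% random benchmark.
res = binomtest(k=58, n=100, p=0.5, alternative="greater")
print(round(res.pvalue, 3))  # about 0.067
```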
CONCLUSIONS

In this article, we explore the extent to which subjects react to announced changes in future interest rates and prices when making consumption and saving decisions in a lab experiment. Our first result is that subjects appear to discount future periods more strongly than justified by the experimental design and to neglect the compound effect of the interest rate, leading to overconsumption in early periods. Taking this into account, we find strong indications of under-sensitivity to perfectly anticipated future changes in prices and interest rates. This is reflected in an anticipation effect that is significantly smaller than predicted by theory and smaller than what is
typically assumed in state-of-the-art macroeconomic models. Despite the fact that future changes in interest rates and price levels are known with certainty, subjects typically do not adjust their consumption decisions at the time of the announcement. The impact effect is qualitatively and quantitatively in line with the theoretical prediction, or even larger than predicted.

Overall, our results can be interpreted as an extension of the "over-sensitivity" literature. It is well known that consumption follows current income too closely to be consistent with a high degree of intertemporal smoothing. We show that this over-sensitivity to the current economic environment and under-sensitivity to future developments carries over to intertemporal prices. Interestingly, we obtain this result without imposing any form of credit constraint, which is often invoked as an explanation for the observed over-sensitivity of consumption to current income.

We believe that our results are not the consequence of general confusion among our subjects or a too difficult task. Houser, Keane, and McCabe (2004) show that some subjects can get fairly close to the optimal solution in a complicated intertemporal decision-making problem. Our results show that, first, subjects seem to learn to get close to the optimal flat consumption path in the baseline treatment if they encounter this treatment repeatedly, and, second, on average they make adjustments in the right direction. While we cannot rule out that some subjects were confused and had little insight into how to behave optimally, on average our results are not nonsensical. In particular, the average impact effects are quite reasonable. This suggests that the general absence of the anticipation effect in our experiment has deeper reasons than deficient comprehension of the problem at hand.

The observed "present bias" is unlikely to be a problem with discounting consumption over time. Subjects' home-grown time preference does not matter in this experiment because they receive their total payoff at the end of the experiment and have no possibility of consumption during the short experiment. For the different experimental periods, we induced the discounting of time to make it clear how time should be treated. It rather seems that many subjects have a natural tendency to discount information about the future, even if this is perfectly reliable. A similar result was found in the work by Orland and Roos (2013), where subjects partially ignored future information relevant for optimal price setting. This might be reasonable behavior in real-world situations, in which the future is typically highly uncertain. Future information might be discounted because confidence in this information decreases the further the respective event lies in the future. Of course, in our setting, such discounting is not warranted,
but it might be a reflection of subjects' usual way of approaching such kinds of intertemporal problems.

The policy implications are clear: if the overall reaction to future changes in the real interest rate is smaller than predicted, as our results suggest, policy makers cannot rely on the positive effects of perfectly credible announcements of future changes in monetary policy, as predicted by standard models. This is especially important in the current economic environment, in which many policy rates are at the zero lower bound and central banks hope that unconventional policy measures such as forward guidance could still give monetary policy some traction. Even if nominal interest rates are close to zero, the central bank can lower the real interest rate if it can convince the public that future inflation will be higher. However, if the public does not respond to central banks' communication, central banks may be trapped at the zero lower bound.

In future research, it seems important to study whether the lack of anticipation effects, or the discounting of information about the future, is also present in other settings. One might think of a design in which there is a very strong link between future events and current optimal decisions and in which it is easier to find the optimal decisions than in our design. Furthermore, there might be various types of subjects that differ in their forward-orientation. This can be analyzed by looking at the individual decisions of subjects rather than focusing on the aggregate results, as we did in this article. A related question is whether and how subjects learn to take future information adequately into account. Finally, one might ask how the importance of the future can be made sufficiently salient to overcome the potential present bias in the use of information.
NOTES

1. In principle, we do not need discounting, as we are not particularly interested in its effect on consumption. We nevertheless have a positive rate of time discounting in order to balance the positive interest rates. Furthermore, discounting is standard in the macroeconomic literature.
2. Note that the period utility function exhibits constant relative risk aversion, which is frequently assumed in business cycle models.
3. Most subjects made use of the opportunity to experiment with different consumption paths. The average number of consumption paths over all lives and subjects is 2; that is, subjects normally tried two paths before actually making a choice. The maximum number of paths a subject tried was 10. Throughout the experiment,
the propensity to test different paths declined from 2.5 and 2.8 in lives 1 and 2 to 1.7 in the final two lives.
4. Subjects had the opportunity to use the standard Windows calculator in addition to the automatically calculated cost of the chosen consumption in a period, the remaining budget in that period, and the disposable budget (remaining budget plus interest earnings) in the next period.
5. The regression line and the 45° line are practically identical in this graph. Notice that the optimal consumption change on the horizontal axis can take only nine different values: the real interest rate can increase or decrease by 10 or 20 percentage points or remain unchanged, and when the real interest rate changes, consumption changes in anticipation and on impact.
6. This apparent pattern seems to be a coincidence and not a systematic effect. If we exclude individual subjects from the sample, the pattern becomes more irregular.
7. Note that a permanent change of the prices in periods 4 and 5 is equivalent to a one-time change in inflation in period 3, so that instead of including the new price in period 4, we could also use the rate of inflation in period 3 in the regression. We prefer this specification as it is in line with the way in which the data were presented and recorded.
8. In contrast, the R² of the regressions with the optimal consumption levels is always larger than .98. This proves that the nonlinear functions C_t(R_3, P_4) can be well approximated by linear regressions.
9. This approach is somewhat related to the Learning Direction Theory (Selten & Buchta, 1999; Selten & Stoecker, 1986). In our setting, however, there is no clear overshooting or undershooting and no repetition of identical situations.
REFERENCES

Ainslie, G. (1991). Derivation of "rational" economic behavior from hyperbolic discount curves. American Economic Review, 81(2), 334–340.
Ballinger, T. P., Palumbo, M. G., & Wilcox, N. T. (2003). Precautionary saving and social learning across generations: An experiment. Economic Journal, 113(490), 920–947.
Campbell, J., & Deaton, A. (1989). Why is consumption so smooth? Review of Economic Studies, 56(3), 357–373.
Carbone, E., & Duffy, J. (2014). Lifecycle consumption plans, social learning and external habits: Experimental evidence. Journal of Economic Behavior & Organization, 106, 413–427.
Carbone, E., & Hey, J. D. (2004). The effect of unemployment on consumption: An experimental analysis. Economic Journal, 114(497), 660–683.
Christiandl, F., & Fetchenhauer, D. (2009). How laypeople and experts misperceive the effect of economic growth. Journal of Economic Psychology, 30(3), 381–392.
Chua, Z., & Camerer, C. F. (2007). Experiments on intertemporal consumption with habit formation and social learning. Mimeo, CalTech.
Fehr, E., & Zych, P. K. (2008). Intertemporal choice under habit formation. In C. R. Plott & V. L. Smith (Eds.), Handbook of experimental economics results (Vol. 1, pp. 923–928). Amsterdam, The Netherlands: North-Holland.
Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.
Flavin, M. (1985). Excess sensitivity of consumption to current income: Liquidity constraints or myopia? The Canadian Journal of Economics, 18(1), 117–136.
Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40(2), 351–401.
Gali, J., & Gertler, M. (2007). Macroeconomic modeling for monetary policy evaluation. Journal of Economic Perspectives, 21(4), 25–45.
Hey, J. D., & Dardanoni, V. (1988). Optimal consumption under uncertainty: An experimental investigation. Economic Journal, 98(390), 105–116.
Houser, D., Keane, M., & McCabe, K. (2004). Behavior in a dynamic decision problem: An analysis of experimental evidence using a Bayesian type classification algorithm. Econometrica, 72(3), 781–822.
Laibson, D. I. (1997). Golden eggs and hyperbolic discounting. Quarterly Journal of Economics, 112(2), 443–477.
Loewenstein, G., & Prelec, D. (1992). Anomalies in intertemporal choice: Evidence and an interpretation. Quarterly Journal of Economics, 107(2), 573–597.
Luengo-Prado, M. J., & Sørensen, B. E. (2008). What can explain excess smoothness and sensitivity of state-level consumption? The Review of Economics and Statistics, 90(1), 65–80.
Meissner, T. (2013). Intertemporal consumption and debt aversion: An experimental study. SFB 649 Discussion Papers, Humboldt University Berlin.
Orland, A., & Roos, M. W. M. (2013). The New Keynesian Phillips curve with myopic agents. Journal of Economic Dynamics and Control, 37(11), 2270–2286.
Pasini, G. (2009). Excess sensitivity of consumption to income growth: A model of loss aversion. Industrial and Corporate Change, 18(4), 575–594.
Selten, R., & Buchta, J. (1999). Experimental sealed bid first price auctions with directly observed bid functions. In D. Budescu, I. Erev, & R. Zwick (Eds.), Games and human behavior: Essays in honor of Amnon Rapoport. Mahwah, NJ: Lawrence Erlbaum Associates.
Selten, R., & Stoecker, R. (1986). End behavior in sequences of finite Prisoner's dilemma supergames: A learning theory approach. Journal of Economic Behavior and Organization, 7(1), 47–70.
Stango, V., & Zinman, J. (2009). Exponential growth bias and household finance. Journal of Finance, 64(6), 2807–2849.
Wagenaar, W. A., & Sagaria, S. D. (1975). Misperceptions of exponential growth. Perception and Psychophysics, 18(6), 416–422.
Walsh, C. E. (2010). Using monetary policy to stabilize economic activity. Federal Reserve Bank of Kansas City, Financial Stability and Macroeconomic Policy, 2009 Jackson Hole Symposium, pp. 245–296.
West, K. D. (1988). The insensitivity of consumption to news about income. Journal of Monetary Economics, 21(1), 17–33.
APPENDIX: INSTRUCTIONS

Welcome to the experiment. Please do not talk to any other participant from now on. We kindly ask you to use only those functions of the PC that are necessary for the conduct of the experiment.

The purpose of this experiment is to study decision behavior. You can earn real money in this experiment. Your payment will be determined solely by your own decisions according to the rules on the following pages. The data from the experiment will be anonymized and cannot be related to the identities of the participants. Neither the other participants nor the experimenter will find out which choices you have made and how much you have earned during the experiment.

Task
Your task is to make savings and consumption decisions for a "life." A life is divided into five periods. Your utility and therefore your payoff in Euros at the end of the experiment depends on the consumption of a good.

Endowment
At the beginning of a life, in period 1, you receive an endowment of 1,000 "taler" which you can either spend on the consumption of a good or save. You will not receive any other income in your life, but you can increase your budget through savings.

Consumption
In each period the consumer good can be bought at a specific price P per unit. If you consume a quantity C of the good, you have to spend C × P taler.

Expenditure = C × P

Saving
In each period your unspent endowment is automatically saved and earns interest. The interest rate is R. In the subsequent period the remaining budget from the previous period plus interest payments can again be either used to buy the consumption good or saved.

Remaining budget × (1 + R) = budget next period

Example: Assume the interest rate is 20% and your remaining budget after consumption is 100 taler. Your budget in the next period would be 100 × 1.20 = 120 taler.
Period utility

The utility u that is generated by consumption C in one period is defined by the following equation:

\[ u = \frac{C^{0.5}}{0.5} \]
The more you consume in one period, the higher will be your utility in that specific period. The increase in utility, however, declines with each consumed unit of the good.

Lifetime utility

Your payoff depends on your lifetime utility. This is the total utility you generated in all five periods of a life. Your remaining budget after the consumption in period 5 will be forfeited and will not generate any utility. The lifetime utility is the sum of the period utilities. However, the period utilities are discounted. This means they receive specific weights in this sum, with the weights being smaller for later periods. In order to achieve the same discounted utility, you would have to consume more in each subsequent period than in the previous period. Formally, you discount future utilities in period t by the factor

\[ \frac{1}{(1 + 0.2)^t} = 0.8333^t \]

The discounted utility in each period t is therefore calculated as u_t × 0.8333^t. The period utility of period 1, for example, is multiplied by 0.8333, while the period utility of period 2 is multiplied by 0.8333² = 0.694, etc. The respective weights will be displayed on screen.

Lives

This experiment consists of 23 lives, each in turn consisting of five periods. Thus, the planning horizon in each life is five periods. The lives are completely independent of one another. You receive an endowment of 1,000 taler in each life. It is not possible to transfer taler or goods between lives. Prices and interest rates may change between lives.

Payoff

Your final payoff depends on the lifetime utility of one single life. After you have completed all 23 lives, one of these will be randomly selected for payoff. The lifetime utility generated in this life will be converted into Euros. The following conversion rate will apply:

1 utility point = 0.281 Euro (approx. 28 Cent), that is, 1 Euro = 3.6 utility points
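To see how these rules fit together, here is a minimal sketch (an editor's illustration, not part of the original instructions) that computes the lifetime utility and Euro payoff of an arbitrary consumption plan. The consumption plan, price path, and interest rate below are hypothetical, and a constant R is assumed even though R and P may differ across periods.

```python
# Editor's illustration of the payoff rules above; the consumption plan,
# price path, and interest rate are hypothetical, and R is held constant
# here although it may differ across periods in the experiment.

def lifetime_utility(consumption, prices, R, endowment=1000.0, delta=0.8333):
    """Discounted lifetime utility of a five-period consumption plan."""
    budget, total = endowment, 0.0
    for t, (c, p) in enumerate(zip(consumption, prices), start=1):
        budget -= c * p                             # expenditure = C x P
        assert budget >= 0, "plan exceeds the available budget"
        total += (delta ** t) * (c ** 0.5) / 0.5    # discounted period utility
        budget *= 1.0 + R                           # unspent taler earn interest
    return total                                    # leftover after period 5 is forfeited

u = lifetime_utility([50, 55, 60, 66, 73], prices=[4.0] * 5, R=0.2)
print(round(u, 2), "utility points =", round(u * 0.281, 2), "Euro")
```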
Operation instructions
1. Lives/time

At the upper panel of the screen you can find which life you are currently in as well as the remaining time for the input (in seconds).

2. Period values

The actual values of the interest rates R, the prices P, and the weights for the period utilities are displayed for all five periods of the current life. Note that R and P may be different in each period.

3. Input
At the beginning of a life only the input field for period 1 is displayed. Below the current budget you will find a blue field where you can enter the consumption for period 1. You may enter numbers up to the third decimal place. When you click the "calculate" button, the remaining budget in this period will be displayed and the input field of the next period will open. In the new field the entry "budget" displays the remaining budget from the previous period plus interest.

4. Confirming entries

When you have chosen the consumption for all five periods, you may either confirm your entries or re-calculate. If you press the red button, your entries for this life are confirmed and your lifetime utility will be displayed on the next screen.

5. Reset

To change your entries, you can use the reset button. This will reset all your consumption entries and you can start over with your input in period 1. You can change your entries as often as you wish.

6. Calculator

If you need a calculator you can open the Windows calculator by clicking on the symbol in the bottom left corner of the screen.

End

After filling in a short questionnaire you will be called forward separately to receive your payment. Please bring the receipt and the card indicating your workstation number with you. The payout will be anonymous and private.
EXPERIMENTS ON MONETARY POLICY AND CENTRAL BANKING

Camille Cornand and Frank Heinemann

ABSTRACT

In this article, we survey experiments that are directly related to monetary policy and central banking. We argue that experiments can also be used as a tool for central bankers for bench-testing policy measures or rules. We distinguish experiments that analyze the reasons for non-neutrality of monetary policy, experiments in which subjects play the role of central bankers, experiments that analyze the role of central bank communication and its implications, experiments on the optimal implementation of monetary policy, and experiments relevant for monetary policy responses to financial crises. Finally, we mention open issues and raise new avenues for future research.

Keywords: Monetary policy; central banking; laboratory experiments
INTRODUCTION

Experimental macroeconomics is a growing field, and the increasing number of publications in this area likely has two reasons: first, modern macroeconomics is microfounded, with many models resting on strategic games or
(at least) on individual optimization. Since games and optimization tasks can be framed as laboratory experiments, these foundations of macro models can be tested in the lab. Thereby, macroeconomics is catching up in exploiting a method that has already been used with large success in other fields, like industrial organization, auction design, or the design of incentive schemes. The second reason may be a widespread dissatisfaction with models that rest on assuming rational expectations or, more widely, rational behavior. While the rationality assumption is a necessary tool for predicting the effects of systematic policy or institutional changes, the actual biases in behavior and expectations are too systematic and affect economies too much for subsuming them under unexplained noise.

How to rationalize macroexperiments? The explicit microfoundation used in modern monetary macro models such as the "dynamic stochastic general equilibrium (DSGE)" approach allows the programming of small sample economies, in which subjects take the roles of various economic agents, but also calls for testing the microeconomic modules that DSGE models are composed of. While the assumptions and predictions of macroeconomic models have historically been tested using non-experimental field data, an alternative empirical approach that is attracting increased attention uses controlled laboratory settings with paid human subjects. The main advantage of this approach is the ability of the experimenter to control subjects' incentives, their information, and the channels of communication, so that by changing exogenous factors, causality can be established without the need for sophisticated econometric techniques or for constructing disputable instrumental variables. Moreover, while pure equilibrium theory does not capture strategic uncertainty and cannot predict the consequences of policy measures if the model has multiple equilibria, experiments can be used to develop and test theories of equilibrium selection.

One may ask how macroeconomic phenomena resting on the interaction of millions of agents can be explored using laboratory experiments with just a few subjects (Duffy, 1998). The same question has been raised with respect to macroeconomic theories that assume homogeneous, often representative, agents. Nevertheless, some of these theories provide valuable insights into the basic mechanisms by which monetary or fiscal policies affect aggregate variables like growth rates, employment, or inflation. Experiments can do even better, because even a small number of, say, 10 subjects in a laboratory economy introduces a level of heterogeneity that theories can hardly deal with except by mathematical simulation. The additional insights to be gained by increasing the number of subjects in a well-structured model economy from 10 to 10 million may be of minor
relevance. Furthermore, microfounded macro models assume that agents interact in response to incentives within a framework where they can understand the consequences of their behavior and of their interaction. By reading instructions (and possibly by playing some training periods), subjects can achieve a level of comprehension of the functional relationships between variables of the game that we can never hope for in real economies. By specifying the payoff functions, the experimenter has the highest control over incentives, while the confounding impact of context and unstructured communication can be kept at a minimum. Thus, laboratory economies are the best environment for testing the behavioral predictions of theories with micro- or game-theoretic foundations. Such tests are hardly conceivable in the field. As Duffy (1998, p. 9) points out: "Even in those cases where the aggregate predictions of microfoundation models can be tested using field data, it is not always possible to use field data to verify whether behavior at the individual level adheres to the predictions or assumptions of these models."

Of course, the results from laboratory experiments cannot be readily generalized to real economic situations, in which context, ethics, experience, and formal training of the major actors may yield different responses to changes in exogenous variables than observed in an abstract laboratory economy. The same, however, is true for theory. Laboratory experiments may be able to falsify theories. If they do not work in the lab, why should they work outside? But ultimately, economics is a social science, and field evidence is indispensable.

In this article, we argue that experiments can serve as an important tool for central bankers. The focus on monetary policy and central banking is linked to the idea that experimental macroeconomics enables policymakers to "bench-test" competing policy actions, rules, or institutional designs by laboratory experiments. Experiments can elucidate the different effects, anticipated and unanticipated, of alternative policy regimes and offer a quick and cost-effective way to identify possible consequences of a monetary policy initiative.1 Experiments may help to advise policymakers by exploring the effects of alternative policies in the lab (Ricciuti, 2008). There is a need for more interaction between experimental macroeconomists and central bankers, both to help experimentalists adjust their research and account for the questions and concerns of practitioners, and to help central bankers interpret the results of experiments and judge their external validity. As shown by the example of Alan Blinder, one can be both a central banker and an experimentalist.

The topic of central banking experiments lies at the scientific frontier of experimental economics and central banking alike. The results from this
approach can be informative with respect to questions of equilibrium selection or the efficacy of various government policies. Laboratory experiments addressing central banking issues are useful in several respects:

- Finding out which of many equilibria is selected in well-defined environments and testing theories of equilibrium selection provides guidelines for the likely outcomes in macroeconomic environments that are described by models with multiple equilibria.
- Testing concepts of equilibrium determinacy and stability with respect to their predictive power may help settle controversial discussions about the "right" stability criteria.
- Trying out policy rules, decision rules, and communication protocols for their effectiveness in stabilizing markets is an almost costless exercise in the lab, while any such experiments at the macro level would endanger the welfare of societies or are simply impossible to conduct in a pure form. Understanding how people's strategic behavior interacts with the institutional environment prior to policy implementation can greatly reduce the cost of achieving policy goals. By taking into account the various factors and motivations that may influence human behavior, experimental economics allows testing alternative policy options. For example, laboratory experiments may help selecting instruments and institutional arrangements that are best-suited for implementing policy goals.2
- Solving endogeneity problems. In the real economy, policy parameters respond to economic activity. As expressed by Ricciuti (2008, p. 218), "the endogeneity of policy in real-world economies (…) makes it difficult to analyze data and formulate correct inferences on changes that have occurred." Laboratory experiments allow controlled tests of the effects of changing individual parameters exogenously.

This article surveys laboratory experiments addressing central banking issues following the scheme represented in Fig. 1.3 While Duffy (1998, 2008a, 2008b) and Ricciuti (2008) focus their surveys on a much larger category of papers dealing with experiments in macroeconomics, we concentrate on issues relevant for central banking and present some recent literature. Most experiments that are presented below focus on specific building blocks or component assumptions of standard macro models.

Fig. 1. Central Banking Issues and Experimental Literature.

In the section "Channels for Money Non-neutrality," we look at some causes of non-neutrality of monetary policy. Here, we focus on money illusion and monetary policy experiments applied to environments of sticky
prices or sticky information. We do not consider experiments on the formation of expectations, although they would belong here, because they are already dealt with in another article of this book (Assenza, Bao, Hommes, & Massaro, 2014). Looking at how subjects behave in the lab, and especially studying the effects of communication between the central bank and the public, may help predicting the likely effects of policy measures and designing optimal decision processes and institutions.

In the section "Subjects as Experimental Central Bankers," we present results of experiments that study how subjects behave in the lab when they play the role of central bankers. These experiments demonstrate that the inflation bias arising from time inconsistency matters in repeated games, even if central bankers are concerned about affecting future expectations. They also provide an argument in favor of following fixed rules, although experimental subjects are quite capable of pursuing optimal responses to shocks in order to stabilize an economy.

The section "Transparency and Communication Issues" is devoted to central bank communication and the merits of transparency. There is a vivid debate about the pros and cons of transparency, and while central banks have moved toward higher transparency, theory papers provide mixed recommendations. Experiments are particularly well-suited for testing the effects of information and communication channels, because the experimenter can control information and distinguish communication channels in different treatments. Thereby, experiments yield very clear results about the effects of information, while field evidence is always plagued by the simultaneity of different communication channels and by the problem of identifying which information actually affected real decisions. The interplay between communication and stabilization policy is also studied in the lab.

The section "Policy Implementation" deals with the implementation of monetary policy. We distinguish policy strategies that may be described by different rules and the operational policy implementation via repo auctions. Auction design is a classical topic of experimental economics, and using experiments for bench-testing auctions has become a standard procedure. The section "Monetary Policy During Liquidity Crises" focuses on experiments dealing with financial crises and central banks' interventions. Finally, we mention open issues and raise new avenues for future research. It seems particularly important to emphasize the behavioral aspects in the transmission process of monetary policy and in the formation of inflation expectations. Specifically, there is a need for thinking about the methodology in designing rules for monetary policy, information disclosure, and financial market regulation that account for private agents' behavior under strategic uncertainty. The global financial
crisis has also recast the debate over the scope and instruments of central banking. Experiments may represent a good tool for testing them.
CHANNELS FOR MONEY NON-NEUTRALITY

As Adam (2007, p. 603) puts it, "[r]ational expectations models with nominal rigidities, workhorses of current macroeconomics, (…) face difficulties in matching the persistence inherent in output and inflation data." A possible reason could be that real agents do not behave according to the strong assumptions underlying rational expectations. Laboratory experiments have explored some aspects of bounded rationality that seem relevant for explaining the real and persistent effects of monetary policy. The most promising explanations seem to be money illusion, limited depth of reasoning, nonmonetary costs of information processing, the use of heuristics, and adaptive expectations. Some of these explanations are related and interact in causing real effects of monetary shocks.

Looking at how subjects behave in the lab, and especially studying the learning processes, may help to better model the internal frictions that generate a proper propagation mechanism. Many experiments indeed aim at comparing rational expectations to adaptive learning, especially in equilibrium selection. There is a recent and relatively large focus on the formation of expectations in the lab. "Resorting to laboratory experiments is justified on the grounds that expectations are generally not easily observed. This makes it difficult to identify deviations from rational expectations" (Adam, 2007, p. 603). In the lab, subjects' expectations can be directly observed. The dynamics of learning models depend on the functional relationship between stated expectations and realizations of the variables about which expectations are formed.4

Amongst the monetary policy channels that have been tested in the lab, money illusion (Section "Money Illusion") stands out, as it seems to be driven by anchoring on numerical values and is clearly opposed to rationality. However, sticky prices can also be explained by the expectation that other agents are affected by money illusion. We show some experiments in which sticky prices or sticky information (Section "Sticky Prices and Sticky Information/Monopolistic Competition") is explicitly introduced and compared to the behavior in otherwise equal economies without such frictions. The general finding is that even in a frictionless economy, subjects behave as if there were some of these frictions. The often observed delayed response of prices to shocks can in part be explained by money illusion.
Subjective beliefs that other agents are not responding to shocks provide an additional explanation of these delays. In games with strategic complementarities, which are typical in monetary macro, these two effects reinforce each other.
Money Illusion

Fehr and Tyran (2001, 2005, 2008) study the impact of price-level changes on individual price-setting in environments with strategic complements and strategic substitutes. In their experiment, subjects play firms who are setting nominal prices in an oligopolistic market. A change of payoff tables represents a large anticipated price-level shock to which subjects should immediately respond by jumping toward the new equilibrium. Fehr and Tyran investigate whether and how fast subjects converge to the new equilibrium in different strategic environments. Prices respond gradually. When prices are strategic substitutes, they converge faster than when they are strategic complements. Since supply and demand functions depend only on relative and not on absolute prices, the result of sluggish price adjustments may be interpreted as evidence for the non-neutrality of money supply. Due to the different speeds of adjustment to equilibrium, monetary shocks have a stronger impact when prices are strategic complements than when they are strategic substitutes.

Fehr and Tyran (2001) consider an n-player pricing game with a unique equilibrium, similar to price-setting under monopolistic competition. Subjects get payoff tables stating how their payoff depends on their own price and the average price of other firms. After T periods, payoff tables are replaced by new ones that differ only by a scaling factor, representing a fully anticipated negative shock on money supply. The game continues for another T periods with these new payoff tables. Insufficient price adjustments may be explained by two factors: money illusion and the expectation that other subjects are adjusting their prices insufficiently.

In order to disentangle these effects, Fehr and Tyran (2001) compare four treatments: in one, payoff tables are given in nominal terms and subjects play (as described above) against other human subjects (Treatment NH). Payoffs are later transformed into real currency with different scaling factors before and after the shock. Treatment NC has payoffs in nominal terms, but the other firms are played by a computer. Here, the only human subject is informed that the computer will always choose a price that is a best response to her or his own stated price. Treatment NC eliminates
strategic uncertainty as a potential explanation for sluggish price adjustments. In two further treatments, RH and RC, payoff tables are given in real terms (subjects again play against humans or a computer, respectively), so that money illusion can be ruled out as a source for sluggish price adjustments.5

Results show that monetary shocks have real effects, as subjects in Treatment NH need several periods to come anywhere close to the new equilibrium. However, the main cause is neither individual money illusion nor coordination failure, but the combination of both. Fig. 2 presents the average price before and after the shock in the four treatments.

Fig. 2. Evolution of Average Prices (nominal/real payoff tables with human/computerized opponents). Source: Fehr and Tyran (2001, p. 1251).

In Treatment RC, there was an instantaneous adjustment to the new equilibrium. In Treatments NC and RH, adjustments toward the new equilibrium took a few periods, but subjects came rather close. In Treatment NH, however, where the coordination problem was combined with nominal payoff tables, there is a substantial delay in price adjustments. An explanation is to be found in subjects' expectations. In Treatments NH and RH, Fehr and Tyran (2001) asked subjects about their expectations of the average price set by others. The difference in stated expectations between treatments was
comparable to the difference between the prices that subjects actually chose in these treatments. Fehr and Tyran (2001) attribute these different price expectations to a "rule of thumb," by which subjects mistake nominal for real payoffs and strive for collusion by setting prices above the equilibrium. This would be another impact of money illusion. For testing this, they added two treatments with positive price-level shocks, where subjects with money illusion who want to collude would speed up the adjustment toward the new equilibrium. Indeed, these sessions showed faster price adjustments than the comparable sessions with negative shocks. However, the treatments differed in several respects from those with negative shocks and are, thus, not entirely comparable. Note that the deviation from equilibrium in Treatment NH is larger than the sum of deviations in Treatments NC and RH. Deviations resulting from coordination failure and money illusion may reinforce each other in environments with strategic complementarities.

Fehr and Tyran (2008) use a similar experiment, where treatments differ by prices being either strategic complements or substitutes. In both treatments, the equilibrium was efficient, ruling out that a desire for collusion can explain systematic deviations from equilibrium. In the substitutes treatment, average prices jump toward the new equilibrium almost instantaneously after the shock. There is some mis-coordination, as some subjects choose prices that are too high or too low, and thus, there is an efficiency loss in the first two periods after the shock. The efficiency loss is, however, much larger in the complements treatment, where prices adjust slowly toward the new equilibrium as in the experiment by Fehr and Tyran (2001). Four control treatments serve to identify the causes of insufficient price adjustments. Fehr and Tyran (2008) find that the results can be explained by money illusion and anchoring, or by the expectation that other subjects suffer from these deviations from rationality. While subjects with money illusion mistake nominal for real payoffs, anchoring means that subjects anchor their expectations at the numbers they saw before. After playing an equilibrium price for several periods, subjects deviate from a rational response to a nominal shock toward the best reply to previous equilibrium prices. With strategic complementarities, deviations from equilibrium due to anchoring, money illusion, and the expectation that other subjects are anchoring or suffer from money illusion reinforce each other, while in environments with strategic substitutes, they may actually have opposing effects: if I believe that my opponents adjust prices insufficiently, I should adjust my price more than just toward the new equilibrium. It has been confirmed in various other experiments that subjects converge to equilibrium much faster in games with strategic
substitutes than in games with strategic complementarities.6 Limited levels of reasoning are able to explain these patterns.7

Petersen and Winn (2014) argue that the results of Fehr and Tyran (2001) provide less evidence for money illusion than for a higher cognitive load associated with adjusting prices in the NH treatment. Fehr and Tyran (2014) reply to this by explaining that money illusion can only unfold if adjustment to a new equilibrium is a nontrivial task. Money illusion is not opposed to limitations in cognitive capacity but rather depends on them. As both Petersen and Winn (2014) and Fehr and Tyran (2014) point out, the cognitive load in finding the Nash equilibrium matters for subjects who take nominal payoffs as a proxy for real payoffs. We conclude from this that money illusion is inevitably linked to the information role of nominal prices. However, the dispute between these authors points at some open issues: what exactly is money illusion, and can it be separated from other factors impeding price adjustments after nominal shocks? One may think of experiments comparing responses to nominal and real shocks for identifying money illusion, coordination issues, and anchoring.
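The reinforcement logic can be made concrete with a small sketch of our own (not taken from any of the cited papers): suppose that after a nominal shock a fixed share of players stays anchored at last period's average price, while the rest best-respond to that average with a linear best-response slope beta (beta > 0 for complements, beta < 0 for substitutes; all numbers below are hypothetical).

```python
# A stylized sketch (ours, not Fehr and Tyran's design): an 'anchored' share of
# players repeats last period's average price, while the rest best-respond to it
# with slope beta; we track the average deviation from the new equilibrium.
# All parameter values are hypothetical.

def average_deviation_path(beta, d0=12.0, anchored=0.5, rounds=8):
    d, path = d0, []
    for _ in range(rounds):
        d = anchored * d + (1.0 - anchored) * beta * d  # anchoring + best responses
        path.append(round(d, 2))
    return path

print("complements (beta = +0.8):", average_deviation_path(0.8))   # deviations reinforce
print("substitutes (beta = -0.8):", average_deviation_path(-0.8))  # responses offset anchoring
```

With these numbers the per-round contraction factor of the average deviation is 0.9 under complements but only 0.1 under substitutes: under substitutes, best responses overshoot past the new equilibrium and offset the anchored prices, just as the argument above suggests.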
Sticky Prices and Sticky Information/Monopolistic Competition

Slow price adjustments to shocks are at the foundation of New Keynesian macroeconomics, such as DSGE models. For justifying the limited speed of adjustment, DSGE models rely on either sticky prices (Calvo, 1983) or sticky information (Mankiw & Reis, 2002). In sticky-price models, firms cannot adjust their prices in every period. In sticky-information models, firms cannot update their information in every period. Both restrictions lead to delayed responses of the price level to monetary shocks and, thus, implement the non-neutrality of money. Both restrictions can be partially justified by more fundamental assumptions: menu costs may prevent firms from adjusting prices every period, and costs of information processing may justify why firms update their information only occasionally.8 Experiments have been conducted regarding these fundamental assumptions as well as regarding the actual speed of price adjustments in environments with exogenously given restrictions.

Wilson (1998) conducts an experiment in which subjects play monopolists who may adjust prices to some shock in their demand function. He finds that menu costs slow down the adjustment process.

Orland and Roos (2013) introduce information costs for uncovering future desired prices in an environment with sticky prices à la Calvo.
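For orientation, the pricing logic at stake can be written in its standard textbook form (this is the generic Calvo reset-price condition, not necessarily the exact parameterization used by Orland and Roos): a firm that may adjust in period t optimally sets

\[ p_t^{\ast} = (1 - \beta\theta) \sum_{k=0}^{\infty} (\beta\theta)^{k} \, E_t \, \tilde{p}_{t+k}, \]

where \(\theta\) is the per-period probability of not being allowed to adjust, \(\beta\) the discount factor, and \(\tilde{p}_{t+k}\) the flexible-price optimum (the "desired price"). A fully myopic price setter instead chooses \(p_t^{\ast} = \tilde{p}_t\).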
They find that about one third of all subjects are myopic in the sense that they set prices that are optimal in the current period only. These subjects neglect that the currently set price should be closer to a weighted average of the current and future desired prices. With information costs, myopic subjects acquire less information about future desired prices and rely even more on the current and past desired prices. The presence of myopic agents can explain why aggregate prices are stickier than predicted in a Calvo model with rational agents.

Maćkowiak and Wiederholt (2009) explain sticky information by rational inattention, and Cheremukhin, Popova, and Tutino (2011) test the theory of rational inattention in the lab. They estimate and compare different models of rational choice and reject models with low or homogeneous costs of information processing. Their main result is that subjects seem to be extremely heterogeneous in their costs of information processing. Caplin and Dean (2014) also propose a test of information acquisition theory. The experimental approach is motivated by unobservable information acquisition costs in the field. They show that participants in their experiment adjust their information collection behavior to incentives and use more time and effort for processing information if the rewards are higher. In a companion paper, Caplin and Dean (2013) show that subjects respond less to changes in incentives than the Shannon entropy theory predicts. They propose a simplified Shannon model that accounts for this observation.

Davis and Korenok (2011) present a laboratory experiment aimed at evaluating the relative capacity of alternative theories to explain the delayed adjustment of prices following a nominal shock. In their experiment, subjects play price-setting firms under monopolistic competition. Markets consist of six sellers and 80 trading periods during which there is a nominal shock doubling the money supply. Subjects are informed upfront that this shock will occur, but they are not informed about the precise timing of the shock. The experiment distinguishes three treatments: in a baseline treatment (BASE), firms can adjust their prices in each period and are informed about the market result after each period. From this, they can immediately identify the period in which the shock occurred. In a sticky-price treatment (SP), only two out of six subjects in a market can adjust their prices each period, and subjects take turns in adjusting. In a sticky-information treatment (SI), only two firms see the results from the immediately preceding trading period, again taking turns, so that each firm receives an information update every three periods.

With flexible prices and information, there should be an immediate jump to the new equilibrium following a nominal shock, while Treatments
SP and SI should show a delayed response according to theoretical predictions. Davis and Korenok (2011), however, observed a delay in all three treatments. While subjects adjust very well toward the equilibrium before the shock occurs, there is a considerable deviation between actual and equilibrium prices in the periods following the shock. As in the experiment by Fehr and Tyran (2001), subjects stop short of doubling the price after the money supply has doubled. In line with theory, observed deviations in the SP and SI treatments exceed those in the BASE treatment in the first one or two periods after the shock. In the SI treatment, prices adjust more slowly than in the two other treatments. The main result, however, is that observed prices deviate from the respective theoretical predictions in all three treatments for at least nine periods, with no significant differences between treatments for most of these periods. One way to look at this result is that although firms may get timely information and adjust prices whenever they want, they may behave as if there were frictions like sticky prices or sticky information. Note, however, that the environment of Davis and Korenok (2011) is one of strategic complementarities, in which adjustments to equilibrium may be held up by limited levels of reasoning.

Davis and Korenok (2011) consider two alternative explanations for the delayed price adjustment in the BASE treatment: (1) some sellers might have missed the shock, or believed that others missed the shock, because the shock was announced privately rather than publicly; (2) some sellers might have imitated their forecasts instead of best responding to them. That is, they stated a price close to their own forecast instead of the price that would have maximized their own payoffs given this forecast. To discriminate between these hypotheses, the authors conduct two additional treatments, each of which deviates from the BASE treatment in one aspect: (1) a treatment where the shock is announced publicly and (2) a treatment where sellers submit forecasts instead of prices.

The results from these additional sessions indicate that both explanations play a role: a publicly announced shock leads to an immediate jump in stated prices or expectations in both new treatments instead of a slow convergence process as in the BASE treatment. If subjects state their forecasts instead of prices, the economy comes closer to the monopolistically competitive equilibrium before and after the shock. Hence, the privately announced shocks are responsible for the slow convergence process immediately after the shock (which could also be explained by limited levels of reasoning), while the inability to best respond to one's own expectations seems responsible for the long-run deviation from the equilibrium.
While Fehr and Tyran (2001) and Davis and Korenok (2011) test responses of price-setting subjects to price-level shocks, Duersch and Eife (2013) test the stability of collusion in environments with permanently increasing or decreasing price levels. The paper is interesting as it provides an argument for the debate on the optimal rate of inflation. Their experiment implements a symmetric duopoly with differentiated goods in which subjects play the role of firms who repeatedly set prices. Period-specific payoff tables implement a constant rate of inflation or deflation (depending on the treatment) of 5%. There are also two baseline treatments with a constant price level, in which one payoff table is valid for all periods.9 Duersch and Eife analyze how well subjects coordinate their prices, whether they cooperate by coordinating on prices above the one-period Nash equilibrium, and how these interactions affect consumer surplus. They show that cooperation is higher in the baseline than in the inflationary and deflationary treatments. This indicates that it is easier to sustain cooperation in an environment with constant prices than under inflation or deflation, where a given degree of cooperation requires permanent adjustments of nominal prices. Real prices are, however, slightly increasing over time in the deflationary treatments. This effect may result from nominal anchoring or money illusion, as found by Fehr and Tyran (2001). The lowest average real prices are observed in the inflationary treatments. Here, money illusion and the additional challenge of coordinating prices in an inflationary environment work hand in hand, reduce the firms' profits from collusion, and lead to a higher welfare level than deflation or a constant price level.

Lambsdorff, Schubert, and Giamattei (2013) conduct an experiment on a simple price-setting game, which is reduced to its form as a beauty-contest game. The novelty in their experiment is that one parameter steering the equilibrium price is a random walk. Thus, there are permanent unforeseen shocks in the economy that increase the cognitive load for finding the ever-changing equilibrium price. Subjects play the game in groups of six players, and the payoff function for each player is

\[ \pi_{it} = 10 - \frac{1}{3}\left( p_{it} - \frac{12}{15}\,\bar{p}_{-it} - 4 - \frac{1}{10}\,BI_t \right)^{2}, \]

where p_it is the player's own price, p̄_{-it} is the average price of the other group members, and BI_t is the realization of a random variable called "business indicator" in period t. The resulting equilibrium price in period t is p_t* = 20 + BI_t/2. Note that for realizations of BI_t close to 40, the equilibrium price is close to the business indicator.
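The stated equilibrium follows from the symmetric first-order condition of this payoff (a one-line check, based on the reconstruction of the payoff function above): setting \(p_{it} = \bar{p}_{-it} = p_t^{\ast}\) gives

\[ p_t^{\ast} = \frac{12}{15}\, p_t^{\ast} + 4 + \frac{1}{10} BI_t \;\Longrightarrow\; \frac{3}{15}\, p_t^{\ast} = 4 + \frac{1}{10} BI_t \;\Longrightarrow\; p_t^{\ast} = 20 + \frac{1}{2} BI_t, \]

so that \(p_t^{\ast} = BI_t\) exactly at \(BI_t = 40\).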
The actual realizations of BI_t in the experiment ranged from 20 to 90, and subjects coordinated on stating prices equal to BI_t. Thus, the business indicator served as a focal point or heuristic for choosing prices. By deviating toward 40, subjects could have gained individually. However, the high degree of coordination led to average payoffs that were higher than in a control treatment, where the business indicator was presented in a different way (the number shown was BI_t/5, so that it lost its power as a salient coordination device). In the control treatment, individual prices were on average closer to the equilibrium, but had a high variance.

The experiment shows that following a simple heuristic is an attractive strategy if the potential gains from finding a more sophisticated strategy are small. This finding seems related to the experiments on money illusion, where subjects take the nominal payoffs as proxies for real payoffs and save the effort of calculating. It is also related to experiments generating sunspot equilibria by providing focal points, discussed in Section "Sunspots" below.

Summing up, price-setting experiments show that there are different reasons why monetary policy has real effects, even in environments where prices are fully flexible and information is provided to all agents. Observed delays in price adjustment after a shock can be partly explained by money illusion, anchoring, or monetary payoffs being used as focal points for simple heuristics in a complicated environment. Limited levels of reasoning or, related to this, a lack of trust that other agents notice the shock when it occurs, may explain the pronounced delay of adjustment to equilibrium in games with strategic complementarities. If information processing is costly, it may even be rational to ignore some information or employ limited levels of reasoning. If an economy exhibits strategic complementarities, as is often the case in macroeconomic environments, all of these channels reinforce each other and amplify the real effects of monetary policy. The study of expectation formation in the lab also shows that subjects generally adapt their expectations slowly, which may explain some persistence of real effects. It is helpful to account for these different forms of money non-neutrality to derive effective monetary policy rules.
SUBJECTS AS EXPERIMENTAL CENTRAL BANKERS

Some recent experiments test the abilities of subjects to perform the tasks that standard models expect central bankers to accomplish. In particular, central banks should stabilize inflation and, possibly, minimize a
weighted average of fluctuations in prices and employment by using just one instrument, and they should gain reputation so as to avoid an inflation bias. They must also come to an agreement in committee meetings. Each of these aspects of central bank decisions can be tested in isolation using students as subjects in the role of central bankers. In practice, these tasks are complicated and interconnected, and different kinds of quantitative and qualitative information need to be considered. We would not expect undergraduate students to achieve the goals of monetary policy to the same degree as professional central bankers. While it is obviously convenient to use students as subjects for laboratory experiments, convenience alone is no justification for following this method of research. However, we see four justifications for resorting to laboratory experiments with subjects playing central bankers.

First, by testing different aspects of decision making in isolation, we may identify which aspects of decision making are particularly challenging to humans and which kinds of biases, heuristics, and fallacies may explain behavior in the lab. These results may carry over to more complex decision situations and well-trained executives to the extent that they describe general aspects of human decision making. The hypothesis that results obtained with undergraduates carry over to professionals has been tested in various experiments related to financial markets, industrial organization, or corporate governance, with mixed evidence (Croson, 2010).10

Second, we learn most by comparing different treatments within an experiment. The qualitative treatment effects are more likely to spill over to real economic situations with trained decision makers than the quantitative effects or behavioral biases within any treatment.

Third, the models that are used in most of the macroeconomic literature are far less complex than the real economy. They are stripped to essentials, and arguably the relation between the expert knowledge of real central bankers and the complexity of real economies may be comparable to the relation between the comprehension of models by students and the complexity of these model economies.

Fourth, some parts of the literature on monetary policy assume that central bankers respond to incentives. Central bank contracts have been designed for the purpose of altering the objective functions of central bankers in ways that give rise to more efficient equilibria (Walsh, 1995). In laboratory experiments, students are incentivized because we want them to respond to incentives. Thus, the lab is a perfect environment for testing whether an engineered system of incentives has the desired effects on behavior.

Here, we are particularly interested in how subjects deal with time inconsistency and the inflation bias (Section "Central Bank Credibility and
Inflation Bias”) and whether they are able to stabilize an economy with saddle-path stability (Section “Stabilization Policy”). Stabilizing an economy is a challenging exercise even in the lab. By analyzing the behavior of human central bankers in the lab we can draw some conclusions about the necessity of sticking to fixed Taylor-type rules, the tension between flexibility and credibility that may also affect the inflation bias, and on the size of the coefficient by which central banks should respond to past inflation. Groups are usually better in solving complicated decision problems than individuals. On the other hand intra-group communication may also induce some costs and reduce success rates of decisions, especially when groups are heterogeneous or several individuals want to lead the groups. Some recent experiments analyze the optimal size and composition of central bank committees (Section “Decision-Making Process in Monetary Policy Committees”).
Central Bank Credibility and Inflation Bias

Time inconsistency of monetary policy has worried monetary economists at least since Kydland and Prescott (1977) and Barro and Gordon (1983a) developed models showing that it may explain an inflation bias if private agents form rational expectations and central banks have incentives to respond asymmetrically to positive and negative deviations of unemployment from the NAIRU. The starting point is the existence of a short-run Phillips curve that allows raising employment above the natural rate by unexpected inflation. Ex ante, the central bank wants to achieve a low inflation target, but it also has asymmetric objectives on employment: either the central bank's objective is an unemployment rate below the natural level, or deviations toward lower unemployment are viewed as being less costly to society than higher unemployment rates. Ex post, once expectations are fixed, the central bank may exploit the Phillips curve trade-off and realize welfare gains, provided that expectations are close to the efficient level of inflation. Rational agents forecast this response and expect a higher rate of inflation ex ante, such that any further increase of inflation inflicts welfare losses that exceed the welfare gains associated with reduced unemployment. Thus, the equilibrium level of inflation is inefficiently high.

Theoretically, there are different mechanisms for containing this inflation bias. The most important is laid out in the work of Barro and Gordon (1983b): in a repeated game, there is a continuum of equilibria ranging from the inefficient one-period Nash equilibrium explained above to more
efficient solutions in which the central bank accounts for its effects on future expectations and builds up a reputation for low inflation rates. The precise limits of the range of equilibria, and whether the efficient Ramsey solution is actually part of this range, depend on parameters such as the central bank's discount factor and on the observability of central bank actions. Experiments allow testing which of the many equilibria are actually played, whether and how behavior responds to parameters, and whether observability of actions or communication affect efficiency. Experiments can also test the trade-off between credibility and flexibility that has been postulated by theory. These questions can be tackled by experiments in which the central bank is actually played by one or several subjects in the experiment, or by comparing expectation formation in environments with different rules.

Van Huyck, Battalio, and Walters (1995) test time inconsistency in a repeated two-player peasant-dictator game. In each round, the peasant first decides how many beans to plant, and the dictator then decides at his discretion about a tax on production. This is compared with an otherwise equal treatment in which the dictator pre-commits to a tax rate before the peasant invests. While the commitment treatment has a unique equilibrium at the efficient investment level, the discretionary treatment has multiple equilibria ranging from the one-period Nash equilibrium at zero investment to the efficient Ramsey equilibrium of the commitment treatment. Although investment levels are in general positive, there are significant and sizable differences between the treatments, indicating that reputation cannot substitute for commitment.

Arifovic and Sargent (2003) and Duffy and Heinemann (2014) test whether subjects playing central bankers can achieve credibility in a Barro-Gordon game. In both experiments, subjects are split up into groups of four to six, with one subject in each group playing the central banker and the others forecasting inflation. While forecasters face a quadratic loss function over deviations between their forecast and realized inflation, the central banker is paid according to a loss function with two quadratic terms depending on deviations of inflation and unemployment from target levels. The central banker's payoff function can be thought of as the economy's welfare function, as in the original model by Barro and Gordon (1983a). The central banker faces a Phillips curve trade-off between unemployment and inflation and can use one instrument (money supply) to choose between the different possible combinations of inflation and unemployment. However, the central banker cannot fully control the inflation rate, leaving the precise outcome of his actions to a random device. Both experiments implement infinitely repeated games by terminating a sequence (supergame) with some
constant probability, set at 2% in the work of Arifovic and Sargent (2003) and at 1/6 in Duffy and Heinemann (2014).11

Arifovic and Sargent (2003) reveal neither the relationship between inflation and unemployment nor the incentives of central bankers to the forecasters. Forecasters are just told that the policymaker is setting a target rate of inflation and how the actual rate of inflation depends on this target rate and a noise term. In particular, forecasters are not informed about the Phillips curve relationship or the central banker's payoff function, although knowing these functions is crucial for a rational expectations equilibrium. Arifovic and Sargent did not compare different treatments, because their main focus was to explore whether subjects could avoid the inflation bias associated with a one-period Nash equilibrium and whether expectations could be described by a model of adaptive expectations. They found that a majority of their 12 groups arrived more often at inflation rates closer to the efficient rate (zero) than to the one-period Nash equilibrium (5%), but nearly all groups showed extended periods with inefficiently high inflation. Expectation formation could be described by a model of adaptive expectations. Central bankers who tried to get expectations down were reducing target rates too slowly compared to a best response to adaptive expectations. Since central bankers were not changed between different sequences, one might argue that the actual continuation probability was even larger than 98% in their experiment, which should have favored low inflation and may explain spillovers between sequences and the absence of end-game effects when sequences approached the maximum duration of 100 periods.

Duffy and Heinemann (2014), instead, provide forecasters and central bankers with full information about the model, including the Phillips curve relationship and the incentives of both types of players. Formally, the game can be described by four equations. The Phillips curve is given by u = w + π^e − π, where u represents unemployment, w a supply shock with a uniform distribution on [120, 160], π inflation, and π^e the average of subjects' stated inflation forecasts. Inflation depends on the central banker's choice of money supply m and a transmission shock, π = m + v. The transmission shock v has a uniform distribution on [0, 40]. Central bankers are paid according to a welfare function 6,000 − 2(u − 120)² − (π − 40)², and forecasters receive 4,000 − (π − π_i^e)², where π_i^e is forecaster i's stated inflation forecast. This simple Barro-Gordon model has a one-period Nash equilibrium with π^e = 80, while the efficient average rate of inflation is 40.
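A minimal numerical sketch (ours, not the authors' code) shows how the stated payoffs deliver this Nash prediction: because the welfare function is quadratic, the central banker's problem can be solved at the mean shocks (certainty equivalence), and iterating rational forecasts on the best response converges to π^e = 80.

```python
import numpy as np

# Editor's sketch (not the authors' code): recover the one-period Nash prediction
# pi_e = 80 from the payoffs stated above. The quadratic welfare function makes it
# exact to work with the mean shocks: E[w] = 140 for w ~ U[120,160] and E[v] = 20
# for v ~ U[0,40]. The grid bounds and step size are arbitrary choices.

E_W, E_V = 140.0, 20.0

def best_m(pi_e, w=E_W, grid=np.arange(0.0, 200.0, 0.01)):
    """Money supply maximizing expected welfare 6000 - 2(u-120)^2 - (pi-40)^2."""
    u = w + pi_e - grid - E_V          # expected unemployment for each candidate m
    pi = grid + E_V                    # expected inflation for each candidate m
    welfare = 6000.0 - 2.0 * (u - 120.0) ** 2 - (pi - 40.0) ** 2
    return grid[np.argmax(welfare)]

pi_e = 40.0                            # start forecasts at the efficient rate...
for _ in range(200):
    pi_e = best_m(pi_e) + E_V          # ...and iterate rational forecasting
print(round(pi_e, 1))                  # -> 80.0, the one-period Nash inflation bias
```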
186
CAMILLE CORNAND AND FRANK HEINEMANN
Duffy and Heinemann do not tell subjects that they are playing a monetary policy game, but use a neutral framing instead.12 Subjects are told that the A-player (central banker) has the task to move water from one container (interpreted as unemployment) to another (representing inflation). The ability to move water corresponds to the Phillips curve trade-off.

Duffy and Heinemann compare a treatment implementing a commitment regime with discretionary treatments with and without cheap talk, policy transparency, and economic transparency. In total, they had six different treatments, each with eight different groups of subjects. The focus of their study was to test treatment effects on the levels of inflation and unemployment, on the ability of central banks to stabilize employment, and subsequently on the level of welfare, as measured by the central banker's payoff function. Building on the work of Van Huyck et al. (1995), they ask whether cheap talk or transparency can make up for the lack of trust associated with a repeated discretionary game.

In the commitment treatment, the central banker moved first and forecasters knew m when submitting their forecasts. Here, the inflation bias was not significantly different from zero. In the other, discretionary, treatments, forecasts were stated before the central banker decided on m. In these treatments, there was a significant inflation bias that was actually rather close to the prediction of the one-period Nash equilibrium. Thus, neither cheap talk nor transparency worked as substitutes for commitment. Expectations were systematically lower than actual inflation in all discretionary treatments, which resulted in unemployment rates below the NAIRU. This expectation bias was particularly strong in the treatment with cheap talk but without policy transparency. Here, the central banker sent a nonbinding announcement about the amount of water that he or she intended to move before expectations were formed. The announcements affected expectations, although central bankers regularly cheated on their announcements. In the early rounds of this treatment, the low average unemployment led to an average level of welfare that was comparable to the welfare level under commitment. However, welfare under cheap talk decreased over time, with forecasters learning to mistrust announcements.

A remarkable result of this experiment concerns the ability of central bankers to stabilize employment. The one-period Nash equilibrium of the discretionary games is associated with a policy rule m = 60 + 2(w − 140)/3, resulting in unemployment u = 140 + (w − 140)/3 − (v − 20). In the unique equilibrium under commitment, unemployment is u = 140 + (w − 140) − (v − 20). Thus, discretionary policy enables the central bank to partially stabilize employment against the impact of supply shocks w. This is known as the trade-off between flexibility and credibility. In the experiment, however, the standard deviation of unemployment was higher in the baseline discretionary treatment than under commitment, where it was close to the theory prediction. Thus, there was no trade-off: commitment reduced
the level of the inflation bias and employment fluctuations compared to discretion. Duffy and Heinemann explain this result by the attempts of central bankers to reduce the inflation bias with different policies. These policy experiments contributed to the overall noise level in the economy, because they were not expected by forecasters.
Stabilization Policy

Experiments with subjects playing central bankers are also informative about how well humans are able to stabilize variables in an environment with saddle-point stability. In theory, an optimal stabilization of inflation requires that interest rates respond to expected inflation with a coefficient larger than 1. This so-called "Taylor principle" (Taylor, 1993)13 is the focus of an experiment by Engle-Warnick and Turdaliev (2010). They find that most experimental central bankers are able to stabilize inflation. The strategies employed obey the Taylor principle if the responses of interest rates over several periods are added up. However, subjects smooth the interest rate and do not respond fully in the first period in which they see inflation expectations deviating from the target. Such behavior is theoretically optimal for a policymaker who faces uncertainty about the impact of his instruments or the size of shocks. As subjects in the experiment were not informed about the precise relationships between variables in their economy, they actually followed strategies that can be regarded as being close to optimal.

The experiment contains a control problem similar to the task of a policymaker in a New Keynesian macroeconomic environment. The economy is described by a DSGE model in two variants, one where inflation depends on current output and past inflation (Model 1) and one in which inflation today is also directly affected by inflation two periods ahead (Model 2). Subjects were college students and were given the task to stabilize the economy by setting the interest rate. They were not told that their decisions were related to an economy. Instead, instructions talked about "chip levels in two containers labeled Container A and Container B." Subjects were told that these levels are related to each other and that increasing the instrument would lower the chip levels. Container A represented output and Container B inflation. The goal was to keep the chip level in Container B as close as possible to 5, and the payoff depended on how close they got to this target in the 50 periods of the game. Subjects had some practice rounds in which they could get used to the effect of their instrument on the
chip levels. Due to the inherent instability of the models, subjects could lose control, in which case they ended up with a negative payoff that was set to zero in the end, leaving these subjects with the show-up fee. The main result of this article is that more than 80% of subjects managed the control problem well enough to receive positive payoffs.

Engle-Warnick and Turdaliev (2010) try to identify subjects' strategies by running linear panel regressions explaining the instrument by the data that the respective subjects could observe. In this, they follow Taylor (1999), who used a similar technique for identifying the monetary policy rules of the Federal Reserve during different historical eras. The regressions reveal that successful subjects14 responded to current inflation with coefficients close to 1. However, they also responded to output and to their own lagged instrument with positive coefficients. The positive response to the lagged instrument represents interest smoothing. Summing these responses up, their strategies obey the Taylor principle, which explains their success. The fit of the OLS regressions was high, averaging around an R² of 0.8, which can be taken as evidence that the identified "rules" explain a large portion of actual behavior. It is remarkable that the R² is comparable in magnitude to the fit of linear policy rules for post-war data in the United States.15 Linear rules fit behavior even though subjects had very little information about the control problem and most likely did not consciously apply a linear rule. It is also interesting to note that subjects actually came close to achieving the payoffs that would have resulted from the optimal rule.
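A minimal sketch of this kind of rule identification is given below, using synthetic data in place of the experimental records. The regression form (instrument on current inflation, current output, and the lagged instrument) follows the Taylor (1999)-style specification described above; with interest smoothing, it is the long-run inflation response b_pi/(1 − ρ) that the Taylor principle requires to exceed 1.

    import numpy as np

    def identify_policy_rule(i, pi, y):
        """OLS of the instrument on observables:
        i_t = c + b_pi*pi_t + b_y*y_t + rho*i_{t-1} + e_t."""
        X = np.column_stack([np.ones(len(i) - 1), pi[1:], y[1:], i[:-1]])
        coef, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
        c, b_pi, b_y, rho = coef
        return b_pi, rho, b_pi / (1.0 - rho)  # short-run, smoothing, long-run

    # Synthetic stand-in for one subject's 50-period history.
    rng = np.random.default_rng(1)
    pi = rng.normal(5.0, 1.0, 50)
    y = rng.normal(0.0, 1.0, 50)
    i = np.empty(50); i[0] = 5.0
    for t in range(1, 50):  # behavior with smoothing and a Taylor-type response
        i[t] = 0.5 * i[t-1] + 0.6 * pi[t] + 0.1 * y[t] + rng.normal(0, 0.05)

    b_pi, rho, long_run = identify_policy_rule(i, pi, y)
    print(f"short-run response {b_pi:.2f}, smoothing {rho:.2f}, "
          f"long-run response {long_run:.2f} (>1: Taylor principle holds)")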
While most of the literature concentrates on a specific issue in isolation and treats experiments as a means of testing a particular theory, generally using one market, some recent experiments focus on the interrelations between several markets and the spillovers between them.16 In these experiments, subjects take on different roles: some play firms, others private households, and sometimes even governments and central banks are played by subjects. These experiments usually feature commodity markets, labor markets, and an (indirectly modeled) market for liquidity. Cash-in-advance constraints are implemented using computerized double auctions in interconnected markets.17 Subjects interact repeatedly and are incentivized by being paid according to the profit or utility level that they achieve. While Lian and Plott (1998) use a general equilibrium framework for exploring the technical feasibility of running such complex experiments in laboratories with student subjects, in another article of this book, Noussair et al. (2014) construct experimental economies with the specific structure of a New Keynesian DSGE model, in which subjects play the roles of consumer/workers, producers, and sometimes central bankers. They study which frictions are necessary for replicating stylized facts and how persistent shocks are in such an environment.18

Noussair et al. (2014) study whether menu costs and monopolistic competition are essential for explaining several empirical stylized facts. Their experiment consists of three treatments that isolate the rigidities in their economy: (1) a monopolistic competition treatment; (2) a menu cost treatment; and (3) a perfect competition treatment. They find that monopolistic competition in the output market is sufficient to generate persistent effects of shocks, while menu costs are not necessary. Patterns of price adjustment follow stylized empirical facts, such as most price changes being positive. Given our focus on human central bankers, we restrict attention to a fourth treatment of Noussair et al.'s (2014) experiment, in which subjects are told to act as policymakers and are given incentives to stabilize inflation by setting the interest rate in each period. This treatment was conducted to explore whether successful human policymakers obey the Taylor principle. A second goal was to check whether the Taylor principle has the theoretically predicted effect of stabilizing an economy inhabited by human instead of fully rational agents. Noussair et al. (2014) find that most subjects control inflation relatively well and obey the Taylor principle. They also show that output shocks are more persistent and welfare is lower when monetary policy is conducted by human subjects than under an automated instrument rule (a Taylor rule).
Decision-Making Process in Monetary Policy Committees

Amongst the various aspects of the decision-making process, one widely debated issue is the size and structure of monetary policy committees. Committee decisions are nowadays standard in central banking. The composition of and the decision rules within a committee can affect the outcomes of its meetings and the quality of its decisions. While Maier (2010) reviews general "economic, experimental, sociological and psychological studies to identify criteria for the optimal institutional setting of a decision committee" (p. 320), we review the experimental literature that has focused on monetary policy decisions.

Decision rules of monetary policy committees vary widely. There is usually a leader, but the leader's authority also varies (Blinder & Morgan, 2008). For example, Blinder and Wyplosz (2005, p. 9) characterize the Federal Open Market Committee under Alan Greenspan as
autocratically-collegial, the Monetary Policy Committee (MPC) of the Bank of England as an individualistic committee, and the Governing Council of the European Central Bank (ECB) as genuinely collegial. The size and composition of committees also show a wide variety: while the ECB Governing Council has 24 members and is dominated by the 18 governors of national central banks, the MPC in the United Kingdom has only nine members with no regional affiliation, four of whom are external experts. The 12 members of the FOMC consist of seven executive board members and five of the heads of the 12 regional central banks, who rotate annually.

Experiments in other areas have shown that (small) groups can achieve higher payoffs than individuals confronted with the same problem. However, there are some important differences between monetary policy decisions and the usual tasks performed in group experiments: in monetary policy decisions, the instrument affects payoff-relevant parameters (macro data) only with a severe time lag, and decisions have to be taken under incomplete information. This raises the question of whether groups are also more efficient in dealing with these particular challenges. A second question regards time: it is often said that groups are slower in taking decisions. While the actual duration of a committee meeting (measured in hours) is irrelevant for macroeconomic performance, the number of meetings that a committee needs before it agrees to change an interest rate in response to a perceived shock is a matter of weeks or even months and has macroeconomic consequences. Thus, the relevant time dimension is better measured by the amount of data required before a committee actually responds to some external shock of which it cannot be certain.

Blinder and Morgan (2005, 2008) examine the effectiveness of individual versus committee decisions via laboratory experiments.19 Blinder and Morgan (2005) propose an experiment in which Princeton University students who had taken at least one course in macroeconomics play the role of central bankers setting the nominal interest rate, either individually or in groups. The economy was modeled using a standard accelerationist Phillips curve,

π_t = 0.4 π_{t−1} + 0.3 π_{t−2} + 0.2 π_{t−3} + 0.1 π_{t−4} − 0.5 (U_{t−1} − 5) + w_t,

in which inflation π_t depends on the deviation of the lagged unemployment rate U_{t−1} from its natural rate (set to 5) and on its own four lagged values, and by an IS curve:

U_t − 5 = 0.6 (U_{t−1} − 5) + 0.3 (i_{t−1} − π_{t−1} − 5) − G_t + ε_t.
Apart from the effects of shocks, unemployment U_t rises above (or falls below) its natural rate when the real interest rate, i_{t−1} − π_{t−1}, is above (or below) a neutral level, set to 5. The parameters of this model were chosen in crude accordance with empirical estimates for the US economy. While w_t and ε_t are small i.i.d. shocks with a uniform distribution on the interval [−0.25, +0.25], the economy is also subject to a large demand shock G_t that starts out at zero but switches permanently to either +0.3 or −0.3 in one of the first 10 periods. The main challenge for the central bank is to adjust the interest rate in response to this large demand shock. The smaller shocks make detecting the permanent shock a nontrivial task. Subjects were not informed about the precise specification of the model. They were only told that raising the interest rate increases unemployment and lowers inflation with some delay, while lowering the interest rate has the opposite effects. Subjects knew that a large demand shock would occur with equal likelihood in any of the first 10 periods. The economy lasted for 20 periods, and the payoff per period was given by a linear loss function for deviations of unemployment and inflation from their target levels:

s_t = 100 − 10 |U_t − 5| − 10 |π_t − 2|.

Finally, subjects were paid according to the average of s_t over the 20 periods. In order to achieve their targets, subjects could change the interest rate in any period at a fixed cost of 10 points. This design feature enables the authors to detect when their subjects respond to the large shock. The game was played 40 times. Subjects first played 10 rounds alone. Then, they were matched in groups of five for another 10 rounds. In a third part of the experiment, subjects played alone for another 10 rounds, and finally they were matched in groups of five for the last 10 rounds. Out of 20 sessions in total, in 10 sessions groups decided by majority rule, while the other 10 sessions required unanimous group decisions.
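A compact simulation of this economy conveys the control problem subjects faced. The myopic policy rule in the loop below is an illustrative stand-in for a subject's behavior, not a strategy reported by the authors, and the 10-point cost per rate change is omitted for brevity.

    import numpy as np

    rng = np.random.default_rng(42)
    T = 20
    pi = [2.0, 2.0, 2.0, 2.0]    # inflation history (four lags needed)
    U, i = [5.0], [7.0]          # unemployment; neutral nominal rate when pi = 2
    G, shock_period = 0.0, rng.integers(1, 11)
    score = 0.0

    for t in range(1, T + 1):
        if t == shock_period:                 # permanent demand shock hits
            G = rng.choice([-0.3, 0.3])
        w, eps = rng.uniform(-0.25, 0.25, 2)
        U_t = 5 + 0.6*(U[-1] - 5) + 0.3*(i[-1] - pi[-1] - 5) - G + eps
        pi_t = (0.4*pi[-1] + 0.3*pi[-2] + 0.2*pi[-3] + 0.1*pi[-4]
                - 0.5*(U[-1] - 5) + w)
        score += 100 - 10*abs(U_t - 5) - 10*abs(pi_t - 2)
        # Illustrative policy: lean against observed inflation deviations.
        i.append(i[-1] + 0.5*(pi_t - 2.0))
        U.append(U_t); pi.append(pi_t)

    print(f"average per-period score: {score / T:.1f}")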
The main result of this study is that groups made better decisions than individuals without requiring more time. Time is measured by the number of periods, and thus the amount of data, required before the individual or group decides to change the interest rate after the external shock has occurred. There was no significant difference between the time lags of groups deciding by majority rule and groups deciding unanimously. Blinder and Morgan (2005, p. 801) report that "in almost all cases, once three or four subjects agreed on a course of action, the remaining one or two fell in line immediately." While there was no evidence that subjects improved their scores during any of the blocks of 10 rounds, there was a significant difference between the scores that individual decision makers achieved during the first 10 rounds and during rounds 21–30. It is not clear, though, whether the improvement is due to learning from other group members or just to the additional experience with the game.

Blinder and Morgan (2008) repeated the same experiment with students from the University of California, Berkeley, raising two more questions concerning the relevance of group size and leadership: are smaller committees more efficient than large ones, and do committees perform better in the presence of a leader? Blinder and Morgan (2008) compare sessions with groups of four and eight subjects and designate a leader in half of the sessions, whose vote serves as a tie-breaker and whose score is doubled. The results show that larger groups yield a better performance. Whether the group has a designated leader or not has no significant impact on performance. Neither does the performance of the best individual in the previous rounds, in which subjects played alone. However, the average previous performance of group members has a positive, albeit decreasing, effect on the group's performance.

Another issue related to the decision-making process in monetary policy committees, underlined by Maier (2010, p. 331), is that both the ECB20 and the Fed "have adopted a rotation system to limit the number of voting members, that is, the right to vote rotates following a pre-determined sequence." Maier argues that "rotation is a useful device to increase the amount of information without compromising the group size." But he also notes that the goal of shortening the discussion "can only be achieved if non-voting members hardly ever participate in the discussion." It is not clear how they can then increase the information used by the committee for its decisions. Another aspect of rotation is that voting members may pursue their own interests at the expense of nonvoting members. In an experiment on committee decisions, Bosman, Maier, Sadiraj, and van Winden (2013) analyze how subjects trade off common and private interests depending on the rotation scheme. They find that voting members receive higher payoffs than nonvoting members in treatments with rotation of voting rights, while payoffs in a control treatment, where all subjects could vote, lie somewhere in between. Decisions were taken faster in treatments with smaller groups of voting subjects, and rotation helped avoid deadlocks. Total earnings were somewhat lower in treatments with rotation, but the difference is small, and Bosman et al. (2013, p. 39) conclude that "rotation has primarily distributional effects." It is not clear, though, how much this result is driven by the particular payoffs in this experiment.
The total payoffs arising from selfish voting behavior are very sensitive to the relative differences between committee members' objectives. Note that there was no information asymmetry between committee members in this experiment. Hence, the question of whether rotation decreases the amount of information utilized by the committee could not be addressed here.
TRANSPARENCY AND COMMUNICATION ISSUES

This section is devoted to central bank communication and the merits of transparency. There is a lively debate about the pros and cons of transparency, and while central banks have moved toward greater transparency, theoretical papers provide mixed recommendations. In particular, strategic complementarities inherent in macroeconomic models provide incentives to overweight public announcements relative to their informational content (e.g., Morris & Shin, 2002). This may lead to public signals reducing welfare. We present some experiments that measure the relative weights that subjects put on public versus private signals, provide explanations, and draw conclusions for the welfare effects of public announcements (Section "Overreaction to Central Bank Disclosures"). These studies show that subjects may overreact to public information, in the sense that they attribute more weight to public information than the Bayesian weight following from its relative precision. In theory, overreaction can have welfare-detrimental effects. It is therefore relevant to analyze how central banks may reduce overreaction to their disclosures. Some experiments test different communication strategies and compare their effectiveness in reducing welfare-detrimental overreactions (Section "Central Bank Communication Strategies"). Experiments are particularly well suited for testing the effects of information and communication channels, because the experimenter can control information and distinguish communication channels in different treatments. Thereby, experiments yield very clear results about the effects of information, while field evidence is always plagued by the simultaneity of different communication channels and by the problem of filtering out which information really affected decisions. Communication is an essential tool at the disposal of central banks. A related and interesting issue is the interaction between communication and stabilization policy. Only a few experiments focus on this issue (Section "Communication and Stabilization Policy").
Overreaction to Central Bank Disclosures

As Geraats (2002, p. F533) noted, "central bank transparency could be defined as the absence of asymmetric information between monetary policymakers and other economic agents." Central bank transparency has increased rapidly over the last 20 years, especially with the adoption of inflation targeting by many central banks (New Zealand, Canada, the United Kingdom, and Sweden in the early 1990s).21 However, financial markets typically exhibit overreaction to public information such as press releases or public speeches disclosed by central banks. Indeed, since central banks interact closely with the financial sector, their disclosures attract the attention of market participants. While it is usually believed that more information improves market efficiency, some literature based on coordination games with heterogeneous information shows that public disclosure may be detrimental.

Morris and Shin (2002) present a stylized game with weak strategic complementarities for analyzing the welfare effects of public and private information. Agents have to choose actions that are close to a fundamental state (fundamental motive) but also close to each other (coordination motive). The game is characterized by both fundamental and strategic uncertainty: agents receive noisy public and private signals about the fundamental state variable. In equilibrium, an agent's action is a weighted average of the public and private signals. The equilibrium weight attached to the public signal is higher than its relative precision. This "overreaction" is due to the higher informational content of public signals regarding the likely beliefs and actions of other agents. The difference between equilibrium weights and relative precisions rises with the weight put on the coordination motive. This mirrors the disproportionate impact of the public signal in coordinating agents' actions. The model of Morris and Shin emphasizes the role of public information as a focal point for private actions. Strategic complementarities provide incentives to coordinate on the publicly announced state of the world and to underuse private information (PI). If public announcements are inaccurate, private actions are drawn away from the fundamental value, reducing the efficiency of the action profile.

Cornand and Heinemann (2014) test predictions of this approach by implementing two-player versions of this game adapted for conducting an experiment.22 They run treatments that vary with respect to the weights on the fundamental and coordination motives and experimentally measure the weights that subjects put on public information in the different treatments. In this experiment, subjects are matched in pairs, and for each pair a
random number θ (the fundamental) is drawn from a large interval with uniform distribution. Each of the two subjects receives a private signal x^i and, in addition, both subjects receive a common (public) signal y. All three signals are i.i.d. with a uniform distribution around θ. Payoff functions are:
U_i(a, θ) = C − (1 − r) (a_i − θ)² − r (a_i − a_j)²,

where a_i and a_j are the actions of the two players and r is the relative weight on the coordination motive. For r > 0, each agent has an incentive to meet the action of his partner. In the benchmark case without a coordination motive (r = 0), subjects follow the theoretical advice from Bayesian rationality: they use all information of the same precision with equal weights, regardless of whether information is private or public. When both fundamental and coordination motives enter subjects' utility, subjects put larger weights on the public signal, but these weights are smaller than theoretically predicted. This reduces the effects of public signals on the average action compared to equilibrium predictions. Observed weights can be explained by a model of limited levels of reasoning, where Level 1 is defined by the optimal action of a player who neglects that public signals provide more information about other players' actions, and Level k is the best response to Level k−1. Subjects' choices are distributed around the weights associated with Level 2.

Cornand and Heinemann (2014) also elicit higher-order beliefs. As in the game, they match subjects in pairs, draw a random number θ for each pair, and provide each of the two subjects with a private signal x^i and a public signal y. All three signals are i.i.d. with a uniform distribution around θ. Then, they ask subjects to state an individual expectation for θ. The stated belief of subject i is denoted by e^i. The Bayesian expectation is e^i = E[θ | x^i, y] = (x^i + y)/2. Subjects are also asked to submit an expectation of the belief stated by their partner. The Bayesian expectation about the other subject's belief is:

E[e^j | x^i, y] = (E[x^j | x^i, y] + y)/2 = (E[θ | x^i, y] + y)/2 = (1/4) x^i + (3/4) y.

Hence, subjects should put a weight of 0.25 on their private signal when estimating their partner's stated belief about θ. The actual weights that subjects put on their private signal were significantly higher. This deviation from Bayesian higher-order beliefs indicates that subjects underestimate how informative the public signal is in assessing other players' expectations. This may be viewed as an alternative explanation for why subjects put lower weights on the public signal in the Morris-Shin game. However, drawing on a simulation exercise in which they derive the best response to non-Bayesian beliefs, Cornand and Heinemann conclude that the observed deviations from equilibrium cannot be explained by deviations from Bayesian rationality alone. Rather, non-Bayesian beliefs must be combined with limited levels of reasoning.
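The level-k structure can be made concrete with a short computation. In this two-player setup with equal signal precisions, the best response to a partner who puts weight w on the public signal puts weight 1/2 + rw/2 on it (this follows from E[x^j | x^i, y] = (x^i + y)/2); starting from the Bayesian Level-1 weight of 1/2 and iterating converges to the equilibrium weight 1/(2 − r). The sketch below is a minimal illustration of that iteration, not code from the original study, and the value of r is chosen only for illustration.

    def best_response_weight(w, r):
        """Weight on the public signal when best-responding to a partner
        who puts weight w on it (two players, equal signal precisions)."""
        return 0.5 + r * w / 2

    r = 0.5            # illustrative weight on the coordination motive
    w = 0.5            # Level 1: plain Bayesian weight
    for k in range(1, 6):
        print(f"Level {k}: weight on public signal = {w:.4f}")
        w = best_response_weight(w, r)

    print(f"equilibrium weight 1/(2 - r) = {1 / (2 - r):.4f}")

Subjects' choices clustering around the Level-2 weight, as reported above, means they lie between the Bayesian benchmark and this equilibrium limit.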
In the limiting case of a pure coordination game (r = 1), equilibrium theory does not yield a unique prediction, but the public signal provides a focal point that allows agents to coordinate their actions. In the experiment, subjects indeed tend to follow the public signal and put a significantly larger weight on it than in games with both fundamental and strategic uncertainty. However, they still put a positive weight on their private signals, which prevents full coordination. Here, the provision of PI reduces efficiency.

In a related experiment, Shapiro, Shi, and Zillante (2014) analyze the predictive power of level-k reasoning in a game that combines features of the work by Morris and Shin (2002) with the guessing game of Nagel (1995). While Cornand and Heinemann (2014) only look at average weights on private versus public signals, Shapiro et al. (2014) try to identify whether individual strategies are consistent with level-k reasoning. They argue that the predictive power of level-k reasoning is positively related to the strength of the coordination motive and to the symmetry of information.

Cornand and Heinemann (2013) reconsider the extent to which public information may be detrimental to welfare: they use the experimental results from Cornand and Heinemann (2014) to calibrate the model by Morris and Shin (2002). If agents follow the levels of reasoning observed in the experiment, public information cannot have detrimental effects, while PI may be welfare detrimental if coordination is socially desired. Only if subjects employ higher levels of reasoning are negative welfare effects of public information possible. Cornand and Heinemann (2013) also analyze the effects of limited levels of reasoning in the model of James and Lawler (2011), in which a central bank can take policy actions against some fundamental shock. In this model, policy and private actions are perfect substitutes with respect to neutralizing aggregate shocks, and the government can provide the optimal response to its own information without the need to publish it. Private actions are still required to account for the additional information contained in agents' private signals. This distribution of tasks achieves the first best. If, however, the government discloses its information as a public signal, private agents reduce the weight they put on their private signals and, thus, PI enters the total response of the economy with a weight that is suboptimally small. For this reason, it is always optimal to withhold public information completely. This argument is robust to limited levels of reasoning.23 Overall, Cornand and Heinemann (2013) conclude that for strategies as observed in experiments, public information that is more precise than PI cannot reduce welfare, unless the policy maker has instruments that are perfect substitutes to private actions.

Dale and Morgan (2012) provide a direct test of the welfare effects of public information in the model of Morris and Shin (2002). They argue that adding a lower-quality private signal improves the quality of decisions. When the lower-quality signal is public, subjects strategically place inefficiently high weights on the public signal, which reduces their payoffs. However, Dale and Morgan do not account for the weight that subjects put on the commonly known prior, which serves as a second public signal in this experiment, and they give subjects feedback about the best response after each round of decisions, which may be responsible for the convergence toward equilibrium that could not be observed in related studies.

While Cornand and Heinemann (2014), Shapiro et al. (2014), and Dale and Morgan (2012) do not consider trading, there is a huge experimental literature on market efficiency in aggregating PI into prices.24 However, the different roles of public and private signals in such markets have only recently been analyzed. Ackert, Church, and Gillette (2004) present evidence from a laboratory asset market in which traders receive public signals of different quality (but no private signals). They show that traders overreact to low-quality public information and under-react to high-quality public information. Taking the example of information provided by rating agencies, Alfarano, Morone, and Camacho (2011) analyze whether the presence of public signals can impede the aggregation of PI. To this aim, they replicate a market situation in which, at the beginning of each trading period, each subject was endowed with some units of an unspecified asset and an amount of experimental currency. The asset paid a dividend at the end of the trading period. In each trading period, the dividend was randomly determined by the experimenter. During each trading period, subjects could post bids and asks for assets or directly accept any other trader's bid or ask. To inform his decisions, each subject could purchase as many private signals on the dividend as he wanted during the trading period (as long as he had enough cash). In the treatment with public information, subjects also had access to a free public signal on the dividend. The authors find that when public information is disclosed, fewer private signals are bought. Thus, public information crowds out PI. However, this effect does not reduce
market information efficiency, in the sense that the additional public information compensates for the reduction in PI.

Middeldorp and Rosenkranz (2011) also test an experimental asset market with costly PI. Their asset market implements the theoretical models by Diamond (1985) and Kool, Middeldorp, and Rosenkranz (2011). In their experiment, the provision of a public signal crowds out PI to such an extent that forecast errors may rise with increasing precision of the public signal. Subjects participated in two phases. The first phase aimed at measuring subjects' risk attitudes. The second corresponded to the actual market trading (25 periods). Each period was divided into two stages: an information stage and a trading stage. In the information stage, subjects were shown a screen revealing their endowment, consisting of some amount of experimental currency units and a risky asset producing a random payout at the end of the experiment, as well as a public signal regarding the payout for the respective period and the standard deviation of this signal. To obtain more precise information on the payout, subjects could buy a noisy private signal about the payout for the current period. The trading stage implemented a continuous double auction: subjects could post bid and ask prices in any quantity for 150 seconds, and trades were carried out whenever possible. The authors varied the precision of public information between periods to measure its impact on the crowding out of PI. To see whether the experimental asset market incorporates PI into the price, the authors compare the error of public information to the market error: whenever the market price predicts the payout better than public information does, the market incorporates PI. However, the experiment shows that, on average, market prices are less informative than the public information that all traders receive. More precisely, prices outperform public information for rather imprecise public signals, while for very precise public information, market errors do not decline proportionally.25 Middeldorp and Rosenkranz (2011) conclude that their results confirm theoretical predictions according to which a more precise public signal from a central bank can in some cases reduce market efficiency.
Central Bank Communication Strategies

In the Section "Overreaction to Central Bank Disclosures," we presented experiments investigating whether public information can be detrimental to welfare. Since overreaction to public information is responsible for possible welfare-reducing effects, it is important to ask how central
banks can reduce such overreaction to their own disclosures. In this subsection, we therefore focus on central banks' communication strategies in the lab and especially, following Baeriswyl and Cornand (2014), on strategies that may reduce market overreaction to public disclosures.

The theoretical literature envisages two disclosure strategies for reducing the overreaction of market participants to public information. The first, partial publicity, consists of disclosing transparent information as a semi-public signal to a fraction of market participants only (see Cornand & Heinemann, 2008). The degree of publicity is determined by the fraction of market participants who receive the semi-public signal. As argued by Walsh (2006, p. 229), "Partial announcements include, for example, speeches about the economy that may not be as widely reported as formal policy announcements. Speeches and other means of providing partial information play an important role in central banking practices, and these means of communication long predate the publication of inflation reports." Choosing a communication channel with partial publicity reduces overreaction, as uninformed traders cannot respond anyway, whereas informed traders react less strongly because they know that some other traders are uninformed. The second strategy, partial transparency, consists of disclosing ambiguous public information to all market participants (see Heinemann & Illing, 2002). The degree of transparency is determined by the idiosyncratic noise added to the public signal by individual differences in interpreting the signal. The famous 1987 quotation by Alan Greenspan, then chairman of the Federal Reserve Board, is a good illustration of partial transparency: "Since I've become a central banker, I've learned to mumble with great incoherence. If I seem unduly clear to you, you must have misunderstood what I said" (Alan Greenspan, as quoted in the Wall Street Journal, September 22, 1987, according to Geraats, 2007). Choosing a communication channel that implements partial transparency reduces overreaction, because ambiguity generates uncertainty about how other market participants interpret the same signal, which mitigates its focal role.

In a framework closely related to that of Morris and Shin (2002), Baeriswyl and Cornand (2014) show that these strategies are theoretically equivalent in reducing overreaction to public information, in the sense that a signal with a limited degree of publicity or an appropriately limited degree of transparency can achieve the same response of average actions. Baeriswyl and Cornand also conduct an experiment comparing the effects of partially public and partially transparent signals, in which parameters are chosen so that both communication strategies are theoretically equivalent. They use a set-up similar to that of Cornand and Heinemann (2014),
but with a relatively high weight on the coordination motive (r = 0.85) and with seven participants per group instead of two. The different treatments compare the effectiveness of partial publicity and partial transparency in reducing overreaction, along with a baseline treatment in which the public signal is transparently provided to all group members. Partial publicity is implemented by revealing the signal to only four of the seven group members. In the partial-transparency treatment, all group members receive the signal, but with an appropriate idiosyncratic noise. Both partial publicity and partial transparency succeed in reducing overreaction to public information in the laboratory, although less than theory predicts. According to Baeriswyl and Cornand (2014, p. 1089), "partial publicity reduces overreaction only to the extent that uninformed subjects cannot react to public information, whereas informed subjects do not behave differently than they do under full publicity. In other words, it is the actual lack of information by uninformed subjects rather than the response of informed subjects to the perception that others are uninformed that reduces overreaction. […] Partial transparency reduces overreaction as the ambiguity surrounding public information induces subjects to behave cautiously. Nevertheless, partial publicity turns out to reduce overreaction more strongly than partial transparency in the experiment." Yet Baeriswyl and Cornand (2014) advocate partial transparency as a policy recommendation for reasons of reliability and fairness. Arguably, partial transparency may be easier to implement than partial publicity in an era where media quickly relay information on a large scale. Moreover, partial publicity violates equity and fairness principles: it seems "politically untenable [for a central bank] to withhold important information intentionally from a subgroup of market participants" (p. 1090) in a democratic society. Central banks should rather control the reaction to their public disclosures by carefully formulating their content instead of selecting their audience.
Communication and Stabilization Policy

While the papers discussed in the Sections "Overreaction to Central Bank Disclosures" and "Central Bank Communication Strategies" focus on transparency in frameworks that do not include instruments for stabilizing the economy, this subsection is devoted to the first few papers combining active stabilization with communication.
Inflation targeting (IT) is a monetary policy strategy characterized by the announcement of a target for inflation, a clear central bank mandate to pursue inflation stabilization as the primary objective of monetary policy, and a high level of transparency and accountability. Empirically, IT regimes vary widely depending on the degree to which these criteria are applied (Svensson, 2010), and the benefits of explicitly adopting an IT regime have long been debated in the literature (see, e.g., Angeriz & Arestis, 2008; Ball & Sheridan, 2005; Levin, Natalucci, & Piger, 2004; Roger, 2009; Roger & Stone, 2005; to mention but a few). Cornand and M'baye (2013) present a laboratory experiment framed as a standard New Keynesian model that aims at testing the relevance of different IT regimes. More precisely, they examine the relevance of communicating the target for the success of the IT strategy and evaluate how central bank objectives matter for economic performance. The model is based on three main equations: an aggregate demand equation (IS curve), a supply function (New Keynesian Phillips curve), and a reaction function of the central bank (the interest rate rule). The experiment consists of eliciting subjects' inflation expectations in the lab and inserting them into the theoretical model to derive the current values of inflation, output gap, and interest rate.26 Participants' task was to forecast the next period's inflation in each of the 60 periods of a session. They were presented with the four main macroeconomic variables: inflation, output gap, interest rate, and the central bank's inflation target. They were informed that the actual values of inflation and output gap mainly depend on the stated expectations of all participants and are also affected by the lagged output gap, small random shocks, and (when applicable) the central bank's inflation target. On their screens, participants could observe time series of the first three variables up to the current period. Participants' payoffs were such that they earned points whenever their forecast error was below 3%. Four different treatments27 were considered. First, implicit strict IT: the central bank's sole objective was to stabilize inflation, but the inflation target was not announced to the public. Second, explicit strict IT: the central bank also had the sole objective of stabilizing inflation and explicitly communicated its 5% target to forecasters. Third, implicit flexible IT: the central bank had both an inflation and an output gap stabilization objective and did not announce its target. And fourth, explicit flexible IT: the central bank also had an inflation and an output gap stabilization objective and explicitly communicated its target for inflation.
Cornand and M’baye analyze the impact of individual behavior on macroeconomic outcomes. They find that “if the central bank only cares about inflation stabilization, announcing the inflation target does not make a difference in terms of macroeconomic performance compared to a monetary policy that follows the Taylor principle” (p. 2). However, if the central bank has a double objective, communicating the target may reduce the volatilities of inflation, interest rate, and output gap without affecting the average levels of these variables. The first rationale is that communication reduces agents’ uncertainty about policy objectives by clarifying these objectives. A second reason is that a flexible IT regime seems more sensitive to fluctuations in inflation forecasts than a strict IT regime and is less effective in stabilizing the economy because subjects need more time to reach the target. Hence, announcing the target is more helpful in reducing forecast errors. Third, subjects tend to rely more on trend extrapolation in the implicit flexible IT treatment than in the explicit flexible IT treatment. Trend extrapolation requires more frequent and aggressive adjustments in the policy instrument to mitigate the high volatility in inflation and output gap. While Cornand and M’baye consider subjects only as price-setting firms, Kryvtsov and Petersen (2013) analyze the role of central bank communication in a more macro experimental framework. One of the most important contributions of Kryvtsov and Petersen is to introduce a measure of the expectations channel of monetary policy. They demonstrate that public announcements of interest rate forecasts may reduce the effectiveness of monetary policy and increase macroeconomic fluctuations. More details about this paper can be found in another article of this book (Assenza et al., 2014).
POLICY IMPLEMENTATION

Monetary policy implementation has two dimensions that we distinguish here: one is the particular strategy or policy rule by which the central bank adjusts its instrument to observed data; the other is the operational side of how open-market operations are conducted to provide liquidity to the financial system. The ECB conducts weekly repo auctions, while the U.S. Federal Reserve holds auctions on a daily basis. Commercial banks' liquidity demand depends on their refinancing needs and reveals information about credit flows to the central bank. The market mechanism is important in two ways: it should efficiently allocate liquidity and also
aggregate information. Hence, an important question is how to design these auctions. Auction design is a classical topic of experimental economics,28 and using experiments for bench testing auctions has become a standard procedure. We first review experiments dealing with the strategic dimension of monetary policy rules (Section "Monetary Policy Rules") before focusing on the rare experiments specifically designed for repo auctions (Section "Repo Auctions").
Monetary Policy Rules

Pfajfar and Žakelj (2014) analyze how effective different monetary policy rules are in stabilizing inflation. They consider a reduced form of the New Keynesian model29 with an IS curve,

y_t = −φ (i_t − E_t π_{t+1}) + y_{t−1} + g_t,

where i_t is the interest rate, π_t denotes inflation, E_t π_{t+1} is the forecast made in period t for period t + 1, y_t is the output gap, g_t is an exogenous shock, and φ is the intertemporal elasticity of substitution in demand. The Phillips curve is given by

π_t = β E_t π_{t+1} + λ y_t + u_t.

Four treatments are distinguished depending on the monetary policy rule considered. In three treatments, Pfajfar and Žakelj employ inflation forecast targeting,

i_t = γ (E_t π_{t+1} − π̄) + π̄,

with different specifications of the parameter γ (γ = 1.5 in Treatment 1, γ = 1.35 in Treatment 2, γ = 4 in Treatment 3). The fourth treatment implements contemporaneous inflation targeting, i_t = γ (π_t − π̄) + π̄, with γ = 1.5. The target level is denoted by π̄.

The experiment consists of a simulated fictitious economy of nine agents, described by the three equations above. For each period t, the participants receive a table with past realizations of inflation, output gap, and interest rate. For the first period, 10 initial values were generated by the computer under the assumption of rational expectations. Subjects get a qualitative description of the underlying model. Their task is to provide an inflation forecast for period t + 1 and a 95% confidence interval around their prediction.
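The learning-to-forecast design can be summarized in a few lines of simulation: elicited forecasts feed into the model equations, which return realized inflation. The parameter values below are illustrative assumptions rather than the experiment's calibration, and a naive forecast rule stands in for subjects' elicited expectations.

    import numpy as np

    # Illustrative parameters; not the calibration used in the experiment.
    phi, beta, lam, gamma, pi_bar = 0.5, 0.99, 0.3, 1.5, 3.0
    rng = np.random.default_rng(7)

    pi, y = [3.0], [0.0]
    for t in range(1, 60):
        e_pi = pi[-1]                              # naive stand-in for forecasts
        i_t = gamma * (e_pi - pi_bar) + pi_bar     # inflation forecast targeting
        y_t = -phi * (i_t - e_pi) + y[-1] + rng.normal(0, 0.1)   # IS curve
        pi_t = beta * e_pi + lam * y_t + rng.normal(0, 0.1)      # Phillips curve
        y.append(y_t); pi.append(pi_t)

    print(f"mean inflation {np.mean(pi):.2f}, std {np.std(pi):.2f}")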
Fig. 3 presents a group comparison of expected inflation and realized inflation by treatment, with one panel per treatment, inflation (%) plotted against periods, and one line per group (Groups 1-6).

[Fig. 3. Group Comparison of Expected Inflation (Average Subject Prediction) and Realized Inflation by Treatment. Source: Pfajfar and Žakelj (2014).]

The authors show that, amongst the rules targeting inflation forecasts, a higher degree of monetary policy aggressiveness γ reduces the variability of inflation but may lead to cycles. Contemporaneous inflation targeting performs better than inflation forecast targeting with the same degree of monetary policy aggressiveness. Pfajfar and Žakelj also analyze how subjects form their expectations by identifying different strategies and estimating the share of subjects who are following these strategies.
A significant share of subjects follow either a trend extrapolation model for inflation or a general adaptive model, in which the inflation forecast is a linear function of the three macroeconomic variables from the last period.

Luhan and Scharler (2014) use a learning-to-optimize experiment to analyze the role of the Taylor principle. Their main result is that violations of the Taylor principle need not be destabilizing, because subjects use the nominal interest rate as a proxy for the real interest rate and may reduce consumption demand in response to high nominal rates even if the real rate has fallen. In their experiment, subjects play 20 rounds of a two-period game. In each round, subjects decide how much of a given endowment to consume and how much to save for consumption in the second
period of the same round. The inflation rate in any round is determined by subjects' consumption decisions in the previous round of the game.30 Savings earn interest at a nominal rate that is set by the central bank in response to the inflation rate. If the central bank obeys the Taylor principle, the real rate rises with increasing inflation. Theoretically, this should induce lower consumption and, thereby, lead to a lower inflation rate in the next round. If the Taylor principle is violated, one should expect the opposite response, and the economy should converge to corner solutions in which all subjects either consume their total endowment and inflation is high, or save it all and inflation is low. Between treatments, Luhan and Scharler (2014) vary whether the central bank obeys the Taylor principle and whether the current period's inflation rate is revealed to subjects before they decide or only ex post. Note that in New Keynesian models, agents base their consumption decisions on the real interest rate, which consists of a known nominal rate and expected, yet unknown, future inflation. Thus, withholding information about the inflation rate that is relevant for the current decision problem is the more relevant case. Luhan and Scharler observe that mean inflation is close to the target if the Taylor principle holds. If the Taylor principle is violated and inflation is known ex ante, inflation rates converge to either of the extremes. But if inflation is not revealed ex ante, average inflation rates are more evenly distributed and close to the target in many economies. The explanation is that many subjects do not learn the inflation dynamics and take the nominal interest rate as a proxy for the real rate. If this observation carries over to real economies, the central bank may be able to stabilize inflation even when it violates the Taylor principle.

Amano, Engle-Warnick, and Shukayev (2011) examine how subjects form expectations when a central bank changes from IT to price-level targeting. The recent financial crisis has cast doubt on the IT consensus.31 An alternative approach, theoretically studied, for example, by Svensson (2003), is price-level targeting.32 While IT helps stabilize inflation, it does not correct for past deviations from the target, leaving some uncertainty about the future level of prices: under IT, shocks to inflation may have a permanent effect on the price level. Price-level targeting aims precisely at bringing the price level back to the target after a deviation. In theory, price-level targeting should generate more stable output and inflation (Kahn, 2009). However, price-level targeting remains largely untested in practice, and its efficacy rests on the assumption that "economic agents must forecast inflation rationally (…) and in a manner consistent with the price-level
targeting regime" (Amano et al., 2011, p. 1). In theory, price-level targeting provides a better anchor for inflation expectations, which allows the central bank to achieve greater stabilization of inflation and economic activity. Amano et al. (2011) aim at evaluating whether economic agents understand the implications of price-level targeting for the rate of inflation. They analyze whether moving from IT to price-level targeting leads subjects to adjust their inflation expectations in a manner consistent with price-level targeting. They simulate a macroeconomic model with exogenous shocks in which they consider two scenarios: one in which the central bank targets a zero inflation rate and a second in which the central bank targets a constant price level. All subjects start out with IT for 20 periods (plus 20 practice periods). Then half of all subjects are exposed to a regime with price-level targeting. The screen shows them a history of inflation and aggregate price levels from the past eight periods. Subjects' task consists of predicting inflation for the next period. The instructions clarify the role of the central bank: under IT, the central bank is not concerned with the past price level; under price-level targeting, the central bank acts to bring the price level back to its constant target. While subjects rely only on past inflation when predicting future inflation rates under IT, they partially adjust their expectations in the direction implied by price-level targeting once this policy becomes effective. Thus, their expectations are qualitatively, but not quantitatively, consistent with the regime switch.

Marimon and Sunder (1995) compare different monetary rules in an overlapping-generations framework and analyze their influence on the stability of inflation expectations. In particular, they focus on the comparison between Friedman's k-percent money growth rule and a deficit rule where the government fixes the real deficit and finances it by seigniorage. They find little evidence that Friedman's rule can help to coordinate agents' beliefs and stabilize the economy. The inflation process might be even more volatile when Friedman's rule is announced. In unstable environments, subjects behave more in line with adaptive learning models than with forward-looking rational expectations. Thus, a constant money growth rate does not necessarily anchor inflation expectations. Bernasconi and Kirchkamp (2000) conduct a similar analysis and find that a monetary policy following Friedman's rule reduces inflation volatility but also leads to higher average inflation than a revenue-equivalent deficit rule. The reason is that subjects save too much, and over-saving reduces inflation rates under the deficit rule. The design of experiments on overlapping-generations economies is described in more detail by Assenza et al. (2014) in this volume.
Repo Auctions

Ehrhart (2001) studies the fixed-rate tender mechanism used by the ECB before June 27, 2000. Under this mechanism, the ECB set an interest rate and a maximum amount of liquidity, while banks announced how much liquidity they wanted to borrow at this rate (bids). If the aggregate demand for liquidity exceeded the maximum, banks were rationed proportionally. During the 18 months in which this mechanism applied, bids exploded to the point that the final allotment was below 1% of the bids. Banks exaggerated their demand for refinancing because they expected to be rationed. The problem with this strategic response is that refinancing operations were also supposed to help the central bank evaluate and plan monetary policy, and it became difficult to extract the relevant information about liquidity demand from these inflated bids. The ECB therefore switched to an interest-rate tender, which does not suffer from possible bid explosions but has the disadvantage of providing incentives for underbidding. In this context, Ehrhart (2001) designed an experiment to evaluate whether and under which conditions a fixed-rate tender leads to a strategic increase in bids and how this affects the information content of these bids.

The experiment tests different fixed-rate tender games. Subjects play the role of banks, while the central bank is automated. At the beginning of each round, subjects were informed about the interest rate. The maximum ("planned") allotment was a random variable unknown to subjects when they submitted their bids; they only knew its distribution. They were aware of the proportional rationing mechanism and observed on their screens the payoffs resulting from different allotments at the current interest rate. Treatments differ with respect to the interest rate and the distribution of the maximum allotment in order to evaluate the link between the maximum allotment and the optimal demand. In Treatment 1, the expected maximum allotment is set equal to the optimal demand, so that there is a unique equilibrium in which demand is slightly higher than optimal. In Treatment 2, the expected allotment is smaller than the optimal demand but still allows a larger-than-optimal allotment. Treatment 3 did not allow the maximum allotment to be larger than the optimal demand. While the interest rate was kept constant for the 20 rounds of the experiment in the first two treatments, it changed from a low to a high level in Treatment 3 from round 11 onwards. The games in Treatments 2 and 3 (first 10 rounds) have no equilibrium in finite numbers, as banks would always try to overbid
each other. The games in Treatments 1 and 3 (last 10 rounds) have a unique equilibrium in which demand slightly exceeds the optimum. The results show exploding bids in the treatments without an equilibrium, while bids were close to the equilibrium in Treatment 1. The sobering news is that for two out of six groups in Treatment 3, after playing 10 rounds of a game without an equilibrium, bids continued to explode in the second half of the treatment. The bids grew to at least twice the equilibrium level in all six groups. Thus, Ehrhart (2001) concludes that after a phase of continually increasing bids, switching to an accommodative policy (a game with a unique equilibrium) need not stop the growth of bids beyond the equilibrium level. Overall, the experiment indicates that an explosive trend in bids cannot be stopped just by changing the allotment rules or the equilibrium values. Under a fixed-rate tender, bids may remain uninformative, because bidders may respond more to strategic considerations than to their own demand conditions. In 2000, the ECB switched to an interest-rate tender with banks bidding for different amounts at different interest rates, which reveals the complete demand function and reduces the incentives for strategic bids.
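The incentive to overbid under proportional rationing is easy to see in a toy calculation; the numbers below are illustrative and not Ehrhart's parameters. Whatever the others bid, a bank that expects rationing can move its allotment closer to its true demand by inflating its own bid, so truthful bidding is not an equilibrium and bids can escalate.

    def allotments(bids, max_allotment):
        """Proportional rationing as used in a fixed-rate tender."""
        total = sum(bids)
        if total <= max_allotment:
            return list(bids)
        return [b * max_allotment / total for b in bids]

    true_demand, others_bid, M = 100.0, 900.0, 500.0  # illustrative numbers
    for my_bid in (100.0, 200.0, 400.0):
        mine = allotments([my_bid, others_bid], M)[0]
        print(f"bid {my_bid:6.0f} -> allotment {mine:6.1f} "
              f"(true demand {true_demand:.0f})")
    # Raising the bid raises the allotment as long as rationing binds.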
MONETARY POLICY DURING LIQUIDITY CRISES

When an economy is on the brink of a currency or banking crisis, central banks may use their instruments to stabilize exchange rates or inject liquidity to prevent bank runs. In these events, communication may have effects that are quite distinct from its effects in normal times, because pure liquidity crises are a phenomenon of equilibrium multiplicity. In this section, we discuss some experiments on equilibrium multiplicity (Section "Equilibrium Multiplicity") and show how interventions, but also informative and extraneous signals, may affect equilibrium selection (Sections "Global Games" and "Sunspots").

Equilibrium Multiplicity

Many models in monetary macroeconomics suffer from multiple equilibria. Here, theory cannot give a clear answer as to how changing exogenous variables affects endogenous variables, because any change intended to improve the fundamental conditions in financial markets may adversely affect expectations and lead to the opposite effect. Which of many equilibria will be played by real agents is ultimately an empirical question.
Equilibrium Stability

In overlapping-generations models or DSGE models with long-run neutrality of money, equilibrium multiplicity arises from indeterminate long-run expectations. In these models, an equilibrium is a path (p_t)_{t=0,…,∞} that satisfies certain conditions and can usually be written as a function p_t(p_{t−T}, …, p_{t−1}, E_t(p_{t+1}, …, p_∞)) or, in reduced form, p_t = f(E_t(p_{t+1})). If expectations are rational, a transversality condition fixing E_t(p_∞) yields uniqueness. Unfortunately, these transversality conditions are purely mathematical and lack any microeconomic or behavioral justification. If agents expect hyperinflation, the price level rises faster than under a bounded inflation expectation, and the expectation of hyperinflation becomes self-fulfilling. Note that a perfect-foresight path reads p_t = f(p_{t+1}), while adaptive expectations of the form E_t(p_{t+1}) = p_{t−1} yield p_t = f(p_{t−1}). The dynamic properties are exactly reversed. If expectations are adaptive, the selected path is determined by starting values inherited from the past, and a transversality condition is not needed.33 As price paths in the reduced-form example take opposite directions depending on whether agents look forward or backward, the stability of equilibria is also reversed when adaptive instead of rational expectations are used.

The stability of equilibria has been analyzed by Marimon and Sunder (1993, 1994) with experiments on overlapping-generations economies. The economies have two stationary equilibria, one stable under rational expectations, the other under adaptive expectations. The experiments show that the observed price paths tend toward the low-inflation equilibrium that is stable under adaptive expectations. Adaptive learning models can also explain most subjects' expectations in experiments on DSGE models (see Assenza et al., 2014, in this volume). However, subjects also switch between different forecasting rules depending on the relative success of these rules.
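The reversal of stability is easy to verify numerically. For an illustrative linear law of motion f(p) = a + bp with |b| > 1, the steady state is unstable under backward-looking (adaptive) dynamics p_t = f(p_{t−1}) but stable under the forward-solved dynamics p_{t+1} = f⁻¹(p_t); the function and numbers below are assumptions chosen only to illustrate the point made in the text.

    a, b = 2.0, 1.5             # illustrative linear example: f(p) = a + b*p
    p_star = a / (1 - b)        # steady state, here -4.0

    def iterate(step, p0, n=8):
        p = p0
        for _ in range(n):
            p = step(p)
        return p

    backward = lambda p: a + b * p       # adaptive: p_t = f(p_{t-1})
    forward = lambda p: (p - a) / b      # perfect foresight solved forward

    p0 = p_star + 0.01                   # small displacement from steady state
    print(f"adaptive dynamics drift away:   {iterate(backward, p0):.2f}")
    print(f"forward dynamics converge back: {iterate(forward, p0):.2f}")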
Financial Crises as Coordination Games

In financial crises, maturity or currency transformation makes borrowers vulnerable to liquidity crises and speculative attacks driven by self-fulfilling beliefs. If depositors expect their bank to become illiquid, they withdraw their funds, which reduces the bank’s liquidity. If traders expect devaluation of a currency, they sell it and create additional market pressure that may force the central bank to devalue. The bank-run model by Diamond and Dybvig (1983) is a perfect example: in order to prevent a bank run, depositors must coordinate on rolling over their deposits. The common feature of these models is that they are binary-choice coordination games in which one action yields a higher return than the other if and only if sufficiently many players choose this action. The experimental literature on coordination games shows regular patterns of behavior and also highlights that it can be extremely difficult to achieve coordination on the efficient equilibrium (see Schotter & Sopher, 2007).
Global Games

Morris and Shin (1998, 2003) applied the theory of global games to a currency crisis model and demonstrated that the model has multiple equilibria if the fundamentals of the economy are common knowledge amongst the potential speculators, while there is a unique equilibrium if agents have private information (PI) that is sufficiently precise compared to public information. In this equilibrium, the central bank can actually reduce the likelihood of speculative attacks by increasing the interest rate or imposing capital controls. Heinemann (2000) showed that this also holds when the noise of private signals converges to zero. Using the same approach, Heinemann and Illing (2002) argued that increasing the transparency of government policy reduces the likelihood of speculative attacks. In a recent paper, Morris and Shin (2014) also apply global games to a model of the risk-taking channel of monetary policy.

A global game is a coordination game with an additional state variable, where actions can be ordered such that higher actions are more profitable if other agents choose higher actions and/or if the state variable has a higher value. The state is random and its realization is not commonly known. Instead, players receive private signals that are drawn independently around the true state. The equilibrium of a global game is unique provided that the noise in private signals is small. In equilibrium, agents follow threshold strategies and switch to the higher action if their signal exceeds a certain threshold. By letting the noise converge to zero, the global game selects a unique threshold in the state space, separating states for which, in equilibrium, (almost) all players choose the lower action from states in which they choose the higher action. For a particular realization of the state variable, the global game coincides with the original coordination game. Thus, for vanishing noise in private signals, the global game selects a unique equilibrium in the original coordination game. This limit point is called the global-game selection.
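The threshold logic can be illustrated with a small Monte Carlo sketch. The payoff structure below is a hypothetical stand-in loosely patterned on the speculative-attack experiment described next (a safe payoff T versus a risky action that pays the state R if enough players choose it); the hurdle function, the uniform signal noise, and all parameter values are illustrative assumptions, not taken from any of the cited papers. Iterating the best-response map on thresholds approximates the global-game equilibrium, and shrinking SIGMA moves the fixed point toward the limit threshold selected by the theory.

```python
import numpy as np

rng = np.random.default_rng(0)
N, SIGMA, T_SAFE, SIMS = 15, 0.5, 5.0, 2000   # group size, noise, safe payoff (illustrative)

def hurdle(R):
    """Number of players needed for the risky action to succeed; a hypothetical
    rule that falls in the state R, which makes the game monotone."""
    return np.ceil(N * (1.0 - R / 20.0))

def expected_attack_payoff(x, xhat):
    """E[payoff of the risky action | own signal x] when the other N-1 players
    choose it iff their signal exceeds the threshold xhat."""
    theta = rng.uniform(x - SIGMA, x + SIGMA, SIMS)     # posterior over the state
    signals = rng.uniform(theta[:, None] - SIGMA, theta[:, None] + SIGMA, (SIMS, N - 1))
    attackers = 1 + (signals > xhat).sum(axis=1)        # including oneself
    return np.mean(theta * (attackers >= hurdle(theta)))  # pays R = theta on success

grid = np.linspace(0.0, 20.0, 201)
xhat = 10.0
for _ in range(15):                                     # iterate best responses to a fixed point
    payoffs = np.array([expected_attack_payoff(x, xhat) for x in grid])
    xhat = grid[np.argmax(payoffs >= T_SAFE)]           # lowest signal where risky beats safe
print(f"approximate equilibrium threshold signal: {xhat:.2f}")
```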
Heinemann, Nagel, and Ockenfels (2004) tested the currency crisis model by Morris and Shin (1998). Each session had 15 subjects who repeatedly chose between two actions, A and B. Action A was a safe option, providing a fixed payoff T that was varied between treatments. Option B could be interpreted as a speculative attack and gave a payoff R, provided that a sufficient number of group members chose this action. The number of subjects needed for the success of action B depended on R and on another parameter that was varied between treatments. R served as a state variable and was randomly drawn for each decision situation. In treatments with common information (CI), subjects were informed about R in each situation and knew that the others were also informed. In treatments with PI, subjects received noisy signals about R, where the noise terms were independent between subjects. Theoretically, games with CI have multiple equilibria for a wide range of realizations of R. Games with PI are global games that always have a unique equilibrium.

Heinemann et al. (2004) observed that in all treatments, more than 90% of all subjects followed threshold strategies, choosing B if and only if their information about R exceeded a threshold. The percentage of threshold strategies was about the same for CI and PI. Under both information conditions, 87% of the data variation of group-specific thresholds could be explained by parameters of the payoff function, and there was no evidence that CI led to a lower predictability of behavior that could be attributed to the existence of multiple equilibria. Thus, the major conclusion from this article is that even if information is public, subjects behave as if they receive private signals. The numerical prediction of the theory of global games was not generally supported. In most treatments, subjects deviated in the direction of more efficient strategies. However, the comparative statics of the global-game equilibrium gave a perfect description of the qualitative responses to changes in parameters of the payoff function. In sessions with a higher payoff for the safe option or with a higher hurdle for the success of action B, the threshold for choosing B was higher. There was one significant difference, though, between sessions with CI and PI: under CI, thresholds were lower than in the otherwise equal treatments with PI. In terms of the model, this means that speculative attacks are more likely if payoffs are transparent. This result is in line with other experiments on coordination games in which subjects are more inclined to choose a risky option if they have better information about the potential payoffs from this option.

Heinemann, Nagel, and Ockenfels (2009) conducted a similar experiment in which subjects had to choose between a safe option A, paying an amount X with certainty, and an option B, paying 15 Euros, provided that a fraction k of all group members decided for B in the same situation.
Parameters X and k were varied between situations. They showed that behavior could be described by an estimated global game in which subjects behave as if they had only noisy private signals about the payoffs. The estimated global game also had descriptive power in out-of-sample predictions. Subjects treated coordination games with multiple equilibria similarly to lottery choices, which indicates that strategic uncertainty can be modeled by subjective probabilities for other players’ strategies. When subjects’ beliefs about others’ actions were elicited, the average stated probabilities were surprisingly close to average behavior. The global-game selection for diminishing noise of private signals was close to a best response to the observed distribution of actions amongst players. This suggests that the global-game selection can be used for individual advice to financial market participants who are in a coordination-game situation.

Duffy and Ochs (2012) compare behavior in the experiment designed by Heinemann et al. (2004) with treatments in which subjects decide sequentially. In their experiment, 10 subjects are first informed about the realized state, either with CI or with PI. Subjects have 10 periods for choosing B. Once they have chosen B, they cannot reverse their decision. In each period, subjects are informed about how many other group members have chosen B before. This entry game resembles the dynamic nature of financial crises, which allows for herding and strategic entry. The game is repeated, and subjects converge to entering in the first period provided that the state is above a group-specific threshold. The main conclusion of Duffy and Ochs (2012, p. 97) is that “entry thresholds are similar between static and dynamic versions of the same game.”

Qu (2014) extends a global game by introducing a communication stage before the actual decisions are made. In a “Market” treatment, subjects may trade contingent claims that pay one unit depending on whether or not the risky action in stage 2 is successful. In a “Cheap Talk” treatment, subjects send nonbinding announcements of whether they intend to choose the risky action. In the “Market” treatment, prices are observed by all subjects and aggregate PI about the fundamental state. In the “Cheap Talk” treatment, subjects get to know how many subjects announced an intention to take the risky action. The market price and the number of intended entries are public signals. Qu observes that in both treatments subjects learn to condition their actions on the respective public signal. However, with cheap talk, subjects coordinate on equilibria that are significantly more efficient than the equilibria achieved with markets or in the one-stage baseline treatment.
For monetary policy, the most important conclusions from these experiments concern comparative statics and the predictability of behavior. Subjects respond to changes in the payoff function in the direction that is predicted by the global-game selection. The comparative statics follow the intuition that an improvement of fundamentals makes financial crises less likely. CI about fundamentals does not per se reduce the predictability of behavior, but it is possible to influence behavior by communication.
Sunspots

Even though the predictability of behavior in coordination games seems to be fairly high, there is always a range of parameter values for which these predictions are rather uncertain, even if the theory of global games is applied. With positive noise of private signals, the theory of global games delivers only a probability for a speculative attack being successful or a bank run occurring. If this probability is close to 50%, there is a unique equilibrium, but no reliable prediction of the final outcome. Arifovic and Jiang (2013) show that in these critical cases, subjects may condition their actions on salient but extrinsic signals. Their experiment implements a bank-run game, in which 10 subjects repeatedly decide whether to withdraw funds from a bank. Withdrawing yields a higher payoff than not withdrawing if and only if a sufficient number e* of subjects withdraw. This critical number is varied across treatments. In each period, subjects receive a random message that may be either “The forecast is that e* or more people will choose to withdraw” or “The forecast is that e* or less people will choose to withdraw.” All subjects receive the same message and are informed that it is just randomly generated. If e* = 1, subjects reliably converge to the bank-run equilibrium. If e* = 8, they coordinate on not running the bank. However, for an intermediate value, e* = 3, four out of six groups coordinate on a sunspot equilibrium, in which they run the bank if and only if they receive the first message. This result shows that behavior in coordination games may be unstable and affected by messages that are not informative about agents’ payoffs; a stylized simulation of this pattern is sketched below. Extrinsic events (“sunspots”) may affect behavior in experiments, as has been established by Marimon, Spear, and Sunder (1993) and Duffy and Fisher (2005).
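The following stylized simulation, which is not the authors’ code, illustrates how such message-contingent play can lock in. The payoff numbers, the initial salience of the message, the myopic best-reply rule, and the tremble rate are all illustrative assumptions; with these parameters the dynamics typically reproduce the pattern reported above, with a run regardless of the message for low e*, no run for high e*, and sunspot-like behavior for intermediate e*.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 200
U_STAY = 0.5                    # hypothetical payoffs: withdrawing pays U_RUN if at
U_RUN, U_NO_RUN = 1.0, 0.3      # least e_star subjects withdraw, and U_NO_RUN otherwise

def simulate(e_star):
    # action[i, m] = 1 means subject i withdraws after message m; start with a
    # mild tendency to follow the salient "forecast" message
    p0 = {0: 0.1, 1: 0.6}       # initial withdrawal probability by message
    action = np.column_stack([(rng.random(N) < p0[m]).astype(int) for m in (0, 1)])
    for _ in range(T):
        m = int(rng.integers(2))                           # extrinsic public message
        run = action[:, m].sum() >= e_star
        best = int((U_RUN if run else U_NO_RUN) > U_STAY)  # myopic best reply
        tremble = rng.random(N) < 0.01                     # occasional experimentation
        action[:, m] = np.where(tremble, 1 - best, best)
    return action.mean(axis=0)                             # withdrawal rates by message

for e_star in (1, 3, 8):
    w0, w1 = simulate(e_star)
    print(f"e*={e_star}:  withdraw | 'few' message = {w0:.2f},  'many' message = {w1:.2f}")
```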
In a pure coordination game, Fehr, Heinemann, and Llorente-Saguer (2013) show that salient extrinsic messages can lead subjects to condition their actions on these messages, even if no sunspot equilibrium exists. In their experiment, subjects are matched in pairs and must simultaneously choose a number from 0 to 100 inclusive. Their payoffs depend only on how close the chosen numbers are: the closer they are, the higher both players’ payoffs. Obviously, any number chosen by both players is an equilibrium. The game is repeated 80 times, with subjects being randomly matched each period. In this baseline treatment, all groups of subjects converge to choosing 50, which is the risk-dominant equilibrium. Fehr et al. (2013) compare this with various sessions in which subjects receive public or correlated private signals. These signals can be either 0 or 100. In a public-information treatment, both subjects in a match receive the same number. Here, they coordinate on choosing the number indicated by the public signal, 0 or 100. The signal works as a focal point and causes a sunspot equilibrium. If the two subjects receive signals that are not perfectly aligned, their actions should not be affected by the signal, because there are no sunspot equilibria. In the experiment, however, highly correlated private signals had a significant effect on behavior. Four out of 12 groups in which private signals were highly correlated even coordinated on non-equilibrium strategies, in which players chose at least 90 when the private signal was 100 and at most 10 for a private signal of 0. When public and private signals were combined, the presence of private signals made half of the groups choose numbers closer to 50 than in the treatment with pure public signals. This shows that private extrinsic signals may affect behavior even though such behavior is not an equilibrium, and they may reduce the impact of extrinsic public signals that might otherwise cause sunspot equilibria.

Vranceanu, Besancenot, and Dubart (2013) analyze whether uninformative messages with a clear connotation can affect behavior in a global-game environment. As in the work of Heinemann et al. (2009), subjects can choose between a safe and a risky option, where the risky option yields a higher payoff provided that sufficiently many group members decide for it. They compare groups that, before making their decisions, receive a positive message with groups that receive a negative message. The positive message reads “In a past experiment, subjects that had chosen the risky option were satisfied with their choice.” The negative message reads “In a past experiment, subjects that had chosen the risky option were disappointed by their choice.” The number of risky choices by subjects with the positive message was higher than that by subjects with the negative message. The difference is not significant at the 5% level, but close to it. The meaning of these messages cannot be quantified, and they give no clear recommendations for behavior. However, they may raise or lower subjective beliefs about the success of risky choices. The authors conclude that “rumors and other uninformative messages can trigger illiquidity in asset markets” (Vranceanu et al., 2013, p. 5).
Sunspot equilibria are equilibria in which strategies depend on extrinsic signals that are unrelated to the players’ payoff functions. Any game with multiple equilibria also has sunspot equilibria, in which all agents coordinate on playing a particular equilibrium for the respective realization of the random variable. Once agents are coordinated, the extrinsic signal selects the equilibrium: agents condition their choices on the realization of the extrinsic signal. The experimental evidence indicates that in games with strategic complementarities, chosen actions are rather sensitive to extrinsic signals. The reason may be that any message whose connotation refers to a higher strategy provides an incentive to raise one’s own strategy if one believes that others might be affected by the message.
CONCLUDING REMARKS AND OPEN ISSUES FOR FUTURE RESEARCH

This article provides a survey of applications of experimental macroeconomics to central banking issues. We have argued that experiments are helpful for better understanding the channels by which monetary policy affects decisions and the impacts of different communication strategies, and for bench-testing monetary policy rules. We hope this article also contributes to a better understanding of the prospects (especially policy implications) and limitations of the use of experiments in monetary policy and central banking.

We view laboratory experiments as complementary to other methods used in macroeconomic research. An experiment can be replicated, so that many economies with the same patterns can be created; this allows for multiple observations, which is necessary for testing theories. Although the created economies are synthetic, they can preserve the main features of a real economy and allow specific research questions to be answered. Experiments are particularly useful in testing responses to incentives, the formation of expectations, the effects of communication and information, and equilibrium selection.

We now suggest some avenues for future research where macro experiments could be useful. In Fig. 4, shaded arrows indicate some topics that could benefit from experimental analysis. These topics are largely inspired by the discussions about monetary policy tools and strategies during and after the recent financial crisis. Indeed, the global financial crisis has recast the debate about the roles and objectives of central banking.

Regarding objectives and institutions, it would be useful to analyze how government debt and political pressures may affect the objectives of central banks. Games of political economy can be, and have been, tested in the lab.
Fig. 4. Perspectives for Experimental Research in Central Banking.
Central bank independence and its pursuit of inflation stabilization may be compromised by debt as well as fiscal and political pressures, especially in an era of bank distress and sovereign debt problems. The credibility of the inflation objective relies on central bank independence. To what extent may political pressures force the central bank to deviate from stabilizing inflation?

A related issue concerns the possible time inconsistency of monetary policy. While we described some experiments related to the inflation bias of conventional policy, there is no work yet analyzing the time inconsistency of exiting unconventional monetary policy after a crisis. The issue is already looming on the horizon. How well do central bankers manage several objectives with a single instrument, and to what extent should central banks rely on market forces and automatic stabilizers? Experiments can offer an appropriate framework for treating these issues. The experiments of Engle-Warnick and Turdaliev (2010) and Duffy and Heinemann (2014) could be starting points for dealing with these questions. How challenging is it for experimental central bankers to stabilize an economy when these objectives are competing? Think of stabilizing inflation and exchange rates simultaneously: do subjects manage to find simple heuristics or rules to deal with these objectives, or do they lose control?

The channels of monetary policy also need to be researched more thoroughly with experiments. DSGE models already borrow heavily from behavioral economics by including habit formation, limited capacities of information processing, or particular assumptions about risk aversion. Loss aversion can be included in modeling the monetary transmission channel, and debt aversion matters as soon as one accounts for the heterogeneity of consumers.34

Another issue related to central bank objectives is the relationship between price stability and financial stability. The crisis has shown that the traditional separation between a monetary authority targeting price stability and regulatory authorities independently targeting financial stability is no longer viable. More experimental research is needed for analyzing how rules of monetary policy and financial regulation interact in containing asset-price bubbles.35 A recent paper by Giusti, Jiang, and Xu (2012) makes a step in this direction. It shows that bubbles disappear with high interest rates in an experimental asset market. Fixing the dividend process and the terminal value of the asset, the time trend of the fundamental value of the asset becomes positive with a high interest rate, and subjects are more likely to follow the fundamental value. While Giusti et al. (2012) only study the impact of a constant interest rate, Fischbacher, Hens, and Zeisberger (2013) implement a rule by which
the interest rate is positively related to asset prices. The authors observe only a minor impact of the rule on the size of bubbles, which indicates that opportunity costs of speculative trades are not a powerful instrument for containing speculation. Expected liquidity restrictions, instead, seem to have stronger effects on the size of bubbles. Another interesting issue would be to test whether specific forms of macro-prudential regulation achieve stability of asset prices and inflation simultaneously or whether there are inherent conflicts between these goals.

One could also test the impact of macro-prudential equity or liquidity ratios on financial stability. The lab offers an environment for testing alternative macro-prudential tools. A large variety of instruments have been considered (see, e.g., Shin, 2011) to limit the pro-cyclicality of the financial system. While real data only provide isolated examples, in the lab one could systematically test the effectiveness of countercyclical capital requirements or time-varying reserve requirements. Jeanne and Korinek (2012) propose a model of crises occurring under financial liberalization, in which they evaluate the effect of macro-prudential regulation in terms of reduced crisis occurrence and increased growth. Such a model could serve as a basis for constructing an experimental environment in which alternative measures could be tested.

One of the biggest advantages of laboratory experiments is the experimenter’s control over subjects’ information. In the aftermath of the crisis, communication and forward guidance have gained in importance. A pressing question is how central banks can achieve the negative real interest rates that are desired in a liquidity trap. The central bank must coordinate expectations on a positive long-run inflation target, but private agents must also apply backward induction as in the Cagan (1956) model of the price level. With purely adaptive expectations, it may be impossible to leave a liquidity trap.

The last issue also highlights that traditional monetary policy instruments may become ineffective during or after a financial crisis, justifying the use of unconventional policy measures (e.g., quantitative easing or credit easing). Because these measures are adopted under particular circumstances, real data only provide very specific illustrations of their effects. Instead, experiments can offer a way to study their implementation more systematically and to isolate their effects in the lab. A related but more general question is linked to equilibrium indeterminacy in DSGE models with long-run neutrality of money. While models usually apply a transversality condition to establish uniqueness, experiments can be used to analyze under which conditions the equilibrium selected by the transversality condition is most likely to occur.
NOTES

1. In fact, experiments are already used as a policy tool to design regulated markets for utilities and auction schemes (Ricciuti, 2008).

2. “A common argument of skeptics against the use of laboratory experiments (…) as a policy advice instrument (…) is its supposed lack of external validity. (…) [I]f regularities observed in the laboratory do not carry over to the field, any conclusions and policy advice (…) could be dangerously misleading” (Riedl, 2010, p. 87). In research that is concerned with policy advice, laboratory experiments should be viewed as an important complementary research method. “In the ideal case, an economic policy reform is evaluated with all possible scientific methods before a political decision is made. That is, theoretically, experimentally in the lab and the field, and with traditional applied econometrics” (Riedl, 2010, p. 88). Since field experiments are difficult to pursue in central banking, lab experiments gain importance. For a general discussion of external validity, see, for example, Druckman and Kam (2011) or Kessler and Vesterlund (2014).

3. This figure as well as Fig. 4 is inspired by Geraats (2002, Fig. 1).

4. In particular, Woodford (2003) has stressed the importance of managing expectations for the conduct of monetary policy. Laboratory experiments are appropriate for improving our understanding of the relationships between monetary policy, agents’ expectations, and equilibrium outcomes, as the underlying model can be kept under control and expectation formation processes are observable. Hommes (2011) gives a literature review of laboratory experiments that can be used to “validate expectations hypotheses and learning models” (p. 3). He is particularly interested in the role of heterogeneity in expectations and discusses learning-to-forecast experiments in order to find a general theory of heterogeneous expectations.

5. In Treatments NH and RH, the number of periods T was set to 20; in NC and RC, T = 10.

6. For further examples, see Heemeijer, Hommes, Sonnemans, and Tuinstra (2009) or Bao, Hommes, Sonnemans, and Tuinstra (2012).

7. See Sutan and Willinger (2009) for an experiment on guessing games with positive and negative feedback. They show that levels of reasoning are about the same in both environments but lead to faster convergence toward equilibrium in the environment with strategic substitutes (negative feedback).

8. The frequently assumed constant probability of updating prices or information is of course a technical simplification that cannot be justified by microfoundation if inflation rates are volatile or if the probability of shocks varies over time.

9. The second treatment variable concerns the degree of product differentiation.

10. Fréchette (2009) surveys 13 studies that compare experiments with students and professionals. Most of these experiments are close to financial-market or management decisions. He summarizes that in 9 out of the 13 experiments there are no differences in behavior between subject pools that would lead to different conclusions.

11. Actually, Arifovic and Sargent (2003) set a maximum length of 100 periods to avoid that a game exceeds the maximum time for which subjects were invited.
12. In fact, many experimental papers in macroeconomics mentioned throughout this article present contextualized experiments, in which subjects are confronted with variables like employment, wages, or inflation, rather than formulating them in an abstract manner. The context may be responsible for some biases in observed behavior, because subjects may be affected by value judgments or by experience from real data of their own economy (e.g., when asked for inflation expectations). To avoid this, some papers, such as those of Duffy and Heinemann (2014) and Engle-Warnick and Turdaliev (2010), design experiments in an abstract way with neutral framing.

13. The same principle was simultaneously discovered by Henderson and McKibbin (1993).

14. Subjects are classified as being successful if they achieved positive payoffs.

15. Taylor (1999) uses linear rules with only two explanatory variables and finds an R² of 0.58 for the period 1954–1997.

16. These experiments implement New Keynesian dynamic stochastic general equilibrium (DSGE) models in the laboratory. Even though they are much simpler than the real economy, these large-scale experimental economies have a “proper” macro content. As already discussed, DSGE models represent the workhorse of current monetary policy analysis. As explained by Noussair, Pfajfar, and Zsiros (2014, pp. 71–108), “the objective is to create an experimental environment for the analysis of macroeconomic policy questions.” In this respect, it is important to study “whether a number of empirical stylized facts can be replicated in (…) experimental economies”.

17. See, for example, the work of Bosch-Domenech and Silvestre (1997), who show that raising credit levels has real effects when credit constraints are binding, but only leads to inflation when a large quantity of credit is available.

18. Petersen (2012) also uses a DSGE experiment for studying how households and firms respond to monetary shocks.

19. Lombardelli, Proudman, and Talbot (2005) replicate the work of Blinder and Morgan (2005) at the London School of Economics and Political Science.

20. The ECB’s rotation system will become effective once the number of members exceeds 18, which is expected to happen in 2015, when Lithuania joins the euro.

21. Geraats (2009) provides an overview of recent changes in central bank communication practices.

22. Compared to the model of Morris and Shin (2002), Cornand and Heinemann (2014) required the number of players to be finite and also changed the distributions of the signals from normal to uniform in order to have a simple distribution with bounded support for the experiment. Moreover, while in the work of Morris and Shin the coordination part is a zero-sum game (so that aggregate welfare depends only on the distance between actions and the fundamental state), Cornand and Heinemann change the utility function to make subjects’ tasks simpler without affecting equilibrium behavior.

23. There is no direct experimental evidence for this claim yet.

24. See, for example, Plott and Sunder (1982, 1988) and Sunder (1992). Plott (2002) and Sunder (1995) provide surveys of this literature.

25. This result should be taken into account with care. As the authors argue, while prices that outperform public information show that private information is being
impounded into the prices, the reverse is not true: supply will always contribute to market errors. “Even when the market price reflects both public and private information, the effect of the random supply can still result in prices that are less predictive of the payout than public information alone” (Middeldorp & Rosenkranz, 2011, p. 26).

26. The paper is closely related to the learning-to-forecast experiments and especially the works of Pfajfar and Žakelj (2014) and Assenza, Heemeijer, Hommes, and Massaro (2011): they use the same model, and the results come from agents’ inflation expectations. However, while these two papers study agents’ inflation expectations formation process and its interplay with monetary policy in stabilizing inflation, Cornand and M’baye focus on the role of announcing the inflation target on agents’ inflation expectations and on macroeconomic outcomes. They also consider a different reaction function for the central bank (allowing for output gap stabilization).

27. For each treatment, there were four sessions with six subjects each.

28. See Kagel (1995).

29. Assenza et al. (2011) focus on the analysis of switching between different rules. They show that an aggressive monetary policy described by a Taylor-type interest rate rule that adjusts the interest rate more than one-for-one in response to inflation is able to stabilize heterogeneous expectations.

30. The time structure in this experiment is a clever experimental design that preserves stationarity of the environment and still allows for inter-temporal feedback effects.

31. Starting with New Zealand in 1990, the use of IT by central banks has increased over time. IT is now implemented by more than 25 central banks around the world.

32. “Under a price-level target, a central bank would adjust its policy instrument (typically a short-term interest rate) in an effort to achieve a pre-announced level of a particular price index over the medium term. In contrast, under an inflation target, a central bank tries to achieve a pre-announced rate of inflation (that is, the change in the price level) over the medium term” (Kahn, 2009, p. 35).

33. Benhabib, Schmitt-Grohé, and Uribe (2002) introduce yet another source of indeterminacy. They show that the response function of monetary policy has two intersections with the Phillips curve relationship, because the monetary response is restricted by the zero lower bound on nominal interest rates. The equilibrium at zero interest is the liquidity trap.

34. Ahrens, Pirschel, and Snower (2014) show how loss aversion affects the price adjustment process. Meissner (2013) provides experimental evidence for debt aversion.

35. Following Smith, Suchanek, and Williams (1988), there is a vast literature showing that speculative behavior may lead to bubbles and crashes (see Camerer & Weigelt, 1993). See also Palan (2013).
ACKNOWLEDGMENT

We would like to thank the Editor John Duffy, Andreas Orland, and Stefan Palan for helpful comments on earlier drafts.
REFERENCES

Ackert, L., Church, B., & Gillette, A. (2004). Immediate disclosure or secrecy? The release of information in experimental asset markets. Financial Markets, Institutions and Instruments, 13(5), 219–243.
Adam, K. (2007). Experimental evidence on the persistence of output and inflation. Economic Journal, 117, 603–635.
Ahrens, S., Pirschel, I., & Snower, D. (2014). A theory of price adjustment under loss aversion. Centre for Economic Policy Research Discussion Paper No. 9964. London.
Alfarano, S., Morone, A., & Camacho, E. (2011). The role of public and private information in a laboratory financial market. Working Papers Series AD 2011-06. Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie).
Amano, R., Engle-Warnick, J., & Shukayev, M. (2011). Price-level targeting and inflation expectations: Experimental evidence. Bank of Canada Working Paper No. 2011-18. Ottawa.
Angeriz, A., & Arestis, P. (2008). Assessing inflation targeting through intervention analysis. Oxford Economic Papers, 60, 293–317.
Arifovic, J., & Jiang, J. H. (2013). Experimental evidence of sunspot bank runs. Mimeo, Bank of Canada.
Arifovic, J., & Sargent, T. J. (2003). Laboratory experiments with an expectational Phillips curve. In D. E. Altig & B. D. Smith (Eds.), Evolution and procedures in central banking. Cambridge, MA: Cambridge University Press.
Assenza, T., Bao, T., Hommes, C., & Massaro, D. (2014). Experiments on expectations in macroeconomics and finance. In R. Mark Isaac, D. Norton, & J. Duffy (Eds.), Experiments in macroeconomics (Vol. 17, pp. 11–70). Research in Experimental Economics. Bingley, UK: Emerald Group Publishing Limited.
Assenza, T., Heemeijer, P., Hommes, C., & Massaro, D. (2011). Individual expectations and aggregate macro behavior. CeNDEF Working Paper No. 2011-01. University of Amsterdam.
Baeriswyl, R., & Cornand, C. (2014). Reducing overreaction to central banks’ disclosure: Theory and experiment. Journal of the European Economic Association, 12, 1087–1126.
Ball, L., & Sheridan, N. (2005). Does inflation targeting matter? In B. Bernanke & M. Woodford (Eds.), The inflation targeting debate (pp. 249–276). Chicago: University of Chicago Press.
Bao, T., Hommes, C., Sonnemans, J., & Tuinstra, J. (2012). Individual expectations, limited rationality and aggregate outcomes. Journal of Economic Dynamics and Control, 36, 1101–1120.
Barro, R., & Gordon, D. (1983a). A positive theory of monetary policy in a natural rate model. Journal of Political Economy, 91, 589–610.
Barro, R., & Gordon, D. (1983b). Rules, discretion and reputation in a model of monetary policy. Journal of Monetary Economics, 12, 101–121.
Benhabib, J., Schmitt-Grohé, S., & Uribe, M. (2002). Avoiding liquidity traps. Journal of Political Economy, 110, 535–563.
Bernasconi, M., & Kirchkamp, O. (2000). Why do monetary policies matter? An experimental study of saving and inflation in an overlapping generations model. Journal of Monetary Economics, 46, 315–343.
Blinder, A. S., & Morgan, J. (2005). Are two heads better than one? Monetary policy by committee. Journal of Money, Credit, and Banking, 37, 789–812.
Blinder, A. S., & Morgan, J. (2008). Leadership in groups: A monetary policy experiment. International Journal of Central Banking, 4(4), 117–150.
Blinder, A. S., & Wyplosz, C. (2005, January). Central bank talk: Committee structure and communication policy. ASSA meetings, Philadelphia.
Bosch-Domenech, A., & Silvestre, J. (1997). Credit constraints in a general equilibrium: Experimental results. Economic Journal, 107, 1445–1464.
Bosman, R., Maier, P., Sadiraj, V., & van Winden, F. (2013). Let me vote! An experimental study of the effects of vote rotation in committees. Journal of Economic Behavior and Organization, 96(C), 32–47.
Cagan, P. (1956). The monetary dynamics of hyperinflation. In M. Friedman (Ed.), Studies in the quantity theory of money. Chicago: University of Chicago Press.
Calvo, G. (1983). Staggered prices in a utility maximizing framework. Journal of Monetary Economics, 12, 383–398.
Camerer, C., & Weigelt, K. (1993). Convergence in experimental double auctions for stochastically lived assets. In D. Friedman & J. Rust (Eds.), The double auction market: Theories, institutions and experimental evaluations (pp. 355–396). Redwood City, CA: Addison-Wesley.
Caplin, A., & Dean, M. (2013). Behavioral implications of rational inattention with Shannon entropy. NBER Working Paper No. 19318. Cambridge, MA.
Caplin, A., & Dean, M. (2014). Revealed preference, rational inattention, and costly information acquisition. NBER Working Paper No. 19876. Cambridge, MA.
Cheremukhin, A., Popova, A., & Tutino, A. (2011). Experimental evidence on rational inattention. Working Paper 1112, Federal Reserve Bank of Dallas.
Cornand, C., & Heinemann, F. (2008). Optimal degree of public information dissemination. Economic Journal, 118, 718–742.
Cornand, C., & Heinemann, F. (2013). Limited higher order beliefs and the welfare effects of public information. Mimeo.
Cornand, C., & Heinemann, F. (2014). Measuring agents’ overreaction to public information in games with strategic complementarities. Experimental Economics, 17, 61–77.
Cornand, C., & M’baye, C. K. (2013). Does inflation targeting matter? An experimental investigation. Working Paper GATE 2013-30. Université de Lyon, Lyon.
Croson, R. (2010). The use of students as participants in experimental research. Behavioral Operations Management Discussion Forum. Retrieved from http://www.informs.org/Community/BOM/Discussion-Forum
Dale, D. J., & Morgan, J. (2012). Experiments on the social value of public information. Mimeo.
Davis, D., & Korenok, O. (2011). Nominal price shocks in monopolistically competitive markets: An experimental analysis. Journal of Monetary Economics, 58, 578–589.
Diamond, D. W. (1985). Optimal release of information by firms. Journal of Finance, 40, 1071–1094.
Diamond, D. W., & Dybvig, P. H. (1983). Bank runs, deposit insurance, and liquidity. Journal of Political Economy, 91, 401–419.
Druckman, J. N., & Kam, C. D. (2011). Students as experimental participants: A defense of the “narrow data base”. In D. P. Green, J. H. Kuklinski, & A. Lupia (Eds.), Cambridge handbook of experimental political science. New York, NY: Cambridge University Press.
Duersch, P., & Eife, T. (2013). Price competition in an inflationary environment. Mimeo.
Duffy, J. (1998). Monetary theory in the laboratory. Federal Reserve Bank of St. Louis Review, September–October, pp. 9–26.
Duffy, J. (2008a). Macroeconomics: A survey of laboratory research. Working Papers 334, Department of Economics, University of Pittsburgh.
Duffy, J. (2008b). Experimental macroeconomics. In S. N. Durlauf & L. E. Blume (Eds.), The New Palgrave dictionary of economics (2nd ed.). New York, NY: Palgrave Macmillan.
Duffy, J., & Fisher, E. (2005). Sunspots in the laboratory. American Economic Review, 95, 510–529.
Duffy, J., & Heinemann, F. (2014). Central bank reputation, transparency and cheap talk as substitutes for commitment: Experimental evidence. Working Paper, Mimeo, Technische Universität Berlin, Berlin.
Duffy, J., & Ochs, J. (2012). Equilibrium selection in static and dynamic entry games. Games and Economic Behavior, 76, 97–116.
Ehrhart, K. M. (2001). European central bank operations: Experimental investigation of the fixed rate tender. Journal of International Money and Finance, 20, 871–893.
Engle-Warnick, J., & Turdaliev, N. (2010). An experimental test of Taylor-type rules with inexperienced central bankers. Experimental Economics, 13, 146–166.
Fehr, D., Heinemann, F., & Llorente-Saguer, A. (2013). The power of sunspots. Working Paper, SFB 649 Discussion Paper 2011-070, Berlin.
Fehr, E., & Tyran, J.-R. (2001). Does money illusion matter? American Economic Review, 91, 1239–1262.
Fehr, E., & Tyran, J.-R. (2005). Individual irrationality and aggregate outcomes. Journal of Economic Perspectives, 19, 43–66.
Fehr, E., & Tyran, J.-R. (2008). Limited rationality and strategic interaction: The impact of the strategic environment on nominal inertia. Econometrica, 76, 353–394.
Fehr, E., & Tyran, J.-R. (2014). Does money illusion matter?: Reply. American Economic Review, 104, 1063–1071.
Fischbacher, U., Hens, T., & Zeisberger, S. (2013). The impact of monetary policy on stock market bubbles and trading behavior: Evidence from the lab. Journal of Economic Dynamics and Control, 37, 2104–2122.
Fréchette, G. (2009). Laboratory experiments: Professionals versus students. Mimeo, New York University.
Geraats, P. M. (2002). Central bank transparency. Economic Journal, 112, F532–F565.
Geraats, P. M. (2007). The mystique of central bank speak. International Journal of Central Banking, 3, 37–80.
Geraats, P. M. (2009). Trends in monetary policy transparency. International Finance, 12, 235–268.
Giusti, G., Jiang, J. H., & Xu, Y. (2012). Eliminating laboratory asset bubbles by paying interest on cash. Mimeo, Bank of Canada.
Heemeijer, P., Hommes, C. H., Sonnemans, J., & Tuinstra, J. (2009). Price stability and volatility in markets with positive and negative expectations feedback: An experimental investigation. Journal of Economic Dynamics and Control, 33, 1052–1072.
Heinemann, F. (2000). Unique equilibrium in a model of self-fulfilling currency attacks: Comment. American Economic Review, 90, 316–318.
Heinemann, F., & Illing, G. (2002). Speculative attacks: Unique sunspot equilibrium and transparency. Journal of International Economics, 58, 429–450.
Heinemann, F., Nagel, R., & Ockenfels, P. (2004). The theory of global games on test: Experimental analysis of coordination games with public and private information. Econometrica, 72, 1583–1599.
Heinemann, F., Nagel, R., & Ockenfels, P. (2009). Measuring strategic uncertainty in coordination games. Review of Economic Studies, 76, 181–221.
Henderson, D. W., & McKibbin, W. J. (1993). A comparison of some basic monetary policy regimes for open economies: Implications of different degrees on instrument adjustment and wage persistence. Carnegie-Rochester Conference Series on Public Policy, 39, 221–317.
Hommes, C. H. (2011). The heterogeneous expectations hypothesis: Some evidence from the lab. Journal of Economic Dynamics and Control, 35, 1–24.
James, J., & Lawler, P. (2011). Optimal policy intervention and the social value of public information. American Economic Review, 101, 1561–1574.
Jeanne, O., & Korinek, A. (2012). Managing credit booms and busts: A Pigouvian taxation approach. NBER Working Paper No. 16377, National Bureau of Economic Research, Inc.
Kagel, J. H. (1995). Auctions: A survey of experimental research. In J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics. Princeton: Princeton University Press.
Kahn, G. A. (2009). Beyond inflation targeting: Should central banks target the price level? Federal Reserve Bank of Kansas City Economic Review, third quarter, 35–64.
Kessler, J., & Vesterlund, L. (2014). The external validity of laboratory experiments: Qualitative rather than quantitative effects. Mimeo, Wharton School, University of Pennsylvania.
Kool, C., Middeldorp, M., & Rosenkranz, S. (2011). Central bank transparency and the crowding out of private information in financial markets. Journal of Money, Credit and Banking, 43, 765–774.
Kryvtsov, O., & Petersen, L. (2013). Expectations and monetary policy: Experimental evidence. Bank of Canada Working Paper No. 2013-44. Ottawa.
Kydland, F. E., & Prescott, E. C. (1977). Rules rather than discretion: The inconsistency of optimal plans. Journal of Political Economy, 85, 473–492.
Lambsdorff, J. G., Schubert, M., & Giamattei, M. (2013). On the role of heuristics: Experimental evidence on inflation dynamics. Journal of Economic Dynamics and Control, 37, 1213–1229.
Levin, A., Natalucci, F., & Piger, J. (2004). The macroeconomic effects of inflation targeting. Federal Reserve Bank of St. Louis Review, 86, 51–80.
Lian, P., & Plott, C. (1998). General equilibrium, markets, macroeconomics and money in a laboratory experimental environment. Economic Theory, 12, 21–75.
Lombardelli, C., Proudman, J., & Talbot, J. (2005). Committees versus individuals: An experimental analysis of monetary policy decision making. International Journal of Central Banking, 1, 181–205.
Luhan, W. J., & Scharler, J. (2014). Inflation illusion and the Taylor principle: An experimental study. Journal of Economic Dynamics and Control, 45, 94–110.
Maćkowiak, B., & Wiederholt, M. (2009). Optimal sticky prices under rational inattention. American Economic Review, 99, 769–803.
Maier, P. (2010). How central banks take decisions: An analysis of monetary policy meetings. In P. Siklos, M. Bohl, & M. Wohar (Eds.), Challenges in central banking: The current institutional environment and forces affecting monetary policy. New York, NY: Cambridge University Press.
Mankiw, G., & Reis, R. (2002). Sticky information versus sticky prices: A proposal to replace the New Keynesian Phillips curve. Quarterly Journal of Economics, 117, 1295–1328.
Marimon, R., Spear, S. E., & Sunder, S. (1993). Expectationally driven market volatility: An experimental study. Journal of Economic Theory, 61, 74–103.
Marimon, R., & Sunder, S. (1993). Indeterminacy of equilibria in a hyperinflationary world: Experimental evidence. Econometrica, 61, 1073–1107.
Marimon, R., & Sunder, S. (1994). Expectations and learning under alternative monetary regimes: An experimental approach. Economic Theory, 4, 131–162.
Marimon, R., & Sunder, S. (1995). Does a constant money growth rule help stabilize inflation: Experimental evidence. Carnegie-Rochester Conference Series on Public Policy, 45, 111–156.
Meissner, T. (2013). Intertemporal consumption and debt aversion: An experimental study. SFB 649 Discussion Paper No. 2013-045. Berlin.
Middeldorp, M., & Rosenkranz, S. (2011). Central bank transparency and the crowding out of private information in an experimental asset market. Federal Reserve Bank of New York Staff Reports No. 487, March 2011.
Morris, S., & Shin, H. S. (1998). Unique equilibrium in a model of self-fulfilling currency attacks. American Economic Review, 88, 587–597.
Morris, S., & Shin, H. S. (2002). Social value of public information. American Economic Review, 92, 1522–1534.
Morris, S., & Shin, H. S. (2014). Risk-taking channel of monetary policy: A global game approach. Working Paper, Mimeo, Princeton University.
Nagel, R. (1995). Unraveling in guessing games: An experimental study. American Economic Review, 85, 1313–1326.
Noussair, C. N., Pfajfar, D., & Zsiros, J. (2014). Persistence of shocks in an experimental dynamic stochastic general equilibrium economy. In R. Mark Isaac, D. Norton, & J. Duffy (Eds.), Experiments in macroeconomics (Vol. 17, pp. 71–108). Research in Experimental Economics. Bingley, UK: Emerald Group Publishing Limited.
Orland, A., & Roos, M. W. (2013). The new Keynesian Phillips curve with myopic agents. Journal of Economic Dynamics and Control, 37, 2270–2286.
Palan, S. (2013). A review of bubbles and crashes in experimental asset markets. Journal of Economic Surveys, 27, 570–588.
Petersen, L. (2012). Nonneutrality of money, preferences and expectations in laboratory new Keynesian economies. SIGFIRM Working Paper No. 8, University of California, Santa Cruz.
Petersen, L., & Winn, A. (2014). Does money illusion matter?: Comment. American Economic Review, 104, 1047–1062.
Pfajfar, D., & Žakelj, B. (2014). Experimental evidence on inflation expectation formation. Journal of Economic Dynamics and Control, 44, 147–168.
Plott, C. R. (2002). Markets as information gathering tools. Southern Economic Journal, 67, 1–15.
Plott, C. R., & Sunder, S. (1982). Efficiency of controller security markets with insider information: An application of rational expectation models. Journal of Political Economy, 90, 663–698.
Plott, C. R., & Sunder, S. (1988). Rational expectations and the aggregation of diverse information in laboratory security markets. Econometrica, 56, 1085–1118.
Qu, H. (2014). How do market prices and cheap talk affect coordination? Journal of Accounting Research, 51, 1221–1260.
Ricciuti, R. (2008). Bringing macroeconomics into the lab. Journal of Macroeconomics, 30, 216–237.
Riedl, A. (2010). Behavioral and experimental economics do inform public policy. Finanzarchiv, 66, 65–95.
Roger, S. (2009). Inflation targeting at 20: Achievements and challenges. Technical Report, IMF Working Paper No. 09/236. International Monetary Fund.
Roger, S., & Stone, M. (2005). On target? The international experience with achieving inflation targets. Technical Report, IMF Working Paper No. 05/163, International Monetary Fund.
Schotter, A., & Sopher, B. (2007). Advice and behavior in intergenerational ultimatum games: An experimental approach. Games and Economic Behavior, 58, 365–393.
Shapiro, D., Shi, X., & Zillante, A. (2014). Level-k reasoning in generalized beauty contest. Games and Economic Behavior, 86, 308–329.
Shin, H. S. (2011). Macroprudential policies beyond Basel III. BIS Working Paper No. 60, Basel.
Smith, V. L., Suchanek, G. L., & Williams, A. W. (1988). Bubbles, crashes, and endogenous expectations in experimental spot asset markets. Econometrica, 56, 1119–1151.
Sunder, S. (1992). Market for information: Experimental evidence. Econometrica, 60, 667–695.
Sunder, S. (1995). Experimental asset markets: A survey. In J. H. Kagel & A. E. Roth (Eds.), Handbook of experimental economics. Princeton, NJ: Princeton University Press.
Sutan, A., & Willinger, M. (2009). Guessing with negative feedback: An experiment. Journal of Economic Dynamics and Control, 33, 1123–1133.
Svensson, L. E. O. (2003). Escaping from a liquidity trap and deflation: The foolproof way and others. Journal of Economic Perspectives, 17, 145–166.
Svensson, L. E. O. (2010). Inflation targeting. In B. Friedman & M. Woodford (Eds.), Handbook of monetary economics (1st ed., Vol. 3, Chap. 22, pp. 1237–1302). Amsterdam: Elsevier.
Taylor, J. (1993). Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy, 39, 195–214.
Taylor, J. (1999). A historical analysis of monetary policy rules. In J. Taylor (Ed.), Monetary policy rules (pp. 319–341). Chicago, IL: Chicago University Press.
Van Huyck, J. B., Battalio, R. C., & Walters, M. F. (1995). Commitment versus discretion in the peasant dictator game. Games and Economic Behavior, 10, 143–170.
Vranceanu, R., Besancenot, D., & Dubart, D. (2013, July 13). Can rumors and other uninformative messages cause illiquidity? Essec Research Center, DR-130.
Walsh, C. E. (1995). Optimal contracts for central bankers. American Economic Review, 85, 150–167.
Walsh, C. E. (2006). Transparency, flexibility, and inflation targeting. In F. Mishkin & K. Schmidt-Hebbel (Eds.), Monetary policy under inflation targeting (pp. 227–263). Santiago, Chile: Central Bank of Chile.
Wilson, B. (1998). Menu costs and nominal price friction: An experimental examination. Journal of Economic Behavior and Organization, 35, 371–388.
Woodford, M. (2003). Interest and prices: Foundations of a theory of monetary policy. Princeton, NJ: Princeton University Press.
EVOLVING BETTER STRATEGIES FOR CENTRAL BANK COMMUNICATION: EVIDENCE FROM THE LABORATORY

Jasmina Arifovic

ABSTRACT

This article describes an experiment in a Kydland/Prescott type of environment with cheap talk. Individual evolutionary learning (IEL) acts as a policy maker that makes inflation announcements and decides on actual inflation rates. IEL evolves a set of strategies based on the evaluation of their counterfactual payoffs, measured in terms of the disutility of inflation and unemployment. Two types of private agents make inflation forecasts. Type 1 agents are automated, and they set their forecast equal to the announced inflation rate. Type 2 agents are human subjects who submit their inflation forecasts and are rewarded based on their forecast errors. The fraction of each type evolves over time based on their performance. The experimental economies result in outcomes that are better than the Nash equilibrium. This article is the first to use an automated policy
maker that changes and adapts its rules over time in response to the environment in which human subjects make choices.

Keywords: Individual evolutionary learning; policy maker; experiments with human subjects
INTRODUCTION

Following Kydland and Prescott (1977) and Barro and Gordon (1983), the issue of time inconsistency and credibility building has been extensively studied. A number of studies investigate the possible use of nonbinding policy announcements, which can improve upon the time-inconsistent Nash solution. Most of them assume hidden information about the type of the policy maker or the state of the economy. In this case, the policy maker can indeed use a nonbinding policy announcement to provide a signal about its private information (e.g., Cukierman, 1992; Persson & Tabellini, 1993; Stein, 1989; Walsh, 1999). Thus, the observed announcement allows for a better prediction of the policy maker’s decision about the actual inflation rate.

In addition to models that examined reputation building in a world of rational policy makers and rational agents, a number of papers have studied how learning affects the outcomes in the Kydland and Prescott environment; see, for example, Sargent (1999), Cho, Williams, and Sargent (2002), and Cho and Sargent (1997). In these models, either the policy maker alone or both the policy maker and private agents learn, using recursive least squares, constant-gain, or stochastic-gradient learning. The learning outcomes narrow the set of equilibria down to the time-consistent but Pareto-inferior Nash equilibrium, with occasional “escape” dynamics toward the optimal Ramsey outcome. However, the Nash equilibrium is the most frequent outcome, while the Ramsey outcome rarely occurs.

In a continuous-time framework, Dawid and Deissenberg (2005) study an environment in which a policy maker uses “cheap talk” announcements of the inflation rate it intends to set and then implements the actual inflation rate. There is a continuum of two types of atomistic private agents, believers and nonbelievers, whose relative proportion in the population is constant over time. A rational policy maker decides on the announced and actual inflation rates by solving a dynamic optimization problem, taking into account the impact of its actions on agents’ forecasts. The results show
that this economy can reach a steady state that is Pareto superior to the Nash equilibrium, provided there is a sufficiently high stock of believers.

While in Dawid and Deissenberg (2005) the fraction of believers and nonbelievers is constant, in Arifovic, Dawid, Deissenberg, and Kostyshyna (2010) this fraction evolves over time, as private agents can choose whether or not to believe the policy maker’s announcements. In this type of evolving environment, the policy maker can no longer make decisions by solving a dynamic optimization problem. Instead, Arifovic et al. (2010) endow the policy maker with Individual Evolutionary Learning (IEL) (Arifovic & Ledyard, 2004, 2010) to aid its decision-making process. IEL evolves a set of strategies whose performance is evaluated based on their counterfactual payoffs, that is, the payoffs that the strategies would have earned had they been selected to be played out. The frequency of strategies whose counterfactual payoffs are relatively high increases over time via replication.1 New, different strategies are brought into the set via experimentation. The probability that a given strategy is selected to be played out is proportional to its relative counterfactual payoff (a stylized sketch of this updating cycle is given below). In Arifovic et al.’s implementation, each strategy in the IEL’s strategy set consists of two elements: the inflation announcement and the actual inflation rate that would be implemented if the strategy were selected. Believers set their inflation forecast equal to the inflation announcement. Nonbelievers update their forecasts using an error-correction mechanism. The fraction of believers and nonbelievers changes over time in response to their relative performance regarding their forecast errors. The simulation results show that the policy maker learns how to sustain a positive fraction of believers, which, in turn, results in outcomes that are Pareto superior to Nash. However, in order for these economies to survive and maintain better-than-Nash outcomes, there has to be a “healthy” fraction of nonbelievers as well; that is, they should not be driven out of the population. In other words, in addition to having a positive fraction of believers, the economy has to maintain a sufficient number of nonbelievers, who have to adjust quickly and must not face too high a cost of acquiring information. The economy has to have the right degree of heterogeneity to maintain “better than Nash” outcomes and to provide enough low-cost information to those who are adjusting their forecasts.
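The IEL cycle just described can be sketched in a few lines: experimentation, replication based on counterfactual payoffs, and payoff-proportional selection. The pool size, the mutation scale, and the quadratic loss below are illustrative assumptions rather than the parameterization of Arifovic et al. (2010), and the counterfactual evaluation is simplified to a comparison against the current forecasts.

```python
import numpy as np

rng = np.random.default_rng(2)
POOL, RHO_EXP, MUT_SCALE = 100, 0.1, 0.5   # pool size, experimentation rate, mutation scale

def payoff(strategy, forecasts):
    """Counterfactual policy-maker payoff: a stylized quadratic disutility of
    inflation and of the unemployment implied by forecast errors."""
    announce, actual = strategy
    unemployment = 1.0 - (actual - np.mean(forecasts))  # hypothetical Phillips-curve term
    return -(actual ** 2 + unemployment ** 2)

# each strategy is a pair (announced inflation, actual inflation)
pool = rng.uniform(0.0, 2.0, size=(POOL, 2))

def iel_step(pool, forecasts):
    # 1. experimentation: perturb a small fraction of strategies
    mutate = rng.random(POOL) < RHO_EXP
    pool[mutate] += rng.normal(0.0, MUT_SCALE, size=(int(mutate.sum()), 2))
    # 2. replication: pairwise tournaments on counterfactual payoffs
    u = np.array([payoff(s, forecasts) for s in pool])
    i, j = rng.integers(POOL, size=(2, POOL))
    pool = np.where((u[i] >= u[j])[:, None], pool[i], pool[j])
    # 3. selection: play a strategy with probability proportional to relative payoff
    u = np.array([payoff(s, forecasts) for s in pool])
    prob = u - u.min() + 1e-9
    prob /= prob.sum()
    chosen = pool[rng.choice(POOL, p=prob)]
    return pool, chosen

forecasts = np.full(50, 1.0)      # stand-in for the private agents' forecasts
pool, chosen = iel_step(pool, forecasts)
print("announced and actual inflation of the selected strategy:", chosen)
```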
reputation and achieve close-to-Ramsey outcomes for extended periods of time. However, in some of the sessions, occasional "backsliding" toward the Nash equilibrium was observed. Duffy and Heinemann (2014) conduct experiments in a Barro-Gordon type of environment and examine the discretionary monetary policy regime under five different treatments. They also conduct a treatment with a commitment monetary policy regime. Their results show that discretionary policy does not move experimental economies toward the Ramsey outcome. However, when policy makers can precommit to a policy, they have no difficulty implementing the equilibrium policy with low inflation. This article describes an implementation of the Arifovic et al. (2010) environment in which human subjects play the role of nonbelievers. As in Arifovic et al. (2010), the IEL plays the role of the policy maker and evolves its strategy set in the same manner. IEL has been implemented in a number of different environments. The main characteristic of its behavior is that it successfully captures features of experimental data, in many instances in economies where both the theory and other learning algorithms have not been able to describe the dynamics of observed behavior. For example, Arifovic and Ledyard (2012) combine IEL with other-regarding preferences and develop a model that is able to explain a number of stylized facts that characterize experiments with the Voluntary Contribution Mechanism, including the restart effect. The data generated by their model are quantitatively similar to the data from a variety of experiments and experimenters, and are insensitive to moderate variations in the parameters of the model.2 Given how well the IEL does in replicating experimental data, an interesting question is how the IEL would do in an environment where it interacts with human subjects. The application to the Kydland-Prescott environment described in this article is the first to study this question. The article examines this interaction between the IEL and human subjects, how well the IEL does as a policy maker, and how the results of these experimental economies compare to those of the simulated economies, where the IEL, interacting with two types of artificial private agents, is capable of steering the economy toward better-than-Nash outcomes. As in Arifovic et al., each strategy in the IEL's strategy set consists of two elements: the inflation announcement and the actual inflation rate, which the policy maker perfectly controls. The experimental design consists of two treatments: one in which human subjects are the only type of private agents, and another in which, in addition to human subjects, there is
another type of private agent, represented by robots, who play the role of believers; that is, they set their forecast equal to the IEL inflation announcement. The timing of events in both treatments is identical. First, a strategy from the IEL's set is selected, and the inflation announcement of that strategy is broadcast to the human subjects, who then make their forecasts. In the second treatment, the forecast of robot believers is set equal to the inflation announcement. The information about the actual inflation of the selected strategy is then given to the subjects. At the same time, subjects can see their payoffs, which depend on their forecast error and on the actual inflation. Finally, the updating of the IEL's strategy set takes place. This article is organized as follows. The section "Two Experiments" provides a brief overview of the existing experimental literature in this type of environment. The section "Description of the Arifovic et al. (2010) Environment" describes the main features of the study by Arifovic et al. (2010) and their main findings. The section "Experimental Design" describes the experimental design, and the section "Results" provides the results of the experiments. The section "Conclusion" concludes the analysis.
TWO EXPERIMENTS

There are two papers in the experimental literature that study the time inconsistency problem in an environment with a policy maker and a number of private agents. Arifovic and Sargent (2003) study a version of the Kydland-Prescott environment under a discretionary policy regime, while Duffy and Heinemann (2014) study a version of the Barro-Gordon environment under both a discretionary and a commitment monetary policy regime. Arifovic and Sargent (2003) examine the question of whether reputation building under a discretionary monetary policy regime can be a substitute for a commitment monetary regime. An experimental session consisted of several indefinitely repeated games. Each repeated game continued indefinitely, with the probability of continuation set equal to a discount factor that supported the Ramsey equilibrium outcome of the infinite-horizon economy. In each experimental session, one of the subjects, chosen at random, played the role of the policy maker, while the rest played the role of private agents. Subjects remained in the same roles throughout the duration of each experimental session.
In each period, the policy maker set a target inflation rate, $x_t$. The actual inflation rate was then determined as $y_t = x_t + v_t$, where $v_t$ was drawn randomly from a normal distribution with mean zero. Private agents made a forecast of the inflation rate, and their payoffs depended on the accuracy of these forecasts. The mean of their expectations was then used in the Phillips curve relationship to determine the actual rate of unemployment. When setting the target inflation rate, the policy maker did not know the private agents' expectations. The policy maker had knowledge of the true Phillips curve, of the existence of private agents who were trying to forecast the inflation rate, and of the histories of outcomes. The policy maker's payoff depended on the equally weighted sum of the squares of the unemployment and inflation rates. Private agents knew only that there was a policy maker in the experimental economy who would be setting the target inflation rate; they were also given information about the distribution of the disturbance, $v_t$. The treatments varied whether or not the policy maker had information about the previous period's average inflation expectations, as well as the variance of the shocks to the Phillips curve and to the target inflation rate, for a total of 12 sessions. Overall, the experimental results showed that in 9 of the 12 experimental economies, the policy maker pushed the inflation rate close to the Ramsey value for extended periods of time. Thus, in a pure discretionary environment, policy makers were able to build reputation and coordinate private agents' expectations on the first-best Ramsey equilibrium. In other words, Arifovic and Sargent find support for the "just do it" policy where, in a discretionary environment, reputation can substitute for a commitment regime. Their results also show that in four sessions, "backsliding toward the Nash" occurred after the Ramsey outcome was achieved and sustained for some time. Duffy and Heinemann (2014) study the Barro and Gordon (1983) environment, which differs from that of Arifovic and Sargent (2003) in terms of timing, the role that human subjects play, and the amount of information they have. They study five different implementations of the discretionary policy regime and, in addition, study the behavior of the experimental economies under the commitment regime. In terms of timing, the policy maker learns inflationary expectations prior to setting policy, unlike in Arifovic and Sargent, where the policy maker either did not receive this information at all or received it only ex post. Unlike Arifovic and Sargent, where only the policy maker was
informed of the underlying model, in Duffy and Heinemann both the policy maker and the private agents were informed. In addition, each of their sessions had 20 subjects who were randomly matched into two groups of 10, with no further interaction between the two groups. Each session for a matching group lasted for a number of indefinitely repeated games. In each repeated game, subjects in a matching group were randomly divided into groups of size 5. At the beginning of each repeated game, one member of a group was randomly selected to play the role of the policy maker. Thus, unlike Arifovic and Sargent, where the same subject remained the policy maker throughout the session, in Duffy and Heinemann the policy maker was randomly selected in each repeated game of a given session. The experiment consisted of a total of six treatments: discretionary policy, commitment, cheap talk, policy transparency, cheap talk combined with policy transparency, and economic transparency. Except for the commitment treatment, the other five are variants of the discretionary policy regime. In the baseline discretionary treatment, private agents move first by forming inflation expectations. Once the policy maker observes the supply shock and the average of inflation expectations, s/he chooses the rate of money growth, $m_t$. In the treatment with policy transparency, private agents learn the values of the policy maker's choice and the transmission shock at the end of the period, while in the treatment with economic transparency they learn the value of the supply shock at the same time as the policy maker learns it. In the cheap talk treatment, the policy maker moves first; after observing the supply shock, s/he chooses a message about what s/he intends to do; otherwise, the treatment corresponds to the baseline discretionary treatment. Finally, in the commitment treatment, the policy maker observes the supply shock and makes the policy choice prior to observing the private sector's expectations. Their experimental findings show that the mean choice of the money supply is indistinguishable from the Ramsey prediction in the commitment treatment, while it is significantly higher in all of the discretionary treatments. The hypothesis that the average money supply is equal to the prediction of the one-period Nash equilibrium can be rejected for cheap talk and economic transparency, but not for the other discretionary treatments. Overall, their results show that reputation does not serve as a substitute for commitment in any of the discretionary regimes. The monetary policy regime studied in this article corresponds to Duffy and Heinemann's treatment with discretionary monetary policy and cheap talk. Their result that cheap talk improves the
welfare relative to the Nash equilibrium of the discretionary policy regime is of particular interest for the results presented in this article.
DESCRIPTION OF THE ARIFOVIC ET AL. (2010) ENVIRONMENT

As the experimental economy described in this article corresponds to the one studied in Arifovic et al. (2010), this section describes their environment and outlines the main results of their study. The economy consists of a policy maker who cares about the inflation rate and the unemployment of two types of agents. In each time period, the policy maker announces an inflation rate, $y_t^a$, and then sets the actual inflation rate, $y_t$. The two types of private agents are believers and nonbelievers. As mentioned above, the experimental design closely mimics the model used in Arifovic et al. (2010), except that human subjects substitute for the second type of artificial agents, the nonbelievers.
Private Agents

Agents form inflation expectations ($x$) after having observed the policy maker's inflation announcement. In each time period, $t$, the believers set their expectation of inflation, $x_t^B$, equal to the announced inflation $y_t^a$: $x_t^i = y_t^a$, $i \in B$, while nonbelievers ($i \in NB$) form adaptive expectations by correcting their forecast error around the optimal solution of the static game:

$$x_t^{NB,i} = \frac{\theta^2 \phi_t y_t^a + \theta u^* + d_t^i}{1 + \theta^2 \phi_t} \qquad (1)$$

where $\theta > 0$ is a parameter from the augmented Phillips curve (given below), $\phi_t$ is the fraction of believers in the economy at time $t$, $u^*$ is the natural rate of unemployment, and $d_t^i$ is the correction term that takes into account the forecast error of the previous period:

$$d_{t+1}^i = d_t^i + \gamma \left( y_t - x_t^{NB,i} \right), \quad d_0^i = d_0 = 0 \qquad (2)$$

where $\gamma > 0$ is the nonbelievers' speed of learning.
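To make the nonbelievers' forecasting rule concrete, here is a minimal Python sketch of Eqs. (1) and (2). The parameter values and the illustrative inputs are hypothetical placeholders, not values taken from Arifovic et al. (2010).

```python
# Hypothetical parameter values, for illustration only
THETA = 1.0    # Phillips curve parameter, theta > 0 (assumed)
GAMMA = 0.3    # nonbelievers' speed of learning, gamma > 0 (assumed)
U_STAR = 5.5   # natural rate of unemployment (assumed)

def nonbeliever_forecast(y_announced, phi, d):
    """Eq. (1): forecast around the optimal solution of the static game."""
    return (THETA**2 * phi * y_announced + THETA * U_STAR + d) / (1.0 + THETA**2 * phi)

def updated_correction(d, y_actual, x_forecast):
    """Eq. (2): error-correction term for the next period (d_0 = 0)."""
    return d + GAMMA * (y_actual - x_forecast)

# One illustrative period: announcement of 2%, realized inflation of 3%
d = 0.0
x = nonbeliever_forecast(y_announced=2.0, phi=0.5, d=d)
d = updated_correction(d, y_actual=3.0, x_forecast=x)
```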
The payoff to each agent depends on the expectational error, the actual inflation rate, and, in the case of nonbelievers, the cost of forming a forecast:

$$J_t^i = -\frac{1}{2}\left[ (y_t - x_t^i)^2 + y_t^2 \right] - c^i \qquad (3)$$
where $c^i$ is the cost associated with forming a forecast: $c^i \geq 0$ if $i \in NB$, and $c^i = 0$ if $i \in B$. The fraction of believers in the economy, $\phi_t$, changes over time following a word-of-mouth information exchange among the private agents, as a function of the payoff difference between believers and nonbelievers. In each period, a fraction $\beta$ of all private agents, believers and nonbelievers, is chosen randomly. The chosen agents are then randomly paired, for example, agent $i$ with agent $k$. Agent $i$ observes agent $k$'s current strategy and vice versa. Each agent also observes the payoff of the other agent with some noise. Thus, agent $i$ observes the payoff of agent $k$ as

$$J_{observed}^k = J^k + \epsilon$$

where $\epsilon$ is random noise.3 If $J^i < J_{observed}^k$, that is, if agent $i$'s payoff is smaller than $k$'s observed payoff, agent $i$ adopts the strategy of agent $k$. Thus, a believer may become a nonbeliever and vice versa. The resulting dynamics of $\phi_t$ is stochastic. Assuming, for analytical simplicity, that $\epsilon$ is drawn from a distribution qualitatively similar to a Gaussian distribution,4 the expected change in the proportion of believers is given by

$$\Delta\hat{\phi}_t := \mathbb{E}\,\phi_{t+1} - \phi_t = \beta \phi_t (1 - \phi_t) \arctan\left( J_t^B - J_t^{NB} \right) \qquad (4)$$
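A minimal sketch of the believer-fraction dynamics combines Eq. (4) with the noise distribution given in note 4. The value β = 0.05 is the one used in the experimental design described below; the payoff numbers in the example are hypothetical.

```python
import numpy as np

BETA = 0.05  # fraction of agents sampled each period (see the design section)

def expected_change_in_believers(phi, payoff_B, payoff_NB):
    """Eq. (4): expected change in the fraction of believers."""
    return BETA * phi * (1.0 - phi) * np.arctan(payoff_B - payoff_NB)

def observation_noise(rng):
    """Noise on observed payoffs (note 4): unimodal with mean zero."""
    return 2.0 * np.tan(np.pi * (rng.random() - 0.5)) / np.pi

# Example: believers currently earn slightly more, so phi is expected to rise
phi_next = 0.5 + expected_change_in_believers(0.5, payoff_B=90.0, payoff_NB=88.0)
```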
The unemployment rate of each type of agent is determined by the following augmented Phillips curve:

$$u_t^i = u^* - \theta \left( y_t - \bar{x}_t^i \right), \quad i \in \{B, NB\} \qquad (5)$$
where $u^*$ is the natural rate of unemployment, equal to the actual rate of unemployment if agents form correct inflation expectations, $\theta > 0$ is a parameter, and $\bar{x}_t^i$ represents the average expectation of agents of type $i$.5 The payoff of the policy maker that results from implementing $y_t^a$ and $y_t$ depends negatively on the unemployment rates of each type of agent and on inflation, and is given by

$$J_t^G = -\frac{1}{2}\left[ \phi_t (u_t^B)^2 + (1 - \phi_t)(u_t^{NB})^2 + y_t^2 \right] \qquad (6)$$
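The Phillips curve (5) and the policy maker's payoff (6) translate directly into code. This sketch reuses the hypothetical THETA and U_STAR from the earlier snippet; the average forecasts passed in are placeholders.

```python
def unemployment(y_actual, avg_forecast):
    """Eq. (5): augmented Phillips curve for one agent type."""
    return U_STAR - THETA * (y_actual - avg_forecast)

def policy_maker_payoff(phi, y_actual, forecast_B, forecast_NB):
    """Eq. (6): depends on both types' unemployment and on inflation."""
    u_B = unemployment(y_actual, forecast_B)
    u_NB = unemployment(y_actual, forecast_NB)
    return -0.5 * (phi * u_B**2 + (1.0 - phi) * u_NB**2 + y_actual**2)
```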
Policy Maker

The policy maker's strategy set, $Y_t$, consists of $J$ strategies. Each strategy $j$, $j \in \{1, \ldots, J\}$, has two elements: an inflation announcement, $y_t^a(j)$, and an actual inflation rate, $y_t(j)$. The initial set of strategies, $Y_0$, is randomly generated. In each period $t$, the current set $Y_t$ is updated by increasing the frequency of strategies that result in relatively high performance (replication) and through exploration of new strategies (experimentation). Once the updating is completed, the policy maker selects the strategy that is implemented at $t + 1$ with a probability proportional to its payoff (rule selection). Note that the model assumes the policy maker has complete control over the inflation rate, $y_t$, that it selects to implement at time $t$. The updating of IEL takes place in the following way:

Experimentation
Experimentation introduces new alternative strategies that otherwise might never have a chance to be tried. This ensures that a certain amount of diversity is maintained. Each element of each strategy $[y_t^a(j), y_t(j)] \in Y_t$, $j \in \{1, \ldots, J\}$, is changed independently with a given probability, $p_{ex}$. The new value of $y_t^a(j)$ or $y_t(j)$ is generated as

new value = old value + $\epsilon$

where $\epsilon$ is a random number drawn from a standard normal distribution.

Computation of Counterfactual Payoffs
Once the agents' actions are observed, the policy maker computes both the counterfactual payoff that it would have received had it used any other strategy $j$ in $Y_t$, and the corresponding expected change in the fraction of believers, $\Delta\tilde{\phi}_t(j)$, that would have been associated with strategy $j$. Let the values that would have been obtained by using the rule $[y_t^a(j), y_t(j)] \in Y_t$, $j \in \{1, \ldots, J\}$, be denoted by $\tilde{J}_t^G(j)$, $\tilde{u}_t^B(j)$, and $\tilde{u}_t^{NB}(j)$.6 Thus, the instantaneous counterfactual payoff for strategy $j$, $j \in \{1, \ldots, J\}$, in period $t$ is given by:

$$\tilde{J}_t^G(j) = -\frac{1}{2}\left[ \phi_t (\tilde{u}_t^B(j))^2 + (1 - \phi_t)(\tilde{u}_t^{NB}(j))^2 + y_t(j)^2 \right] \qquad (7)$$
Arifovic et al. note that if the policy maker were solving a standard dynamic optimization problem (i.e., maximizing its cumulative discounted payoffs subject to the relevant dynamic constraints), the value to the policy maker of using a given strategy $j$ would not be limited to
the resulting instantaneous payoff $\tilde{J}_t^G(j)$. Instead, it would also include the changes in the state variables weighted by the proper dynamic multipliers, in order to capture the consequences of the changes on the future optimal stream of payoffs. While in their formulation of the model the policy maker does not explicitly solve an infinite-horizon dynamic optimization problem, it nonetheless takes into account, in a simplified form, the intertemporal effects of its policy. They formulate a pseudo value function equal to:

$$\tilde{V}_t^G(j) = \tilde{J}_t^G(j) + \Omega \Delta\tilde{\phi}_t(j) \qquad (8)$$
where $\Omega > 0$ is a parameter and $\Delta\tilde{\phi}_t(j)$ is the expected change of $\phi_t$ if strategy $j$ had been applied. In that way, the policy maker assigns a positive value to an increase in $\phi$; that is, it takes into account that a higher $\phi_t$ should allow for a higher payoff in the future. The parameter $\Omega$ captures how strongly the policy maker takes into account the expected intertemporal consequences of the different strategies that it considers.

Replication
Replication comes next. Replication reinforces rules that would have been good choices in previous time periods. It allows rules that may potentially have higher performance to replace those that might perform worse. Replication takes place through tournament selection. Pairs of rules are drawn randomly with replacement from the existing pool. The rule with the higher counterfactual payoff replaces the rule with the lower counterfactual payoff. This is repeated $J$ times, resulting in a new pool with an increased proportion of rules with higher counterfactual payoffs.

Selection
Experimentation and replication allow the policy maker to construct a pool of strategies, $Y_{t+1}$, that might be better than the pool of strategies from the previous period, $Y_t$. The new pool of strategies, $Y_{t+1}$, consists of new strategies generated by experimentation and a higher proportion of the strategies with higher counterfactual payoffs. The strategy that is effectively used at $t + 1$ is chosen randomly from $Y_{t+1}$, with a probability that is increasing in the strategy's counterfactual payoff.7
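Putting the pieces together, the following is a minimal sketch of one IEL update round. The set size J = 100, the experimentation probability p_ex = 0.2, and the bounds [−10, 15] come from the design section below; the standard-normal experimentation noise follows the description above, and the shift of payoffs by the pool minimum follows note 7. The counterfactual payoff function, which would embed Eqs. (7) and (8), is passed in as an argument.

```python
import numpy as np

rng = np.random.default_rng(0)
J, P_EX = 100, 0.2
LOW, HIGH = -10.0, 15.0

def iel_update(strategies, counterfactual_payoff):
    """One IEL round: experimentation, evaluation, replication, selection.

    strategies: (J, 2) array; column 0 holds announcements y_a(j),
    column 1 holds actual inflation rates y(j).
    """
    # Experimentation: each element is mutated independently with prob. P_EX
    mutate = rng.random(strategies.shape) < P_EX
    pool = np.clip(strategies + mutate * rng.standard_normal(strategies.shape),
                   LOW, HIGH)

    # Counterfactual evaluation, Eqs. (7)-(8)
    payoffs = np.array([counterfactual_payoff(y_a, y) for y_a, y in pool])

    # Replication: J pairwise tournaments, drawn with replacement
    a, b = rng.integers(J, size=J), rng.integers(J, size=J)
    winners = np.where(payoffs[a] >= payoffs[b], a, b)
    pool, payoffs = pool[winners], payoffs[winners]

    # Selection: probability increasing in payoff; subtract the pool minimum
    # so that possibly negative values yield valid probabilities (note 7)
    shifted = payoffs - payoffs.min()
    probs = shifted / shifted.sum() if shifted.sum() > 0 else np.full(J, 1.0 / J)
    selected = pool[rng.choice(J, p=probs)]
    return pool, selected
```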
In a set of simulations in which they examined the impact of variation in different parameters of the model, Arifovic et al. (2010) show that their simulated economies can, indeed, reach outcomes better than Nash, with welfare greater than Nash for both the policy maker and the private agents. The results depend crucially on the survival of believers, as well as on the relatively quick adjustment of nonbelievers. In my experiment, I examine how this type of economy performs when human subjects play the role of private agents.
EXPERIMENTAL DESIGN

In the experimental environment, the IEL plays the role of the policy maker, as in Arifovic et al. (2010), while human subjects replace one type of private agent, the nonbelievers, of the original environment. There are two treatments. In the first treatment, the Humans Only treatment, the IEL plays the role of the policy maker and human subjects make inflation forecasts. In the second treatment, the Humans and Robots treatment, in addition to the IEL and human subjects, there are robot believers that set their forecast equal to the inflation announcement. The task of subjects in each treatment is to forecast the inflation rate in every time period. Each experimental session lasts for 50 periods. The payoffs of human subjects are computed differently from those given in (3), where the payoffs take values $\leq 0$. In order to avoid negative payoffs for human subjects, a positive constant, $A$, is added to the human subjects' payoffs. In addition, in order to ensure that there is an incentive to improve forecasts, the forecast error is multiplied by a positive coefficient $b$. Thus, the payoff of human subject $i$ in period $t$ is computed as:

$$J_t^{h,i} = A - \frac{1}{2}\left[ b (y_t - x_t^{h,i})^2 + y_t^2 \right] \qquad (9)$$
For symmetry, the payoffs of robots, $J^r$, are computed in the same way. In the experiment, $A = 100$ and $b = 8$. The Nash equilibrium values of the inflation rate, $y^{NE}$, and the unemployment rate, $u^{NE}$, are both equal to 5.5 for this economy. The optimal Ramsey outcome has $y^R = 0$ and $u^R = u^*$. The IEL parameter values are identical to those in Arifovic et al. (2010): $J = 100$, $p_{ex} = 0.2$, and $\beta = 0.05$. The lower bound for the values of inflation announcements and inflation rates was set equal to −10, and the upper bound was set equal to 15.
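For concreteness, the subjects' payoff function (9) with the experimental parameters A = 100 and b = 8 can be written as:

```python
A, B = 100.0, 8.0  # constants used in the experiment

def subject_payoff(y_actual, forecast):
    """Eq. (9): payoff of a human subject (robots are paid the same way)."""
    return A - 0.5 * (B * (y_actual - forecast)**2 + y_actual**2)

# A perfect forecast at the Nash inflation rate of 5.5 earns
# 100 - 0.5 * 5.5**2 = 84.875 points
```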
Sequence of events within an experimental period (a code sketch summarizing this sequence appears below):
• A strategy that consists of two elements, an inflation announcement, $y_t^a$, and an actual inflation rate, $y_t$, is selected from the current period strategy set, $Y_t$.8 Subjects are informed about the inflation announcement, $y_t^a$.
• Subjects make their decisions about the forecast of the inflation rate, $x_t^{h,i}$, $i \in \{1, \ldots, N\}$, where $N$ is the number of subjects that participate in a given session. In the treatment with robots, the robots' forecast is set equal to the announced inflation rate: $x_t^r = y_t^a$. In the initial period, $\phi_t = 0.5$; thus robot believers and human subjects start out with equal representation in the experimental economy.
• Given the actual inflation rate of the selected strategy, the subjects' (and robots', in the Humans and Robots treatment) payoffs are calculated using (9). Subjects' payoffs and their forecast errors are displayed on their screens.
• The fraction of believers for $t + 1$, $\phi_{t+1}$, is calculated as $\phi_t + \Delta\phi_t$, where $\Delta\phi_t$ is computed using (4). The fractions $\phi_{t+1}$ and $1 - \phi_{t+1}$ determine the weights of the robots' unemployment rate and the human subjects' unemployment rate in the policy maker's payoff function at period $t + 1$.
• The payoff of the policy maker's strategy that was selected for implementation is computed.
• IEL updating:
Experimentation: Each element $k$, $k = 1, 2$, of a strategy $j$, $j \in \{1, \ldots, J\}$, undergoes experimentation with probability $p_{ex}$, independently across elements and strategies. If experimentation takes place, the existing value of the element is taken as the mean of a normal distribution with a given standard deviation, and a new value is drawn from this distribution.
The median of the nonbelievers' forecasts is computed. The median of the believers' forecasts is simply equal to the inflation announcement. The unemployment rate of each type of agent is calculated using Eq. (5).9 Using the median forecasts of the two types of agents, counterfactual unemployment rates are calculated for each strategy in the strategy set, $Y_t$. When calculating the counterfactual unemployment rate for human subjects, $\tilde{u}_t^h(j)$, that enters into strategy $j$'s counterfactual payoff, the median forecast of human subjects comes from subjects' actual choices and is thus the same for all $j$. The differences in counterfactual payoffs of different strategies come from the different values of $y_t(j)$. Thus the value of an inflation announcement, $y_t^a(j)$, $j \in \{1, \ldots, J\}$, does not
enter into the evaluation of strategies' counterfactual payoffs in the Humans Only treatment. However, in the presence of robots, their "median" forecast is set equal to $y_t^a(j)$ for each $j$. This in turn affects the computation of the robots' counterfactual unemployment, $\tilde{u}_t^B(j)$. In this case, the inflation announcement part of the strategy gets evaluated through the payoff function.
In the Humans and Robots treatment, the counterfactual expected change in the fraction of believers, $\Delta\phi_t(j)$ (with the nonbelievers' fraction changing by $-\Delta\phi_t(j)$), is calculated for each strategy $j$ using Eq. (4). Note that the fraction of each type of agent depends on their relative performance as defined by their payoffs, which depend on their forecast errors. Thus, these fractions change over time depending on how robots' and human subjects' forecast errors compare.
Using the values of the variables computed above, the counterfactual payoffs for each strategy $j$ are calculated using (7) in the Humans Only treatment, and using (7) and (8) in the Humans and Robots treatment. Note that in the presence of robot believers, the same pseudo value function given in (8) is used in the experimental IEL updating as well.
Replication takes place. A new strategy is selected as described in Selection.
• Subjects move to the screen for the following period by clicking on Continue. This screen has the new announcement, $y_{t+1}^a$, in the top left corner and a box where subjects enter their new forecast, $x^{h,i}$. On the right-hand side, a chart that displays historical values of the actual inflation and the subject's forecasts is shown, together with a table that contains the historical data. Fig. 1 shows a screenshot from the program.
The subjects were told the following about the policy maker's decisions: "The policy maker prefers both a lower inflation rate and a lower unemployment rate. In this market, if the subjects (including you) over-forecast the inflation rate, the unemployment will be relatively high. Likewise, if the market forecasts are generally below the actual inflation rate, the unemployment rate will be relatively low. The policy maker adjusts their announced and actual inflation rates based on the information on the past unemployment rates and inflation rates." Thus, there was no mention in the instructions of either an algorithm or a human subject playing the role of the policy maker. Regarding the behavior of the policy maker, subjects were given qualitative information about the Phillips curve relationship and how it affects the policy maker's payoffs.
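The per-period sequence listed above can be summarized in a short sketch. This is an illustration, not the Z-tree implementation actually used in the sessions: it reuses subject_payoff, expected_change_in_believers, and iel_update from the earlier snippets, and human_forecasts stands in for the subjects' live input.

```python
import numpy as np

def run_period(selected, phi, human_forecasts, counterfactual_payoff, strategies):
    """One period of the Humans and Robots treatment.

    selected = (y_a, y): announcement and actual inflation of the strategy
    chosen at the end of the previous period.
    """
    y_a, y = selected
    robot_forecast = y_a  # believers simply copy the announcement

    # Payoffs of both agent types, Eq. (9)
    human_pay = [subject_payoff(y, x) for x in human_forecasts]
    robot_pay = subject_payoff(y, robot_forecast)

    # Fraction of believers for t + 1, Eq. (4)
    phi_next = phi + expected_change_in_believers(
        phi, robot_pay, float(np.mean(human_pay)))

    # IEL updating and selection of next period's strategy
    strategies, next_selected = iel_update(strategies, counterfactual_payoff)
    return strategies, next_selected, phi_next
```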
Fig. 1. Screen Shot.
Four sessions were conducted for each treatment. All the software, including the software for the updating of the IEL, was programmed using Z-tree (Fischbacher, 2007).10 Three of the sessions were conducted with 10 subjects, two with 9 subjects, one with 8 subjects, and two with 7 subjects.
RESULTS

Table 1 presents the values of average inflation, the standard deviation of inflation, and the average of absolute forecast errors of human subjects in the Humans Only treatment, while Table 2 presents the same variables for the Humans and Robots treatment. The comparison of the values for each treatment indicates the following: (1) Inflation rates in the Humans Only treatment are higher than in the Humans and Robots treatment. (2) Inflation rates exhibit higher volatility in the Humans and Robots treatment, as indicated by the higher values of the standard deviation of the inflation rates, $\sigma(y_t)$, in that treatment. (3) The averages of absolute
Table 1. Averages: Inflation, Inflation Std. Deviation, Average of Absolute Forecast Errors, Humans Only.

                     Session 1   Session 2   Session 3   Session 4   Average
y_t (%)                 3.99        4.10        4.32        4.75       4.29
σ(y_t)                  1.05        1.26        1.41        1.17       1.23
|y_t − x_t^H| (%)       0.62        0.84        1.38        1.26       1.03
Table 2. Averages: Inflation, Inflation Std. Deviation, Average of Absolute Forecast Errors, Humans and Robots.

                     Session 1   Session 2   Session 3   Session 4   Average
y_t (%)                 3.77        1.69        3.98        2.78       3.06
σ(y_t)                  1.68        2.28        1.48        2.03       1.87
|y_t − x_t^H| (%)       3.81        2.54        2.83        3.21       3.10
forecast errors, $|y_t - x_t^H|$, where $x_t^H$ is the average forecast of human subjects, are higher in the Humans and Robots treatment than in the Humans Only treatment.
Figs. 2 and 3 present the sets of IEL strategies (Fig. 2 shows the inflation announcement element, and Fig. 3 the actual inflation element) for the four sessions of the Humans Only treatment. The figures show the behavior of the sets over 50 periods and include all of the $J$ elements. Figs. 4 and 5 present the corresponding collections of IEL strategies (Fig. 4 shows the announcement element and Fig. 5 the actual inflation element) for the four sessions of the Humans and Robots treatment. For both elements, after several initial periods (all sets are initialized randomly from the interval [−10, 15], drawing from the uniform distribution), the rates of both inflation announcements and actual inflation rates stabilize within a band of values between 0 and the Nash equilibrium value of inflation of 5.5. As there is an inherent temptation for surprise inflation, these values do not converge to a single value, but rather oscillate within this band. The variance of the values appears to shrink for given periods of time and then expand, again remaining in the above-mentioned range of values.
Fig. 2. IEL Inflation Announcements, Humans Only.
For the inflation announcements as well as for the actual inflation rates, the band of values appears to be somewhat narrower in the Humans Only treatment. The explanation for this might be the absence of robot believers and thus less opportunity for the policy maker to deviate by creating a larger difference between inflation announcements and inflation rates. This is further discussed below by examining the announcements and inflation rates that were actually selected. Next, we look at the patterns of selected announcements and actual inflation rates across the two treatments. Fig. 6 presents time series
Fig. 3. IEL Actual Inflation Rates, Humans Only.

Fig. 4. IEL Inflation Announcements, Humans and Robots.
Fig. 5. IEL Actual Inflation Rates, Humans and Robots.
of these values for the four sessions of the Humans Only treatment, and Fig. 7 presents the same time series for the four sessions of the Humans and Robots treatment. The straight line in each subplot of each figure represents the Nash equilibrium value of inflation for this economy. General qualitative features that characterize the time series of both treatments are that both inflation announcements and actual inflation rates are lower than the Nash equilibrium value (except for a period or two in the Humans Only treatment where they exceed this value). Except for a few initial periods during which IEL adjustment takes place, the values of inflation announcements and actual inflation rates appear to move close together. In the Humans and Robots treatment, there are occasional periods when there is a relatively larger difference between the two. This again can be explained by the presence of believers, which gives the policy maker an incentive for somewhat more deviation between the two values.
Fig. 6. Inflation Announcements and Actual Inflation Rates, Humans Only.
The Level of Inflation Rates and Their Volatility

In order to test whether the inflation rates in both experimental treatments are smaller than the Nash equilibrium rate, I first compute average inflation rates over experimental periods 10 to 50 (I discard the first nine periods because of the initial IEL adjustment) for each session of each treatment. Cumulative density functions (CDFs) for each treatment, as well as the CDF of the Nash equilibrium inflation rate (a straight line centered on the equilibrium value), are presented in Fig. 8. Both of the experimental CDFs lie to the left of the Nash equilibrium, with the CDF of average inflation
Fig. 7. Inflation Announcements and Actual Inflation Rates, Humans and Robots.
rates of the Humans and Robots treatment to the left of the Humans Only treatment.11 The results of Kolmogorov-Smirnov tests show that the CDF of each treatment is larger than that of the Nash equilibrium at the 95% significance level, which implies that the average inflation rates of both treatments are lower than the Nash equilibrium rate. In addition, the Kolmogorov-Smirnov test shows that the CDF of the average inflation rates of the Humans and Robots treatment is larger than the CDF of the Humans Only treatment at the 95% significance level. Thus, the average inflation rates of the Humans
Fig. 8. CDFs of Inflation Rates.
Fig. 9. CDFs of Standard Deviations of Inflation Rates.
Only treatment are larger than the average inflation rates of the Humans and Robots treatment. The presence of believers leads to lower inflation rates because the private sector is more gullible. Without believers, attempts to generate surprise inflation simply result in higher inflation rates.
Fig. 9 presents CDFs of the standard deviations of inflation rates in both treatments. The figure shows that the CDF of the Humans and Robots treatment is larger than the CDF of the Humans Only treatment and that the difference is statistically significant using the Kolmogorov-Smirnov test (at 95%). Overall, the average inflation rates in the Humans and Robots treatment are lower than in the Humans Only treatment. However, they also exhibit higher volatility. Thus, the presence of believers results in lower average inflation rates, but also in higher inflation volatility.
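As an illustration of the one-sided two-sample Kolmogorov-Smirnov comparisons reported here, the sketch below applies SciPy to the session-level average inflation rates from Tables 1 and 2. Note that the article's tests use averages over periods 10-50, and with only four sessions per treatment this snippet is purely illustrative.

```python
from scipy.stats import ks_2samp

humans_only = [3.99, 4.10, 4.32, 4.75]    # Table 1: session averages of y_t
humans_robots = [3.77, 1.69, 3.98, 2.78]  # Table 2: session averages of y_t

# One-sided test that the Humans and Robots CDF lies above (to the left of)
# the Humans Only CDF, i.e., that its inflation rates are lower
stat, p_value = ks_2samp(humans_robots, humans_only, alternative="greater")
print(stat, p_value)
```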
Inflation Forecasts

Figs. 10 and 11 show time series of the median forecast of human subjects and the actual inflation in each session of each treatment. What we can see in these figures is that, generally, the difference between the median forecast
Fig. 10. Median Forecast of Human Subjects and Actual Inflation, Humans Only.
Fig. 11. Median Forecast of Human Subjects, Humans and Robots.
and the actual inflation rate is larger in the Humans and Robots treatment. The statistical tests show that this is, indeed, the case. Fig. 12 plots the CDFs of the averages of absolute forecast errors for both treatments. The CDF of the Humans Only treatment is larger than the CDF of the Humans and Robots treatment, and the difference is statistically significant using the Kolmogorov-Smirnov test (at 95%). Thus, in addition to higher inflation rate volatility, the Humans and Robots treatment also results in subjects making larger forecasting errors. Fig. 13 shows the evolution of the fraction of believers in the four sessions of the Humans and Robots treatment. The subplots show that the fraction of believers, which starts at 50%, fluctuates over time, but these fluctuations are not too large, and in none of the sessions is there a threat of their disappearance. (Note that this does happen in the simulations with both believers and nonbelievers being represented by artificial agents.)
Fig. 12. CDFs of Forecast Errors.

Fig. 13. Fraction of Robot Believers Over Time.
Fig. 14 shows the time series of the changes in the fraction of believers and of the absolute difference between the inflation announcement and the actual inflation rate over time. Generally, an increase in the absolute difference between $y_t^a$ and $y_t$ results in a subsequent decrease in the fraction of robot believers. In addition, sudden increases in $|y_t^a - y_t|$ are followed by decreases in their values, as the policy maker "realizes" that these big increases have a negative impact on the fraction of believers and thus a negative impact on the policy maker's own payoffs. This negative impact comes through two channels: one through the decrease in the "pseudo value function" part of the payoff, and the other directly through the higher actual inflation rates that are generally associated with increases in $|y_t^a - y_t|$.
Fig. 14. Change in the Fraction of Robots and Absolute Difference Between Inflation Announcement and Actual Inflation.
CONCLUSION

This article investigates behavior in an experimental economy that is a version of the Kydland-Prescott environment, where the IEL plays the role of the policy maker and human subjects and robots play the role of private agents whose task is to forecast the next period's inflation rate. In one treatment, Humans Only, the only type of private agents are human subjects. In the other, the Humans and Robots treatment, there are two types of private agents, human subjects and robots. Robots play the role of believers and set their inflation forecast equal to the inflation rate announced by the policy maker. As in the standard Kydland-Prescott environment, the policy maker's payoffs depend both on the inflation rate and on the rate of unemployment. In the Humans Only treatment, it is only the unemployment of human subjects that enters the payoff function. In the Humans and Robots treatment, the unemployment that enters the payoff function is a weighted average of the unemployment of both types of agents. Through its payoff function, the policy maker also cares about future changes in the fraction of believers. The payoffs of private agents depend negatively on their forecast errors. The fraction of each type of agent in the economy in the Humans and Robots treatment varies over time and depends on their relative performance. The results of both treatments show that average inflation rates are lower than the Nash equilibrium rate. Moreover, the average inflation rates in the Humans and Robots treatment are lower than in the Humans Only treatment. However, at the same time, the rates are more volatile in the former. In addition, human subjects' forecast errors are higher in the Humans and Robots treatment. After initial adjustment, the IEL strategy set remains within a band of values that are generally below the Nash equilibrium prediction. As there is always a temptation to inflate and reap the benefits in a given period, the sets never converge to a single value, for either inflation announcements or actual inflation rates. The strategies that are selected to be implemented follow a similar path; that is, they fluctuate over time, but again in a band with an upper bound that is lower than the Nash equilibrium value. In the Humans Only treatment, the inflation announcements and the actual inflation rates of the strategies that are selected for implementation move closer together than in the Humans and Robots treatment. In the Humans Only treatment, the tradeoff between inflation and unemployment prevents the IEL strategy sets from converging to values closer to the Nash equilibrium; that is, the negative impact of higher inflation
outweighs the benefit of lower unemployment, and the dynamics of the IEL adaptation give less weight to inflationary strategies. In the Humans and Robots treatment, in addition to the above effect, there is an impact on the fraction of believers in the economy, which again has a negative evolutionary effect on strategies that combine relatively low inflation announcements with relatively high actual inflation rates. As there is higher pressure not to inflate in this treatment, the average inflation rates are lower. The inflation announcements of the IEL policy maker can be interpreted as cheap talk. In Arifovic et al. (2010), the existence of believers helps the economy reach outcomes that are better than Nash. However, in the experimental economy described in this article, cheap talk also plays a role in the treatment with human subjects only: the average rates of inflation are lower than the Nash rate. It is worth pointing out that in Duffy and Heinemann (2014), the cheap talk treatment is the only one, among the discretionary policy treatments, that results in inflation rates that are lower than the Nash equilibrium rate. Further study of the IEL dynamics should help shed light on the actual role of cheap talk in these environments. Finally, this is the first time that an adaptive algorithm has been used in the role of a policy maker in experimental economies. Even though the IEL strategy set starts out with strategies that are randomly initialized from a large range, the IEL quickly adjusts toward strategies in a given range and most of the time yields outcomes that are better than Nash. The surprising result is that it can actually adjust in "real time"; that is, it plays this role successfully within the duration of an experimental economy. In addition, in the interaction with human subjects, it does better than it does in simulations in which artificial private agents adjust using some sort of adaptive algorithm or error-correction mechanism, as those economies generally converge to the Nash outcome.
NOTES

1. In this particular application, strategies' counterfactual payoffs depend negatively on the inflation and unemployment from the current period.
2. Other applications in which the IEL behavior is compared to experimental data include, for example, the Groves-Ledyard mechanism (Arifovic & Ledyard, 2010) and call markets (Arifovic & Ledyard, 2007).
3. The assumption here is that the payoff is not perfectly observable.
4. To obtain an analytically convenient expression for the computation of the policy maker's payoff, $\epsilon$ is drawn from a unimodal distribution with mean zero: $\epsilon = 2\tan(\pi(\mathrm{rand} - 0.5))/\pi$, with rand drawn from the uniform distribution on [0, 1].
5. The intuition behind the assumption of different rates of unemployment for believers and nonbelievers is that, even though there is no explicit modeling of separate labor markets, the two types of agents would accept different nominal wages based on their own inflation rate forecasts.
6. Details of the computation of these values can be found in the appendix of Arifovic et al. (2010).
7. Note that the values of $V_t^G(j)$ might be negative. In order to compute the probability of each strategy being selected, the minimum value of $V_t^G(j)$ in the pool is subtracted from each $V_t^G(j)$, and probabilities based on relative performance are then computed using these transformed values.
8. The initial set of strategies, $Y_0$, is populated randomly with $J$ uniform draws from [−10, 15]. In the first period, a strategy is chosen randomly from a uniform distribution.
9. Note that the unemployment rate for nonbelievers is calculated using the median of their forecasts.
10. I thank Chad Kendall for making available the Z-tree code that he wrote for the adaptation of IEL in the Groves-Ledyard mechanism as part of his project for my graduate experimental economics class.
11. In all of the figure legends, Treatment 1 refers to the Humans Only treatment, while Treatment 2 refers to the Humans and Robots treatment.
ACKNOWLEDGMENT

I would like to thank Andriy Baranskyy and Shiqu Zhou for excellent research assistance. Support from the CIGI/INET grants program (grant #5533, July 22, 2014) is gratefully acknowledged.
REFERENCES

Arifovic, J., Dawid, H., Deissenberg, C., & Kostyshyna, O. (2010). Learning benevolent leadership in a heterogeneous agents economy. Journal of Economic Dynamics and Control, 34, 1768-1790.
Arifovic, J., & Ledyard, J. (2007). Call market book information and efficiency. Journal of Economic Dynamics and Control, 34, 1971-2000.
Arifovic, J., & Ledyard, J. (2010). A behavioral model for mechanism design: Individual evolutionary learning. Journal of Economic Behavior and Organization, 31, 1971-2000.
Arifovic, J., & Ledyard, J. (2012). Individual evolutionary learning, other regarding preferences, and the voluntary contribution mechanism. Journal of Public Economics, 96, 808-823.
Arifovic, J., & Sargent, T. J. (2003). Experiments with human subjects in a Kydland-Prescott Phillips curve economy. In D. Altig & B. Smith (Eds.), The origins and evolution of central banking: Volume to inaugurate the Institute on Central Banking of the Federal Reserve Bank of Cleveland (pp. 23-56). Cambridge: Cambridge University Press.
Barro, R. J., & Gordon, D. B. (1983). Rules, discretion and reputation in a model of monetary policy. Journal of Monetary Economics, 12, 101-121.
Cho, I.-K., & Sargent, T. J. (1997). Learning to be credible. Manuscript prepared for presentation at the conference to celebrate the Bank of Portugal's 150th birthday.
Cho, I.-K., Williams, N., & Sargent, T. J. (2002). Escaping Nash inflation. Review of Economic Studies, 69, 1-40.
Cukierman, A. (1992). Central bank strategy, credibility, and independence: Theory and evidence. Cambridge, MA: The MIT Press.
Dawid, H., & Deissenberg, C. (2005). On the efficiency effects of private (dis)-trust in the policy maker. Journal of Economic Behavior and Organization, 57, 530-550.
Duffy, J., & Heinemann, F. (2014). Central bank reputation, cheap talk and transparency as substitutes for commitment: Experimental evidence. Manuscript, June 2014.
Fischbacher, U. (2007). Z-tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171-178.
Kydland, F. E., & Prescott, E. C. (1977). Rules rather than discretion: The inconsistency of optimal plans. Journal of Political Economy, 85, 473-491.
Persson, T., & Tabellini, G. (1993). Designing institutions for monetary stability. In A. H. Meltzer & C. I. Plosser (Eds.), Carnegie-Rochester Conference Series on Public Policy (Vol. 39, pp. 53-84). Amsterdam: North-Holland Publishing Co.
Sargent, T. J. (1999). The conquest of American inflation. Princeton, NJ: Princeton University Press.
Stein, J. (1989). Cheap talk and the Fed: A theory of imprecise policy announcements. The American Economic Review, 79, 32-42.
Walsh, C. E. (1999). Announcements, inflation targeting, and central bank incentives. Economica, 66, 255-269.
EXPERIMENTAL EVIDENCE ON THE ESSENTIALITY AND NEUTRALITY OF MONEY IN A SEARCH MODEL

John Duffy and Daniela Puzzello

ABSTRACT

We study a microfounded search model of exchange in the laboratory. Using a within-subjects design, we consider exchange behavior with and without an intrinsically worthless token object. While these tokens have no redemption value, like fiat money they may foster greater exchange and welfare via the coordinating role of having prices of goods in terms of tokens. We find that welfare is indeed improved by the presence of tokens provided that the economy starts out with a supply of such tokens. In economies that operate for some time without tokens, the later surprise introduction of tokens does not serve to improve welfare. We also explore the impact of announced changes in the economy-wide stock of tokens (fiat money) on prices. Consistent with the quantity theory of money, we find that increases in the stock of money (tokens) have no real effects and mainly result in proportionate changes to prices.
However, the same finding does not hold for decreases in the stock of money.

Keywords: Money; search; gift exchange; social norms; neutrality of money; experimental economics

Experiments in Macroeconomics
Research in Experimental Economics, Volume 17, 259-311
Copyright © 2014 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0193-2306/doi:10.1108/S0193-230620140000017008
INTRODUCTION

The use of money in exchange is a relatively recent development in the 200,000-year history of modern humans. The earliest known use of commodity (gold) money was approximately 5,000 years ago during the Bronze Age (third millennium BC) in Mesopotamia and Egypt, while the earliest known use of fiat (paper) money is around 800 years ago during the Song dynasty in China.1 Prior to the development of monetary exchange, nonmonetary "gift exchange" systems were the norm; see Graeber (2011).2 In a gift-exchange system, there is no money object and exchanges may not be direct or immediate. For example, a landowner might provide peasants with household provisions in exchange for promises of later repayment in terms of crops at harvest time. These gift-exchange systems were governed by well-defined social norms of behavior, for example, community-wide sanctioning mechanisms, and worked well so long as the extent of the market remained limited; they eventually broke down and were replaced by monetary exchange systems and explicit collateral requirements as the extent of the market broadened, agents became more specialized, and community-wide enforcement became more difficult; see Greif (2006). The conditions under which gift and monetary exchange systems can be rationalized as equilibrium outcomes have recently become the subject of a large theoretical literature in modern monetary economics employing microfounded search-theoretic models (see Lagos, Rocheteau, & Wright, 2014; Nosal & Rocheteau, 2011; and Williamson & Wright, 2011 for surveys). These models make clear the frictions and assumptions under which gift exchange and monetary exchange systems can be rationalized as an equilibrium phenomenon. With a finite population of agents, nonmonetary gift-exchange equilibria can, in principle, be sustained under the starkest of environmental conditions, namely anonymous random matching of agents, lack of commitment or enforcement, and only decentralized information on exchange behavior, provided that members of the economy as a whole
play according to a grim trigger “contagious strategy” as first suggested by Kandori (1992) and extended to the search-theoretic exchange framework by Araujo (2004). Monetary equilibria, where intrinsically worthless token objects are used as part of the exchange process, can co-exist in such environments. However, in these same environments, monetary systems cannot achieve the first best outcome whereas gift-exchange systems can. In particular, due to a time delay between the acceptance of money for production and the use of that money for later consumption, monetary equilibria will generally be less efficient than a subset of the nonmonetary gift-exchange equilibria that are possible in anonymous random matching exchange economies with finite populations of agents. Thus, in these environments, money is not essential in the sense that the introduction of money does not expand the Pareto frontier. Indeed, the addition of money to the nonmonetary environment we study does not affect the set of equilibria since there always exists a gift-exchange equilibrium (not involving the use of money) that implements the same equilibrium allocation as in the monetary equilibrium and results in the same welfare. In Duffy and Puzzello (2014), we explored whether gift-exchange equilibria could indeed be welfare improving relative to monetary equilibria in a laboratory experiment implementing a version of Lagos and Wright’s (2005) search-theoretic model of money. Despite the theoretical possibility that gift-exchange equilibria could achieve higher levels of welfare than the unique monetary equilibrium in the same environment, we found just the opposite: exchange activity and welfare were found to be higher when monetary exchange was possible than under a pure, nonmonetary gift-exchange system. We thus concluded that while money was not theoretically essential, behaviorally speaking money was essential to the achievement of higher welfare. We attributed that outcome to the coordinating role played by money and prices in monetary exchange systems. The experiments we reported on in that paper either had no money or a constant supply of a worthless token (fiat money) object that was present in the system at all times. In this article, we follow up on our earlier experimental study and ask whether the introduction or the removal of a token (money) object has any effect on exchange behavior and welfare. In particular, we are interested in understanding the historical transition from nonmonetary gift exchange to monetary exchange as well as the reverse scenario. In addition to exploring the essentiality of money, we further explore whether or not announced changes in the economy-wide quantity of a token money object have purely neutral effects on real activity, resulting only in changes to prices, as would be consistent with the long-run
"neutrality-of-money" proposition. This proposition has an even more recent history dating back to Hume's (1752) essay, "Of Money." As Hume (1752) observed:

    If we consider any one kingdom by itself, it is evident, that the greater or less plenty of money is of no consequence; since the prices of commodities are always proportionate to the plenty of money…
Hume's logic follows directly from the quantity theory of money. A one-time, unexpected, and permanent increase in the money supply should eventually result in an adjustment in the price level only, leaving all real activity unaffected. The main difficulty with testing this proposition (outside of the laboratory) is that a one-time, unexpected, and permanent increase in the money supply is not easy to engineer in a natural economic setting, and indeed, we are not aware of any such naturally occurring experiments. Nevertheless, all modern microfounded models in monetary economics involving rational agents and markets that always clear predict the long-run neutrality of money, and so it is of interest to consider the empirical support for this proposition.

Our first new experimental treatment explores the transition from a nonmonetary exchange economy to an economy with a constant supply of fiat money and asks whether the introduction of a fiat money object results in greater exchange and welfare, mirroring the transition from gift exchange to monetary exchange that was observed in human history. We also explore the reverse transition from a monetary world to a nonmonetary world. Here our main experimental finding is that economies that start off without a token (money) object coordinate on low-welfare gift-exchange equilibria, as we previously observed in Duffy and Puzzello (2014). The surprise introduction of a token (money) object midway through the experiment, however, does not increase exchange or raise welfare; instead, exchange and welfare remain low relative to the first best even after the token object is introduced, though subjects do use the token object in exchange. By contrast, if the economy starts out with a fixed supply of the token (fiat money) object, the presence of that object in the economy results in higher amounts of exchange and higher welfare relative to economies without the token object, again consistent with the findings of Duffy and Puzzello (2014). If that token object is removed (again as a surprise) midway through the experiment, the amount of exchange drops precipitously as does the level of welfare. These findings suggest that money plays an important coordination role in raising welfare but only if the economy has not previously coordinated on low-welfare gift-exchange equilibria. In particular, our
results suggest that the exchange history of an environment into which a money object is introduced is critical as to whether or not that money object is welfare-enhancing.

Our second new experimental treatment concerns a change in the stock of the token fiat money object. In particular, we place subjects in an environment with a constant total stock of M units of the fiat money object and midway through the experiment we make a surprise announcement that the supply of money is doubling to 2M. We also explore the reverse sequence of events in which agents start out in an economy with a total stock of fiat money of 2M units and midway through the experiment we make a surprise announcement that the supply of money is being reduced from 2M to M total units. In the case where the money supply doubles from M to 2M, we find that, consistent with the neutrality-of-money proposition, there are no real effects and prices also approximately double. However, in the reverse case where the money stock is reduced by half, prices do not decline proportionally and thus there are some real effects, in contrast to the neutrality-of-money proposition.

While the models and theory we are testing in this article are not new, the methodology of using laboratory methods to explore exchange behavior in economies with or without money and the validity of the neutrality-of-money proposition is a novel approach to evaluating these models and theories. As our research demonstrates, the key advantage of using laboratory methods is that it provides us with the control necessary to implement the dynamic, intertemporal, infinite horizon search-theoretic approach to modeling monetary and nonmonetary exchange, in particular, the pairwise anonymous random matching of subjects in combination with centralized Walrasian meetings. In addition, in the laboratory, it is possible to engage in policy experiments such as doubling the money supply that would be impossible (not to mention unethical) to conduct in the real world. Further, laboratory experiments provide us with a means of checking the robustness of theoretical predictions to populations of agents that may depart from the rational choice ideal in various ways. Finally, and perhaps most importantly, the theoretical environment we study admits a multiplicity of gift-exchange equilibria, in addition to a unique monetary exchange equilibrium. The question of which equilibrium (if any) is selected is ultimately an empirical question that laboratory methods allow us to address.

The remainder of this article is organized as follows. The next section discusses related literature. The section "The Lagos-Wright Environment" describes the Lagos-Wright environment that we implement in the laboratory including the parameterization of the model and equilibrium
predictions. The section “Experimental Design” presents our experimental design and the section “Experimental Results” summarizes the main findings from our experiment. Finally, the section “Conclusions and Directions for Future Research” provides a summary of the article as well as some directions for future research.
RELATED LITERATURE

The focus of this article is on the essentiality and neutrality of fiat money. Regarding the essentiality of money, the most closely related article is that by Duffy and Puzzello (2014), who also study the Lagos and Wright (2005) model of monetary exchange in the laboratory with a necessarily finite population of subjects (Lagos & Wright, 2005, have an infinite continuum of agents).3 Given a finite population of sufficiently patient agents, there exists a continuum of nonmonetary gift-exchange equilibria in addition to the monetary equilibrium; these gift-exchange equilibria are supported by a contagious grim-trigger strategy played by the society of agents as a whole (Kandori, 1992). Some of these gift-exchange equilibria Pareto dominate the monetary equilibrium, implying that money may fail to be essential (e.g., Aliprantis, Camera, & Puzzello, 2007a, 2007b; Araujo, 2004; Araujo, Camargo, Minetti, & Puzzello, 2012). However, Duffy and Puzzello find that subjects avoid nonmonetary gift-exchange equilibria in favor of coordinating on the monetary equilibrium. Duffy and Puzzello also study versions of the model when money is not available (see Aliprantis et al., 2007a, 2007b and Araujo et al., 2012) and find that welfare is significantly higher in environments with money than without money, suggesting that money plays a key role as an efficiency-enhancing coordination device.

Camera and Casari (2014) also compare outcomes across two environments, with fiat money ("tickets") and without fiat money. In their dynamic game, which shares some similarities with the Prisoner's Dilemma game, money is also not essential for achievement of the Pareto efficient outcome, which can instead be supported by social norms. The monetary environment they study involves both dynamic and distributional inefficiencies associated with the first-generation Kiyotaki and Wright (1989) money-search model in that ticket prices are exogenously fixed (there is no bargaining), money and goods are indivisible, there are restrictions on money holdings, and there is only decentralized pairwise random matching (there is no centralized meeting (CM) involving all players). They also
consider only small groups of four subjects, which may facilitate social norm mechanisms. Indeed, they find that the introduction of money does not improve average overall cooperation rates (exchanges) relative to an environment without money. Neither Camera and Casari (2014) nor Duffy and Puzzello (2014) explore what happens to patterns of exchange when money is first introduced and then removed, or when money is introduced at a later stage; that is, neither article employs a within-subjects design as we do in this article.

Regarding the neutrality-of-money hypothesis, our article is related to Lian and Plott (1998), who find support for the neutrality hypothesis in a general equilibrium environment using a between-subjects design. Their experiment involves a cash-in-advance constraint so that money must be used in exchange, and they imbue their money object (francs) with a known redemption value, which is reminiscent of commodity-money regimes. By contrast, we consider a within-subjects design, without any cash-in-advance constraints and where the token objects that can serve as money have no redemption value, as is the case in fiat money systems.4

Our study is also indirectly related to the experimental literature on money illusion. Shafir, Diamond, and Tversky (1997) collect survey data that support the conjecture that people tend to think in terms of nominal rather than real values. In their study, people chose between different options presented in nominal or real terms, and their reactions to variations in inflation and prices indicated the presence of money illusion. Fehr and Tyran (2001) propose an experimental approach to money illusion by studying firms' price-setting behavior in a monopolistically competitive economy. They find that the presence of money illusion has implications for real allocations, especially after negative nominal shocks, that is, money is not neutral after negative shocks. Similarly, Noussair, Richter, and Tyran (2012) also find an asymmetry in price adjustments in response to inflationary or deflationary nominal shocks in experimental asset markets. Specifically, they report that prices exhibit nominal inertia after a deflationary shock. Petersen and Winn (2014) argue that the nominal inertia observed by Fehr and Tyran (2001) is mainly due to the adaptive nature of firms' best responses rather than money illusion per se, but Fehr and Tyran (2014) argue that this is too narrow an interpretation.

By contrast, in this article we provide a test of the neutrality-of-money proposition in a more explicit, exchange-oriented setting, where agents must bargain over quantities and prices. In particular we study the Lagos and Wright (2005) search model of money, where money may or may not be used for exchange purposes. We find some qualified support for the neutrality-of-money proposition in response
to inflationary shocks. However, consistent with the earlier experimental literature, we also find that prices do not adjust downward in response to a deflationary shock.
THE LAGOS-WRIGHT ENVIRONMENT

In this section, we describe a modified version of the Lagos and Wright (2005) model with a finite population of agents. Time is discrete and the horizon is infinite. Let the population consist of 2N infinitely lived agents and let β ∈ (0,1) denote the discount factor. Each period is divided into two subperiods. In the first subperiod agents interact in decentralized meetings (DMs), while in the second subperiod trade is organized via a centralized meeting (CM).

In the first subperiod agents are randomly and bilaterally matched, and every agent in a pair is either a producer or a consumer of a special good in his match with equal probability. Note that this generates gains from trade since agents cannot produce for their own consumption. We denote by x and y consumption and production in the first subperiod. In the second subperiod, agents trade in a CM (Walrasian market) and every agent can produce and consume a general good. Let X and Y denote production and consumption in the second subperiod. Preferences are given by

U(x, y, X, Y) = u(x) − c(y) + X − Y

where u and c are twice continuously differentiable with u′ > 0, c′ > 0, u″ < 0, c″ ≥ 0. There exists a q* ∈ (0, ∞) such that u′(q*) = c′(q*); that is, q* is efficient as it maximizes the surplus in a pair. Also, let q̄ > 0 be such that u(q̄) = c(q̄).5 Furthermore, the goods produced during the two subperiods are perfectly divisible and nonstorable. There is another object, called fiat money, that is perfectly divisible and storable in any amount m ≥ 0. We will consider two environments: one where the money supply is fixed at M and one where the money supply is twice as large and fixed at 2M.

Notice that the environment lacks commitment and formal enforcement. However, since our population is finite, in addition to the monetary equilibrium, there exist multiple nonmonetary equilibria supported by informal enforcement schemes (see Aliprantis et al., 2007a, 2007b; Araujo et al., 2012; Ellison, 1994; Kandori, 1992).
In what follows, we just report the main theoretical predictions and we refer the interested reader to Duffy and Puzzello (2014) for further details and proofs.

Monetary Equilibrium

In the Lagos-Wright model, there always exists a nonmonetary, autarkic equilibrium where money is not used and there is no exchange of goods. In addition, there exists a monetary equilibrium involving positive exchange of goods for money, which we describe in this section. Let ϕt denote the price of money in terms of the general good in the CM. Under the assumption of a take-it-or-leave-it bargaining protocol in the DM (which we use in the experiment and where the consumer has all the bargaining power), it can be shown that the monetary steady state is unique. The amount q̃ of the special good exchanged for money in each DM in the steady state is pinned down by the following functional equation:

u′(q̃)/c′(q̃) = 1 + (1 − β)/(β/2)     (1)
Supposing that the aggregate supply of money is M, the equilibrium price of money in the CM in the steady state is ϕ = c(q̃)/(M/2N). Prices in the DMs are given by (M/2N)/c(q̃). Furthermore, periodic access to the centralized market and quasilinearity of preferences imply that agents are able to perfectly rebalance their money holdings: the distribution of money holdings at the beginning of each DM is therefore degenerate at M/2N. Because of discounting (β < 1), the first best is not achieved as a monetary equilibrium in the absence of the Friedman rule (Friedman, 1969). Note that if the money supply is instead doubled to 2M, the prices of goods in the DM and CM double but equilibrium production and consumption quantities remain unchanged, that is, money is neutral in this model. To see this more clearly, notice that Eq. (1) does not directly depend on the value of M.

Social Norms in the Lagos-Wright Environment
equilibrium. However, there also exist nonmonetary, pure "gift-exchange" equilibria that sustain positive amounts of production and consumption (including the first best) as sequential Nash equilibria through the use of a community-wide contagious strategy mechanism (see Araujo, 2004; Ellison, 1994; and Kandori, 1992). In order to describe these equilibria, we assume that consumers propose terms of trade so that their action set is given by [0, q̄] × [0, M].6 The action set of producers is identified with {0,1}, where 0 stands for reject and 1 stands for accept.

Let 0 < q ≤ q* be some positive amount of production and consumption in the DMs. Consider a strategy that prescribes to shut down the CM and to participate only in DMs. In the latter meetings, the strategy prescribes that consumers propose (q,0) and producers accept (q,0), so long as they have always observed these proposals being accepted in past meetings. As soon as a deviation is observed, the strategy prescribes rejection of any proposal forever after whenever an agent is in the producer role. As in Duffy and Puzzello (2014), we label this strategy a decentralized gift-giving social norm, since it relies only on contagion being spread by means of decentralized interactions. It is possible to show that this social norm is supported as a sequential equilibrium if agents are patient enough (see Duffy & Puzzello, 2014). The intuition behind this result is as in Kandori (1992), namely that cooperation can be supported since a single deviation implies that any agent who has observed it stops producing whenever in the producer role; defection thus spreads like an epidemic that eventually hits the whole community, leading to autarky. The threat of triggering such a contagious reaction can suffice to support a cooperative gift-exchange social norm of the type characterized above.
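To give a feel for the speed of this contagion mechanism, the following Python sketch simulates how quickly a single deviation can spread through random bilateral matching. The code and its variable names are our own illustration, not part of the model or the experiment, and it simplifies the model's information structure by assuming that any agent matched with an "infected" partner immediately becomes infected.

```python
import random

def periods_to_autarky(n_agents=14, seed=1):
    """Stylized contagion: start from a single deviator; any agent matched
    with an 'infected' partner observes the deviation and stops producing
    thereafter. Returns the number of periods until the entire community
    is infected, that is, until the economy reaches autarky."""
    rng = random.Random(seed)
    infected = {0}                       # one initial deviator
    t = 0
    while len(infected) < n_agents:
        t += 1
        agents = list(range(n_agents))
        rng.shuffle(agents)              # random bilateral matching
        for i in range(0, n_agents, 2):
            a, b = agents[i], agents[i + 1]
            if a in infected or b in infected:
                infected.update((a, b))
    return t

# With 14 agents, full contagion typically takes only a handful of periods.
print(periods_to_autarky())
```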
The No-Money Environment

In addition to studying the Lagos-Wright model, we also study a variant of this environment, due to Araujo et al. (2012), where there is no money. In the DMs of this environment, consumers propose quantities that producers should produce for them, so that a consumer's action set is again given by [0, q̄]. The action set of producers is again identified with {0,1}, where 0 stands for reject and 1 stands for accept. Following the DM, agents can choose whether to participate in the centralized trading post for the general good. Since this is not an endowment economy, agents first choose how much to produce, 0 ≤ y ≤ Ȳ, of the general good, where Ȳ denotes the upper bound on production. Second, they decide how much to bid, b, for the
general good, with the constraint that their bid cannot exceed their production, that is, 0 ≤ b ≤ y. The price of the CM general good is determined by the ratio of the sum of bids to the sum of individual production amounts, that is, p = Σbi / Σyi. If Σbi = 0 or Σyi = 0, then p = 0 and no trade takes place. Consumption for an agent whose bid is b is determined by b/p. Since preferences are linear in the CM stage, payoffs are given by U(b, y, p) = (b/p) − y.

Let 0 < q ≤ q* be a positive amount of production and consumption in DMs. The decentralized gift-giving social norm remains a sequential equilibrium of the trading game. In addition to decentralized social norms, there also exist centralized social norms where the trading post price in the CM can be used as a signaling device. It is possible to show that if agents are sufficiently patient, positive amounts of production and consumption (including the first best) can be supported as sequential equilibria (see Duffy & Puzzello, 2014 for details). Contagion under these social norms is much faster, thus implying that good allocations can be supported with lower thresholds for the discount factor than in the case of the decentralized social norms of the Lagos-Wright environment.
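The trading post pricing rule above is simple enough to state in a few lines of code. The following is a minimal Python sketch of the no-money trading post; the function and variable names are ours, for illustration only.

```python
def trading_post(bids, production):
    """Clear the no-money trading post: p = (sum of bids) / (sum of production).
    Each agent i bids b_i <= y_i, consumes b_i / p, and earns b_i/p - y_i."""
    assert all(0 <= b <= y for b, y in zip(bids, production))
    B, Y = sum(bids), sum(production)
    if B == 0 or Y == 0:
        return 0.0, [0.0] * len(bids)      # p = 0: no trade takes place
    p = B / Y
    return p, [b / p for b in bids]        # price and consumption levels

# Example: if every agent bids exactly what it produced, p = 1 and each
# agent consumes its own production, earning a CM payoff of zero.
p, cons = trading_post([1.0] * 14, [1.0] * 14)
print(p, cons[0])   # 1.0 1.0
```

Note that aggregate CM payoffs always sum to zero under this rule, since total consumption Σ(bi/p) equals total production Σyi whenever trade occurs.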
Parameterization and Equilibrium Benchmarks

As in most sessions of Duffy and Puzzello, we considered a population of 2N = 14 subjects. The utility function (cost function) in the DM was given by u(q) = A ln(1 + q) (c(q) = Cq). We set A = 7, C = 1, and β = 5/6, again following the choices made in our earlier paper. The aggregate supply of money was given by M = 112 (with an initial endowment of money per capita of M/2N = 8) and it remained constant in the money treatment part of sessions contrasting money with no money. In our treatments involving the neutrality-of-money proposition, we considered environments where the money supply doubled from M = 112 to 2M = 224 (with an initial endowment per capita of 2M/2N = 16) or the reverse scenario where the initial supply of money was 2M = 224 and then dropped to M = 112.

Given these parameter choices, the first best quantity is q* = 6, while the equilibrium quantity associated with the monetary equilibrium is q̃ = 4. The upper bound for the special good in the DM is q̄ = 22.7 We also chose an upper bound of Ȳ = 22 for the CM. Regarding prices, the equilibrium price of the special good in the DM is given by p = (M/2N)/q̃ = 8/4 = 2. The equilibrium price of money in terms of the general good in the CM is ϕ = c(q̃)/(M/2N) = 1/2, and so the equilibrium price of the general good in terms of money is the reciprocal, P = 2. When M is doubled, prices are doubled but the equilibrium quantity associated with the monetary
equilibrium remains q̃ = 4. Finally, for the purpose of calculating welfare, we note that the period monetary equilibrium payoff per pair is v = 7 ln 5 − 4 ≈ 7.26 and the period first best payoff per pair is v* = 7 ln 7 − 6 ≈ 7.62. Thus, the monetary equilibrium is predicted to achieve 95.3 percent of the welfare under the first best equilibrium.

Regarding social norm equilibria, the lowest values of the discount factor, β, for which the first best can be supported are given by β^CM = 0.6427 and β^DM = 0.8256, where the superscript refers to our focus on the centralized or decentralized, nonmonetary social norm. Our choice of β = 5/6 exceeds both of these minimal threshold discount factors, so that the first best can be supported as a sequential Nash equilibrium under both types of social norms, decentralized and centralized (see Duffy & Puzzello, 2014 for more details). In addition to the first best, lower but positive production and consumption levels, q, in the DM can also be supported as sequential, nonmonetary social norm equilibria under our parameterization of the model. Tables 1 and 2 summarize equilibrium predictions for quantities and prices under the various types of equilibria that are possible in the Lagos-Wright environment that we implemented in the laboratory.
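As a cross-check on these benchmarks, the short Python sketch below reproduces q*, q̃, the steady-state prices, and the welfare ratio from the parameterization above; it is our own illustration (variable names and all), not code from the experiment.

```python
import math

# Parameterization: u(q) = A*ln(1+q), c(q) = C*q, discount factor beta
A, C, beta = 7.0, 1.0, 5.0 / 6.0
M, pop = 112, 14                         # aggregate money stock, population 2N

# First best: u'(q*) = c'(q*)  =>  A/(1+q*) = C
q_star = A / C - 1                       # = 6

# Monetary equilibrium, Eq. (1): u'(q~)/c'(q~) = 1 + (1-beta)/(beta/2)
wedge = 1 + (1 - beta) / (beta / 2)      # = 1.4
q_tilde = A / (C * wedge) - 1            # = 4

# Steady-state prices: DM price (M/2N)/q~ and CM price (M/2N)/c(q~)
m_pc = M / pop                           # per-capita money holdings = 8
p_dm = m_pc / q_tilde                    # = 2
P_cm = m_pc / (C * q_tilde)              # = 2

# Welfare per pair: monetary equilibrium versus first best
v = A * math.log(1 + q_tilde) - C * q_tilde      # 7 ln 5 - 4 ≈ 7.26
v_star = A * math.log(1 + q_star) - C * q_star   # 7 ln 7 - 6 ≈ 7.62
print(q_star, q_tilde, p_dm, P_cm, round(v / v_star, 3))  # 6.0 4.0 2.0 2.0 0.953

# Neutrality: doubling M doubles both prices, but Eq. (1), and hence q~,
# does not involve M, so equilibrium quantities are unchanged.
print(2 * m_pc / q_tilde)                # DM price under 2M: 4.0
```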
Table 1. Equilibrium Predictions Regarding q.

Group Size    Decentralized Social Norm    Monetary Equilibrium    Autarkic Equilibrium
N = 14        0.5 ≤ q ≤ 6                  q = 4                   q = 0

Table 2. Equilibrium Predictions Regarding the Decentralized Meeting (DM) Price p and the Centralized Meeting (CM) Price P.

Per-Capita Money Holdings    DM Price p    CM Price P
M/2N = 8                     2             2
M/2N = 16                    4             4

EXPERIMENTAL DESIGN

The experiment was computerized using the z-Tree software (Fischbacher, 2007). Each session involved 2N = 14 subjects drawn from the undergraduate
population of the University of Pittsburgh. No subject had any prior experience with any of the treatment environments of our experiment; subjects were only allowed to participate in a single experimental session. Our experiment involved a within-subjects design where each session had subjects participate in two distinct treatments or "parts" as they were referred to in the instructions. Prior to each treatment/part, subjects were given written instructions which were read aloud in an effort to make these instructions public knowledge.8 Subjects also had to answer comprehension quiz questions to confirm their familiarity with the environment. Mistaken answers to quiz questions were reviewed aloud in an effort to minimize mistakes due to comprehension problems.

The experiment consists of four different, within-subject treatments, each consisting of two parts. In the "NM-M" treatment, the first part of the session consisted of several indefinite sequences of the no-money (NM) environment. The second part consisted of several indefinite sequences of the money (M) environment. In the "M-NM" treatment, the two parts were reversed: the first part consisted of several indefinite sequences of the M environment followed by a second part involving several indefinite sequences of the NM environment. In the "M-2M" treatment, the first part consisted of several indefinite sequences of the M environment (as in the M-NM treatment), while the second part consisted of several indefinite sequences of the 2M environment, where the only change was that the money supply was doubled. Finally, the "2M-M" treatment considered the reverse order where the first part of the session involved the 2M environment and the second part involved the M environment, that is, the money supply was cut in half. In all four cases, subjects were not informed about the nature of the environment they would face in the second part of the experiment until the first part had been concluded. That is, the changes in the environment of the second part of the study can be regarded as being unknown to subjects in advance.

As noted, each part of a session consisted of several "supergames" which we referred to in the written instructions as "sequences." Each sequence consisted of an indefinite number of repetitions (periods) of a stage game. Each stage game involved two rounds, a DM round and a CM round. Every sequence began with the play of at least one, two-round stage game. At the end of each stage game, the sequence continued with another repetition (period) of the stage game with probability β and ended with probability (1 − β). If a sequence ended, subjects were told that "depending on the time available," a new indefinite sequence would begin. Specifically, our computer program drew a random number uniformly from the set {1,2,3,4,5,6}, and this was explained to subjects as simulating the roll of a
six-sided die. If the number drawn was not a 6, then the sequence continued with another round; otherwise, if a 6 was drawn, the sequence ended. In this manner we induced a discount factor or continuation probability of β = 5/6.9 In practice, we let our computer program determine the indefinite sequence lengths using the random termination method in real time, that is, without any intervention on our part, in the very first experimental session reported on in this article. We then hard-coded in these exact same sequence lengths for all remaining sessions, so as to minimize differences across our sessions due to varying sequence lengths.10

At the start of each and every new indefinite sequence of our money, M, or doubled money, 2M, environments, prior to the first DM round, each subject in our Lagos-Wright economy was endowed with either M/2N "tokens" or 2M/2N tokens, depending on the treatment.11 In these sessions, subjects were also informed about the total number of tokens, M = 112 or 2M = 224. They were also informed that the total token quantity was fixed and that they would not get any further endowment of tokens for the duration of that sequence. Subjects were further instructed that if a sequence ended, their token balances would be set to zero. However, if a sequence continued with a new period, their token balance, as of the end of the last period, would carry over to the new period of the sequence. In the NM environment, there was no money and thus no initial endowments of or instructions regarding tokens.
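The random termination rule described above is easy to simulate. A minimal Python sketch of the die-roll procedure (our illustration, not the z-Tree code used in the sessions):

```python
import random

def sequence_length(rng):
    """One indefinite sequence: roll a fair six-sided die each period;
    any roll other than 6 continues play, so the continuation
    probability is 5/6, matching the induced discount factor beta."""
    periods = 0
    while True:
        periods += 1
        if rng.randint(1, 6) == 6:       # a 6 ends the sequence
            return periods

rng = random.Random(0)
lengths = [sequence_length(rng) for _ in range(10_000)]
print(sum(lengths) / len(lengths))       # close to 1/(1 - 5/6) = 6 periods
```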
Within a period (stage game), the DM round began with a random pairwise matching of all 2N subjects to form N pairs. Within each pair, one player was chosen with probability 1/2 to be the producer and the other player was designated as the consumer for that round. We suggested that subjects think of this determination as the result of a coin flip and recognize that they would be a consumer (producer) in one half of all DM rounds, on average. Subjects were instructed that all random pairings and assignments were equally likely. For the DM we induced the utility function u(q) = A ln(1 + q) over consumption and the cost function c(q) = Cq over production of the decentralized good. These functions were presented to subjects in a payoff table showing how a certain quantity q of the decentralized good translated into a positive number, A ln(1 + q), of "points" in the case of consumption or a negative number, −Cq, of points in the case of production. Subjects were instructed in how to use that table to calculate their earnings in various scenarios. At the start of each session each subject was given an initial endowment of 20 points so as to minimize the possibility that any subject ended up with a negative point balance; indeed, we can report that no subject ended any of our experimental sessions with a negative point balance. Importantly, subjects were specifically instructed that "Tokens have no value in terms of points," that is, tokens had no redemption value. Like fiat money, tokens were intrinsically worthless with regard to the points that subjects accrued over the course of a session (from consumption, less production), and it was these point totals that were used to determine subjects' earnings from the experiment. Notice that the environment is potentially merciless in the sense that if no producer agrees to produce, there is no consumption and hence no points earned by any agent.12

Consumers moved first and were asked to form a "proposal" as to how much of the decentralized good they wanted their randomly matched producer to produce for them and, in the money treatments, how many tokens, if any, the consumer was willing to offer the producer for the quantity requested. Consumers were informed of both their own and their matched producer's current token balances prior to formulating their proposal. Consumers were restricted to requesting quantities of the decentralized good, q, in the interval [0, q̄], though fractional units were allowed. In the money treatments they could also offer their matched producer d units of their current period token balance as part of their proposal. It was made clear that token (money) offerings were voluntary; subjects were instructed that the amount of tokens offered, d, could range between 0 and their currently available token balance, inclusive, and that fractional units were also allowed. Thus, each consumer formulated a proposal, (q, d), in treatments with money and a proposal, q, in the NM treatments, which was then anonymously transmitted to their matched producer.

Producers moved second and were first informed of their matched consumer's proposal. Producers were further informed about the consumer's benefit from receiving the proposed quantity q, u(q), and of their own cost from producing quantity q, c(q). In the money treatments, producers were also informed of both the consumer's and their own currently available token balances as well as the quantity of tokens, d, the consumer was offering them in exchange for producing q units. Producers then had to decide whether to accept or reject the consumer's proposal. If the producer accepted the proposal, then it was implemented: producers produced quantity q at a cost to themselves of c(q) points, the consumer consumed quantity q, yielding him or her a benefit of u(q) points, and in the money treatments the proposed quantity of tokens, d, if positive, was transferred from the consumer to the producer. If the producer rejected the proposal, then no exchange took place; both members of the pair earned 0 points for the round and, in the money treatments, their token balances remained unchanged.
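The mechanics of a single DM match are simple to express in code. Below is a minimal Python sketch under the money treatment's parameterization (A = 7, C = 1); the function and variable names are ours, not taken from the experimental software.

```python
import math

A, C = 7.0, 1.0
u = lambda q: A * math.log(1 + q)     # consumer's benefit in points
c = lambda q: C * q                   # producer's cost in points

def dm_match(q, d, accept, m_consumer, m_producer):
    """One DM match: the consumer proposes (q, d) -- quantity q and a
    voluntary token offer d -- and the producer accepts or rejects.
    Returns (consumer points, producer points, updated token balances)."""
    assert 0 <= d <= m_consumer       # offers cannot exceed the token balance
    if not accept:                    # rejection: no exchange, balances unchanged
        return 0.0, 0.0, m_consumer, m_producer
    return u(q), -c(q), m_consumer - d, m_producer + d

# Example: the monetary-equilibrium proposal with per-capita balances of
# 8 tokens -- q = 4 in exchange for all 8 tokens (price 8/4 = 2 per unit).
print(dm_match(4, 8, True, 8, 8))     # (≈11.27, -4.0, 0, 16)
```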
At the end of the decentralized round, subjects were informed of the outcome of that round: in particular, they were informed as to whether the proposal was accepted or not and were updated on any changes to their cumulative point totals. In the money treatments they also learned of any changes to their token balances. After this feedback was communicated, the decentralized round was over and the CM round began.

Within a period (stage game), the second, CM round brought together all 2N participants to participate in the meeting for the homogeneous and perishable "good X." Trade in the CM was organized via a trading post or "market game" as in Shapley and Shubik (1977). The specific trading post set up in our environment follows that of Green and Zhou (2005) and depends on whether subjects were in an M or NM environment. At the start of the centralized round, all subjects were asked whether they wanted to participate in the centralized trading post meeting. If they agreed to participate, then they decided how much they wanted to produce of good X for the trading post, say y ≥ 0. After that, subjects had to decide how much they wished to bid, b, for units of good X. In the NM environment, each subject was instructed that their individual bid, bi, for units of good X could be any amount up to and including yi, the number of units they had already committed to produce, that is, 0 ≤ bi ≤ yi. In the money treatments, bids for good X had to be in money units and subjects were instructed that they could bid any amount of their currently available money holdings, mi, but that their bid could not exceed their money holdings, that is, 0 ≤ bi ≤ mi. After every subject had submitted their decisions, the market price of good X was determined by P = Σbi / Σyi. Subjects were further instructed that P = 0 if Σbi = 0 or if Σyi = 0 or both, in which case no trade took place. The realized payoff in the CM was known to be given by U(b, y, P) = (b/P) − y, since consumption is determined by b/P and preferences are linear in the CM. In the M environment, token holdings at the beginning of the next DM, m′, were given by money holdings at the beginning of the previous CM, plus any proceeds from sales, minus any amount of tokens bid: m′ = m + Py − b. As the population size, N, grows large, the theoretical predictions remain the same as for the Lagos and Wright (2005) model. Points from the CM round were added to or subtracted from subjects' point totals just as in the DM round. Following the completion of the CM round, subjects were updated on their new point totals and, in the money treatments, their token holdings.
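Likewise, the money-treatment trading post and the token law of motion m′ = m + Py − b can be sketched in a few lines of Python (again our own illustration, with hypothetical names):

```python
def cm_round_with_money(tokens, production, bids):
    """Money-treatment CM: bids are in tokens (0 <= b_i <= m_i), the price
    of good X is P = sum(bids)/sum(production), agent i consumes b_i/P and
    earns b_i/P - y_i points, and next-period tokens follow m' = m + P*y - b."""
    assert all(0 <= b <= m for b, m in zip(bids, tokens))
    B, Y = sum(bids), sum(production)
    if B == 0 or Y == 0:
        return 0.0, [0.0] * len(bids), tokens        # P = 0: no trade
    P = B / Y
    points = [b / P - y for b, y in zip(bids, production)]
    new_tokens = [m + P * y - b for m, y, b in zip(tokens, production, bids)]
    return P, points, new_tokens

# Example use: an agent holding 16 tokens after a DM sale can sell good X
# for tokens or bid tokens for good X, rebalancing toward M/2N = 8.
```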
Then a random number was drawn from the set {1,2,3,4,5,6}. If the random number drawn was not 6, the sequence continued on with another two-round period. In the money treatments, subjects' token balances as of the end of the CM were carried over to the decentralized round of the next period in the sequence. If the random number drawn was a 6, then the sequence ended. In the money treatments, if a sequence ended, token balances were set to zero. Subjects were instructed that once a sequence ended, depending on the time available, a new indefinite sequence might begin. In each new sequence of a money treatment, all subjects would start the new sequence with M/14 = 8 tokens or 2M/14 = 16 tokens, depending on whether it was the M or 2M treatment. Point totals, however, were not re-initialized between sequences and carried over from one sequence to the next.

Approximately midway through each session, the experimenter announced the conclusion of the first part, which came only at the end of an indefinite sequence (when a 6 was rolled). Instructions for the second part were then handed out and read aloud. The second part also consisted of a number of indefinite sequences, and subjects earned points in both the decentralized and centralized meetings of the second part of the session just as they had in the first part; there was no change in the utility benefits from consumption or the costs of production in either the decentralized or centralized meetings of each part of a session of this experiment. The only changes were with regard to the presence or absence of money or the total stock of money in circulation.

Following the completion of the second and final part of the session, subjects answered a brief questionnaire and were then paid their accumulated point earnings from all sequences played in both parts of the session along with a $5 show-up payment. Subjects' cumulative point totals from all periods of all sequences played were converted into cash at the end of the session at the fixed and known exchange rate of 1 point = $0.20. Average total earnings were $24.54 (standard deviation of $6.06) for an approximately 2.5-hour session.
EXPERIMENTAL RESULTS

We report on data from 12 experimental sessions. As each session involved 14 subjects, we have data from a total of 168 subjects. Some characteristics of our 12 experimental sessions are given in Table 3.
Table 3. Characteristics of the 12 Experimental Sessions.

Treatment Name   Part 1 Treatment   Part 1 Seq Lengths; Periods   Part 2 Treatment   Part 2 Seq Lengths; Periods
NM-M-1           NM                 6,2,7; 15 periods             M                  4,12; 16 periods
NM-M-2           NM                 6,2,7; 15 periods             M                  4,12; 16 periods
NM-M-3           NM                 6,2,7; 15 periods             M                  4,12; 16 periods
M-NM-1           M                  4,12; 16 periods              NM                 6,2,7; 15 periods
M-NM-2           M                  4,12; 16 periods              NM                 6,2,7; 15 periods
M-NM-3           M                  4,12; 16 periods              NM                 6,2,7; 15 periods
M-2M-1           M                  4,12; 16 periods              2M                 6,2,7; 15 periods
M-2M-2           M                  4,12; 16 periods              2M                 6,2,7; 15 periods
M-2M-3           M                  4,12; 16 periods              2M                 6,2,7; 15 periods
2M-M-1           2M                 6,2,7; 15 periods             M                  4,12; 16 periods
2M-M-2           2M                 6,2,7; 15 periods             M                  4,12; 16 periods
2M-M-3           2M                 6,2,7; 15 periods             M                  4,12; 16 periods

As the table reveals, we had three sessions of each of the four treatments. The first and second parts of each session are indicated in this table but are also reflected in the names given to each session, where NM = no money, M = money, and
2M = twice money. Hence, "NM-M-1" is session number 1 of the treatment where the first part was the NM environment and the second part was the M environment. In addition, Table 3 reports the sequence lengths and total number of periods in each part of each session. Recall that after the first session, we hard-coded the sequence lengths, and thus the total number of periods for each part, and we kept these sequence lengths constant across sessions. For instance, in the NM-M treatment sessions, the first part always consisted of three sequences of lengths 6, 2, and 7 periods for a total of 15 periods. The second part always consisted of two sequences of lengths 4 and 12 periods for a total of 16 periods. Thus, combining the first and second parts, each session involved 31 periods, with each period involving a DM round followed by a CM round.

Our experiment has yielded a number of interesting results which we summarize as several different findings.
NM-M and M-NM Treatments

We begin with an analysis of behavior in the NM-M and M-NM treatment sessions. Recall that in these sessions there was either no money (NM) or a constant total stock of 112 units of fiat money (M), that is, 8 units per capita. Our first finding concerns the acceptance of DM offers by producers in these sessions.
Finding 1. The frequency with which DM offers are accepted in the NM-M and M-NM treatments is independent of whether or not there is a money object or of the treatment ordering.

Support for Finding 1 comes from the second and fifth columns of Table 4, which report the mean DM offer acceptance frequencies (DM Accept) in the No Money (NM) and Money (M) parts of the three NM-M and three M-NM sessions. Using the six pairs (NM, M) of mean acceptance frequencies for each of the six sessions, a Wilcoxon signed ranks test indicates that we cannot reject the null hypothesis of no difference in acceptance frequencies between the NM and M parts of each of these sessions (p = .753). This result is also confirmed by a Wilcoxon-Mann Whitney test on the acceptance frequencies of the first part of the NM-M and M-NM treatment sessions (p = .5).

Table 4. Mean DM Offer Acceptance Frequencies, DM Traded Quantities, q, and Welfare as a Percentage of the First Best in the Two Parts of Each Session of the NM-M and M-NM Treatments.

                 No Money                           Money
NM-M Session     DM Accept   DM q    Welfare       DM Accept   DM q    Welfare
NM-M-1           0.448       1.506   0.251         0.348       2.100   0.212
NM-M-2           0.343       1.291   0.173         0.295       1.559   0.167
NM-M-3           0.457       2.579   0.358         0.429       1.797   0.288
All 3            0.416       1.792   0.261         0.357       1.819   0.222

                 Money                              No Money
M-NM Session     DM Accept   DM q    Welfare       DM Accept   DM q    Welfare
M-NM-1           0.429       6.656   0.363         0.419       1.117   0.191
M-NM-2           0.446       3.049   0.342         0.295       0.625   0.084
M-NM-3           0.545       4.050   0.444         0.438       1.412   0.242
All 3            0.473       4.585   0.382         0.384       1.051   0.172
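For readers who want to replicate these session-level tests, here is a hedged Python sketch using the Table 4 means and SciPy; exact p-values can differ slightly from those reported depending on the test implementation (exact versus normal approximation, handling of ties).

```python
from scipy import stats

# Session-level mean DM acceptance rates transcribed from Table 4
money    = [0.348, 0.295, 0.429, 0.429, 0.446, 0.545]   # M part of each session
no_money = [0.448, 0.343, 0.457, 0.419, 0.295, 0.438]   # NM part, same sessions

# Matched-pairs test across the six sessions (Finding 1)
print(stats.wilcoxon(money, no_money))

# Between-treatment test on first parts only: NM first parts (NM-M sessions)
# versus M first parts (M-NM sessions)
print(stats.mannwhitneyu([0.448, 0.343, 0.457], [0.429, 0.446, 0.545],
                         alternative="two-sided"))
```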
Table 5. GLS Regression Analysis of Treatment Effects and Time on DM Behavior in the NM-M and M-NM Treatments.

Variable      DM Offer Accepted    DM Quantity Traded
Constant      0.378*** (0.037)     1.068** (0.519)
M             0.015 (0.039)        1.706* (0.938)
OrderM-NM     0.045 (0.045)        1.150** (0.458)
R2            0.03                 0.17
Nobs          1,302                524

*, **, *** Significance at the 10%, 5%, and 1% levels.

Further support comes from the first column of Table 5, which reports the results of a GLS regression analysis of individual producers' acceptance decisions in all periods of all NM-M and M-NM sessions. The first column reports results of a regression of the producer's DM offer acceptance decision on a constant and two dummy variables: M, a dummy variable equal to 1 if the economy had money, and OrderM-NM, a second dummy variable
equal to 1 if the treatment order of the session was M in the first part and NM in the second part. Our random effects regression analysis on individual subject data involved robust clustering of standard errors on each of the six sessions. The results in the first column of Table 5 indicate that the mean DM acceptance frequency was 37.8 percent (the coefficient on the constant term), and that the presence or absence of money and the treatment ordering were not significant factors in producers' acceptance decisions, as indicated by the insignificance of the coefficients on the M and OrderM-NM dummy variables.

We next consider whether the presence or absence of fiat money (tokens) has an effect on the DM quantities that producers agreed to produce for their matched consumers, that is, on traded DM quantities.

Finding 2. In the NM-M and M-NM treatments, mean traded DM quantities are higher with money than without money. However, the impact of money on DM quantities is more pronounced in the M-NM treatment as compared with the NM-M treatment.

Support for Finding 2 comes again from Tables 4 and 5. The third and sixth columns of Table 4 show the mean quantities traded in the DM market of the NM and M treatments. Notice that mean traded DM quantities in the M part of each session are, with a single exception (session NM-M-3), greater than mean traded DM quantities in the NM part of each session. While we do not have enough session-level observations to establish whether these differences are statistically different from one another at conventional significance levels, our random effects regression analysis of
individual traded quantities from the six NM-M and M-NM sessions (with clustering of errors on session-level observations) suggests that the presence of money positively affects the DM quantity traded. In particular, the third column of Table 5 reports on a regression of DM quantity traded on the same two dummy variables used to understand exchange decisions. As the regression results indicate, the baseline NM mean traded quantity is 1.068 units (the coefficient on the intercept term in the regression) and this amount is substantially increased by an additional 1.706 units (for a total of 2.774 units) when money is present, as indicated by the statistically significant coefficient on the M dummy term. Notice further that, in support of the second statement of Finding 2, the treatment order also matters for the DM traded quantity, as indicated by the significantly positive coefficient of 1.15 on the OrderM-NM dummy variable. Specifically, the DM quantity is further increased from 2.774 units to 3.924 units if the treatment order was M-NM as opposed to the NM-M treatment order. The amount 3.924 is close to the unique monetary equilibrium prediction that four units are traded in the DM.

Intuitively, in the initial absence of money in the first (NM) part of the NM-M treatment, subjects coordinated on a low DM quantity to trade. The introduction of money in the second (M) part of the NM-M treatment sessions did not succeed in increasing the DM quantity by very much, as is more clearly revealed in Table 4, where the overall average DM traded quantity was 1.792 units in the NM part and only slightly higher at 1.819 units in the M part. On the other hand, in the M-NM treatment sessions, the presence of a stock of money resulted in a much greater traded DM quantity in the M part; the average of the session observations was 4.585 units, which again is close to the monetary equilibrium prediction of four units and much greater than the 1.792 units traded on average in the first part of the NM-M treatment sessions. Furthermore, when money was removed from the economy in the second part of the M-NM treatment sessions, there was a precipitous drop-off in the mean DM quantity traded, from 4.585 units down to an average of 1.051 units, again as revealed in Table 4. This lower level of traded DM quantity is more typical of the NM treatment sessions and also in line with the quantities reported in Duffy and Puzzello (2014).

As an alternative comparison, let us focus on just the first part of each of the six NM-M or M-NM treatment sessions. Fig. 1 provides a graphical illustration of the mean traded DM quantities in the first part of each of these six sessions using the relevant data reported in Table 4.
Fig. 1. Average DM Quantity Traded in the First Part of the Three NM-M and of the Three M-NM Sessions.

This figure makes it clear that mean traded DM quantities are much larger in
the first part of the M-NM treatment sessions, where there was a constant stock of fiat money (M), as compared with the first part of the NM-M treatment sessions, where there was no fiat money (NM). Indeed, a Wilcoxon-Mann Whitney test on the six session-level averages confirms this impression; we can reject the null of no difference in mean traded quantities in favor of the alternative that mean traded DM quantities are higher with money than without money in the first parts of these sessions (p = .10, two-sided test, the smallest p-value attainable with just three observations per treatment).

A consequence of Findings 1 and 2 is that welfare is higher in economies that start out with a supply of the token money object (as in our M-NM treatment) as compared with economies that do not start out with a supply of the token money object (as in our NM-M treatment), which is consistent with Duffy and Puzzello (2014). Furthermore, in economies that start out with a token object, welfare drops significantly when that token object is later taken away, as in the M-NM treatment sessions. However, the same is not true in the reverse-order NM-M treatment sessions; in the latter, the low quantity exchanged in the first NM part of the session spills over to the second, M part of the session, where DM quantities are not much different and consequently there is not much improvement in welfare. We summarize this result as follows:

Finding 3. Welfare is higher with money than without money in the M-NM treatment sessions where money is introduced in the first part. However, in the NM-M treatment sessions where money is only later introduced in the second part, welfare is not improved by the introduction of money.
Support for Finding 3 comes from Table 4, specifically from the welfare measure reported under the heading "Welfare," which indicates the percentage of the first best level of welfare that subjects were able to achieve in the first and in the second parts of the NM-M and M-NM treatments.13 We observe that welfare is higher in the first part of the M-NM treatments, where there is a supply of money, than in the second part, where there is no money. On the other hand, if the economy starts without money, as in the first part of the NM-M treatment sessions, the introduction of money does not seem to be welfare-improving.

Further confirmation of these impressions is provided in Table 6, where we use welfare ratios for every period of the NM-M and M-NM sessions as a dependent variable in a fixed effects regression that clusters errors at the session level.14 The regression in the first column (both NM-M, M-NM) uses period welfare ratios from all six NM-M and M-NM sessions combined. There we find that, over all sessions, the presence of money is welfare improving, as indicated by the positive and significant coefficient on the M dummy variable (representing the treatment where money was present). We also find that, consistent with Table 4, the treatment order matters; welfare is higher in the M-NM treatment order as compared with the NM-M treatment order, as indicated by the coefficient on the OrderM-NM dummy variable.
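As an illustration of this type of panel regression, the following Python sketch fits the first-column specification with session-clustered standard errors using statsmodels; the data file and column names are hypothetical placeholders, not the authors' files (the paper's footnote 14 gives the precise specification).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per session-period with the period welfare
# ratio, a money dummy, a treatment-order dummy, and a session identifier.
df = pd.read_csv("welfare_panel.csv")

model = smf.ols("welfare_ratio ~ M + order_m_nm", data=df)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["session"]})
print(fit.summary())
```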
Table 6. Regression Analysis of Period-by-Period Welfare Ratios in the NM-M and M-NM Sessions.

Variable      Both NM-M, M-NM      NM-M Only           M-NM Only
Constant      0.197*** (0.021)     0.261*** (0.022)    0.172*** (0.022)
M             0.086*** (0.023)     −0.038 (0.031)      0.211*** (0.031)
OrderM-NM     0.040* (0.023)       –                   –
Nobs          186                  93                  93
R2            0.07                 0.01                0.34

*, **, *** Significance at the 10%, 5%, and 1% levels.

In the second and third columns of Table 6, we focus on the NM-M and M-NM treatment session data separately. There we see confirmation that in the NM-M treatment order, the introduction of money has no significant impact on the welfare ratio. However, in the M-NM treatment order, the introduction of money has a significant impact on the welfare ratio and
this effect is so large that it also obtains in the combined data sample in the first column. The finding that welfare is not increasing with the introduction of money in the NM-M treatment order appears attributable to the negligible increase in DM traded quantities in the second part of the NM-M treatment sessions, in combination with a slight decrease in DM offer acceptances from the first NM part to the second M part; see Table 4. It appears that, once established, social norms of low-level gift exchange are very difficult to abandon and may persist despite the introduction of money (and therefore prices).

Regarding prices in the NM-M and M-NM treatments, Table 7 reports mean DM trade prices (for the M part of each session only) along with CM market prices for both the NM and M parts.
Table 7. Mean or Median Prices in the DM or CM of the Two Parts of Each Session of the NM-M and M-NM Treatments.

                 No Money                                Money
NM-M Session     Mean DM    Mean CM    Median CM         Mean DM    Mean CM    Median CM
                 price      price      price             price      price      price
NM-M-1           N/A        0.982      0.988             2.403      5.258      4.093
NM-M-2           N/A        0.981      0.994             2.659      13.750     4.483
NM-M-3           N/A        0.977      0.981             3.534      4.576      4.538
All 3            N/A        0.980      0.987             2.865      7.861      4.371

                 Money                                   No Money
M-NM Session     Mean DM    Mean CM    Median CM         Mean DM    Mean CM    Median CM
                 price      price      price             price      price      price
M-NM-1           1.124      1.452      1.259             N/A        0.949      0.967
M-NM-2           2.017      3.028      3.332             N/A        0.966      1.000
M-NM-3           1.310      1.725      1.566             N/A        0.971      0.999
All 3            1.484      2.068      2.053             N/A        0.962      0.988

Recall from Table 2 that for our parameterization of the model, the monetary equilibrium prediction when M/2N = 8 (as in the M parts of these sessions) is for both the DM price, p, and the CM price, P, to equal 2. We observe that for the M part of the M-NM treatment sessions, the mean DM price is 1.484, which is close to, but lower than, the monetary equilibrium prediction of 2. The CM price in the M part of these sessions is, however, very close to 2, averaging 2.067 across the three M-NM treatment sessions (the median is 2.053). This evidence not only confirms that money is being used (otherwise there would be no DM prices), but suggests that
subjects were close to coordinating on the unique monetary equilibrium in the M part of the M-NM treatment sessions. By contrast, prices in the M part of the NM-M treatment sessions are at odds with the monetary equilibrium predictions; as Table 7 reveals, DM trade prices are greater than 2, averaging 2.865, and CM prices are considerably higher, averaging 7.861, though the median CM price is lower, at 4.371. These greater-than-predicted prices in the M part of the NM-M treatment are really just a reflection of the fact that DM traded quantities are too low in the M part of the NM-M treatment sessions relative to the monetary equilibrium prediction of four units traded per period; with lower-than-equilibrium DM traded quantities and a constant money supply, both DM and CM prices will necessarily be higher than the equilibrium predictions (as a rough illustration, per-capita balances of 8 tokens spent on an average traded quantity of roughly 1.8 units imply a price near 8/1.8 ≈ 4.4, in line with the median CM price of 4.371). In the NM part of both the M-NM and NM-M treatment sessions, CM prices are very close to 1, reflecting a common strategy that subjects offer to bid for as many units of the centralized good X as they offered to produce; in the NM part of these sessions, there is no need to rebalance monetary holdings and so the price is valuable only as a signal of the coordinated behavior of market participants in the CM.

As suggested in the Introduction, the natural order of events in human history was that nonmonetary gift-exchange regimes preceded the current, modern fiat money regime of impersonal exchange. However, when we implement such a regime change in the laboratory (from NM to M), we find that while money is used and there is (in two of three sessions) a slight increase in the amount of DM exchange, the behavior of subjects departs considerably from the new and unique monetary equilibrium prediction and there is no welfare improvement. On the other hand, if, counter to history, we start with a supply of money, then behavior conforms more closely to the monetary equilibrium predictions for the M part of that treatment, and taking away money in the second part leads to a significant drop-off in exchange and welfare.

We speculate that the adjustment from the NM to the M regime may take longer than is allowed for in the compressed time frame of our experimental study. We further note that monetary exchange systems often involve an intermediate transition phase from a regime of pure gift exchange to one of commodity money and then on to fiat money, as opposed to the more stark transition that we attempt to engineer from a pure gift-exchange economy to one involving only fiat money. We would add that monetary regimes are often accompanied by legal restrictions requiring the use of money (e.g., to pay taxes) and that such restrictions are completely absent in the framework that we study here. These omissions are potentially important factors in understanding our finding of treatment order effects.
M-2M and 2M-M Treatments

We now turn to our second main treatment exploring the neutrality-of-money proposition, as first set forth by Hume, which plays a fundamental role in the quantity theory of money. Recall that in these treatments, the total money stock was either 112 total units, or 8 units per capita, as in the M treatment, or twice this amount, 224 total units, or 16 units per capita, as in the 2M treatment. Our first finding concerns the acceptance of DM offers in these treatments.

Finding 4. There is no difference in DM offer acceptance rates between the first and second parts of either the M-2M or the 2M-M treatments.

Support for Finding 4 comes from Tables 8 and 9. The second and fifth columns of Table 8 report mean DM offer acceptance frequencies by producers in the M-2M and 2M-M treatments; these acceptance frequencies are all quite similar, lying between .425 and .473 on average. Indeed, a Wilcoxon signed ranks test on matched pairs of acceptance frequencies yields no significant difference (p = 0.75 for both M-2M and 2M-M).
Table 8. Mean DM Offer Acceptance Frequencies, DM Traded Quantities, q, and Welfare as a Percentage of the First Best in the Two Parts of Each Session of the M-2M and 2M-M Treatments.

                 Money                              2 × Money
M-2M Session     DM Accept   DM q    Welfare       DM Accept   DM q    Welfare
M-2M-1           0.464       3.447   0.383         0.448       2.865   0.355
M-2M-2           0.420       4.021   0.330         0.476       3.380   0.381
M-2M-3           0.536       4.690   0.457         0.390       5.195   0.351
All 3            0.473       4.053   0.390         0.438       3.813   0.363

                 2 × Money                          Money
2M-M Session     DM Accept   DM q    Welfare       DM Accept   DM q    Welfare
2M-M-1           0.505       3.627   0.444         0.536       2.076   0.373
2M-M-2           0.362       2.372   0.302         0.330       2.298   0.227
2M-M-3           0.410       3.884   0.347         0.446       1.850   0.296
All 3            0.425       3.294   0.365         0.438       2.075   0.298
Table 9. GLS Regression Analysis of Treatment Effects and Time on DM Behavior in the M-2M and 2M-M Treatments.

Variable      DM Offer Accepted    DM Quantity Traded
Constant      0.467*** (0.019)     3.631*** (0.472)
2M            −0.025 (0.031)       0.820 (0.506)
Order2M-M     −0.024 (0.045)       −0.972* (0.527)
R2            0.01                 0.05
Nobs          1,302                571

*, **, *** Significance at the 10%, 5%, and 1% levels.

Further support comes from the first column of Table 9, which reports the results of a random effects regression of producers' DM offer acceptance decisions on a constant and two dummy variables: 2M, a dummy variable
equal to 1 if the money supply was 2M, and Order2M-M, a second dummy variable equal to 1 if the treatment order of the session was 2M in the first part and M in the second part. The regression results in the first column of Table 9 indicate that the mean DM offer acceptance frequency is 46.7 percent and that neither the doubling of the money supply nor the treatment order has any effect on DM offer acceptance frequencies, as indicated by the insignificance of the coefficients on the 2M and Order2M-M dummy variables.

We next consider whether the change in the money supply has any real effects on DM traded quantities.

Finding 5. Consistent with the neutrality-of-money proposition, there are no real effects on DM traded quantities whether the money supply is M or 2M. However, this neutrality result is more pronounced in the M-2M treatment than in the 2M-M treatment.

Support for Finding 5 comes from Tables 8 and 9. Table 8 shows that mean DM traded quantities (DM q) in the first part of the M-2M and 2M-M treatment sessions are very similar to one another and are close to the monetary equilibrium prediction of four DM units exchanged; these mean quantities from the first parts of all M-2M and 2M-M sessions are also illustrated in Fig. 2. Indeed, a Wilcoxon-Mann-Whitney test using the six session-level observations illustrated in Fig. 2 indicates that we cannot reject the null of no difference in DM traded quantities between the first (M) part of the M-2M treatment sessions and the first (2M) part of the 2M-M treatment sessions (p = .275), suggesting that a doubling of the
Fig. 2. Average DM Quantity Traded in the First Part of the Three M-2M and of the Three 2M-M Sessions.
money supply has no real effects on DM traded quantities, at least in the first part of these sessions.

On the other hand, we observe in Table 8 that there is some drop-off in mean DM traded quantities in the second parts of both the M-2M and the 2M-M treatment sessions and that, in support of the second statement of Finding 5, this decline appears to be much larger in the second part of the 2M-M treatment sessions than in the second part of the M-2M treatment sessions.

Further evidence in support of Finding 5 comes from the third column of Table 9, which reports a regression of DM traded quantities on the same two dummy variables used to understand acceptance decisions. The regression results indicate that the baseline quantity of DM exchange in the M treatment is 3.631 units, which is again close to the monetary equilibrium prediction of four units. A doubling of the money supply does not have a significant effect on this quantity, as indicated by the statistical insignificance of the coefficient on the 2M dummy variable. However, in support of the last statement of Finding 5, the coefficient estimate on the treatment order dummy, Order2M-M, is significantly negative: mean traded quantities in the DM market are 0.972 of a unit lower if the treatment order was 2M-M rather than the baseline M-2M order. The same pattern appears in Table 8, where DM q is lower in both the 2M and M parts of the 2M-M treatment sessions than in the M and 2M parts of the M-2M treatment sessions.
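These session-level tests are simple enough to reproduce. The sketch below is our construction (not the authors' code), run on the session averages from Table 8 using scipy; the reported p = .275 matches the normal-approximation version of the Wilcoxon-Mann-Whitney test, which the `use_continuity=False` flag mimics, whereas an exact test on three observations per group yields a coarser p-value.

```python
# Session-level nonparametric tests for Findings 4 and 5, using the session
# averages reported in Table 8. Illustrative sketch; implementation details
# of the authors' own computations may differ.
from scipy.stats import wilcoxon, mannwhitneyu

# Finding 4: DM offer acceptance rates in parts 1 and 2 of the M-2M sessions.
accept_m_part = [0.464, 0.420, 0.536]    # M (first) part
accept_2m_part = [0.448, 0.476, 0.390]   # 2M (second) part
_, p = wilcoxon(accept_m_part, accept_2m_part)
print(f"Wilcoxon signed ranks, M-2M acceptance: p = {p:.2f}")  # 0.75

# Finding 5: first-part DM traded quantities, M-2M vs. 2M-M sessions.
q_first_m2m = [3.447, 4.021, 4.690]      # M part of M-2M sessions
q_first_2mm = [3.627, 2.372, 3.884]      # 2M part of 2M-M sessions
_, p = mannwhitneyu(q_first_m2m, q_first_2mm, alternative="two-sided",
                    method="asymptotic", use_continuity=False)
print(f"Wilcoxon-Mann-Whitney, first-part DM q: p = {p:.3f}")  # 0.275
```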
An implication of Findings 4 and 5 is the following:

Finding 6. Welfare is unaffected by the doubling of the money supply in the M-2M treatments but is reduced by the reduction of the money supply in the 2M-M treatments.

Support for Finding 6 comes from the welfare session averages reported in Table 8. There we see that, for the M-2M treatment sessions, there is not much change in our welfare measure (percentage of the first best welfare achieved) as the money supply was doubled from M to 2M. Indeed, using the three pairs of welfare measures for the three M-2M treatment sessions, a two-sided Wilcoxon signed ranks test indicates that we cannot reject the null of no difference in welfare between the M and 2M parts of these treatments (p = .75). However, applying the same test to the three pairs of welfare measures for the 2M-M treatment sessions, we find that welfare is lower in the M part than in the 2M part (p = .25, which is the lowest p-value possible with just three observations). The latter finding is mainly attributable to the large fall-off in the DM quantity traded in the M part of two of the three 2M-M sessions, since acceptance rates do not vary much across the M and 2M treatments.

Disaggregating further, Table 10 reports fixed effects regressions, with standard errors clustered on individual sessions, in which the dependent variable, the period-by-period welfare ratio relative to the first best welfare in the M-2M and 2M-M sessions, is regressed on treatment dummy variables (as was done earlier in Table 6 for the NM-M and M-NM treatment sessions).
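To give a concrete sense of how such a specification can be estimated, here is a minimal sketch using pandas and statsmodels. The data file and column names are hypothetical assumptions of ours, and the published Table 10 estimates may rest on specification details (e.g., the exact fixed effects) not shown here.

```python
# Sketch of a Table 10-style regression: period-by-period welfare ratios on a
# 2M dummy and a treatment-order dummy, with standard errors clustered by
# session. File name and column names are illustrative, not the authors' data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("welfare_by_period.csv")
# expected columns: welfare_ratio, is_2m (0/1), order_2m_m (0/1), session (id)

model = smf.ols("welfare_ratio ~ is_2m + order_2m_m", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["session"]})
print(result.summary())
```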
Table 10.  Regression Analysis of Period-by-Period Welfare Ratios in the M-2M and 2M-M Sessions.

Variable       Both M-2M, 2M-M      M-2M Only            2M-M Only
Constant        0.367*** (0.022)     0.390*** (0.025)     0.298*** (0.025)
2M              0.019   (0.025)     −0.027   (0.036)      0.066*  (0.036)
Order2M-M      −0.047*  (0.025)      —                    —
Nobs            186                  93                   93
R2              0.02                 0.01                 0.04

*, **, *** Significance at the 10%, 5%, and 1% levels.
Combining all data from the M-2M and 2M-M sessions (first column), we see that the treatment change from M to 2M has no effect on period welfare ratios, as indicated by the insignificance of the 2M dummy (equal to 1 in the 2M treatment). On the other hand, the coefficient on the order dummy variable, Order2M-M, is negative and significant, indicating a treatment order effect: welfare is lower if the order is 2M-M rather than the baseline M-2M ordering. Decomposing the data further by treatment order, we see in the second and third columns that period welfare measures are unaffected by the doubling of the money supply in the M-2M treatment sessions, but that welfare is higher in the 2M part of the 2M-M treatment sessions than in the M part of those same sessions. As noted above, the latter finding is mainly attributable to the large fall-off in the DM quantity traded in the M part of two of the three 2M-M sessions.

A further implication of the neutrality-of-money proposition is that prices should double in the 2M treatment relative to the M treatment. This prediction should hold for both DM and CM prices. We again find mixed support for this prediction:

Finding 7. In the M-2M treatment, decentralized and centralized market prices approximately double with the doubling of the total money stock from M to 2M, and these prices are in line with monetary equilibrium predictions. By contrast, prices in the 2M-M treatment do not change, or change in the wrong direction, in response to a decrease in the money supply from 2M to M.

Support for Finding 7 comes from Tables 11 and 12. Table 11 reports mean traded prices in both the DM and CM markets of each M-2M and 2M-M treatment session. In the M-2M treatment sessions, we observe that mean DM and CM prices are in a neighborhood of the equilibrium prediction of 2 in the M part and increase to a neighborhood of the equilibrium prediction of 4 in the 2M part (we will be more precise about this below). By contrast, in the 2M-M treatment sessions, contrary to equilibrium predictions, DM prices increase from the 2M part to the M part and CM prices are essentially unchanged.

To quantify these effects more precisely, Table 12 reports a simple regression analysis in which the DM traded quantity, q, DM traded prices, p, and CM market prices, P, are regressed on a constant and a dummy variable, 2M, equal to 1 if the money stock was doubled to 2M and 0 otherwise (if the money stock was M).
Table 11.  Mean or Median Prices in the DM or CM of the Two Parts of Each Session of the M-2M and 2M-M Treatments.

                     Money (part 1)                       2 × Money (part 2)
M-2M Session   Mean DM   Mean CM   Median CM     Mean DM   Mean CM   Median CM
               price     price     price         price     price     price
M-2M-1          1.784     3.043     2.859         6.253     7.161     2.859
M-2M-2          1.410     2.925     1.551         3.266     6.227     4.808
M-2M-3          0.964     1.825     1.512         2.569     3.489     3.285
All 3           1.386     2.598     1.974         4.030     5.626     3.651

                     2 × Money (part 1)                   Money (part 2)
2M-M Session   Mean DM   Mean CM   Median CM     Mean DM   Mean CM   Median CM
               price     price     price         price     price     price
2M-M-1          3.100     4.954     4.689         4.429     4.003     3.796
2M-M-2          2.289     4.275     3.337         3.822     5.781     3.784
2M-M-3          2.889     4.916     4.591         3.142     3.006     2.831
All 3           2.759     4.715     4.206         3.797     4.263     3.470
Table 12.  Regression Analysis of Quantity and Price Behavior in Response to Changes in the Money Supply.

                        M-2M                                             2M-M
Variable    DM q             DM p             CM P               DM q             DM p              CM P
Const.      4.12*** (0.350)  1.46*** (0.254)  2.597*** (0.390)   2.136*** (0.093) 3.955*** (0.532)  4.263*** (0.816)
2M         −0.238 (0.373)    2.67*** (0.947)  3.028*** (0.725)   1.895*** (0.182) −1.113** (0.508)  0.126 (0.821)
Nobs        293              290              93                 278              276               93
R2          0.01             0.05             0.17               0.01             0.04              0.01

*, **, *** Significance at the 10%, 5%, and 1% levels.
Note that, by contrast with the prior regression analyses of Tables 5 and 9, we have here disaggregated the data according to the treatment order, that is, either M-2M or 2M-M; as we are interested in the effects of changes in M on prices, we do not include a treatment order dummy in this regression analysis.

The regression results confirm Finding 5 that, in the M-2M treatment, the change from M to 2M results in no significant change in quantities exchanged in the DM trading round; the coefficient on the constant term is 4.12, which is significantly different from zero and close to the monetary equilibrium prediction
of q = 4, but the coefficient on 2M is not significantly different from zero, indicating no real effect of the change in the money supply on the quantity exchanged. Also consistent with the neutrality-of-money proposition, the change from M to 2M results in a significant increase in DM prices from 1.46 to 4.13, which approximates the unique monetary equilibrium prediction of a rise from DM p = 2 to DM p = 4. In the M-2M treatment, the CM price P also increases significantly, from 2.597 to 5.628, which overshoots somewhat but is in the direction of the predicted rise from CM P = 2 to CM P = 4 for this treatment. Thus, for the M-2M treatment, consistent with the neutrality-of-money proposition, the doubling of the money supply has no real effects but does result in an approximate doubling of both DM and CM prices.

By contrast, in the 2M-M treatment, Table 12 reveals that the change from 2M to M has some real effects on DM traded quantities. In particular, real DM traded quantities are significantly higher, by 1.895 units on average, in the first (2M) part than in the second (M) part of the 2M-M treatment sessions. Also inconsistent with the neutrality-of-money proposition, DM traded prices are significantly lower in the 2M part of the 2M-M treatment, as indicated by the negative and significant coefficient on the 2M dummy variable, and CM prices are unaffected by the change in the money supply from 2M to M, as indicated by the insignificance of the coefficient on the 2M dummy variable.

The latter findings, which stand in contrast to those for the M-2M treatment, may reflect the fact that subjects in our experiment have limited life experience with decreases in the money supply and an associated deflation of the price level, and may not have immediately known how to adjust to this new setting in the limited time frame of our experimental sessions. Alternatively, these findings may reflect some kind of behaviorally based aversion to price reductions, as has been documented using field data, for example, by Bewley (1999). Such findings are also broadly consistent with other experimental studies (reviewed in the section "Related Literature") that exhibited an asymmetric response in nominal prices to positive and negative shocks (e.g., Fehr & Tyran, 2001; Noussair et al., 2012).
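The extent of the asymmetry can be read directly off the Table 12 estimates: the implied 2M-to-M price ratio (constant plus 2M coefficient, divided by the constant) should equal 2 under neutrality. The quick computation below is ours, using only the coefficients reported in Table 12.

```python
# Implied 2M/M price ratios from the Table 12 estimates. The constant is the
# M-part baseline; adding the 2M coefficient gives the 2M-part level.
estimates = {  # name: (constant, 2M coefficient)
    "M-2M, DM p": (1.46, 2.67),
    "M-2M, CM P": (2.597, 3.028),
    "2M-M, DM p": (3.955, -1.113),
    "2M-M, CM P": (4.263, 0.126),
}
for name, (const, coef) in estimates.items():
    print(f"{name}: ratio = {(const + coef) / const:.2f} (neutrality: 2.00)")
# The M-2M ratios (2.83, 2.17) bracket the predicted doubling, while the
# 2M-M ratios (0.72, 1.03) show prices moving the wrong way, or not at all,
# when the money supply is halved.
```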
CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH

This article provides further evidence on the coordination role played by monetary exchange in random search environments, following up on
experiments by Camera and Casari (2014) and Duffy and Puzzello (2014) by using a within-subject experimental design so as to more carefully explore the impact of monetary regimes relative to nonmonetary regimes and the impact of changes in the money supply on real activity and prices. The control of the laboratory, and especially the within-subject experimental design that we employ in this article, enables us to be clearer about the causal mechanisms underlying real economic activity and price determination. Furthermore, monetary policy experiments of the type we investigate here, namely a doubling or halving of the money supply, are not readily implementable in the field, and this fact provides another motivation for our laboratory investigation.

We have found that the presence of money increases real exchange activity relative to the absence of money, but that the overall welfare benefit of introducing money may depend on the order in which money is introduced. Our experimental findings also suggest that, consistent with the neutrality-of-money proposition, a doubling of the money supply has no real effects and is associated with a doubling of prices. However, inconsistent with that neutrality proposition, we also find that a reduction in the money supply by half can have real effects and does not lead to a fall in prices by the same proportion.

As we noted in the Introduction, the order of events in the history of human exchange is that gift-exchange regimes preceded monetary exchange regimes. In our experiment, the transition from a nonmonetary gift-exchange regime to a monetary exchange regime is not found to be welfare improving, and so an important topic for future research is to understand why this may be the case. One obvious omission that we have already alluded to is that our monetary regime does not involve any legal restriction requiring the use of money in exchange (e.g., to pay taxes), and such legal restrictions might have played an important role historically in the transition from gift exchange to monetary exchange systems. A second omission from the gift-exchange regime we consider is the use of any kind of accounting or record-keeping credit/debit ledgers that may have obviated the need for a money object, and which were also important historically (see Graeber, 2011). A third omission is that we have bypassed a potentially important intermediate step, namely, that of a commodity-money exchange regime where the good used as money has some value in use (utility value) apart from its value in exchange (see Note 15). It may be that the transition from a pure gift-exchange regime involving only real costs and benefits to a monetary exchange regime involving the further use of fiat objects having no real intrinsic value requires an intermediate phase involving commodity-money issuance (e.g., gold or silver coins) such as was also
observed in the history of monetary exchange. We leave the study of these topics to future research.
NOTES

1. Eagleton and Williams (2007).
2. As Graeber (2011) emphasizes, there is no evidence for quid-pro-quo barter exchange systems as a predecessor to monetary exchange systems in the anthropological record. In fact, the evidence points to the exact opposite order of events: historically, barter exchange "has mainly been what people who are used to cash transactions do when for one reason or another they have no access to currency" (Graeber, 2011, p. 40).
3. See Duffy (2014) for a more detailed literature review of experimental studies on money.
4. That is, the token (or fiat money) objects we consider have value only if agents believe those objects to have value and not because of any legal restrictions or cash-in-advance constraints requiring the use of token objects in exchange; these token objects are "intrinsically worthless" in the sense that they do not yield agents any utility.
5. The original Lagos and Wright model has a positive probability, (1 − α), that agents remain unmatched, a positive probability δ of double coincidence meetings, and a probability σ of being a consumer or a producer. We simplify the model by setting α = 1, δ = 0, and σ = 1/2. This does not affect the qualitative results.
6. If the money supply is instead equal to 2M, then the action set is given by [0, q̄] × [0, 2M].
7. Note that the quantity q satisfying u(q) = c(q) is such that q ∈ [21, 22]. For simplicity, we just chose q̄ = 22.
8. Example instructions used in the NM-M treatment sessions are provided in the Appendix.
9. This random termination method for implementing infinitely repeated games in the laboratory is due to Roth and Murnighan (1978). See Fréchette and Yuksel (2013) for a comparison of random termination (RT) with other, theoretically equivalent methods; RT is found to generate the highest levels of cooperation in repeated Prisoner's Dilemma games.
10. Variations in sequence (supergame) lengths can have an effect on the extent of cooperative behavior, as documented by Dal Bó and Fréchette (2011) in repeated Prisoner's Dilemma game experiments and by Engle-Warnick and Slonim (2006) in repeated trust game experiments. Holding the sequence lengths constant across treatments, as is also done by Fréchette and Yuksel (2013), helps to minimize such variations, so that any observed differences in within-subject behavior can be attributed to treatment changes alone. This design choice was also necessitated by our use of a within-subjects design, as we had to ensure that we would be able to read instructions for and implement two different indefinitely repeated game environments within the period of time for which we had recruited subjects for each experimental session.
11. While we will refer to experimental sessions involving tokens as the "money" treatment sessions, we were careful to avoid all use of the term "money" in the experimental instructions or on computer screens.
12. Recall, however, that subjects were given an initial endowment of 20 points and were also promised a show-up payment.
13. The welfare measure is calculated using utility benefits and costs in points accrued by all subjects over all supergames and periods of each of the six sessions. That amount is then divided by the number of points that could have been earned had subjects played according to the first best equilibrium, where a quantity of q = 6 is exchanged in every decentralized meeting.
14. We use period-level data rather than individual-level data in these regressions since, at the individual level, there will be those who benefit (consumers in the DM) and those who incur losses (producers in the DM). The period-level welfare ratio aggregates these individual benefits and losses and thus provides a better measure of welfare per period.
15. In the language of experimental economics, such commodity monies have a fixed and known "redemption value" to subjects (as in Lian & Plott, 1998), whereas the token fiat money object that we use is known to subjects to have no redemption value; it serves only as a possible means to the end goal of acquiring consumption, as is also the case in fiat money regimes.
16. If −q + b/P < 0, then b/P < q, or b < Pq, so Pq − b > 0.
17. If −q + b/P < 0, then b/P < q, or b < Pq, so Pq − b > 0.
REFERENCES

Aliprantis, C. D., Camera, G., & Puzzello, D. (2007a). Contagion equilibria in a monetary model. Econometrica, 75(1), 277–282.
Aliprantis, C. D., Camera, G., & Puzzello, D. (2007b). Anonymous markets and monetary trading. Journal of Monetary Economics, 54(7), 1905–1928.
Araujo, L. (2004). Social norms and money. Journal of Monetary Economics, 51(2), 241–256.
Araujo, L., Camargo, B., Minetti, R., & Puzzello, D. (2012). The essentiality of money in environments with centralized trade. Journal of Monetary Economics, 59(7), 612–621.
Bewley, T. F. (1999). Why wages don't fall during a recession. Cambridge, MA: Harvard University Press.
Camera, G., & Casari, M. (2014). The coordination value of monetary exchange: Experimental evidence. American Economic Journal: Microeconomics, 6(1), 290–314.
Dal Bó, P., & Fréchette, G. R. (2011). The evolution of cooperation in infinitely repeated games: Experimental evidence. American Economic Review, 101(1), 411–429.
Duffy, J. (2014). Macroeconomics: A survey of laboratory research. In J. H. Kagel & A. E. Roth (Eds.), Handbook of experimental economics (Vol. 2). Princeton, NJ: Princeton University Press (forthcoming).
Duffy, J., & Puzzello, D. (2014). Gift exchange versus monetary exchange: Theory and evidence. American Economic Review, 104(6), 1735–1776.
Eagleton, C., & Williams, J. (2007). Money: A history (2nd ed.). New York, NY: Firefly Books.
Ellison, G. (1994). Cooperation in the Prisoner's Dilemma with anonymous random matching. Review of Economic Studies, 61(3), 567–588.
Engle-Warnick, J., & Slonim, R. L. (2006). Inferring repeated-game strategies from actions: Evidence from trust game experiments. Economic Theory, 28(3), 603–632.
Fehr, E., & Tyran, J.-R. (2001). Does money illusion matter? American Economic Review, 91(5), 1239–1262.
Fehr, E., & Tyran, J.-R. (2014). Does money illusion matter? Reply. American Economic Review, 104(3), 1063–1071.
Fischbacher, U. (2007). z-Tree: Zurich toolbox for ready-made economic experiments. Experimental Economics, 10(2), 171–178.
Fréchette, G., & Yuksel, S. (2013). Infinitely repeated games in the laboratory: Four perspectives on discounting and random termination. Working Paper. Department of Economics, NYU.
Friedman, M. (1969). The optimum quantity of money. New York, NY: Macmillan.
Graeber, D. (2011). Debt: The first 5,000 years. Brooklyn, NY: Melville House Publishing.
Green, E. J., & Zhou, R. (2005). Money as a mechanism in a Bewley economy. International Economic Review, 46(2), 351–371.
Greif, A. (2006). The birth of impersonal exchange: The community responsibility system and impartial justice. Journal of Economic Perspectives, 20(2), 221–236.
Hume, D. (1752 [1987]). Of money. In E. F. Miller (Ed.), David Hume: Essays, moral, political, and literary (pp. 281–294). Indianapolis, IN: Liberty Classics.
Kandori, M. (1992). Social norms and community enforcement. Review of Economic Studies, 59(1), 63–80.
Kiyotaki, N., & Wright, R. (1989). On money as a medium of exchange. Journal of Political Economy, 97(4), 927–954.
Lagos, R., Rocheteau, G., & Wright, R. (2014). The art of monetary theory: A new monetarist perspective. Working Paper.
Lagos, R., & Wright, R. (2005). A unified framework for monetary theory and policy analysis. Journal of Political Economy, 113(3), 463–484.
Lian, P., & Plott, C. R. (1998). General equilibrium, markets, macroeconomics and money in a laboratory experimental environment. Economic Theory, 12(1), 21–75.
Nosal, E., & Rocheteau, G. (2011). Money, payments, and liquidity. Cambridge, MA: MIT Press.
Noussair, C., Richter, G., & Tyran, J.-R. (2012). Money illusion and nominal inertia in experimental asset markets. Journal of Behavioral Finance, 13(1), 27–37.
Petersen, L., & Winn, A. (2014). Does money illusion matter? Comment. American Economic Review, 104(3), 1047–1062.
Roth, A. E., & Murnighan, K. J. (1978). Equilibrium behavior and repeated play of the Prisoner's Dilemma. Journal of Mathematical Psychology, 17(2), 189–198.
Shafir, E., Diamond, P., & Tversky, A. (1997). Money illusion. Quarterly Journal of Economics, 112(2), 341–374.
Shapley, L., & Shubik, M. (1977). Trade using one commodity as a means of payment. Journal of Political Economy, 85(5), 937–968.
Williamson, S., & Wright, R. (2011). New monetarist economics: Models. In B. M. Friedman & M. Woodford (Eds.), Handbook of monetary economics (Vol. 3A, pp. 25–96). Amsterdam: North-Holland.
APPENDIX

Instructions used in the NM-M treatment sessions. Other instructions are similar and available upon request from the authors.

Welcome to this experiment in the economics of decision making. Funding for this experiment has been provided by the University of Pittsburgh. If you follow these instructions carefully and make good decisions, you can earn a considerable amount of money that will be paid to you in cash at the end of the experiment. Please, no talking for the duration of today's session.

There are two parts to today's experimental session. We will first go over the instructions for the first part. When we are done, each of you will have to answer a few brief questions to ensure that everyone understands these instructions. You will also have time to ask clarifying questions. Then, you will begin making your decisions using the computer workstations. After the first part is over you will receive instructions for the second part. You can earn money from both parts of today's session, as will be made clear in the instructions for each part.
Overview Part 1

There are 14 people participating in today's session. Each participant will make consuming, producing, buying, or selling decisions in a number of sequences. Each sequence consists of an unknown number of periods. Each period consists of two rounds. At the end of each two-round period, the computer program will draw a random number, specifically, an integer in the set {1, 2, 3, 4, 5, 6}. Each of these six numbers has an equal chance of being chosen; it is like rolling a six-sided die. The program will display the random number chosen on all participants' screens. If the random number drawn is 1, 2, 3, 4, or 5, the sequence will continue with another two-round period. If the random number drawn is 6, the sequence will end. Thus the probability a sequence continues from one period to the next is 5/6 and the probability it ends after each period is 1/6. If a sequence ends, then depending on the time available, a new sequence will begin.

You will start today's experiment with an endowment of 20 points. Over the course of a sequence you may gain or lose points based on the decisions you make, as will be explained in detail below. Your point total will carry over from one sequence to the next. Your final point total from all
sequences played will determine your earnings for this first part of the experiment. Each point you earn is worth $0.30.
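As an aside for the reader (this is our addition, not part of the subjects' instructions): the termination rule just described implies that sequence lengths are geometrically distributed with mean 1/(1/6) = 6 periods, which a few lines of simulation confirm.

```python
# Simulate the random-termination rule: after each two-round period a fair
# six-sided die is rolled, and the sequence ends on a 6 (probability 1/6).
import random

def sequence_length(rng):
    periods = 1
    while rng.randint(1, 6) != 6:  # 1-5: continue; 6: stop
        periods += 1
    return periods

rng = random.Random(0)
lengths = [sequence_length(rng) for _ in range(100_000)]
print(sum(lengths) / len(lengths))  # close to the theoretical mean of 6
```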
Timing and Pairing

Recall that each period consists of two rounds. In the first round of each new period, the 14 participants will be randomly matched in seven pairs and make decisions with one another in a Decentralized Meeting. In the second and final round of each period, all 14 participants will interact together in a Centralized Meeting. We will now describe what happens in each of the two rounds of a period.

Round 1: Decentralized Meeting

At the beginning of each Decentralized Meeting (the first round of each period), each participant is randomly paired with one other participant. All pairings are equally likely. In each pair, one participant is randomly chosen to be the Consumer and the other is the Producer. At the start of each Decentralized Meeting round, you are equally likely to be assigned either role; it is as though a coin flip determines whether you are a Producer or Consumer in each round.

In the Decentralized Meeting, a perishable good is produced and can be traded. This good is "perishable" because it cannot be carried over into any other round or period. Producers incur a cost in points for producing some quantity of this perishable good, which is subtracted from their point total, and Consumers receive a benefit in points from consuming some quantity of the perishable good, which is added to their point total. Table A1 summarizes how costs and benefits are related to your point earnings. For example, if you are a Producer and agree to produce two units of the good, you incur a production cost of two points. If you are a Consumer and you succeed in consuming seven units of the good, you get a benefit of 14.56 points.

Consumers move first and must decide how many units of the perishable good they want their matched Producer to produce for them (see Fig. A1). Consumers can request any amount of the good between 0 and 22 units inclusive (fractions allowed). After all Consumers have made their decisions, Producers are then presented with their matched Consumer's proposal (amount of good requested). Producers must decide whether to "Accept" or "Reject" the Consumer's proposal (see Fig. A2). If a Producer clicks the Accept button, the proposed exchange takes place: the Producer produces the requested amount of the good and incurs a cost in
Table A1.  Cost and Benefit (in Points) for Producers and Consumers, Decentralized Meeting.

Quantity   Producer's Cost in Points   Consumer's Benefit in Points
0                0                          0.00
1               −1                          4.85
2               −2                          7.69
3               −3                          9.70
4               −4                         11.27
5               −5                         12.54
6               −6                         13.62
7               −7                         14.56
8               −8                         15.38
9               −9                         16.12
10             −10                         16.78
11             −11                         17.39
12             −12                         17.95
13             −13                         18.47
14             −14                         18.96
15             −15                         19.40
16             −16                         19.83
17             −17                         20.23
18             −18                         20.61
19             −19                         20.97
20             −20                         21.31
21             −21                         21.64
22             −22                         21.95
points from doing so. The Consumer receives a benefit in points from consumption of the amount of the good produced by the Producer as part of the exchange. If the Producer clicks the Reject button, then no trade takes place: the point balances of both participants will remain unchanged. After all decisions have been made, the results of the Decentralized Meeting (round 1) are revealed. Any exchanges are implemented and we next move on to the Centralized Meeting, round 2.

Round 2: Centralized Meeting

In the second round of a period, all 14 participants have the opportunity to interact in a single Centralized Meeting (there is no pairwise matching in the Centralized Meeting). In the Centralized Meeting, each participant can decide whether to produce-and-sell units of a perishable good called "good X." Participants who choose to produce-and-sell units of good X can
Fig. A1. Consumer Decision Screen, Decentralized Meeting.
Fig. A2. Producer Decision Screen, Decentralized Meeting.
further choose to buy-and-consume units of good X. Participants can also choose not to produce or buy any units of good X. The first decision screen you face in the Centralized Meeting is shown in Fig. A3. There you are asked how many units of good X you would like to offer to produce-and-sell. You can enter any number between 0 and 22 units
Fig. A3. Production of Good X Decision Screen, Centralized Meeting.
inclusive (fractions allowed). Call this quantity "q." If you do not want to produce and sell any units of good X then enter 0 in the first input box. After you have entered your choice for q, click the red submit button. If you offer to produce q > 0 units of good X, you may be able to sell those units of good X at the market price, P, to buyers of good X if there is some demand for good X (as explained below). After all participants have chosen how many units of good X to offer to produce, those participants who entered q > 0 units of good X are asked on a second, Centralized Meeting decision screen whether they would like to bid to buy-and-consume any units of good X (see Fig. A4). Each participant can bid to buy-and-consume any number of units of good X between 0 and q inclusive (fractions allowed), where q is again the quantity of good X they chose to produce-and-sell. Call the amount you offer to bid to buy-and-consume units of good X "b," so that 0 ≤ b ≤ q. If you do not want to bid to buy and consume any units of good X then enter 0 in the input box of the second Centralized Meeting screen. When you are done making this choice, click the red submit button.

Table A2 shows the points that you can earn from producing-and-selling or from buying-and-consuming units of good X. For instance, if you choose to produce and sell two units of good X and you are able to sell those units (more on this below), then producing those two units will cost you two points. If you are able to buy and consume seven units of good X (again, see below), this will give you a benefit of seven points.
Fig. A4. Bid for Good X Decision Screen, Centralized Meeting.
After all participants have clicked the red submit button, the computer program calculates the total amount of good X that all participants have offered to produce and sell; call this the "Total Amount of Good X Produced." The program also calculates the total number of units of good X that all participants have bid to buy and consume; call this the "Total Amount Bid for Good X." Then the program calculates the market price of good X as follows. If Total Amount of Good X Produced > 0 and Total Amount Bid for Good X > 0, then the market price of good X, P, is determined by:

P = Total Amount Bid for Good X / Total Amount of Good X Produced

If Total Amount of Good X Produced = 0 or Total Amount Bid for Good X = 0 (or both are equal to 0), then P = 0. Notice that you do not know the value of P when you are deciding whether to produce or bid for units of good X; P is determined only after all participants have made their Centralized Meeting decisions. Once the market price, P, is determined, if P > 0 then individuals who participated in the Centralized Meeting earn points according to the formula:

Centralized Meeting payoff in points = −q + b/P     (A.1)
The first term, −q, represents the cost to you of the q units of good X that you offered to produce and sell. The second term, b/P, represents the number of units of good X that you were able to buy and consume given your bid, b, and the market-determined price, P. Notice several things. First, if −q + b/P is negative (equivalently, if Pq − b is positive; see Note 16), so that you are a net seller of good X, then you lose points from the Centralized Meeting according to formula (A.1). Second, if −q + b/P is positive (equivalently, if Pq − b is negative), so that you are a net buyer of good X, then you earn additional points from the Centralized Meeting according to formula (A.1). Thus, if P > 0, those who are net seller-producers of good X will leave the Centralized Meeting with lower point totals, while those who are net buyer-consumers of good X will leave the Centralized Meeting with higher point totals. Finally, note that if P = 0,
Table A2.  Cost and Benefit (in Points) for Producers and Consumers, Centralized Meeting.

Quantity Produced, q,     Produce-and-Sell   Buy-and-Consume
or Quantity Bought, b/P   Cost in Points     Benefit in Points
0                              0                  0
1                             −1                  1
2                             −2                  2
3                             −3                  3
4                             −4                  4
5                             −5                  5
6                             −6                  6
7                             −7                  7
8                             −8                  8
9                             −9                  9
10                           −10                 10
11                           −11                 11
12                           −12                 12
13                           −13                 13
14                           −14                 14
15                           −15                 15
16                           −16                 16
17                           −17                 17
18                           −18                 18
19                           −19                 19
20                           −20                 20
21                           −21                 21
22                           −22                 22
or if you do not produce or bid for good X in the Centralized Meeting, then your point balance remains unchanged. Players’ new (or unchanged) point totals carry over to the Decentralized Meeting of the next period of the sequence, if there is a next period, which depends on the random number drawn. If the sequence does not continue with a new period, then all participants’ point totals for the sequence are final. Depending on the time available, a new sequence may begin.
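To summarize the mechanics just described, the sketch below is ours (the function and variable names are illustrative, not those of the experimental software) and implements the price rule and payoff formula (A.1). Note that, whenever P > 0, the point payoffs sum to approximately zero across participants, since total purchases b/P add up to total production.

```python
# Centralized Meeting price determination and payoffs, per formula (A.1).
# offers: one (q, b) pair per participant, with 0 <= b <= q.
def cm_results(offers):
    total_q = sum(q for q, _ in offers)
    total_b = sum(b for _, b in offers)
    if total_q == 0 or total_b == 0:
        return 0.0, [0.0] * len(offers)      # no market: P = 0, no change
    price = total_b / total_q                # P = total bid / total produced
    payoffs = [-q + b / price for q, b in offers]
    return price, payoffs

price, payoffs = cm_results([(4, 2), (3, 0), (5, 5)])
print(price)         # 7/12, about 0.58
print(payoffs)       # net sellers lose points, net buyers gain points
print(sum(payoffs))  # approximately 0, up to floating-point rounding
```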
Information

After each Decentralized Meeting round, all participants will be informed about their point earnings and those of the participant with whom they were paired. Nobody will ever be informed about the identity of the participant with whom they were paired in any round of this experiment. Following round 2 (Centralized Meeting) you will see your point totals for the Decentralized Meeting (round 1), the Centralized Meeting (round 2), the period (rounds 1 and 2 combined), and your cumulative point total for the current sequence. For your convenience, on each decision screen you will see a history of your decisions in prior rounds of the Decentralized Meeting (DM) or the Centralized Meeting (CM).
Determination of your Earnings

At the end of the first part of today's session, your point total from all sequences played, including the initial 20 points you were given at the start of the experiment, will be converted into dollars at the rate of 1 point = $0.30. You will have a chance to earn additional payments in the second part of today's session.
Summary

1. You start with 20 points. You will play a number of sequences, each consisting of an unknown number of periods. Your point total accumulates over all sequences.
2. Each period in a sequence consists of two rounds.
Round 1 Decentralized Meeting
i. Participants are randomly matched in pairs, with one member of the pair randomly chosen to be the Consumer and the other chosen to be the Producer. Both roles are equally likely.
ii. Consumers decide how many units of a perishable good to request from the Producer with whom they are paired.
iii. Producers decide whether to accept or reject the proposal of their matched Consumer.
iv. If the proposal is accepted, the Consumer's point earnings are increased as in Table A1. The Producer's point earnings are decreased by the cost of producing the amount of the good agreed upon.
v. Participants are informed about the point earnings in their pair.

Round 2 Centralized Meeting
i. All participants interact together in the Centralized Meeting to decide whether to produce-and-sell, buy-and-consume, or not participate in the market for a perishable good X.
ii. Participants who choose to produce-and-sell enter a quantity, q, of units they wish to produce for sale. Participants who enter a positive quantity q > 0 are then asked whether they would like to bid to buy-and-consume units of good X. A participant's bid b can be any amount between 0 and q, inclusive, where q is the quantity of good X they offered to produce and sell.
iii. The market price, P, of good X is determined as the ratio of the total amount bid for good X to the total amount of good X produced. If there are no bids (demand) for good X or no amount of good X produced (supply), then P = 0.
iv. If P > 0, each participant's Centralized Meeting points are determined by the formula −q + b/P. If P = 0, there is no market for good X and all participants earn 0 points from the Centralized Meeting.
v. Participants are informed of the market price, P, and about their own Centralized Meeting point earnings (if any).

3. At the end of each 2-round period, a number (integer) from 1 to 6 is randomly drawn and determines whether the sequence continues with another 2-round period. If a 1, 2, 3, 4, or 5 is drawn, the sequence continues. If a 6 is drawn, the sequence ends. Thus, there is a 5/6 chance that a sequence continues and a 1/6 chance that it ends.
4. If a sequence continues, then a new period begins. Point balances carry over from the end of the prior period and participants are randomly
paired anew in the Decentralized Meeting (round 1) of the new period. If a sequence ends, then depending on the time available, a new sequence may begin.
5. Points accumulate over all sequences. At the end of the session, each participant's cumulative point total from this first part of the session will be converted into cash at the rate of 1 point = $0.30.
Questions?

Now is the time for questions about these instructions. If you have a question, please raise your hand and an experimenter will come to you.
Quiz

Before we start, we would like you to answer a few questions that are meant to review the rules of today's experiment. The numbers that appear in these questions are for illustration purposes only; the actual numbers in the experiment may be different. When you are done answering these questions, raise your hand and an experimenter will check your answers.

1. How many rounds are there in each period? _______
2. Suppose it is period 2 of a sequence. What is the probability that the sequence continues with a period 3? _______ Would your answer be any different if we replaced period 2 with period 12 and period 3 with period 13? Circle one: yes/no.
3. Can you choose whether you are a producer or consumer in the first round of a period, that is, the Decentralized Meeting? _______
4. Can you choose whether you are a producer/seller or buyer/consumer in the second round of a period, that is, the Centralized Meeting? _______
5. Suppose in the Decentralized Meeting that you are the Consumer. You propose that the Producer produce two units of the perishable good and the Producer accepts your proposal.
   a. What are your additional point earnings this round? (Use Table A1) _______
   b. How many points does it cost the Producer for agreeing to your proposal? (Use Table A1) _______
6. Suppose that in the Centralized Meeting you offered to produce and sell q = 4 units and you bid b = 1 to buy and consume units of good X. After
all participants have made their decisions, it turns out that the market price, P = 1/2.
   a. How many points does it cost you to produce and sell the four units? (Use Table A2) _______
   b. How many units of good X were you able to buy-and-consume with your bid of 1? (use the formula b/P) _______ How many points is this worth? (Use Table A2) _______
   c. What are your total points from the Centralized Meeting? (use the formula −q + b/P) _______
7. Suppose that in the Centralized Meeting you offered to produce and sell q = 5 units and you bid b = 5 to buy and consume units of good X. After all participants have made their decisions, it turns out that the market price, P = 1.
   a. How many points does it cost you to produce and sell the five units? (Use Table A2) _______
   b. How many units of good X were you able to buy-and-consume with your bid of 5? (use the formula b/P) _______ How many points is this worth? (Use Table A2) _______
   c. What are your total points from the Centralized Meeting? (use the formula −q + b/P) _______
8. True or False: Your point total from all sequences played in this first part of the session will be converted into money and paid to you in cash at the end of the session. Circle one: True / False.

Overview Part 2

The second part of the experiment is exactly the same as the first part in that there are 14 participants making consuming, producing, buying, or selling decisions in sequences of two-round periods. The probability a sequence continues from one period to the next remains 5/6, and you will also start this second part of the session with an endowment of 20 points. You earn or lose points each period according to the decisions you make, and your points earned from all sequences played in this second part will be converted into dollars at the same rate as before, with each point worth $0.30.

The main change from the first part is that in this second part of the session each of the 14 participants will begin each new sequence of periods with an endowment of 8 "tokens." The total number of tokens (14 × 8 = 112) is fixed for the duration of each sequence. Participants may
choose whether or not to use tokens for exchange purposes, as discussed below. Tokens have no value in terms of points.

As before, in the first round of each new period the 14 participants are randomly matched in seven pairs and make decisions with one another in a Decentralized Meeting. In the second and final round of each period, all 14 participants interact together in a Centralized Meeting. The tokens can be used in both the Decentralized and Centralized Meeting rounds, as explained in the next two sections.

Round 1: Decentralized Meeting

As before, participants are randomly paired. In each pair, one participant is randomly chosen to be the Consumer and the other is the Producer. At the start of each Decentralized Meeting round, you are equally likely to be assigned either role.

As before, the Consumer moves first. The Consumer is informed about his own token holdings as well as the token holdings of the matched Producer. Then the Consumer decides how many units of the perishable good they want their matched Producer to produce for them and how many tokens they are willing to give the Producer for this amount of goods (see Fig. A5). As before, Consumers can request any amount of the good between 0 and 22 units inclusive (fractions allowed) and can now offer to give the Producer between 0 and the maximum number of tokens they currently
Fig. A5. Consumer's Decisions Screen, Decentralized Meeting.
have available, inclusive (fractions allowed). After all Consumers have made their decisions, Producers are informed of their own token holdings as well as the token holdings of their matched Consumer. Producers are then presented with their matched Consumer's proposal (amount of good requested and tokens offered in exchange). Producers must decide whether to "Accept" or "Reject" the Consumer's proposal (see Fig. A6). If a Producer clicks the Accept button, the proposed exchange takes place: the Producer produces the requested amount of the good and incurs a cost in points from doing so as given in Table A1, but now the Producer receives the amount of tokens, if any, the Consumer has offered in exchange. The Consumer receives a benefit in points from consumption of the amount of the good produced, as indicated in Table A1, but loses any tokens offered to the Producer as part of the exchange. If the Producer clicks the Reject button, then no trade takes place: the token and point balances of both participants will remain unchanged. After all decisions have been made, the results of the Decentralized Meeting (round 1) are revealed. Any exchanges are implemented and we next move on to the Centralized Meeting, round 2.

Round 2: Centralized Meeting

In the second round of a period, all 14 participants again interact in a single Centralized Meeting. Each participant carries with him/her the token
Fig. A6. Producer's Decision Screen, Decentralized Meeting.
Fig. A7. Decision Screen for All Participants in the Centralized Meeting.
holdings that s/he had as of the end of round 1 (the Decentralized Meeting), after any exchanges have taken place in that round. In the Centralized Meeting, each participant now decides whether to (1) produce-and-sell units of the perishable "good X" in exchange for tokens, (2) use their tokens to bid for units of good X, (3) do both, or (4) do neither. The points you can earn from producing-and-selling or from buying-and-consuming units of good X are the same as in the first part and are given in Table A2. Notice that, differently from the first part, if you produce-and-sell units of good X, you incur costs in points according to Table A2, but you now receive tokens in exchange for any units you are able to sell. Also, to bid for units of good X you now use your tokens, and in exchange you receive units of good X if you are able to buy such units (depending on supply and the market price, as detailed below). The value of units of good X is given in Table A2.

The decision screen you face in the Centralized Meeting is shown in Fig. A7. You enter your produce-and-sell decision in the first box and the amount of your tokens you would like to bid for good X in the second box. Note that you cannot bid more tokens than you have available. If you do not want to produce-and-sell units of good X or if you do not want to bid your tokens for units of good X, then enter 0 in the appropriate box(es). After all participants have clicked the red submit button, the computer program calculates the total amount of good X that all participants have
offered to produce and sell; call this the "Total Amount of Good X Produced." The program also calculates the total number of tokens bid toward buying units of good X by all participants; call this the "Total Amount of Tokens Bid for Good X." Finally, the program calculates the market price of good X in terms of tokens as follows. If Total Amount of Good X Produced > 0 and Total Amount of Tokens Bid for Good X > 0, then the market price of good X, P, is determined by:

P = Total Amount of Tokens Bid for Good X / Total Amount of Good X Produced

If Total Amount of Good X Produced = 0 or Total Amount of Tokens Bid for Good X = 0 (or both are equal to 0), then P = 0. Notice that you do not know the value of P when deciding whether to produce or bid tokens for units of good X; P is determined only after all participants have made their Centralized Meeting decisions. Once the market price, P, is determined, if P > 0 then individuals who participated in the Centralized Meeting earn points according to the formula:

Centralized Meeting payoff in points = −q + b/P     (A.2)

The first term, −q, represents the cost to you of producing and selling q units of good X. The second term, b/P, represents the number of units of good X you were able to buy and consume given your bid of b tokens and the market-determined price, P. In addition, if P > 0, each individual who participated in the Centralized Meeting will see their own token balance adjusted as follows:

New Token Balance = Old Token Balance + Pq − b     (A.3)
Notice several things. First, if −q + b/P is negative (equivalently, if Pq − b is positive; see Note 17), so that you are a net seller of good X, then you lose points from the Centralized Meeting according to formula (A.2). However, at the same time, your new token balance increases relative to your old token balance by the positive amount Pq − b, according to formula (A.3). Second, if −q + b/P is positive (equivalently, if Pq − b is negative), so that you are a net buyer of good X, then you earn additional points from the Centralized Meeting according to formula (A.2). However, at the
same time, your new token balance decreases relative to your old token balance by the negative amount Pq − b according to formula (A.3). Thus, if P > 0, those who are net seller-producers of good X will leave the Centralized Meeting with higher token balances but with lower point totals, while those who are net buyer-consumers of good X will leave the Centralized Meeting with lower token balances but with higher point totals. Finally, note that if P = 0, or if you do not produce or bid tokens for good X in the Centralized Meeting, then your point and token balances remain unchanged. Players’ new (or unchanged) token balances and point totals will carry over to the Decentralized Meeting of the next period of the sequence, if there is a next period, which depends on the random number drawn. If the sequence does not continue with a new period, then all participants’ token balances are set to zero, and their point totals for the sequence are final. Depending on the time available, a new sequence may then begin. At the beginning of each new sequence, each participant is given eight tokens.
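The part 2 bookkeeping in formulas (A.2) and (A.3) can be summarized the same way; again, this sketch and its names are ours, for illustration only.

```python
# One participant's Centralized Meeting settlement when P > 0:
# points follow formula (A.2) and the token balance follows formula (A.3).
def cm_settle(q, b, old_tokens, price):
    points = -q + b / price                  # (A.2)
    new_tokens = old_tokens + price * q - b  # (A.3)
    return points, new_tokens

# A pure net seller: q = 4, b = 0 at P = 2 gives up 4 points of production
# cost but receives 8 tokens in exchange.
print(cm_settle(q=4, b=0, old_tokens=8, price=2))  # (-4.0, 16)
```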
Information

After each round, participants will be informed about their point totals and their token holdings. After round 1 (Decentralized Meeting), participants will also be informed about the point totals and the token holdings of the participant they were paired with. All interactions remain anonymous. After round 2 (Centralized Meeting) you will see your token and point totals for the Decentralized Meeting (round 1), the Centralized Meeting (round 2), the period (rounds 1 and 2 combined), and your cumulative point total for the current sequence. For your convenience, on each decision screen you will see a history of your decisions in prior rounds of the Decentralized Meeting (DM) or the Centralized Meeting (CM).
Determination of your Earnings

At the end of this second part of today's session, your point total from all sequences played, including the initial 20 points you were given at the start of this second part, will be converted into dollars at the rate of 1 point = $0.30. You will be paid your total earnings from the first and second parts of today's session plus a $5 show-up payment in cash and in private.
Summary

Part 2 is the same as part 1 except that:

• Each player starts each new sequence with eight tokens. The total supply of tokens remains constant at 14 × 8 = 112 tokens over all rounds of a sequence. Tokens have no value in terms of points.
• In the DM, Consumers' proposals now include both an amount of the good the Producer is asked to produce and an amount of tokens the Consumer offers to give the Producer in exchange. As before, Producers can either accept or reject the Consumer's proposal.
• In the CM, all 14 participants meet and individually decide whether to produce and sell units of good X in exchange for tokens and/or to bid their available tokens for units of good X. All sales of units of good X for tokens are at the single market-determined price, P.

In all other respects, this second part of the experiment is the same as the first part.
Questions?

Now is the time for questions about these instructions. If you have a question, please raise your hand and an experimenter will come to you.